diff --git "a/example/codeql-db/db-python/default/pools/0/pageDump/page-000000001" "b/example/codeql-db/db-python/default/pools/0/pageDump/page-000000001" new file mode 100644--- /dev/null +++ "b/example/codeql-db/db-python/default/pools/0/pageDump/page-000000001" @@ -0,0 +1,31265 @@ +\A +(?: + (?P.)? + (?P[<>=^]) +)? +(?P[-+ ])? +(?P\#)? +(?P0)? +(?P(?!0)\d+)? +(?P,)? +(?:\.(?P0|(?!0)\d+))? +(?P[eEfFgGn%])? +\Z +DOTALL_parse_format_specifier_regexformat_specParse and validate a format specifier. + + Turns a standard numeric format specifier into a dict, with the + following entries: + + fill: fill character to pad field to minimum width + align: alignment type, either '<', '>', '=' or '^' + sign: either '+', '-' or ' ' + minimumwidth: nonnegative integer giving minimum width + zeropad: boolean, indicating whether to pad with zeros + thousands_sep: string to use as thousands separator, or '' + grouping: grouping for thousands separators, in format + used by localeconv + decimal_point: string to use for decimal point + precision: nonnegative integer giving precision, or None + type: one of the characters 'eEfFgG%', or None + + Invalid format specifier: format_dictfillalignzeropadFill character conflicts with '0' in format specifier: "Fill character conflicts with '0'"" in format specifier: "Alignment conflicts with '0' in format specifier: "Alignment conflicts with '0' in ""format specifier: "minimumwidthgGnthousands_sepExplicit thousands separator conflicts with 'n' type in format specifier: "Explicit thousands separator conflicts with ""'n' type in format specifier: "groupingdecimal_pointGiven an unpadded, non-aligned numeric string 'body' and sign + string 'sign', add padding and alignment conforming to the given + format specifier dictionary 'spec' (as produced by + parse_format_specifier). + + padding=^halfUnrecognised alignment field_group_lengthsConvert a localeconv-style grouping into a (possibly infinite) + iterable of integers representing group lengths. + + unrecognised format for grouping_insert_thousands_sepmin_widthInsert thousands separators into a digit string. + + spec is a dictionary whose keys should include 'thousands_sep' and + 'grouping'; typically it's the result of parsing the format + specifier using _parse_format_specifier. + + The min_width keyword argument gives the minimum length of the + result, which will be padded on the left with zeros if necessary. + + If necessary, the zero padding adds an extra '0' on the left to + avoid a leading thousands separator. For example, inserting + commas every three digits in '123456', with min_width=8, gives + '0,123,456', even though that has length 9. + + groupsgroup length should be positiveis_negativeDetermine sign character. 
+Format a number, given the following data: + + is_negative: true if the number is negative, else false + intpart: string of digits that must appear before the decimal point + fracpart: string of digits that must come after the point + exp: exponent, as an integer + spec: dictionary resulting from parsing the format specifier + + This function uses the information in spec to: + insert separators (decimal separator and thousands separators) + format the sign + format the exponent + add trailing '%' for the '%' type + zero-pad if necessary + fill and align if necessary + altechar{0}{1:+}Inf-Inf# Copyright (c) 2004 Python Software Foundation.# All rights reserved.# Written by Eric Price # and Facundo Batista # and Raymond Hettinger # and Aahz # and Tim Peters# This module should be kept in sync with the latest updates of the# IBM specification as it evolves. Those updates will be treated# as bug fixes (deviation from the spec is a compatibility, usability# bug) and will be backported. At this point the spec is stabilizing# and the updates are becoming fewer, smaller, and less significant.# Two major classes# Named tuple representation# Contexts# Exceptions# Exceptional conditions that trigger InvalidOperation# Constants for use in setting up contexts# Functions for manipulating contexts# Limits for the C version for compatibility# C version: compile time choice that enables the thread local context (deprecated, now always true)# C version: compile time choice that enables the coroutine local context# sys.modules lookup (--without-threads)# For pickling# Highest version of the spec this complies with# See http://speleotrove.com/decimal/# compatible libmpdec version# Rounding# Compatibility with the C version# Errors# List of public traps and flags# Map conditions (per the spec) to signals# Valid rounding modes##### Context Functions ################################################### The getcontext() and setcontext() function manage access to a thread-local# current context.# Don't contaminate the namespace##### Decimal class ######################################################## Do not subclass Decimal from numbers.Real and do not register it as such# (because Decimals are not interoperable with floats). See the notes in# numbers.py for more detail.# Generally, the value of the Decimal instance is given by# (-1)**_sign * _int * 10**_exp# Special values are signified by _is_special == True# We're immutable, so use __new__ not __init__# Note that the coefficient, self._int, is actually stored as# a string rather than as a tuple of digits. This speeds up# the "digits to integer" and "integer to digits" conversions# that are used in almost every arithmetic operation on# Decimals. This is an internal detail: the as_tuple function# and the Decimal constructor still deal with tuples of# digits.# From a string# REs insist on real strings, so we can too.# finite number# NaN# infinity# From an integer# From another decimal# From an internal working value# tuple/list conversion (possibly from as_tuple())# process sign. The isinstance test rejects floats# infinity: value[1] is ignored# process and validate the digits in value[1]# skip leading zeros# NaN: digits form the diagnostic# finite number: digits give the coefficient# handle integer inputs# check for zeros; Decimal('0') == Decimal('-0')# If different signs, neg one is less# self_adjusted < other_adjusted# Note: The Decimal standard doesn't cover rich comparisons for# Decimals. 
In particular, the specification is silent on the# subject of what should happen for a comparison involving a NaN.# We take the following approach:# == comparisons involving a quiet NaN always return False# != comparisons involving a quiet NaN always return True# == or != comparisons involving a signaling NaN signal# InvalidOperation, and return False or True as above if the# InvalidOperation is not trapped.# <, >, <= and >= comparisons involving a (quiet or signaling)# NaN signal InvalidOperation, and return False if the# This behavior is designed to conform as closely as possible to# that specified by IEEE 754.# Compare(NaN, NaN) = NaN# In order to make sure that the hash of a Decimal instance# agrees with the hash of a numerically equal integer, float# or Fraction, we follow the rules for numeric hashes outlined# in the documentation. (See library docs, 'Built-in Types').# Find n, d in lowest terms such that abs(self) == n / d;# we'll deal with the sign later.# self is an integer.# Find d2, d5 such that abs(self) = n / (2**d2 * 5**d5).# (n & -n).bit_length() - 1 counts trailing zeros in binary# representation of n (provided n is nonzero).# Invariant: eval(repr(d)) == d# self._exp == 'N'# number of digits of self._int to left of decimal point# dotplace is number of digits of self._int to the left of the# decimal point in the mantissa of the output string (that is,# after adjusting the exponent)# no exponent required# usual scientific notation: 1 digit on left of the point# engineering notation, zero# engineering notation, nonzero# -Decimal('0') is Decimal('0'), not Decimal('-0'), except# in ROUND_FLOOR rounding mode.# + (-0) = 0, except in ROUND_FLOOR rounding mode.# If both INF, same sign => same as both, opposite => error.# Can't both be infinity here# If the answer is 0, the sign should be negative, in this case.# Equal and opposite# OK, now abs(op1) > abs(op2)# So we know the sign, and op1 > 0.# Now, op1 > abs(op2) > 0# self - other is computed as self + other.copy_negate()# Special case for multiplying by zero# Fixing in case the exponent is out of bounds# Special case for multiplying by power of 10# Special cases for zeroes# OK, so neither = 0, INF or NaN# result is not exact; adjust to ensure correct rounding# result is exact; get as close to ideal exponent as possible# Here the quotient is too large to be representable# self == +/-infinity -> InvalidOperation# other == 0 -> either InvalidOperation or DivisionUndefined# other = +/-infinity -> remainder = self# self = 0 -> remainder = self, with ideal exponent# catch most cases of large or small quotient# expdiff >= prec+1 => abs(self/other) > 10**prec# expdiff <= -2 => abs(self/other) < 0.1# adjust both arguments to have the same exponent, then divide# remainder is r*10**ideal_exponent; other is +/-op2.int *# 10**ideal_exponent. 
Apply correction to ensure that# abs(remainder) <= abs(other)/2# result has same sign as self unless r is negative# maximum length of payload is precision if clamp=0,# precision-1 if clamp=1.# decapitate payload if necessary# self is +/-Infinity; return unaltered# if self is zero then exponent should be between Etiny and# Emax if clamp==0, and between Etiny and Etop if clamp==1.# exp_min is the smallest allowable exponent of the result,# equal to max(self.adjusted()-context.prec+1, Etiny)# overflow: exp_min > Etop iff self.adjusted() > Emax# round if self has too many digits# check whether the rounding pushed the exponent out of range# raise the appropriate signals, taking care to respect# the precedence described in the specification# raise Clamped on underflow to 0# fold down if clamp == 1 and self has too few digits# here self was representable to begin with; return unchanged# for each of the rounding functions below:# self is a finite, nonzero Decimal# prec is an integer satisfying 0 <= prec < len(self._int)# each function returns either -1, 0, or 1, as follows:# 1 indicates that self should be rounded up (away from zero)# 0 indicates that self should be truncated, and that all the# digits to be truncated are zeros (so the value is unchanged)# -1 indicates that there are nonzero digits to be truncated# two-argument form: use the equivalent quantize call# one-argument form# compute product; raise InvalidOperation if either operand is# a signaling NaN or if the product is zero times infinity.# deal with NaNs: if there are any sNaNs then first one wins,# (i.e. behaviour for NaNs is identical to that of fma)# check inputs: we apply same restrictions as Python's pow()# additional restriction for decimal: the modulus must be less# than 10**prec in absolute value# define 0**0 == NaN, for consistency with two-argument pow# (even though it hurts!)# compute sign of result# convert modulo to a Python integer, and self and other to# Decimal integers (i.e. force their exponents to be >= 0)# compute result using integer pow()# In the comments below, we write x for the value of self and y for the# value of other. Write x = xc*10**xe and abs(y) = yc*10**ye, with xc# and yc positive integers not divisible by 10.# The main purpose of this method is to identify the *failure*# of x**y to be exactly representable with as little effort as# possible. So we look for cheap and easy tests that# eliminate the possibility of x**y being exact. Only if all# these tests are passed do we go on to actually compute x**y.# Here's the main idea. Express y as a rational number m/n, with m and# n relatively prime and n>0. Then for x**y to be exactly# representable (at *any* precision), xc must be the nth power of a# positive integer and xe must be divisible by n. If y is negative# then additionally xc must be a power of either 2 or 5, hence a power# of 2**n or 5**n.# There's a limit to how small |y| can be: if y=m/n as above# then:# (1) if xc != 1 then for the result to be representable we# need xc**(1/n) >= 2, and hence also xc**|y| >= 2. So# if |y| <= 1/nbits(xc) then xc < 2**nbits(xc) <=# 2**(1/|y|), hence xc**|y| < 2 and the result is not# representable.# (2) if xe != 0, |xe|*(1/n) >= 1, so |xe|*|y| >= 1. Hence if# |y| < 1/|xe| then the result is not representable.# Note that since x is not equal to 1, at least one of (1) and# (2) must apply. 
Now |y| < 1/nbits(xc) iff |yc|*nbits(xc) <# 10**-ye iff len(str(|yc|*nbits(xc)) <= -ye.# There's also a limit to how large y can be, at least if it's# positive: the normalized result will have coefficient xc**y,# so if it's representable then xc**y < 10**p, and y <# p/log10(xc). Hence if y*log10(xc) >= p then the result is# not exactly representable.# if len(str(abs(yc*xe)) <= -ye then abs(yc*xe) < 10**-ye,# so |y| < 1/xe and the result is not representable.# Similarly, len(str(abs(yc)*xc_bits)) <= -ye implies |y|# < 1/nbits(xc).# case where xc == 1: result is 10**(xe*y), with xe*y# required to be an integer# result is now 10**(xe * 10**ye); xe * 10**ye must be integral# if other is a nonnegative integer, use ideal exponent# case where y is negative: xc must be either a power# of 2 or a power of 5.# quick test for power of 2# now xc is a power of 2; e is its exponent# We now have:# x = 2**e * 10**xe, e > 0, and y < 0.# The exact result is:# x**y = 5**(-e*y) * 10**(e*y + xe*y)# provided that both e*y and xe*y are integers. Note that if# 5**(-e*y) >= 10**p, then the result can't be expressed# exactly with p digits of precision.# Using the above, we can guard against large values of ye.# 93/65 is an upper bound for log(10)/log(5), so if# ye >= len(str(93*p//65))# then# -e*y >= -y >= 10**ye > 93*p/65 > p*log(10)/log(5),# so 5**(-e*y) >= 10**p, and the coefficient of the result# can't be expressed in p digits.# emax >= largest e such that 5**e < 10**p.# Find -e*y and -xe*y; both must be integers# e >= log_5(xc) if xc is a power of 5; we have# equality all the way up to xc=5**2658# Guard against large values of ye, using the same logic as in# the 'xc is a power of 2' branch. 10/3 is an upper bound for# log(10)/log(2).# now y is positive; find m and n such that y = m/n# compute nth root of xc*10**xe# if 1 < xc < 2**n then xc isn't an nth power# compute nth root of xc using Newton's method# initial estimate# now xc*10**xe is the nth root of the original xc*10**xe# compute mth power of xc*10**xe# if m > p*100//_log10_lb(xc) then m > p/log10(xc), hence xc**m ># 10**p and the result is not representable.# by this point the result *is* exactly representable# adjust the exponent to get as close as possible to the ideal# exponent, if necessary# either argument is a NaN => result is NaN# 0**0 = NaN (!), x**0 = 1 for nonzero x (including +/-Infinity)# result has sign 1 iff self._sign is 1 and other is an odd integer# -ve**noninteger = NaN# (-0)**noninteger = 0**noninteger# negate self, without doing any unwanted rounding# 0**(+ve or Inf)= 0; 0**(-ve or -Inf) = Infinity# Inf**(+ve or Inf) = Inf; Inf**(-ve or -Inf) = 0# 1**other = 1, but the choice of exponent and the flags# depend on the exponent of self, and on whether other is a# positive integer, a negative integer, or neither# exp = max(self._exp*max(int(other), 0),# 1-context.prec) but evaluating int(other) directly# is dangerous until we know other is small (other# could be 1e999999999)# compute adjusted exponent of self# self ** infinity is infinity if self > 1, 0 if self < 1# self ** -infinity is infinity if self < 1, 0 if self > 1# from here on, the result always goes through the call# to _fix at the end of this function.# crude test to catch cases of extreme overflow/underflow. If# log10(self)*other >= 10**bound and bound >= len(str(Emax))# then 10**bound >= 10**len(str(Emax)) >= Emax+1 and hence# self**other >= 10**(Emax+1), so overflow occurs. 
The test# for underflow is similar.# self > 1 and other +ve, or self < 1 and other -ve# possibility of overflow# self > 1 and other -ve, or self < 1 and other +ve# possibility of underflow to 0# try for an exact result with precision +1# usual case: inexact result, x**y computed directly as exp(y*log(x))# compute correctly rounded result: start with precision +3,# then increase precision until result is unambiguously roundable# unlike exp, ln and log10, the power function respects the# rounding mode; no need to switch to ROUND_HALF_EVEN here# There's a difficulty here when 'other' is not an integer and# the result is exact. In this case, the specification# requires that the Inexact flag be raised (in spite of# exactness), but since the result is exact _fix won't do this# for us. (Correspondingly, the Underflow signal should also# be raised for subnormal results.) We can't directly raise# these signals either before or after calling _fix, since# that would violate the precedence for signals. So we wrap# the ._fix call in a temporary context, and reraise# afterwards.# pad with zeros up to length context.prec+1 if necessary; this# ensures that the Rounded signal will be raised.# create a copy of the current context, with cleared flags/traps# round in the new context# raise Inexact, and if necessary, Underflow# propagate signals to the original context; _fix could# have raised any of Overflow, Underflow, Subnormal,# Inexact, Rounded, Clamped. Overflow needs the correct# arguments. Note that the order of the exceptions is# important here.# if both are inf, it is OK# exp._exp should be between Etiny and Emax# raise appropriate flags# call to fix takes care of any necessary folddown, and# signals Clamped if necessary# pad answer with zeros if necessary# too many digits; round and lose data. If self.adjusted() <# exp-1, replace self by 10**(exp-1) before rounding# it can happen that the rescale alters the adjusted exponent;# for example when rounding 99.97 to 3 significant figures.# When this happens we end up with an extra 0 at the end of# the number; a second rescale fixes this.# the method name changed, but we provide also the old one, for compatibility# exponent = self._exp // 2. sqrt(-0) = -0# At this point self represents a positive number. Let p be# the desired precision and express self in the form c*100**e# with c a positive real number and e an integer, c and e# being chosen so that 100**(p-1) <= c < 100**p. Then the# (exact) square root of self is sqrt(c)*10**e, and 10**(p-1)# <= sqrt(c) < 10**p, so the closest representable Decimal at# precision p is n*10**e where n = round_half_even(sqrt(c)),# the closest integer to sqrt(c) with the even integer chosen# in the case of a tie.# To ensure correct rounding in all cases, we use the# following trick: we compute the square root to an extra# place (precision p+1 instead of precision p), rounding down.# Then, if the result is inexact and its last digit is 0 or 5,# we increase the last digit to 1 or 6 respectively; if it's# exact we leave the last digit alone. Now the final round to# p places (or fewer in the case of underflow) will round# correctly and raise the appropriate flags.# use an extra digit of precision# write argument in the form c*100**e where e = self._exp//2# is the 'ideal' exponent, to be used if the square root is# exactly representable. 
l is the number of 'digits' of c in# base 100, so that 100**(l-1) <= c < 100**l.# rescale so that c has exactly prec base 100 'digits'# find n = floor(sqrt(c)) using Newton's method# result is exact; rescale to use ideal exponent e# assert n % 10**shift == 0# result is not exact; fix last digit as described above# round, and fit to current context# If one operand is a quiet NaN and the other is number, then the# number is always returned# If both operands are finite and equal in numerical value# then an ordering is applied:# If the signs differ then max returns the operand with the# positive sign and min returns the operand with the negative sign# If the signs are the same then the exponent is used to select# the result. This is exactly the ordering used in compare_total.# If NaN or Infinity, self._exp is string# if one is negative and the other is positive, it's easy# let's handle both NaN types# compare payloads as though they're integers# exp(NaN) = NaN# exp(-Infinity) = 0# exp(0) = 1# exp(Infinity) = Infinity# the result is now guaranteed to be inexact (the true# mathematical result is transcendental). There's no need to# raise Rounded and Inexact here---they'll always be raised as# a result of the call to _fix.# we only need to do any computation for quite a small range# of adjusted exponents---for example, -29 <= adj <= 10 for# the default context. For smaller exponent the result is# indistinguishable from 1 at the given precision, while for# larger exponent the result either overflows or underflows.# overflow# underflow to 0# p+1 digits; final round will raise correct flags# general case# compute correctly rounded result: increase precision by# 3 digits at a time until we get an unambiguously# roundable result# at this stage, ans should round correctly with *any*# rounding mode, not just with ROUND_HALF_EVEN# for 0.1 <= x <= 10 we use the inequalities 1-1/x <= ln(x) <= x-1# argument >= 10; we use 23/10 = 2.3 as a lower bound for ln(10)# argument <= 0.1# 1 < self < 10# adj == -1, 0.1 <= self < 1# ln(NaN) = NaN# ln(0.0) == -Infinity# ln(Infinity) = Infinity# ln(1.0) == 0.0# ln(negative) raises InvalidOperation# result is irrational, so necessarily inexact# correctly rounded result: repeatedly increase precision by 3# until we get an unambiguously roundable result# at least p+3 places# assert len(str(abs(coeff)))-p >= 1# For x >= 10 or x < 0.1 we only need a bound on the integer# part of log10(self), and this comes directly from the# exponent of x. For 0.1 <= x <= 10 we use the inequalities# 1-1/x <= log(x) <= x-1. If x > 1 we have |log10(x)| ># (1-1/x)/2.31 > 0. If x < 1 then |log10(x)| > (1-x)/2.31 > 0# self >= 10# self < 0.1# log10(NaN) = NaN# log10(0.0) == -Infinity# log10(Infinity) = Infinity# log10(negative or -Infinity) raises InvalidOperation# log10(10**n) = n# answer may need rounding# correctly rounded result: repeatedly increase precision# until result is unambiguously roundable# logb(NaN) = NaN# logb(+/-Inf) = +Inf# logb(0) = -Inf, DivisionByZero# otherwise, simply return the adjusted exponent of self, as a# Decimal. 
Note that no attempt is made to fit the result# into the current context.# fill to context.prec# make the operation, and clean starting zeroes# comparison == 1# decide which flags to raise using value of ans# if precision == 1 then we don't raise Clamped for a# result 0E-Etiny.# just a normal, regular, boring number, :)# get values, pad if necessary# let's rotate!# let's shift!# Support for pickling, copy, and deepcopy# I'm immutable; therefore I am my own clone# My components are also immutable# PEP 3101 support. the _localeconv keyword argument should be# considered private: it's provided for ease of testing only.# Note: PEP 3101 says that if the type is not present then# there should be at least one digit after the decimal point.# We take the liberty of ignoring this requirement for# Decimal---it's presumably there to make sure that# format(float, '') behaves similarly to str(float).# special values don't care about the type or precision# a type of None defaults to 'g' or 'G', depending on context# if type is '%', adjust exponent of self accordingly# round if necessary, taking rounding mode from the context# special case: zeros with a positive exponent can't be# represented in fixed point; rescale them to 0e0.# figure out placement of the decimal point# find digits before and after decimal point, and get exponent# done with the decimal-specific stuff; hand over the rest# of the formatting to the _format_number function# Register Decimal as a kind of Number (an abstract base class).# However, do not register it as Real (because Decimals are not# interoperable with floats).##### Context class ######################################################## Set defaults; for everything except flags and _ignored_flags,# inherit from DefaultContext.# raise TypeError even for strings to have consistency# among various implementations.# Don't touch the flag# The errors define how to handle themselves.# Errors should only be risked on copies of the context# self._ignored_flags = []# Do not mutate-- This way, copies of a context leave the original# alone.# We inherit object.__hash__, so we must deny this explicitly# An exact conversion# Apply the context rounding# Methods# sign: 0 or 1# int: int# exp: None, int, or string# assert isinstance(value, tuple)# Let exp = min(tmp.exp - 1, tmp.adjusted() - precision - 1).# Then adding 10**exp to tmp has the same effect (after rounding)# as adding any positive quantity smaller than 10**exp; similarly# for subtraction. So if other is smaller than 10**exp we replace# it with 10**exp. This avoids tmp.exp - other.exp getting too large.##### Integer arithmetic functions used by ln, log10, exp and __pow__ ###### val_n = largest power of 10 dividing n.# The basic algorithm is the following: let log1p be the function# log1p(x) = log(1+x). Then log(x/M) = log1p((x-M)/M). We use# the reduction# log1p(y) = 2*log1p(y/(1+sqrt(1+y)))# repeatedly until the argument to log1p is small (< 2**-L in# absolute value). For small y we can use the Taylor series# expansion# log1p(y) ~ y - y**2/2 + y**3/3 - ... - (-y)**T/T# truncating at T such that y**T is small enough. The whole# computation is carried out in a form of fixed-point arithmetic,# with a real number z being represented by an integer# approximation to z*M. 
To avoid loss of precision, the y below# is actually an integer approximation to 2**R*y*M, where R is the# number of reductions performed so far.# argument reduction; R = number of reductions performed# Taylor series with T terms# increase precision by 2; compensate for this by dividing# final result by 100# write c*10**e as d*10**f with either:# f >= 0 and 1 <= d <= 10, or# f <= 0 and 0.1 <= d <= 1.# Thus for c*10**e close to 1, f = 0# error < 5 + 22 = 27# error < 1# exact# error < 2.31# error < 0.5# Increase precision by 2. The precision increase is compensated# for at the end with a division by 100.# rewrite c*10**e as d*10**f with either f >= 0 and 1 <= d <= 10,# or f <= 0 and 0.1 <= d <= 1. Then we can compute 10**p * log(c*10**e)# as 10**p * log(d) + 10**p*f * log(10).# compute approximation to 10**p*log(d), with error < 27# error of <= 0.5 in c# _ilog magnifies existing error in c by a factor of at most 10# p <= 0: just approximate the whole thing by 0; error < 2.31# compute approximation to f*10**p*log(10), with error < 11.# error in f * _log10_digits(p+extra) < |f| * 1 = |f|# after division, error < |f|/10**extra + 0.5 < 10 + 0.5 < 11# error in sum < 11+27 = 38; error after division < 0.38 + 0.5 < 1# digits are stored as a string, for quick conversion to# integer in the case that we've already computed enough# digits; the stored digits should always be correct# (truncated, not rounded to nearest).# compute p+3, p+6, p+9, ... digits; continue until at# least one of the extra digits is nonzero# compute p+extra digits, correct to within 1ulp# keep all reliable digits so far; remove trailing zeros# and next nonzero digit# Algorithm: to compute exp(z) for a real number z, first divide z# by a suitable power R of 2 so that |z/2**R| < 2**-L. Then# compute expm1(z/2**R) = exp(z/2**R) - 1 using the usual Taylor# series# expm1(x) = x + x**2/2! + x**3/3! + ...# Now use the identity# expm1(2x) = expm1(x)*(expm1(x)+2)# R times to compute the sequence expm1(z/2**R),# expm1(z/2**(R-1)), ... , exp(z/2), exp(z).# Find R such that x/2**R/M <= 2**-L# Taylor series. (2**L)**T > M# Expansion# we'll call iexp with M = 10**(p+2), giving p+3 digits of precision# compute log(10) with extra precision = adjusted exponent of c*10**e# compute quotient c*10**e/(log(10)) = c*10**(e+q)/(log(10)*10**q),# rounding down# reduce remainder back to original precision# error in result of _iexp < 120; error after division < 0.62# Find b such that 10**(b-1) <= |y| <= 10**b# log(x) = lxc*10**(-p-b-1), to p+b+1 places after the decimal point# compute product y*log(x) = yc*lxc*10**(-p-b-1+ye) = pc*10**(-p-1)# we prefer a result that isn't exactly 1; this makes it# easier to compute a correctly rounded result in __pow__# if x**y > 1:##### Helper Functions ##################################################### Comparison with a Rational instance (also includes integers):# self op n/d <=> self*d op n (for n and d integers, d positive).# A NaN or infinity can be left unchanged without affecting the# comparison result.# Comparisons with float and complex types. == and != comparisons# with complex numbers should succeed, returning either True or False# as appropriate. 
Other comparisons return NotImplemented.##### Setup Specific Contexts ############################################# The default context prototype used by Context()# Is mutable, so that new contexts can have different default values# Pre-made alternate contexts offered by the specification# Don't change these; the user should be able to select these# contexts and be able to reproduce results from other implementations# of the spec.##### crud for parsing strings ############################################## Regular expression used for parsing numeric strings. Additional# comments:# 1. Uncomment the two '\s*' lines to allow leading and/or trailing# whitespace. But note that the specification disallows whitespace in# a numeric string.# 2. For finite numbers (not infinities and NaNs) the body of the# number between the optional sign and the optional exponent must have# at least one decimal digit, possibly after the decimal point. The# lookahead expression '(?=\d|\.\d)' checks this.##### PEP3101 support functions ############################################### The functions in this section have little to do with the Decimal# class, and could potentially be reused or adapted for other pure# Python numeric classes that want to implement __format__# A format specifier for Decimal looks like:# [[fill]align][sign][#][0][minimumwidth][,][.precision][type]# The locale module is only needed for the 'n' format specifier. The# rest of the PEP 3101 code functions quite happily without it, so we# don't care too much if locale isn't present.# get the dictionary# zeropad; defaults for fill and alignment. If zero padding# is requested, the fill and align fields should be absent.# PEP 3101 originally specified that the default alignment should# be left; it was later agreed that right-aligned makes more sense# for numeric types. See http://bugs.python.org/issue6857.# default sign handling: '-' for negative, '' for positive# minimumwidth defaults to 0; precision remains None if not given# if format type is 'g' or 'G' then a precision of 0 makes little# sense; convert it to 1. Same if format type is unspecified.# determine thousands separator, grouping, and decimal separator, and# add appropriate entries to format_dict# apart from separators, 'n' behaves just like 'g'# how much extra space do we have to play with?# The result from localeconv()['grouping'], and the input to this# function, should be a list of integers in one of the# following three forms:# (1) an empty list, or# (2) nonempty list of positive integers + [0]# (3) list of positive integers + [locale.CHAR_MAX], or# max(..., 1) forces at least 1 digit to the left of a separator##### Useful Constants (internal use only) ################################# Reusable defaults# _SignedInfinity[sign] is infinity w/ that sign# Constants related to the hash implementation; hash(x) is based# on the reduction of x modulo _PyHASH_MODULUS# hash values to use for positive and negative infinities, and nans# _PyHASH_10INV is the inverse of 10 modulo the prime _PyHASH_MODULUSb' +This is an implementation of decimal floating point arithmetic based on +the General Decimal Arithmetic Specification: + + http://speleotrove.com/decimal/decarith.html + +and IEEE standard 854-1987: + + http://en.wikipedia.org/wiki/IEEE_854-1987 + +Decimal floating point has finite precision with arbitrarily large bounds. 
+ +The purpose of this module is to support arithmetic using familiar +"schoolhouse" rules and to avoid some of the tricky representation +issues associated with binary floating point. The package is especially +useful for financial applications or for contexts where users have +expectations that are at odds with binary floating point (for instance, +in binary floating point, 1.00 % 0.1 gives 0.09999999999999995 instead +of 0.0; Decimal('1.00') % Decimal('0.1') returns the expected +Decimal('0.00')). + +Here are some examples of using the decimal module: + +>>> from decimal import * +>>> setcontext(ExtendedContext) +>>> Decimal(0) +Decimal('0') +>>> Decimal('1') +Decimal('1') +>>> Decimal('-.0123') +Decimal('-0.0123') +>>> Decimal(123456) +Decimal('123456') +>>> Decimal('123.45e12345678') +Decimal('1.2345E+12345680') +>>> Decimal('1.33') + Decimal('1.27') +Decimal('2.60') +>>> Decimal('12.34') + Decimal('3.87') - Decimal('18.41') +Decimal('-2.20') +>>> dig = Decimal(1) +>>> print(dig / Decimal(3)) +0.333333333 +>>> getcontext().prec = 18 +>>> print(dig / Decimal(3)) +0.333333333333333333 +>>> print(dig.sqrt()) +1 +>>> print(Decimal(3).sqrt()) +1.73205080756887729 +>>> print(Decimal(3) ** 123) +4.85192780976896427E+58 +>>> inf = Decimal(1) / Decimal(0) +>>> print(inf) +Infinity +>>> neginf = Decimal(-1) / Decimal(0) +>>> print(neginf) +-Infinity +>>> print(neginf + inf) +NaN +>>> print(neginf * inf) +-Infinity +>>> print(dig / 0) +Infinity +>>> getcontext().traps[DivisionByZero] = 1 +>>> print(dig / 0) +Traceback (most recent call last): + ... + ... + ... +decimal.DivisionByZero: x / 0 +>>> c = Context() +>>> c.traps[InvalidOperation] = 0 +>>> print(c.flags[InvalidOperation]) +0 +>>> c.divide(Decimal(0), Decimal(0)) +Decimal('NaN') +>>> c.traps[InvalidOperation] = 1 +>>> print(c.flags[InvalidOperation]) +1 +>>> c.flags[InvalidOperation] = 0 +>>> print(c.flags[InvalidOperation]) +0 +>>> print(c.divide(Decimal(0), Decimal(0))) +Traceback (most recent call last): + ... + ... + ... +decimal.InvalidOperation: 0 / 0 +>>> print(c.flags[InvalidOperation]) +1 +>>> c.flags[InvalidOperation] = 0 +>>> c.traps[InvalidOperation] = 0 +>>> print(c.divide(Decimal(0), Decimal(0))) +NaN +>>> print(c.flags[InvalidOperation]) +1 +>>> +'u' +This is an implementation of decimal floating point arithmetic based on +the General Decimal Arithmetic Specification: + + http://speleotrove.com/decimal/decarith.html + +and IEEE standard 854-1987: + + http://en.wikipedia.org/wiki/IEEE_854-1987 + +Decimal floating point has finite precision with arbitrarily large bounds. + +The purpose of this module is to support arithmetic using familiar +"schoolhouse" rules and to avoid some of the tricky representation +issues associated with binary floating point. The package is especially +useful for financial applications or for contexts where users have +expectations that are at odds with binary floating point (for instance, +in binary floating point, 1.00 % 0.1 gives 0.09999999999999995 instead +of 0.0; Decimal('1.00') % Decimal('0.1') returns the expected +Decimal('0.00')). 
+ +Here are some examples of using the decimal module: + +>>> from decimal import * +>>> setcontext(ExtendedContext) +>>> Decimal(0) +Decimal('0') +>>> Decimal('1') +Decimal('1') +>>> Decimal('-.0123') +Decimal('-0.0123') +>>> Decimal(123456) +Decimal('123456') +>>> Decimal('123.45e12345678') +Decimal('1.2345E+12345680') +>>> Decimal('1.33') + Decimal('1.27') +Decimal('2.60') +>>> Decimal('12.34') + Decimal('3.87') - Decimal('18.41') +Decimal('-2.20') +>>> dig = Decimal(1) +>>> print(dig / Decimal(3)) +0.333333333 +>>> getcontext().prec = 18 +>>> print(dig / Decimal(3)) +0.333333333333333333 +>>> print(dig.sqrt()) +1 +>>> print(Decimal(3).sqrt()) +1.73205080756887729 +>>> print(Decimal(3) ** 123) +4.85192780976896427E+58 +>>> inf = Decimal(1) / Decimal(0) +>>> print(inf) +Infinity +>>> neginf = Decimal(-1) / Decimal(0) +>>> print(neginf) +-Infinity +>>> print(neginf + inf) +NaN +>>> print(neginf * inf) +-Infinity +>>> print(dig / 0) +Infinity +>>> getcontext().traps[DivisionByZero] = 1 +>>> print(dig / 0) +Traceback (most recent call last): + ... + ... + ... +decimal.DivisionByZero: x / 0 +>>> c = Context() +>>> c.traps[InvalidOperation] = 0 +>>> print(c.flags[InvalidOperation]) +0 +>>> c.divide(Decimal(0), Decimal(0)) +Decimal('NaN') +>>> c.traps[InvalidOperation] = 1 +>>> print(c.flags[InvalidOperation]) +1 +>>> c.flags[InvalidOperation] = 0 +>>> print(c.flags[InvalidOperation]) +0 +>>> print(c.divide(Decimal(0), Decimal(0))) +Traceback (most recent call last): + ... + ... + ... +decimal.InvalidOperation: 0 / 0 +>>> print(c.flags[InvalidOperation]) +1 +>>> c.flags[InvalidOperation] = 0 +>>> c.traps[InvalidOperation] = 0 +>>> print(c.divide(Decimal(0), Decimal(0))) +NaN +>>> print(c.flags[InvalidOperation]) +1 +>>> +'b'Decimal'u'Decimal'b'Context'u'Context'b'DecimalTuple'u'DecimalTuple'b'DefaultContext'u'DefaultContext'b'BasicContext'u'BasicContext'b'ExtendedContext'u'ExtendedContext'b'DecimalException'u'DecimalException'b'Clamped'u'Clamped'b'InvalidOperation'u'InvalidOperation'b'DivisionByZero'u'DivisionByZero'b'Inexact'u'Inexact'b'Rounded'u'Rounded'b'Subnormal'u'Subnormal'b'Overflow'u'Overflow'b'Underflow'u'Underflow'b'FloatOperation'u'FloatOperation'b'DivisionImpossible'u'DivisionImpossible'b'InvalidContext'u'InvalidContext'b'ConversionSyntax'u'ConversionSyntax'b'DivisionUndefined'u'DivisionUndefined'b'ROUND_DOWN'b'ROUND_HALF_UP'b'ROUND_HALF_EVEN'b'ROUND_CEILING'b'ROUND_FLOOR'b'ROUND_UP'b'ROUND_HALF_DOWN'b'ROUND_05UP'b'setcontext'u'setcontext'b'getcontext'u'getcontext'b'localcontext'u'localcontext'b'MAX_PREC'u'MAX_PREC'b'MAX_EMAX'u'MAX_EMAX'b'MIN_EMIN'u'MIN_EMIN'b'MIN_ETINY'u'MIN_ETINY'b'HAVE_THREADS'u'HAVE_THREADS'b'HAVE_CONTEXTVAR'u'HAVE_CONTEXTVAR'b'decimal'b'1.70'b'2.4.2'b'sign digits exponent'u'sign digits exponent'b'Base exception class. + + Used exceptions derive from this. + If an exception derives from another exception besides this (such as + Underflow (Inexact, Rounded, Subnormal) that indicates that it is only + called if the others are present. This isn't actually used for + anything, though. + + handle -- Called when context._raise_error is called and the + trap_enabler is not set. First argument is self, second is the + context. More arguments can be given, those being after + the explanation in _raise_error (For example, + context._raise_error(NewError, '(-x)!', self._sign) would + call NewError().handle(context, self._sign).) + + To define a new exception, it should be sufficient to have it derive + from DecimalException. + 'u'Base exception class. 
+ + Used exceptions derive from this. + If an exception derives from another exception besides this (such as + Underflow (Inexact, Rounded, Subnormal) that indicates that it is only + called if the others are present. This isn't actually used for + anything, though. + + handle -- Called when context._raise_error is called and the + trap_enabler is not set. First argument is self, second is the + context. More arguments can be given, those being after + the explanation in _raise_error (For example, + context._raise_error(NewError, '(-x)!', self._sign) would + call NewError().handle(context, self._sign).) + + To define a new exception, it should be sufficient to have it derive + from DecimalException. + 'b'Exponent of a 0 changed to fit bounds. + + This occurs and signals clamped if the exponent of a result has been + altered in order to fit the constraints of a specific concrete + representation. This may occur when the exponent of a zero result would + be outside the bounds of a representation, or when a large normal + number would have an encoded exponent that cannot be represented. In + this latter case, the exponent is reduced to fit and the corresponding + number of zero digits are appended to the coefficient ("fold-down"). + 'u'Exponent of a 0 changed to fit bounds. + + This occurs and signals clamped if the exponent of a result has been + altered in order to fit the constraints of a specific concrete + representation. This may occur when the exponent of a zero result would + be outside the bounds of a representation, or when a large normal + number would have an encoded exponent that cannot be represented. In + this latter case, the exponent is reduced to fit and the corresponding + number of zero digits are appended to the coefficient ("fold-down"). + 'b'An invalid operation was performed. + + Various bad things cause this: + + Something creates a signaling NaN + -INF + INF + 0 * (+-)INF + (+-)INF / (+-)INF + x % 0 + (+-)INF % x + x._rescale( non-integer ) + sqrt(-x) , x > 0 + 0 ** 0 + x ** (non-integer) + x ** (+-)INF + An operand is invalid + + The result of the operation after these is a quiet positive NaN, + except when the cause is a signaling NaN, in which case the result is + also a quiet NaN, but with the original sign, and an optional + diagnostic information. + 'u'An invalid operation was performed. + + Various bad things cause this: + + Something creates a signaling NaN + -INF + INF + 0 * (+-)INF + (+-)INF / (+-)INF + x % 0 + (+-)INF % x + x._rescale( non-integer ) + sqrt(-x) , x > 0 + 0 ** 0 + x ** (non-integer) + x ** (+-)INF + An operand is invalid + + The result of the operation after these is a quiet positive NaN, + except when the cause is a signaling NaN, in which case the result is + also a quiet NaN, but with the original sign, and an optional + diagnostic information. + 'b'Trying to convert badly formed string. + + This occurs and signals invalid-operation if a string is being + converted to a number and it does not conform to the numeric string + syntax. The result is [0,qNaN]. + 'u'Trying to convert badly formed string. + + This occurs and signals invalid-operation if a string is being + converted to a number and it does not conform to the numeric string + syntax. The result is [0,qNaN]. + 'b'Division by 0. + + This occurs and signals division-by-zero if division of a finite number + by zero was attempted (during a divide-integer or divide operation, or a + power operation with negative right-hand operand), and the dividend was + not zero. 
+ + The result of the operation is [sign,inf], where sign is the exclusive + or of the signs of the operands for divide, or is 1 for an odd power of + -0, for power. + 'u'Division by 0. + + This occurs and signals division-by-zero if division of a finite number + by zero was attempted (during a divide-integer or divide operation, or a + power operation with negative right-hand operand), and the dividend was + not zero. + + The result of the operation is [sign,inf], where sign is the exclusive + or of the signs of the operands for divide, or is 1 for an odd power of + -0, for power. + 'b'Cannot perform the division adequately. + + This occurs and signals invalid-operation if the integer result of a + divide-integer or remainder operation had too many digits (would be + longer than precision). The result is [0,qNaN]. + 'u'Cannot perform the division adequately. + + This occurs and signals invalid-operation if the integer result of a + divide-integer or remainder operation had too many digits (would be + longer than precision). The result is [0,qNaN]. + 'b'Undefined result of division. + + This occurs and signals invalid-operation if division by zero was + attempted (during a divide-integer, divide, or remainder operation), and + the dividend is also zero. The result is [0,qNaN]. + 'u'Undefined result of division. + + This occurs and signals invalid-operation if division by zero was + attempted (during a divide-integer, divide, or remainder operation), and + the dividend is also zero. The result is [0,qNaN]. + 'b'Had to round, losing information. + + This occurs and signals inexact whenever the result of an operation is + not exact (that is, it needed to be rounded and any discarded digits + were non-zero), or if an overflow or underflow condition occurs. The + result in all cases is unchanged. + + The inexact signal may be tested (or trapped) to determine if a given + operation (or sequence of operations) was inexact. + 'u'Had to round, losing information. + + This occurs and signals inexact whenever the result of an operation is + not exact (that is, it needed to be rounded and any discarded digits + were non-zero), or if an overflow or underflow condition occurs. The + result in all cases is unchanged. + + The inexact signal may be tested (or trapped) to determine if a given + operation (or sequence of operations) was inexact. + 'b'Invalid context. Unknown rounding, for example. + + This occurs and signals invalid-operation if an invalid context was + detected during an operation. This can occur if contexts are not checked + on creation and either the precision exceeds the capability of the + underlying concrete representation or an unknown or unsupported rounding + was specified. These aspects of the context need only be checked when + the values are required to be used. The result is [0,qNaN]. + 'u'Invalid context. Unknown rounding, for example. + + This occurs and signals invalid-operation if an invalid context was + detected during an operation. This can occur if contexts are not checked + on creation and either the precision exceeds the capability of the + underlying concrete representation or an unknown or unsupported rounding + was specified. These aspects of the context need only be checked when + the values are required to be used. The result is [0,qNaN]. + 'b'Number got rounded (not necessarily changed during rounding). 
+ + This occurs and signals rounded whenever the result of an operation is + rounded (that is, some zero or non-zero digits were discarded from the + coefficient), or if an overflow or underflow condition occurs. The + result in all cases is unchanged. + + The rounded signal may be tested (or trapped) to determine if a given + operation (or sequence of operations) caused a loss of precision. + 'u'Number got rounded (not necessarily changed during rounding). + + This occurs and signals rounded whenever the result of an operation is + rounded (that is, some zero or non-zero digits were discarded from the + coefficient), or if an overflow or underflow condition occurs. The + result in all cases is unchanged. + + The rounded signal may be tested (or trapped) to determine if a given + operation (or sequence of operations) caused a loss of precision. + 'b'Exponent < Emin before rounding. + + This occurs and signals subnormal whenever the result of a conversion or + operation is subnormal (that is, its adjusted exponent is less than + Emin, before any rounding). The result in all cases is unchanged. + + The subnormal signal may be tested (or trapped) to determine if a given + or operation (or sequence of operations) yielded a subnormal result. + 'u'Exponent < Emin before rounding. + + This occurs and signals subnormal whenever the result of a conversion or + operation is subnormal (that is, its adjusted exponent is less than + Emin, before any rounding). The result in all cases is unchanged. + + The subnormal signal may be tested (or trapped) to determine if a given + or operation (or sequence of operations) yielded a subnormal result. + 'b'Numerical overflow. + + This occurs and signals overflow if the adjusted exponent of a result + (from a conversion or from an operation that is not an attempt to divide + by zero), after rounding, would be greater than the largest value that + can be handled by the implementation (the value Emax). + + The result depends on the rounding mode: + + For round-half-up and round-half-even (and for round-half-down and + round-up, if implemented), the result of the operation is [sign,inf], + where sign is the sign of the intermediate result. For round-down, the + result is the largest finite number that can be represented in the + current precision, with the sign of the intermediate result. For + round-ceiling, the result is the same as for round-down if the sign of + the intermediate result is 1, or is [0,inf] otherwise. For round-floor, + the result is the same as for round-down if the sign of the intermediate + result is 0, or is [1,inf] otherwise. In all cases, Inexact and Rounded + will also be raised. + 'u'Numerical overflow. + + This occurs and signals overflow if the adjusted exponent of a result + (from a conversion or from an operation that is not an attempt to divide + by zero), after rounding, would be greater than the largest value that + can be handled by the implementation (the value Emax). + + The result depends on the rounding mode: + + For round-half-up and round-half-even (and for round-half-down and + round-up, if implemented), the result of the operation is [sign,inf], + where sign is the sign of the intermediate result. For round-down, the + result is the largest finite number that can be represented in the + current precision, with the sign of the intermediate result. For + round-ceiling, the result is the same as for round-down if the sign of + the intermediate result is 1, or is [0,inf] otherwise. 
For round-floor, + the result is the same as for round-down if the sign of the intermediate + result is 0, or is [1,inf] otherwise. In all cases, Inexact and Rounded + will also be raised. + 'b'Numerical underflow with result rounded to 0. + + This occurs and signals underflow if a result is inexact and the + adjusted exponent of the result would be smaller (more negative) than + the smallest value that can be handled by the implementation (the value + Emin). That is, the result is both inexact and subnormal. + + The result after an underflow will be a subnormal number rounded, if + necessary, so that its exponent is not less than Etiny. This may result + in 0 with the sign of the intermediate result and an exponent of Etiny. + + In all cases, Inexact, Rounded, and Subnormal will also be raised. + 'u'Numerical underflow with result rounded to 0. + + This occurs and signals underflow if a result is inexact and the + adjusted exponent of the result would be smaller (more negative) than + the smallest value that can be handled by the implementation (the value + Emin). That is, the result is both inexact and subnormal. + + The result after an underflow will be a subnormal number rounded, if + necessary, so that its exponent is not less than Etiny. This may result + in 0 with the sign of the intermediate result and an exponent of Etiny. + + In all cases, Inexact, Rounded, and Subnormal will also be raised. + 'b'Enable stricter semantics for mixing floats and Decimals. + + If the signal is not trapped (default), mixing floats and Decimals is + permitted in the Decimal() constructor, context.create_decimal() and + all comparison operators. Both conversion and comparisons are exact. + Any occurrence of a mixed operation is silently recorded by setting + FloatOperation in the context flags. Explicit conversions with + Decimal.from_float() or context.create_decimal_from_float() do not + set the flag. + + Otherwise (the signal is trapped), only equality comparisons and explicit + conversions are silent. All other mixed operations raise FloatOperation. + 'u'Enable stricter semantics for mixing floats and Decimals. + + If the signal is not trapped (default), mixing floats and Decimals is + permitted in the Decimal() constructor, context.create_decimal() and + all comparison operators. Both conversion and comparisons are exact. + Any occurrence of a mixed operation is silently recorded by setting + FloatOperation in the context flags. Explicit conversions with + Decimal.from_float() or context.create_decimal_from_float() do not + set the flag. + + Otherwise (the signal is trapped), only equality comparisons and explicit + conversions are silent. All other mixed operations raise FloatOperation. + 'b'decimal_context'u'decimal_context'b'Returns this thread's context. + + If this thread does not yet have a context, returns + a new context and sets this thread's context. + New contexts are copies of DefaultContext. + 'u'Returns this thread's context. + + If this thread does not yet have a context, returns + a new context and sets this thread's context. + New contexts are copies of DefaultContext. 
+ 'b'Set this thread's context to context.'u'Set this thread's context to context.'b'Return a context manager for a copy of the supplied context + + Uses a copy of the current context if no context is specified + The returned context manager creates a local decimal context + in a with statement: + def sin(x): + with localcontext() as ctx: + ctx.prec += 2 + # Rest of sin calculation algorithm + # uses a precision 2 greater than normal + return +s # Convert result to normal precision + + def sin(x): + with localcontext(ExtendedContext): + # Rest of sin calculation algorithm + # uses the Extended Context from the + # General Decimal Arithmetic Specification + return +s # Convert result to normal context + + >>> setcontext(DefaultContext) + >>> print(getcontext().prec) + 28 + >>> with localcontext(): + ... ctx = getcontext() + ... ctx.prec += 2 + ... print(ctx.prec) + ... + 30 + >>> with localcontext(ExtendedContext): + ... print(getcontext().prec) + ... + 9 + >>> print(getcontext().prec) + 28 + 'u'Return a context manager for a copy of the supplied context + + Uses a copy of the current context if no context is specified + The returned context manager creates a local decimal context + in a with statement: + def sin(x): + with localcontext() as ctx: + ctx.prec += 2 + # Rest of sin calculation algorithm + # uses a precision 2 greater than normal + return +s # Convert result to normal precision + + def sin(x): + with localcontext(ExtendedContext): + # Rest of sin calculation algorithm + # uses the Extended Context from the + # General Decimal Arithmetic Specification + return +s # Convert result to normal context + + >>> setcontext(DefaultContext) + >>> print(getcontext().prec) + 28 + >>> with localcontext(): + ... ctx = getcontext() + ... ctx.prec += 2 + ... print(ctx.prec) + ... + 30 + >>> with localcontext(ExtendedContext): + ... print(getcontext().prec) + ... + 9 + >>> print(getcontext().prec) + 28 + 'b'Floating point class for decimal arithmetic.'u'Floating point class for decimal arithmetic.'b'_exp'u'_exp'b'_int'u'_int'b'_sign'u'_sign'b'_is_special'u'_is_special'b'Create a decimal point instance. + + >>> Decimal('3.14') # string input + Decimal('3.14') + >>> Decimal((0, (3, 1, 4), -2)) # tuple (sign, digit_tuple, exponent) + Decimal('3.14') + >>> Decimal(314) # int + Decimal('314') + >>> Decimal(Decimal(314)) # another decimal instance + Decimal('314') + >>> Decimal(' 3.14 \n') # leading and trailing whitespace okay + Decimal('3.14') + 'u'Create a decimal point instance. + + >>> Decimal('3.14') # string input + Decimal('3.14') + >>> Decimal((0, (3, 1, 4), -2)) # tuple (sign, digit_tuple, exponent) + Decimal('3.14') + >>> Decimal(314) # int + Decimal('314') + >>> Decimal(Decimal(314)) # another decimal instance + Decimal('314') + >>> Decimal(' 3.14 \n') # leading and trailing whitespace okay + Decimal('3.14') + 'b'Invalid literal for Decimal: %r'u'Invalid literal for Decimal: %r'b'sign'b'frac'u'frac'b'exp'u'exp'b'diag'u'diag'b'signal'u'signal'b'N'u'N'b'F'u'F'b'Invalid tuple size in creation of Decimal from list or tuple. The list or tuple should have exactly three elements.'u'Invalid tuple size in creation of Decimal from list or tuple. The list or tuple should have exactly three elements.'b'Invalid sign. The first value in the tuple should be an integer; either 0 for a positive number or 1 for a negative number.'u'Invalid sign. 
The first value in the tuple should be an integer; either 0 for a positive number or 1 for a negative number.'b'The second value in the tuple must be composed of integers in the range 0 through 9.'u'The second value in the tuple must be composed of integers in the range 0 through 9.'b'The third value in the tuple must be an integer, or one of the strings 'F', 'n', 'N'.'u'The third value in the tuple must be an integer, or one of the strings 'F', 'n', 'N'.'b'strict semantics for mixing floats and Decimals are enabled'u'strict semantics for mixing floats and Decimals are enabled'b'Cannot convert %r to Decimal'u'Cannot convert %r to Decimal'b'Converts a float to a decimal number, exactly. + + Note that Decimal.from_float(0.1) is not the same as Decimal('0.1'). + Since 0.1 is not exactly representable in binary floating point, the + value is stored as the nearest representable value which is + 0x1.999999999999ap-4. The exact equivalent of the value in decimal + is 0.1000000000000000055511151231257827021181583404541015625. + + >>> Decimal.from_float(0.1) + Decimal('0.1000000000000000055511151231257827021181583404541015625') + >>> Decimal.from_float(float('nan')) + Decimal('NaN') + >>> Decimal.from_float(float('inf')) + Decimal('Infinity') + >>> Decimal.from_float(-float('inf')) + Decimal('-Infinity') + >>> Decimal.from_float(-0.0) + Decimal('-0') + + 'u'Converts a float to a decimal number, exactly. + + Note that Decimal.from_float(0.1) is not the same as Decimal('0.1'). + Since 0.1 is not exactly representable in binary floating point, the + value is stored as the nearest representable value which is + 0x1.999999999999ap-4. The exact equivalent of the value in decimal + is 0.1000000000000000055511151231257827021181583404541015625. + + >>> Decimal.from_float(0.1) + Decimal('0.1000000000000000055511151231257827021181583404541015625') + >>> Decimal.from_float(float('nan')) + Decimal('NaN') + >>> Decimal.from_float(float('inf')) + Decimal('Infinity') + >>> Decimal.from_float(-float('inf')) + Decimal('-Infinity') + >>> Decimal.from_float(-0.0) + Decimal('-0') + + 'b'argument must be int or float.'u'argument must be int or float.'b'Returns whether the number is not actually one. + + 0 if a number + 1 if NaN + 2 if sNaN + 'u'Returns whether the number is not actually one. + + 0 if a number + 1 if NaN + 2 if sNaN + 'b'Returns whether the number is infinite + + 0 if finite or not a number + 1 if +INF + -1 if -INF + 'u'Returns whether the number is infinite + + 0 if finite or not a number + 1 if +INF + -1 if -INF + 'b'Returns whether the number is not actually one. + + if self, other are sNaN, signal + if self, other are NaN return nan + return 0 + + Done before operations. + 'u'Returns whether the number is not actually one. + + if self, other are sNaN, signal + if self, other are NaN return nan + return 0 + + Done before operations. + 'b'sNaN'u'sNaN'b'Version of _check_nans used for the signaling comparisons + compare_signal, __le__, __lt__, __ge__, __gt__. + + Signal InvalidOperation if either self or other is a (quiet + or signaling) NaN. Signaling NaNs take precedence over quiet + NaNs. + + Return 0 if neither operand is a NaN. + + 'u'Version of _check_nans used for the signaling comparisons + compare_signal, __le__, __lt__, __ge__, __gt__. + + Signal InvalidOperation if either self or other is a (quiet + or signaling) NaN. Signaling NaNs take precedence over quiet + NaNs. + + Return 0 if neither operand is a NaN. 
+ + 'b'comparison involving sNaN'u'comparison involving sNaN'b'comparison involving NaN'u'comparison involving NaN'b'Return True if self is nonzero; otherwise return False. + + NaNs and infinities are considered nonzero. + 'u'Return True if self is nonzero; otherwise return False. + + NaNs and infinities are considered nonzero. + 'b'Compare the two non-NaN decimal instances self and other. + + Returns -1 if self < other, 0 if self == other and 1 + if self > other. This routine is for internal use only.'u'Compare the two non-NaN decimal instances self and other. + + Returns -1 if self < other, 0 if self == other and 1 + if self > other. This routine is for internal use only.'b'Compare self to other. Return a decimal value: + + a or b is a NaN ==> Decimal('NaN') + a < b ==> Decimal('-1') + a == b ==> Decimal('0') + a > b ==> Decimal('1') + 'u'Compare self to other. Return a decimal value: + + a or b is a NaN ==> Decimal('NaN') + a < b ==> Decimal('-1') + a == b ==> Decimal('0') + a > b ==> Decimal('1') + 'b'x.__hash__() <==> hash(x)'u'x.__hash__() <==> hash(x)'b'Cannot hash a signaling NaN value.'u'Cannot hash a signaling NaN value.'b'Represents the number as a triple tuple. + + To show the internals exactly as they are. + 'u'Represents the number as a triple tuple. + + To show the internals exactly as they are. + 'b'Express a finite Decimal instance in the form n / d. + + Returns a pair (n, d) of integers. When called on an infinity + or NaN, raises OverflowError or ValueError respectively. + + >>> Decimal('3.14').as_integer_ratio() + (157, 50) + >>> Decimal('-123e5').as_integer_ratio() + (-12300000, 1) + >>> Decimal('0.00').as_integer_ratio() + (0, 1) + + 'u'Express a finite Decimal instance in the form n / d. + + Returns a pair (n, d) of integers. When called on an infinity + or NaN, raises OverflowError or ValueError respectively. + + >>> Decimal('3.14').as_integer_ratio() + (157, 50) + >>> Decimal('-123e5').as_integer_ratio() + (-12300000, 1) + >>> Decimal('0.00').as_integer_ratio() + (0, 1) + + 'b'cannot convert NaN to integer ratio'u'cannot convert NaN to integer ratio'b'cannot convert Infinity to integer ratio'u'cannot convert Infinity to integer ratio'b'Represents the number as an instance of Decimal.'u'Represents the number as an instance of Decimal.'b'Decimal('%s')'u'Decimal('%s')'b'Return string representation of the number in scientific notation. + + Captures all of the information in the underlying representation. + 'u'Return string representation of the number in scientific notation. + + Captures all of the information in the underlying representation. + 'b'Infinity'u'Infinity'b'NaN'u'NaN'b'e'u'e'b'E'u'E'b'%+d'u'%+d'b'Convert to a string, using engineering notation if an exponent is needed. + + Engineering notation has an exponent which is a multiple of 3. This + can leave up to 3 digits to the left of the decimal place and may + require the addition of either one or two trailing zeros. + 'u'Convert to a string, using engineering notation if an exponent is needed. + + Engineering notation has an exponent which is a multiple of 3. This + can leave up to 3 digits to the left of the decimal place and may + require the addition of either one or two trailing zeros. + 'b'Returns a copy with the sign switched. + + Rounds, if it has reason. + 'u'Returns a copy with the sign switched. + + Rounds, if it has reason. + 'b'Returns a copy, unless it is a sNaN. + + Rounds the number (if more than precision digits) + 'u'Returns a copy, unless it is a sNaN. 
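+The quiet/signaling distinction described above can be seen directly: == and != are quiet, while ordering comparisons involving a NaN signal InvalidOperation, which is trapped in the default context. A minimal sketch:
+
+from decimal import Decimal, InvalidOperation
+
+nan = Decimal('NaN')
+print(nan == Decimal('1'))                     # False  (quiet)
+print(Decimal('2.1').compare(Decimal('3')))    # -1
+try:
+    nan < Decimal('1')                         # ordering comparison signals
+except InvalidOperation:
+    print('comparison involving NaN was trapped')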
+ + Rounds the number (if more than precision digits) + 'b'Returns the absolute value of self. + + If the keyword argument 'round' is false, do not round. The + expression self.__abs__(round=False) is equivalent to + self.copy_abs(). + 'u'Returns the absolute value of self. + + If the keyword argument 'round' is false, do not round. The + expression self.__abs__(round=False) is equivalent to + self.copy_abs(). + 'b'Returns self + other. + + -INF + INF (or the reverse) cause InvalidOperation errors. + 'u'Returns self + other. + + -INF + INF (or the reverse) cause InvalidOperation errors. + 'b'-INF + INF'u'-INF + INF'b'Return self - other'u'Return self - other'b'Return other - self'u'Return other - self'b'Return self * other. + + (+-) INF * 0 (or its reverse) raise InvalidOperation. + 'u'Return self * other. + + (+-) INF * 0 (or its reverse) raise InvalidOperation. + 'b'(+-)INF * 0'u'(+-)INF * 0'b'0 * (+-)INF'u'0 * (+-)INF'b'Return self / other.'u'Return self / other.'b'(+-)INF/(+-)INF'u'(+-)INF/(+-)INF'b'Division by infinity'u'Division by infinity'b'0 / 0'u'0 / 0'b'x / 0'u'x / 0'b'Return (self // other, self % other), to context.prec precision. + + Assumes that neither self nor other is a NaN, that self is not + infinite and that other is nonzero. + 'u'Return (self // other, self % other), to context.prec precision. + + Assumes that neither self nor other is a NaN, that self is not + infinite and that other is nonzero. + 'b'quotient too large in //, % or divmod'u'quotient too large in //, % or divmod'b'Swaps self/other and returns __truediv__.'u'Swaps self/other and returns __truediv__.'b' + Return (self // other, self % other) + 'u' + Return (self // other, self % other) + 'b'divmod(INF, INF)'u'divmod(INF, INF)'b'INF % x'u'INF % x'b'divmod(0, 0)'u'divmod(0, 0)'b'x // 0'u'x // 0'b'x % 0'u'x % 0'b'Swaps self/other and returns __divmod__.'u'Swaps self/other and returns __divmod__.'b' + self % other + 'u' + self % other + 'b'0 % 0'u'0 % 0'b'Swaps self/other and returns __mod__.'u'Swaps self/other and returns __mod__.'b' + Remainder nearest to 0- abs(remainder-near) <= other/2 + 'u' + Remainder nearest to 0- abs(remainder-near) <= other/2 + 'b'remainder_near(infinity, x)'u'remainder_near(infinity, x)'b'remainder_near(x, 0)'u'remainder_near(x, 0)'b'remainder_near(0, 0)'u'remainder_near(0, 0)'b'self // other'u'self // other'b'INF // INF'u'INF // INF'b'0 // 0'u'0 // 0'b'Swaps self/other and returns __floordiv__.'u'Swaps self/other and returns __floordiv__.'b'Float representation.'u'Float representation.'b'Cannot convert signaling NaN to float'u'Cannot convert signaling NaN to float'b'-nan'u'-nan'b'nan'u'nan'b'Converts self to an int, truncating if necessary.'u'Converts self to an int, truncating if necessary.'b'Cannot convert NaN to integer'u'Cannot convert NaN to integer'b'Cannot convert infinity to integer'u'Cannot convert infinity to integer'b'Decapitate the payload of a NaN to fit the context'u'Decapitate the payload of a NaN to fit the context'b'Round if it is necessary to keep self within prec precision. + + Rounds and fixes the exponent. Does not raise on a sNaN. + + Arguments: + self - Decimal instance + context - context used. + 'u'Round if it is necessary to keep self within prec precision. + + Rounds and fixes the exponent. Does not raise on a sNaN. + + Arguments: + self - Decimal instance + context - context used. 
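+A quick illustration of the representation helpers mentioned above (as_integer_ratio, engineering notation) and of the fact that exact addition preserves trailing zeros (illustrative values):
+
+from decimal import Decimal
+
+print(Decimal('3.14').as_integer_ratio())   # (157, 50)
+print(Decimal('1.5E+4').to_eng_string())    # 15E+3  (exponent forced to a multiple of 3)
+print(Decimal('0.10') + Decimal('0.20'))    # 0.30   (trailing zero kept)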
+ 'b'above Emax'u'above Emax'b'Also known as round-towards-0, truncate.'u'Also known as round-towards-0, truncate.'b'Rounds away from 0.'u'Rounds away from 0.'b'Rounds 5 up (away from 0)'u'Rounds 5 up (away from 0)'b'56789'u'56789'b'Round 5 down'u'Round 5 down'b'Round 5 to even, rest to nearest.'u'Round 5 to even, rest to nearest.'b'02468'u'02468'b'Rounds up (not away from 0 if negative.)'u'Rounds up (not away from 0 if negative.)'b'Rounds down (not towards 0 if negative)'u'Rounds down (not towards 0 if negative)'b'Round down unless digit prec-1 is 0 or 5.'u'Round down unless digit prec-1 is 0 or 5.'b'05'u'05'b'Round self to the nearest integer, or to a given precision. + + If only one argument is supplied, round a finite Decimal + instance self to the nearest integer. If self is infinite or + a NaN then a Python exception is raised. If self is finite + and lies exactly halfway between two integers then it is + rounded to the integer with even last digit. + + >>> round(Decimal('123.456')) + 123 + >>> round(Decimal('-456.789')) + -457 + >>> round(Decimal('-3.0')) + -3 + >>> round(Decimal('2.5')) + 2 + >>> round(Decimal('3.5')) + 4 + >>> round(Decimal('Inf')) + Traceback (most recent call last): + ... + OverflowError: cannot round an infinity + >>> round(Decimal('NaN')) + Traceback (most recent call last): + ... + ValueError: cannot round a NaN + + If a second argument n is supplied, self is rounded to n + decimal places using the rounding mode for the current + context. + + For an integer n, round(self, -n) is exactly equivalent to + self.quantize(Decimal('1En')). + + >>> round(Decimal('123.456'), 0) + Decimal('123') + >>> round(Decimal('123.456'), 2) + Decimal('123.46') + >>> round(Decimal('123.456'), -2) + Decimal('1E+2') + >>> round(Decimal('-Infinity'), 37) + Decimal('NaN') + >>> round(Decimal('sNaN123'), 0) + Decimal('NaN123') + + 'u'Round self to the nearest integer, or to a given precision. + + If only one argument is supplied, round a finite Decimal + instance self to the nearest integer. If self is infinite or + a NaN then a Python exception is raised. If self is finite + and lies exactly halfway between two integers then it is + rounded to the integer with even last digit. + + >>> round(Decimal('123.456')) + 123 + >>> round(Decimal('-456.789')) + -457 + >>> round(Decimal('-3.0')) + -3 + >>> round(Decimal('2.5')) + 2 + >>> round(Decimal('3.5')) + 4 + >>> round(Decimal('Inf')) + Traceback (most recent call last): + ... + OverflowError: cannot round an infinity + >>> round(Decimal('NaN')) + Traceback (most recent call last): + ... + ValueError: cannot round a NaN + + If a second argument n is supplied, self is rounded to n + decimal places using the rounding mode for the current + context. + + For an integer n, round(self, -n) is exactly equivalent to + self.quantize(Decimal('1En')). + + >>> round(Decimal('123.456'), 0) + Decimal('123') + >>> round(Decimal('123.456'), 2) + Decimal('123.46') + >>> round(Decimal('123.456'), -2) + Decimal('1E+2') + >>> round(Decimal('-Infinity'), 37) + Decimal('NaN') + >>> round(Decimal('sNaN123'), 0) + Decimal('NaN123') + + 'b'Second argument to round should be integral'u'Second argument to round should be integral'b'cannot round a NaN'u'cannot round a NaN'b'cannot round an infinity'u'cannot round an infinity'b'Return the floor of self, as an integer. + + For a finite Decimal instance self, return the greatest + integer n such that n <= self. If self is infinite or a NaN + then a Python exception is raised. 
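+The integer-division family described above, in a runnable form (standard API, illustrative values):
+
+from decimal import Decimal
+
+print(divmod(Decimal('7'), Decimal('3')))            # (Decimal('2'), Decimal('1'))
+print(Decimal('7') // Decimal('3'), Decimal('7') % Decimal('3'))   # 2 1
+print(Decimal('8').remainder_near(Decimal('3')))     # -1  (remainder nearest to zero)
+print(int(Decimal('9.99')), float(Decimal('9.99')))  # 9 9.99  (int() truncates)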
+ + 'u'Return the floor of self, as an integer. + + For a finite Decimal instance self, return the greatest + integer n such that n <= self. If self is infinite or a NaN + then a Python exception is raised. + + 'b'Return the ceiling of self, as an integer. + + For a finite Decimal instance self, return the least integer n + such that n >= self. If self is infinite or a NaN then a + Python exception is raised. + + 'u'Return the ceiling of self, as an integer. + + For a finite Decimal instance self, return the least integer n + such that n >= self. If self is infinite or a NaN then a + Python exception is raised. + + 'b'Fused multiply-add. + + Returns self*other+third with no rounding of the intermediate + product self*other. + + self and other are multiplied together, with no rounding of + the result. The third operand is then added to the result, + and a single final rounding is performed. + 'u'Fused multiply-add. + + Returns self*other+third with no rounding of the intermediate + product self*other. + + self and other are multiplied together, with no rounding of + the result. The third operand is then added to the result, + and a single final rounding is performed. + 'b'INF * 0 in fma'u'INF * 0 in fma'b'0 * INF in fma'u'0 * INF in fma'b'Three argument version of __pow__'u'Three argument version of __pow__'b'pow() 3rd argument not allowed unless all arguments are integers'u'pow() 3rd argument not allowed unless all arguments are integers'b'pow() 2nd argument cannot be negative when 3rd argument specified'u'pow() 2nd argument cannot be negative when 3rd argument specified'b'pow() 3rd argument cannot be 0'u'pow() 3rd argument cannot be 0'b'insufficient precision: pow() 3rd argument must not have more than precision digits'u'insufficient precision: pow() 3rd argument must not have more than precision digits'b'at least one of pow() 1st argument and 2nd argument must be nonzero; 0**0 is not defined'u'at least one of pow() 1st argument and 2nd argument must be nonzero; 0**0 is not defined'b'Attempt to compute self**other exactly. + + Given Decimals self and other and an integer p, attempt to + compute an exact result for the power self**other, with p + digits of precision. Return None if self**other is not + exactly representable in p digits. + + Assumes that elimination of special cases has already been + performed: self and other must both be nonspecial; self must + be positive and not numerically equal to 1; other must be + nonzero. For efficiency, other._exp should not be too large, + so that 10**abs(other._exp) is a feasible calculation.'u'Attempt to compute self**other exactly. + + Given Decimals self and other and an integer p, attempt to + compute an exact result for the power self**other, with p + digits of precision. Return None if self**other is not + exactly representable in p digits. + + Assumes that elimination of special cases has already been + performed: self and other must both be nonspecial; self must + be positive and not numerically equal to 1; other must be + nonzero. For efficiency, other._exp should not be too large, + so that 10**abs(other._exp) is a feasible calculation.'b'Return self ** other [ % modulo]. + + With two arguments, compute self**other. + + With three arguments, compute (self**other) % modulo. 
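+A sketch of fused multiply-add and of the three-argument pow() restrictions described above (all operands integral, exponent nonnegative, modulo nonzero):
+
+from decimal import Decimal
+
+# fma rounds only once, after the exact product self*other is added to third.
+print(Decimal('3').fma(Decimal('5'), Decimal('7')))   # 22
+
+# pow(a, b, m) is computed exactly, like the integer builtin.
+print(pow(Decimal(2), Decimal(10), Decimal(7)))       # 2   (1024 % 7)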
For the + three argument form, the following restrictions on the + arguments hold: + + - all three arguments must be integral + - other must be nonnegative + - either self or other (or both) must be nonzero + - modulo must be nonzero and must have at most p digits, + where p is the context precision. + + If any of these restrictions is violated the InvalidOperation + flag is raised. + + The result of pow(self, other, modulo) is identical to the + result that would be obtained by computing (self**other) % + modulo with unbounded precision, but is computed more + efficiently. It is always exact. + 'u'Return self ** other [ % modulo]. + + With two arguments, compute self**other. + + With three arguments, compute (self**other) % modulo. For the + three argument form, the following restrictions on the + arguments hold: + + - all three arguments must be integral + - other must be nonnegative + - either self or other (or both) must be nonzero + - modulo must be nonzero and must have at most p digits, + where p is the context precision. + + If any of these restrictions is violated the InvalidOperation + flag is raised. + + The result of pow(self, other, modulo) is identical to the + result that would be obtained by computing (self**other) % + modulo with unbounded precision, but is computed more + efficiently. It is always exact. + 'b'0 ** 0'u'0 ** 0'b'x ** y with x negative and y not an integer'u'x ** y with x negative and y not an integer'b'Swaps self/other and returns __pow__.'u'Swaps self/other and returns __pow__.'b'Normalize- strip trailing 0s, change anything equal to 0 to 0e0'u'Normalize- strip trailing 0s, change anything equal to 0 to 0e0'b'Quantize self so its exponent is the same as that of exp. + + Similar to self._rescale(exp._exp) but with error checking. + 'u'Quantize self so its exponent is the same as that of exp. + + Similar to self._rescale(exp._exp) but with error checking. + 'b'quantize with one INF'u'quantize with one INF'b'target exponent out of bounds in quantize'u'target exponent out of bounds in quantize'b'exponent of quantize result too large for current context'u'exponent of quantize result too large for current context'b'quantize result has too many digits for current context'u'quantize result has too many digits for current context'b'Return True if self and other have the same exponent; otherwise + return False. + + If either operand is a special value, the following rules are used: + * return True if both operands are infinities + * return True if both operands are NaNs + * otherwise, return False. + 'u'Return True if self and other have the same exponent; otherwise + return False. + + If either operand is a special value, the following rules are used: + * return True if both operands are infinities + * return True if both operands are NaNs + * otherwise, return False. + 'b'Rescale self so that the exponent is exp, either by padding with zeros + or by truncating digits, using the given rounding mode. + + Specials are returned without change. This operation is + quiet: it raises no flags, and uses no information from the + context. + + exp = exp to scale to (an integer) + rounding = rounding mode + 'u'Rescale self so that the exponent is exp, either by padding with zeros + or by truncating digits, using the given rounding mode. + + Specials are returned without change. This operation is + quiet: it raises no flags, and uses no information from the + context. 
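+quantize() and normalize(), described above, are the usual way to fix or strip the exponent of a result; a minimal sketch with an illustrative monetary value:
+
+from decimal import Decimal, ROUND_HALF_UP
+
+amount = Decimal('7.325')
+print(amount.quantize(Decimal('0.01')))                          # 7.32  (default ROUND_HALF_EVEN)
+print(amount.quantize(Decimal('0.01'), rounding=ROUND_HALF_UP))  # 7.33
+print(Decimal('120.00').normalize())                             # 1.2E+2  (trailing zeros stripped)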
+ + exp = exp to scale to (an integer) + rounding = rounding mode + 'b'Round a nonzero, nonspecial Decimal to a fixed number of + significant figures, using the given rounding mode. + + Infinities, NaNs and zeros are returned unaltered. + + This operation is quiet: it raises no flags, and uses no + information from the context. + + 'u'Round a nonzero, nonspecial Decimal to a fixed number of + significant figures, using the given rounding mode. + + Infinities, NaNs and zeros are returned unaltered. + + This operation is quiet: it raises no flags, and uses no + information from the context. + + 'b'argument should be at least 1 in _round'u'argument should be at least 1 in _round'b'Rounds to a nearby integer. + + If no rounding mode is specified, take the rounding mode from + the context. This method raises the Rounded and Inexact flags + when appropriate. + + See also: to_integral_value, which does exactly the same as + this method except that it doesn't raise Inexact or Rounded. + 'u'Rounds to a nearby integer. + + If no rounding mode is specified, take the rounding mode from + the context. This method raises the Rounded and Inexact flags + when appropriate. + + See also: to_integral_value, which does exactly the same as + this method except that it doesn't raise Inexact or Rounded. + 'b'Rounds to the nearest integer, without raising inexact, rounded.'u'Rounds to the nearest integer, without raising inexact, rounded.'b'Return the square root of self.'u'Return the square root of self.'b'sqrt(-x), x > 0'u'sqrt(-x), x > 0'b'Returns the larger value. + + Like max(self, other) except if one is not a number, returns + NaN (and signals if one is sNaN). Also rounds. + 'u'Returns the larger value. + + Like max(self, other) except if one is not a number, returns + NaN (and signals if one is sNaN). Also rounds. + 'b'Returns the smaller value. + + Like min(self, other) except if one is not a number, returns + NaN (and signals if one is sNaN). Also rounds. + 'u'Returns the smaller value. + + Like min(self, other) except if one is not a number, returns + NaN (and signals if one is sNaN). Also rounds. + 'b'Returns whether self is an integer'u'Returns whether self is an integer'b'Returns True if self is even. Assumes self is an integer.'u'Returns True if self is even. Assumes self is an integer.'b'Return the adjusted exponent of self'u'Return the adjusted exponent of self'b'Returns the same Decimal object. + + As we do not have different encodings for the same number, the + received object already is in its canonical form. + 'u'Returns the same Decimal object. + + As we do not have different encodings for the same number, the + received object already is in its canonical form. + 'b'Compares self to the other operand numerically. + + It's pretty much like compare(), but all NaNs signal, with signaling + NaNs taking precedence over quiet NaNs. + 'u'Compares self to the other operand numerically. + + It's pretty much like compare(), but all NaNs signal, with signaling + NaNs taking precedence over quiet NaNs. + 'b'Compares self to other using the abstract representations. + + This is not like the standard compare, which use their numerical + value. Note that a total ordering is defined for all possible abstract + representations. + 'u'Compares self to other using the abstract representations. + + This is not like the standard compare, which use their numerical + value. Note that a total ordering is defined for all possible abstract + representations. 
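+A short sketch of the round-to-integral, square-root and max/min behaviour described above (default 28-digit context):
+
+from decimal import Decimal, ROUND_CEILING
+
+print(Decimal('7.4').to_integral_value())                         # 7
+print(Decimal('7.4').to_integral_value(rounding=ROUND_CEILING))   # 8
+print(Decimal('2').sqrt())                    # 1.414213562373095048801688724 at prec=28
+print(Decimal('NaN').max(Decimal('3')))       # 3   (a quiet NaN loses to a number)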
+ 'b'Compares self to other using abstract repr., ignoring sign. + + Like compare_total, but with operand's sign ignored and assumed to be 0. + 'u'Compares self to other using abstract repr., ignoring sign. + + Like compare_total, but with operand's sign ignored and assumed to be 0. + 'b'Returns a copy with the sign set to 0. 'u'Returns a copy with the sign set to 0. 'b'Returns a copy with the sign inverted.'u'Returns a copy with the sign inverted.'b'Returns self with the sign of other.'u'Returns self with the sign of other.'b'Returns e ** self.'u'Returns e ** self.'b'Return True if self is canonical; otherwise return False. + + Currently, the encoding of a Decimal instance is always + canonical, so this method returns True for any Decimal. + 'u'Return True if self is canonical; otherwise return False. + + Currently, the encoding of a Decimal instance is always + canonical, so this method returns True for any Decimal. + 'b'Return True if self is finite; otherwise return False. + + A Decimal instance is considered finite if it is neither + infinite nor a NaN. + 'u'Return True if self is finite; otherwise return False. + + A Decimal instance is considered finite if it is neither + infinite nor a NaN. + 'b'Return True if self is infinite; otherwise return False.'u'Return True if self is infinite; otherwise return False.'b'Return True if self is a qNaN or sNaN; otherwise return False.'u'Return True if self is a qNaN or sNaN; otherwise return False.'b'Return True if self is a normal number; otherwise return False.'u'Return True if self is a normal number; otherwise return False.'b'Return True if self is a quiet NaN; otherwise return False.'u'Return True if self is a quiet NaN; otherwise return False.'b'Return True if self is negative; otherwise return False.'u'Return True if self is negative; otherwise return False.'b'Return True if self is a signaling NaN; otherwise return False.'u'Return True if self is a signaling NaN; otherwise return False.'b'Return True if self is subnormal; otherwise return False.'u'Return True if self is subnormal; otherwise return False.'b'Return True if self is a zero; otherwise return False.'u'Return True if self is a zero; otherwise return False.'b'Compute a lower bound for the adjusted exponent of self.ln(). + In other words, compute r such that self.ln() >= 10**r. Assumes + that self is finite and positive and that self != 1. + 'u'Compute a lower bound for the adjusted exponent of self.ln(). + In other words, compute r such that self.ln() >= 10**r. Assumes + that self is finite and positive and that self != 1. + 'b'Returns the natural (base e) logarithm of self.'u'Returns the natural (base e) logarithm of self.'b'ln of a negative value'u'ln of a negative value'b'Compute a lower bound for the adjusted exponent of self.log10(). + In other words, find r such that self.log10() >= 10**r. + Assumes that self is finite and positive and that self != 1. + 'u'Compute a lower bound for the adjusted exponent of self.log10(). + In other words, find r such that self.log10() >= 10**r. + Assumes that self is finite and positive and that self != 1. + 'b'231'u'231'b'Returns the base 10 logarithm of self.'u'Returns the base 10 logarithm of self.'b'log10 of a negative value'u'log10 of a negative value'b' Returns the exponent of the magnitude of self's MSD. 
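+compare_total() orders abstract representations rather than numerical values, so numerically equal operands with different exponents compare unequal; copy_sign() and the is_* predicates are likewise quiet. A minimal sketch:
+
+from decimal import Decimal
+
+print(Decimal('12.3') == Decimal('12.30'))                  # True  (numeric comparison)
+print(Decimal('12.3').compare_total(Decimal('12.30')))      # 1     (total ordering)
+print(Decimal('1.50').copy_sign(Decimal('-7.33')))          # -1.50
+print(Decimal('NaN').is_nan(), Decimal('-0').is_signed())   # True True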
+ + The result is the integer which is the exponent of the magnitude + of the most significant digit of self (as though it were truncated + to a single digit while maintaining the value of that digit and + without limiting the resulting exponent). + 'u' Returns the exponent of the magnitude of self's MSD. + + The result is the integer which is the exponent of the magnitude + of the most significant digit of self (as though it were truncated + to a single digit while maintaining the value of that digit and + without limiting the resulting exponent). + 'b'logb(0)'u'logb(0)'b'Return True if self is a logical operand. + + For being logical, it must be a finite number with a sign of 0, + an exponent of 0, and a coefficient whose digits must all be + either 0 or 1. + 'u'Return True if self is a logical operand. + + For being logical, it must be a finite number with a sign of 0, + an exponent of 0, and a coefficient whose digits must all be + either 0 or 1. + 'b'01'u'01'b'Applies an 'and' operation between self and other's digits.'u'Applies an 'and' operation between self and other's digits.'b'Invert all its digits.'u'Invert all its digits.'b'Applies an 'or' operation between self and other's digits.'u'Applies an 'or' operation between self and other's digits.'b'Applies an 'xor' operation between self and other's digits.'u'Applies an 'xor' operation between self and other's digits.'b'Compares the values numerically with their sign ignored.'u'Compares the values numerically with their sign ignored.'b'Returns the largest representable number smaller than itself.'u'Returns the largest representable number smaller than itself.'b'Returns the smallest representable number larger than itself.'u'Returns the smallest representable number larger than itself.'b'Returns the number closest to self, in the direction towards other. + + The result is the closest representable number to self + (excluding self) that is in the direction towards other, + unless both have the same value. If the two operands are + numerically equal, then the result is a copy of self with the + sign set to be the same as the sign of other. + 'u'Returns the number closest to self, in the direction towards other. + + The result is the closest representable number to self + (excluding self) that is in the direction towards other, + unless both have the same value. If the two operands are + numerically equal, then the result is a copy of self with the + sign set to be the same as the sign of other. + 'b'Infinite result from next_toward'u'Infinite result from next_toward'b'Returns an indication of the class of self. + + The class is one of the following strings: + sNaN + NaN + -Infinity + -Normal + -Subnormal + -Zero + +Zero + +Subnormal + +Normal + +Infinity + 'u'Returns an indication of the class of self. 
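+next_plus/next_minus/next_toward and the digit-wise logical operations described above depend on the context precision; a sketch using a temporary 9-digit context (illustrative values):
+
+from decimal import Decimal, localcontext
+
+with localcontext() as ctx:
+    ctx.prec = 9
+    print(Decimal('1').next_plus())                    # 1.00000001
+    print(Decimal('1').next_toward(Decimal('-10')))    # 0.999999999
+print(Decimal('1100').logical_and(Decimal('1010')))    # 1000
+print(Decimal('-0.00').number_class())                 # -Zero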
+ + The class is one of the following strings: + sNaN + NaN + -Infinity + -Normal + -Subnormal + -Zero + +Zero + +Subnormal + +Normal + +Infinity + 'b'+Infinity'u'+Infinity'b'-Infinity'u'-Infinity'b'-Zero'u'-Zero'b'+Zero'u'+Zero'b'-Subnormal'u'-Subnormal'b'+Subnormal'u'+Subnormal'b'-Normal'u'-Normal'b'+Normal'u'+Normal'b'Just returns 10, as this is Decimal, :)'u'Just returns 10, as this is Decimal, :)'b'Returns a rotated copy of self, value-of-other times.'u'Returns a rotated copy of self, value-of-other times.'b'Returns self operand after adding the second value to its exp.'u'Returns self operand after adding the second value to its exp.'b'Returns a shifted copy of self, value-of-other times.'u'Returns a shifted copy of self, value-of-other times.'b'Format a Decimal instance according to the given specifier. + + The specifier should be a standard format specifier, with the + form described in PEP 3101. Formatting types 'e', 'E', 'f', + 'F', 'g', 'G', 'n' and '%' are supported. If the formatting + type is omitted it defaults to 'g' or 'G', depending on the + value of context.capitals. + 'u'Format a Decimal instance according to the given specifier. + + The specifier should be a standard format specifier, with the + form described in PEP 3101. Formatting types 'e', 'E', 'f', + 'F', 'g', 'G', 'n' and '%' are supported. If the formatting + type is omitted it defaults to 'g' or 'G', depending on the + value of context.capitals. + 'b'G'u'G'b'precision'u'precision'b'eE'u'eE'b'fF%'u'fF%'b'gG'u'gG'b'Create a decimal instance directly, without any validation, + normalization (e.g. removal of leading zeros) or argument + conversion. + + This function is for *internal use only*. + 'u'Create a decimal instance directly, without any validation, + normalization (e.g. removal of leading zeros) or argument + conversion. + + This function is for *internal use only*. + 'b'Context manager class to support localcontext(). + + Sets a copy of the supplied context in __enter__() and restores + the previous decimal context in __exit__() + 'u'Context manager class to support localcontext(). + + Sets a copy of the supplied context in __enter__() and restores + the previous decimal context in __exit__() + 'b'Contains the context for a Decimal instance. + + Contains: + prec - precision (for use in rounding, division, square roots..) + rounding - rounding type (how you round) + traps - If traps[exception] = 1, then the exception is + raised when it is caused. Otherwise, a value is + substituted in. + flags - When an exception is caused, flags[exception] is set. + (Whether or not the trap_enabler is set) + Should be reset by user of Decimal instance. + Emin - Minimum exponent + Emax - Maximum exponent + capitals - If 1, 1*10^1 is printed as 1E+1. + If 0, printed as 1e1 + clamp - If 1, change exponents if too high (Default 0) + 'u'Contains the context for a Decimal instance. + + Contains: + prec - precision (for use in rounding, division, square roots..) + rounding - rounding type (how you round) + traps - If traps[exception] = 1, then the exception is + raised when it is caused. Otherwise, a value is + substituted in. + flags - When an exception is caused, flags[exception] is set. + (Whether or not the trap_enabler is set) + Should be reset by user of Decimal instance. + Emin - Minimum exponent + Emax - Maximum exponent + capitals - If 1, 1*10^1 is printed as 1E+1. 
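+__format__, described above, accepts the standard PEP 3101 specifiers; a minimal sketch (outputs assume the default 28-digit context):
+
+from decimal import Decimal
+
+d = Decimal('1234567.891')
+print(format(d, '.2f'))      # 1234567.89
+print(format(d, ',.2f'))     # 1,234,567.89
+print(format(d, '.3e'))      # 1.235e+6   (Decimal does not zero-pad the exponent)
+print(format(d, '>14'))      # the value right-aligned in a 14-character field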
+ If 0, printed as 1e1 + clamp - If 1, change exponents if too high (Default 0) + 'b'%s must be an integer'u'%s must be an integer'b'-inf'u'-inf'b'%s must be in [%s, %d]. got: %s'u'%s must be in [%s, %d]. got: %s'b'inf'u'inf'b'%s must be in [%d, %s]. got: %s'u'%s must be in [%d, %s]. got: %s'b'%s must be in [%d, %d]. got %s'u'%s must be in [%d, %d]. got %s'b'%s must be a signal dict'u'%s must be a signal dict'b'%s is not a valid signal dict'u'%s is not a valid signal dict'b'prec'u'prec'b'Emin'u'Emin'b'Emax'u'Emax'b'capitals'u'capitals'b'clamp'u'clamp'b'rounding'u'rounding'b'%s: invalid rounding mode'u'%s: invalid rounding mode'b'flags'u'flags'b'traps'u'traps'b'_ignored_flags'u'_ignored_flags'b''decimal.Context' object has no attribute '%s''u''decimal.Context' object has no attribute '%s''b'%s cannot be deleted'u'%s cannot be deleted'b'Show the current context.'u'Show the current context.'b'Context(prec=%(prec)d, rounding=%(rounding)s, Emin=%(Emin)d, Emax=%(Emax)d, capitals=%(capitals)d, clamp=%(clamp)d'u'Context(prec=%(prec)d, rounding=%(rounding)s, Emin=%(Emin)d, Emax=%(Emax)d, capitals=%(capitals)d, clamp=%(clamp)d'b'flags=['u'flags=['b'traps=['u'traps=['b'Reset all flags to zero'u'Reset all flags to zero'b'Reset all traps to zero'u'Reset all traps to zero'b'Returns a shallow copy from self.'u'Returns a shallow copy from self.'b'Returns a deep copy from self.'u'Returns a deep copy from self.'b'Handles an error + + If the flag is in _ignored_flags, returns the default response. + Otherwise, it sets the flag, then, if the corresponding + trap_enabler is set, it reraises the exception. Otherwise, it returns + the default value after setting the flag. + 'u'Handles an error + + If the flag is in _ignored_flags, returns the default response. + Otherwise, it sets the flag, then, if the corresponding + trap_enabler is set, it reraises the exception. Otherwise, it returns + the default value after setting the flag. + 'b'Ignore all flags, if they are raised'u'Ignore all flags, if they are raised'b'Ignore the flags, if they are raised'u'Ignore the flags, if they are raised'b'Stop ignoring the flags, if they are raised'u'Stop ignoring the flags, if they are raised'b'Returns Etiny (= Emin - prec + 1)'u'Returns Etiny (= Emin - prec + 1)'b'Returns maximum exponent (= Emax - prec + 1)'u'Returns maximum exponent (= Emax - prec + 1)'b'Sets the rounding type. + + Sets the rounding type, and returns the current (previous) + rounding type. Often used like: + + context = context.copy() + # so you don't change the calling context + # if an error occurs in the middle. + rounding = context._set_rounding(ROUND_UP) + val = self.__sub__(other, context=context) + context._set_rounding(rounding) + + This will make it round up for that operation. + 'u'Sets the rounding type. + + Sets the rounding type, and returns the current (previous) + rounding type. Often used like: + + context = context.copy() + # so you don't change the calling context + # if an error occurs in the middle. + rounding = context._set_rounding(ROUND_UP) + val = self.__sub__(other, context=context) + context._set_rounding(rounding) + + This will make it round up for that operation. + 'b'Creates a new Decimal instance but using self as context. + + This method implements the to-number operation of the + IBM Decimal specification.'u'Creates a new Decimal instance but using self as context. 
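+A sketch of working with an explicit Context, its flags and its traps, as described above (illustrative settings):
+
+from decimal import Context, Decimal, DivisionByZero, Inexact, ROUND_DOWN
+
+ctx = Context(prec=5, rounding=ROUND_DOWN, traps=[DivisionByZero])
+print(ctx.divide(Decimal(1), Decimal(3)))   # 0.33333  (5 digits, truncated)
+print(bool(ctx.flags[Inexact]))             # True -- flag set, but Inexact is not trapped here
+try:
+    ctx.divide(Decimal(1), Decimal(0))
+except DivisionByZero:
+    print('DivisionByZero trapped')
+ctx.clear_flags()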
+ + This method implements the to-number operation of the + IBM Decimal specification.'b'trailing or leading whitespace and underscores are not permitted.'u'trailing or leading whitespace and underscores are not permitted.'b'diagnostic info too long in NaN'u'diagnostic info too long in NaN'b'Creates a new Decimal instance from a float but rounding using self + as the context. + + >>> context = Context(prec=5, rounding=ROUND_DOWN) + >>> context.create_decimal_from_float(3.1415926535897932) + Decimal('3.1415') + >>> context = Context(prec=5, traps=[Inexact]) + >>> context.create_decimal_from_float(3.1415926535897932) + Traceback (most recent call last): + ... + decimal.Inexact: None + + 'u'Creates a new Decimal instance from a float but rounding using self + as the context. + + >>> context = Context(prec=5, rounding=ROUND_DOWN) + >>> context.create_decimal_from_float(3.1415926535897932) + Decimal('3.1415') + >>> context = Context(prec=5, traps=[Inexact]) + >>> context.create_decimal_from_float(3.1415926535897932) + Traceback (most recent call last): + ... + decimal.Inexact: None + + 'b'Returns the absolute value of the operand. + + If the operand is negative, the result is the same as using the minus + operation on the operand. Otherwise, the result is the same as using + the plus operation on the operand. + + >>> ExtendedContext.abs(Decimal('2.1')) + Decimal('2.1') + >>> ExtendedContext.abs(Decimal('-100')) + Decimal('100') + >>> ExtendedContext.abs(Decimal('101.5')) + Decimal('101.5') + >>> ExtendedContext.abs(Decimal('-101.5')) + Decimal('101.5') + >>> ExtendedContext.abs(-1) + Decimal('1') + 'u'Returns the absolute value of the operand. + + If the operand is negative, the result is the same as using the minus + operation on the operand. Otherwise, the result is the same as using + the plus operation on the operand. + + >>> ExtendedContext.abs(Decimal('2.1')) + Decimal('2.1') + >>> ExtendedContext.abs(Decimal('-100')) + Decimal('100') + >>> ExtendedContext.abs(Decimal('101.5')) + Decimal('101.5') + >>> ExtendedContext.abs(Decimal('-101.5')) + Decimal('101.5') + >>> ExtendedContext.abs(-1) + Decimal('1') + 'b'Return the sum of the two operands. + + >>> ExtendedContext.add(Decimal('12'), Decimal('7.00')) + Decimal('19.00') + >>> ExtendedContext.add(Decimal('1E+2'), Decimal('1.01E+4')) + Decimal('1.02E+4') + >>> ExtendedContext.add(1, Decimal(2)) + Decimal('3') + >>> ExtendedContext.add(Decimal(8), 5) + Decimal('13') + >>> ExtendedContext.add(5, 5) + Decimal('10') + 'u'Return the sum of the two operands. + + >>> ExtendedContext.add(Decimal('12'), Decimal('7.00')) + Decimal('19.00') + >>> ExtendedContext.add(Decimal('1E+2'), Decimal('1.01E+4')) + Decimal('1.02E+4') + >>> ExtendedContext.add(1, Decimal(2)) + Decimal('3') + >>> ExtendedContext.add(Decimal(8), 5) + Decimal('13') + >>> ExtendedContext.add(5, 5) + Decimal('10') + 'b'Unable to convert %s to Decimal'u'Unable to convert %s to Decimal'b'Returns the same Decimal object. + + As we do not have different encodings for the same number, the + received object already is in its canonical form. + + >>> ExtendedContext.canonical(Decimal('2.50')) + Decimal('2.50') + 'u'Returns the same Decimal object. + + As we do not have different encodings for the same number, the + received object already is in its canonical form. + + >>> ExtendedContext.canonical(Decimal('2.50')) + Decimal('2.50') + 'b'canonical requires a Decimal as an argument.'u'canonical requires a Decimal as an argument.'b'Compares values numerically. 
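+create_decimal(), unlike the Decimal constructor, applies the context (rounding to prec and setting flags); a minimal sketch:
+
+from decimal import Context, Decimal
+
+ctx = Context(prec=4)
+print(ctx.create_decimal('3.14159'))   # 3.142    (rounded to the context's 4 digits)
+print(Decimal('3.14159'))              # 3.14159  (the constructor is exact)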
+ + If the signs of the operands differ, a value representing each operand + ('-1' if the operand is less than zero, '0' if the operand is zero or + negative zero, or '1' if the operand is greater than zero) is used in + place of that operand for the comparison instead of the actual + operand. + + The comparison is then effected by subtracting the second operand from + the first and then returning a value according to the result of the + subtraction: '-1' if the result is less than zero, '0' if the result is + zero or negative zero, or '1' if the result is greater than zero. + + >>> ExtendedContext.compare(Decimal('2.1'), Decimal('3')) + Decimal('-1') + >>> ExtendedContext.compare(Decimal('2.1'), Decimal('2.1')) + Decimal('0') + >>> ExtendedContext.compare(Decimal('2.1'), Decimal('2.10')) + Decimal('0') + >>> ExtendedContext.compare(Decimal('3'), Decimal('2.1')) + Decimal('1') + >>> ExtendedContext.compare(Decimal('2.1'), Decimal('-3')) + Decimal('1') + >>> ExtendedContext.compare(Decimal('-3'), Decimal('2.1')) + Decimal('-1') + >>> ExtendedContext.compare(1, 2) + Decimal('-1') + >>> ExtendedContext.compare(Decimal(1), 2) + Decimal('-1') + >>> ExtendedContext.compare(1, Decimal(2)) + Decimal('-1') + 'u'Compares values numerically. + + If the signs of the operands differ, a value representing each operand + ('-1' if the operand is less than zero, '0' if the operand is zero or + negative zero, or '1' if the operand is greater than zero) is used in + place of that operand for the comparison instead of the actual + operand. + + The comparison is then effected by subtracting the second operand from + the first and then returning a value according to the result of the + subtraction: '-1' if the result is less than zero, '0' if the result is + zero or negative zero, or '1' if the result is greater than zero. + + >>> ExtendedContext.compare(Decimal('2.1'), Decimal('3')) + Decimal('-1') + >>> ExtendedContext.compare(Decimal('2.1'), Decimal('2.1')) + Decimal('0') + >>> ExtendedContext.compare(Decimal('2.1'), Decimal('2.10')) + Decimal('0') + >>> ExtendedContext.compare(Decimal('3'), Decimal('2.1')) + Decimal('1') + >>> ExtendedContext.compare(Decimal('2.1'), Decimal('-3')) + Decimal('1') + >>> ExtendedContext.compare(Decimal('-3'), Decimal('2.1')) + Decimal('-1') + >>> ExtendedContext.compare(1, 2) + Decimal('-1') + >>> ExtendedContext.compare(Decimal(1), 2) + Decimal('-1') + >>> ExtendedContext.compare(1, Decimal(2)) + Decimal('-1') + 'b'Compares the values of the two operands numerically. + + It's pretty much like compare(), but all NaNs signal, with signaling + NaNs taking precedence over quiet NaNs. + + >>> c = ExtendedContext + >>> c.compare_signal(Decimal('2.1'), Decimal('3')) + Decimal('-1') + >>> c.compare_signal(Decimal('2.1'), Decimal('2.1')) + Decimal('0') + >>> c.flags[InvalidOperation] = 0 + >>> print(c.flags[InvalidOperation]) + 0 + >>> c.compare_signal(Decimal('NaN'), Decimal('2.1')) + Decimal('NaN') + >>> print(c.flags[InvalidOperation]) + 1 + >>> c.flags[InvalidOperation] = 0 + >>> print(c.flags[InvalidOperation]) + 0 + >>> c.compare_signal(Decimal('sNaN'), Decimal('2.1')) + Decimal('NaN') + >>> print(c.flags[InvalidOperation]) + 1 + >>> c.compare_signal(-1, 2) + Decimal('-1') + >>> c.compare_signal(Decimal(-1), 2) + Decimal('-1') + >>> c.compare_signal(-1, Decimal(2)) + Decimal('-1') + 'u'Compares the values of the two operands numerically. + + It's pretty much like compare(), but all NaNs signal, with signaling + NaNs taking precedence over quiet NaNs. 
+ + >>> c = ExtendedContext + >>> c.compare_signal(Decimal('2.1'), Decimal('3')) + Decimal('-1') + >>> c.compare_signal(Decimal('2.1'), Decimal('2.1')) + Decimal('0') + >>> c.flags[InvalidOperation] = 0 + >>> print(c.flags[InvalidOperation]) + 0 + >>> c.compare_signal(Decimal('NaN'), Decimal('2.1')) + Decimal('NaN') + >>> print(c.flags[InvalidOperation]) + 1 + >>> c.flags[InvalidOperation] = 0 + >>> print(c.flags[InvalidOperation]) + 0 + >>> c.compare_signal(Decimal('sNaN'), Decimal('2.1')) + Decimal('NaN') + >>> print(c.flags[InvalidOperation]) + 1 + >>> c.compare_signal(-1, 2) + Decimal('-1') + >>> c.compare_signal(Decimal(-1), 2) + Decimal('-1') + >>> c.compare_signal(-1, Decimal(2)) + Decimal('-1') + 'b'Compares two operands using their abstract representation. + + This is not like the standard compare, which use their numerical + value. Note that a total ordering is defined for all possible abstract + representations. + + >>> ExtendedContext.compare_total(Decimal('12.73'), Decimal('127.9')) + Decimal('-1') + >>> ExtendedContext.compare_total(Decimal('-127'), Decimal('12')) + Decimal('-1') + >>> ExtendedContext.compare_total(Decimal('12.30'), Decimal('12.3')) + Decimal('-1') + >>> ExtendedContext.compare_total(Decimal('12.30'), Decimal('12.30')) + Decimal('0') + >>> ExtendedContext.compare_total(Decimal('12.3'), Decimal('12.300')) + Decimal('1') + >>> ExtendedContext.compare_total(Decimal('12.3'), Decimal('NaN')) + Decimal('-1') + >>> ExtendedContext.compare_total(1, 2) + Decimal('-1') + >>> ExtendedContext.compare_total(Decimal(1), 2) + Decimal('-1') + >>> ExtendedContext.compare_total(1, Decimal(2)) + Decimal('-1') + 'u'Compares two operands using their abstract representation. + + This is not like the standard compare, which use their numerical + value. Note that a total ordering is defined for all possible abstract + representations. + + >>> ExtendedContext.compare_total(Decimal('12.73'), Decimal('127.9')) + Decimal('-1') + >>> ExtendedContext.compare_total(Decimal('-127'), Decimal('12')) + Decimal('-1') + >>> ExtendedContext.compare_total(Decimal('12.30'), Decimal('12.3')) + Decimal('-1') + >>> ExtendedContext.compare_total(Decimal('12.30'), Decimal('12.30')) + Decimal('0') + >>> ExtendedContext.compare_total(Decimal('12.3'), Decimal('12.300')) + Decimal('1') + >>> ExtendedContext.compare_total(Decimal('12.3'), Decimal('NaN')) + Decimal('-1') + >>> ExtendedContext.compare_total(1, 2) + Decimal('-1') + >>> ExtendedContext.compare_total(Decimal(1), 2) + Decimal('-1') + >>> ExtendedContext.compare_total(1, Decimal(2)) + Decimal('-1') + 'b'Compares two operands using their abstract representation ignoring sign. + + Like compare_total, but with operand's sign ignored and assumed to be 0. + 'u'Compares two operands using their abstract representation ignoring sign. + + Like compare_total, but with operand's sign ignored and assumed to be 0. + 'b'Returns a copy of the operand with the sign set to 0. + + >>> ExtendedContext.copy_abs(Decimal('2.1')) + Decimal('2.1') + >>> ExtendedContext.copy_abs(Decimal('-100')) + Decimal('100') + >>> ExtendedContext.copy_abs(-1) + Decimal('1') + 'u'Returns a copy of the operand with the sign set to 0. + + >>> ExtendedContext.copy_abs(Decimal('2.1')) + Decimal('2.1') + >>> ExtendedContext.copy_abs(Decimal('-100')) + Decimal('100') + >>> ExtendedContext.copy_abs(-1) + Decimal('1') + 'b'Returns a copy of the decimal object. 
+ + >>> ExtendedContext.copy_decimal(Decimal('2.1')) + Decimal('2.1') + >>> ExtendedContext.copy_decimal(Decimal('-1.00')) + Decimal('-1.00') + >>> ExtendedContext.copy_decimal(1) + Decimal('1') + 'u'Returns a copy of the decimal object. + + >>> ExtendedContext.copy_decimal(Decimal('2.1')) + Decimal('2.1') + >>> ExtendedContext.copy_decimal(Decimal('-1.00')) + Decimal('-1.00') + >>> ExtendedContext.copy_decimal(1) + Decimal('1') + 'b'Returns a copy of the operand with the sign inverted. + + >>> ExtendedContext.copy_negate(Decimal('101.5')) + Decimal('-101.5') + >>> ExtendedContext.copy_negate(Decimal('-101.5')) + Decimal('101.5') + >>> ExtendedContext.copy_negate(1) + Decimal('-1') + 'u'Returns a copy of the operand with the sign inverted. + + >>> ExtendedContext.copy_negate(Decimal('101.5')) + Decimal('-101.5') + >>> ExtendedContext.copy_negate(Decimal('-101.5')) + Decimal('101.5') + >>> ExtendedContext.copy_negate(1) + Decimal('-1') + 'b'Copies the second operand's sign to the first one. + + In detail, it returns a copy of the first operand with the sign + equal to the sign of the second operand. + + >>> ExtendedContext.copy_sign(Decimal( '1.50'), Decimal('7.33')) + Decimal('1.50') + >>> ExtendedContext.copy_sign(Decimal('-1.50'), Decimal('7.33')) + Decimal('1.50') + >>> ExtendedContext.copy_sign(Decimal( '1.50'), Decimal('-7.33')) + Decimal('-1.50') + >>> ExtendedContext.copy_sign(Decimal('-1.50'), Decimal('-7.33')) + Decimal('-1.50') + >>> ExtendedContext.copy_sign(1, -2) + Decimal('-1') + >>> ExtendedContext.copy_sign(Decimal(1), -2) + Decimal('-1') + >>> ExtendedContext.copy_sign(1, Decimal(-2)) + Decimal('-1') + 'u'Copies the second operand's sign to the first one. + + In detail, it returns a copy of the first operand with the sign + equal to the sign of the second operand. + + >>> ExtendedContext.copy_sign(Decimal( '1.50'), Decimal('7.33')) + Decimal('1.50') + >>> ExtendedContext.copy_sign(Decimal('-1.50'), Decimal('7.33')) + Decimal('1.50') + >>> ExtendedContext.copy_sign(Decimal( '1.50'), Decimal('-7.33')) + Decimal('-1.50') + >>> ExtendedContext.copy_sign(Decimal('-1.50'), Decimal('-7.33')) + Decimal('-1.50') + >>> ExtendedContext.copy_sign(1, -2) + Decimal('-1') + >>> ExtendedContext.copy_sign(Decimal(1), -2) + Decimal('-1') + >>> ExtendedContext.copy_sign(1, Decimal(-2)) + Decimal('-1') + 'b'Decimal division in a specified context. + + >>> ExtendedContext.divide(Decimal('1'), Decimal('3')) + Decimal('0.333333333') + >>> ExtendedContext.divide(Decimal('2'), Decimal('3')) + Decimal('0.666666667') + >>> ExtendedContext.divide(Decimal('5'), Decimal('2')) + Decimal('2.5') + >>> ExtendedContext.divide(Decimal('1'), Decimal('10')) + Decimal('0.1') + >>> ExtendedContext.divide(Decimal('12'), Decimal('12')) + Decimal('1') + >>> ExtendedContext.divide(Decimal('8.00'), Decimal('2')) + Decimal('4.00') + >>> ExtendedContext.divide(Decimal('2.400'), Decimal('2.0')) + Decimal('1.20') + >>> ExtendedContext.divide(Decimal('1000'), Decimal('100')) + Decimal('10') + >>> ExtendedContext.divide(Decimal('1000'), Decimal('1')) + Decimal('1000') + >>> ExtendedContext.divide(Decimal('2.40E+6'), Decimal('2')) + Decimal('1.20E+6') + >>> ExtendedContext.divide(5, 5) + Decimal('1') + >>> ExtendedContext.divide(Decimal(5), 5) + Decimal('1') + >>> ExtendedContext.divide(5, Decimal(5)) + Decimal('1') + 'u'Decimal division in a specified context. 
+ + >>> ExtendedContext.divide(Decimal('1'), Decimal('3')) + Decimal('0.333333333') + >>> ExtendedContext.divide(Decimal('2'), Decimal('3')) + Decimal('0.666666667') + >>> ExtendedContext.divide(Decimal('5'), Decimal('2')) + Decimal('2.5') + >>> ExtendedContext.divide(Decimal('1'), Decimal('10')) + Decimal('0.1') + >>> ExtendedContext.divide(Decimal('12'), Decimal('12')) + Decimal('1') + >>> ExtendedContext.divide(Decimal('8.00'), Decimal('2')) + Decimal('4.00') + >>> ExtendedContext.divide(Decimal('2.400'), Decimal('2.0')) + Decimal('1.20') + >>> ExtendedContext.divide(Decimal('1000'), Decimal('100')) + Decimal('10') + >>> ExtendedContext.divide(Decimal('1000'), Decimal('1')) + Decimal('1000') + >>> ExtendedContext.divide(Decimal('2.40E+6'), Decimal('2')) + Decimal('1.20E+6') + >>> ExtendedContext.divide(5, 5) + Decimal('1') + >>> ExtendedContext.divide(Decimal(5), 5) + Decimal('1') + >>> ExtendedContext.divide(5, Decimal(5)) + Decimal('1') + 'b'Divides two numbers and returns the integer part of the result. + + >>> ExtendedContext.divide_int(Decimal('2'), Decimal('3')) + Decimal('0') + >>> ExtendedContext.divide_int(Decimal('10'), Decimal('3')) + Decimal('3') + >>> ExtendedContext.divide_int(Decimal('1'), Decimal('0.3')) + Decimal('3') + >>> ExtendedContext.divide_int(10, 3) + Decimal('3') + >>> ExtendedContext.divide_int(Decimal(10), 3) + Decimal('3') + >>> ExtendedContext.divide_int(10, Decimal(3)) + Decimal('3') + 'u'Divides two numbers and returns the integer part of the result. + + >>> ExtendedContext.divide_int(Decimal('2'), Decimal('3')) + Decimal('0') + >>> ExtendedContext.divide_int(Decimal('10'), Decimal('3')) + Decimal('3') + >>> ExtendedContext.divide_int(Decimal('1'), Decimal('0.3')) + Decimal('3') + >>> ExtendedContext.divide_int(10, 3) + Decimal('3') + >>> ExtendedContext.divide_int(Decimal(10), 3) + Decimal('3') + >>> ExtendedContext.divide_int(10, Decimal(3)) + Decimal('3') + 'b'Return (a // b, a % b). + + >>> ExtendedContext.divmod(Decimal(8), Decimal(3)) + (Decimal('2'), Decimal('2')) + >>> ExtendedContext.divmod(Decimal(8), Decimal(4)) + (Decimal('2'), Decimal('0')) + >>> ExtendedContext.divmod(8, 4) + (Decimal('2'), Decimal('0')) + >>> ExtendedContext.divmod(Decimal(8), 4) + (Decimal('2'), Decimal('0')) + >>> ExtendedContext.divmod(8, Decimal(4)) + (Decimal('2'), Decimal('0')) + 'u'Return (a // b, a % b). + + >>> ExtendedContext.divmod(Decimal(8), Decimal(3)) + (Decimal('2'), Decimal('2')) + >>> ExtendedContext.divmod(Decimal(8), Decimal(4)) + (Decimal('2'), Decimal('0')) + >>> ExtendedContext.divmod(8, 4) + (Decimal('2'), Decimal('0')) + >>> ExtendedContext.divmod(Decimal(8), 4) + (Decimal('2'), Decimal('0')) + >>> ExtendedContext.divmod(8, Decimal(4)) + (Decimal('2'), Decimal('0')) + 'b'Returns e ** a. + + >>> c = ExtendedContext.copy() + >>> c.Emin = -999 + >>> c.Emax = 999 + >>> c.exp(Decimal('-Infinity')) + Decimal('0') + >>> c.exp(Decimal('-1')) + Decimal('0.367879441') + >>> c.exp(Decimal('0')) + Decimal('1') + >>> c.exp(Decimal('1')) + Decimal('2.71828183') + >>> c.exp(Decimal('0.693147181')) + Decimal('2.00000000') + >>> c.exp(Decimal('+Infinity')) + Decimal('Infinity') + >>> c.exp(10) + Decimal('22026.4658') + 'u'Returns e ** a. 
+ + >>> c = ExtendedContext.copy() + >>> c.Emin = -999 + >>> c.Emax = 999 + >>> c.exp(Decimal('-Infinity')) + Decimal('0') + >>> c.exp(Decimal('-1')) + Decimal('0.367879441') + >>> c.exp(Decimal('0')) + Decimal('1') + >>> c.exp(Decimal('1')) + Decimal('2.71828183') + >>> c.exp(Decimal('0.693147181')) + Decimal('2.00000000') + >>> c.exp(Decimal('+Infinity')) + Decimal('Infinity') + >>> c.exp(10) + Decimal('22026.4658') + 'b'Returns a multiplied by b, plus c. + + The first two operands are multiplied together, using multiply, + the third operand is then added to the result of that + multiplication, using add, all with only one final rounding. + + >>> ExtendedContext.fma(Decimal('3'), Decimal('5'), Decimal('7')) + Decimal('22') + >>> ExtendedContext.fma(Decimal('3'), Decimal('-5'), Decimal('7')) + Decimal('-8') + >>> ExtendedContext.fma(Decimal('888565290'), Decimal('1557.96930'), Decimal('-86087.7578')) + Decimal('1.38435736E+12') + >>> ExtendedContext.fma(1, 3, 4) + Decimal('7') + >>> ExtendedContext.fma(1, Decimal(3), 4) + Decimal('7') + >>> ExtendedContext.fma(1, 3, Decimal(4)) + Decimal('7') + 'u'Returns a multiplied by b, plus c. + + The first two operands are multiplied together, using multiply, + the third operand is then added to the result of that + multiplication, using add, all with only one final rounding. + + >>> ExtendedContext.fma(Decimal('3'), Decimal('5'), Decimal('7')) + Decimal('22') + >>> ExtendedContext.fma(Decimal('3'), Decimal('-5'), Decimal('7')) + Decimal('-8') + >>> ExtendedContext.fma(Decimal('888565290'), Decimal('1557.96930'), Decimal('-86087.7578')) + Decimal('1.38435736E+12') + >>> ExtendedContext.fma(1, 3, 4) + Decimal('7') + >>> ExtendedContext.fma(1, Decimal(3), 4) + Decimal('7') + >>> ExtendedContext.fma(1, 3, Decimal(4)) + Decimal('7') + 'b'Return True if the operand is canonical; otherwise return False. + + Currently, the encoding of a Decimal instance is always + canonical, so this method returns True for any Decimal. + + >>> ExtendedContext.is_canonical(Decimal('2.50')) + True + 'u'Return True if the operand is canonical; otherwise return False. + + Currently, the encoding of a Decimal instance is always + canonical, so this method returns True for any Decimal. + + >>> ExtendedContext.is_canonical(Decimal('2.50')) + True + 'b'is_canonical requires a Decimal as an argument.'u'is_canonical requires a Decimal as an argument.'b'Return True if the operand is finite; otherwise return False. + + A Decimal instance is considered finite if it is neither + infinite nor a NaN. + + >>> ExtendedContext.is_finite(Decimal('2.50')) + True + >>> ExtendedContext.is_finite(Decimal('-0.3')) + True + >>> ExtendedContext.is_finite(Decimal('0')) + True + >>> ExtendedContext.is_finite(Decimal('Inf')) + False + >>> ExtendedContext.is_finite(Decimal('NaN')) + False + >>> ExtendedContext.is_finite(1) + True + 'u'Return True if the operand is finite; otherwise return False. + + A Decimal instance is considered finite if it is neither + infinite nor a NaN. + + >>> ExtendedContext.is_finite(Decimal('2.50')) + True + >>> ExtendedContext.is_finite(Decimal('-0.3')) + True + >>> ExtendedContext.is_finite(Decimal('0')) + True + >>> ExtendedContext.is_finite(Decimal('Inf')) + False + >>> ExtendedContext.is_finite(Decimal('NaN')) + False + >>> ExtendedContext.is_finite(1) + True + 'b'Return True if the operand is infinite; otherwise return False. 
+ + >>> ExtendedContext.is_infinite(Decimal('2.50')) + False + >>> ExtendedContext.is_infinite(Decimal('-Inf')) + True + >>> ExtendedContext.is_infinite(Decimal('NaN')) + False + >>> ExtendedContext.is_infinite(1) + False + 'u'Return True if the operand is infinite; otherwise return False. + + >>> ExtendedContext.is_infinite(Decimal('2.50')) + False + >>> ExtendedContext.is_infinite(Decimal('-Inf')) + True + >>> ExtendedContext.is_infinite(Decimal('NaN')) + False + >>> ExtendedContext.is_infinite(1) + False + 'b'Return True if the operand is a qNaN or sNaN; + otherwise return False. + + >>> ExtendedContext.is_nan(Decimal('2.50')) + False + >>> ExtendedContext.is_nan(Decimal('NaN')) + True + >>> ExtendedContext.is_nan(Decimal('-sNaN')) + True + >>> ExtendedContext.is_nan(1) + False + 'u'Return True if the operand is a qNaN or sNaN; + otherwise return False. + + >>> ExtendedContext.is_nan(Decimal('2.50')) + False + >>> ExtendedContext.is_nan(Decimal('NaN')) + True + >>> ExtendedContext.is_nan(Decimal('-sNaN')) + True + >>> ExtendedContext.is_nan(1) + False + 'b'Return True if the operand is a normal number; + otherwise return False. + + >>> c = ExtendedContext.copy() + >>> c.Emin = -999 + >>> c.Emax = 999 + >>> c.is_normal(Decimal('2.50')) + True + >>> c.is_normal(Decimal('0.1E-999')) + False + >>> c.is_normal(Decimal('0.00')) + False + >>> c.is_normal(Decimal('-Inf')) + False + >>> c.is_normal(Decimal('NaN')) + False + >>> c.is_normal(1) + True + 'u'Return True if the operand is a normal number; + otherwise return False. + + >>> c = ExtendedContext.copy() + >>> c.Emin = -999 + >>> c.Emax = 999 + >>> c.is_normal(Decimal('2.50')) + True + >>> c.is_normal(Decimal('0.1E-999')) + False + >>> c.is_normal(Decimal('0.00')) + False + >>> c.is_normal(Decimal('-Inf')) + False + >>> c.is_normal(Decimal('NaN')) + False + >>> c.is_normal(1) + True + 'b'Return True if the operand is a quiet NaN; otherwise return False. + + >>> ExtendedContext.is_qnan(Decimal('2.50')) + False + >>> ExtendedContext.is_qnan(Decimal('NaN')) + True + >>> ExtendedContext.is_qnan(Decimal('sNaN')) + False + >>> ExtendedContext.is_qnan(1) + False + 'u'Return True if the operand is a quiet NaN; otherwise return False. + + >>> ExtendedContext.is_qnan(Decimal('2.50')) + False + >>> ExtendedContext.is_qnan(Decimal('NaN')) + True + >>> ExtendedContext.is_qnan(Decimal('sNaN')) + False + >>> ExtendedContext.is_qnan(1) + False + 'b'Return True if the operand is negative; otherwise return False. + + >>> ExtendedContext.is_signed(Decimal('2.50')) + False + >>> ExtendedContext.is_signed(Decimal('-12')) + True + >>> ExtendedContext.is_signed(Decimal('-0')) + True + >>> ExtendedContext.is_signed(8) + False + >>> ExtendedContext.is_signed(-8) + True + 'u'Return True if the operand is negative; otherwise return False. + + >>> ExtendedContext.is_signed(Decimal('2.50')) + False + >>> ExtendedContext.is_signed(Decimal('-12')) + True + >>> ExtendedContext.is_signed(Decimal('-0')) + True + >>> ExtendedContext.is_signed(8) + False + >>> ExtendedContext.is_signed(-8) + True + 'b'Return True if the operand is a signaling NaN; + otherwise return False. + + >>> ExtendedContext.is_snan(Decimal('2.50')) + False + >>> ExtendedContext.is_snan(Decimal('NaN')) + False + >>> ExtendedContext.is_snan(Decimal('sNaN')) + True + >>> ExtendedContext.is_snan(1) + False + 'u'Return True if the operand is a signaling NaN; + otherwise return False. 
+ + >>> ExtendedContext.is_snan(Decimal('2.50')) + False + >>> ExtendedContext.is_snan(Decimal('NaN')) + False + >>> ExtendedContext.is_snan(Decimal('sNaN')) + True + >>> ExtendedContext.is_snan(1) + False + 'b'Return True if the operand is subnormal; otherwise return False. + + >>> c = ExtendedContext.copy() + >>> c.Emin = -999 + >>> c.Emax = 999 + >>> c.is_subnormal(Decimal('2.50')) + False + >>> c.is_subnormal(Decimal('0.1E-999')) + True + >>> c.is_subnormal(Decimal('0.00')) + False + >>> c.is_subnormal(Decimal('-Inf')) + False + >>> c.is_subnormal(Decimal('NaN')) + False + >>> c.is_subnormal(1) + False + 'u'Return True if the operand is subnormal; otherwise return False. + + >>> c = ExtendedContext.copy() + >>> c.Emin = -999 + >>> c.Emax = 999 + >>> c.is_subnormal(Decimal('2.50')) + False + >>> c.is_subnormal(Decimal('0.1E-999')) + True + >>> c.is_subnormal(Decimal('0.00')) + False + >>> c.is_subnormal(Decimal('-Inf')) + False + >>> c.is_subnormal(Decimal('NaN')) + False + >>> c.is_subnormal(1) + False + 'b'Return True if the operand is a zero; otherwise return False. + + >>> ExtendedContext.is_zero(Decimal('0')) + True + >>> ExtendedContext.is_zero(Decimal('2.50')) + False + >>> ExtendedContext.is_zero(Decimal('-0E+2')) + True + >>> ExtendedContext.is_zero(1) + False + >>> ExtendedContext.is_zero(0) + True + 'u'Return True if the operand is a zero; otherwise return False. + + >>> ExtendedContext.is_zero(Decimal('0')) + True + >>> ExtendedContext.is_zero(Decimal('2.50')) + False + >>> ExtendedContext.is_zero(Decimal('-0E+2')) + True + >>> ExtendedContext.is_zero(1) + False + >>> ExtendedContext.is_zero(0) + True + 'b'Returns the natural (base e) logarithm of the operand. + + >>> c = ExtendedContext.copy() + >>> c.Emin = -999 + >>> c.Emax = 999 + >>> c.ln(Decimal('0')) + Decimal('-Infinity') + >>> c.ln(Decimal('1.000')) + Decimal('0') + >>> c.ln(Decimal('2.71828183')) + Decimal('1.00000000') + >>> c.ln(Decimal('10')) + Decimal('2.30258509') + >>> c.ln(Decimal('+Infinity')) + Decimal('Infinity') + >>> c.ln(1) + Decimal('0') + 'u'Returns the natural (base e) logarithm of the operand. + + >>> c = ExtendedContext.copy() + >>> c.Emin = -999 + >>> c.Emax = 999 + >>> c.ln(Decimal('0')) + Decimal('-Infinity') + >>> c.ln(Decimal('1.000')) + Decimal('0') + >>> c.ln(Decimal('2.71828183')) + Decimal('1.00000000') + >>> c.ln(Decimal('10')) + Decimal('2.30258509') + >>> c.ln(Decimal('+Infinity')) + Decimal('Infinity') + >>> c.ln(1) + Decimal('0') + 'b'Returns the base 10 logarithm of the operand. + + >>> c = ExtendedContext.copy() + >>> c.Emin = -999 + >>> c.Emax = 999 + >>> c.log10(Decimal('0')) + Decimal('-Infinity') + >>> c.log10(Decimal('0.001')) + Decimal('-3') + >>> c.log10(Decimal('1.000')) + Decimal('0') + >>> c.log10(Decimal('2')) + Decimal('0.301029996') + >>> c.log10(Decimal('10')) + Decimal('1') + >>> c.log10(Decimal('70')) + Decimal('1.84509804') + >>> c.log10(Decimal('+Infinity')) + Decimal('Infinity') + >>> c.log10(0) + Decimal('-Infinity') + >>> c.log10(1) + Decimal('0') + 'u'Returns the base 10 logarithm of the operand. 
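+ The is_normal/is_subnormal/ln doctests above all use the same pattern: copy a
+ context, narrow Emin and Emax, then query it. A small illustrative sketch of
+ that pattern (building an equivalent context directly is an assumption made
+ here for brevity):
+
+     from decimal import Decimal, Context
+
+     c = Context(prec=9, Emin=-999, Emax=999)     # comparable to the modified ExtendedContext copy
+     print(c.is_subnormal(Decimal('0.1E-999')))   # True: below the smallest normal magnitude for Emin=-999
+     print(c.is_subnormal(Decimal('2.50')))       # False
+     print(c.ln(Decimal('2.71828183')))           # 1.00000000 to 9 significant digits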
+ + >>> c = ExtendedContext.copy() + >>> c.Emin = -999 + >>> c.Emax = 999 + >>> c.log10(Decimal('0')) + Decimal('-Infinity') + >>> c.log10(Decimal('0.001')) + Decimal('-3') + >>> c.log10(Decimal('1.000')) + Decimal('0') + >>> c.log10(Decimal('2')) + Decimal('0.301029996') + >>> c.log10(Decimal('10')) + Decimal('1') + >>> c.log10(Decimal('70')) + Decimal('1.84509804') + >>> c.log10(Decimal('+Infinity')) + Decimal('Infinity') + >>> c.log10(0) + Decimal('-Infinity') + >>> c.log10(1) + Decimal('0') + 'b' Returns the exponent of the magnitude of the operand's MSD. + + The result is the integer which is the exponent of the magnitude + of the most significant digit of the operand (as though the + operand were truncated to a single digit while maintaining the + value of that digit and without limiting the resulting exponent). + + >>> ExtendedContext.logb(Decimal('250')) + Decimal('2') + >>> ExtendedContext.logb(Decimal('2.50')) + Decimal('0') + >>> ExtendedContext.logb(Decimal('0.03')) + Decimal('-2') + >>> ExtendedContext.logb(Decimal('0')) + Decimal('-Infinity') + >>> ExtendedContext.logb(1) + Decimal('0') + >>> ExtendedContext.logb(10) + Decimal('1') + >>> ExtendedContext.logb(100) + Decimal('2') + 'u' Returns the exponent of the magnitude of the operand's MSD. + + The result is the integer which is the exponent of the magnitude + of the most significant digit of the operand (as though the + operand were truncated to a single digit while maintaining the + value of that digit and without limiting the resulting exponent). + + >>> ExtendedContext.logb(Decimal('250')) + Decimal('2') + >>> ExtendedContext.logb(Decimal('2.50')) + Decimal('0') + >>> ExtendedContext.logb(Decimal('0.03')) + Decimal('-2') + >>> ExtendedContext.logb(Decimal('0')) + Decimal('-Infinity') + >>> ExtendedContext.logb(1) + Decimal('0') + >>> ExtendedContext.logb(10) + Decimal('1') + >>> ExtendedContext.logb(100) + Decimal('2') + 'b'Applies the logical operation 'and' between each operand's digits. + + The operands must be both logical numbers. + + >>> ExtendedContext.logical_and(Decimal('0'), Decimal('0')) + Decimal('0') + >>> ExtendedContext.logical_and(Decimal('0'), Decimal('1')) + Decimal('0') + >>> ExtendedContext.logical_and(Decimal('1'), Decimal('0')) + Decimal('0') + >>> ExtendedContext.logical_and(Decimal('1'), Decimal('1')) + Decimal('1') + >>> ExtendedContext.logical_and(Decimal('1100'), Decimal('1010')) + Decimal('1000') + >>> ExtendedContext.logical_and(Decimal('1111'), Decimal('10')) + Decimal('10') + >>> ExtendedContext.logical_and(110, 1101) + Decimal('100') + >>> ExtendedContext.logical_and(Decimal(110), 1101) + Decimal('100') + >>> ExtendedContext.logical_and(110, Decimal(1101)) + Decimal('100') + 'u'Applies the logical operation 'and' between each operand's digits. + + The operands must be both logical numbers. 
+ + >>> ExtendedContext.logical_and(Decimal('0'), Decimal('0')) + Decimal('0') + >>> ExtendedContext.logical_and(Decimal('0'), Decimal('1')) + Decimal('0') + >>> ExtendedContext.logical_and(Decimal('1'), Decimal('0')) + Decimal('0') + >>> ExtendedContext.logical_and(Decimal('1'), Decimal('1')) + Decimal('1') + >>> ExtendedContext.logical_and(Decimal('1100'), Decimal('1010')) + Decimal('1000') + >>> ExtendedContext.logical_and(Decimal('1111'), Decimal('10')) + Decimal('10') + >>> ExtendedContext.logical_and(110, 1101) + Decimal('100') + >>> ExtendedContext.logical_and(Decimal(110), 1101) + Decimal('100') + >>> ExtendedContext.logical_and(110, Decimal(1101)) + Decimal('100') + 'b'Invert all the digits in the operand. + + The operand must be a logical number. + + >>> ExtendedContext.logical_invert(Decimal('0')) + Decimal('111111111') + >>> ExtendedContext.logical_invert(Decimal('1')) + Decimal('111111110') + >>> ExtendedContext.logical_invert(Decimal('111111111')) + Decimal('0') + >>> ExtendedContext.logical_invert(Decimal('101010101')) + Decimal('10101010') + >>> ExtendedContext.logical_invert(1101) + Decimal('111110010') + 'u'Invert all the digits in the operand. + + The operand must be a logical number. + + >>> ExtendedContext.logical_invert(Decimal('0')) + Decimal('111111111') + >>> ExtendedContext.logical_invert(Decimal('1')) + Decimal('111111110') + >>> ExtendedContext.logical_invert(Decimal('111111111')) + Decimal('0') + >>> ExtendedContext.logical_invert(Decimal('101010101')) + Decimal('10101010') + >>> ExtendedContext.logical_invert(1101) + Decimal('111110010') + 'b'Applies the logical operation 'or' between each operand's digits. + + The operands must be both logical numbers. + + >>> ExtendedContext.logical_or(Decimal('0'), Decimal('0')) + Decimal('0') + >>> ExtendedContext.logical_or(Decimal('0'), Decimal('1')) + Decimal('1') + >>> ExtendedContext.logical_or(Decimal('1'), Decimal('0')) + Decimal('1') + >>> ExtendedContext.logical_or(Decimal('1'), Decimal('1')) + Decimal('1') + >>> ExtendedContext.logical_or(Decimal('1100'), Decimal('1010')) + Decimal('1110') + >>> ExtendedContext.logical_or(Decimal('1110'), Decimal('10')) + Decimal('1110') + >>> ExtendedContext.logical_or(110, 1101) + Decimal('1111') + >>> ExtendedContext.logical_or(Decimal(110), 1101) + Decimal('1111') + >>> ExtendedContext.logical_or(110, Decimal(1101)) + Decimal('1111') + 'u'Applies the logical operation 'or' between each operand's digits. + + The operands must be both logical numbers. + + >>> ExtendedContext.logical_or(Decimal('0'), Decimal('0')) + Decimal('0') + >>> ExtendedContext.logical_or(Decimal('0'), Decimal('1')) + Decimal('1') + >>> ExtendedContext.logical_or(Decimal('1'), Decimal('0')) + Decimal('1') + >>> ExtendedContext.logical_or(Decimal('1'), Decimal('1')) + Decimal('1') + >>> ExtendedContext.logical_or(Decimal('1100'), Decimal('1010')) + Decimal('1110') + >>> ExtendedContext.logical_or(Decimal('1110'), Decimal('10')) + Decimal('1110') + >>> ExtendedContext.logical_or(110, 1101) + Decimal('1111') + >>> ExtendedContext.logical_or(Decimal(110), 1101) + Decimal('1111') + >>> ExtendedContext.logical_or(110, Decimal(1101)) + Decimal('1111') + 'b'Applies the logical operation 'xor' between each operand's digits. + + The operands must be both logical numbers. 
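+ The logical operations above work digit by digit on coefficients made only of
+ 0s and 1s. A short sketch showing that, for such operands, the results agree
+ with ordinary bitwise operations on the same bit patterns:
+
+     from decimal import Decimal, ExtendedContext
+
+     a, b = Decimal('1100'), Decimal('1010')
+     print(ExtendedContext.logical_and(a, b))       # 1000
+     print(ExtendedContext.logical_or(a, b))        # 1110
+     x, y = int(str(a), 2), int(str(b), 2)          # read the digit strings as base-2 integers
+     print(format(x & y, 'b'), format(x | y, 'b'))  # 1000 1110 -- same patterns
+
+ (logical_invert differs from plain bit inversion in that it pads the result to
+ the full context precision, as the 111111110 results above show.)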
+ + >>> ExtendedContext.logical_xor(Decimal('0'), Decimal('0')) + Decimal('0') + >>> ExtendedContext.logical_xor(Decimal('0'), Decimal('1')) + Decimal('1') + >>> ExtendedContext.logical_xor(Decimal('1'), Decimal('0')) + Decimal('1') + >>> ExtendedContext.logical_xor(Decimal('1'), Decimal('1')) + Decimal('0') + >>> ExtendedContext.logical_xor(Decimal('1100'), Decimal('1010')) + Decimal('110') + >>> ExtendedContext.logical_xor(Decimal('1111'), Decimal('10')) + Decimal('1101') + >>> ExtendedContext.logical_xor(110, 1101) + Decimal('1011') + >>> ExtendedContext.logical_xor(Decimal(110), 1101) + Decimal('1011') + >>> ExtendedContext.logical_xor(110, Decimal(1101)) + Decimal('1011') + 'u'Applies the logical operation 'xor' between each operand's digits. + + The operands must be both logical numbers. + + >>> ExtendedContext.logical_xor(Decimal('0'), Decimal('0')) + Decimal('0') + >>> ExtendedContext.logical_xor(Decimal('0'), Decimal('1')) + Decimal('1') + >>> ExtendedContext.logical_xor(Decimal('1'), Decimal('0')) + Decimal('1') + >>> ExtendedContext.logical_xor(Decimal('1'), Decimal('1')) + Decimal('0') + >>> ExtendedContext.logical_xor(Decimal('1100'), Decimal('1010')) + Decimal('110') + >>> ExtendedContext.logical_xor(Decimal('1111'), Decimal('10')) + Decimal('1101') + >>> ExtendedContext.logical_xor(110, 1101) + Decimal('1011') + >>> ExtendedContext.logical_xor(Decimal(110), 1101) + Decimal('1011') + >>> ExtendedContext.logical_xor(110, Decimal(1101)) + Decimal('1011') + 'b'max compares two values numerically and returns the maximum. + + If either operand is a NaN then the general rules apply. + Otherwise, the operands are compared as though by the compare + operation. If they are numerically equal then the left-hand operand + is chosen as the result. Otherwise the maximum (closer to positive + infinity) of the two operands is chosen as the result. + + >>> ExtendedContext.max(Decimal('3'), Decimal('2')) + Decimal('3') + >>> ExtendedContext.max(Decimal('-10'), Decimal('3')) + Decimal('3') + >>> ExtendedContext.max(Decimal('1.0'), Decimal('1')) + Decimal('1') + >>> ExtendedContext.max(Decimal('7'), Decimal('NaN')) + Decimal('7') + >>> ExtendedContext.max(1, 2) + Decimal('2') + >>> ExtendedContext.max(Decimal(1), 2) + Decimal('2') + >>> ExtendedContext.max(1, Decimal(2)) + Decimal('2') + 'u'max compares two values numerically and returns the maximum. + + If either operand is a NaN then the general rules apply. + Otherwise, the operands are compared as though by the compare + operation. If they are numerically equal then the left-hand operand + is chosen as the result. Otherwise the maximum (closer to positive + infinity) of the two operands is chosen as the result. + + >>> ExtendedContext.max(Decimal('3'), Decimal('2')) + Decimal('3') + >>> ExtendedContext.max(Decimal('-10'), Decimal('3')) + Decimal('3') + >>> ExtendedContext.max(Decimal('1.0'), Decimal('1')) + Decimal('1') + >>> ExtendedContext.max(Decimal('7'), Decimal('NaN')) + Decimal('7') + >>> ExtendedContext.max(1, 2) + Decimal('2') + >>> ExtendedContext.max(Decimal(1), 2) + Decimal('2') + >>> ExtendedContext.max(1, Decimal(2)) + Decimal('2') + 'b'Compares the values numerically with their sign ignored. 
+ + >>> ExtendedContext.max_mag(Decimal('7'), Decimal('NaN')) + Decimal('7') + >>> ExtendedContext.max_mag(Decimal('7'), Decimal('-10')) + Decimal('-10') + >>> ExtendedContext.max_mag(1, -2) + Decimal('-2') + >>> ExtendedContext.max_mag(Decimal(1), -2) + Decimal('-2') + >>> ExtendedContext.max_mag(1, Decimal(-2)) + Decimal('-2') + 'u'Compares the values numerically with their sign ignored. + + >>> ExtendedContext.max_mag(Decimal('7'), Decimal('NaN')) + Decimal('7') + >>> ExtendedContext.max_mag(Decimal('7'), Decimal('-10')) + Decimal('-10') + >>> ExtendedContext.max_mag(1, -2) + Decimal('-2') + >>> ExtendedContext.max_mag(Decimal(1), -2) + Decimal('-2') + >>> ExtendedContext.max_mag(1, Decimal(-2)) + Decimal('-2') + 'b'min compares two values numerically and returns the minimum. + + If either operand is a NaN then the general rules apply. + Otherwise, the operands are compared as though by the compare + operation. If they are numerically equal then the left-hand operand + is chosen as the result. Otherwise the minimum (closer to negative + infinity) of the two operands is chosen as the result. + + >>> ExtendedContext.min(Decimal('3'), Decimal('2')) + Decimal('2') + >>> ExtendedContext.min(Decimal('-10'), Decimal('3')) + Decimal('-10') + >>> ExtendedContext.min(Decimal('1.0'), Decimal('1')) + Decimal('1.0') + >>> ExtendedContext.min(Decimal('7'), Decimal('NaN')) + Decimal('7') + >>> ExtendedContext.min(1, 2) + Decimal('1') + >>> ExtendedContext.min(Decimal(1), 2) + Decimal('1') + >>> ExtendedContext.min(1, Decimal(29)) + Decimal('1') + 'u'min compares two values numerically and returns the minimum. + + If either operand is a NaN then the general rules apply. + Otherwise, the operands are compared as though by the compare + operation. If they are numerically equal then the left-hand operand + is chosen as the result. Otherwise the minimum (closer to negative + infinity) of the two operands is chosen as the result. + + >>> ExtendedContext.min(Decimal('3'), Decimal('2')) + Decimal('2') + >>> ExtendedContext.min(Decimal('-10'), Decimal('3')) + Decimal('-10') + >>> ExtendedContext.min(Decimal('1.0'), Decimal('1')) + Decimal('1.0') + >>> ExtendedContext.min(Decimal('7'), Decimal('NaN')) + Decimal('7') + >>> ExtendedContext.min(1, 2) + Decimal('1') + >>> ExtendedContext.min(Decimal(1), 2) + Decimal('1') + >>> ExtendedContext.min(1, Decimal(29)) + Decimal('1') + 'b'Compares the values numerically with their sign ignored. + + >>> ExtendedContext.min_mag(Decimal('3'), Decimal('-2')) + Decimal('-2') + >>> ExtendedContext.min_mag(Decimal('-3'), Decimal('NaN')) + Decimal('-3') + >>> ExtendedContext.min_mag(1, -2) + Decimal('1') + >>> ExtendedContext.min_mag(Decimal(1), -2) + Decimal('1') + >>> ExtendedContext.min_mag(1, Decimal(-2)) + Decimal('1') + 'u'Compares the values numerically with their sign ignored. + + >>> ExtendedContext.min_mag(Decimal('3'), Decimal('-2')) + Decimal('-2') + >>> ExtendedContext.min_mag(Decimal('-3'), Decimal('NaN')) + Decimal('-3') + >>> ExtendedContext.min_mag(1, -2) + Decimal('1') + >>> ExtendedContext.min_mag(Decimal(1), -2) + Decimal('1') + >>> ExtendedContext.min_mag(1, Decimal(-2)) + Decimal('1') + 'b'Minus corresponds to unary prefix minus in Python. + + The operation is evaluated using the same rules as subtract; the + operation minus(a) is calculated as subtract('0', a) where the '0' + has the same exponent as the operand. 
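+ A brief sketch contrasting the sign-ignoring comparisons with the ordinary
+ ones, reusing values from the doctests above:
+
+     from decimal import Decimal, ExtendedContext
+
+     a, b = Decimal('7'), Decimal('-10')
+     print(ExtendedContext.max(a, b))       # 7   -- ordinary numeric maximum
+     print(ExtendedContext.max_mag(a, b))   # -10 -- larger absolute value wins
+     print(ExtendedContext.min_mag(Decimal('3'), Decimal('-2')))   # -2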
+ + >>> ExtendedContext.minus(Decimal('1.3')) + Decimal('-1.3') + >>> ExtendedContext.minus(Decimal('-1.3')) + Decimal('1.3') + >>> ExtendedContext.minus(1) + Decimal('-1') + 'u'Minus corresponds to unary prefix minus in Python. + + The operation is evaluated using the same rules as subtract; the + operation minus(a) is calculated as subtract('0', a) where the '0' + has the same exponent as the operand. + + >>> ExtendedContext.minus(Decimal('1.3')) + Decimal('-1.3') + >>> ExtendedContext.minus(Decimal('-1.3')) + Decimal('1.3') + >>> ExtendedContext.minus(1) + Decimal('-1') + 'b'multiply multiplies two operands. + + If either operand is a special value then the general rules apply. + Otherwise, the operands are multiplied together + ('long multiplication'), resulting in a number which may be as long as + the sum of the lengths of the two operands. + + >>> ExtendedContext.multiply(Decimal('1.20'), Decimal('3')) + Decimal('3.60') + >>> ExtendedContext.multiply(Decimal('7'), Decimal('3')) + Decimal('21') + >>> ExtendedContext.multiply(Decimal('0.9'), Decimal('0.8')) + Decimal('0.72') + >>> ExtendedContext.multiply(Decimal('0.9'), Decimal('-0')) + Decimal('-0.0') + >>> ExtendedContext.multiply(Decimal('654321'), Decimal('654321')) + Decimal('4.28135971E+11') + >>> ExtendedContext.multiply(7, 7) + Decimal('49') + >>> ExtendedContext.multiply(Decimal(7), 7) + Decimal('49') + >>> ExtendedContext.multiply(7, Decimal(7)) + Decimal('49') + 'u'multiply multiplies two operands. + + If either operand is a special value then the general rules apply. + Otherwise, the operands are multiplied together + ('long multiplication'), resulting in a number which may be as long as + the sum of the lengths of the two operands. + + >>> ExtendedContext.multiply(Decimal('1.20'), Decimal('3')) + Decimal('3.60') + >>> ExtendedContext.multiply(Decimal('7'), Decimal('3')) + Decimal('21') + >>> ExtendedContext.multiply(Decimal('0.9'), Decimal('0.8')) + Decimal('0.72') + >>> ExtendedContext.multiply(Decimal('0.9'), Decimal('-0')) + Decimal('-0.0') + >>> ExtendedContext.multiply(Decimal('654321'), Decimal('654321')) + Decimal('4.28135971E+11') + >>> ExtendedContext.multiply(7, 7) + Decimal('49') + >>> ExtendedContext.multiply(Decimal(7), 7) + Decimal('49') + >>> ExtendedContext.multiply(7, Decimal(7)) + Decimal('49') + 'b'Returns the largest representable number smaller than a. + + >>> c = ExtendedContext.copy() + >>> c.Emin = -999 + >>> c.Emax = 999 + >>> ExtendedContext.next_minus(Decimal('1')) + Decimal('0.999999999') + >>> c.next_minus(Decimal('1E-1007')) + Decimal('0E-1007') + >>> ExtendedContext.next_minus(Decimal('-1.00000003')) + Decimal('-1.00000004') + >>> c.next_minus(Decimal('Infinity')) + Decimal('9.99999999E+999') + >>> c.next_minus(1) + Decimal('0.999999999') + 'u'Returns the largest representable number smaller than a. + + >>> c = ExtendedContext.copy() + >>> c.Emin = -999 + >>> c.Emax = 999 + >>> ExtendedContext.next_minus(Decimal('1')) + Decimal('0.999999999') + >>> c.next_minus(Decimal('1E-1007')) + Decimal('0E-1007') + >>> ExtendedContext.next_minus(Decimal('-1.00000003')) + Decimal('-1.00000004') + >>> c.next_minus(Decimal('Infinity')) + Decimal('9.99999999E+999') + >>> c.next_minus(1) + Decimal('0.999999999') + 'b'Returns the smallest representable number larger than a. 
+ + >>> c = ExtendedContext.copy() + >>> c.Emin = -999 + >>> c.Emax = 999 + >>> ExtendedContext.next_plus(Decimal('1')) + Decimal('1.00000001') + >>> c.next_plus(Decimal('-1E-1007')) + Decimal('-0E-1007') + >>> ExtendedContext.next_plus(Decimal('-1.00000003')) + Decimal('-1.00000002') + >>> c.next_plus(Decimal('-Infinity')) + Decimal('-9.99999999E+999') + >>> c.next_plus(1) + Decimal('1.00000001') + 'u'Returns the smallest representable number larger than a. + + >>> c = ExtendedContext.copy() + >>> c.Emin = -999 + >>> c.Emax = 999 + >>> ExtendedContext.next_plus(Decimal('1')) + Decimal('1.00000001') + >>> c.next_plus(Decimal('-1E-1007')) + Decimal('-0E-1007') + >>> ExtendedContext.next_plus(Decimal('-1.00000003')) + Decimal('-1.00000002') + >>> c.next_plus(Decimal('-Infinity')) + Decimal('-9.99999999E+999') + >>> c.next_plus(1) + Decimal('1.00000001') + 'b'Returns the number closest to a, in direction towards b. + + The result is the closest representable number from the first + operand (but not the first operand) that is in the direction + towards the second operand, unless the operands have the same + value. + + >>> c = ExtendedContext.copy() + >>> c.Emin = -999 + >>> c.Emax = 999 + >>> c.next_toward(Decimal('1'), Decimal('2')) + Decimal('1.00000001') + >>> c.next_toward(Decimal('-1E-1007'), Decimal('1')) + Decimal('-0E-1007') + >>> c.next_toward(Decimal('-1.00000003'), Decimal('0')) + Decimal('-1.00000002') + >>> c.next_toward(Decimal('1'), Decimal('0')) + Decimal('0.999999999') + >>> c.next_toward(Decimal('1E-1007'), Decimal('-100')) + Decimal('0E-1007') + >>> c.next_toward(Decimal('-1.00000003'), Decimal('-10')) + Decimal('-1.00000004') + >>> c.next_toward(Decimal('0.00'), Decimal('-0.0000')) + Decimal('-0.00') + >>> c.next_toward(0, 1) + Decimal('1E-1007') + >>> c.next_toward(Decimal(0), 1) + Decimal('1E-1007') + >>> c.next_toward(0, Decimal(1)) + Decimal('1E-1007') + 'u'Returns the number closest to a, in direction towards b. + + The result is the closest representable number from the first + operand (but not the first operand) that is in the direction + towards the second operand, unless the operands have the same + value. + + >>> c = ExtendedContext.copy() + >>> c.Emin = -999 + >>> c.Emax = 999 + >>> c.next_toward(Decimal('1'), Decimal('2')) + Decimal('1.00000001') + >>> c.next_toward(Decimal('-1E-1007'), Decimal('1')) + Decimal('-0E-1007') + >>> c.next_toward(Decimal('-1.00000003'), Decimal('0')) + Decimal('-1.00000002') + >>> c.next_toward(Decimal('1'), Decimal('0')) + Decimal('0.999999999') + >>> c.next_toward(Decimal('1E-1007'), Decimal('-100')) + Decimal('0E-1007') + >>> c.next_toward(Decimal('-1.00000003'), Decimal('-10')) + Decimal('-1.00000004') + >>> c.next_toward(Decimal('0.00'), Decimal('-0.0000')) + Decimal('-0.00') + >>> c.next_toward(0, 1) + Decimal('1E-1007') + >>> c.next_toward(Decimal(0), 1) + Decimal('1E-1007') + >>> c.next_toward(0, Decimal(1)) + Decimal('1E-1007') + 'b'normalize reduces an operand to its simplest form. + + Essentially a plus operation with all trailing zeros removed from the + result. 
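+ One illustrative use of next_plus is measuring the spacing between adjacent
+ representable values (a sketch, assuming the same prec/Emin/Emax settings as
+ the doctests above):
+
+     from decimal import Decimal, Context
+
+     c = Context(prec=9, Emin=-999, Emax=999)
+     x = Decimal('1')
+     print(c.next_plus(x))                  # 1.00000001
+     print(c.subtract(c.next_plus(x), x))   # 1E-8: one unit in the last place at 1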
+ + >>> ExtendedContext.normalize(Decimal('2.1')) + Decimal('2.1') + >>> ExtendedContext.normalize(Decimal('-2.0')) + Decimal('-2') + >>> ExtendedContext.normalize(Decimal('1.200')) + Decimal('1.2') + >>> ExtendedContext.normalize(Decimal('-120')) + Decimal('-1.2E+2') + >>> ExtendedContext.normalize(Decimal('120.00')) + Decimal('1.2E+2') + >>> ExtendedContext.normalize(Decimal('0.00')) + Decimal('0') + >>> ExtendedContext.normalize(6) + Decimal('6') + 'u'normalize reduces an operand to its simplest form. + + Essentially a plus operation with all trailing zeros removed from the + result. + + >>> ExtendedContext.normalize(Decimal('2.1')) + Decimal('2.1') + >>> ExtendedContext.normalize(Decimal('-2.0')) + Decimal('-2') + >>> ExtendedContext.normalize(Decimal('1.200')) + Decimal('1.2') + >>> ExtendedContext.normalize(Decimal('-120')) + Decimal('-1.2E+2') + >>> ExtendedContext.normalize(Decimal('120.00')) + Decimal('1.2E+2') + >>> ExtendedContext.normalize(Decimal('0.00')) + Decimal('0') + >>> ExtendedContext.normalize(6) + Decimal('6') + 'b'Returns an indication of the class of the operand. + + The class is one of the following strings: + -sNaN + -NaN + -Infinity + -Normal + -Subnormal + -Zero + +Zero + +Subnormal + +Normal + +Infinity + + >>> c = ExtendedContext.copy() + >>> c.Emin = -999 + >>> c.Emax = 999 + >>> c.number_class(Decimal('Infinity')) + '+Infinity' + >>> c.number_class(Decimal('1E-10')) + '+Normal' + >>> c.number_class(Decimal('2.50')) + '+Normal' + >>> c.number_class(Decimal('0.1E-999')) + '+Subnormal' + >>> c.number_class(Decimal('0')) + '+Zero' + >>> c.number_class(Decimal('-0')) + '-Zero' + >>> c.number_class(Decimal('-0.1E-999')) + '-Subnormal' + >>> c.number_class(Decimal('-1E-10')) + '-Normal' + >>> c.number_class(Decimal('-2.50')) + '-Normal' + >>> c.number_class(Decimal('-Infinity')) + '-Infinity' + >>> c.number_class(Decimal('NaN')) + 'NaN' + >>> c.number_class(Decimal('-NaN')) + 'NaN' + >>> c.number_class(Decimal('sNaN')) + 'sNaN' + >>> c.number_class(123) + '+Normal' + 'u'Returns an indication of the class of the operand. + + The class is one of the following strings: + -sNaN + -NaN + -Infinity + -Normal + -Subnormal + -Zero + +Zero + +Subnormal + +Normal + +Infinity + + >>> c = ExtendedContext.copy() + >>> c.Emin = -999 + >>> c.Emax = 999 + >>> c.number_class(Decimal('Infinity')) + '+Infinity' + >>> c.number_class(Decimal('1E-10')) + '+Normal' + >>> c.number_class(Decimal('2.50')) + '+Normal' + >>> c.number_class(Decimal('0.1E-999')) + '+Subnormal' + >>> c.number_class(Decimal('0')) + '+Zero' + >>> c.number_class(Decimal('-0')) + '-Zero' + >>> c.number_class(Decimal('-0.1E-999')) + '-Subnormal' + >>> c.number_class(Decimal('-1E-10')) + '-Normal' + >>> c.number_class(Decimal('-2.50')) + '-Normal' + >>> c.number_class(Decimal('-Infinity')) + '-Infinity' + >>> c.number_class(Decimal('NaN')) + 'NaN' + >>> c.number_class(Decimal('-NaN')) + 'NaN' + >>> c.number_class(Decimal('sNaN')) + 'sNaN' + >>> c.number_class(123) + '+Normal' + 'b'Plus corresponds to unary prefix plus in Python. + + The operation is evaluated using the same rules as add; the + operation plus(a) is calculated as add('0', a) where the '0' + has the same exponent as the operand. + + >>> ExtendedContext.plus(Decimal('1.3')) + Decimal('1.3') + >>> ExtendedContext.plus(Decimal('-1.3')) + Decimal('-1.3') + >>> ExtendedContext.plus(-1) + Decimal('-1') + 'u'Plus corresponds to unary prefix plus in Python. 
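+ A short sketch of normalize and number_class together: values that compare
+ equal can still carry different exponents, and number_class reports the sign
+ and magnitude class as a string (values taken from the examples above):
+
+     from decimal import Decimal, ExtendedContext
+
+     a, b = Decimal('1.200'), Decimal('1.2')
+     print(a == b)                                         # True: equal in value
+     print(ExtendedContext.normalize(a))                   # 1.2 -- trailing zeros removed
+     print(ExtendedContext.number_class(Decimal('-0')))    # -Zero
+     print(ExtendedContext.number_class(Decimal('sNaN')))  # sNaN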
+ + The operation is evaluated using the same rules as add; the + operation plus(a) is calculated as add('0', a) where the '0' + has the same exponent as the operand. + + >>> ExtendedContext.plus(Decimal('1.3')) + Decimal('1.3') + >>> ExtendedContext.plus(Decimal('-1.3')) + Decimal('-1.3') + >>> ExtendedContext.plus(-1) + Decimal('-1') + 'b'Raises a to the power of b, to modulo if given. + + With two arguments, compute a**b. If a is negative then b + must be integral. The result will be inexact unless b is + integral and the result is finite and can be expressed exactly + in 'precision' digits. + + With three arguments, compute (a**b) % modulo. For the + three argument form, the following restrictions on the + arguments hold: + + - all three arguments must be integral + - b must be nonnegative + - at least one of a or b must be nonzero + - modulo must be nonzero and have at most 'precision' digits + + The result of pow(a, b, modulo) is identical to the result + that would be obtained by computing (a**b) % modulo with + unbounded precision, but is computed more efficiently. It is + always exact. + + >>> c = ExtendedContext.copy() + >>> c.Emin = -999 + >>> c.Emax = 999 + >>> c.power(Decimal('2'), Decimal('3')) + Decimal('8') + >>> c.power(Decimal('-2'), Decimal('3')) + Decimal('-8') + >>> c.power(Decimal('2'), Decimal('-3')) + Decimal('0.125') + >>> c.power(Decimal('1.7'), Decimal('8')) + Decimal('69.7575744') + >>> c.power(Decimal('10'), Decimal('0.301029996')) + Decimal('2.00000000') + >>> c.power(Decimal('Infinity'), Decimal('-1')) + Decimal('0') + >>> c.power(Decimal('Infinity'), Decimal('0')) + Decimal('1') + >>> c.power(Decimal('Infinity'), Decimal('1')) + Decimal('Infinity') + >>> c.power(Decimal('-Infinity'), Decimal('-1')) + Decimal('-0') + >>> c.power(Decimal('-Infinity'), Decimal('0')) + Decimal('1') + >>> c.power(Decimal('-Infinity'), Decimal('1')) + Decimal('-Infinity') + >>> c.power(Decimal('-Infinity'), Decimal('2')) + Decimal('Infinity') + >>> c.power(Decimal('0'), Decimal('0')) + Decimal('NaN') + + >>> c.power(Decimal('3'), Decimal('7'), Decimal('16')) + Decimal('11') + >>> c.power(Decimal('-3'), Decimal('7'), Decimal('16')) + Decimal('-11') + >>> c.power(Decimal('-3'), Decimal('8'), Decimal('16')) + Decimal('1') + >>> c.power(Decimal('3'), Decimal('7'), Decimal('-16')) + Decimal('11') + >>> c.power(Decimal('23E12345'), Decimal('67E189'), Decimal('123456789')) + Decimal('11729830') + >>> c.power(Decimal('-0'), Decimal('17'), Decimal('1729')) + Decimal('-0') + >>> c.power(Decimal('-23'), Decimal('0'), Decimal('65537')) + Decimal('1') + >>> ExtendedContext.power(7, 7) + Decimal('823543') + >>> ExtendedContext.power(Decimal(7), 7) + Decimal('823543') + >>> ExtendedContext.power(7, Decimal(7), 2) + Decimal('1') + 'u'Raises a to the power of b, to modulo if given. + + With two arguments, compute a**b. If a is negative then b + must be integral. The result will be inexact unless b is + integral and the result is finite and can be expressed exactly + in 'precision' digits. + + With three arguments, compute (a**b) % modulo. For the + three argument form, the following restrictions on the + arguments hold: + + - all three arguments must be integral + - b must be nonnegative + - at least one of a or b must be nonzero + - modulo must be nonzero and have at most 'precision' digits + + The result of pow(a, b, modulo) is identical to the result + that would be obtained by computing (a**b) % modulo with + unbounded precision, but is computed more efficiently. It is + always exact. 
+ + >>> c = ExtendedContext.copy() + >>> c.Emin = -999 + >>> c.Emax = 999 + >>> c.power(Decimal('2'), Decimal('3')) + Decimal('8') + >>> c.power(Decimal('-2'), Decimal('3')) + Decimal('-8') + >>> c.power(Decimal('2'), Decimal('-3')) + Decimal('0.125') + >>> c.power(Decimal('1.7'), Decimal('8')) + Decimal('69.7575744') + >>> c.power(Decimal('10'), Decimal('0.301029996')) + Decimal('2.00000000') + >>> c.power(Decimal('Infinity'), Decimal('-1')) + Decimal('0') + >>> c.power(Decimal('Infinity'), Decimal('0')) + Decimal('1') + >>> c.power(Decimal('Infinity'), Decimal('1')) + Decimal('Infinity') + >>> c.power(Decimal('-Infinity'), Decimal('-1')) + Decimal('-0') + >>> c.power(Decimal('-Infinity'), Decimal('0')) + Decimal('1') + >>> c.power(Decimal('-Infinity'), Decimal('1')) + Decimal('-Infinity') + >>> c.power(Decimal('-Infinity'), Decimal('2')) + Decimal('Infinity') + >>> c.power(Decimal('0'), Decimal('0')) + Decimal('NaN') + + >>> c.power(Decimal('3'), Decimal('7'), Decimal('16')) + Decimal('11') + >>> c.power(Decimal('-3'), Decimal('7'), Decimal('16')) + Decimal('-11') + >>> c.power(Decimal('-3'), Decimal('8'), Decimal('16')) + Decimal('1') + >>> c.power(Decimal('3'), Decimal('7'), Decimal('-16')) + Decimal('11') + >>> c.power(Decimal('23E12345'), Decimal('67E189'), Decimal('123456789')) + Decimal('11729830') + >>> c.power(Decimal('-0'), Decimal('17'), Decimal('1729')) + Decimal('-0') + >>> c.power(Decimal('-23'), Decimal('0'), Decimal('65537')) + Decimal('1') + >>> ExtendedContext.power(7, 7) + Decimal('823543') + >>> ExtendedContext.power(Decimal(7), 7) + Decimal('823543') + >>> ExtendedContext.power(7, Decimal(7), 2) + Decimal('1') + 'b'Returns a value equal to 'a' (rounded), having the exponent of 'b'. + + The coefficient of the result is derived from that of the left-hand + operand. It may be rounded using the current rounding setting (if the + exponent is being increased), multiplied by a positive power of ten (if + the exponent is being decreased), or is unchanged (if the exponent is + already equal to that of the right-hand operand). + + Unlike other operations, if the length of the coefficient after the + quantize operation would be greater than precision then an Invalid + operation condition is raised. This guarantees that, unless there is + an error condition, the exponent of the result of a quantize is always + equal to that of the right-hand operand. + + Also unlike other operations, quantize will never raise Underflow, even + if the result is subnormal and inexact. 
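+ As a small sketch of the three-argument power form described above (all
+ operands integral, result exact), it agrees with Python's built-in modular pow:
+
+     from decimal import Decimal, ExtendedContext
+
+     # (3**7) % 16 == 2187 % 16 == 11
+     print(ExtendedContext.power(Decimal('3'), Decimal('7'), Decimal('16')))   # 11
+     print(pow(3, 7, 16))                                                      # 11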
+ + >>> ExtendedContext.quantize(Decimal('2.17'), Decimal('0.001')) + Decimal('2.170') + >>> ExtendedContext.quantize(Decimal('2.17'), Decimal('0.01')) + Decimal('2.17') + >>> ExtendedContext.quantize(Decimal('2.17'), Decimal('0.1')) + Decimal('2.2') + >>> ExtendedContext.quantize(Decimal('2.17'), Decimal('1e+0')) + Decimal('2') + >>> ExtendedContext.quantize(Decimal('2.17'), Decimal('1e+1')) + Decimal('0E+1') + >>> ExtendedContext.quantize(Decimal('-Inf'), Decimal('Infinity')) + Decimal('-Infinity') + >>> ExtendedContext.quantize(Decimal('2'), Decimal('Infinity')) + Decimal('NaN') + >>> ExtendedContext.quantize(Decimal('-0.1'), Decimal('1')) + Decimal('-0') + >>> ExtendedContext.quantize(Decimal('-0'), Decimal('1e+5')) + Decimal('-0E+5') + >>> ExtendedContext.quantize(Decimal('+35236450.6'), Decimal('1e-2')) + Decimal('NaN') + >>> ExtendedContext.quantize(Decimal('-35236450.6'), Decimal('1e-2')) + Decimal('NaN') + >>> ExtendedContext.quantize(Decimal('217'), Decimal('1e-1')) + Decimal('217.0') + >>> ExtendedContext.quantize(Decimal('217'), Decimal('1e-0')) + Decimal('217') + >>> ExtendedContext.quantize(Decimal('217'), Decimal('1e+1')) + Decimal('2.2E+2') + >>> ExtendedContext.quantize(Decimal('217'), Decimal('1e+2')) + Decimal('2E+2') + >>> ExtendedContext.quantize(1, 2) + Decimal('1') + >>> ExtendedContext.quantize(Decimal(1), 2) + Decimal('1') + >>> ExtendedContext.quantize(1, Decimal(2)) + Decimal('1') + 'u'Returns a value equal to 'a' (rounded), having the exponent of 'b'. + + The coefficient of the result is derived from that of the left-hand + operand. It may be rounded using the current rounding setting (if the + exponent is being increased), multiplied by a positive power of ten (if + the exponent is being decreased), or is unchanged (if the exponent is + already equal to that of the right-hand operand). + + Unlike other operations, if the length of the coefficient after the + quantize operation would be greater than precision then an Invalid + operation condition is raised. This guarantees that, unless there is + an error condition, the exponent of the result of a quantize is always + equal to that of the right-hand operand. + + Also unlike other operations, quantize will never raise Underflow, even + if the result is subnormal and inexact. 
+ + >>> ExtendedContext.quantize(Decimal('2.17'), Decimal('0.001')) + Decimal('2.170') + >>> ExtendedContext.quantize(Decimal('2.17'), Decimal('0.01')) + Decimal('2.17') + >>> ExtendedContext.quantize(Decimal('2.17'), Decimal('0.1')) + Decimal('2.2') + >>> ExtendedContext.quantize(Decimal('2.17'), Decimal('1e+0')) + Decimal('2') + >>> ExtendedContext.quantize(Decimal('2.17'), Decimal('1e+1')) + Decimal('0E+1') + >>> ExtendedContext.quantize(Decimal('-Inf'), Decimal('Infinity')) + Decimal('-Infinity') + >>> ExtendedContext.quantize(Decimal('2'), Decimal('Infinity')) + Decimal('NaN') + >>> ExtendedContext.quantize(Decimal('-0.1'), Decimal('1')) + Decimal('-0') + >>> ExtendedContext.quantize(Decimal('-0'), Decimal('1e+5')) + Decimal('-0E+5') + >>> ExtendedContext.quantize(Decimal('+35236450.6'), Decimal('1e-2')) + Decimal('NaN') + >>> ExtendedContext.quantize(Decimal('-35236450.6'), Decimal('1e-2')) + Decimal('NaN') + >>> ExtendedContext.quantize(Decimal('217'), Decimal('1e-1')) + Decimal('217.0') + >>> ExtendedContext.quantize(Decimal('217'), Decimal('1e-0')) + Decimal('217') + >>> ExtendedContext.quantize(Decimal('217'), Decimal('1e+1')) + Decimal('2.2E+2') + >>> ExtendedContext.quantize(Decimal('217'), Decimal('1e+2')) + Decimal('2E+2') + >>> ExtendedContext.quantize(1, 2) + Decimal('1') + >>> ExtendedContext.quantize(Decimal(1), 2) + Decimal('1') + >>> ExtendedContext.quantize(1, Decimal(2)) + Decimal('1') + 'b'Just returns 10, as this is Decimal, :) + + >>> ExtendedContext.radix() + Decimal('10') + 'u'Just returns 10, as this is Decimal, :) + + >>> ExtendedContext.radix() + Decimal('10') + 'b'Returns the remainder from integer division. + + The result is the residue of the dividend after the operation of + calculating integer division as described for divide-integer, rounded + to precision digits if necessary. The sign of the result, if + non-zero, is the same as that of the original dividend. + + This operation will fail under the same conditions as integer division + (that is, if integer division on the same two operands would fail, the + remainder cannot be calculated). + + >>> ExtendedContext.remainder(Decimal('2.1'), Decimal('3')) + Decimal('2.1') + >>> ExtendedContext.remainder(Decimal('10'), Decimal('3')) + Decimal('1') + >>> ExtendedContext.remainder(Decimal('-10'), Decimal('3')) + Decimal('-1') + >>> ExtendedContext.remainder(Decimal('10.2'), Decimal('1')) + Decimal('0.2') + >>> ExtendedContext.remainder(Decimal('10'), Decimal('0.3')) + Decimal('0.1') + >>> ExtendedContext.remainder(Decimal('3.6'), Decimal('1.3')) + Decimal('1.0') + >>> ExtendedContext.remainder(22, 6) + Decimal('4') + >>> ExtendedContext.remainder(Decimal(22), 6) + Decimal('4') + >>> ExtendedContext.remainder(22, Decimal(6)) + Decimal('4') + 'u'Returns the remainder from integer division. + + The result is the residue of the dividend after the operation of + calculating integer division as described for divide-integer, rounded + to precision digits if necessary. The sign of the result, if + non-zero, is the same as that of the original dividend. + + This operation will fail under the same conditions as integer division + (that is, if integer division on the same two operands would fail, the + remainder cannot be calculated). 
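+ A common application of the quantize operation above, sketched here for
+ completeness (the two-place exponent and ROUND_HALF_UP rounding are
+ illustrative choices, not anything required by the doctests): fixing a
+ monetary amount to a given exponent.
+
+     from decimal import Decimal, ROUND_HALF_UP
+
+     amount = Decimal('2.665')
+     cents = Decimal('0.01')
+     print(amount.quantize(cents))                           # 2.66 -- default round-half-even
+     print(amount.quantize(cents, rounding=ROUND_HALF_UP))   # 2.67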
+ + >>> ExtendedContext.remainder(Decimal('2.1'), Decimal('3')) + Decimal('2.1') + >>> ExtendedContext.remainder(Decimal('10'), Decimal('3')) + Decimal('1') + >>> ExtendedContext.remainder(Decimal('-10'), Decimal('3')) + Decimal('-1') + >>> ExtendedContext.remainder(Decimal('10.2'), Decimal('1')) + Decimal('0.2') + >>> ExtendedContext.remainder(Decimal('10'), Decimal('0.3')) + Decimal('0.1') + >>> ExtendedContext.remainder(Decimal('3.6'), Decimal('1.3')) + Decimal('1.0') + >>> ExtendedContext.remainder(22, 6) + Decimal('4') + >>> ExtendedContext.remainder(Decimal(22), 6) + Decimal('4') + >>> ExtendedContext.remainder(22, Decimal(6)) + Decimal('4') + 'b'Returns to be "a - b * n", where n is the integer nearest the exact + value of "x / b" (if two integers are equally near then the even one + is chosen). If the result is equal to 0 then its sign will be the + sign of a. + + This operation will fail under the same conditions as integer division + (that is, if integer division on the same two operands would fail, the + remainder cannot be calculated). + + >>> ExtendedContext.remainder_near(Decimal('2.1'), Decimal('3')) + Decimal('-0.9') + >>> ExtendedContext.remainder_near(Decimal('10'), Decimal('6')) + Decimal('-2') + >>> ExtendedContext.remainder_near(Decimal('10'), Decimal('3')) + Decimal('1') + >>> ExtendedContext.remainder_near(Decimal('-10'), Decimal('3')) + Decimal('-1') + >>> ExtendedContext.remainder_near(Decimal('10.2'), Decimal('1')) + Decimal('0.2') + >>> ExtendedContext.remainder_near(Decimal('10'), Decimal('0.3')) + Decimal('0.1') + >>> ExtendedContext.remainder_near(Decimal('3.6'), Decimal('1.3')) + Decimal('-0.3') + >>> ExtendedContext.remainder_near(3, 11) + Decimal('3') + >>> ExtendedContext.remainder_near(Decimal(3), 11) + Decimal('3') + >>> ExtendedContext.remainder_near(3, Decimal(11)) + Decimal('3') + 'u'Returns to be "a - b * n", where n is the integer nearest the exact + value of "x / b" (if two integers are equally near then the even one + is chosen). If the result is equal to 0 then its sign will be the + sign of a. + + This operation will fail under the same conditions as integer division + (that is, if integer division on the same two operands would fail, the + remainder cannot be calculated). + + >>> ExtendedContext.remainder_near(Decimal('2.1'), Decimal('3')) + Decimal('-0.9') + >>> ExtendedContext.remainder_near(Decimal('10'), Decimal('6')) + Decimal('-2') + >>> ExtendedContext.remainder_near(Decimal('10'), Decimal('3')) + Decimal('1') + >>> ExtendedContext.remainder_near(Decimal('-10'), Decimal('3')) + Decimal('-1') + >>> ExtendedContext.remainder_near(Decimal('10.2'), Decimal('1')) + Decimal('0.2') + >>> ExtendedContext.remainder_near(Decimal('10'), Decimal('0.3')) + Decimal('0.1') + >>> ExtendedContext.remainder_near(Decimal('3.6'), Decimal('1.3')) + Decimal('-0.3') + >>> ExtendedContext.remainder_near(3, 11) + Decimal('3') + >>> ExtendedContext.remainder_near(Decimal(3), 11) + Decimal('3') + >>> ExtendedContext.remainder_near(3, Decimal(11)) + Decimal('3') + 'b'Returns a rotated copy of a, b times. + + The coefficient of the result is a rotated copy of the digits in + the coefficient of the first operand. The number of places of + rotation is taken from the absolute value of the second operand, + with the rotation being to the left if the second operand is + positive or to the right otherwise. 
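+ A quick sketch contrasting remainder with remainder_near on the same operands
+ (both values appear in the doctests above):
+
+     from decimal import Decimal, ExtendedContext
+
+     a, b = Decimal('10'), Decimal('6')
+     print(ExtendedContext.remainder(a, b))        # 4:  10 = 1*6 + 4, sign follows the dividend
+     print(ExtendedContext.remainder_near(a, b))   # -2: nearest multiple of 6 is 12, so 10 - 12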
+ + >>> ExtendedContext.rotate(Decimal('34'), Decimal('8')) + Decimal('400000003') + >>> ExtendedContext.rotate(Decimal('12'), Decimal('9')) + Decimal('12') + >>> ExtendedContext.rotate(Decimal('123456789'), Decimal('-2')) + Decimal('891234567') + >>> ExtendedContext.rotate(Decimal('123456789'), Decimal('0')) + Decimal('123456789') + >>> ExtendedContext.rotate(Decimal('123456789'), Decimal('+2')) + Decimal('345678912') + >>> ExtendedContext.rotate(1333333, 1) + Decimal('13333330') + >>> ExtendedContext.rotate(Decimal(1333333), 1) + Decimal('13333330') + >>> ExtendedContext.rotate(1333333, Decimal(1)) + Decimal('13333330') + 'u'Returns a rotated copy of a, b times. + + The coefficient of the result is a rotated copy of the digits in + the coefficient of the first operand. The number of places of + rotation is taken from the absolute value of the second operand, + with the rotation being to the left if the second operand is + positive or to the right otherwise. + + >>> ExtendedContext.rotate(Decimal('34'), Decimal('8')) + Decimal('400000003') + >>> ExtendedContext.rotate(Decimal('12'), Decimal('9')) + Decimal('12') + >>> ExtendedContext.rotate(Decimal('123456789'), Decimal('-2')) + Decimal('891234567') + >>> ExtendedContext.rotate(Decimal('123456789'), Decimal('0')) + Decimal('123456789') + >>> ExtendedContext.rotate(Decimal('123456789'), Decimal('+2')) + Decimal('345678912') + >>> ExtendedContext.rotate(1333333, 1) + Decimal('13333330') + >>> ExtendedContext.rotate(Decimal(1333333), 1) + Decimal('13333330') + >>> ExtendedContext.rotate(1333333, Decimal(1)) + Decimal('13333330') + 'b'Returns True if the two operands have the same exponent. + + The result is never affected by either the sign or the coefficient of + either operand. + + >>> ExtendedContext.same_quantum(Decimal('2.17'), Decimal('0.001')) + False + >>> ExtendedContext.same_quantum(Decimal('2.17'), Decimal('0.01')) + True + >>> ExtendedContext.same_quantum(Decimal('2.17'), Decimal('1')) + False + >>> ExtendedContext.same_quantum(Decimal('Inf'), Decimal('-Inf')) + True + >>> ExtendedContext.same_quantum(10000, -1) + True + >>> ExtendedContext.same_quantum(Decimal(10000), -1) + True + >>> ExtendedContext.same_quantum(10000, Decimal(-1)) + True + 'u'Returns True if the two operands have the same exponent. + + The result is never affected by either the sign or the coefficient of + either operand. + + >>> ExtendedContext.same_quantum(Decimal('2.17'), Decimal('0.001')) + False + >>> ExtendedContext.same_quantum(Decimal('2.17'), Decimal('0.01')) + True + >>> ExtendedContext.same_quantum(Decimal('2.17'), Decimal('1')) + False + >>> ExtendedContext.same_quantum(Decimal('Inf'), Decimal('-Inf')) + True + >>> ExtendedContext.same_quantum(10000, -1) + True + >>> ExtendedContext.same_quantum(Decimal(10000), -1) + True + >>> ExtendedContext.same_quantum(10000, Decimal(-1)) + True + 'b'Returns the first operand after adding the second value its exp. + + >>> ExtendedContext.scaleb(Decimal('7.50'), Decimal('-2')) + Decimal('0.0750') + >>> ExtendedContext.scaleb(Decimal('7.50'), Decimal('0')) + Decimal('7.50') + >>> ExtendedContext.scaleb(Decimal('7.50'), Decimal('3')) + Decimal('7.50E+3') + >>> ExtendedContext.scaleb(1, 4) + Decimal('1E+4') + >>> ExtendedContext.scaleb(Decimal(1), 4) + Decimal('1E+4') + >>> ExtendedContext.scaleb(1, Decimal(4)) + Decimal('1E+4') + 'u'Returns the first operand after adding the second value its exp. 
+ + >>> ExtendedContext.scaleb(Decimal('7.50'), Decimal('-2')) + Decimal('0.0750') + >>> ExtendedContext.scaleb(Decimal('7.50'), Decimal('0')) + Decimal('7.50') + >>> ExtendedContext.scaleb(Decimal('7.50'), Decimal('3')) + Decimal('7.50E+3') + >>> ExtendedContext.scaleb(1, 4) + Decimal('1E+4') + >>> ExtendedContext.scaleb(Decimal(1), 4) + Decimal('1E+4') + >>> ExtendedContext.scaleb(1, Decimal(4)) + Decimal('1E+4') + 'b'Returns a shifted copy of a, b times. + + The coefficient of the result is a shifted copy of the digits + in the coefficient of the first operand. The number of places + to shift is taken from the absolute value of the second operand, + with the shift being to the left if the second operand is + positive or to the right otherwise. Digits shifted into the + coefficient are zeros. + + >>> ExtendedContext.shift(Decimal('34'), Decimal('8')) + Decimal('400000000') + >>> ExtendedContext.shift(Decimal('12'), Decimal('9')) + Decimal('0') + >>> ExtendedContext.shift(Decimal('123456789'), Decimal('-2')) + Decimal('1234567') + >>> ExtendedContext.shift(Decimal('123456789'), Decimal('0')) + Decimal('123456789') + >>> ExtendedContext.shift(Decimal('123456789'), Decimal('+2')) + Decimal('345678900') + >>> ExtendedContext.shift(88888888, 2) + Decimal('888888800') + >>> ExtendedContext.shift(Decimal(88888888), 2) + Decimal('888888800') + >>> ExtendedContext.shift(88888888, Decimal(2)) + Decimal('888888800') + 'u'Returns a shifted copy of a, b times. + + The coefficient of the result is a shifted copy of the digits + in the coefficient of the first operand. The number of places + to shift is taken from the absolute value of the second operand, + with the shift being to the left if the second operand is + positive or to the right otherwise. Digits shifted into the + coefficient are zeros. + + >>> ExtendedContext.shift(Decimal('34'), Decimal('8')) + Decimal('400000000') + >>> ExtendedContext.shift(Decimal('12'), Decimal('9')) + Decimal('0') + >>> ExtendedContext.shift(Decimal('123456789'), Decimal('-2')) + Decimal('1234567') + >>> ExtendedContext.shift(Decimal('123456789'), Decimal('0')) + Decimal('123456789') + >>> ExtendedContext.shift(Decimal('123456789'), Decimal('+2')) + Decimal('345678900') + >>> ExtendedContext.shift(88888888, 2) + Decimal('888888800') + >>> ExtendedContext.shift(Decimal(88888888), 2) + Decimal('888888800') + >>> ExtendedContext.shift(88888888, Decimal(2)) + Decimal('888888800') + 'b'Square root of a non-negative number to context precision. + + If the result must be inexact, it is rounded using the round-half-even + algorithm. + + >>> ExtendedContext.sqrt(Decimal('0')) + Decimal('0') + >>> ExtendedContext.sqrt(Decimal('-0')) + Decimal('-0') + >>> ExtendedContext.sqrt(Decimal('0.39')) + Decimal('0.624499800') + >>> ExtendedContext.sqrt(Decimal('100')) + Decimal('10') + >>> ExtendedContext.sqrt(Decimal('1')) + Decimal('1') + >>> ExtendedContext.sqrt(Decimal('1.0')) + Decimal('1.0') + >>> ExtendedContext.sqrt(Decimal('1.00')) + Decimal('1.0') + >>> ExtendedContext.sqrt(Decimal('7')) + Decimal('2.64575131') + >>> ExtendedContext.sqrt(Decimal('10')) + Decimal('3.16227766') + >>> ExtendedContext.sqrt(2) + Decimal('1.41421356') + >>> ExtendedContext.prec + 9 + 'u'Square root of a non-negative number to context precision. + + If the result must be inexact, it is rounded using the round-half-even + algorithm. 
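+ A short sketch contrasting the shift and rotate operations shown above: both
+ move coefficient digits by the same amount, but shift fills with zeros while
+ rotate wraps digits around within the context precision:
+
+     from decimal import Decimal, ExtendedContext
+
+     x = Decimal('34')
+     print(ExtendedContext.shift(x, Decimal('8')))    # 400000000
+     print(ExtendedContext.rotate(x, Decimal('8')))   # 400000003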
+ + >>> ExtendedContext.sqrt(Decimal('0')) + Decimal('0') + >>> ExtendedContext.sqrt(Decimal('-0')) + Decimal('-0') + >>> ExtendedContext.sqrt(Decimal('0.39')) + Decimal('0.624499800') + >>> ExtendedContext.sqrt(Decimal('100')) + Decimal('10') + >>> ExtendedContext.sqrt(Decimal('1')) + Decimal('1') + >>> ExtendedContext.sqrt(Decimal('1.0')) + Decimal('1.0') + >>> ExtendedContext.sqrt(Decimal('1.00')) + Decimal('1.0') + >>> ExtendedContext.sqrt(Decimal('7')) + Decimal('2.64575131') + >>> ExtendedContext.sqrt(Decimal('10')) + Decimal('3.16227766') + >>> ExtendedContext.sqrt(2) + Decimal('1.41421356') + >>> ExtendedContext.prec + 9 + 'b'Return the difference between the two operands. + + >>> ExtendedContext.subtract(Decimal('1.3'), Decimal('1.07')) + Decimal('0.23') + >>> ExtendedContext.subtract(Decimal('1.3'), Decimal('1.30')) + Decimal('0.00') + >>> ExtendedContext.subtract(Decimal('1.3'), Decimal('2.07')) + Decimal('-0.77') + >>> ExtendedContext.subtract(8, 5) + Decimal('3') + >>> ExtendedContext.subtract(Decimal(8), 5) + Decimal('3') + >>> ExtendedContext.subtract(8, Decimal(5)) + Decimal('3') + 'u'Return the difference between the two operands. + + >>> ExtendedContext.subtract(Decimal('1.3'), Decimal('1.07')) + Decimal('0.23') + >>> ExtendedContext.subtract(Decimal('1.3'), Decimal('1.30')) + Decimal('0.00') + >>> ExtendedContext.subtract(Decimal('1.3'), Decimal('2.07')) + Decimal('-0.77') + >>> ExtendedContext.subtract(8, 5) + Decimal('3') + >>> ExtendedContext.subtract(Decimal(8), 5) + Decimal('3') + >>> ExtendedContext.subtract(8, Decimal(5)) + Decimal('3') + 'b'Convert to a string, using engineering notation if an exponent is needed. + + Engineering notation has an exponent which is a multiple of 3. This + can leave up to 3 digits to the left of the decimal place and may + require the addition of either one or two trailing zeros. + + The operation is not affected by the context. + + >>> ExtendedContext.to_eng_string(Decimal('123E+1')) + '1.23E+3' + >>> ExtendedContext.to_eng_string(Decimal('123E+3')) + '123E+3' + >>> ExtendedContext.to_eng_string(Decimal('123E-10')) + '12.3E-9' + >>> ExtendedContext.to_eng_string(Decimal('-123E-12')) + '-123E-12' + >>> ExtendedContext.to_eng_string(Decimal('7E-7')) + '700E-9' + >>> ExtendedContext.to_eng_string(Decimal('7E+1')) + '70' + >>> ExtendedContext.to_eng_string(Decimal('0E+1')) + '0.00E+3' + + 'u'Convert to a string, using engineering notation if an exponent is needed. + + Engineering notation has an exponent which is a multiple of 3. This + can leave up to 3 digits to the left of the decimal place and may + require the addition of either one or two trailing zeros. + + The operation is not affected by the context. + + >>> ExtendedContext.to_eng_string(Decimal('123E+1')) + '1.23E+3' + >>> ExtendedContext.to_eng_string(Decimal('123E+3')) + '123E+3' + >>> ExtendedContext.to_eng_string(Decimal('123E-10')) + '12.3E-9' + >>> ExtendedContext.to_eng_string(Decimal('-123E-12')) + '-123E-12' + >>> ExtendedContext.to_eng_string(Decimal('7E-7')) + '700E-9' + >>> ExtendedContext.to_eng_string(Decimal('7E+1')) + '70' + >>> ExtendedContext.to_eng_string(Decimal('0E+1')) + '0.00E+3' + + 'b'Converts a number to a string, using scientific notation. + + The operation is not affected by the context. + 'u'Converts a number to a string, using scientific notation. + + The operation is not affected by the context. + 'b'Rounds to an integer. 
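+ A brief sketch contrasting engineering notation with the ordinary string
+ form, using a value from the to_eng_string examples above:
+
+     from decimal import Decimal
+
+     x = Decimal('123E-10')
+     print(str(x))               # 1.23E-8  -- default (scientific) notation
+     print(x.to_eng_string())    # 12.3E-9  -- exponent forced to a multiple of 3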
+ + When the operand has a negative exponent, the result is the same + as using the quantize() operation using the given operand as the + left-hand-operand, 1E+0 as the right-hand-operand, and the precision + of the operand as the precision setting; Inexact and Rounded flags + are allowed in this operation. The rounding mode is taken from the + context. + + >>> ExtendedContext.to_integral_exact(Decimal('2.1')) + Decimal('2') + >>> ExtendedContext.to_integral_exact(Decimal('100')) + Decimal('100') + >>> ExtendedContext.to_integral_exact(Decimal('100.0')) + Decimal('100') + >>> ExtendedContext.to_integral_exact(Decimal('101.5')) + Decimal('102') + >>> ExtendedContext.to_integral_exact(Decimal('-101.5')) + Decimal('-102') + >>> ExtendedContext.to_integral_exact(Decimal('10E+5')) + Decimal('1.0E+6') + >>> ExtendedContext.to_integral_exact(Decimal('7.89E+77')) + Decimal('7.89E+77') + >>> ExtendedContext.to_integral_exact(Decimal('-Inf')) + Decimal('-Infinity') + 'u'Rounds to an integer. + + When the operand has a negative exponent, the result is the same + as using the quantize() operation using the given operand as the + left-hand-operand, 1E+0 as the right-hand-operand, and the precision + of the operand as the precision setting; Inexact and Rounded flags + are allowed in this operation. The rounding mode is taken from the + context. + + >>> ExtendedContext.to_integral_exact(Decimal('2.1')) + Decimal('2') + >>> ExtendedContext.to_integral_exact(Decimal('100')) + Decimal('100') + >>> ExtendedContext.to_integral_exact(Decimal('100.0')) + Decimal('100') + >>> ExtendedContext.to_integral_exact(Decimal('101.5')) + Decimal('102') + >>> ExtendedContext.to_integral_exact(Decimal('-101.5')) + Decimal('-102') + >>> ExtendedContext.to_integral_exact(Decimal('10E+5')) + Decimal('1.0E+6') + >>> ExtendedContext.to_integral_exact(Decimal('7.89E+77')) + Decimal('7.89E+77') + >>> ExtendedContext.to_integral_exact(Decimal('-Inf')) + Decimal('-Infinity') + 'b'Rounds to an integer. + + When the operand has a negative exponent, the result is the same + as using the quantize() operation using the given operand as the + left-hand-operand, 1E+0 as the right-hand-operand, and the precision + of the operand as the precision setting, except that no flags will + be set. The rounding mode is taken from the context. + + >>> ExtendedContext.to_integral_value(Decimal('2.1')) + Decimal('2') + >>> ExtendedContext.to_integral_value(Decimal('100')) + Decimal('100') + >>> ExtendedContext.to_integral_value(Decimal('100.0')) + Decimal('100') + >>> ExtendedContext.to_integral_value(Decimal('101.5')) + Decimal('102') + >>> ExtendedContext.to_integral_value(Decimal('-101.5')) + Decimal('-102') + >>> ExtendedContext.to_integral_value(Decimal('10E+5')) + Decimal('1.0E+6') + >>> ExtendedContext.to_integral_value(Decimal('7.89E+77')) + Decimal('7.89E+77') + >>> ExtendedContext.to_integral_value(Decimal('-Inf')) + Decimal('-Infinity') + 'u'Rounds to an integer. + + When the operand has a negative exponent, the result is the same + as using the quantize() operation using the given operand as the + left-hand-operand, 1E+0 as the right-hand-operand, and the precision + of the operand as the precision setting, except that no flags will + be set. The rounding mode is taken from the context. 
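+ A minimal sketch of the point just made, that the rounding mode is taken from
+ the context (ROUND_FLOOR here is an arbitrary illustrative choice):
+
+     from decimal import Decimal, Context, ROUND_FLOOR
+
+     c = Context(prec=9, rounding=ROUND_FLOOR)
+     print(c.to_integral_value(Decimal('101.5')))    # 101
+     print(c.to_integral_value(Decimal('-101.5')))   # -102
+     print(Decimal('101.5').to_integral_value())     # 102 under the default half-even rounding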
+ + >>> ExtendedContext.to_integral_value(Decimal('2.1')) + Decimal('2') + >>> ExtendedContext.to_integral_value(Decimal('100')) + Decimal('100') + >>> ExtendedContext.to_integral_value(Decimal('100.0')) + Decimal('100') + >>> ExtendedContext.to_integral_value(Decimal('101.5')) + Decimal('102') + >>> ExtendedContext.to_integral_value(Decimal('-101.5')) + Decimal('-102') + >>> ExtendedContext.to_integral_value(Decimal('10E+5')) + Decimal('1.0E+6') + >>> ExtendedContext.to_integral_value(Decimal('7.89E+77')) + Decimal('7.89E+77') + >>> ExtendedContext.to_integral_value(Decimal('-Inf')) + Decimal('-Infinity') + 'b'(%r, %r, %r)'u'(%r, %r, %r)'b'Normalizes op1, op2 to have the same exp and length of coefficient. + + Done during addition. + 'u'Normalizes op1, op2 to have the same exp and length of coefficient. + + Done during addition. + 'b' Given integers n and e, return n * 10**e if it's an integer, else None. + + The computation is designed to avoid computing large powers of 10 + unnecessarily. + + >>> _decimal_lshift_exact(3, 4) + 30000 + >>> _decimal_lshift_exact(300, -999999999) # returns None + + 'u' Given integers n and e, return n * 10**e if it's an integer, else None. + + The computation is designed to avoid computing large powers of 10 + unnecessarily. + + >>> _decimal_lshift_exact(3, 4) + 30000 + >>> _decimal_lshift_exact(300, -999999999) # returns None + + 'b'Closest integer to the square root of the positive integer n. a is + an initial approximation to the square root. Any positive integer + will do for a, but the closer a is to the square root of n the + faster convergence will be. + + 'u'Closest integer to the square root of the positive integer n. a is + an initial approximation to the square root. Any positive integer + will do for a, but the closer a is to the square root of n the + faster convergence will be. + + 'b'Both arguments to _sqrt_nearest should be positive.'u'Both arguments to _sqrt_nearest should be positive.'b'Given an integer x and a nonnegative integer shift, return closest + integer to x / 2**shift; use round-to-even in case of a tie. + + 'u'Given an integer x and a nonnegative integer shift, return closest + integer to x / 2**shift; use round-to-even in case of a tie. + + 'b'Closest integer to a/b, a and b positive integers; rounds to even + in the case of a tie. + + 'u'Closest integer to a/b, a and b positive integers; rounds to even + in the case of a tie. + + 'b'Integer approximation to M*log(x/M), with absolute error boundable + in terms only of x/M. + + Given positive integers x and M, return an integer approximation to + M * log(x/M). For L = 8 and 0.1 <= x/M <= 10 the difference + between the approximation and the exact result is at most 22. For + L = 8 and 1.0 <= x/M <= 10.0 the difference is at most 15. In + both cases these are upper bounds on the error; it will usually be + much smaller.'u'Integer approximation to M*log(x/M), with absolute error boundable + in terms only of x/M. + + Given positive integers x and M, return an integer approximation to + M * log(x/M). For L = 8 and 0.1 <= x/M <= 10 the difference + between the approximation and the exact result is at most 22. For + L = 8 and 1.0 <= x/M <= 10.0 the difference is at most 15. In + both cases these are upper bounds on the error; it will usually be + much smaller.'b'Given integers c, e and p with c > 0, p >= 0, compute an integer + approximation to 10**p * log10(c*10**e), with an absolute error of + at most 1. 
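+ A standalone sketch of the 'closest integer to a/b, ties to even' behaviour
+ described for the internal division helpers above (div_nearest here is an
+ illustrative re-implementation, not the module's private function):
+
+     def div_nearest(a, b):
+         """Closest integer to a/b, b > 0; ties go to the nearest even integer."""
+         q, r = divmod(a, b)
+         # Round up when the remainder exceeds half of b, or equals half and q is odd.
+         return q + (2 * r + (q & 1) > b)
+
+     print(div_nearest(7, 2))    # 4  (3.5 rounds to the even neighbour)
+     print(div_nearest(5, 2))    # 2  (2.5 rounds to the even neighbour)
+     print(div_nearest(11, 4))   # 3  (2.75 rounds to the nearest integer)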
Assumes that c*10**e is not exactly 1.'u'Given integers c, e and p with c > 0, p >= 0, compute an integer + approximation to 10**p * log10(c*10**e), with an absolute error of + at most 1. Assumes that c*10**e is not exactly 1.'b'Given integers c, e and p with c > 0, compute an integer + approximation to 10**p * log(c*10**e), with an absolute error of + at most 1. Assumes that c*10**e is not exactly 1.'u'Given integers c, e and p with c > 0, compute an integer + approximation to 10**p * log(c*10**e), with an absolute error of + at most 1. Assumes that c*10**e is not exactly 1.'b'Class to compute, store, and allow retrieval of, digits of the + constant log(10) = 2.302585.... This constant is needed by + Decimal.ln, Decimal.log10, Decimal.exp and Decimal.__pow__.'u'Class to compute, store, and allow retrieval of, digits of the + constant log(10) = 2.302585.... This constant is needed by + Decimal.ln, Decimal.log10, Decimal.exp and Decimal.__pow__.'b'23025850929940456840179914546843642076011014886'u'23025850929940456840179914546843642076011014886'b'Given an integer p >= 0, return floor(10**p)*log(10). + + For example, self.getdigits(3) returns 2302. + 'u'Given an integer p >= 0, return floor(10**p)*log(10). + + For example, self.getdigits(3) returns 2302. + 'b'p should be nonnegative'u'p should be nonnegative'b'Given integers x and M, M > 0, such that x/M is small in absolute + value, compute an integer approximation to M*exp(x/M). For 0 <= + x/M <= 2.4, the absolute error in the result is bounded by 60 (and + is usually much smaller).'u'Given integers x and M, M > 0, such that x/M is small in absolute + value, compute an integer approximation to M*exp(x/M). For 0 <= + x/M <= 2.4, the absolute error in the result is bounded by 60 (and + is usually much smaller).'b'Compute an approximation to exp(c*10**e), with p decimal places of + precision. + + Returns integers d, f such that: + + 10**(p-1) <= d <= 10**p, and + (d-1)*10**f < exp(c*10**e) < (d+1)*10**f + + In other words, d*10**f is an approximation to exp(c*10**e) with p + digits of precision, and with an error in d of at most 1. This is + almost, but not quite, the same as the error being < 1ulp: when d + = 10**(p-1) the error could be up to 10 ulp.'u'Compute an approximation to exp(c*10**e), with p decimal places of + precision. + + Returns integers d, f such that: + + 10**(p-1) <= d <= 10**p, and + (d-1)*10**f < exp(c*10**e) < (d+1)*10**f + + In other words, d*10**f is an approximation to exp(c*10**e) with p + digits of precision, and with an error in d of at most 1. This is + almost, but not quite, the same as the error being < 1ulp: when d + = 10**(p-1) the error could be up to 10 ulp.'b'Given integers xc, xe, yc and ye representing Decimals x = xc*10**xe and + y = yc*10**ye, compute x**y. Returns a pair of integers (c, e) such that: + + 10**(p-1) <= c <= 10**p, and + (c-1)*10**e < x**y < (c+1)*10**e + + in other words, c*10**e is an approximation to x**y with p digits + of precision, and with an error in c of at most 1. (This is + almost, but not quite, the same as the error being < 1ulp: when c + == 10**(p-1) we can only guarantee error < 10ulp.) + + We assume that: x is positive and not equal to 1, and y is nonzero. + 'u'Given integers xc, xe, yc and ye representing Decimals x = xc*10**xe and + y = yc*10**ye, compute x**y. 
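The _Log10Memoize, _dexp and _dpower machinery described above exists to support Decimal.ln, Decimal.log10, Decimal.exp and Decimal.__pow__. The same results are reachable through the public API; a brief illustration at 30 digits of context precision (outputs shown as comments are what the current CPython decimal module produces):

from decimal import Decimal, getcontext

getcontext().prec = 30
print(Decimal(10).ln())        # 2.30258509299404568401799145468 -- the memoized constant above
print(Decimal('2.5').exp())
print(Decimal(1000).log10())   # 3
print(Decimal(2) ** Decimal('0.5'))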
Returns a pair of integers (c, e) such that: + + 10**(p-1) <= c <= 10**p, and + (c-1)*10**e < x**y < (c+1)*10**e + + in other words, c*10**e is an approximation to x**y with p digits + of precision, and with an error in c of at most 1. (This is + almost, but not quite, the same as the error being < 1ulp: when c + == 10**(p-1) we can only guarantee error < 10ulp.) + + We assume that: x is positive and not equal to 1, and y is nonzero. + 'b'Compute a lower bound for 100*log10(c) for a positive integer c.'u'Compute a lower bound for 100*log10(c) for a positive integer c.'b'The argument to _log10_lb should be nonnegative.'u'The argument to _log10_lb should be nonnegative.'b'Convert other to Decimal. + + Verifies that it's ok to use in an implicit construction. + If allow_float is true, allow conversion from float; this + is used in the comparison methods (__eq__ and friends). + + 'u'Convert other to Decimal. + + Verifies that it's ok to use in an implicit construction. + If allow_float is true, allow conversion from float; this + is used in the comparison methods (__eq__ and friends). + + 'b'Given a Decimal instance self and a Python object other, return + a pair (s, o) of Decimal instances such that "s op o" is + equivalent to "self op other" for any of the 6 comparison + operators "op". + + 'u'Given a Decimal instance self and a Python object other, return + a pair (s, o) of Decimal instances such that "s op o" is + equivalent to "self op other" for any of the 6 comparison + operators "op". + + 'b' # A numeric string consists of: +# \s* + (?P[-+])? # an optional sign, followed by either... + ( + (?=\d|\.\d) # ...a number (with at least one digit) + (?P\d*) # having a (possibly empty) integer part + (\.(?P\d*))? # followed by an optional fractional part + (E(?P[-+]?\d+))? # followed by an optional exponent, or... + | + Inf(inity)? # ...an infinity, or... + | + (?Ps)? # ...an (optionally signaling) + NaN # NaN + (?P\d*) # with (possibly empty) diagnostic info. + ) +# \s* + \Z +'u' # A numeric string consists of: +# \s* + (?P[-+])? # an optional sign, followed by either... + ( + (?=\d|\.\d) # ...a number (with at least one digit) + (?P\d*) # having a (possibly empty) integer part + (\.(?P\d*))? # followed by an optional fractional part + (E(?P[-+]?\d+))? # followed by an optional exponent, or... + | + Inf(inity)? # ...an infinity, or... + | + (?Ps)? # ...an (optionally signaling) + NaN # NaN + (?P\d*) # with (possibly empty) diagnostic info. + ) +# \s* + \Z +'b'0*$'u'0*$'b'50*$'u'50*$'b'\A +(?: + (?P.)? + (?P[<>=^]) +)? +(?P[-+ ])? +(?P\#)? +(?P0)? +(?P(?!0)\d+)? +(?P,)? +(?:\.(?P0|(?!0)\d+))? +(?P[eEfFgGn%])? +\Z +'u'\A +(?: + (?P.)? + (?P[<>=^]) +)? +(?P[-+ ])? +(?P\#)? +(?P0)? +(?P(?!0)\d+)? +(?P,)? +(?:\.(?P0|(?!0)\d+))? +(?P[eEfFgGn%])? +\Z +'b'Parse and validate a format specifier. + + Turns a standard numeric format specifier into a dict, with the + following entries: + + fill: fill character to pad field to minimum width + align: alignment type, either '<', '>', '=' or '^' + sign: either '+', '-' or ' ' + minimumwidth: nonnegative integer giving minimum width + zeropad: boolean, indicating whether to pad with zeros + thousands_sep: string to use as thousands separator, or '' + grouping: grouping for thousands separators, in format + used by localeconv + decimal_point: string to use for decimal point + precision: nonnegative integer giving precision, or None + type: one of the characters 'eEfFgG%', or None + + 'u'Parse and validate a format specifier. 
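The two verbose regular expressions above appear to have lost their named groups in this dump (the angle-bracketed parts of (?P<name>...) were stripped). A working reconstruction of the format-specifier pattern is sketched below; the group names are an assumption recovered from the keys referenced nearby (fill, align, sign, alt, zeropad, minimumwidth, thousands_sep, precision, type):

import re

# Reconstruction of the stripped pattern above, with group names restored.
format_spec_re = re.compile(r"""\A
(?:
   (?P<fill>.)?
   (?P<align>[<>=^])
)?
(?P<sign>[-+ ])?
(?P<alt>\#)?
(?P<zeropad>0)?
(?P<minimumwidth>(?!0)\d+)?
(?P<thousands_sep>,)?
(?:\.(?P<precision>0|(?!0)\d+))?
(?P<type>[eEfFgGn%])?
\Z
""", re.VERBOSE | re.DOTALL)

m = format_spec_re.match('>+012,.3f')
print({k: v for k, v in m.groupdict().items() if v is not None})
# {'align': '>', 'sign': '+', 'zeropad': '0', 'minimumwidth': '12',
#  'thousands_sep': ',', 'precision': '3', 'type': 'f'}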
+ + Turns a standard numeric format specifier into a dict, with the + following entries: + + fill: fill character to pad field to minimum width + align: alignment type, either '<', '>', '=' or '^' + sign: either '+', '-' or ' ' + minimumwidth: nonnegative integer giving minimum width + zeropad: boolean, indicating whether to pad with zeros + thousands_sep: string to use as thousands separator, or '' + grouping: grouping for thousands separators, in format + used by localeconv + decimal_point: string to use for decimal point + precision: nonnegative integer giving precision, or None + type: one of the characters 'eEfFgG%', or None + + 'b'Invalid format specifier: 'u'Invalid format specifier: 'b'fill'u'fill'b'align'u'align'b'zeropad'u'zeropad'b'Fill character conflicts with '0' in format specifier: 'u'Fill character conflicts with '0' in format specifier: 'b'Alignment conflicts with '0' in format specifier: 'u'Alignment conflicts with '0' in format specifier: 'b'minimumwidth'u'minimumwidth'b'gGn'u'gGn'b'thousands_sep'u'thousands_sep'b'Explicit thousands separator conflicts with 'n' type in format specifier: 'u'Explicit thousands separator conflicts with 'n' type in format specifier: 'b'grouping'u'grouping'b'decimal_point'u'decimal_point'b'Given an unpadded, non-aligned numeric string 'body' and sign + string 'sign', add padding and alignment conforming to the given + format specifier dictionary 'spec' (as produced by + parse_format_specifier). + + 'u'Given an unpadded, non-aligned numeric string 'body' and sign + string 'sign', add padding and alignment conforming to the given + format specifier dictionary 'spec' (as produced by + parse_format_specifier). + + 'b'='u'='b'^'u'^'b'Unrecognised alignment field'u'Unrecognised alignment field'b'Convert a localeconv-style grouping into a (possibly infinite) + iterable of integers representing group lengths. + + 'u'Convert a localeconv-style grouping into a (possibly infinite) + iterable of integers representing group lengths. + + 'b'unrecognised format for grouping'u'unrecognised format for grouping'b'Insert thousands separators into a digit string. + + spec is a dictionary whose keys should include 'thousands_sep' and + 'grouping'; typically it's the result of parsing the format + specifier using _parse_format_specifier. + + The min_width keyword argument gives the minimum length of the + result, which will be padded on the left with zeros if necessary. + + If necessary, the zero padding adds an extra '0' on the left to + avoid a leading thousands separator. For example, inserting + commas every three digits in '123456', with min_width=8, gives + '0,123,456', even though that has length 9. + + 'u'Insert thousands separators into a digit string. + + spec is a dictionary whose keys should include 'thousands_sep' and + 'grouping'; typically it's the result of parsing the format + specifier using _parse_format_specifier. + + The min_width keyword argument gives the minimum length of the + result, which will be padded on the left with zeros if necessary. + + If necessary, the zero padding adds an extra '0' on the left to + avoid a leading thousands separator. For example, inserting + commas every three digits in '123456', with min_width=8, gives + '0,123,456', even though that has length 9. 
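The _insert_thousands_sep docstring above pins down one slightly surprising rule: zero padding may add an extra leading '0' so the result never begins with a separator, which is why '123456' with min_width=8 becomes '0,123,456'. A minimal sketch of that rule for a fixed group size of three (the real function takes its grouping from the parsed spec); insert_thousands_sep here is an illustrative reimplementation, not the original:

def insert_thousands_sep(digits, sep=',', group=3, min_width=1):
    # Build groups right to left, zero-padding until the joined result is at
    # least min_width characters long (separators count towards the width).
    groups = []
    while True:
        take = min(max(len(digits), min_width, 1), group)
        groups.append('0' * (take - len(digits)) + digits[-take:])
        digits = digits[:-take]
        min_width -= take
        if not digits and min_width <= 0:
            break
        min_width -= len(sep)
    return sep.join(reversed(groups))

print(insert_thousands_sep('123456', min_width=8))   # 0,123,456
print(insert_thousands_sep('1234567'))               # 1,234,567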
+ + 'b'group length should be positive'u'group length should be positive'b'Determine sign character.'u'Determine sign character.'b' +'u' +'b'Format a number, given the following data: + + is_negative: true if the number is negative, else false + intpart: string of digits that must appear before the decimal point + fracpart: string of digits that must come after the point + exp: exponent, as an integer + spec: dictionary resulting from parsing the format specifier + + This function uses the information in spec to: + insert separators (decimal separator and thousands separators) + format the sign + format the exponent + add trailing '%' for the '%' type + zero-pad if necessary + fill and align if necessary + 'u'Format a number, given the following data: + + is_negative: true if the number is negative, else false + intpart: string of digits that must appear before the decimal point + fracpart: string of digits that must come after the point + exp: exponent, as an integer + spec: dictionary resulting from parsing the format specifier + + This function uses the information in spec to: + insert separators (decimal separator and thousands separators) + format the sign + format the exponent + add trailing '%' for the '%' type + zero-pad if necessary + fill and align if necessary + 'b'alt'u'alt'b'{0}{1:+}'u'{0}{1:+}'b'Inf'u'Inf'b'-Inf'u'-Inf'u'_pydecimal'u'Exception raised by Queue.get(block=0)/get_nowait().'u'_queue'u'Empty.__weakref__'_queue.EmptyEmptyu'Simple, unbounded, reentrant FIFO queue.'emptyget_nowaitput_nowaitqsize_queue.SimpleQueueSimpleQueueu'C implementation of the Python queue module. +This module is an implementation detail, please do not use it directly.'u'/Users/pwntester/.pyenv/versions/3.8.13/lib/python3.8/lib-dynload/_queue.cpython-38-darwin.so'_queueu'Random() -> create a random number generator with its own internal state.'getrandbitsrandomseed_random.RandomRandomu'Module implements the Mersenne Twister random number generator.'u'/Users/pwntester/.pyenv/versions/3.8.13/lib/python3.8/lib-dynload/_random.cpython-38-darwin.so'u'_random'_randomu'/Users/pwntester/.pyenv/versions/3.8.13/lib/python3.8/lib-dynload/_scproxy.cpython-38-darwin.so'u'_scproxy'_get_proxies_get_proxy_settings_scproxyu'sha1.block_size'u'sha1.digest_size'u'sha1.name'_sha1.sha1SHA1Typeu'/Users/pwntester/.pyenv/versions/3.8.13/lib/python3.8/lib-dynload/_sha1.cpython-38-darwin.so'u'_sha1'sha1_sha1u'sha224.block_size'u'sha224.name'_sha256.sha224SHA224Typeu'sha256.block_size'u'sha256.name'_sha256.sha256SHA256Typeu'/Users/pwntester/.pyenv/versions/3.8.13/lib/python3.8/lib-dynload/_sha256.cpython-38-darwin.so'u'_sha256'sha224sha256_sha256u'/Users/pwntester/.pyenv/versions/3.8.13/lib/python3.8/lib-dynload/_sha3.cpython-38-darwin.so'u'_sha3'u'generic 64-bit optimized implementation (lane complementing, all rounds unrolled)'keccakoptu'sha3_224([data]) -> SHA3 object + +Return a new SHA3 hash object with a hashbit length of 28 bytes.'u'sha3_224._capacity_bits'_capacity_bitsu'sha3_224._rate_bits'_rate_bitsu'sha3_224._suffix'_suffixu'sha3_224.block_size'u'sha3_224.digest_size'u'sha3_224.name'_sha3.sha3_224sha3_224u'sha3_256([data]) -> SHA3 object + +Return a new SHA3 hash object with a hashbit length of 32 bytes.'u'sha3_256._capacity_bits'u'sha3_256._rate_bits'u'sha3_256._suffix'u'sha3_256.block_size'u'sha3_256.digest_size'u'sha3_256.name'_sha3.sha3_256sha3_256u'sha3_384([data]) -> SHA3 object + +Return a new SHA3 hash object with a hashbit length of 48 
bytes.'u'sha3_384._capacity_bits'u'sha3_384._rate_bits'u'sha3_384._suffix'u'sha3_384.block_size'u'sha3_384.digest_size'u'sha3_384.name'_sha3.sha3_384sha3_384u'sha3_512([data]) -> SHA3 object + +Return a new SHA3 hash object with a hashbit length of 64 bytes.'u'sha3_512._capacity_bits'u'sha3_512._rate_bits'u'sha3_512._suffix'u'sha3_512.block_size'u'sha3_512.digest_size'u'sha3_512.name'_sha3.sha3_512sha3_512u'shake_128([data]) -> SHAKE object + +Return a new SHAKE hash object.'u'shake_128._capacity_bits'u'shake_128._rate_bits'u'shake_128._suffix'u'shake_128.block_size'u'shake_128.digest_size'u'shake_128.name'_sha3.shake_128shake_128u'shake_256([data]) -> SHAKE object + +Return a new SHAKE hash object.'u'shake_256._capacity_bits'u'shake_256._rate_bits'u'shake_256._suffix'u'shake_256.block_size'u'shake_256.digest_size'u'shake_256.name'_sha3.shake_256shake_256_sha3u'sha384.block_size'u'sha384.name'_sha512.sha384SHA384Typeu'sha512.block_size'u'sha512.name'_sha512.sha512SHA512Typeu'/Users/pwntester/.pyenv/versions/3.8.13/lib/python3.8/lib-dynload/_sha512.cpython-38-darwin.so'u'_sha512'sha384sha512_sha512ITIMER_PROFITIMER_REALITIMER_VIRTUALu'ItimerError.__weakref__'signal.ItimerErrorItimerErrorNSIGSIGABRTSIGALRMSIGBUSSIGCHLDSIGCONTSIGEMTSIGFPESIGHUPSIGILLSIGINFOSIGINTSIGIOSIGIOTSIGPIPESIGPROFSIGQUITSIGSEGVSIGSYSSIGTERMSIGTRAPSIGTSTPSIGTTINSIGTTOUSIGURGSIGUSR1SIGUSR2SIGVTALRMSIGWINCHSIGXCPUSIGXFSZSIG_BLOCKSIG_DFLSIG_IGNSIG_SETMASKSIG_UNBLOCKu'This module provides mechanisms to use signal handlers in Python. + +Functions: + +alarm() -- cause SIGALRM after a specified time [Unix only] +setitimer() -- cause a signal (described below) after a specified + float time and the timer may restart then [Unix only] +getitimer() -- get current value of timer [Unix only] +signal() -- set the action for a given signal +getsignal() -- get the signal action for a given signal +pause() -- wait until a signal arrives [Unix only] +default_int_handler() -- default SIGINT handler + +signal constants: +SIG_DFL -- used to refer to the system default handler +SIG_IGN -- used to ignore the signal +NSIG -- number of defined signals +SIGINT, SIGTERM, etc. -- signal numbers + +itimer constants: +ITIMER_REAL -- decrements in real time, and delivers SIGALRM upon + expiration +ITIMER_VIRTUAL -- decrements only when the process is executing, + and delivers SIGVTALRM upon expiration +ITIMER_PROF -- decrements both when the process is executing and + when the system is executing on behalf of the process. + Coupled with ITIMER_VIRTUAL, this timer is usually + used to profile the time spent by the application + in user and kernel space. SIGPROF is delivered upon + expiration. 
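The itimer constants described above pair each timer with the signal it delivers (ITIMER_REAL with SIGALRM, ITIMER_VIRTUAL with SIGVTALRM, ITIMER_PROF with SIGPROF). A small Unix-only sketch of that behaviour; the handler name on_alarm is illustrative:

import signal
import time

def on_alarm(signum, frame):
    # Called when ITIMER_REAL expires and SIGALRM is delivered.
    print('SIGALRM received')

signal.signal(signal.SIGALRM, on_alarm)
signal.setitimer(signal.ITIMER_REAL, 0.25)   # fire once, 250 ms from now
time.sleep(0.5)                              # interrupted by the handler
signal.setitimer(signal.ITIMER_REAL, 0)      # disarm the timer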
+ + +*** IMPORTANT NOTICE *** +A signal handler function is called with two arguments: +the first is the signal number, the second is the interrupted stack frame.'alarmdefault_int_handlergetitimerpausepthread_killpthread_sigmaskraise_signalset_wakeup_fdsetitimersiginterruptsigpendingsigwaitstrsignal_signalAF_APPLETALKAF_DECnetAF_IPXAF_LINKAF_ROUTEAF_SNAAF_SYSTEMAF_UNSPECAI_ADDRCONFIGAI_ALLAI_CANONNAME1536AI_DEFAULT5127AI_MASKAI_NUMERICHOSTAI_NUMERICSERVAI_PASSIVE2048AI_V4MAPPED512AI_V4MAPPED_CFGCAPICMSG_LENCMSG_SPACEEAI_ADDRFAMILYEAI_BADFLAGSEAI_BADHINTSEAI_FAMILYEAI_MAXEAI_MEMORYEAI_OVERFLOWEAI_PROTOCOLEAI_SERVICEEAI_SOCKTYPEEAI_SYSTEM3758096385INADDR_ALLHOSTS_GROUPINADDR_ANYINADDR_BROADCAST2130706433INADDR_LOOPBACK3758096639INADDR_MAX_LOCAL_GROUPINADDR_NONE3758096384INADDR_UNSPEC_GROUPIPPORT_RESERVED5000IPPORT_USERRESERVEDIPPROTO_AHIPPROTO_DSTOPTSIPPROTO_EGP80IPPROTO_EONIPPROTO_ESPIPPROTO_FRAGMENTIPPROTO_GGPIPPROTO_GREIPPROTO_HELLOIPPROTO_HOPOPTSIPPROTO_ICMP58IPPROTO_ICMPV6IPPROTO_IDPIPPROTO_IGMPIPPROTO_IP108IPPROTO_IPCOMPIPPROTO_IPIPIPPROTO_IPV4IPPROTO_IPV6IPPROTO_MAX77IPPROTO_ND59IPPROTO_NONE103IPPROTO_PIMIPPROTO_PUP255IPPROTO_RAWIPPROTO_ROUTINGIPPROTO_RSVPIPPROTO_SCTPIPPROTO_TCPIPPROTO_TPIPPROTO_UDPIPPROTO_XTPIPV6_CHECKSUMIPV6_JOIN_GROUPIPV6_LEAVE_GROUPIPV6_MULTICAST_HOPSIPV6_MULTICAST_IFIPV6_MULTICAST_LOOPIPV6_RECVTCLASSIPV6_RTHDR_TYPE_0IPV6_TCLASSIPV6_UNICAST_HOPSIPV6_V6ONLYIP_ADD_MEMBERSHIPIP_DEFAULT_MULTICAST_LOOPIP_DEFAULT_MULTICAST_TTLIP_DROP_MEMBERSHIPIP_HDRINCL4095IP_MAX_MEMBERSHIPSIP_MULTICAST_IFIP_MULTICAST_LOOPIP_MULTICAST_TTLIP_OPTIONSIP_RECVDSTADDRIP_RECVOPTSIP_RECVRETOPTSIP_RETOPTSIP_TOSIP_TTLLOCAL_PEERCREDMSG_CTRUNCMSG_DONTROUTEMSG_DONTWAITMSG_EOFMSG_EORMSG_NOSIGNALMSG_OOBMSG_PEEKMSG_TRUNCMSG_WAITALLNI_DGRAM1025NI_MAXHOSTNI_MAXSERVNI_NAMEREQDNI_NOFQDNNI_NUMERICHOSTNI_NUMERICSERVPF_SYSTEMSCM_CREDSSCM_RIGHTSSHUT_RDSHUT_RDWRSHUT_WRSOCK_DGRAMSOCK_RAWSOCK_RDMSOCK_SEQPACKETSOL_IPSOL_TCPSOL_UDPSOMAXCONNSO_ACCEPTCONNSO_BROADCASTSO_DEBUGSO_DONTROUTE4103SO_ERRORSO_KEEPALIVESO_LINGERSO_OOBINLINE4098SO_RCVBUF4100SO_RCVLOWAT4102SO_RCVTIMEO4097SO_SNDBUF4099SO_SNDLOWAT4101SO_SNDTIMEO4104SO_TYPESO_USELOOPBACKSYSPROTO_CONTROLu'socket(family=AF_INET, type=SOCK_STREAM, proto=0) -> socket object +socket(family=-1, type=-1, proto=-1, fileno=None) -> socket object + +Open a socket of the given type. The family argument specifies the +address family; it defaults to AF_INET. The type argument specifies +whether this is a stream (SOCK_STREAM, this is the default) +or datagram (SOCK_DGRAM) socket. The protocol argument defaults to 0, +specifying the default protocol. Keyword arguments are accepted. +The socket is created as non-inheritable. + +When a fileno is passed in, family, type and proto are auto-detected, +unless they are explicitly set. + +A socket object represents one endpoint of a network connection. 
+ +Methods of socket objects (keyword arguments not allowed): + +_accept() -- accept connection, returning new socket fd and client address +bind(addr) -- bind the socket to a local address +close() -- close the socket +connect(addr) -- connect the socket to a remote address +connect_ex(addr) -- connect, return an error code instead of an exception +dup() -- return a new socket fd duplicated from fileno() +fileno() -- return underlying file descriptor +getpeername() -- return remote address [*] +getsockname() -- return local address +getsockopt(level, optname[, buflen]) -- get socket options +gettimeout() -- return timeout or None +listen([n]) -- start listening for incoming connections +recv(buflen[, flags]) -- receive data +recv_into(buffer[, nbytes[, flags]]) -- receive data (into a buffer) +recvfrom(buflen[, flags]) -- receive data and sender's address +recvfrom_into(buffer[, nbytes, [, flags]) + -- receive data and sender's address (into a buffer) +sendall(data[, flags]) -- send all data +send(data[, flags]) -- send data, may not send all of it +sendto(data[, flags], addr) -- send data to a given address +setblocking(0 | 1) -- set or clear the blocking I/O flag +getblocking() -- return True if socket is blocking, False if non-blocking +setsockopt(level, optname, value[, optlen]) -- set socket options +settimeout(None | float) -- set or clear the timeout +shutdown(how) -- shut down traffic in one or both directions +if_nameindex() -- return all network interface indices and names +if_nametoindex(name) -- return the corresponding interface index +if_indextoname(index) -- return the corresponding interface name + + [*] not available on all platforms!'_acceptconnectconnect_exgetblockinggetpeernamegettimeoutlistenprotorecvrecv_intorecvfromrecvfrom_intorecvmsgrecvmsg_intosendallsendmsgsendtosetblockingsettimeoutu'the socket timeout'u'socket.timeout'_socket.socket261TCP_FASTOPEN258TCP_KEEPCNT257TCP_KEEPINTVLTCP_MAXSEGTCP_NODELAY513TCP_NOTSENT_LOWATu'Implementation module for socket operations. 
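The method list above belongs to the low-level socket object exposed by _socket; the high-level socket module wraps it. A minimal sketch exercising a few of those methods over a connected Unix socket pair (socketpair is available on Unix platforms):

import socket

a, b = socket.socketpair()     # connected AF_UNIX pair
a.settimeout(1.0)
a.sendall(b'ping')
print(b.recv(16))              # b'ping'
print(a.gettimeout())          # 1.0
a.close()
b.close()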
+ +See the socket module for documentation.'u'/Users/pwntester/.pyenv/versions/3.8.13/lib/python3.8/lib-dynload/_socket.cpython-38-darwin.so'u'gaierror.__weakref__'socket.gaierrorgetaddrinfogethostbyaddrgethostbynamegethostbyname_exgethostnamegetnameinfogetprotobynamegetservbynamegetservbyportu'herror.__weakref__'socket.herrorherrorhtonlhtonsif_indextonameif_nameindexif_nametoindexinet_atoninet_ntoainet_ntopinet_ptonntohlntohssethostnamesocketpairu'timeout.__weakref__'socket.timeoutCODESIZE20171005MAGIC2147483647MAXGROUPSMAXREPEATascii_iscasedascii_toloweru' SRE 2.2.2 Copyright (c) 1997-2002 by Secret Labs AB 'getcodesizeunicode_iscasedunicode_tolower_sreALERT_DESCRIPTION_ACCESS_DENIEDALERT_DESCRIPTION_BAD_CERTIFICATE114ALERT_DESCRIPTION_BAD_CERTIFICATE_HASH_VALUEALERT_DESCRIPTION_BAD_CERTIFICATE_STATUS_RESPONSEALERT_DESCRIPTION_BAD_RECORD_MACALERT_DESCRIPTION_CERTIFICATE_EXPIREDALERT_DESCRIPTION_CERTIFICATE_REVOKEDALERT_DESCRIPTION_CERTIFICATE_UNKNOWNALERT_DESCRIPTION_CERTIFICATE_UNOBTAINABLEALERT_DESCRIPTION_CLOSE_NOTIFYALERT_DESCRIPTION_DECODE_ERRORALERT_DESCRIPTION_DECOMPRESSION_FAILUREALERT_DESCRIPTION_DECRYPT_ERRORALERT_DESCRIPTION_HANDSHAKE_FAILUREALERT_DESCRIPTION_ILLEGAL_PARAMETER71ALERT_DESCRIPTION_INSUFFICIENT_SECURITYALERT_DESCRIPTION_INTERNAL_ERRORALERT_DESCRIPTION_NO_RENEGOTIATIONALERT_DESCRIPTION_PROTOCOL_VERSIONALERT_DESCRIPTION_RECORD_OVERFLOWALERT_DESCRIPTION_UNEXPECTED_MESSAGEALERT_DESCRIPTION_UNKNOWN_CA115ALERT_DESCRIPTION_UNKNOWN_PSK_IDENTITY112ALERT_DESCRIPTION_UNRECOGNIZED_NAMEALERT_DESCRIPTION_UNSUPPORTED_CERTIFICATEALERT_DESCRIPTION_UNSUPPORTED_EXTENSION90ALERT_DESCRIPTION_USER_CANCELLEDCERT_NONECERT_OPTIONALCERT_REQUIREDHAS_ALPNHAS_ECDHHAS_NPNHAS_SNIHAS_SSLv2HAS_SSLv3HAS_TLS_UNIQUEHAS_TLSv1HAS_TLSv1_1HAS_TLSv1_2HAS_TLSv1_3HOSTFLAG_ALWAYS_CHECK_SUBJECTHOSTFLAG_MULTI_LABEL_WILDCARDSHOSTFLAG_NEVER_CHECK_SUBJECTHOSTFLAG_NO_PARTIAL_WILDCARDSHOSTFLAG_NO_WILDCARDSHOSTFLAG_SINGLE_LABEL_SUBDOMAINSu'Whether the memory BIO is at EOF.'u'MemoryBIO.eof'u'The number of bytes pending in the memory BIO.'u'MemoryBIO.pending'write_eof_ssl.MemoryBIOMemoryBIOu'OpenSSL 1.1.1u 30 May 2023'OPENSSL_VERSIONOPENSSL_VERSION_INFO269488479OPENSSL_VERSION_NUMBER2147483732OP_ALLOP_CIPHER_SERVER_PREFERENCEOP_ENABLE_MIDDLEBOX_COMPATOP_NO_COMPRESSION1073741824OP_NO_RENEGOTIATIONOP_NO_SSLv233554432OP_NO_SSLv3OP_NO_TICKET67108864OP_NO_TLSv1268435456OP_NO_TLSv1_1134217728OP_NO_TLSv1_2536870912OP_NO_TLSv1_3OP_SINGLE_DH_USEOP_SINGLE_ECDH_USEPROTOCOL_SSLv23PROTOCOL_TLSPROTOCOL_TLS_CLIENTPROTOCOL_TLS_SERVERPROTOCOL_TLSv1PROTOCOL_TLSv1_1PROTOCOL_TLSv1_2-1PROTO_MAXIMUM_SUPPORTED-2PROTO_MINIMUM_SUPPORTED768PROTO_SSLv3769PROTO_TLSv1770PROTO_TLSv1_1771PROTO_TLSv1_2772PROTO_TLSv1_3RAND_addRAND_bytesRAND_pseudo_bytesRAND_statusu'A certificate could not be verified.'u'ssl'u'SSLCertVerificationError.__weakref__'u'An error occurred in the SSL implementation.'ssl.SSLErrorssl.SSLCertVerificationErrorSSLCertVerificationErroru'SSL/TLS connection terminated abruptly.'u'SSLEOFError.__weakref__'ssl.SSLEOFErrorSSLEOFErrorSSLErroru'Does the session contain a ticket?'u'Session.has_ticket'has_ticketu'Session id'u'Session.id'u'Ticket life time hint.'u'Session.ticket_lifetime_hint'ticket_lifetime_hintu'Session creation time (seconds since epoch).'u'Session.time'u'Session timeout (delta in seconds).'u'Session.timeout'_ssl.SessionSSLSessionu'System error when attempting SSL operation.'u'SSLSyscallError.__weakref__'ssl.SSLSyscallErrorSSLSyscallErroru'Non-blocking SSL socket needs to read more data +before the requested operation 
can be completed.'u'SSLWantReadError.__weakref__'ssl.SSLWantReadErrorSSLWantReadErroru'Non-blocking SSL socket needs to write more data +before the requested operation can be completed.'u'SSLWantWriteError.__weakref__'ssl.SSLWantWriteErrorSSLWantWriteErroru'SSL/TLS session closed cleanly.'u'SSLZeroReturnError.__weakref__'ssl.SSLZeroReturnErrorSSLZeroReturnErrorSSL_ERROR_EOFSSL_ERROR_INVALID_ERROR_CODESSL_ERROR_SSLSSL_ERROR_SYSCALLSSL_ERROR_WANT_CONNECTSSL_ERROR_WANT_READSSL_ERROR_WANT_WRITESSL_ERROR_WANT_X509_LOOKUPSSL_ERROR_ZERO_RETURNVERIFY_CRL_CHECK_CHAINVERIFY_CRL_CHECK_LEAFVERIFY_DEFAULTVERIFY_X509_STRICTVERIFY_X509_TRUSTED_FIRSTu'DEFAULT:!aNULL:!eNULL:!MD5:!3DES:!DES:!RC4:!IDEA:!SEED:!aDSS:!SRP:!PSK'_DEFAULT_CIPHERS_OPENSSL_API_VERSIONu'_SSLContext._host_flags'_host_flagsu'_SSLContext._msg_callback'_msg_callback_set_alpn_protocols_set_npn_protocols_wrap_bio_wrap_socketcert_store_statsu'_SSLContext.check_hostname'check_hostnameget_ca_certsget_ciphersu'_SSLContext.keylog_filename'keylog_filenameload_cert_chainload_dh_paramsload_verify_locationsu'_SSLContext.maximum_version'maximum_versionu'_SSLContext.minimum_version'minimum_versionu'Control the number of TLSv1.3 session tickets'u'_SSLContext.num_tickets'num_ticketsu'_SSLContext.options'u'_SSLContext.post_handshake_auth'post_handshake_authu'_SSLContext.protocol'session_statsset_ciphersset_default_verify_pathsset_ecdh_curveu'Set a callback that will be called when a server name is provided by the SSL/TLS client in the SNI extension. + +If the argument is None then the callback is disabled. The method is called +with the SSLSocket, the server name as a string, and the SSLContext object. +See RFC 6066 for details of the SNI extension.'u'_SSLContext.sni_callback'sni_callbacku'_SSLContext.verify_flags'verify_flagsu'_SSLContext.verify_mode'verify_mode_ssl._SSLContext_SSLContextciphercompressionu'_setter_context(ctx) +This changes the context associated with the SSLSocket. This is typically +used from within a callback function set by the sni_callback +on the SSLContext to change the certificate information associated with the +SSLSocket before the cryptographic exchange handshake messages +'u'_SSLSocket.context'do_handshakeget_channel_bindinggetpeercertu'The Python-level owner of this object.Passed as "self" in servername callback.'u'_SSLSocket.owner'selected_alpn_protocolu'The currently set server hostname (for SNI).'u'_SSLSocket.server_hostname'server_hostnameu'Whether this is a server-side socket.'u'_SSLSocket.server_side'server_sideu'_setter_session(session) +Get / set SSLSession.'u'_SSLSocket.session'sessionu'Was the client session reused during handshake?'u'_SSLSocket.session_reused'session_reusedshared_ciphersverify_client_post_handshake_ssl._SSLSocket_SSLSocketu'Implementation module for SSL socket operations. 
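The _SSLContext attributes listed above (check_hostname, verify_mode, minimum_version, sni_callback, and so on) are surfaced through ssl.SSLContext. A short sketch of configuring a client-side context with those knobs; no connection is made and the settings shown are only one reasonable choice:

import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = True                      # default for TLS_CLIENT, shown for clarity
ctx.verify_mode = ssl.CERT_REQUIRED
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.load_default_certs()
print(ctx.protocol, ctx.verify_mode, ctx.minimum_version)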
See the socket module +for documentation.'u'/Users/pwntester/.pyenv/versions/3.8.13/lib/python3.8/lib-dynload/_ssl.cpython-38-darwin.so'u'_ssl'_test_decode_certerr_codes_to_nameserr_names_to_codesget_default_verify_pathslib_codes_to_namesnid2objtxt2obj_sslSF_APPENDSF_ARCHIVEDSF_IMMUTABLESF_NOUNLINKSF_SNAPSHOTST_ATIMEST_CTIMEST_DEVST_GIDST_INOST_MODEST_MTIMEST_NLINKST_SIZEST_UIDS_ENFMTS_IEXEC24576S_IFBLKS_IFCHRS_IFDIRS_IFDOORS_IFIFO40960S_IFLNKS_IFMTS_IFPORTS_IFREG49152S_IFSOCK57344S_IFWHTS_IMODES_IREADS_IRGRPS_IROTHS_IRUSRS_IRWXGS_IRWXO448S_ISBLKS_ISCHRS_ISDOORS_ISFIFOS_ISGIDS_ISLNKS_ISPORTS_ISREGS_ISSOCKS_ISUIDS_ISVTXS_ISWHTS_IWGRPS_IWOTHS_IWRITES_IWUSRS_IXGRPS_IXOTHS_IXUSRUF_APPENDUF_COMPRESSEDUF_HIDDENUF_IMMUTABLEUF_NODUMPUF_NOUNLINKUF_OPAQUEu'S_IFMT_: file type bits +S_IFDIR: directory +S_IFCHR: character device +S_IFBLK: block device +S_IFREG: regular file +S_IFIFO: fifo (named pipe) +S_IFLNK: symbolic link +S_IFSOCK: socket file +S_IFDOOR: door +S_IFPORT: event port +S_IFWHT: whiteout + +S_ISUID: set UID bit +S_ISGID: set GID bit +S_ENFMT: file locking enforcement +S_ISVTX: sticky bit +S_IREAD: Unix V7 synonym for S_IRUSR +S_IWRITE: Unix V7 synonym for S_IWUSR +S_IEXEC: Unix V7 synonym for S_IXUSR +S_IRWXU: mask for owner permissions +S_IRUSR: read by owner +S_IWUSR: write by owner +S_IXUSR: execute by owner +S_IRWXG: mask for group permissions +S_IRGRP: read by group +S_IWGRP: write by group +S_IXGRP: execute by group +S_IRWXO: mask for others (not in group) permissions +S_IROTH: read by others +S_IWOTH: write by others +S_IXOTH: execute by others + +UF_NODUMP: do not dump file +UF_IMMUTABLE: file may not be changed +UF_APPEND: file may only be appended to +UF_OPAQUE: directory is opaque when viewed through a union stack +UF_NOUNLINK: file may not be renamed or deleted +UF_COMPRESSED: OS X: file is hfs-compressed +UF_HIDDEN: OS X: file should not be displayed +SF_ARCHIVED: file may be archived +SF_IMMUTABLE: file may not be changed +SF_APPEND: file may only be appended to +SF_NOUNLINK: file may not be renamed or deleted +SF_SNAPSHOT: file is a snapshot file + +ST_MODE +ST_INO +ST_DEV +ST_NLINK +ST_UID +ST_GID +ST_SIZE +ST_ATIME +ST_MTIME +ST_CTIME + +FILE_ATTRIBUTE_*: Windows file attribute constants + (only present on Windows) +'_statu'string helper module'formatter_field_name_splitformatter_parser_stringu'Create a compiled struct object. + +Return a new Struct object which writes and reads binary data according to +the format string. + +See help(struct) for more on format strings.'u'struct format string'u'Struct.format'iter_unpackpack_intou'struct size in bytes'u'Struct.size'unpackunpack_fromStructu'Functions to convert between Python values and C structs. +Python bytes objects are used to hold the data representing the C struct +and also as format strings (explained below) to describe the layout of data +in the C struct. + +The optional first format char indicates byte order, size and alignment: + @: native order, size & alignment (default) + =: native order, std. size & alignment + <: little-endian, std. size & alignment + >: big-endian, std. size & alignment + !: same as > + +The remaining chars indicate types of args and must match exactly; +these can be preceded by a decimal repeat count: + x: pad byte (no data); c:char; b:signed byte; B:unsigned byte; + ?: _Bool (requires C99; if not available, char is used instead) + h:short; H:unsigned short; i:int; I:unsigned int; + l:long; L:unsigned long; f:float; d:double; e:half-float. 
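The struct format characters documented above can be checked directly with pack/unpack/calcsize; a small sketch using explicit little-endian order:

import struct

packed = struct.pack('<HId', 7, 1234, 2.5)   # unsigned short, unsigned int, double
print(packed.hex())
print(struct.unpack('<HId', packed))         # (7, 1234, 2.5)
print(struct.calcsize('<HId'))               # 14: 2 + 4 + 8, no padding in '<' mode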
+Special cases (preceding decimal count indicates length): + s:string (array of char); p: pascal string (with count byte). +Special cases (only available in native format): + n:ssize_t; N:size_t; + P:an integer type that is wide enough to hold a pointer. +Special case (not in native mode unless 'long long' in platform C): + q:long long; Q:unsigned long long +Whitespace between formats is ignored. + +The variable struct.error is an exception raised on errors. +'u'/Users/pwntester/.pyenv/versions/3.8.13/lib/python3.8/lib-dynload/_struct.cpython-38-darwin.so'u'_struct'_clearcacheu'struct'u'error.__weakref__'struct.error_struct-128CHAR_MINDBL_MAXDBL_MINDecodeLocaleExEncodeLocaleEx3.4028234663852886e+38FLT_MAX1.1754943508222875e-38FLT_MINGeneric__mro_entries__GenericAliasu'A heap type without GC, but with overridden __setattr__. + +The 'value' attribute is set to 10 in __init__ and updated via attribute setting.'u'_testcapi'pvalue_testcapi.HeapCTypeSetattrHeapCTypeSetattru'Subclass of HeapCType, without GC. + +__init__ sets the 'value' attribute to 10 and 'value2' to 20.'value2u'A heap type without GC, but with overridden dealloc. + +The 'value' attribute is set to 10 in __init__.'_testcapi.HeapCType_testcapi.HeapCTypeSubclassHeapCTypeSubclassu'Subclass of HeapCType with a finalizer that reassigns __class__. + +__class__ is set to plain HeapCTypeSubclass during finalization. +__init__ sets the 'value' attribute to 10 and 'value2' to 20.'_testcapi.HeapCTypeSubclassWithFinalizerHeapCTypeSubclassWithFinalizeru'A heap type with GC, and with overridden dealloc. + +The 'value' attribute is set to 10 in __init__.'_testcapi.HeapGcCTypeHeapGcCTypeINT_MAX-2147483648INT_MINLLONG_MAX-9223372036854775808LLONG_MINLONG_MAXLONG_MINMethodDescriptorBaseMethodDescriptor2MethodDescriptorDerivedMethodDescriptorNopGetMyListPY_SSIZE_T_MAXPY_SSIZE_T_MINPyTime_AsMicrosecondsPyTime_AsMillisecondsPyTime_AsSecondsDoublePyTime_AsTimespecPyTime_AsTimevalPyTime_FromSecondsPyTime_FromSecondsObjectu'Instantiating this exception starts infinite recursion.'RecursingInfinitelyErrorSHRT_MAX-32768SHRT_MINSIZEOF_TIME_TUCHAR_MAXUINT_MAX18446744073709551615ULLONG_MAXULONG_MAXUSHRT_MAXW_STOPCODEu'/Users/pwntester/.pyenv/versions/3.8.13/lib/python3.8/lib-dynload/_testcapi.cpython-38-darwin.so'_pending_threadfuncT_BOOLT_BYTET_DOUBLET_FLOATT_INTT_LONGT_LONGLONGT_PYSSIZETT_SHORTT_STRING_INPLACET_UBYTET_UINTT_ULONGT_ULONGLONGT_USHORTu'Type containing all structmember types'test_structmembersType_test_structmembersType_test_thread_stateargparsingu'C level type with 
tp_as_async'awaitTypebad_getcall_in_temporary_c_threadcheck_pyobject_forbidden_bytes_is_freedcheck_pyobject_freed_is_freedcheck_pyobject_null_is_freedcheck_pyobject_uninitialized_is_freedcode_newemptycodec_incrementaldecodercodec_incrementalencodercrash_no_current_threadcreate_cfunctiondatetime_check_datedatetime_check_datetimedatetime_check_deltadatetime_check_timedatetime_check_tzinfodict_get_versiondict_getitem_knownhashdict_hassplittabledocstring_emptydocstring_no_signaturedocstring_with_invalid_signaturedocstring_with_invalid_signature2docstring_with_signaturedocstring_with_signature_and_extra_newlinesdocstring_with_signature_but_no_docdocstring_with_signature_with_defaults_testcapi.errorexception_printget_argsget_date_fromdateget_date_fromtimestampget_datetime_fromdateandtimeget_datetime_fromdateandtimeandfoldget_datetime_fromtimestampget_delta_fromdsuget_kwargsget_mapping_itemsget_mapping_keysget_mapping_valuesget_recursion_depthget_time_fromtimeget_time_fromtimeandfoldget_timezone_utc_capiget_timezones_offset_zerogetargs_Bgetargs_Cgetargs_Dgetargs_Hgetargs_Igetargs_Kgetargs_Lgetargs_Sgetargs_Ugetargs_Ygetargs_Zgetargs_Z_hashgetargs_bgetargs_cgetargs_dgetargs_esgetargs_es_hashgetargs_etgetargs_et_hashgetargs_fgetargs_hgetargs_igetargs_kgetargs_keyword_onlygetargs_keywordsgetargs_lgetargs_ngetargs_pgetargs_positional_only_and_keywordsgetargs_sgetargs_s_hashgetargs_s_stargetargs_tuplegetargs_ugetargs_u_hashgetargs_w_stargetargs_ygetargs_y_hashgetargs_y_stargetargs_zgetargs_z_hashgetargs_z_stargetbuffer_with_null_viewhamtu'instancemethod.__doc__'instancemethod__ipow__ipowTypemake_exception_with_docmake_memoryview_from_NULL_pointermake_timezones_capiu'C level type with matrix operations defined'__imatmul____matmul____rmatmul__matmulTypeno_docstringparse_tuple_and_keywordsprofile_intpymarshal_read_last_object_from_filepymarshal_read_long_from_filepymarshal_read_object_from_filepymarshal_read_short_from_filepymarshal_write_long_to_filepymarshal_write_object_to_filepymem_api_misusepymem_buffer_overflowpymem_getallocatorsnamepymem_malloc_without_gilpynumber_tobasepyobject_fastcallpyobject_fastcalldictpyobject_malloc_without_gilpyobject_vectorcallpytime_object_to_time_tpytime_object_to_timespecpytime_object_to_timevalpyvectorcall_callraise_SIGINT_then_send_Noneraise_exceptionraise_memoryerrorremove_mem_hooksreturn_null_without_errorreturn_result_with_errorset_exc_infoset_nomemorystack_pointertest_L_codetest_Z_codetest_buildvalue_Ntest_buildvalue_issue38913test_capsuletest_configtest_datetime_capitest_decref_doesnt_leaktest_dict_iterationtest_empty_argparsetest_from_contiguoustest_incref_decref_APItest_incref_doesnt_leaktest_k_codetest_lazy_hash_inheritancetest_list_apitest_long_and_overflowtest_long_apitest_long_as_doubletest_long_as_size_ttest_long_as_unsigned_long_long_masktest_long_long_and_overflowtest_long_numbitstest_longlong_apitest_null_stringstest_pymem_alloc0test_pymem_setallocatorstest_pymem_setrawallocatorstest_pyobject_setallocatorstest_pythread_tss_key_statetest_s_codetest_sizeof_c_typestest_string_from_formattest_string_to_doubletest_structseq_newtype_doesnt_leaktest_u_codetest_unicode_compare_with_asciitest_widechartest_with_docstringtest_xdecref_doesnt_leaktest_xincref_doesnt_leakthe_number_threetraceback_printtracemalloc_get_tracebacktracemalloc_tracktracemalloc_untrackunicode_asucs4unicode_aswidecharunicode_aswidecharstringunicode_copycharactersunicode_encodedecimalunicode_findcharunicode_legacy_stringunicode_transformdecimaltoasciiwith_tp_delwithout_gcwrite_unraisable_excl
ockedlocked_lock_thread.lockLockType_acquire_restore_is_owned_release_save_thread.RLock9223372036.0TIMEOUT_MAXu'ExceptHookArgs + +Type used to pass arguments to threading.excepthook.'_thread.ExceptHookArgs_ExceptHookArgsu'This module provides primitive operations to write multi-threaded programs. +The 'threading' module provides a more convenient interface.'_excepthooku'Thread-local data'_thread._local_local_set_sentinelallocateexit_threadget_native_idinterrupt_mainstack_sizestart_newstart_new_threadThread-local objects. + +(Note that this module provides a Python version of the threading.local + class. Depending on the version of Python you're using, there may be a + faster one available. You should always import the `local` class from + `threading`.) + +Thread-local objects support the management of thread-local data. +If you have data that you want to be local to a thread, simply create +a thread-local object and use its attributes: + + >>> mydata = local() + >>> mydata.number = 42 + >>> mydata.number + 42 + +You can also access the local-object's dictionary: + + >>> mydata.__dict__ + {'number': 42} + >>> mydata.__dict__.setdefault('widgets', []) + [] + >>> mydata.widgets + [] + +What's important about thread-local objects is that their data are +local to a thread. If we access the data in a different thread: + + >>> log = [] + >>> def f(): + ... items = sorted(mydata.__dict__.items()) + ... log.append(items) + ... mydata.number = 11 + ... log.append(mydata.number) + + >>> import threading + >>> thread = threading.Thread(target=f) + >>> thread.start() + >>> thread.join() + >>> log + [[], 11] + +we get different data. Furthermore, changes made in the other thread +don't affect data seen in this thread: + + >>> mydata.number + 42 + +Of course, values you get from a local object, including a __dict__ +attribute, are for whatever thread was current at the time the +attribute was read. For that reason, you generally don't want to save +these values across threads, as they apply only to the thread they +came from. + +You can create custom local objects by subclassing the local class: + + >>> class MyLocal(local): + ... number = 2 + ... def __init__(self, /, **kw): + ... self.__dict__.update(kw) + ... def squared(self): + ... return self.number ** 2 + +This can be useful to support default values, methods and +initialization. Note that if you define an __init__ method, it will be +called each time the local object is used in a separate thread. This +is necessary to initialize each thread's dictionary. + +Now if we create a local object: + + >>> mydata = MyLocal(color='red') + +Now we have a default number: + + >>> mydata.number + 2 + +an initial color: + + >>> mydata.color + 'red' + >>> del mydata.color + +And a method that operates on the data: + + >>> mydata.squared() + 4 + +As before, we can access the data in a separate thread: + + >>> log = [] + >>> thread = threading.Thread(target=f) + >>> thread.start() + >>> thread.join() + >>> log + [[('color', 'red')], 11] + +without affecting this thread's data: + + >>> mydata.number + 2 + >>> mydata.color + Traceback (most recent call last): + ... + AttributeError: 'MyLocal' object has no attribute 'color' + +Note that subclasses can define slots, but they are not thread +local. They are shared across threads: + + >>> class MyLocal(local): + ... 
__slots__ = 'number' + + >>> mydata = MyLocal() + >>> mydata.number = 42 + >>> mydata.color = 'red' + +So, the separate thread: + + >>> thread = threading.Thread(target=f) + >>> thread.start() + >>> thread.join() + +affects what we see: + + >>> mydata.number + 11 + +>>> del mydata +local_localimplA class managing thread-local dictsdictslocalargslocallock_threading_local._localimpl.get_dictReturn the dict for the current thread. Raises KeyError if none + defined.create_dictCreate a new dict for the current thread, and return it.localdictidtlocal_deletedwrthreadthread_deletedwrlocaldct_patch_local__implimplInitialization arguments are not supported%r object attribute '__dict__' is read-only# We need to use objects from the threading module, but the threading# module may also want to use our `local` class, if support for locals# isn't compiled in to the `thread` module. This creates potential problems# with circular imports. For that reason, we don't import `threading`# until the bottom of this file (a hack sufficient to worm around the# potential problems). Note that all platforms on CPython do have support# for locals in the `thread` module, and there is no circular import problem# then, so problems introduced by fiddling the order of imports here won't# manifest.# The key used in the Thread objects' attribute dicts.# We keep it a string for speed but make it unlikely to clash with# a "real" attribute.# { id(Thread) -> (ref(Thread), thread-local dict) }# When the localimpl is deleted, remove the thread attribute.# When the thread is deleted, remove the local dict.# Note that this is suboptimal if the thread object gets# caught in a reference loop. We would like to be called# as soon as the OS-level thread ends instead.# We need to create the thread dict in anticipation of# __init__ being called, to make sure we don't call it# again ourselves.b'Thread-local objects. + +(Note that this module provides a Python version of the threading.local + class. Depending on the version of Python you're using, there may be a + faster one available. You should always import the `local` class from + `threading`.) + +Thread-local objects support the management of thread-local data. +If you have data that you want to be local to a thread, simply create +a thread-local object and use its attributes: + + >>> mydata = local() + >>> mydata.number = 42 + >>> mydata.number + 42 + +You can also access the local-object's dictionary: + + >>> mydata.__dict__ + {'number': 42} + >>> mydata.__dict__.setdefault('widgets', []) + [] + >>> mydata.widgets + [] + +What's important about thread-local objects is that their data are +local to a thread. If we access the data in a different thread: + + >>> log = [] + >>> def f(): + ... items = sorted(mydata.__dict__.items()) + ... log.append(items) + ... mydata.number = 11 + ... log.append(mydata.number) + + >>> import threading + >>> thread = threading.Thread(target=f) + >>> thread.start() + >>> thread.join() + >>> log + [[], 11] + +we get different data. Furthermore, changes made in the other thread +don't affect data seen in this thread: + + >>> mydata.number + 42 + +Of course, values you get from a local object, including a __dict__ +attribute, are for whatever thread was current at the time the +attribute was read. For that reason, you generally don't want to save +these values across threads, as they apply only to the thread they +came from. + +You can create custom local objects by subclassing the local class: + + >>> class MyLocal(local): + ... number = 2 + ... 
def __init__(self, /, **kw): + ... self.__dict__.update(kw) + ... def squared(self): + ... return self.number ** 2 + +This can be useful to support default values, methods and +initialization. Note that if you define an __init__ method, it will be +called each time the local object is used in a separate thread. This +is necessary to initialize each thread's dictionary. + +Now if we create a local object: + + >>> mydata = MyLocal(color='red') + +Now we have a default number: + + >>> mydata.number + 2 + +an initial color: + + >>> mydata.color + 'red' + >>> del mydata.color + +And a method that operates on the data: + + >>> mydata.squared() + 4 + +As before, we can access the data in a separate thread: + + >>> log = [] + >>> thread = threading.Thread(target=f) + >>> thread.start() + >>> thread.join() + >>> log + [[('color', 'red')], 11] + +without affecting this thread's data: + + >>> mydata.number + 2 + >>> mydata.color + Traceback (most recent call last): + ... + AttributeError: 'MyLocal' object has no attribute 'color' + +Note that subclasses can define slots, but they are not thread +local. They are shared across threads: + + >>> class MyLocal(local): + ... __slots__ = 'number' + + >>> mydata = MyLocal() + >>> mydata.number = 42 + >>> mydata.color = 'red' + +So, the separate thread: + + >>> thread = threading.Thread(target=f) + >>> thread.start() + >>> thread.join() + +affects what we see: + + >>> mydata.number + 11 + +>>> del mydata +'u'Thread-local objects. + +(Note that this module provides a Python version of the threading.local + class. Depending on the version of Python you're using, there may be a + faster one available. You should always import the `local` class from + `threading`.) + +Thread-local objects support the management of thread-local data. +If you have data that you want to be local to a thread, simply create +a thread-local object and use its attributes: + + >>> mydata = local() + >>> mydata.number = 42 + >>> mydata.number + 42 + +You can also access the local-object's dictionary: + + >>> mydata.__dict__ + {'number': 42} + >>> mydata.__dict__.setdefault('widgets', []) + [] + >>> mydata.widgets + [] + +What's important about thread-local objects is that their data are +local to a thread. If we access the data in a different thread: + + >>> log = [] + >>> def f(): + ... items = sorted(mydata.__dict__.items()) + ... log.append(items) + ... mydata.number = 11 + ... log.append(mydata.number) + + >>> import threading + >>> thread = threading.Thread(target=f) + >>> thread.start() + >>> thread.join() + >>> log + [[], 11] + +we get different data. Furthermore, changes made in the other thread +don't affect data seen in this thread: + + >>> mydata.number + 42 + +Of course, values you get from a local object, including a __dict__ +attribute, are for whatever thread was current at the time the +attribute was read. For that reason, you generally don't want to save +these values across threads, as they apply only to the thread they +came from. + +You can create custom local objects by subclassing the local class: + + >>> class MyLocal(local): + ... number = 2 + ... def __init__(self, /, **kw): + ... self.__dict__.update(kw) + ... def squared(self): + ... return self.number ** 2 + +This can be useful to support default values, methods and +initialization. Note that if you define an __init__ method, it will be +called each time the local object is used in a separate thread. This +is necessary to initialize each thread's dictionary. 
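The docstring above (repeated here in the dump) walks through the key property of threading.local: each thread gets its own attribute dictionary. A condensed, runnable version of its own example; worker and mydata are illustrative names:

import threading

mydata = threading.local()
mydata.number = 42

log = []
def worker():
    # A new thread starts with an empty per-thread __dict__.
    log.append(sorted(mydata.__dict__.items()))
    mydata.number = 11
    log.append(mydata.number)

t = threading.Thread(target=worker)
t.start()
t.join()
print(log)            # [[], 11]
print(mydata.number)  # 42 -- the other thread's assignment is not visible here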
+ +Now if we create a local object: + + >>> mydata = MyLocal(color='red') + +Now we have a default number: + + >>> mydata.number + 2 + +an initial color: + + >>> mydata.color + 'red' + >>> del mydata.color + +And a method that operates on the data: + + >>> mydata.squared() + 4 + +As before, we can access the data in a separate thread: + + >>> log = [] + >>> thread = threading.Thread(target=f) + >>> thread.start() + >>> thread.join() + >>> log + [[('color', 'red')], 11] + +without affecting this thread's data: + + >>> mydata.number + 2 + >>> mydata.color + Traceback (most recent call last): + ... + AttributeError: 'MyLocal' object has no attribute 'color' + +Note that subclasses can define slots, but they are not thread +local. They are shared across threads: + + >>> class MyLocal(local): + ... __slots__ = 'number' + + >>> mydata = MyLocal() + >>> mydata.number = 42 + >>> mydata.color = 'red' + +So, the separate thread: + + >>> thread = threading.Thread(target=f) + >>> thread.start() + >>> thread.join() + +affects what we see: + + >>> mydata.number + 11 + +>>> del mydata +'b'local'u'local'b'A class managing thread-local dicts'u'A class managing thread-local dicts'b'dicts'u'dicts'b'localargs'u'localargs'b'locallock'u'locallock'b'_threading_local._localimpl.'u'_threading_local._localimpl.'b'Return the dict for the current thread. Raises KeyError if none + defined.'u'Return the dict for the current thread. Raises KeyError if none + defined.'b'Create a new dict for the current thread, and return it.'u'Create a new dict for the current thread, and return it.'b'_local__impl'u'_local__impl'b'__dict__'u'__dict__'b'Initialization arguments are not supported'u'Initialization arguments are not supported'b'%r object attribute '__dict__' is read-only'u'%r object attribute '__dict__' is read-only'u'_threading_local'u'Debug module to trace memory blocks allocated by Python.'_get_object_traceback_get_tracesclear_tracesget_traceback_limitget_traced_memoryget_tracemalloc_memory_tracemallocu'_warnings provides basic warning filtering support. +It is a helper module to speed up interpreter start-up.'_defaultaction_filters_mutated_onceregistrywarn_explicit__ifloordiv____ilshift____imod____irshift____itruediv__weakcallableproxyCallableProxyType__bytes__weakproxyProxyType__callback__ReferenceTypeu'Weak-reference support module.'_remove_dead_weakrefgetweakrefcountgetweakrefs_IterationGuardweakcontainer_iterating_removeselfref_pending_removalsitemrefpop from empty WeakSetnewset# Access WeakSet through the weakref module.# This code is separated-out because it is needed# by abc.py to load everything else at startup.# This context manager registers itself in the current iterators of the# weak container, such as to delay all removals until the context manager# exits.# This technique should be relatively thread-safe (since sets are).# Don't create cycles# A list of keys to be removed# Caveat: the iterator will keep a strong reference to# `item` until it is resumed or closed.b'WeakSet'u'WeakSet'b'pop from empty WeakSet'u'pop from empty WeakSet'u'abc'Abstract base classes related to import.machineryabstract_clsfrozen_clsFinderLegacy abstract base class for import finders. + + It may be subclassed for compatibility with legacy third party + reimplementations of the import system. Otherwise, finder + implementations should derive from the more specific MetaPathFinder + or PathEntryFinder ABCs. + + Deprecated since Python 3.3 + An abstract method that should find a module. 
+ The fullname is a str and the optional path is a str or None. + Returns a Loader object or None. + MetaPathFinderAbstract base class for import finders on sys.meta_path.Return a loader for the module. + + If no module is found, return None. The fullname is a str and + the path is a list of strings or None. + + This method is deprecated since Python 3.4 in favor of + finder.find_spec(). If find_spec() exists then backwards-compatible + functionality is provided for this method. + + MetaPathFinder.find_module() is deprecated since Python 3.4 in favor of MetaPathFinder.find_spec() (available since 3.4)"MetaPathFinder.find_module() is deprecated since Python ""3.4 in favor of MetaPathFinder.find_spec() ""(available since 3.4)"An optional method for clearing the finder's cache, if any. + This method is used by importlib.invalidate_caches(). + PathEntryFinderAbstract base class for path entry finders used by PathFinder.Return (loader, namespace portion) for the path entry. + + The fullname is a str. The namespace portion is a sequence of + path entries contributing to part of a namespace package. The + sequence may be empty. If loader is not None, the portion will + be ignored. + + The portion will be discarded if another path entry finder + locates the module as a normal module or package. + + This method is deprecated since Python 3.4 in favor of + finder.find_spec(). If find_spec() is provided than backwards-compatible + functionality is provided. + PathEntryFinder.find_loader() is deprecated since Python 3.4 in favor of PathEntryFinder.find_spec() (available since 3.4)"PathEntryFinder.find_loader() is deprecated since Python ""3.4 in favor of PathEntryFinder.find_spec() "An optional method for clearing the finder's cache, if any. + This method is used by PathFinder.invalidate_caches(). + LoaderAbstract base class for import loaders.Return a module to initialize and into which to load. + + This method should raise ImportError if anything prevents it + from creating a new module. It may return None to indicate + that the spec should create the new module. + Return the loaded module. + + The module must be added to sys.modules and have import-related + attributes set properly. The fullname is a str. + + ImportError is raised on failure. + + This method is deprecated in favor of loader.exec_module(). If + exec_module() exists then it is used to provide a backwards-compatible + functionality for this method. + + Return a module's repr. + + Used by the module type when the method does not raise + NotImplementedError. + + This method is deprecated. + + ResourceLoaderAbstract base class for loaders which can return data from their + back-end storage. + + This ABC represents one of the optional protocols specified by PEP 302. + + Abstract method which when implemented should return the bytes for + the specified path. The path must be a str.InspectLoaderAbstract base class for loaders which support inspection about the + modules they can load. + + This ABC represents one of the optional protocols specified by PEP 302. + + Optional method which when implemented should return whether the + module is a package. The fullname is a str. Returns a bool. + + Raises ImportError if the module cannot be found. + Method which returns the code object for the module. + + The fullname is a str. Returns a types.CodeType if possible, else + returns None if a code object does not make sense + (e.g. built-in module). Raises ImportError if the module cannot be + found. 
+ Abstract method which should return the source code for the + module. The fullname is a str. Returns a str. + + Raises ImportError if the module cannot be found. + Compile 'data' into a code object. + + The 'data' argument can be anything that compile() can handle. The'path' + argument should be where the data was retrieved (when applicable).ExecutionLoaderAbstract base class for loaders that wish to support the execution of + modules as scripts. + + This ABC represents one of the optional protocols specified in PEP 302. + + Abstract method which should return the value that __file__ is to be + set to. + + Raises ImportError if the module cannot be found. + Method to return the code object for fullname. + + Should return None if not applicable (e.g. built-in module). + Raise ImportError if the module cannot be found. + Abstract base class partially implementing the ResourceLoader and + ExecutionLoader ABCs.Abstract base class for loading source code (and optionally any + corresponding bytecode). + + To support loading from source code, the abstractmethods inherited from + ResourceLoader and ExecutionLoader need to be implemented. To also support + loading from bytecode, the optional methods specified directly by this ABC + is required. + + Inherited abstractmethods not implemented in this ABC: + + * ResourceLoader.get_data + * ExecutionLoader.get_filename + + Return the (int) modification time for the path (str).Return a metadata dict for the source pointed to by the path (str). + Possible keys: + - 'mtime' (mandatory) is the numeric timestamp of last source + code modification; + - 'size' (optional) is the size in bytes of the source code. + Write the bytes to the path (if possible). + + Accepts a str path and data as bytes. + + Any needed intermediary directories are to be created. If for some + reason the file cannot be written because of permissions, fail + silently. + ResourceReaderAbstract base class to provide resource-reading support. + + Loaders that support resource reading are expected to implement + the ``get_resource_reader(fullname)`` method and have it either return None + or an object compatible with this ABC. + Return an opened, file-like object for binary reading. + + The 'resource' argument is expected to represent only a file name + and thus not contain any subdirectory components. + + If the resource cannot be found, FileNotFoundError is raised. + Return the file system path to the specified resource. + + The 'resource' argument is expected to represent only a file name + and thus not contain any subdirectory components. + + If the resource does not exist on the file system, raise + FileNotFoundError. + Return True if the named 'name' is consider a resource.Return an iterable of strings over the contents of the package.# We don't define find_spec() here since that would break# hasattr checks we do to support backward compatibility.# By default, defer to default semantics for the new module.# We don't define exec_module() here since that would break# The exception will cause ModuleType.__repr__ to ignore this method.b'Abstract base classes related to import.'u'Abstract base classes related to import.'b'_frozen_importlib'b'Legacy abstract base class for import finders. + + It may be subclassed for compatibility with legacy third party + reimplementations of the import system. Otherwise, finder + implementations should derive from the more specific MetaPathFinder + or PathEntryFinder ABCs. 
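The ABCs described above (MetaPathFinder, Loader, InspectLoader, SourceLoader, ResourceReader) are what custom importers implement. A minimal, self-contained sketch of a finder/loader pair that serves module source from an in-memory dict; StringFinder, StringLoader and the module name demo_mod are made up for this illustration:

import sys
import importlib.abc
import importlib.machinery

class StringLoader(importlib.abc.Loader):
    def __init__(self, sources):
        self.sources = sources
    def create_module(self, spec):
        return None                    # fall back to the default module object
    def exec_module(self, module):
        exec(self.sources[module.__name__], module.__dict__)

class StringFinder(importlib.abc.MetaPathFinder):
    def __init__(self, loader):
        self.loader = loader
    def find_spec(self, fullname, path=None, target=None):
        if fullname in self.loader.sources:
            return importlib.machinery.ModuleSpec(fullname, self.loader)
        return None

loader = StringLoader({'demo_mod': 'x = 99\n'})
sys.meta_path.insert(0, StringFinder(loader))
import demo_mod
print(demo_mod.x)                      # 99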
+ + Deprecated since Python 3.3 + 'u'Legacy abstract base class for import finders. + + It may be subclassed for compatibility with legacy third party + reimplementations of the import system. Otherwise, finder + implementations should derive from the more specific MetaPathFinder + or PathEntryFinder ABCs. + + Deprecated since Python 3.3 + 'b'An abstract method that should find a module. + The fullname is a str and the optional path is a str or None. + Returns a Loader object or None. + 'u'An abstract method that should find a module. + The fullname is a str and the optional path is a str or None. + Returns a Loader object or None. + 'b'Abstract base class for import finders on sys.meta_path.'u'Abstract base class for import finders on sys.meta_path.'b'Return a loader for the module. + + If no module is found, return None. The fullname is a str and + the path is a list of strings or None. + + This method is deprecated since Python 3.4 in favor of + finder.find_spec(). If find_spec() exists then backwards-compatible + functionality is provided for this method. + + 'u'Return a loader for the module. + + If no module is found, return None. The fullname is a str and + the path is a list of strings or None. + + This method is deprecated since Python 3.4 in favor of + finder.find_spec(). If find_spec() exists then backwards-compatible + functionality is provided for this method. + + 'b'MetaPathFinder.find_module() is deprecated since Python 3.4 in favor of MetaPathFinder.find_spec() (available since 3.4)'u'MetaPathFinder.find_module() is deprecated since Python 3.4 in favor of MetaPathFinder.find_spec() (available since 3.4)'b'An optional method for clearing the finder's cache, if any. + This method is used by importlib.invalidate_caches(). + 'u'An optional method for clearing the finder's cache, if any. + This method is used by importlib.invalidate_caches(). + 'b'Abstract base class for path entry finders used by PathFinder.'u'Abstract base class for path entry finders used by PathFinder.'b'Return (loader, namespace portion) for the path entry. + + The fullname is a str. The namespace portion is a sequence of + path entries contributing to part of a namespace package. The + sequence may be empty. If loader is not None, the portion will + be ignored. + + The portion will be discarded if another path entry finder + locates the module as a normal module or package. + + This method is deprecated since Python 3.4 in favor of + finder.find_spec(). If find_spec() is provided than backwards-compatible + functionality is provided. + 'u'Return (loader, namespace portion) for the path entry. + + The fullname is a str. The namespace portion is a sequence of + path entries contributing to part of a namespace package. The + sequence may be empty. If loader is not None, the portion will + be ignored. + + The portion will be discarded if another path entry finder + locates the module as a normal module or package. + + This method is deprecated since Python 3.4 in favor of + finder.find_spec(). If find_spec() is provided than backwards-compatible + functionality is provided. + 'b'PathEntryFinder.find_loader() is deprecated since Python 3.4 in favor of PathEntryFinder.find_spec() (available since 3.4)'u'PathEntryFinder.find_loader() is deprecated since Python 3.4 in favor of PathEntryFinder.find_spec() (available since 3.4)'b'An optional method for clearing the finder's cache, if any. + This method is used by PathFinder.invalidate_caches(). + 'u'An optional method for clearing the finder's cache, if any. 
+ This method is used by PathFinder.invalidate_caches(). + 'b'Abstract base class for import loaders.'u'Abstract base class for import loaders.'b'Return a module to initialize and into which to load. + + This method should raise ImportError if anything prevents it + from creating a new module. It may return None to indicate + that the spec should create the new module. + 'u'Return a module to initialize and into which to load. + + This method should raise ImportError if anything prevents it + from creating a new module. It may return None to indicate + that the spec should create the new module. + 'b'Return the loaded module. + + The module must be added to sys.modules and have import-related + attributes set properly. The fullname is a str. + + ImportError is raised on failure. + + This method is deprecated in favor of loader.exec_module(). If + exec_module() exists then it is used to provide a backwards-compatible + functionality for this method. + + 'u'Return the loaded module. + + The module must be added to sys.modules and have import-related + attributes set properly. The fullname is a str. + + ImportError is raised on failure. + + This method is deprecated in favor of loader.exec_module(). If + exec_module() exists then it is used to provide a backwards-compatible + functionality for this method. + + 'b'Return a module's repr. + + Used by the module type when the method does not raise + NotImplementedError. + + This method is deprecated. + + 'u'Return a module's repr. + + Used by the module type when the method does not raise + NotImplementedError. + + This method is deprecated. + + 'b'Abstract base class for loaders which can return data from their + back-end storage. + + This ABC represents one of the optional protocols specified by PEP 302. + + 'u'Abstract base class for loaders which can return data from their + back-end storage. + + This ABC represents one of the optional protocols specified by PEP 302. + + 'b'Abstract method which when implemented should return the bytes for + the specified path. The path must be a str.'u'Abstract method which when implemented should return the bytes for + the specified path. The path must be a str.'b'Abstract base class for loaders which support inspection about the + modules they can load. + + This ABC represents one of the optional protocols specified by PEP 302. + + 'u'Abstract base class for loaders which support inspection about the + modules they can load. + + This ABC represents one of the optional protocols specified by PEP 302. + + 'b'Optional method which when implemented should return whether the + module is a package. The fullname is a str. Returns a bool. + + Raises ImportError if the module cannot be found. + 'u'Optional method which when implemented should return whether the + module is a package. The fullname is a str. Returns a bool. + + Raises ImportError if the module cannot be found. + 'b'Method which returns the code object for the module. + + The fullname is a str. Returns a types.CodeType if possible, else + returns None if a code object does not make sense + (e.g. built-in module). Raises ImportError if the module cannot be + found. + 'u'Method which returns the code object for the module. + + The fullname is a str. Returns a types.CodeType if possible, else + returns None if a code object does not make sense + (e.g. built-in module). Raises ImportError if the module cannot be + found. + 'b'Abstract method which should return the source code for the + module. The fullname is a str. Returns a str. 
+ + Raises ImportError if the module cannot be found. + 'u'Abstract method which should return the source code for the + module. The fullname is a str. Returns a str. + + Raises ImportError if the module cannot be found. + 'b'Compile 'data' into a code object. + + The 'data' argument can be anything that compile() can handle. The'path' + argument should be where the data was retrieved (when applicable).'u'Compile 'data' into a code object. + + The 'data' argument can be anything that compile() can handle. The'path' + argument should be where the data was retrieved (when applicable).'b'Abstract base class for loaders that wish to support the execution of + modules as scripts. + + This ABC represents one of the optional protocols specified in PEP 302. + + 'u'Abstract base class for loaders that wish to support the execution of + modules as scripts. + + This ABC represents one of the optional protocols specified in PEP 302. + + 'b'Abstract method which should return the value that __file__ is to be + set to. + + Raises ImportError if the module cannot be found. + 'u'Abstract method which should return the value that __file__ is to be + set to. + + Raises ImportError if the module cannot be found. + 'b'Method to return the code object for fullname. + + Should return None if not applicable (e.g. built-in module). + Raise ImportError if the module cannot be found. + 'u'Method to return the code object for fullname. + + Should return None if not applicable (e.g. built-in module). + Raise ImportError if the module cannot be found. + 'b'Abstract base class partially implementing the ResourceLoader and + ExecutionLoader ABCs.'u'Abstract base class partially implementing the ResourceLoader and + ExecutionLoader ABCs.'b'Abstract base class for loading source code (and optionally any + corresponding bytecode). + + To support loading from source code, the abstractmethods inherited from + ResourceLoader and ExecutionLoader need to be implemented. To also support + loading from bytecode, the optional methods specified directly by this ABC + is required. + + Inherited abstractmethods not implemented in this ABC: + + * ResourceLoader.get_data + * ExecutionLoader.get_filename + + 'u'Abstract base class for loading source code (and optionally any + corresponding bytecode). + + To support loading from source code, the abstractmethods inherited from + ResourceLoader and ExecutionLoader need to be implemented. To also support + loading from bytecode, the optional methods specified directly by this ABC + is required. + + Inherited abstractmethods not implemented in this ABC: + + * ResourceLoader.get_data + * ExecutionLoader.get_filename + + 'b'Return the (int) modification time for the path (str).'u'Return the (int) modification time for the path (str).'b'Return a metadata dict for the source pointed to by the path (str). + Possible keys: + - 'mtime' (mandatory) is the numeric timestamp of last source + code modification; + - 'size' (optional) is the size in bytes of the source code. + 'u'Return a metadata dict for the source pointed to by the path (str). + Possible keys: + - 'mtime' (mandatory) is the numeric timestamp of last source + code modification; + - 'size' (optional) is the size in bytes of the source code. + 'b'Write the bytes to the path (if possible). + + Accepts a str path and data as bytes. + + Any needed intermediary directories are to be created. If for some + reason the file cannot be written because of permissions, fail + silently. + 'u'Write the bytes to the path (if possible). 
+ + Accepts a str path and data as bytes. + + Any needed intermediary directories are to be created. If for some + reason the file cannot be written because of permissions, fail + silently. + 'b'Abstract base class to provide resource-reading support. + + Loaders that support resource reading are expected to implement + the ``get_resource_reader(fullname)`` method and have it either return None + or an object compatible with this ABC. + 'u'Abstract base class to provide resource-reading support. + + Loaders that support resource reading are expected to implement + the ``get_resource_reader(fullname)`` method and have it either return None + or an object compatible with this ABC. + 'b'Return an opened, file-like object for binary reading. + + The 'resource' argument is expected to represent only a file name + and thus not contain any subdirectory components. + + If the resource cannot be found, FileNotFoundError is raised. + 'u'Return an opened, file-like object for binary reading. + + The 'resource' argument is expected to represent only a file name + and thus not contain any subdirectory components. + + If the resource cannot be found, FileNotFoundError is raised. + 'b'Return the file system path to the specified resource. + + The 'resource' argument is expected to represent only a file name + and thus not contain any subdirectory components. + + If the resource does not exist on the file system, raise + FileNotFoundError. + 'u'Return the file system path to the specified resource. + + The 'resource' argument is expected to represent only a file name + and thus not contain any subdirectory components. + + If the resource does not exist on the file system, raise + FileNotFoundError. + 'b'Return True if the named 'name' is consider a resource.'u'Return True if the named 'name' is consider a resource.'b'Return an iterable of strings over the contents of the package.'u'Return an iterable of strings over the contents of the package.'u'importlib.abc'Abstract Base Classes (ABCs) according to PEP 3119.funcobjA decorator indicating abstract methods. + + Requires that the metaclass is ABCMeta or derived from it. A + class that has a metaclass derived from ABCMeta cannot be + instantiated unless all of its abstract methods are overridden. + The abstract methods can be called using any of the normal + 'super' call mechanisms. abstractmethod() may be used to declare + abstract methods for properties and descriptors. + + Usage: + + class C(metaclass=ABCMeta): + @abstractmethod + def my_abstract_method(self, ...): + ... + abstractclassmethodA decorator indicating abstract classmethods. + + Deprecated, use 'classmethod' with 'abstractmethod' instead. + abstractstaticmethodA decorator indicating abstract staticmethods. + + Deprecated, use 'staticmethod' with 'abstractmethod' instead. + abstractpropertyA decorator indicating abstract properties. + + Deprecated, use 'property' with 'abstractmethod' instead. + Metaclass for defining Abstract Base Classes (ABCs). + + Use this metaclass to create an ABC. An ABC can be subclassed + directly, and then acts as a mix-in class. You can also register + unrelated concrete classes (even built-in classes) and unrelated + ABCs as 'virtual subclasses' -- these and their descendants will + be considered subclasses of the registering ABC by the built-in + issubclass() function, but the registering ABC won't show up in + their MRO (Method Resolution Order) nor will method + implementations defined by the registering ABC be callable (not + even via super()). 
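The abstractmethod and ABCMeta descriptions above cover both ordinary inheritance-based subclassing and registration of "virtual subclasses". A minimal sketch with invented class names (nothing here comes from the dumped source):

```python
from abc import ABC, abstractmethod


class Serializer(ABC):
    @abstractmethod
    def dumps(self, obj):
        """Return a textual representation of obj."""


class JsonSerializer(Serializer):     # ordinary inheritance-based subclass
    def dumps(self, obj):
        import json
        return json.dumps(obj)


class LegacySerializer:               # unrelated concrete class
    def dumps(self, obj):
        return repr(obj)


# Virtual subclass: issubclass()/isinstance() now report True, but no
# method implementations are inherited (not even via super()).
Serializer.register(LegacySerializer)

print(issubclass(JsonSerializer, Serializer))    # True
print(issubclass(LegacySerializer, Serializer))  # True (virtual)
# Serializer() would raise TypeError: abstract dumps() is not overridden.
```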
+ Register a virtual subclass of an ABC. + + Returns the subclass, to allow usage as a class decorator. + _abc_registry: _abc_cache: _abc_negative_cache: _abc_negative_cache_version: _py_abcABCHelper class that provides a standard way to create an ABC using + inheritance. + b'Abstract Base Classes (ABCs) according to PEP 3119.'u'Abstract Base Classes (ABCs) according to PEP 3119.'b'A decorator indicating abstract methods. + + Requires that the metaclass is ABCMeta or derived from it. A + class that has a metaclass derived from ABCMeta cannot be + instantiated unless all of its abstract methods are overridden. + The abstract methods can be called using any of the normal + 'super' call mechanisms. abstractmethod() may be used to declare + abstract methods for properties and descriptors. + + Usage: + + class C(metaclass=ABCMeta): + @abstractmethod + def my_abstract_method(self, ...): + ... + 'u'A decorator indicating abstract methods. + + Requires that the metaclass is ABCMeta or derived from it. A + class that has a metaclass derived from ABCMeta cannot be + instantiated unless all of its abstract methods are overridden. + The abstract methods can be called using any of the normal + 'super' call mechanisms. abstractmethod() may be used to declare + abstract methods for properties and descriptors. + + Usage: + + class C(metaclass=ABCMeta): + @abstractmethod + def my_abstract_method(self, ...): + ... + 'b'A decorator indicating abstract classmethods. + + Deprecated, use 'classmethod' with 'abstractmethod' instead. + 'u'A decorator indicating abstract classmethods. + + Deprecated, use 'classmethod' with 'abstractmethod' instead. + 'b'A decorator indicating abstract staticmethods. + + Deprecated, use 'staticmethod' with 'abstractmethod' instead. + 'u'A decorator indicating abstract staticmethods. + + Deprecated, use 'staticmethod' with 'abstractmethod' instead. + 'b'A decorator indicating abstract properties. + + Deprecated, use 'property' with 'abstractmethod' instead. + 'u'A decorator indicating abstract properties. + + Deprecated, use 'property' with 'abstractmethod' instead. + 'b'Metaclass for defining Abstract Base Classes (ABCs). + + Use this metaclass to create an ABC. An ABC can be subclassed + directly, and then acts as a mix-in class. You can also register + unrelated concrete classes (even built-in classes) and unrelated + ABCs as 'virtual subclasses' -- these and their descendants will + be considered subclasses of the registering ABC by the built-in + issubclass() function, but the registering ABC won't show up in + their MRO (Method Resolution Order) nor will method + implementations defined by the registering ABC be callable (not + even via super()). + 'u'Metaclass for defining Abstract Base Classes (ABCs). + + Use this metaclass to create an ABC. An ABC can be subclassed + directly, and then acts as a mix-in class. You can also register + unrelated concrete classes (even built-in classes) and unrelated + ABCs as 'virtual subclasses' -- these and their descendants will + be considered subclasses of the registering ABC by the built-in + issubclass() function, but the registering ABC won't show up in + their MRO (Method Resolution Order) nor will method + implementations defined by the registering ABC be callable (not + even via super()). + 'b'Register a virtual subclass of an ABC. + + Returns the subclass, to allow usage as a class decorator. + 'u'Register a virtual subclass of an ABC. + + Returns the subclass, to allow usage as a class decorator. 
+ 'b'_abc_registry: 'u'_abc_registry: 'b'_abc_cache: 'u'_abc_cache: 'b'_abc_negative_cache: 'u'_abc_negative_cache: 'b'_abc_negative_cache_version: 'u'_abc_negative_cache_version: 'b'abc'b'Helper class that provides a standard way to create an ABC using + inheritance. + 'u'Helper class that provides a standard way to create an ABC using + inheritance. + ' Encoding Aliases Support + + This module is used by the encodings package search function to + map encodings names to module names. + + Note that the search function normalizes the encoding names before + doing the lookup, so the mapping will have to map normalized + encoding names to module names. + + Contents: + + The following aliases dictionary contains mappings of all IANA + character set names for which the Python core library provides + codecs. In addition to these, a few Python specific codec + aliases have also been added. + +646ansi_x3.4_1968ansi_x3_4_1968ansi_x3.4_1986cp367csasciiibm367iso646_usiso_646.irv_1991iso_ir_6usus_asciibase64_codecbase_64big5big5_twcsbig5big5hkscsbig5_hkscshkscsbz2_codeccp037037csibm037ebcdic_cp_caebcdic_cp_nlebcdic_cp_usebcdic_cp_wtibm037ibm039cp10261026csibm1026ibm1026cp11251125ibm1125cp866urusciicp11401140ibm1140cp12501250windows_1250cp12511251windows_1251cp12521252windows_1252cp12531253windows_1253cp12541254windows_1254cp12551255windows_1255cp12561256windows_1256cp12571257windows_1257cp12581258windows_1258cp273273ibm273csibm273cp424csibm424ebcdic_cp_heibm424cp437437cspc8codepage437ibm437cp500csibm500ebcdic_cp_beebcdic_cp_chibm500cp775775cspc775balticibm775cp850850cspc850multilingualibm850cp852852cspcp852ibm852cp855855csibm855ibm855cp857857csibm857ibm857cp858858csibm858ibm858cp860860csibm860ibm860cp861861cp_iscsibm861ibm861cp862862cspc862latinhebrewibm862cp863863csibm863ibm863cp864864csibm864ibm864cp865865csibm865ibm865cp866866csibm866ibm866cp869869cp_grcsibm869ibm869cp932932ms932mskanjims_kanjicp949949ms949uhccp950950ms950euc_jis_2004jisx0213eucjis2004euc_jis2004euc_jisx0213eucjisx0213euc_jpeucjpujisu_jiseuc_kreuckrkoreanksc5601ks_c_5601ks_c_5601_1987ksx1001ks_x_1001gb18030gb18030_2000gb2312chinesecsiso58gb231280euc_cneuccneucgb2312_cngb2312_1980gb2312_80iso_ir_58gbk936cp936ms936hex_codechp_roman8roman8r8csHPRoman8cp1051ibm1051hzhzgbhz_gbhz_gb_2312iso2022_jpcsiso2022jpiso2022jpiso_2022_jpiso2022_jp_1iso2022jp_1iso_2022_jp_1iso2022_jp_2iso2022jp_2iso_2022_jp_2iso2022_jp_2004iso_2022_jp_2004iso2022jp_2004iso2022_jp_3iso2022jp_3iso_2022_jp_3iso2022_jp_extiso2022jp_extiso_2022_jp_extiso2022_krcsiso2022kriso2022kriso_2022_kriso8859_10csisolatin6iso_8859_10iso_8859_10_1992iso_ir_157l6latin6iso8859_11thaiiso_8859_11iso_8859_11_2001iso8859_13iso_8859_13l7latin7iso8859_14iso_8859_14iso_8859_14_1998iso_celticiso_ir_199l8latin8iso8859_15iso_8859_15l9latin9iso8859_16iso_8859_16iso_8859_16_2001iso_ir_226l10latin10iso8859_2csisolatin2iso_8859_2iso_8859_2_1987iso_ir_101l2latin2iso8859_3csisolatin3iso_8859_3iso_8859_3_1988iso_ir_109l3latin3iso8859_4csisolatin4iso_8859_4iso_8859_4_1988iso_ir_110l4latin4iso8859_5csisolatincyrilliccyrilliciso_8859_5iso_8859_5_1988iso_ir_144iso8859_6arabicasmo_708csisolatinarabicecma_114iso_8859_6iso_8859_6_1987iso_ir_127iso8859_7csisolatingreekecma_118elot_928greekgreek8iso_8859_7iso_8859_7_1987iso_ir_126iso8859_8csisolatinhebrewhebrewiso_8859_8iso_8859_8_1988iso_ir_138iso8859_9csisolatin5iso_8859_9iso_8859_9_1989iso_ir_148l5latin5johabcp1361ms1361koi8_rcskoi8rkz1048kz_1048rk1048strk1048_2002latin_18859cp819csisolatin1ibm819iso8859iso8859_1iso_8859_1iso_8859_1_1987iso_ir_100l1latinlat
in1mac_cyrillicmaccyrillicmac_greekmacgreekmac_icelandmacicelandmac_latin2maccentraleuropemaclatin2mac_romanmacintoshmacromanmac_turkishmacturkishansidbcsptcp154csptcp154pt154cp154cyrillic_asianquopri_codecquopriquoted_printablequotedprintablerot_13rot13shift_jiscsshiftjisshiftjissjiss_jisshift_jis_2004shiftjis2004sjis_2004s_jis_2004shift_jisx0213shiftjisx0213sjisx0213s_jisx0213tactistis260tis_620tis620tis_620_0tis_620_2529_0tis_620_2529_1iso_ir_166utf_16u16utf16utf_16_beunicodebigunmarkedutf_16beutf_16_leunicodelittleunmarkedutf_16leutf_32u32utf32utf_32_beutf_32beutf_32_leutf_32leutf_7u7utf7unicode_1_1_utf_7utf_8u8utfutf8utf8_ucs2utf8_ucs4uu_codecuuzlib_codecx_mac_japanesex_mac_koreanx_mac_simp_chinesex_mac_trad_chinese# Please keep this list sorted alphabetically by value !# ascii codec# some email headers use this non-standard name# base64_codec codec# big5 codec# big5hkscs codec# bz2_codec codec# cp037 codec# cp1026 codec# cp1125 codec# cp1140 codec# cp1250 codec# cp1251 codec# cp1252 codec# cp1253 codec# cp1254 codec# cp1255 codec# cp1256 codec# cp1257 codec# cp1258 codec# cp273 codec# cp424 codec# cp437 codec# cp500 codec# cp775 codec# cp850 codec# cp852 codec# cp855 codec# cp857 codec# cp858 codec# cp860 codec# cp861 codec# cp862 codec# cp863 codec# cp864 codec# cp865 codec# cp866 codec# cp869 codec# cp932 codec# cp949 codec# cp950 codec# euc_jis_2004 codec# euc_jisx0213 codec# euc_jp codec# euc_kr codec# gb18030 codec# gb2312 codec# gbk codec# hex_codec codec# hp_roman8 codec# hz codec# iso2022_jp codec# iso2022_jp_1 codec# iso2022_jp_2 codec# iso2022_jp_2004 codec# iso2022_jp_3 codec# iso2022_jp_ext codec# iso2022_kr codec# iso8859_10 codec# iso8859_11 codec# iso8859_13 codec# iso8859_14 codec# iso8859_15 codec# iso8859_16 codec# iso8859_2 codec# iso8859_3 codec# iso8859_4 codec# iso8859_5 codec# iso8859_6 codec# iso8859_7 codec# iso8859_8 codec# iso8859_9 codec# johab codec# koi8_r codec# kz1048 codec# latin_1 codec# Note that the latin_1 codec is implemented internally in C and a# lot faster than the charmap codec iso8859_1 which uses the same# encoding. This is why we discourage the use of the iso8859_1# codec and alias it to latin_1 instead.# mac_cyrillic codec# mac_greek codec# mac_iceland codec# mac_latin2 codec# mac_roman codec# mac_turkish codec# mbcs codec# ptcp154 codec# quopri_codec codec# rot_13 codec# shift_jis codec# shift_jis_2004 codec# shift_jisx0213 codec# tactis codec# tis_620 codec# utf_16 codec# utf_16_be codec# utf_16_le codec# utf_32 codec# utf_32_be codec# utf_32_le codec# utf_7 codec# utf_8 codec# uu_codec codec# zlib_codec codec# temporary mac CJK aliases, will be replaced by proper codecs in 3.1b' Encoding Aliases Support + + This module is used by the encodings package search function to + map encodings names to module names. + + Note that the search function normalizes the encoding names before + doing the lookup, so the mapping will have to map normalized + encoding names to module names. + + Contents: + + The following aliases dictionary contains mappings of all IANA + character set names for which the Python core library provides + codecs. In addition to these, a few Python specific codec + aliases have also been added. + +'u' Encoding Aliases Support + + This module is used by the encodings package search function to + map encodings names to module names. + + Note that the search function normalizes the encoding names before + doing the lookup, so the mapping will have to map normalized + encoding names to module names. 
+ + Contents: + + The following aliases dictionary contains mappings of all IANA + character set names for which the Python core library provides + codecs. In addition to these, a few Python specific codec + aliases have also been added. + +'b'646'u'646'b'ansi_x3.4_1968'u'ansi_x3.4_1968'b'ansi_x3_4_1968'u'ansi_x3_4_1968'b'ansi_x3.4_1986'u'ansi_x3.4_1986'b'cp367'u'cp367'b'csascii'u'csascii'b'ibm367'u'ibm367'b'iso646_us'u'iso646_us'b'iso_646.irv_1991'u'iso_646.irv_1991'b'iso_ir_6'u'iso_ir_6'b'us'u'us'b'us_ascii'u'us_ascii'b'base64_codec'u'base64_codec'b'base64'u'base64'b'base_64'u'base_64'b'big5'u'big5'b'big5_tw'u'big5_tw'b'csbig5'u'csbig5'b'big5hkscs'u'big5hkscs'b'big5_hkscs'u'big5_hkscs'b'hkscs'u'hkscs'b'bz2_codec'u'bz2_codec'b'cp037'u'cp037'b'037'u'037'b'csibm037'u'csibm037'b'ebcdic_cp_ca'u'ebcdic_cp_ca'b'ebcdic_cp_nl'u'ebcdic_cp_nl'b'ebcdic_cp_us'u'ebcdic_cp_us'b'ebcdic_cp_wt'u'ebcdic_cp_wt'b'ibm037'u'ibm037'b'ibm039'u'ibm039'b'cp1026'u'cp1026'b'1026'u'1026'b'csibm1026'u'csibm1026'b'ibm1026'u'ibm1026'b'cp1125'u'cp1125'b'1125'u'1125'b'ibm1125'u'ibm1125'b'cp866u'u'cp866u'b'ruscii'u'ruscii'b'cp1140'u'cp1140'b'1140'u'1140'b'ibm1140'u'ibm1140'b'cp1250'u'cp1250'b'1250'u'1250'b'windows_1250'u'windows_1250'b'cp1251'u'cp1251'b'1251'u'1251'b'windows_1251'u'windows_1251'b'cp1252'u'cp1252'b'1252'u'1252'b'windows_1252'u'windows_1252'b'cp1253'u'cp1253'b'1253'u'1253'b'windows_1253'u'windows_1253'b'cp1254'u'cp1254'b'1254'u'1254'b'windows_1254'u'windows_1254'b'cp1255'u'cp1255'b'1255'u'1255'b'windows_1255'u'windows_1255'b'cp1256'u'cp1256'b'1256'u'1256'b'windows_1256'u'windows_1256'b'cp1257'u'cp1257'b'1257'u'1257'b'windows_1257'u'windows_1257'b'cp1258'u'cp1258'b'1258'u'1258'b'windows_1258'u'windows_1258'b'cp273'u'cp273'b'273'u'273'b'ibm273'u'ibm273'b'csibm273'u'csibm273'b'cp424'u'cp424'b'424'u'424'b'csibm424'u'csibm424'b'ebcdic_cp_he'u'ebcdic_cp_he'b'ibm424'u'ibm424'b'cp437'u'cp437'b'437'u'437'b'cspc8codepage437'u'cspc8codepage437'b'ibm437'u'ibm437'b'cp500'u'cp500'b'500'u'500'b'csibm500'u'csibm500'b'ebcdic_cp_be'u'ebcdic_cp_be'b'ebcdic_cp_ch'u'ebcdic_cp_ch'b'ibm500'u'ibm500'b'cp775'u'cp775'b'775'u'775'b'cspc775baltic'u'cspc775baltic'b'ibm775'u'ibm775'b'cp850'u'cp850'b'850'u'850'b'cspc850multilingual'u'cspc850multilingual'b'ibm850'u'ibm850'b'cp852'u'cp852'b'852'u'852'b'cspcp852'u'cspcp852'b'ibm852'u'ibm852'b'cp855'u'cp855'b'855'u'855'b'csibm855'u'csibm855'b'ibm855'u'ibm855'b'cp857'u'cp857'b'857'u'857'b'csibm857'u'csibm857'b'ibm857'u'ibm857'b'cp858'u'cp858'b'858'u'858'b'csibm858'u'csibm858'b'ibm858'u'ibm858'b'cp860'u'cp860'b'860'u'860'b'csibm860'u'csibm860'b'ibm860'u'ibm860'b'cp861'u'cp861'b'861'u'861'b'cp_is'u'cp_is'b'csibm861'u'csibm861'b'ibm861'u'ibm861'b'cp862'u'cp862'b'862'u'862'b'cspc862latinhebrew'u'cspc862latinhebrew'b'ibm862'u'ibm862'b'cp863'u'cp863'b'863'u'863'b'csibm863'u'csibm863'b'ibm863'u'ibm863'b'cp864'u'cp864'b'864'u'864'b'csibm864'u'csibm864'b'ibm864'u'ibm864'b'cp865'u'cp865'b'865'u'865'b'csibm865'u'csibm865'b'ibm865'u'ibm865'b'cp866'u'cp866'b'866'u'866'b'csibm866'u'csibm866'b'ibm866'u'ibm866'b'cp869'u'cp869'b'869'u'869'b'cp_gr'u'cp_gr'b'csibm869'u'csibm869'b'ibm869'u'ibm869'b'cp932'u'cp932'b'932'u'932'b'ms932'u'ms932'b'mskanji'u'mskanji'b'ms_kanji'u'ms_kanji'b'cp949'u'cp949'b'949'u'949'b'ms949'u'ms949'b'uhc'u'uhc'b'cp950'u'cp950'b'950'u'950'b'ms950'u'ms950'b'euc_jis_2004'u'euc_jis_2004'b'jisx0213'u'jisx0213'b'eucjis2004'u'eucjis2004'b'euc_jis2004'u'euc_jis2004'b'euc_jisx0213'u'euc_jisx0213'b'eucjisx0213'u'eucjisx0213'b'euc_jp'u'euc_jp'b'eucjp'u'eucjp'b'ujis'u'ujis'b'u_jis'u'u_jis'b'euc_kr'u'
euc_kr'b'euckr'u'euckr'b'korean'u'korean'b'ksc5601'u'ksc5601'b'ks_c_5601'u'ks_c_5601'b'ks_c_5601_1987'u'ks_c_5601_1987'b'ksx1001'u'ksx1001'b'ks_x_1001'u'ks_x_1001'b'gb18030'u'gb18030'b'gb18030_2000'u'gb18030_2000'b'gb2312'u'gb2312'b'chinese'u'chinese'b'csiso58gb231280'u'csiso58gb231280'b'euc_cn'u'euc_cn'b'euccn'u'euccn'b'eucgb2312_cn'u'eucgb2312_cn'b'gb2312_1980'u'gb2312_1980'b'gb2312_80'u'gb2312_80'b'iso_ir_58'u'iso_ir_58'b'gbk'u'gbk'b'936'u'936'b'cp936'u'cp936'b'ms936'u'ms936'b'hex_codec'u'hex_codec'b'hex'u'hex'b'hp_roman8'u'hp_roman8'b'roman8'u'roman8'b'r8'u'r8'b'csHPRoman8'u'csHPRoman8'b'cp1051'u'cp1051'b'ibm1051'u'ibm1051'b'hz'u'hz'b'hzgb'u'hzgb'b'hz_gb'u'hz_gb'b'hz_gb_2312'u'hz_gb_2312'b'iso2022_jp'u'iso2022_jp'b'csiso2022jp'u'csiso2022jp'b'iso2022jp'u'iso2022jp'b'iso_2022_jp'u'iso_2022_jp'b'iso2022_jp_1'u'iso2022_jp_1'b'iso2022jp_1'u'iso2022jp_1'b'iso_2022_jp_1'u'iso_2022_jp_1'b'iso2022_jp_2'u'iso2022_jp_2'b'iso2022jp_2'u'iso2022jp_2'b'iso_2022_jp_2'u'iso_2022_jp_2'b'iso2022_jp_2004'u'iso2022_jp_2004'b'iso_2022_jp_2004'u'iso_2022_jp_2004'b'iso2022jp_2004'u'iso2022jp_2004'b'iso2022_jp_3'u'iso2022_jp_3'b'iso2022jp_3'u'iso2022jp_3'b'iso_2022_jp_3'u'iso_2022_jp_3'b'iso2022_jp_ext'u'iso2022_jp_ext'b'iso2022jp_ext'u'iso2022jp_ext'b'iso_2022_jp_ext'u'iso_2022_jp_ext'b'iso2022_kr'u'iso2022_kr'b'csiso2022kr'u'csiso2022kr'b'iso2022kr'u'iso2022kr'b'iso_2022_kr'u'iso_2022_kr'b'iso8859_10'u'iso8859_10'b'csisolatin6'u'csisolatin6'b'iso_8859_10'u'iso_8859_10'b'iso_8859_10_1992'u'iso_8859_10_1992'b'iso_ir_157'u'iso_ir_157'b'l6'u'l6'b'latin6'u'latin6'b'iso8859_11'u'iso8859_11'b'thai'u'thai'b'iso_8859_11'u'iso_8859_11'b'iso_8859_11_2001'u'iso_8859_11_2001'b'iso8859_13'u'iso8859_13'b'iso_8859_13'u'iso_8859_13'b'l7'u'l7'b'latin7'u'latin7'b'iso8859_14'u'iso8859_14'b'iso_8859_14'u'iso_8859_14'b'iso_8859_14_1998'u'iso_8859_14_1998'b'iso_celtic'u'iso_celtic'b'iso_ir_199'u'iso_ir_199'b'l8'u'l8'b'latin8'u'latin8'b'iso8859_15'u'iso8859_15'b'iso_8859_15'u'iso_8859_15'b'l9'u'l9'b'latin9'u'latin9'b'iso8859_16'u'iso8859_16'b'iso_8859_16'u'iso_8859_16'b'iso_8859_16_2001'u'iso_8859_16_2001'b'iso_ir_226'u'iso_ir_226'b'l10'u'l10'b'latin10'u'latin10'b'iso8859_2'u'iso8859_2'b'csisolatin2'u'csisolatin2'b'iso_8859_2'u'iso_8859_2'b'iso_8859_2_1987'u'iso_8859_2_1987'b'iso_ir_101'u'iso_ir_101'b'l2'u'l2'b'latin2'u'latin2'b'iso8859_3'u'iso8859_3'b'csisolatin3'u'csisolatin3'b'iso_8859_3'u'iso_8859_3'b'iso_8859_3_1988'u'iso_8859_3_1988'b'iso_ir_109'u'iso_ir_109'b'l3'u'l3'b'latin3'u'latin3'b'iso8859_4'u'iso8859_4'b'csisolatin4'u'csisolatin4'b'iso_8859_4'u'iso_8859_4'b'iso_8859_4_1988'u'iso_8859_4_1988'b'iso_ir_110'u'iso_ir_110'b'l4'u'l4'b'latin4'u'latin4'b'iso8859_5'u'iso8859_5'b'csisolatincyrillic'u'csisolatincyrillic'b'cyrillic'u'cyrillic'b'iso_8859_5'u'iso_8859_5'b'iso_8859_5_1988'u'iso_8859_5_1988'b'iso_ir_144'u'iso_ir_144'b'iso8859_6'u'iso8859_6'b'arabic'u'arabic'b'asmo_708'u'asmo_708'b'csisolatinarabic'u'csisolatinarabic'b'ecma_114'u'ecma_114'b'iso_8859_6'u'iso_8859_6'b'iso_8859_6_1987'u'iso_8859_6_1987'b'iso_ir_127'u'iso_ir_127'b'iso8859_7'u'iso8859_7'b'csisolatingreek'u'csisolatingreek'b'ecma_118'u'ecma_118'b'elot_928'u'elot_928'b'greek'u'greek'b'greek8'u'greek8'b'iso_8859_7'u'iso_8859_7'b'iso_8859_7_1987'u'iso_8859_7_1987'b'iso_ir_126'u'iso_ir_126'b'iso8859_8'u'iso8859_8'b'csisolatinhebrew'u'csisolatinhebrew'b'hebrew'u'hebrew'b'iso_8859_8'u'iso_8859_8'b'iso_8859_8_1988'u'iso_8859_8_1988'b'iso_ir_138'u'iso_ir_138'b'iso8859_9'u'iso8859_9'b'csisolatin5'u'csisolatin5'b'iso_8859_9'u'iso_8859_9'b'iso_8859_9_1989'u'iso_8859_9_1
989'b'iso_ir_148'u'iso_ir_148'b'l5'u'l5'b'latin5'u'latin5'b'johab'u'johab'b'cp1361'u'cp1361'b'ms1361'u'ms1361'b'koi8_r'u'koi8_r'b'cskoi8r'u'cskoi8r'b'kz1048'u'kz1048'b'kz_1048'u'kz_1048'b'rk1048'u'rk1048'b'strk1048_2002'u'strk1048_2002'b'latin_1'u'latin_1'b'8859'u'8859'b'cp819'u'cp819'b'csisolatin1'u'csisolatin1'b'ibm819'u'ibm819'b'iso8859'u'iso8859'b'iso8859_1'u'iso8859_1'b'iso_8859_1'u'iso_8859_1'b'iso_8859_1_1987'u'iso_8859_1_1987'b'iso_ir_100'u'iso_ir_100'b'l1'u'l1'b'latin'u'latin'b'latin1'u'latin1'b'mac_cyrillic'u'mac_cyrillic'b'maccyrillic'u'maccyrillic'b'mac_greek'u'mac_greek'b'macgreek'u'macgreek'b'mac_iceland'u'mac_iceland'b'maciceland'u'maciceland'b'mac_latin2'u'mac_latin2'b'maccentraleurope'u'maccentraleurope'b'maclatin2'u'maclatin2'b'mac_roman'u'mac_roman'b'macintosh'u'macintosh'b'macroman'u'macroman'b'mac_turkish'u'mac_turkish'b'macturkish'u'macturkish'b'mbcs'u'mbcs'b'ansi'u'ansi'b'dbcs'u'dbcs'b'ptcp154'u'ptcp154'b'csptcp154'u'csptcp154'b'pt154'u'pt154'b'cp154'u'cp154'b'cyrillic_asian'u'cyrillic_asian'b'quopri_codec'u'quopri_codec'b'quopri'u'quopri'b'quoted_printable'u'quoted_printable'b'quotedprintable'u'quotedprintable'b'rot_13'u'rot_13'b'rot13'u'rot13'b'shift_jis'u'shift_jis'b'csshiftjis'u'csshiftjis'b'shiftjis'u'shiftjis'b'sjis'u'sjis'b's_jis'u's_jis'b'shift_jis_2004'u'shift_jis_2004'b'shiftjis2004'u'shiftjis2004'b'sjis_2004'u'sjis_2004'b's_jis_2004'u's_jis_2004'b'shift_jisx0213'u'shift_jisx0213'b'shiftjisx0213'u'shiftjisx0213'b'sjisx0213'u'sjisx0213'b's_jisx0213'u's_jisx0213'b'tactis'u'tactis'b'tis260'u'tis260'b'tis_620'u'tis_620'b'tis620'u'tis620'b'tis_620_0'u'tis_620_0'b'tis_620_2529_0'u'tis_620_2529_0'b'tis_620_2529_1'u'tis_620_2529_1'b'iso_ir_166'u'iso_ir_166'b'utf_16'u'utf_16'b'u16'u'u16'b'utf16'u'utf16'b'utf_16_be'u'utf_16_be'b'unicodebigunmarked'u'unicodebigunmarked'b'utf_16be'u'utf_16be'b'utf_16_le'u'utf_16_le'b'unicodelittleunmarked'u'unicodelittleunmarked'b'utf_16le'u'utf_16le'b'utf_32'u'utf_32'b'u32'u'u32'b'utf32'u'utf32'b'utf_32_be'u'utf_32_be'b'utf_32be'u'utf_32be'b'utf_32_le'u'utf_32_le'b'utf_32le'u'utf_32le'b'utf_7'u'utf_7'b'u7'u'u7'b'utf7'u'utf7'b'unicode_1_1_utf_7'u'unicode_1_1_utf_7'b'utf_8'u'utf_8'b'u8'u'u8'b'utf'u'utf'b'utf8'u'utf8'b'utf8_ucs2'u'utf8_ucs2'b'utf8_ucs4'u'utf8_ucs4'b'uu_codec'u'uu_codec'b'uu'u'uu'b'zlib_codec'u'zlib_codec'b'zlib'u'zlib'b'x_mac_japanese'u'x_mac_japanese'b'x_mac_korean'u'x_mac_korean'b'x_mac_simp_chinese'u'x_mac_simp_chinese'b'x_mac_trad_chinese'u'x_mac_trad_chinese'u'encodings.aliases'u'aliases'Command-line parsing library + +This module is an optparse-inspired command-line parsing library that: + + - handles both optional and positional arguments + - produces highly informative usage messages + - supports parsers that dispatch to sub-parsers + +The following is a simple usage example that sums integers from the +command-line and writes the result to a file:: + + parser = argparse.ArgumentParser( + description='sum the integers at the command line') + parser.add_argument( + 'integers', metavar='int', nargs='+', type=int, + help='an integer to be summed') + parser.add_argument( + '--log', default=sys.stdout, type=argparse.FileType('w'), + help='the file where the sum should be written') + args = parser.parse_args() + args.log.write('%s' % sum(args.integers)) + args.log.close() + +The module contains the following public classes: + + - ArgumentParser -- The main entry point for command-line parsing. 
As the + example above shows, the add_argument() method is used to populate + the parser with actions for optional and positional arguments. Then + the parse_args() method is invoked to convert the args at the + command-line into an object with attributes. + + - ArgumentError -- The exception raised by ArgumentParser objects when + there are errors with the parser's actions. Errors raised while + parsing the command-line are caught by ArgumentParser and emitted + as command-line messages. + + - FileType -- A factory for defining types of files to be created. As the + example above shows, instances of FileType are typically passed as + the type= argument of add_argument() calls. + + - Action -- The base class for parser actions. Typically actions are + selected by passing strings like 'store_true' or 'append_const' to + the action= argument of add_argument(). However, for greater + customization of ArgumentParser actions, subclasses of Action may + be defined and passed as the action= argument. + + - HelpFormatter, RawDescriptionHelpFormatter, RawTextHelpFormatter, + ArgumentDefaultsHelpFormatter -- Formatter classes which + may be passed as the formatter_class= argument to the + ArgumentParser constructor. HelpFormatter is the default, + RawDescriptionHelpFormatter and RawTextHelpFormatter tell the parser + not to change the formatting for help text, and + ArgumentDefaultsHelpFormatter adds information about argument defaults + to the help. + +All other classes in this module are considered implementation details. +(Also note that HelpFormatter and RawDescriptionHelpFormatter are only +considered public as object names -- the API of the formatter objects is +still considered an implementation detail.) +1.1ArgumentParserArgumentTypeErrorFileTypeHelpFormatterArgumentDefaultsHelpFormatterRawDescriptionHelpFormatterRawTextHelpFormatterMetavarTypeHelpFormatterNamespaceActionONE_OR_MOREOPTIONALPARSERREMAINDERSUPPRESSZERO_OR_MORE_shutilngettext==SUPPRESS==A......_unrecognized_args_UNRECOGNIZED_ARGS_ATTR_AttributeHolderAbstract base class that provides __repr__. + + The __repr__ method returns a string in the format:: + ClassName(attr=name, attr=name, ...) + The attributes are determined either by a class-level attribute, + '_kwarg_names', or by inspecting the instance __dict__. + type_namearg_stringsstar_args_get_args_get_kwargs%s=%r**%s_copy_itemsFormatter for generating usage messages and argument help strings. + + Only the name of this class is considered a public API. All the methods + provided by the class are considered an implementation detail. 
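As a hedged illustration of the formatter_class= argument described above, ArgumentDefaultsHelpFormatter can be swapped in without touching any other parser code; the program name and options below are invented for the example.

```python
# Hypothetical program name and options, shown only to exercise the hook.
import argparse

parser = argparse.ArgumentParser(
    prog="frobnicate",
    description="demonstrate formatter_class=",
    formatter_class=argparse.ArgumentDefaultsHelpFormatter,
)
parser.add_argument("--retries", type=int, default=3,
                    help="number of retry attempts")
parser.add_argument("--verbose", action="store_true",
                    help="enable chatty output")

parser.print_help()  # help entries now end with e.g. "(default: 3)"
```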
+ progindent_incrementmax_help_positionget_terminal_sizecolumns_prog_indent_increment_max_help_position_width_current_indent_level_action_max_length_Section_root_section_current_section\s+_whitespace_matcher\n\n\n+_long_break_matcher_indent_dedentIndent decreased below 0.headingformat_help_join_partsitem_helpcurrent_indent%*s%s: +_add_itemstart_sectionsectionend_sectionadd_text_format_textadd_usageusageactions_format_usageadd_argumentactionhelp_format_action_invocationget_invocationinvocationssubaction_iter_indented_subactionsinvocation_lengthaction_length_format_actionadd_arguments + +part_stringsusage: %(prog)soptionalspositionalsoption_strings_format_actions_usageaction_usagetext_width\(.*?\)+(?=\s|$)|\[.*?\]+(?=\s|$)|\S+r'\(.*?\)+(?=\s|$)|'r'\[.*?\]+(?=\s|$)|'r'\S+'part_regexpopt_usagepos_usageopt_partspos_partsget_linesindentline_len0.75%s%s + +group_actionsinserts_group_actions [_get_default_metavar_for_positional_format_argsoption_stringnargs_get_default_metavar_for_optionalargs_string%s %s[\[(][\])](%s) \1 (%s)%s *%s\(([^|]*)\)%(prog)_fill_texthelp_positionhelp_widthaction_widthaction_headertup%*s%s +%*s%-*s indent_first_expand_helphelp_text_split_lineshelp_lines_metavar_formattermetavardefault_metavarchoiceschoicechoice_strstuple_sizeget_metavar[%s [%s ...]]%s [%s ...]%s ...formatsinvalid nargs valueparamschoices_str_get_help_string_get_subactionsget_subactionstextwrapwrapinitial_indentsubsequent_indentdestHelp message formatter which retains any formatting in descriptions. + + Only the name of this class is considered a public API. All the methods + provided by the class are considered an implementation detail. + Help message formatter which retains formatting of all help text. + + Only the name of this class is considered a public API. All the methods + provided by the class are considered an implementation detail. + Help message formatter which adds default values to argument help. + + Only the name of this class is considered a public API. All the methods + provided by the class are considered an implementation detail. + %(default)defaulting_nargs (default: %(default)s)Help message formatter which uses the argument 'type' as the default + metavar value (instead of the argument 'dest') + + Only the name of this class is considered a public API. All the methods + provided by the class are considered an implementation detail. + _get_action_nameargumentAn error from creating or using an argument (optional or positional). + + The string value of this exception is the message, augmented with + information about the argument that caused it. + argument_nameargument %(argument_name)s: %(message)sAn error from trying to convert a command line string to a type.Information about how to convert command line strings to Python objects. + + Action objects are used by an ArgumentParser to represent the information + needed to parse a single argument from one or more strings from the + command line. The keyword arguments to the Action constructor are also + all attributes of Action instances. + + Keyword Arguments: + + - option_strings -- A list of command-line option strings which + should be associated with this action. + + - dest -- The name of the attribute to hold the created object(s) + + - nargs -- The number of command-line arguments that should be + consumed. By default, one argument will be consumed and a single + value will be produced. Other values include: + - N (an integer) consumes N arguments (and produces a list) + - '?' 
consumes zero or one arguments + - '*' consumes zero or more arguments (and produces a list) + - '+' consumes one or more arguments (and produces a list) + Note that the difference between the default and nargs=1 is that + with the default, a single value will be produced, while with + nargs=1, a list containing a single value will be produced. + + - const -- The value to be produced if the option is specified and the + option uses an action that takes no values. + + - default -- The value to be produced if the option is not specified. + + - type -- A callable that accepts a single string argument, and + returns the converted value. The standard Python types str, int, + float, and complex are useful examples of such callables. If None, + str is used. + + - choices -- A container of values that should be allowed. If not None, + after a command-line argument has been converted to the appropriate + type, an exception will be raised if it is not a member of this + collection. + + - required -- True if the action must always be specified at the + command line. This is only meaningful for optional command-line + arguments. + + - help -- The help string describing the argument. + + - metavar -- The name to be used for the option's argument with the + help string. If None, the 'dest' value will be used as the name. + const.__call__() not defined_StoreActionnargs for store actions must be != 0; if you have nothing to store, actions such as store true or store const may be more appropriate'nargs for store actions must be != 0; if you ''have nothing to store, actions such as store ''true or store const may be more appropriate'nargs must be %r to supply const_StoreConstAction_StoreTrueAction_StoreFalseAction_AppendActionnargs for append actions must be != 0; if arg strings are not supplying the value to append, the append const action may be more appropriate'nargs for append actions must be != 0; if arg ''strings are not supplying the value to append, ''the append const action may be more appropriate'_AppendConstAction_CountAction_HelpActionprint_help_VersionActionshow program's version number and exit_get_formatter_print_message_SubParsersAction_ChoicesPseudoActionsupparser_class_prog_prefix_parser_class_name_parser_map_choices_actionsadd_parserchoice_actionparser_nameunknown parser %(parser_name)r (choices: %(choices)s)parse_known_argssubnamespace_ExtendActionFactory for creating file object types + + Instances of FileType are typically passed as type= arguments to the + ArgumentParser add_argument() method. + + Keyword Arguments: + - mode -- A string indicating how the file is to be opened. Accepts the + same values as the builtin open() function. + - bufsize -- The file's desired buffer size. Accepts the same values as + the builtin open() function. + - encoding -- The file's encoding. Accepts the same values as the + builtin open() function. + - errors -- A string indicating how encoding and decoding errors are to + be handled. Accepts the same value as the builtin open() function. + bufsize_bufsize_encoding_errorsargument "-" with mode %rcan't open '%(filename)s': %(error)sargs_strSimple object for storing attributes. + + Implements equality by attribute names and values, and provides a simple + string representation. 
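The Action keyword arguments documented above can be exercised by subclassing Action directly and passing the subclass as action=. A small sketch under assumed behaviour (an invented --tag option that collects unique, lower-cased values); it is not part of the dumped source.

```python
# The option name --tag and the de-duplicating behaviour are assumptions.
import argparse


class UniqueLowerTags(argparse.Action):
    def __call__(self, parser, namespace, values, option_string=None):
        # With the default nargs, `values` is a single string.
        tags = getattr(namespace, self.dest, None) or []
        value = values.lower()
        if value not in tags:
            tags.append(value)
        setattr(namespace, self.dest, tags)


parser = argparse.ArgumentParser()
parser.add_argument("--tag", action=UniqueLowerTags, default=[])

args = parser.parse_args(["--tag", "Web", "--tag", "web", "--tag", "db"])
print(args.tag)  # ['web', 'db']
```

Because nargs is left at its default here, __call__ receives a single string in values, matching the note above that the default consumes one argument and produces a single value rather than a list.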
+ _ActionsContainerprefix_charsargument_defaultconflict_handler_registriesstorestore_conststore_truestore_falseappend_const_get_handler_actions_option_string_actions_action_groups_mutually_exclusive_groups_defaults^-\d+$|^-\d*\.\d+$_negative_number_matcher_has_negative_number_optionalsregistry_name_registry_getset_defaultsget_default + add_argument(dest, ..., name=value, ...) + add_argument(option_string, option_string, ..., name=value, ...) + dest supplied twice for positional argument_get_positional_kwargs_get_optional_kwargs_pop_action_classaction_classunknown action "%s"type_func%r is not callable%r is a FileType class object, instance of it must be passed'%r is a FileType class object, instance of it'' must be passed'length of metavar tuple does not match nargs_add_actionadd_argument_group_ArgumentGroupadd_mutually_exclusive_group_MutuallyExclusiveGroup_check_conflictcontainer_remove_action_add_container_actionstitle_group_mapcannot merge actions - two groups are named %rgroup_mapmutex_group'required' is an invalid argument for positionalslong_option_stringsinvalid option string %(option)r: must start with a character %(prefix_chars)r'invalid option string %(option)r: ''must start with a character %(prefix_chars)r'dest_option_stringdest= is required for options like %r_handle_conflict_%shandler_func_nameinvalid conflict_resolution value: %rconfl_optionalsconfl_optional_handle_conflict_errorconflicting_actionsconflicting option string: %sconflicting option strings: %sconflict_string_handle_conflict_resolvesuper_init_containermutually exclusive arguments must be optionalObject for parsing command line strings into Python objects. + + Keyword Arguments: + - prog -- The name of the program (default: sys.argv[0]) + - usage -- A usage message (default: auto-generated from arguments) + - description -- A description of what the program does + - epilog -- Text following the argument descriptions + - parents -- Parsers whose arguments should be copied into this one + - formatter_class -- HelpFormatter class for printing help messages + - prefix_chars -- Characters that prefix optional arguments + - fromfile_prefix_chars -- Characters that prefix files containing + additional arguments + - argument_default -- The default value for all arguments + - conflict_handler -- String indicating how to handle conflicts + - add_help -- Add a -h/-help option + - allow_abbrev -- Allow long options to be abbreviated unambiguously + epilogformatter_classfromfile_prefix_charsadd_helpallow_abbrevsuperinitadd_grouppositional arguments_positionalsoptional arguments_optionals_subparsersidentitydefault_prefixshow this help message and exitadd_subparserscannot have multiple subparser argumentssubcommands_get_positional_actionsparsers_class_get_optional_actionsparse_argsunrecognized arguments: %s_parse_known_args_read_args_from_filesaction_conflictsmutex_actionconflictsoption_string_indicesarg_string_pattern_partsarg_strings_iterarg_string_parse_optionaloption_tuplearg_strings_patternseen_actionsseen_non_default_actionstake_actionargument_strings_get_valuesargument_valuesconflict_actionnot allowed with argument %saction_nameconsume_optionalstart_indexexplicit_arg_match_argumentmatch_argumentaction_tuplesextrasarg_countnew_explicit_argoptionals_mapignored explicit argument %rselected_patternsconsume_positionals_match_arguments_partialmatch_partialselected_patternarg_countsmax_option_string_indexnext_option_string_indexpositionals_end_indexstringsstop_indexrequired_actionsthe following arguments are required: %sone of 
the arguments %s is requirednew_arg_stringsargs_filearg_lineconvert_arg_line_to_args_get_nargs_patternnargs_patternexpected one argumentexpected at most one argumentexpected at least one argumentnargs_errorsexpected %s argumentexpected %s argumentsactions_slice_get_option_tuplesoption_tuplesambiguous option: %(option)s could match %(matches)soption_prefixshort_option_prefixshort_explicit_argunexpected option string: %s(-*A-*)(-*A?-*)(-*[A-]*)(-*A[A-]*)([-AO]*)(-*A[-AO]*)(-*-*)(-*%s-*)-*parse_intermixed_argsparse_known_intermixed_argsparse_intermixed_args: positional arg with nargs=%s'parse_intermixed_args: positional arg'' with nargs=%s'parse_intermixed_args: positional in mutuallyExclusiveGroup'parse_intermixed_args: positional in'' mutuallyExclusiveGroup'save_usageformat_usagesave_nargssave_defaultremaining_argsDo not expect %s in %ssave_required_check_valueinvalid %(type)s value: %(value)rinvalid choice: %(value)r (choose from %(choices)s)action_groupprint_usageerror(message: string) + + Prints a usage message incorporating the message to stderr and + exits. + + If you override this in a subclass, it should not return -- it + should either exit or raise an exception. + %(prog)s: error: %(message)s +# Author: Steven J. Bethard .# New maintainer as of 29 August 2019: Raymond Hettinger # =============================# Utility functions and classes# The copy module is used only in the 'append' and 'append_const'# actions, and it is needed only when the default value isn't a list.# Delay its import for speeding up the common case.# ===============# Formatting Help# default setting for width# ===============================# Section and indentation methods# format the indented section# return nothing if the section was empty# add the heading if the section was non-empty# join the section-initial newline, the heading and the help# ========================# Message building methods# find all invocations# update the maximum item length# add the item to the list# =======================# Help-formatting methods# if usage is specified, use that# if no optionals or positionals are available, usage is just prog# if optionals and positionals are available, calculate usage# split optionals from positionals# build full usage string# wrap the usage parts if it's too long# break usage into wrappable parts# helper for wrapping lines# if prog is short, follow it with optionals or positionals# if prog is long, put it on its own line# join lines into usage# prefix with 'usage:'# find group indices and identify actions in groups# collect all actions format strings# suppressed arguments are marked with None# remove | separators for suppressed arguments# produce all arg strings# if it's in a group, strip the outer []# add the action string to the list# produce the first way to invoke the option in brackets# if the Optional doesn't take a value, format is:# -s or --long# if the Optional takes a value, format is:# -s ARGS or --long ARGS# make it look optional if it's not required or in a group# insert things at the necessary indices# join all the action items with spaces# clean up separators for mutually exclusive groups# return the text# determine the required width and the entry label# no help; start on same line and add a final newline# short action name; start on the same line and pad two spaces# long action name; start on the next line# collect the pieces of the action help# if there was help for the action, add lines of help text# or add a newline if the description doesn't end with one# if there are any 
sub-actions, add their help as well# return a single string# -s, --long# -s ARGS, --long ARGS# The textwrap module is used only for formatting help.# Delay its import for speeding up the common usage of argparse.# =====================# Options and Arguments# ==============# Action classes# set prog from the existing prefix# create a pseudo-action to hold the choice help# create the parser and add it to the map# make parser available under aliases also# set the parser name if requested# select the parser# parse all the remaining options into the namespace# store any unrecognized options on the object, so that the top# level parser can decide what to do with them# In case this subparser defines new defaults, we parse them# in a new namespace object and then update the original# namespace for the relevant parts.# Type classes# the special argument "-" means sys.std{in,out}# all other arguments are used as file names# ===========================# Optional and Positional Parsing# set up registries# register actions# raise an exception if the conflict handler is invalid# action storage# groups# defaults storage# determines whether an "option" looks like a negative number# whether or not there are any optionals that look like negative# numbers -- uses a list so it can be shared and edited# ====================# Registration methods# ==================================# Namespace default accessor methods# if these defaults match any existing arguments, replace# the previous default on the object with the new one# Adding argument actions# if no positional args are supplied or only one is supplied and# it doesn't look like an option string, parse a positional# argument# otherwise, we're adding an optional argument# if no default was supplied, use the parser-level default# create the action object, and add it to the parser# raise an error if the action type is not callable# raise an error if the metavar does not match the type# resolve any conflicts# add to actions list# index the action by any option strings it has# set the flag if any option strings look like negative numbers# return the created action# collect groups by titles# map each action to its group# if a group with the title exists, use that, otherwise# create a new group matching the container's group# map the actions to their new group# add container's mutually exclusive groups# NOTE: if add_mutually_exclusive_group ever gains title= and# description= then this code will need to be expanded as above# map the actions to their new mutex group# add all actions to this container or their group# make sure required is not specified# mark positional arguments as required if at least one is# always required# return the keyword arguments with no option strings# determine short and long option strings# error on strings that don't start with an appropriate prefix# strings starting with two prefix characters are long options# infer destination, '--foo-bar' -> 'foo_bar' and '-x' -> 'x'# return the updated keyword arguments# determine function from conflict handler string# find all options that conflict with this option# remove all conflicting options# remove the conflicting option# if the option now has no option string, remove it from the# container holding it# add any missing keyword arguments by checking the container# group attributes# share most attributes with the container# default setting for prog# register types# add help argument if necessary# (using explicit default to override global argument_default)# add parent arguments and defaults# 
Pretty __repr__ methods# Optional/Positional adding methods# add the parser class to the arguments if it's not present# prog defaults to the usage message of this parser, skipping# optional arguments and with no "usage:" prefix# create the parsers action and add it to the positionals list# return the created parsers action# =====================================# Command line argument parsing methods# args default to the system args# make sure that args are mutable# default Namespace built from parser defaults# add any action defaults that aren't present# add any parser defaults that aren't present# parse the arguments and exit if there are any errors# replace arg strings that are file references# map all mutually exclusive arguments to the other arguments# they can't occur with# find all option indices, and determine the arg_string_pattern# which has an 'O' if there is an option at an index,# an 'A' if there is an argument, or a '-' if there is a '--'# all args after -- are non-options# otherwise, add the arg to the arg strings# and note the index if it was an option# join the pieces together to form the pattern# converts arg strings to the appropriate and then takes the action# error if this argument is not allowed with other previously# seen arguments, assuming that actions that use the default# value don't really count as "present"# take the action if we didn't receive a SUPPRESS value# (e.g. from a default)# function to convert arg_strings into an optional action# get the optional identified at this index# identify additional optionals in the same arg string# (e.g. -xyz is the same as -x -y -z if no args are required)# if we found no optional action, skip it# if there is an explicit argument, try to match the# optional's string arguments to only this# if the action is a single-dash option and takes no# arguments, try to parse more single-dash options out# of the tail of the option string# if the action expect exactly one argument, we've# successfully matched the option; exit the loop# error if a double-dash option did not use the# explicit argument# if there is no explicit argument, try to match the# optional's string arguments with the following strings# if successful, exit the loop# add the Optional to the list and return the index at which# the Optional's string args stopped# the list of Positionals left to be parsed; this is modified# by consume_positionals()# function to convert arg_strings into positional actions# match as many Positionals as possible# slice off the appropriate arg strings for each Positional# and add the Positional and its args to the list# slice off the Positionals that we just parsed and return the# index at which the Positionals' string args stopped# consume Positionals and Optionals alternately, until we have# passed the last option string# consume any Positionals preceding the next option# only try to parse the next optional if we didn't consume# the option string during the positionals parsing# if we consumed all the positionals we could and we're not# at the index of an option string, there were extra arguments# consume the next optional and any arguments for it# consume any positionals following the last Optional# if we didn't consume all the argument strings, there were extras# make sure all required actions were present and also convert# action defaults which were not given as arguments# Convert action default now instead of doing it before# parsing arguments to avoid calling convert functions# twice (which may fail) if the argument was given, but# 
only if it was defined already in the namespace# make sure all required groups had one option present# if no actions were used, report the error# return the updated namespace and the extra arguments# expand arguments referencing files# for regular arguments, just add them back into the list# replace arguments referencing files with the file content# return the modified argument list# match the pattern for this action to the arg strings# raise an exception if we weren't able to find a match# return the number of arguments matched# progressively shorten the actions list by slicing off the# final actions until we find a match# return the list of arg string counts# if it's an empty string, it was meant to be a positional# if it doesn't start with a prefix, it was meant to be positional# if the option string is present in the parser, return the action# if it's just a single character, it was meant to be positional# if the option string before the "=" is present, return the action# search through all possible prefixes of the option string# and all actions in the parser for possible interpretations# if multiple actions match, the option string was ambiguous# if exactly one action matched, this segmentation is good,# so return the parsed action# if it was not found as an option, but it looks like a negative# number, it was meant to be positional# unless there are negative-number-like options# if it contains a space, it was meant to be a positional# it was meant to be an optional but there is no such option# in this parser (though it might be a valid option in a subparser)# option strings starting with two prefix characters are only# split at the '='# single character options can be concatenated with their arguments# but multiple character options always have to have their argument# separate# shouldn't ever get here# return the collected option tuples# in all examples below, we have to allow for '--' args# which are represented as '-' in the pattern# the default (None) is assumed to be a single argument# allow zero or one arguments# allow zero or more arguments# allow one or more arguments# allow any number of options or arguments# allow one argument followed by any number of options or arguments# suppress action, like nargs=0# all others should be integers# if this is an optional action, -- is not allowed# return the pattern# Alt command line argument parsing, allowing free intermix# returns a namespace and list of extras# positional can be freely intermixed with optionals. optionals are# first parsed with all positional arguments deactivated. The 'extras'# are then parsed. If the parser definition is incompatible with the# intermixed assumptions (e.g. use of REMAINDER, subparsers) a# TypeError is raised.# positionals are 'deactivated' by setting nargs and default to# SUPPRESS. This blocks the addition of that positional to the# namespace# capture the full usage for use in error messages# deactivate positionals# action.nargs = 0# remove the empty positional values from namespace# restore nargs and usage before exiting# parse positionals. 
optionals aren't normally required, but# they could be, so make sure they aren't.# restore parser values before exiting# Value conversion methods# for everything but PARSER, REMAINDER args, strip out first '--'# optional argument produces a default when not present# when nargs='*' on a positional, if there were no command-line# args, use the default if it is anything other than None# single argument or optional argument produces a single value# REMAINDER arguments convert all values, checking none# PARSER arguments convert all values, but check only the first# SUPPRESS argument does not put anything in the namespace# all other types of nargs produce a list# return the converted value# convert the value to the appropriate type# ArgumentTypeErrors indicate errors# TypeErrors or ValueErrors also indicate errors# converted value must be one of the choices (if specified)# usage# description# positionals, optionals and user-defined groups# epilog# determine help from format above# Help-printing methods# Exiting methodsb'Command-line parsing library + +This module is an optparse-inspired command-line parsing library that: + + - handles both optional and positional arguments + - produces highly informative usage messages + - supports parsers that dispatch to sub-parsers + +The following is a simple usage example that sums integers from the +command-line and writes the result to a file:: + + parser = argparse.ArgumentParser( + description='sum the integers at the command line') + parser.add_argument( + 'integers', metavar='int', nargs='+', type=int, + help='an integer to be summed') + parser.add_argument( + '--log', default=sys.stdout, type=argparse.FileType('w'), + help='the file where the sum should be written') + args = parser.parse_args() + args.log.write('%s' % sum(args.integers)) + args.log.close() + +The module contains the following public classes: + + - ArgumentParser -- The main entry point for command-line parsing. As the + example above shows, the add_argument() method is used to populate + the parser with actions for optional and positional arguments. Then + the parse_args() method is invoked to convert the args at the + command-line into an object with attributes. + + - ArgumentError -- The exception raised by ArgumentParser objects when + there are errors with the parser's actions. Errors raised while + parsing the command-line are caught by ArgumentParser and emitted + as command-line messages. + + - FileType -- A factory for defining types of files to be created. As the + example above shows, instances of FileType are typically passed as + the type= argument of add_argument() calls. + + - Action -- The base class for parser actions. Typically actions are + selected by passing strings like 'store_true' or 'append_const' to + the action= argument of add_argument(). However, for greater + customization of ArgumentParser actions, subclasses of Action may + be defined and passed as the action= argument. + + - HelpFormatter, RawDescriptionHelpFormatter, RawTextHelpFormatter, + ArgumentDefaultsHelpFormatter -- Formatter classes which + may be passed as the formatter_class= argument to the + ArgumentParser constructor. HelpFormatter is the default, + RawDescriptionHelpFormatter and RawTextHelpFormatter tell the parser + not to change the formatting for help text, and + ArgumentDefaultsHelpFormatter adds information about argument defaults + to the help. + +All other classes in this module are considered implementation details. 
+(Also note that HelpFormatter and RawDescriptionHelpFormatter are only +considered public as object names -- the API of the formatter objects is +still considered an implementation detail.) +'u'Command-line parsing library + +This module is an optparse-inspired command-line parsing library that: + + - handles both optional and positional arguments + - produces highly informative usage messages + - supports parsers that dispatch to sub-parsers + +The following is a simple usage example that sums integers from the +command-line and writes the result to a file:: + + parser = argparse.ArgumentParser( + description='sum the integers at the command line') + parser.add_argument( + 'integers', metavar='int', nargs='+', type=int, + help='an integer to be summed') + parser.add_argument( + '--log', default=sys.stdout, type=argparse.FileType('w'), + help='the file where the sum should be written') + args = parser.parse_args() + args.log.write('%s' % sum(args.integers)) + args.log.close() + +The module contains the following public classes: + + - ArgumentParser -- The main entry point for command-line parsing. As the + example above shows, the add_argument() method is used to populate + the parser with actions for optional and positional arguments. Then + the parse_args() method is invoked to convert the args at the + command-line into an object with attributes. + + - ArgumentError -- The exception raised by ArgumentParser objects when + there are errors with the parser's actions. Errors raised while + parsing the command-line are caught by ArgumentParser and emitted + as command-line messages. + + - FileType -- A factory for defining types of files to be created. As the + example above shows, instances of FileType are typically passed as + the type= argument of add_argument() calls. + + - Action -- The base class for parser actions. Typically actions are + selected by passing strings like 'store_true' or 'append_const' to + the action= argument of add_argument(). However, for greater + customization of ArgumentParser actions, subclasses of Action may + be defined and passed as the action= argument. + + - HelpFormatter, RawDescriptionHelpFormatter, RawTextHelpFormatter, + ArgumentDefaultsHelpFormatter -- Formatter classes which + may be passed as the formatter_class= argument to the + ArgumentParser constructor. HelpFormatter is the default, + RawDescriptionHelpFormatter and RawTextHelpFormatter tell the parser + not to change the formatting for help text, and + ArgumentDefaultsHelpFormatter adds information about argument defaults + to the help. + +All other classes in this module are considered implementation details. +(Also note that HelpFormatter and RawDescriptionHelpFormatter are only +considered public as object names -- the API of the formatter objects is +still considered an implementation detail.) 
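The parser comments earlier in this dump also describe sub-command dispatch (including aliases) and how unrecognized strings are handed back to the caller; a small illustrative sketch, again with placeholder command names::

    import argparse

    parser = argparse.ArgumentParser(prog='tool')
    subparsers = parser.add_subparsers(dest='command')

    # 'co' is registered as an alias of the same sub-parser
    checkout = subparsers.add_parser('checkout', aliases=['co'])
    checkout.add_argument('branch')

    # parse_known_args() returns the namespace plus any strings it could not parse
    args, extras = parser.parse_known_args(['co', 'main', '--unknown'])
    assert args.command == 'co' and args.branch == 'main'
    assert extras == ['--unknown']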
+'b'1.1'u'1.1'b'ArgumentParser'u'ArgumentParser'b'ArgumentError'u'ArgumentError'b'ArgumentTypeError'u'ArgumentTypeError'b'FileType'u'FileType'b'HelpFormatter'u'HelpFormatter'b'ArgumentDefaultsHelpFormatter'u'ArgumentDefaultsHelpFormatter'b'RawDescriptionHelpFormatter'u'RawDescriptionHelpFormatter'b'RawTextHelpFormatter'u'RawTextHelpFormatter'b'MetavarTypeHelpFormatter'u'MetavarTypeHelpFormatter'b'Namespace'u'Namespace'b'Action'u'Action'b'ONE_OR_MORE'u'ONE_OR_MORE'b'OPTIONAL'u'OPTIONAL'b'PARSER'u'PARSER'b'REMAINDER'u'REMAINDER'b'SUPPRESS'u'SUPPRESS'b'ZERO_OR_MORE'u'ZERO_OR_MORE'b'==SUPPRESS=='u'==SUPPRESS=='b'A...'u'A...'b'...'u'...'b'_unrecognized_args'u'_unrecognized_args'b'Abstract base class that provides __repr__. + + The __repr__ method returns a string in the format:: + ClassName(attr=name, attr=name, ...) + The attributes are determined either by a class-level attribute, + '_kwarg_names', or by inspecting the instance __dict__. + 'u'Abstract base class that provides __repr__. + + The __repr__ method returns a string in the format:: + ClassName(attr=name, attr=name, ...) + The attributes are determined either by a class-level attribute, + '_kwarg_names', or by inspecting the instance __dict__. + 'b'%s=%r'u'%s=%r'b'**%s'u'**%s'b'Formatter for generating usage messages and argument help strings. + + Only the name of this class is considered a public API. All the methods + provided by the class are considered an implementation detail. + 'u'Formatter for generating usage messages and argument help strings. + + Only the name of this class is considered a public API. All the methods + provided by the class are considered an implementation detail. + 'b'\s+'u'\s+'b'\n\n\n+'u'\n\n\n+'b'Indent decreased below 0.'u'Indent decreased below 0.'b'%*s%s: +'u'%*s%s: +'b' + +'u' + +'b'usage: 'u'usage: 'b'%(prog)s'u'%(prog)s'b'\(.*?\)+(?=\s|$)|\[.*?\]+(?=\s|$)|\S+'u'\(.*?\)+(?=\s|$)|\[.*?\]+(?=\s|$)|\S+'b'%s%s + +'u'%s%s + +'b' ['u' ['b'%s %s'u'%s %s'b'[\[(]'u'[\[(]'b'[\])]'u'[\])]'b'(%s) 'u'(%s) 'b'\1'u'\1'b' (%s)'u' (%s)'b'%s *%s'u'%s *%s'b'\(([^|]*)\)'u'\(([^|]*)\)'b'%(prog)'u'%(prog)'b'%*s%s +'u'%*s%s +'b'%*s%-*s 'u'%*s%-*s 'b'[%s [%s ...]]'u'[%s [%s ...]]'b'%s [%s ...]'u'%s [%s ...]'b'%s ...'u'%s ...'b'invalid nargs value'u'invalid nargs value'b'choices'u'choices'b'Help message formatter which retains any formatting in descriptions. + + Only the name of this class is considered a public API. All the methods + provided by the class are considered an implementation detail. + 'u'Help message formatter which retains any formatting in descriptions. + + Only the name of this class is considered a public API. All the methods + provided by the class are considered an implementation detail. + 'b'Help message formatter which retains formatting of all help text. + + Only the name of this class is considered a public API. All the methods + provided by the class are considered an implementation detail. + 'u'Help message formatter which retains formatting of all help text. + + Only the name of this class is considered a public API. All the methods + provided by the class are considered an implementation detail. + 'b'Help message formatter which adds default values to argument help. + + Only the name of this class is considered a public API. All the methods + provided by the class are considered an implementation detail. + 'u'Help message formatter which adds default values to argument help. + + Only the name of this class is considered a public API. 
All the methods + provided by the class are considered an implementation detail. + 'b'%(default)'u'%(default)'b' (default: %(default)s)'u' (default: %(default)s)'b'Help message formatter which uses the argument 'type' as the default + metavar value (instead of the argument 'dest') + + Only the name of this class is considered a public API. All the methods + provided by the class are considered an implementation detail. + 'u'Help message formatter which uses the argument 'type' as the default + metavar value (instead of the argument 'dest') + + Only the name of this class is considered a public API. All the methods + provided by the class are considered an implementation detail. + 'b'An error from creating or using an argument (optional or positional). + + The string value of this exception is the message, augmented with + information about the argument that caused it. + 'u'An error from creating or using an argument (optional or positional). + + The string value of this exception is the message, augmented with + information about the argument that caused it. + 'b'argument %(argument_name)s: %(message)s'u'argument %(argument_name)s: %(message)s'b'An error from trying to convert a command line string to a type.'u'An error from trying to convert a command line string to a type.'b'Information about how to convert command line strings to Python objects. + + Action objects are used by an ArgumentParser to represent the information + needed to parse a single argument from one or more strings from the + command line. The keyword arguments to the Action constructor are also + all attributes of Action instances. + + Keyword Arguments: + + - option_strings -- A list of command-line option strings which + should be associated with this action. + + - dest -- The name of the attribute to hold the created object(s) + + - nargs -- The number of command-line arguments that should be + consumed. By default, one argument will be consumed and a single + value will be produced. Other values include: + - N (an integer) consumes N arguments (and produces a list) + - '?' consumes zero or one arguments + - '*' consumes zero or more arguments (and produces a list) + - '+' consumes one or more arguments (and produces a list) + Note that the difference between the default and nargs=1 is that + with the default, a single value will be produced, while with + nargs=1, a list containing a single value will be produced. + + - const -- The value to be produced if the option is specified and the + option uses an action that takes no values. + + - default -- The value to be produced if the option is not specified. + + - type -- A callable that accepts a single string argument, and + returns the converted value. The standard Python types str, int, + float, and complex are useful examples of such callables. If None, + str is used. + + - choices -- A container of values that should be allowed. If not None, + after a command-line argument has been converted to the appropriate + type, an exception will be raised if it is not a member of this + collection. + + - required -- True if the action must always be specified at the + command line. This is only meaningful for optional command-line + arguments. + + - help -- The help string describing the argument. + + - metavar -- The name to be used for the option's argument with the + help string. If None, the 'dest' value will be used as the name. + 'u'Information about how to convert command line strings to Python objects. 
+ + Action objects are used by an ArgumentParser to represent the information + needed to parse a single argument from one or more strings from the + command line. The keyword arguments to the Action constructor are also + all attributes of Action instances. + + Keyword Arguments: + + - option_strings -- A list of command-line option strings which + should be associated with this action. + + - dest -- The name of the attribute to hold the created object(s) + + - nargs -- The number of command-line arguments that should be + consumed. By default, one argument will be consumed and a single + value will be produced. Other values include: + - N (an integer) consumes N arguments (and produces a list) + - '?' consumes zero or one arguments + - '*' consumes zero or more arguments (and produces a list) + - '+' consumes one or more arguments (and produces a list) + Note that the difference between the default and nargs=1 is that + with the default, a single value will be produced, while with + nargs=1, a list containing a single value will be produced. + + - const -- The value to be produced if the option is specified and the + option uses an action that takes no values. + + - default -- The value to be produced if the option is not specified. + + - type -- A callable that accepts a single string argument, and + returns the converted value. The standard Python types str, int, + float, and complex are useful examples of such callables. If None, + str is used. + + - choices -- A container of values that should be allowed. If not None, + after a command-line argument has been converted to the appropriate + type, an exception will be raised if it is not a member of this + collection. + + - required -- True if the action must always be specified at the + command line. This is only meaningful for optional command-line + arguments. + + - help -- The help string describing the argument. + + - metavar -- The name to be used for the option's argument with the + help string. If None, the 'dest' value will be used as the name. + 'b'option_strings'u'option_strings'b'dest'u'dest'b'nargs'u'nargs'b'const'u'const'b'default'b'help'u'help'b'metavar'u'metavar'b'.__call__() not defined'u'.__call__() not defined'b'nargs for store actions must be != 0; if you have nothing to store, actions such as store true or store const may be more appropriate'u'nargs for store actions must be != 0; if you have nothing to store, actions such as store true or store const may be more appropriate'b'nargs must be %r to supply const'u'nargs must be %r to supply const'b'nargs for append actions must be != 0; if arg strings are not supplying the value to append, the append const action may be more appropriate'u'nargs for append actions must be != 0; if arg strings are not supplying the value to append, the append const action may be more appropriate'b'show program's version number and exit'u'show program's version number and exit'b'prog'u'prog'b'aliases'b'parser_name'u'parser_name'b'unknown parser %(parser_name)r (choices: %(choices)s)'u'unknown parser %(parser_name)r (choices: %(choices)s)'b'Factory for creating file object types + + Instances of FileType are typically passed as type= arguments to the + ArgumentParser add_argument() method. + + Keyword Arguments: + - mode -- A string indicating how the file is to be opened. Accepts the + same values as the builtin open() function. + - bufsize -- The file's desired buffer size. Accepts the same values as + the builtin open() function. + - encoding -- The file's encoding. 
Accepts the same values as the + builtin open() function. + - errors -- A string indicating how encoding and decoding errors are to + be handled. Accepts the same value as the builtin open() function. + 'u'Factory for creating file object types + + Instances of FileType are typically passed as type= arguments to the + ArgumentParser add_argument() method. + + Keyword Arguments: + - mode -- A string indicating how the file is to be opened. Accepts the + same values as the builtin open() function. + - bufsize -- The file's desired buffer size. Accepts the same values as + the builtin open() function. + - encoding -- The file's encoding. Accepts the same values as the + builtin open() function. + - errors -- A string indicating how encoding and decoding errors are to + be handled. Accepts the same value as the builtin open() function. + 'b'argument "-" with mode %r'u'argument "-" with mode %r'b'can't open '%(filename)s': %(error)s'u'can't open '%(filename)s': %(error)s'b'encoding'u'encoding'b'Simple object for storing attributes. + + Implements equality by attribute names and values, and provides a simple + string representation. + 'u'Simple object for storing attributes. + + Implements equality by attribute names and values, and provides a simple + string representation. + 'b'action'u'action'b'store'u'store'b'store_const'u'store_const'b'store_true'u'store_true'b'store_false'u'store_false'b'append_const'u'append_const'b'version'u'version'b'extend'u'extend'b'^-\d+$|^-\d*\.\d+$'u'^-\d+$|^-\d*\.\d+$'b' + add_argument(dest, ..., name=value, ...) + add_argument(option_string, option_string, ..., name=value, ...) + 'u' + add_argument(dest, ..., name=value, ...) + add_argument(option_string, option_string, ..., name=value, ...) + 'b'dest supplied twice for positional argument'u'dest supplied twice for positional argument'b'unknown action "%s"'u'unknown action "%s"'b'%r is not callable'u'%r is not callable'b'%r is a FileType class object, instance of it must be passed'u'%r is a FileType class object, instance of it must be passed'b'_get_formatter'u'_get_formatter'b'length of metavar tuple does not match nargs'u'length of metavar tuple does not match nargs'b'cannot merge actions - two groups are named %r'u'cannot merge actions - two groups are named %r'b'required'u'required'b''required' is an invalid argument for positionals'u''required' is an invalid argument for positionals'b'prefix_chars'u'prefix_chars'b'invalid option string %(option)r: must start with a character %(prefix_chars)r'u'invalid option string %(option)r: must start with a character %(prefix_chars)r'b'dest= is required for options like %r'u'dest= is required for options like %r'b'_handle_conflict_%s'u'_handle_conflict_%s'b'invalid conflict_resolution value: %r'u'invalid conflict_resolution value: %r'b'conflicting option string: %s'u'conflicting option string: %s'b'conflicting option strings: %s'u'conflicting option strings: %s'b'conflict_handler'u'conflict_handler'b'argument_default'u'argument_default'b'mutually exclusive arguments must be optional'u'mutually exclusive arguments must be optional'b'Object for parsing command line strings into Python objects. 
+ + Keyword Arguments: + - prog -- The name of the program (default: sys.argv[0]) + - usage -- A usage message (default: auto-generated from arguments) + - description -- A description of what the program does + - epilog -- Text following the argument descriptions + - parents -- Parsers whose arguments should be copied into this one + - formatter_class -- HelpFormatter class for printing help messages + - prefix_chars -- Characters that prefix optional arguments + - fromfile_prefix_chars -- Characters that prefix files containing + additional arguments + - argument_default -- The default value for all arguments + - conflict_handler -- String indicating how to handle conflicts + - add_help -- Add a -h/-help option + - allow_abbrev -- Allow long options to be abbreviated unambiguously + 'u'Object for parsing command line strings into Python objects. + + Keyword Arguments: + - prog -- The name of the program (default: sys.argv[0]) + - usage -- A usage message (default: auto-generated from arguments) + - description -- A description of what the program does + - epilog -- Text following the argument descriptions + - parents -- Parsers whose arguments should be copied into this one + - formatter_class -- HelpFormatter class for printing help messages + - prefix_chars -- Characters that prefix optional arguments + - fromfile_prefix_chars -- Characters that prefix files containing + additional arguments + - argument_default -- The default value for all arguments + - conflict_handler -- String indicating how to handle conflicts + - add_help -- Add a -h/-help option + - allow_abbrev -- Allow long options to be abbreviated unambiguously + 'b'positional arguments'u'positional arguments'b'optional arguments'u'optional arguments'b'show this help message and exit'u'show this help message and exit'b'usage'u'usage'b'description'u'description'b'formatter_class'u'formatter_class'b'add_help'u'add_help'b'cannot have multiple subparser arguments'u'cannot have multiple subparser arguments'b'parser_class'u'parser_class'b'subcommands'u'subcommands'b'unrecognized arguments: %s'u'unrecognized arguments: %s'b'A'u'A'b'not allowed with argument %s'u'not allowed with argument %s'b'ignored explicit argument %r'u'ignored explicit argument %r'b'the following arguments are required: %s'u'the following arguments are required: %s'b'one of the arguments %s is required'u'one of the arguments %s is required'b'expected one argument'u'expected one argument'b'expected at most one argument'u'expected at most one argument'b'expected at least one argument'u'expected at least one argument'b'expected %s argument'u'expected %s argument'b'expected %s arguments'u'expected %s arguments'b'matches'u'matches'b'ambiguous option: %(option)s could match %(matches)s'u'ambiguous option: %(option)s could match %(matches)s'b'unexpected option string: %s'u'unexpected option string: %s'b'(-*A-*)'u'(-*A-*)'b'(-*A?-*)'u'(-*A?-*)'b'(-*[A-]*)'u'(-*[A-]*)'b'(-*A[A-]*)'u'(-*A[A-]*)'b'([-AO]*)'u'([-AO]*)'b'(-*A[-AO]*)'u'(-*A[-AO]*)'b'(-*-*)'u'(-*-*)'b'(-*%s-*)'u'(-*%s-*)'b'-*'u'-*'b'parse_intermixed_args: positional arg with nargs=%s'u'parse_intermixed_args: positional arg with nargs=%s'b'parse_intermixed_args: positional in mutuallyExclusiveGroup'u'parse_intermixed_args: positional in mutuallyExclusiveGroup'b'Do not expect %s in %s'u'Do not expect %s in %s'b'value'b'invalid %(type)s value: %(value)r'u'invalid %(type)s value: %(value)r'b'invalid choice: %(value)r (choose from %(choices)s)'u'invalid choice: %(value)r (choose from %(choices)s)'b'error(message: 
string) + + Prints a usage message incorporating the message to stderr and + exits. + + If you override this in a subclass, it should not return -- it + should either exit or raise an exception. + 'u'error(message: string) + + Prints a usage message incorporating the message to stderr and + exits. + + If you override this in a subclass, it should not return -- it + should either exit or raise an exception. + 'b'%(prog)s: error: %(message)s +'u'%(prog)s: error: %(message)s +'u'argparse'u'array(typecode [, initializer]) -> array + +Return a new array whose items are restricted by typecode, and +initialized from the optional initializer value, which must be a list, +string or iterable over elements of the appropriate type. + +Arrays represent basic values and behave very much like lists, except +the type of objects stored in them is constrained. The type is specified +at object creation time by using a type code, which is a single character. +The following type codes are defined: + + Type code C Type Minimum size in bytes + 'b' signed integer 1 + 'B' unsigned integer 1 + 'u' Unicode character 2 (see note) + 'h' signed integer 2 + 'H' unsigned integer 2 + 'i' signed integer 2 + 'I' unsigned integer 2 + 'l' signed integer 4 + 'L' unsigned integer 4 + 'q' signed integer 8 (see note) + 'Q' unsigned integer 8 (see note) + 'f' floating point 4 + 'd' floating point 8 + +NOTE: The 'u' typecode corresponds to Python's unicode character. On +narrow builds this is 2-bytes on wide builds this is 4-bytes. + +NOTE: The 'q' and 'Q' type codes are only available if the platform +C compiler used to build Python supports 'long long', or, on Windows, +'__int64'. + +Methods: + +append() -- append a new item to the end of the array +buffer_info() -- return information giving the current memory info +byteswap() -- byteswap all the items of the array +count() -- return number of occurrences of an object +extend() -- extend array by appending multiple elements from an iterable +fromfile() -- read items from a file object +fromlist() -- append items from the list +frombytes() -- append items from the string +index() -- return index of first occurrence of an object +insert() -- insert a new item into the array at a provided position +pop() -- remove and return item (default last) +remove() -- remove first occurrence of an object +reverse() -- reverse the order of the items in the array +tofile() -- write all items to a file object +tolist() -- return the array converted to an ordinary list +tobytes() -- return the array converted to a string + +Attributes: + +typecode -- the typecode character used to create the array +itemsize -- the length in bytes of one array item +'byteswapfrombytesfromfilefromunicodeu'the size, in bytes, of one array item'u'array.itemsize'tofiletounicodeu'the typecode character used to create the array'u'array.typecode'array.arrayArrayTypeu'This module defines an object type which can efficiently represent +an array of basic values: characters, integers, floating point +numbers. Arrays are sequence types and behave very much like lists, +except that the type of objects stored in them is constrained. +'u'/Users/pwntester/.pyenv/versions/3.8.13/lib/python3.8/lib-dynload/array.cpython-38-darwin.so'u'array'_array_reconstructorarrayu'bBuhHiIlLqQfd'typecodes + ast + ~~~ + + The `ast` module helps Python applications to process trees of the Python + abstract syntax grammar. 
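The array module dump above lists the typecodes and the append/extend/tobytes/frombytes/buffer_info methods; a minimal round-trip sketch::

    from array import array

    a = array('i', [1, 2, 3])      # 'i': signed integer items
    a.append(4)
    a.extend([5, 6])

    raw = a.tobytes()              # machine representation of the items
    b = array('i')
    b.frombytes(raw)               # rebuild an equal array from those bytes
    assert b.tolist() == [1, 2, 3, 4, 5, 6]

    address, length = a.buffer_info()   # current memory info
    assert length == len(a)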
The abstract syntax itself might change with + each Python release; this module helps to find out programmatically what + the current grammar looks like and allows modifications of it. + + An abstract syntax tree can be generated by passing `ast.PyCF_ONLY_AST` as + a flag to the `compile()` builtin function or by using the `parse()` + function from this module. The result will be a tree of objects whose + classes all inherit from `ast.AST`. + + A modified abstract syntax tree can be compiled into a Python code object + using the built-in `compile()` function. + + Additionally various helper functions are provided that make working with + the trees simpler. The main intention of the helper functions and this + module in general is to provide an easy to use interface for libraries + that work tightly with the python syntax (template engines for example). + + + :copyright: Copyright 2008 by Armin Ronacher. + :license: Python License. +type_commentsfeature_version + Parse the source into an AST node. + Equivalent to compile(source, filename, mode, PyCF_ONLY_AST). + Pass type_comments=True to get back type comments where the syntax allows. + _feature_versionliteral_evalnode_or_string + Safely evaluate an expression node or a string containing a Python + expression. The string or node provided may only consist of the following + Python literal structures: strings, bytes, numbers, tuples, lists, dicts, + sets, booleans, and None. + _raise_malformed_nodenodemalformed node or string: _convert_num_convert_signed_numoperand_converteltsleftrightannotate_fieldsinclude_attributes + Return a formatted dump of the tree in node. This is mainly useful for + debugging purposes. If annotate_fields is true (by default), + the returned string will show the names and the values for fields. + If annotate_fields is false, the result string will be more compact by + omitting unambiguous field names. Attributes such as line + numbers and column offsets are not dumped by default. If this is wanted, + include_attributes can be set to true. + %s=%sexpected AST, got %rcopy_locationnew_nodeold_node + Copy source location (`lineno`, `col_offset`, `end_lineno`, and `end_col_offset` + attributes) from *old_node* to *new_node* if possible, and return *new_node*. + col_offsetend_linenoend_col_offsetend_fix_missing_locations + When you compile a node tree with compile(), the compiler expects lineno and + col_offset attributes for every node that supports them. This is rather + tedious to fill in for generated nodes, so this helper adds these attributes + recursively where not already set, by setting them to the values of the + parent node. It works recursively starting at *node*. + iter_child_nodesincrement_lineno + Increment the line number and end line number of each node in the tree + starting at *node* by *n*. This is useful to "move code" to a different + location in a file. + walkiter_fields + Yield a tuple of ``(fieldname, value)`` for each field in ``node._fields`` + that is present on *node*. + + Yield all direct child nodes of *node*, that is, all fields that are nodes + and all items of fields that are lists of nodes. + get_docstringclean + Return the docstring for the given node or None if no docstring can + be found. If the node provided does not have docstrings a TypeError + will be raised. + + If *clean* is `True`, all tabs are expanded to spaces and any whitespace + that can be uniformly removed from the second line onwards is removed. 
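A short sketch exercising the ast helpers documented above (parse, get_docstring, walk, literal_eval, dump); the source snippet is made up for illustration::

    import ast

    source = '"""Module docstring."""\nx = [1, 2, {"a": 3}]\n'
    tree = ast.parse(source)

    # get_docstring() reads the leading string constant of the module
    assert ast.get_docstring(tree) == 'Module docstring.'

    # walk() yields every descendant node, in no specified order
    names = [n.id for n in ast.walk(tree) if isinstance(n, ast.Name)]
    assert names == ['x']

    # literal_eval() safely evaluates a string containing only literals
    assert ast.literal_eval('[1, 2, {"a": 3}]') == [1, 2, {'a': 3}]

    # dump() gives a debugging representation of a (sub)tree
    print(ast.dump(tree.body[1]))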
+ %r can't have docstringsStrcleandoc_splitlines_no_ffSplit a string into lines ignoring form feed and other chars. + + This mimics how the Python parser splits source code. + next_line_pad_whitespaceReplace all chars except '\f\t' in a line with spaces. get_source_segmentpaddedGet source code segment of the *source* that generated *node*. + + If some location information (`lineno`, `end_lineno`, `col_offset`, + or `end_col_offset`) is missing, return None. + + If *padded* is `True`, the first line of a multi-line statement will + be padded with spaces to match its original position. + + Recursively yield all descendant nodes in the tree starting at *node* + (including *node* itself), in no specified order. This is useful if you + only want to modify nodes in place and don't care about the context. + todoNodeVisitor + A node visitor base class that walks the abstract syntax tree and calls a + visitor function for every node found. This function may return a value + which is forwarded by the `visit` method. + + This class is meant to be subclassed, with the subclass adding visitor + methods. + + Per default the visitor functions for the nodes are ``'visit_'`` + + class name of the node. So a `TryFinally` node visit function would + be `visit_TryFinally`. This behavior can be changed by overriding + the `visit` method. If no visitor function exists for a node + (return value `None`) the `generic_visit` visitor is used instead. + + Don't use the `NodeVisitor` if you want to apply changes to nodes during + traversing. For this a special visitor exists (`NodeTransformer`) that + allows modifications. + visitVisit a node.visit_generic_visitvisitorCalled if no explicit visitor function exists for a node.visit_Constant_const_node_type_names is deprecated; add visit_ConstantNodeTransformer + A :class:`NodeVisitor` subclass that walks the abstract syntax tree and + allows modification of nodes. + + The `NodeTransformer` will walk the AST and use the return value of the + visitor methods to replace or remove the old node. If the return value of + the visitor method is ``None``, the node will be removed from its location, + otherwise it is replaced with the return value. The return value may be the + original node in which case no replacement takes place. + + Here is an example transformer that rewrites all occurrences of name lookups + (``foo``) to ``data['foo']``:: + + class RewriteName(NodeTransformer): + + def visit_Name(self, node): + return Subscript( + value=Name(id='data', ctx=Load()), + slice=Index(value=Str(s=node.id)), + ctx=node.ctx + ) + + Keep in mind that if the node you're operating on has child nodes you must + either transform the child nodes yourself or call the :meth:`generic_visit` + method for the node first. + + For nodes that were part of a collection of statements (that applies to all + statement nodes), the visitor may also return a list of nodes rather than + just a single node. 
+ + Usually you use the transformer like this:: + + node = YourTransformer().visit(node) + new_values_getter_setter_ABC_const_types_const_types_not_new got multiple values for argument NumBytesNameConstant# Should be a 2-tuple.# Else it should be an int giving the minor version for 3.x.# end_lineno and end_col_offset are optional attributes, and they# should be copied whether the value is None or not.# Keep \r\n together# The following code is for backward compatibility.# It will be removed in future.# arbitrary keyword arguments are accepted# should be before intb' + ast + ~~~ + + The `ast` module helps Python applications to process trees of the Python + abstract syntax grammar. The abstract syntax itself might change with + each Python release; this module helps to find out programmatically what + the current grammar looks like and allows modifications of it. + + An abstract syntax tree can be generated by passing `ast.PyCF_ONLY_AST` as + a flag to the `compile()` builtin function or by using the `parse()` + function from this module. The result will be a tree of objects whose + classes all inherit from `ast.AST`. + + A modified abstract syntax tree can be compiled into a Python code object + using the built-in `compile()` function. + + Additionally various helper functions are provided that make working with + the trees simpler. The main intention of the helper functions and this + module in general is to provide an easy to use interface for libraries + that work tightly with the python syntax (template engines for example). + + + :copyright: Copyright 2008 by Armin Ronacher. + :license: Python License. +'u' + ast + ~~~ + + The `ast` module helps Python applications to process trees of the Python + abstract syntax grammar. The abstract syntax itself might change with + each Python release; this module helps to find out programmatically what + the current grammar looks like and allows modifications of it. + + An abstract syntax tree can be generated by passing `ast.PyCF_ONLY_AST` as + a flag to the `compile()` builtin function or by using the `parse()` + function from this module. The result will be a tree of objects whose + classes all inherit from `ast.AST`. + + A modified abstract syntax tree can be compiled into a Python code object + using the built-in `compile()` function. + + Additionally various helper functions are provided that make working with + the trees simpler. The main intention of the helper functions and this + module in general is to provide an easy to use interface for libraries + that work tightly with the python syntax (template engines for example). + + + :copyright: Copyright 2008 by Armin Ronacher. + :license: Python License. +'b' + Parse the source into an AST node. + Equivalent to compile(source, filename, mode, PyCF_ONLY_AST). + Pass type_comments=True to get back type comments where the syntax allows. + 'u' + Parse the source into an AST node. + Equivalent to compile(source, filename, mode, PyCF_ONLY_AST). + Pass type_comments=True to get back type comments where the syntax allows. + 'b' + Safely evaluate an expression node or a string containing a Python + expression. The string or node provided may only consist of the following + Python literal structures: strings, bytes, numbers, tuples, lists, dicts, + sets, booleans, and None. + 'u' + Safely evaluate an expression node or a string containing a Python + expression. 
The string or node provided may only consist of the following + Python literal structures: strings, bytes, numbers, tuples, lists, dicts, + sets, booleans, and None. + 'b'eval'u'eval'b'malformed node or string: 'u'malformed node or string: 'b' + Return a formatted dump of the tree in node. This is mainly useful for + debugging purposes. If annotate_fields is true (by default), + the returned string will show the names and the values for fields. + If annotate_fields is false, the result string will be more compact by + omitting unambiguous field names. Attributes such as line + numbers and column offsets are not dumped by default. If this is wanted, + include_attributes can be set to true. + 'u' + Return a formatted dump of the tree in node. This is mainly useful for + debugging purposes. If annotate_fields is true (by default), + the returned string will show the names and the values for fields. + If annotate_fields is false, the result string will be more compact by + omitting unambiguous field names. Attributes such as line + numbers and column offsets are not dumped by default. If this is wanted, + include_attributes can be set to true. + 'b'%s=%s'u'%s=%s'b'expected AST, got %r'u'expected AST, got %r'b' + Copy source location (`lineno`, `col_offset`, `end_lineno`, and `end_col_offset` + attributes) from *old_node* to *new_node* if possible, and return *new_node*. + 'u' + Copy source location (`lineno`, `col_offset`, `end_lineno`, and `end_col_offset` + attributes) from *old_node* to *new_node* if possible, and return *new_node*. + 'b'lineno'b'col_offset'b'end_lineno'b'end_col_offset'b'end_'u'end_'b' + When you compile a node tree with compile(), the compiler expects lineno and + col_offset attributes for every node that supports them. This is rather + tedious to fill in for generated nodes, so this helper adds these attributes + recursively where not already set, by setting them to the values of the + parent node. It works recursively starting at *node*. + 'u' + When you compile a node tree with compile(), the compiler expects lineno and + col_offset attributes for every node that supports them. This is rather + tedious to fill in for generated nodes, so this helper adds these attributes + recursively where not already set, by setting them to the values of the + parent node. It works recursively starting at *node*. + 'b' + Increment the line number and end line number of each node in the tree + starting at *node* by *n*. This is useful to "move code" to a different + location in a file. + 'u' + Increment the line number and end line number of each node in the tree + starting at *node* by *n*. This is useful to "move code" to a different + location in a file. + 'b' + Yield a tuple of ``(fieldname, value)`` for each field in ``node._fields`` + that is present on *node*. + 'u' + Yield a tuple of ``(fieldname, value)`` for each field in ``node._fields`` + that is present on *node*. + 'b' + Yield all direct child nodes of *node*, that is, all fields that are nodes + and all items of fields that are lists of nodes. + 'u' + Yield all direct child nodes of *node*, that is, all fields that are nodes + and all items of fields that are lists of nodes. + 'b' + Return the docstring for the given node or None if no docstring can + be found. If the node provided does not have docstrings a TypeError + will be raised. + + If *clean* is `True`, all tabs are expanded to spaces and any whitespace + that can be uniformly removed from the second line onwards is removed. 
+ 'u' + Return the docstring for the given node or None if no docstring can + be found. If the node provided does not have docstrings a TypeError + will be raised. + + If *clean* is `True`, all tabs are expanded to spaces and any whitespace + that can be uniformly removed from the second line onwards is removed. + 'b'%r can't have docstrings'u'%r can't have docstrings'b'Split a string into lines ignoring form feed and other chars. + + This mimics how the Python parser splits source code. + 'u'Split a string into lines ignoring form feed and other chars. + + This mimics how the Python parser splits source code. + 'b'Replace all chars except '\f\t' in a line with spaces.'u'Replace all chars except '\f\t' in a line with spaces.'b' 'u' 'b'Get source code segment of the *source* that generated *node*. + + If some location information (`lineno`, `end_lineno`, `col_offset`, + or `end_col_offset`) is missing, return None. + + If *padded* is `True`, the first line of a multi-line statement will + be padded with spaces to match its original position. + 'u'Get source code segment of the *source* that generated *node*. + + If some location information (`lineno`, `end_lineno`, `col_offset`, + or `end_col_offset`) is missing, return None. + + If *padded* is `True`, the first line of a multi-line statement will + be padded with spaces to match its original position. + 'b' + Recursively yield all descendant nodes in the tree starting at *node* + (including *node* itself), in no specified order. This is useful if you + only want to modify nodes in place and don't care about the context. + 'u' + Recursively yield all descendant nodes in the tree starting at *node* + (including *node* itself), in no specified order. This is useful if you + only want to modify nodes in place and don't care about the context. + 'b' + A node visitor base class that walks the abstract syntax tree and calls a + visitor function for every node found. This function may return a value + which is forwarded by the `visit` method. + + This class is meant to be subclassed, with the subclass adding visitor + methods. + + Per default the visitor functions for the nodes are ``'visit_'`` + + class name of the node. So a `TryFinally` node visit function would + be `visit_TryFinally`. This behavior can be changed by overriding + the `visit` method. If no visitor function exists for a node + (return value `None`) the `generic_visit` visitor is used instead. + + Don't use the `NodeVisitor` if you want to apply changes to nodes during + traversing. For this a special visitor exists (`NodeTransformer`) that + allows modifications. + 'u' + A node visitor base class that walks the abstract syntax tree and calls a + visitor function for every node found. This function may return a value + which is forwarded by the `visit` method. + + This class is meant to be subclassed, with the subclass adding visitor + methods. + + Per default the visitor functions for the nodes are ``'visit_'`` + + class name of the node. So a `TryFinally` node visit function would + be `visit_TryFinally`. This behavior can be changed by overriding + the `visit` method. If no visitor function exists for a node + (return value `None`) the `generic_visit` visitor is used instead. + + Don't use the `NodeVisitor` if you want to apply changes to nodes during + traversing. For this a special visitor exists (`NodeTransformer`) that + allows modifications. 
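A minimal NodeVisitor subclass in the spirit of the docstring above, collecting every name read in a Load context (the class and variable names are illustrative)::

    import ast

    class LoadedNames(ast.NodeVisitor):
        """Collect identifiers that appear in a Load (read) context."""

        def __init__(self):
            self.names = []

        def visit_Name(self, node):
            if isinstance(node.ctx, ast.Load):
                self.names.append(node.id)
            # keep descending into child nodes
            self.generic_visit(node)

    visitor = LoadedNames()
    visitor.visit(ast.parse('total = price * quantity'))
    assert visitor.names == ['price', 'quantity']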
+ 'b'Visit a node.'u'Visit a node.'b'visit_'u'visit_'b'Called if no explicit visitor function exists for a node.'u'Called if no explicit visitor function exists for a node.'b' is deprecated; add visit_Constant'u' is deprecated; add visit_Constant'b' + A :class:`NodeVisitor` subclass that walks the abstract syntax tree and + allows modification of nodes. + + The `NodeTransformer` will walk the AST and use the return value of the + visitor methods to replace or remove the old node. If the return value of + the visitor method is ``None``, the node will be removed from its location, + otherwise it is replaced with the return value. The return value may be the + original node in which case no replacement takes place. + + Here is an example transformer that rewrites all occurrences of name lookups + (``foo``) to ``data['foo']``:: + + class RewriteName(NodeTransformer): + + def visit_Name(self, node): + return Subscript( + value=Name(id='data', ctx=Load()), + slice=Index(value=Str(s=node.id)), + ctx=node.ctx + ) + + Keep in mind that if the node you're operating on has child nodes you must + either transform the child nodes yourself or call the :meth:`generic_visit` + method for the node first. + + For nodes that were part of a collection of statements (that applies to all + statement nodes), the visitor may also return a list of nodes rather than + just a single node. + + Usually you use the transformer like this:: + + node = YourTransformer().visit(node) + 'u' + A :class:`NodeVisitor` subclass that walks the abstract syntax tree and + allows modification of nodes. + + The `NodeTransformer` will walk the AST and use the return value of the + visitor methods to replace or remove the old node. If the return value of + the visitor method is ``None``, the node will be removed from its location, + otherwise it is replaced with the return value. The return value may be the + original node in which case no replacement takes place. + + Here is an example transformer that rewrites all occurrences of name lookups + (``foo``) to ``data['foo']``:: + + class RewriteName(NodeTransformer): + + def visit_Name(self, node): + return Subscript( + value=Name(id='data', ctx=Load()), + slice=Index(value=Str(s=node.id)), + ctx=node.ctx + ) + + Keep in mind that if the node you're operating on has child nodes you must + either transform the child nodes yourself or call the :meth:`generic_visit` + method for the node first. + + For nodes that were part of a collection of statements (that applies to all + statement nodes), the visitor may also return a list of nodes rather than + just a single node. 
+ + Usually you use the transformer like this:: + + node = YourTransformer().visit(node) + 'b' got multiple values for argument 'u' got multiple values for argument 'b's'u's'b'NameConstant'u'NameConstant'b'Num'u'Num'b'Str'u'Str'b'Bytes'u'Bytes'b'Ellipsis'u'Ellipsis'u'ast'runTestmethodName_asyncioTestLoop_asyncioCallsQueueasyncSetUpasyncTearDownaddAsyncCleanup_callSetUpsetUp_callAsync_callTestMethod_callMaybeAsync_callTearDowntearDown_callCleanupretisawaitablecreate_futurefutrun_until_complete_asyncioLoopRunnerquerytask_doneawaitable_setupAsyncioLoopnew_event_looploopset_event_loopset_debugcreate_task_asyncioCallsTask_tearDownAsyncioLoopto_canceltaskgatherreturn_exceptionscall_exception_handlerunhandled exception during test shutdownshutdown_asyncgens# Names intentionally have a long prefix# to reduce a chance of clashing with user-defined attributes# from inherited test case# The class doesn't call loop.run_until_complete(self.setUp()) and family# but uses a different approach:# 1. create a long-running task that reads self.setUp()# awaitable from queue along with a future# 2. await the awaitable object passing in and set the result# into the future object# 3. Outer code puts the awaitable and the future object into a queue# with waiting for the future# The trick is necessary because every run_until_complete() call# creates a new task with embedded ContextVar context.# To share contextvars between setUp(), test and tearDown() we need to execute# them inside the same task.# Note: the test case modifies event loop policy if the policy was not instantiated# yet.# asyncio.get_event_loop_policy() creates a default policy on demand but never# returns None# I believe this is not an issue in user level tests but python itself for testing# should reset a policy in every test module# by calling asyncio.set_event_loop_policy(None) in tearDownModule()# A trivial trampoline to addCleanup()# the function exists because it has a different semantics# and signature:# addCleanup() accepts regular functions# but addAsyncCleanup() accepts coroutines# We intentionally don't add inspect.iscoroutinefunction() check# for func argument because there is no way# to check for async function reliably:# 1. It can be "async def func()" iself# 2. Class can implement "async def __call__()" method# 3. Regular "def func()" that returns awaitable object# cancel all tasks# shutdown asyncgensb'runTest'u'runTest'b'unhandled exception during test shutdown'u'unhandled exception during test shutdown'b'task'u'task'u'unittest.async_case'u'async_case'u'allow programmer to define multiple exit functions to be executedupon normal program termination. + +Two public functions, register and unregister, are defined. +'_clear_ncallbacks_run_exitfuncsunregisterBase16, Base32, Base64 (RFC 3548), Base85 and Ascii85 data encodingsencodebytesdecodebytesb32encodeb32decodeb16encodeb16decodeb85encodeb85decodea85encodea85decodestandard_b64encodestandard_b64decodeurlsafe_b64encodeurlsafe_b64decodebytes_types_bytes_from_decode_datastring argument should contain only ASCII charactersargument should be a bytes-like object or ASCII string, not %r"argument should be a bytes-like object or ASCII ""string, not %r"altcharsEncode the bytes-like object s using Base64 and return a bytes object. + + Optional altchars should be a byte string of length 2 which specifies an + alternative alphabet for the '+' and '/' characters. This allows an + application to e.g. generate url or filesystem safe Base64 strings. 
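The unittest async_case identifiers and comments earlier in this line describe how IsolatedAsyncioTestCase runs asyncSetUp, the test body, addAsyncCleanup callbacks and asyncTearDown inside one long-running task on a private event loop; a minimal usage sketch with placeholder test names::

    import asyncio
    import unittest

    class QueueTest(unittest.IsolatedAsyncioTestCase):
        async def asyncSetUp(self):
            # runs on the same loop (and task) as the test body
            self.queue = asyncio.Queue()
            await self.queue.put('ready')

        async def test_get(self):
            self.addAsyncCleanup(self.check_drained)
            self.assertEqual(await self.queue.get(), 'ready')

        async def check_drained(self):
            # awaited after the test, before the loop is shut down
            self.assertTrue(self.queue.empty())

    if __name__ == '__main__':
        unittest.main()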
+ b2a_base64+/Decode the Base64 encoded bytes-like object or ASCII string s. + + Optional altchars must be a bytes-like object or ASCII string of length 2 + which specifies the alternative alphabet used instead of the '+' and '/' + characters. + + The result is returned as a bytes object. A binascii.Error is raised if + s is incorrectly padded. + + If validate is False (the default), characters that are neither in the + normal base-64 alphabet nor the alternative alphabet are discarded prior + to the padding check. If validate is True, these non-alphabet characters + in the input result in a binascii.Error. + fullmatch[A-Za-z0-9+/]*={0,2}Non-base64 digit founda2b_base64Encode bytes-like object s using the standard Base64 alphabet. + + The result is returned as a bytes object. + Decode bytes encoded with the standard Base64 alphabet. + + Argument s is a bytes-like object or ASCII string to decode. The result + is returned as a bytes object. A binascii.Error is raised if the input + is incorrectly padded. Characters that are not in the standard alphabet + are discarded prior to the padding check. + -__urlsafe_encode_translation_urlsafe_decode_translationEncode bytes using the URL- and filesystem-safe Base64 alphabet. + + Argument s is a bytes-like object to encode. The result is returned as a + bytes object. The alphabet uses '-' instead of '+' and '_' instead of + '/'. + Decode bytes using the URL- and filesystem-safe Base64 alphabet. + + Argument s is a bytes-like object or ASCII string to decode. The result + is returned as a bytes object. A binascii.Error is raised if the input + is incorrectly padded. Characters that are not in the URL-safe base-64 + alphabet, and are not a plus '+' or slash '/', are discarded prior to the + padding check. + + The alphabet uses '-' instead of '+' and '_' instead of '/'. + ABCDEFGHIJKLMNOPQRSTUVWXYZ234567_b32alphabet_b32tab2_b32revEncode the bytes-like object s using Base32 and return a bytes object. + b32tabb32tab210230x3ff==========map01Decode the Base32 encoded bytes-like object or ASCII string s. + + Optional casefold is a flag specifying whether a lowercase alphabet is + acceptable as input. For security purposes, the default is False. + + RFC 3548 allows for optional mapping of the digit 0 (zero) to the + letter O (oh), and for optional mapping of the digit 1 (one) to + either the letter I (eye) or letter L (el). The optional argument + map01 when not None, specifies which letter the digit 1 should be + mapped to (when map01 is not None, the digit 0 is always mapped to + the letter O). For security purposes the default is None, so that + 0 and 1 are not allowed in the input. + + The result is returned as a bytes object. A binascii.Error is raised if + the input is incorrectly padded or if there are non-alphabet + characters present in the input. + Incorrect paddingpadcharsdecodedb32revquantaaccNon-base32 digit foundEncode the bytes-like object s using Base16 and return a bytes object. + hexlifyDecode the Base16 encoded bytes-like object or ASCII string s. + + Optional casefold is a flag specifying whether a lowercase alphabet is + acceptable as input. For security purposes, the default is False. + + The result is returned as a bytes object. A binascii.Error is raised if + s is incorrectly padded or if there are non-alphabet characters present + in the input. 
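The Base32 and Base16 docstrings above boil down to simple round trips; casefold=True lets b32decode/b16decode accept a lowercase alphabet::

    import base64

    data = b'binary\x00payload'

    b32 = base64.b32encode(data)
    assert base64.b32decode(b32) == data
    assert base64.b32decode(b32.lower(), casefold=True) == data

    b16 = base64.b16encode(data)           # uppercase hex digits
    assert base64.b16decode(b16) == data
    assert base64.b16decode(b16.lower(), casefold=True) == data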
+ [^0-9A-F]Non-base16 digit foundunhexlify_a85chars_a85chars2<~_A85START~>_A85END_85encodechars2padfoldnulsfoldspaces!%dIwordsword5389762880x20202020614125857225chunkswrapcoladobeEncode bytes-like object b using Ascii85 and return a bytes object. + + foldspaces is an optional flag that uses the special short sequence 'y' + instead of 4 consecutive spaces (ASCII 0x20) as supported by 'btoa'. This + feature is not supported by the "standard" Adobe encoding. + + wrapcol controls whether the output should have newline (b'\n') characters + added to it. If this is non-zero, each output line will be at most this + many characters long. + + pad controls whether the input is padded to a multiple of 4 before + encoding. Note that the btoa implementation always pads. + + adobe controls whether the encoded byte sequence is framed with <~ and ~>, + which is used by the Adobe implementation. + 118 + ignorecharsDecode the Ascii85 encoded bytes-like object or ASCII string b. + + foldspaces is a flag that specifies whether the 'y' short sequence should be + accepted as shorthand for 4 consecutive spaces (ASCII 0x20). This feature is + not supported by the "standard" Adobe encoding. + + adobe controls whether the input sequence is in Adobe Ascii85 format (i.e. + is framed with <~ and ~>). + + ignorechars should be a byte string containing characters to ignore from the + input. This should only contain whitespace characters, and by default + contains all whitespace characters in ASCII. + + The result is returned as a bytes object. + Ascii85 encoded byte sequences must end with {!r}"Ascii85 encoded byte sequences must end ""with {!r}"!IpackIdecoded_appendcurr_appendcurr_clear!Ascii85 overflowz inside Ascii85 5-tupley inside Ascii85 5-tuple Non-Ascii85 digit found: %c0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz!#$%&()*+-;<=>?@^_`{|}~b"0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"b"abcdefghijklmnopqrstuvwxyz!#$%&()*+-;<=>?@^_`{|}~"_b85alphabet_b85chars_b85chars2_b85decEncode bytes-like object b in base85 format and return a bytes object. + + If pad is true, the input is padded with b'\0' so its length is a multiple of + 4 bytes before encoding. + Decode the base85-encoded bytes-like object or ASCII string b + + The result is returned as a bytes object. + ~chunkjbad base85 character at position %dbase85 overflow in hunk starting at byte %d76MAXLINESIZEMAXBINSIZEoutputEncode a file; input and output are binary files.Decode a file; input and output are binary files._input_type_checkexpected bytes-like object, not %sexpected single byte elements, not %r from %sexpected 1-D data, not %d-D data from %sEncode a bytestring into a bytes object containing multiple lines + of base-64 data.piecesencodestringLegacy alias of encodebytes().encodestring() is a deprecated alias since 3.1, use encodebytes()"encodestring() is a deprecated alias since 3.1, ""use encodebytes()"Decode a bytestring of base-64 data into a bytes object.decodestringLegacy alias of decodebytes().decodestring() is a deprecated alias since Python 3.1, use decodebytes()"decodestring() is a deprecated alias since Python 3.1, ""use decodebytes()"Small main programgetoptdeutoptsusage: %s [-d|-e|-u|-t] [file|-] + -d, -u: decode + -e: encode (default) + -t: encode and decode string 'Aladdin:open sesame'-e-d-u-tAladdin:open sesames0s1s2Script#! 
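Likewise for the Ascii85/Base85 helpers documented above; adobe=True adds the <~ ~> framing used by the Adobe variant::

    import base64

    data = b'base85 example\x00'

    a85 = base64.a85encode(data, adobe=True)
    assert a85.startswith(b'<~') and a85.endswith(b'~>')
    assert base64.a85decode(a85, adobe=True) == data

    b85 = base64.b85encode(data)
    assert base64.b85decode(b85) == data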
/usr/bin/env python3# Modified 04-Oct-1995 by Jack Jansen to use binascii module# Modified 30-Dec-2003 by Barry Warsaw to add full RFC 3548 support# Modified 22-May-2007 by Guido van Rossum to use bytes everywhere# Legacy interface exports traditional RFC 2045 Base64 encodings# Generalized interface for other encodings# Base85 and Ascii85 encodings# Standard Base64 encoding# Some common Base64 alternatives. As referenced by RFC 3458, see thread# starting at:# http://zgp.org/pipermail/p2p-hackers/2001-September/000316.html# Types acceptable as binary data# Base64 encoding/decoding uses binascii# Base32 encoding/decoding must be done in Python# Delay the initialization of the table to not waste memory# if the function is never called# Pad the last quantum with zero bits if necessary# Don't use += !# bits 1 - 10# bits 11 - 20# bits 21 - 30# bits 31 - 40# Adjust for any leftover partial quanta# Handle section 2.4 zero and one mapping. The flag map01 will be either# False, or the character to map the digit 1 (one) to. It should be# either L (el) or I (eye).# Strip off pad characters from the right. We need to count the pad# characters because this will tell us how many null bytes to remove from# the end of the decoded string.# Now decode the full quanta# Process the last, partial quanta# 1: 4, 3: 3, 4: 2, 6: 1# RFC 3548, Base 16 Alphabet specifies uppercase, but hexlify() returns# lowercase. The RFC also recommends against accepting input case# insensitively.# Ascii85 encoding/decoding# Helper function for a85encode and b85encode# Delay the initialization of tables to not waste memory# Strip off start/end markers# We have to go through this stepwise, so as to ignore spaces and handle# special short sequences# Skip whitespace# Throw away the extra padding# The following code is originally taken (with permission) from Mercurial# Legacy interface. This code could be cleaned up since I don't believe# binascii has any line length limitations. It just doesn't seem worth it# though. The files should be opened in binary mode.# Excluding the CRLF# Usable as a script...b'Base16, Base32, Base64 (RFC 3548), Base85 and Ascii85 data encodings'u'Base16, Base32, Base64 (RFC 3548), Base85 and Ascii85 data encodings'b'encodebytes'u'encodebytes'b'decodebytes'u'decodebytes'b'b64encode'u'b64encode'b'b64decode'u'b64decode'b'b32encode'u'b32encode'b'b32decode'u'b32decode'b'b16encode'u'b16encode'b'b16decode'u'b16decode'b'b85encode'u'b85encode'b'b85decode'u'b85decode'b'a85encode'u'a85encode'b'a85decode'u'a85decode'b'standard_b64encode'u'standard_b64encode'b'standard_b64decode'u'standard_b64decode'b'urlsafe_b64encode'u'urlsafe_b64encode'b'urlsafe_b64decode'u'urlsafe_b64decode'b'string argument should contain only ASCII characters'u'string argument should contain only ASCII characters'b'argument should be a bytes-like object or ASCII string, not %r'u'argument should be a bytes-like object or ASCII string, not %r'b'Encode the bytes-like object s using Base64 and return a bytes object. + + Optional altchars should be a byte string of length 2 which specifies an + alternative alphabet for the '+' and '/' characters. This allows an + application to e.g. generate url or filesystem safe Base64 strings. + 'u'Encode the bytes-like object s using Base64 and return a bytes object. + + Optional altchars should be a byte string of length 2 which specifies an + alternative alphabet for the '+' and '/' characters. This allows an + application to e.g. generate url or filesystem safe Base64 strings. 
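
The padding comments above ("Pad the last quantum with zero bits", counting trailing '=' to know how many bytes to strip) are easiest to see with the RFC 4648 Base32 test vectors:

    import base64

    # RFC 4648 Base32 test vectors: 1..5 input bytes need 6, 4, 3, 1 and 0
    # padding characters respectively; 5 bytes fill a quantum exactly.
    assert base64.b32encode(b"f") == b"MY======"
    assert base64.b32encode(b"fo") == b"MZXQ===="
    assert base64.b32encode(b"foo") == b"MZXW6==="
    assert base64.b32encode(b"foob") == b"MZXW6YQ="
    assert base64.b32encode(b"fooba") == b"MZXW6YTB"
    assert base64.b32encode(b"foobar") == b"MZXW6YTBOI======"

    # Decoding strips the pad characters and the corresponding zero bits.
    for raw in (b"f", b"fo", b"foo", b"foob", b"fooba", b"foobar"):
        assert base64.b32decode(base64.b32encode(raw)) == raw
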
+ 'b'~'b'bad base85 character at position %d'u'bad base85 character at position %d'b'base85 overflow in hunk starting at byte %d'u'base85 overflow in hunk starting at byte %d'b'Encode a file; input and output are binary files.'u'Encode a file; input and output are binary files.'b'Decode a file; input and output are binary files.'u'Decode a file; input and output are binary files.'b'expected bytes-like object, not %s'u'expected bytes-like object, not %s'b'expected single byte elements, not %r from %s'u'expected single byte elements, not %r from %s'b'expected 1-D data, not %d-D data from %s'u'expected 1-D data, not %d-D data from %s'b'Encode a bytestring into a bytes object containing multiple lines + of base-64 data.'u'Encode a bytestring into a bytes object containing multiple lines + of base-64 data.'b'Legacy alias of encodebytes().'u'Legacy alias of encodebytes().'b'encodestring() is a deprecated alias since 3.1, use encodebytes()'u'encodestring() is a deprecated alias since 3.1, use encodebytes()'b'Decode a bytestring of base-64 data into a bytes object.'u'Decode a bytestring of base-64 data into a bytes object.'b'Legacy alias of decodebytes().'u'Legacy alias of decodebytes().'b'decodestring() is a deprecated alias since Python 3.1, use decodebytes()'u'decodestring() is a deprecated alias since Python 3.1, use decodebytes()'b'Small main program'u'Small main program'b'deut'u'deut'b'usage: %s [-d|-e|-u|-t] [file|-] + -d, -u: decode + -e: encode (default) + -t: encode and decode string 'Aladdin:open sesame''u'usage: %s [-d|-e|-u|-t] [file|-] + -d, -u: decode + -e: encode (default) + -t: encode and decode string 'Aladdin:open sesame''b'-e'u'-e'b'-d'u'-d'b'-u'u'-u'b'-t'u'-t'b'Aladdin:open sesame'Base64 content transfer encoding per RFCs 2045-2047. + +This module handles the content transfer encoding method defined in RFC 2045 +to encode arbitrary 8-bit data using the three 8-bit bytes in four 7-bit +characters encoding known as Base64. + +It is used in the MIME standards for email to attach images, audio, and text +using some 8-bit character sets to messages. + +This module provides an interface to encode and decode both headers and bodies +with Base64 encoding. + +RFC 2045 defines a method for including character set information in an +`encoded-word' in a header. This method is commonly used for 8-bit real names +in To:, From:, Cc:, etc. fields, as well as Subject: lines. + +This module does not do the line wrapping or end-of-line character conversion +necessary for proper internationalized headers; it only does dumb encoding and +decoding. To deal with the various line wrapping issues, use the email.header +module. +body_decodebody_encodeheader_encodeheader_lengthCRLFNLMISC_LENReturn the length of s when it is encoded with base64.iso-8859-1header_bytesEncode a single header line with Base64 encoding in a given charset. + + charset names the character set to use to encode the header. It defaults + to iso-8859-1. Base64 encoding is defined in RFC 2045. + =?%s?b?%s?=eolEncode a string with base64. + + Each line will be wrapped at, at most, maxlinelen characters (defaults to + 76 characters). + + Each line of encoded text will end with eol, which defaults to "\n". Set + this to "\r\n" if you will be using the result of this function directly + in an email. + encvecmax_unencodedencDecode a raw base64 string, returning a bytes object. 
+ + This function does not parse a full MIME header value encoded with + base64 (like =?iso-8859-1?b?bmloISBuaWgh?=) -- please use the high + level email.header class for that functionality. + raw-unicode-escape# Author: Ben Gertzfield# See also Charset.py# Helpers# BAW: should encode() inherit b2a_base64()'s dubious behavior in# adding a newline to the encoded string?# For convenience and backwards compatibility w/ standard base64 moduleb'Base64 content transfer encoding per RFCs 2045-2047. + +This module handles the content transfer encoding method defined in RFC 2045 +to encode arbitrary 8-bit data using the three 8-bit bytes in four 7-bit +characters encoding known as Base64. + +It is used in the MIME standards for email to attach images, audio, and text +using some 8-bit character sets to messages. + +This module provides an interface to encode and decode both headers and bodies +with Base64 encoding. + +RFC 2045 defines a method for including character set information in an +`encoded-word' in a header. This method is commonly used for 8-bit real names +in To:, From:, Cc:, etc. fields, as well as Subject: lines. + +This module does not do the line wrapping or end-of-line character conversion +necessary for proper internationalized headers; it only does dumb encoding and +decoding. To deal with the various line wrapping issues, use the email.header +module. +'u'Base64 content transfer encoding per RFCs 2045-2047. + +This module handles the content transfer encoding method defined in RFC 2045 +to encode arbitrary 8-bit data using the three 8-bit bytes in four 7-bit +characters encoding known as Base64. + +It is used in the MIME standards for email to attach images, audio, and text +using some 8-bit character sets to messages. + +This module provides an interface to encode and decode both headers and bodies +with Base64 encoding. + +RFC 2045 defines a method for including character set information in an +`encoded-word' in a header. This method is commonly used for 8-bit real names +in To:, From:, Cc:, etc. fields, as well as Subject: lines. + +This module does not do the line wrapping or end-of-line character conversion +necessary for proper internationalized headers; it only does dumb encoding and +decoding. To deal with the various line wrapping issues, use the email.header +module. +'b'body_decode'u'body_decode'b'body_encode'u'body_encode'b'decodestring'u'decodestring'b'header_encode'u'header_encode'b'header_length'u'header_length'b'Return the length of s when it is encoded with base64.'u'Return the length of s when it is encoded with base64.'b'iso-8859-1'u'iso-8859-1'b'Encode a single header line with Base64 encoding in a given charset. + + charset names the character set to use to encode the header. It defaults + to iso-8859-1. Base64 encoding is defined in RFC 2045. + 'u'Encode a single header line with Base64 encoding in a given charset. + + charset names the character set to use to encode the header. It defaults + to iso-8859-1. Base64 encoding is defined in RFC 2045. + 'b'=?%s?b?%s?='u'=?%s?b?%s?='b'Encode a string with base64. + + Each line will be wrapped at, at most, maxlinelen characters (defaults to + 76 characters). + + Each line of encoded text will end with eol, which defaults to "\n". Set + this to "\r\n" if you will be using the result of this function directly + in an email. + 'u'Encode a string with base64. + + Each line will be wrapped at, at most, maxlinelen characters (defaults to + 76 characters). 
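
A short sketch of the email.base64mime helpers whose docstrings appear above; the header text "Hëllo" is arbitrary sample data:

    from email import base64mime

    raw = "Hëllo".encode("iso-8859-1")

    # RFC 2045 encoded-word for a header value: '=?charset?b?...?='.
    word = base64mime.header_encode(raw, charset="iso-8859-1")
    assert word == "=?iso-8859-1?b?SOtsbG8=?="

    # header_length reports the encoded size, used when wrapping headers.
    assert base64mime.header_length(raw) == len("SOtsbG8=")

    # Encode a body as 76-character lines; pass eol="\r\n" when the result
    # is written straight into an email.
    body = base64mime.body_encode(b"x" * 200, maxlinelen=76, eol="\r\n")
    assert all(len(line) <= 76 for line in body.splitlines())

    # decode() only handles the raw base64 payload, not a full encoded-word.
    assert base64mime.decode("SOtsbG8=") == raw
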
+ + Each line of encoded text will end with eol, which defaults to "\n". Set + this to "\r\n" if you will be using the result of this function directly + in an email. + 'b'Decode a raw base64 string, returning a bytes object. + + This function does not parse a full MIME header value encoded with + base64 (like =?iso-8859-1?b?bmloISBuaWgh?=) -- please use the high + level email.header class for that functionality. + 'u'Decode a raw base64 string, returning a bytes object. + + This function does not parse a full MIME header value encoded with + base64 (like =?iso-8859-1?b?bmloISBuaWgh?=) -- please use the high + level email.header class for that functionality. + 'b'raw-unicode-escape'u'raw-unicode-escape'u'email.base64mime'Base implementation of event loop. + +The event loop can be broken up into a multiplexer (the part +responsible for notifying us of I/O events) and the event loop proper, +which wraps a multiplexer with functionality for scheduling callbacks, +immediately or at a given time in the future. + +Whenever a public API takes a callback, subsequent positional +arguments will be passed to the callback if/when it is called. This +avoids the proliferation of trivial lambdas implementing closures. +Keyword arguments for the callback are not supported; this is a +conscious design decision, leaving the door open for keyword arguments +to modify the meaning of the API call itself. +concurrentsslconstantssslprotostaggeredtrsockBaseEventLoop_MIN_SCHEDULED_TIMER_HANDLES0.5_MIN_CANCELLED_TIMER_HANDLES_FRACTION_HAS_IPv6MAXIMUM_SELECT_TIMEOUT_unset_format_handle_callback_format_pipeSTDOUT_set_reuseportreuse_port not supported by socket modulereuse_port not supported by socket module, SO_REUSEPORT defined but not implemented.'reuse_port not supported by socket module, ''SO_REUSEPORT defined but not implemented.'_ipaddr_infoflowinfoscopeidafsidnaaf_interleave_addrinfosaddrinfosfirst_address_family_countInterleave list of addrinfo tuples by family.addrinfos_by_familyaddrinfos_listsreordered_run_until_complete_cb_get_loop_set_nodelay_SendfileFallbackProtocolProtocoltransp_FlowControlMixintransport should be _FlowControlMixin instance_transportget_protocol_protois_reading_should_resume_reading_protocol_paused_should_resume_writingpause_readingset_protocol_write_ready_futdrainis_closingConnection closed by peerconnection_madetransportInvalid state: connection should have been established already."Invalid state: ""connection should have been established already."connection_lostConnection is closed by peerpause_writingresume_writingdata_receivedInvalid state: reading should be pausedeof_receivedresume_readingServerAbstractServersocketsprotocol_factoryssl_contextbacklogssl_handshake_timeout_sockets_active_count_protocol_factory_backlog_ssl_context_ssl_handshake_timeout_serving_serving_forever_fut sockets=_attach_detach_wakeup_start_servingis_servingTransportSocket_stop_servingstart_servingserve_foreverserver is already being awaited on serve_forever() is closedwait_closedAbstractEventLoop_timer_cancelled_count_closed_stopping_ready_scheduled_default_executor_internal_fds_thread_idget_clock_info_clock_resolution_exception_handler_is_debug_modeslow_callback_duration_current_handle_task_factory_coroutine_origin_tracking_enabled_coroutine_origin_tracking_saved_depth_asyncgens_asyncgens_shutdown_called running=is_running closed=' ''closed='is_closed debug=get_debugCreate a Future object attached to the loop.coroSchedule a coroutine object. + + Return a task object. 
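
The Server/serve_forever machinery described above is what loop.create_server() hands back; a minimal echo-server sketch, where EchoProtocol and the 127.0.0.1:8888 address are illustrative choices:

    import asyncio

    class EchoProtocol(asyncio.Protocol):
        def connection_made(self, transport):
            self.transport = transport

        def data_received(self, data):
            self.transport.write(data)     # echo the bytes back
            self.transport.close()

    async def main():
        loop = asyncio.get_running_loop()
        server = await loop.create_server(EchoProtocol, "127.0.0.1", 8888)
        # serve_forever() keeps accepting connections until the server is
        # closed or the surrounding task is cancelled.
        async with server:
            await server.serve_forever()

    asyncio.run(main())    # runs until interrupted
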
+ _check_closed_set_task_nameset_task_factorySet a task factory that will be used by loop.create_task(). + + If factory is None the default task factory will be set. + + If factory is a callable, it should have a signature matching + '(loop, coro)', where 'loop' will be a reference to the active + event loop, 'coro' will be a coroutine object. The callable + must return a Future. + task factory must be a callable or Noneget_task_factoryReturn a task factory, or None if the default one is in use._make_socket_transportCreate socket transport._make_ssl_transportrawsocksslcontextcall_connection_madeCreate SSL transport._make_datagram_transportaddressCreate datagram transport._make_read_pipe_transportpipeCreate read pipe transport._make_write_pipe_transportCreate write pipe transport._make_subprocess_transportshellCreate subprocess transport._write_to_selfWrite a byte to self-pipe, to wake up the event loop. + + This may be called from a different thread. + + The subclass is responsible for implementing the self-pipe. + _process_eventsevent_listProcess selector events.Event loop is closed_asyncgen_finalizer_hookagencall_soon_threadsafe_asyncgen_firstiter_hookasynchronous generator was scheduled after loop.shutdown_asyncgens() call" was scheduled after ""loop.shutdown_asyncgens() call"Shutdown all active asynchronous generators.closing_agensagresultsan error occurred during closing of asynchronous generator 'an error occurred during closing of ''asynchronous generator 'asyncgen_check_runningThis event loop is already runningCannot run the event loop while another loop is runningrun_foreverRun until stop() is called._set_coroutine_origin_tracking_debugold_agen_hooksfirstiterfinalizer_run_onceRun until the Future is done. + + If the argument is a coroutine, it is wrapped in a Task. + + WARNING: It would be disastrous to call run_until_complete() + with the same coroutine twice -- it would wrap it in two + different Tasks and that can't be good. + + Return the Future's result, or raise its exception. + isfuturenew_taskensure_futureEvent loop stopped before Future completed.Stop running the event loop. + + Every callback already scheduled will still run. This simply informs + run_forever to stop looping after a complete iteration. + Close the event loop. + + This clears the queues and shuts down the executor, + but does not wait for the executor to finish. + + The event loop must not be running. + Cannot close a running event loopClose %rexecutorReturns True if the event loop was closed._warnunclosed event loop Returns True if the event loop is running.Return the time according to the event loop's clock. + + This is a float expressed in seconds since an epoch, but the + epoch, precision, accuracy and drift are unspecified and may + differ per event loop. + call_laterArrange for a callback to be called at a given time. + + Return a Handle: an opaque object with a cancel() method that + can be used to cancel the call. + + The delay can be an int or float, expressed in seconds. It is + always relative to the current time. + + Each callback will be called exactly once. If two callbacks + are scheduled for exactly the same time, it undefined which + will be called first. + + Any positional arguments after the callback will be passed to + the callback when it is called. + call_attimerwhenLike call_later(), but uses an absolute time. + + Absolute time corresponds to the event loop's time() method. + _check_thread_check_callbackTimerHandlecall_soonArrange for a callback to be called as soon as possible. 
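
call_soon(), call_later() and call_at() behave as the docstrings above describe; the 0.2 and 0.3 second delays below are arbitrary:

    import asyncio

    def tick(label, scheduled_at, loop):
        print(f"{label} fired {loop.time() - scheduled_at:.3f}s after scheduling")

    async def main():
        loop = asyncio.get_running_loop()
        now = loop.time()                                    # the loop's own clock
        loop.call_soon(tick, "call_soon", now, loop)         # next loop iteration
        handle = loop.call_later(0.2, tick, "call_later", now, loop)
        loop.call_at(now + 0.3, tick, "call_at", now, loop)  # absolute loop time
        handle.cancel()                                      # TimerHandles can be cancelled
        await asyncio.sleep(0.5)                             # let the remaining callbacks fire

    asyncio.run(main())
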
+ + This operates as a FIFO queue: callbacks are called in the + order in which they are registered. Each callback will be + called exactly once. + + Any positional arguments after the callback will be passed to + the callback when it is called. + _call_sooniscoroutineiscoroutinefunctioncoroutines cannot be used with ()a callable object was expected by (), got '(), ''got 'HandleCheck that the current thread is the thread running the event loop. + + Non-thread-safe methods of this class make this assumption and will + likely behave incorrectly when the assumption is violated. + + Should only be called when (self._debug == True). The caller is + responsible for checking this condition for performance reasons. + thread_idNon-thread-safe operation invoked on an event loop other than the current one"Non-thread-safe operation invoked on an event loop other ""than the current one"Like call_soon(), but thread-safe.run_in_executorwrap_futureset_default_executorUsing the default executor that is not an instance of ThreadPoolExecutor is deprecated and will be prohibited in Python 3.9'Using the default executor that is not an instance of ''ThreadPoolExecutor is deprecated and will be prohibited ''in Python 3.9'_getaddrinfo_debugfamily=type=proto=flags=Get address info %st0addrinfoGetting address info took 1000.01e3ms: getaddr_funcsockaddrsock_sendfilefallbackthe socket must be non-blocking_check_sendfile_params_sock_sendfile_nativeSendfileNotAvailableError_sock_sendfile_fallbacksyscall sendfile is not available for socket and file {file!r} combination"and file {file!r} combination"SENDFILE_FALLBACK_READBUFFER_SIZEblocksizetotal_sentsock_sendallfile should be opened in binary modeonly SOCK_STREAM type sockets are supportedcount must be a positive integer (got {!r})offset must be a non-negative integer (got {!r})_connect_sockaddr_infolocal_addr_infosCreate, bind and connect one socket.my_exceptionsladdrerror while attempting to bind on address 'error while attempting to bind on ''address '': 'sock_connectcreate_connectionlocal_addrhappy_eyeballs_delayinterleaveConnect to a TCP server. + + Create a streaming transport connection to a given Internet host and + port: socket family AF_INET or socket.AF_INET6 depending on host (or + family if specified), socket type SOCK_STREAM. protocol_factory must be + a callable returning a protocol instance. + + This method is a coroutine which will try to establish the connection + in the background. When successful, the coroutine returns a + (transport, protocol) pair. + server_hostname is only meaningful with sslYou must set server_hostname when using ssl without a host'You must set server_hostname ''when using ssl without a host'ssl_handshake_timeout is only meaningful with sslhost/port and sock can not be specified at the same time_ensure_resolvedinfosgetaddrinfo() returned empty listladdr_infosstaggered_raceMultiple exceptions: {}host and port was not specified and no sock specifiedA Stream Socket was expected, got _create_connection_transportget_extra_info%r connected to %s:%r: (%r, %r)sendfileSend a file to transport. + + Return the total number of bytes which were sent. + + The method uses high-performance os.sendfile if available. + + file must be a regular file object opened in binary mode. + + offset tells from where to start reading the file. If specified, + count is the total number of bytes to transmit as opposed to + sending the file until EOF is reached. 
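
A client-side counterpart to create_connection(); this sketch assumes the echo-server sketch shown earlier is listening on 127.0.0.1:8888, so the run line is left commented out:

    import asyncio

    class EchoClient(asyncio.Protocol):
        def __init__(self, done):
            self.done = done               # a Future resolved on disconnect

        def connection_made(self, transport):
            transport.write(b"hello\r\n")

        def data_received(self, data):
            print("echoed back:", data)

        def connection_lost(self, exc):
            self.done.set_result(True)

    async def main():
        loop = asyncio.get_running_loop()
        done = loop.create_future()
        # create_connection resolves the host, tries each address in turn
        # and returns a (transport, protocol) pair once one connects.
        transport, protocol = await loop.create_connection(
            lambda: EchoClient(done), "127.0.0.1", 8888)
        await done
        transport.close()

    # asyncio.run(main())   # needs the echo server from the earlier sketch
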
File position is updated on + return or also in case of error in which case file.tell() + can be used to figure out the number of bytes + which were sent. + + fallback set to True makes asyncio to manually read and send + the file when the platform does not support the sendfile syscall + (e.g. Windows or SSL socket on Unix). + + Raise SendfileNotAvailableError if the system does not support + sendfile syscall and fallback is False. + Transport is closing_sendfile_compatible_SendfileModeUNSUPPORTEDsendfile is not supported for transport TRY_NATIVE_sendfile_nativefallback is disabled and native sendfile is not supported for transport "fallback is disabled and native sendfile is not ""supported for transport "_sendfile_fallbacksendfile syscall is not supportedstart_tlsUpgrade transport to TLS. + + Return a new transport that *protocol* should start using + immediately. + Python ssl module is not availableSSLContextsslcontext is expected to be an instance of ssl.SSLContext, got 'sslcontext is expected to be an instance of ssl.SSLContext, '_start_tls_compatibletransport is not supported by start_tls()SSLProtocolssl_protocolconmade_cbresume_cb_app_transportcreate_datagram_endpointremote_addrreuse_addressreuse_portallow_broadcastCreate datagram connection.A UDP Socket was expected, got problemssocket modifier keyword arguments can not be used when sock is specified. ('socket modifier keyword arguments can not be used ''when sock is specified. ('r_addrunexpected address familyaddr_pairs_infostring is expectedUnable to check or remove stale UNIX socket %r: %r'Unable to check or remove stale UNIX ''socket %r: %r'addr_infos2-tuple is expectedfamproaddr_paircan not get address informationPassing `reuse_address=True` is no longer supported, as the usage of SO_REUSEPORT in UDP poses a significant security concern."Passing `reuse_address=True` is no ""longer supported, as the usage of ""SO_REUSEPORT in UDP poses a significant ""security concern."The *reuse_address* parameter has been deprecated as of 3.5.10 and is scheduled for removal in 3.11."The *reuse_address* parameter has been ""deprecated as of 3.5.10 and is scheduled ""for removal in 3.11."local_addressremote_addressDatagram endpoint local_addr=%r remote_addr=%r created: (%r, %r)"Datagram endpoint local_addr=%r remote_addr=%r ""created: (%r, %r)"Datagram endpoint remote_addr=%r created: (%r, %r)"Datagram endpoint remote_addr=%r created: ""(%r, %r)"_create_server_getaddrinfogetaddrinfo() returned empty listcreate_serverCreate a TCP server. + + The host parameter can be a string, in that case the TCP server is + bound to host and port. + + The host parameter can also be a sequence of strings and in that case + the TCP server is bound to all hosts of the sequence. If a host + appears multiple times (possibly indirectly e.g. when hostnames + resolve to the same IP address), the server is only bound once to that + host. + + Return a Server object which can be used to stop the service. + + This method is a coroutine. + ssl argument must be an SSLContext or Nonehostscompletedcanonnamesacreate_server() failed to create socket.socket(%r, %r, %r)'create_server() failed to create ''socket.socket(%r, %r, %r)'error while attempting to bind on address %r: %s'error while attempting ''to bind on address %r: %s'Neither host/port nor sock were specified%r is servingconnect_accepted_socketHandle an accepted connection. + + This is used by servers that accept connections outside of + asyncio but that use asyncio to handle connections. + + This method is a coroutine. 
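
create_datagram_endpoint() takes a DatagramProtocol factory plus local_addr/remote_addr as described above; a UDP echo sketch, with port 9999 as an arbitrary choice:

    import asyncio

    class UdpEcho(asyncio.DatagramProtocol):
        def connection_made(self, transport):
            self.transport = transport

        def datagram_received(self, data, addr):
            self.transport.sendto(data, addr)    # send the datagram back

    async def main():
        loop = asyncio.get_running_loop()
        transport, protocol = await loop.create_datagram_endpoint(
            UdpEcho, local_addr=("127.0.0.1", 9999))
        try:
            await asyncio.sleep(60)              # answer datagrams for a minute
        finally:
            transport.close()

    # asyncio.run(main())   # binds a local UDP port
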
When completed, the coroutine + returns a (transport, protocol) pair. + %r handled: (%r, %r)connect_read_pipeRead pipe %r connected: (%r, %r)connect_write_pipeWrite pipe %r connected: (%r, %r)_log_subprocessstdin=stdout=stderr=stdout=stderr=subprocess_shellcmd must be a stringuniversal_newlines must be Falseshell must be Truebufsize must be 0text must be Falseencoding must be Noneerrors must be Nonedebug_logrun shell command %r%s: %rsubprocess_execprogramshell must be Falsepopen_argsexecute program get_exception_handlerReturn an exception handler, or None if the default one is in use. + set_exception_handlerSet handler as the new event loop exception handler. + + If handler is None, the default exception handler will + be set. + + If handler is a callable object, it should have a + signature matching '(loop, context)', where 'loop' + will be a reference to the active event loop, 'context' + will be a dict object (see `call_exception_handler()` + documentation for details about context). + A callable object or None is expected, got 'A callable object or None is expected, 'default_exception_handlerDefault exception handler. + + This is called when an exception occurs and no exception + handler is set, and can be called by a custom exception + handler that wants to defer to the default behavior. + + This default handler logs the error message and other + context-dependent information. In debug mode, a truncated + stack trace is also appended showing where the given object + (e.g. a handle or future or task) was created, if any. + + The context parameter has the same meaning as in + `call_exception_handler()`. + Unhandled exception in event loopsource_tracebackhandle_tracebacklog_linesformat_listObject created at (most recent call last): +Handle created at (most recent call last): +Call the current event loop's exception handler. + + The context argument is a dict containing the following keys: + + - 'message': Error message; + - 'exception' (optional): Exception object; + - 'future' (optional): Future instance; + - 'task' (optional): Task instance; + - 'handle' (optional): Handle instance; + - 'protocol' (optional): Protocol instance; + - 'transport' (optional): Transport instance; + - 'socket' (optional): Socket instance; + - 'asyncgen' (optional): Asynchronous generator that caused + the exception. + + New keys maybe introduced in the future. + + Note: do not overload this method in an event loop subclass. + For custom exception handling, use the + `set_exception_handler()` method. + Exception in default exception handlerUnhandled error in exception handlerException in default exception handler while handling an unexpected error in custom exception handler'Exception in default exception handler ''while handling an unexpected error ''in custom exception handler'_add_callbackAdd a Handle to _scheduled (TimerHandle) or _ready.A Handle is required here_cancelled_add_callback_signalsafeLike _add_callback() but called from a signal handler._timer_handle_cancelledNotification that a TimerHandle has been cancelled.Run one full iteration of the event loop. + + This calls all currently ready callbacks, polls for I/O, + schedules the resulting callbacks, and finally schedules + 'call_later' callbacks. 
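
The exception-handler hooks documented above can be wired up like this; the handler uses the '(loop, context)' signature the docstring requires:

    import asyncio

    def handler(loop, context):
        # 'message' is always present; 'exception', 'handle', etc. appear
        # only when applicable, as listed in the call_exception_handler docs.
        print("caught:", context["message"])
        if "exception" in context:
            print("  exception:", context["exception"])

    def faulty_callback():
        raise RuntimeError("boom in a callback")

    async def main():
        loop = asyncio.get_running_loop()
        loop.set_exception_handler(handler)
        loop.call_soon(faulty_callback)        # its error is routed to handler
        await asyncio.sleep(0)                 # let the callback run
        loop.call_exception_handler({"message": "reported by hand"})

    asyncio.run(main())
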
+ sched_countnew_scheduled_when_selectorntodo_runExecuting %s took %.3f secondsenabledDEBUG_STACK_DEPTH# Minimum number of _scheduled timer handles before cleanup of# cancelled handles is performed.# Minimum fraction of _scheduled timer handles that are cancelled# before cleanup of cancelled handles is performed.# Maximum timeout passed to select to avoid OS limitations# Used for deprecation and removal of `loop.create_datagram_endpoint()`'s# *reuse_address* parameter# format the task# Try to skip getaddrinfo if "host" is already an IP. Users might have# handled name resolution in their own code and pass in resolved IPs.# If port's a service name like "http", don't skip getaddrinfo.# Linux's inet_pton doesn't accept an IPv6 zone index after host,# like '::1%lo0'.# The host has already been resolved.# "host" is not an IP address.# Group addresses by family# Issue #22429: run_forever() already finished, no need to# stop it.# Never happens if peer disconnects after sending the whole content# Thus disconnection is always an exception from user perspective# Cancel the future.# Basically it has no effect because protocol is switched back,# no code should wait for it anymore.# Skip one loop iteration so that all 'loop.add_reader'# go through.# Identifier of the thread running the event loop, or None if the# event loop is not running# In debug mode, if the execution of a callback or a step of a task# exceed this duration in seconds, the slow callback/task is logged.# A weak set of all asynchronous generators that are# being iterated by the loop.# Set to True when `loop.shutdown_asyncgens` is called.# If Python version is <3.6 or we don't have any asynchronous# generators alive.# An exception is raised if the future didn't complete, so there# is no need to log the "destroy pending task" message# The coroutine raised a BaseException. Consume the exception# to not log a warning, the caller doesn't have access to the# local task.# NB: sendfile syscall is not supported for SSL sockets and# non-mmap files even if sendfile is supported by OS# EOF# all bind attempts failed# Use host as default for server_hostname. It is an error# if host is empty or not set, e.g. when an# already-connected socket was passed or when only a port# is given. To avoid this error, you can pass# server_hostname='' -- this will bypass the hostname# check. 
(This also means that if host is a numeric# IP/IPv6 address, we will attempt to verify that exact# address; this will probably fail, but it is possible to# create a certificate for a specific IP address, so we# don't judge it here.)# If using happy eyeballs, default to interleave addresses by family# not using happy eyeballs# using happy eyeballs# If they all have the same str(), raise one.# Raise a combined exception so the user can see all# the various error messages.# We allow AF_INET, AF_INET6, AF_UNIX as long as they# are SOCK_STREAM.# We support passing AF_UNIX sockets even though we have# a dedicated API for that: create_unix_connection.# Disallowing AF_UNIX in this method, breaks backwards# compatibility.# Get the socket from the transport because SSL transport closes# the old socket and creates a new SSL socket# Pause early so that "ssl_protocol.data_received()" doesn't# have a chance to get called before "ssl_protocol.connection_made()".# show the problematic kwargs in exception msg# Directory may have permissions only to create socket.# join address by (family, protocol)# Using order preserving dict# each addr has to have info for each (family, proto) pair# bpo-37228# "host" is already a resolved IP.# Assume it's a bad family/type/protocol combination.# Disable IPv4/IPv6 dual stack support (enabled by# default on Linux) which makes a single socket# listen on both address families.# don't log parameters: they may contain sensitive information# (password) and may be too long# Second protection layer for unexpected errors# in the default implementation, as well as for subclassed# event loops with overloaded "default_exception_handler".# Exception in the user set custom exception handler.# Let's try default handler.# Guard 'default_exception_handler' in case it is# overloaded.# Remove delayed calls that were cancelled if their number# is too high# Remove delayed calls that were cancelled from head of queue.# Compute the desired timeout.# Handle 'later' callbacks that are ready.# This is the only place where callbacks are actually *called*.# All other places just add them to ready.# Note: We run all currently scheduled callbacks, but not any# callbacks scheduled by callbacks run this time around --# they will be run the next time (after another I/O poll).# Use an idiom that is thread-safe without using locks.# Needed to break cycles when an exception occurs.b'Base implementation of event loop. + +The event loop can be broken up into a multiplexer (the part +responsible for notifying us of I/O events) and the event loop proper, +which wraps a multiplexer with functionality for scheduling callbacks, +immediately or at a given time in the future. + +Whenever a public API takes a callback, subsequent positional +arguments will be passed to the callback if/when it is called. This +avoids the proliferation of trivial lambdas implementing closures. +Keyword arguments for the callback are not supported; this is a +conscious design decision, leaving the door open for keyword arguments +to modify the meaning of the API call itself. +'u'Base implementation of event loop. + +The event loop can be broken up into a multiplexer (the part +responsible for notifying us of I/O events) and the event loop proper, +which wraps a multiplexer with functionality for scheduling callbacks, +immediately or at a given time in the future. + +Whenever a public API takes a callback, subsequent positional +arguments will be passed to the callback if/when it is called. 
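
Debug mode drives the "Executing ... took ... seconds" warning mentioned above; slow_callback_duration is the loop attribute referenced there, and the 0.2 second sleep is just a stand-in for a blocking callback:

    import asyncio
    import time

    def blocking_callback():
        time.sleep(0.2)          # long enough to be reported as slow

    async def main():
        loop = asyncio.get_running_loop()
        loop.set_debug(True)                  # same effect as PYTHONASYNCIODEBUG=1
        loop.slow_callback_duration = 0.1     # 0.1 s is also the default threshold
        loop.call_soon(blocking_callback)
        await asyncio.sleep(0.5)
        # With debug enabled the asyncio logger warns, e.g.:
        # "Executing <Handle blocking_callback()> took 0.200 seconds"

    asyncio.run(main())
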
This +avoids the proliferation of trivial lambdas implementing closures. +Keyword arguments for the callback are not supported; this is a +conscious design decision, leaving the door open for keyword arguments +to modify the meaning of the API call itself. +'b'BaseEventLoop'u'BaseEventLoop'b'AF_INET6'u'AF_INET6'b'__self__'u'__self__'b''u''b''u''b'reuse_port not supported by socket module'u'reuse_port not supported by socket module'b'reuse_port not supported by socket module, SO_REUSEPORT defined but not implemented.'u'reuse_port not supported by socket module, SO_REUSEPORT defined but not implemented.'b'inet_pton'u'inet_pton'b'idna'u'idna'b'Interleave list of addrinfo tuples by family.'u'Interleave list of addrinfo tuples by family.'b'TCP_NODELAY'u'TCP_NODELAY'b'transport should be _FlowControlMixin instance'u'transport should be _FlowControlMixin instance'b'Connection closed by peer'u'Connection closed by peer'b'Invalid state: connection should have been established already.'u'Invalid state: connection should have been established already.'b'Connection is closed by peer'u'Connection is closed by peer'b'Invalid state: reading should be paused'u'Invalid state: reading should be paused'b' sockets='u' sockets='b'server 'u'server 'b' is already being awaited on serve_forever()'u' is already being awaited on serve_forever()'b' is closed'u' is closed'b'monotonic'u'monotonic'b' running='u' running='b' closed='u' closed='b' debug='u' debug='b'Create a Future object attached to the loop.'u'Create a Future object attached to the loop.'b'Schedule a coroutine object. + + Return a task object. + 'u'Schedule a coroutine object. + + Return a task object. + 'b'Set a task factory that will be used by loop.create_task(). + + If factory is None the default task factory will be set. + + If factory is a callable, it should have a signature matching + '(loop, coro)', where 'loop' will be a reference to the active + event loop, 'coro' will be a coroutine object. The callable + must return a Future. + 'u'Set a task factory that will be used by loop.create_task(). + + If factory is None the default task factory will be set. + + If factory is a callable, it should have a signature matching + '(loop, coro)', where 'loop' will be a reference to the active + event loop, 'coro' will be a coroutine object. The callable + must return a Future. + 'b'task factory must be a callable or None'u'task factory must be a callable or None'b'Return a task factory, or None if the default one is in use.'u'Return a task factory, or None if the default one is in use.'b'Create socket transport.'u'Create socket transport.'b'Create SSL transport.'u'Create SSL transport.'b'Create datagram transport.'u'Create datagram transport.'b'Create read pipe transport.'u'Create read pipe transport.'b'Create write pipe transport.'u'Create write pipe transport.'b'Create subprocess transport.'u'Create subprocess transport.'b'Write a byte to self-pipe, to wake up the event loop. + + This may be called from a different thread. + + The subclass is responsible for implementing the self-pipe. + 'u'Write a byte to self-pipe, to wake up the event loop. + + This may be called from a different thread. + + The subclass is responsible for implementing the self-pipe. 
+ 'b'Process selector events.'u'Process selector events.'b'Event loop is closed'u'Event loop is closed'b'asynchronous generator 'u'asynchronous generator 'b' was scheduled after loop.shutdown_asyncgens() call'u' was scheduled after loop.shutdown_asyncgens() call'b'Shutdown all active asynchronous generators.'u'Shutdown all active asynchronous generators.'b'an error occurred during closing of asynchronous generator 'u'an error occurred during closing of asynchronous generator 'b'asyncgen'u'asyncgen'b'This event loop is already running'u'This event loop is already running'b'Cannot run the event loop while another loop is running'u'Cannot run the event loop while another loop is running'b'Run until stop() is called.'u'Run until stop() is called.'b'Run until the Future is done. + + If the argument is a coroutine, it is wrapped in a Task. + + WARNING: It would be disastrous to call run_until_complete() + with the same coroutine twice -- it would wrap it in two + different Tasks and that can't be good. + + Return the Future's result, or raise its exception. + 'u'Run until the Future is done. + + If the argument is a coroutine, it is wrapped in a Task. + + WARNING: It would be disastrous to call run_until_complete() + with the same coroutine twice -- it would wrap it in two + different Tasks and that can't be good. + + Return the Future's result, or raise its exception. + 'b'Event loop stopped before Future completed.'u'Event loop stopped before Future completed.'b'Stop running the event loop. + + Every callback already scheduled will still run. This simply informs + run_forever to stop looping after a complete iteration. + 'u'Stop running the event loop. + + Every callback already scheduled will still run. This simply informs + run_forever to stop looping after a complete iteration. + 'b'Close the event loop. + + This clears the queues and shuts down the executor, + but does not wait for the executor to finish. + + The event loop must not be running. + 'u'Close the event loop. + + This clears the queues and shuts down the executor, + but does not wait for the executor to finish. + + The event loop must not be running. + 'b'Cannot close a running event loop'u'Cannot close a running event loop'b'Close %r'u'Close %r'b'Returns True if the event loop was closed.'u'Returns True if the event loop was closed.'b'unclosed event loop 'u'unclosed event loop 'b'Returns True if the event loop is running.'u'Returns True if the event loop is running.'b'Return the time according to the event loop's clock. + + This is a float expressed in seconds since an epoch, but the + epoch, precision, accuracy and drift are unspecified and may + differ per event loop. + 'u'Return the time according to the event loop's clock. + + This is a float expressed in seconds since an epoch, but the + epoch, precision, accuracy and drift are unspecified and may + differ per event loop. + 'b'Arrange for a callback to be called at a given time. + + Return a Handle: an opaque object with a cancel() method that + can be used to cancel the call. + + The delay can be an int or float, expressed in seconds. It is + always relative to the current time. + + Each callback will be called exactly once. If two callbacks + are scheduled for exactly the same time, it undefined which + will be called first. + + Any positional arguments after the callback will be passed to + the callback when it is called. + 'u'Arrange for a callback to be called at a given time. 
+ + Return a Handle: an opaque object with a cancel() method that + can be used to cancel the call. + + The delay can be an int or float, expressed in seconds. It is + always relative to the current time. + + Each callback will be called exactly once. If two callbacks + are scheduled for exactly the same time, it undefined which + will be called first. + + Any positional arguments after the callback will be passed to + the callback when it is called. + 'b'Like call_later(), but uses an absolute time. + + Absolute time corresponds to the event loop's time() method. + 'u'Like call_later(), but uses an absolute time. + + Absolute time corresponds to the event loop's time() method. + 'b'call_at'u'call_at'b'Arrange for a callback to be called as soon as possible. + + This operates as a FIFO queue: callbacks are called in the + order in which they are registered. Each callback will be + called exactly once. + + Any positional arguments after the callback will be passed to + the callback when it is called. + 'u'Arrange for a callback to be called as soon as possible. + + This operates as a FIFO queue: callbacks are called in the + order in which they are registered. Each callback will be + called exactly once. + + Any positional arguments after the callback will be passed to + the callback when it is called. + 'b'call_soon'u'call_soon'b'coroutines cannot be used with 'u'coroutines cannot be used with 'b'()'u'()'b'a callable object was expected by 'u'a callable object was expected by 'b'(), got 'u'(), got 'b'Check that the current thread is the thread running the event loop. + + Non-thread-safe methods of this class make this assumption and will + likely behave incorrectly when the assumption is violated. + + Should only be called when (self._debug == True). The caller is + responsible for checking this condition for performance reasons. + 'u'Check that the current thread is the thread running the event loop. + + Non-thread-safe methods of this class make this assumption and will + likely behave incorrectly when the assumption is violated. + + Should only be called when (self._debug == True). The caller is + responsible for checking this condition for performance reasons. 
+ 'b'Non-thread-safe operation invoked on an event loop other than the current one'u'Non-thread-safe operation invoked on an event loop other than the current one'b'Like call_soon(), but thread-safe.'u'Like call_soon(), but thread-safe.'b'call_soon_threadsafe'u'call_soon_threadsafe'b'run_in_executor'u'run_in_executor'b'Using the default executor that is not an instance of ThreadPoolExecutor is deprecated and will be prohibited in Python 3.9'u'Using the default executor that is not an instance of ThreadPoolExecutor is deprecated and will be prohibited in Python 3.9'b'family='u'family='b'type='u'type='b'proto='u'proto='b'flags='u'flags='b'Get address info %s'u'Get address info %s'b'Getting address info 'u'Getting address info 'b' took 'u' took 'b'ms: 'u'ms: 'b'the socket must be non-blocking'u'the socket must be non-blocking'b'syscall sendfile is not available for socket 'u'syscall sendfile is not available for socket 'b' and file {file!r} combination'u' and file {file!r} combination'b'seek'u'seek'b'mode'u'mode'b'file should be opened in binary mode'u'file should be opened in binary mode'b'only SOCK_STREAM type sockets are supported'u'only SOCK_STREAM type sockets are supported'b'count must be a positive integer (got {!r})'u'count must be a positive integer (got {!r})'b'offset must be a non-negative integer (got {!r})'u'offset must be a non-negative integer (got {!r})'b'Create, bind and connect one socket.'u'Create, bind and connect one socket.'b'error while attempting to bind on address 'u'error while attempting to bind on address 'b'Connect to a TCP server. + + Create a streaming transport connection to a given Internet host and + port: socket family AF_INET or socket.AF_INET6 depending on host (or + family if specified), socket type SOCK_STREAM. protocol_factory must be + a callable returning a protocol instance. + + This method is a coroutine which will try to establish the connection + in the background. When successful, the coroutine returns a + (transport, protocol) pair. + 'u'Connect to a TCP server. + + Create a streaming transport connection to a given Internet host and + port: socket family AF_INET or socket.AF_INET6 depending on host (or + family if specified), socket type SOCK_STREAM. protocol_factory must be + a callable returning a protocol instance. + + This method is a coroutine which will try to establish the connection + in the background. When successful, the coroutine returns a + (transport, protocol) pair. + 'b'server_hostname is only meaningful with ssl'u'server_hostname is only meaningful with ssl'b'You must set server_hostname when using ssl without a host'u'You must set server_hostname when using ssl without a host'b'ssl_handshake_timeout is only meaningful with ssl'u'ssl_handshake_timeout is only meaningful with ssl'b'host/port and sock can not be specified at the same time'u'host/port and sock can not be specified at the same time'b'getaddrinfo() returned empty list'u'getaddrinfo() returned empty list'b'Multiple exceptions: {}'u'Multiple exceptions: {}'b'host and port was not specified and no sock specified'u'host and port was not specified and no sock specified'b'A Stream Socket was expected, got 'u'A Stream Socket was expected, got 'b'%r connected to %s:%r: (%r, %r)'u'%r connected to %s:%r: (%r, %r)'b'Send a file to transport. + + Return the total number of bytes which were sent. + + The method uses high-performance os.sendfile if available. + + file must be a regular file object opened in binary mode. + + offset tells from where to start reading the file. 
If specified, + count is the total number of bytes to transmit as opposed to + sending the file until EOF is reached. File position is updated on + return or also in case of error in which case file.tell() + can be used to figure out the number of bytes + which were sent. + + fallback set to True makes asyncio to manually read and send + the file when the platform does not support the sendfile syscall + (e.g. Windows or SSL socket on Unix). + + Raise SendfileNotAvailableError if the system does not support + sendfile syscall and fallback is False. + 'u'Send a file to transport. + + Return the total number of bytes which were sent. + + The method uses high-performance os.sendfile if available. + + file must be a regular file object opened in binary mode. + + offset tells from where to start reading the file. If specified, + count is the total number of bytes to transmit as opposed to + sending the file until EOF is reached. File position is updated on + return or also in case of error in which case file.tell() + can be used to figure out the number of bytes + which were sent. + + fallback set to True makes asyncio to manually read and send + the file when the platform does not support the sendfile syscall + (e.g. Windows or SSL socket on Unix). + + Raise SendfileNotAvailableError if the system does not support + sendfile syscall and fallback is False. + 'b'Transport is closing'u'Transport is closing'b'_sendfile_compatible'u'_sendfile_compatible'b'sendfile is not supported for transport 'u'sendfile is not supported for transport 'b'fallback is disabled and native sendfile is not supported for transport 'u'fallback is disabled and native sendfile is not supported for transport 'b'sendfile syscall is not supported'u'sendfile syscall is not supported'b'Upgrade transport to TLS. + + Return a new transport that *protocol* should start using + immediately. + 'u'Upgrade transport to TLS. + + Return a new transport that *protocol* should start using + immediately. + 'b'Python ssl module is not available'u'Python ssl module is not available'b'sslcontext is expected to be an instance of ssl.SSLContext, got 'u'sslcontext is expected to be an instance of ssl.SSLContext, got 'b'_start_tls_compatible'u'_start_tls_compatible'b'transport 'u'transport 'b' is not supported by start_tls()'u' is not supported by start_tls()'b'Create datagram connection.'u'Create datagram connection.'b'A UDP Socket was expected, got 'u'A UDP Socket was expected, got 'b'socket modifier keyword arguments can not be used when sock is specified. ('u'socket modifier keyword arguments can not be used when sock is specified. 
('b'unexpected address family'u'unexpected address family'b'string is expected'u'string is expected'u''b'Unable to check or remove stale UNIX socket %r: %r'u'Unable to check or remove stale UNIX socket %r: %r'b'2-tuple is expected'u'2-tuple is expected'b'can not get address information'u'can not get address information'b'Passing `reuse_address=True` is no longer supported, as the usage of SO_REUSEPORT in UDP poses a significant security concern.'u'Passing `reuse_address=True` is no longer supported, as the usage of SO_REUSEPORT in UDP poses a significant security concern.'b'The *reuse_address* parameter has been deprecated as of 3.5.10 and is scheduled for removal in 3.11.'u'The *reuse_address* parameter has been deprecated as of 3.5.10 and is scheduled for removal in 3.11.'b'Datagram endpoint local_addr=%r remote_addr=%r created: (%r, %r)'u'Datagram endpoint local_addr=%r remote_addr=%r created: (%r, %r)'b'Datagram endpoint remote_addr=%r created: (%r, %r)'u'Datagram endpoint remote_addr=%r created: (%r, %r)'b'getaddrinfo('u'getaddrinfo('b') returned empty list'u') returned empty list'b'Create a TCP server. + + The host parameter can be a string, in that case the TCP server is + bound to host and port. + + The host parameter can also be a sequence of strings and in that case + the TCP server is bound to all hosts of the sequence. If a host + appears multiple times (possibly indirectly e.g. when hostnames + resolve to the same IP address), the server is only bound once to that + host. + + Return a Server object which can be used to stop the service. + + This method is a coroutine. + 'u'Create a TCP server. + + The host parameter can be a string, in that case the TCP server is + bound to host and port. + + The host parameter can also be a sequence of strings and in that case + the TCP server is bound to all hosts of the sequence. If a host + appears multiple times (possibly indirectly e.g. when hostnames + resolve to the same IP address), the server is only bound once to that + host. + + Return a Server object which can be used to stop the service. + + This method is a coroutine. + 'b'ssl argument must be an SSLContext or None'u'ssl argument must be an SSLContext or None'b'create_server() failed to create socket.socket(%r, %r, %r)'u'create_server() failed to create socket.socket(%r, %r, %r)'b'IPPROTO_IPV6'u'IPPROTO_IPV6'b'error while attempting to bind on address %r: %s'u'error while attempting to bind on address %r: %s'b'Neither host/port nor sock were specified'u'Neither host/port nor sock were specified'b'%r is serving'u'%r is serving'b'Handle an accepted connection. + + This is used by servers that accept connections outside of + asyncio but that use asyncio to handle connections. + + This method is a coroutine. When completed, the coroutine + returns a (transport, protocol) pair. + 'u'Handle an accepted connection. + + This is used by servers that accept connections outside of + asyncio but that use asyncio to handle connections. + + This method is a coroutine. When completed, the coroutine + returns a (transport, protocol) pair. 
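A sketch of loop.create_server() as documented above; the Echo protocol and port are illustrative, and the returned Server object is what you use to stop the service:

```python
import asyncio


class Echo(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport

    def data_received(self, data):
        self.transport.write(data)  # echo bytes back to the peer


async def main():
    loop = asyncio.get_running_loop()
    server = await loop.create_server(Echo, host="127.0.0.1", port=8888)
    async with server:
        await server.serve_forever()

# asyncio.run(main())
```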
+ 'b'%r handled: (%r, %r)'u'%r handled: (%r, %r)'b'Read pipe %r connected: (%r, %r)'u'Read pipe %r connected: (%r, %r)'b'Write pipe %r connected: (%r, %r)'u'Write pipe %r connected: (%r, %r)'b'stdin='u'stdin='b'stdout=stderr='u'stdout=stderr='b'stdout='u'stdout='b'stderr='u'stderr='b'cmd must be a string'u'cmd must be a string'b'universal_newlines must be False'u'universal_newlines must be False'b'shell must be True'u'shell must be True'b'bufsize must be 0'u'bufsize must be 0'b'text must be False'u'text must be False'b'encoding must be None'u'encoding must be None'b'errors must be None'u'errors must be None'b'run shell command %r'u'run shell command %r'b'%s: %r'u'%s: %r'b'shell must be False'u'shell must be False'b'execute program 'u'execute program 'b'Return an exception handler, or None if the default one is in use. + 'u'Return an exception handler, or None if the default one is in use. + 'b'Set handler as the new event loop exception handler. + + If handler is None, the default exception handler will + be set. + + If handler is a callable object, it should have a + signature matching '(loop, context)', where 'loop' + will be a reference to the active event loop, 'context' + will be a dict object (see `call_exception_handler()` + documentation for details about context). + 'u'Set handler as the new event loop exception handler. + + If handler is None, the default exception handler will + be set. + + If handler is a callable object, it should have a + signature matching '(loop, context)', where 'loop' + will be a reference to the active event loop, 'context' + will be a dict object (see `call_exception_handler()` + documentation for details about context). + 'b'A callable object or None is expected, got 'u'A callable object or None is expected, got 'b'Default exception handler. + + This is called when an exception occurs and no exception + handler is set, and can be called by a custom exception + handler that wants to defer to the default behavior. + + This default handler logs the error message and other + context-dependent information. In debug mode, a truncated + stack trace is also appended showing where the given object + (e.g. a handle or future or task) was created, if any. + + The context parameter has the same meaning as in + `call_exception_handler()`. + 'u'Default exception handler. + + This is called when an exception occurs and no exception + handler is set, and can be called by a custom exception + handler that wants to defer to the default behavior. + + This default handler logs the error message and other + context-dependent information. In debug mode, a truncated + stack trace is also appended showing where the given object + (e.g. a handle or future or task) was created, if any. + + The context parameter has the same meaning as in + `call_exception_handler()`. + 'b'Unhandled exception in event loop'u'Unhandled exception in event loop'b'source_traceback'u'source_traceback'b'handle_traceback'u'handle_traceback'b'Object created at (most recent call last): +'u'Object created at (most recent call last): +'b'Handle created at (most recent call last): +'u'Handle created at (most recent call last): +'b'Call the current event loop's exception handler. 
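A small sketch tying together set_exception_handler() and call_exception_handler() as documented here; the handler name and the message/exception values are illustrative. The handler follows the '(loop, context)' signature and defers to the default handler after logging:

```python
import asyncio


def my_handler(loop, context):
    # Context keys per the docstring: 'message', optional 'exception',
    # 'future', 'task', 'handle', 'protocol', 'transport', 'socket', ...
    print("handled:", context.get("message"), context.get("exception"))
    loop.default_exception_handler(context)


async def main():
    loop = asyncio.get_running_loop()
    loop.set_exception_handler(my_handler)
    # Explicitly route an error through the loop's handler:
    loop.call_exception_handler({
        "message": "something went wrong",
        "exception": RuntimeError("boom"),
    })

# asyncio.run(main())
```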
+ + The context argument is a dict containing the following keys: + + - 'message': Error message; + - 'exception' (optional): Exception object; + - 'future' (optional): Future instance; + - 'task' (optional): Task instance; + - 'handle' (optional): Handle instance; + - 'protocol' (optional): Protocol instance; + - 'transport' (optional): Transport instance; + - 'socket' (optional): Socket instance; + - 'asyncgen' (optional): Asynchronous generator that caused + the exception. + + New keys maybe introduced in the future. + + Note: do not overload this method in an event loop subclass. + For custom exception handling, use the + `set_exception_handler()` method. + 'u'Call the current event loop's exception handler. + + The context argument is a dict containing the following keys: + + - 'message': Error message; + - 'exception' (optional): Exception object; + - 'future' (optional): Future instance; + - 'task' (optional): Task instance; + - 'handle' (optional): Handle instance; + - 'protocol' (optional): Protocol instance; + - 'transport' (optional): Transport instance; + - 'socket' (optional): Socket instance; + - 'asyncgen' (optional): Asynchronous generator that caused + the exception. + + New keys maybe introduced in the future. + + Note: do not overload this method in an event loop subclass. + For custom exception handling, use the + `set_exception_handler()` method. + 'b'Exception in default exception handler'u'Exception in default exception handler'b'Unhandled error in exception handler'u'Unhandled error in exception handler'b'context'u'context'b'Exception in default exception handler while handling an unexpected error in custom exception handler'u'Exception in default exception handler while handling an unexpected error in custom exception handler'b'Add a Handle to _scheduled (TimerHandle) or _ready.'u'Add a Handle to _scheduled (TimerHandle) or _ready.'b'A Handle is required here'u'A Handle is required here'b'Like _add_callback() but called from a signal handler.'u'Like _add_callback() but called from a signal handler.'b'Notification that a TimerHandle has been cancelled.'u'Notification that a TimerHandle has been cancelled.'b'Run one full iteration of the event loop. + + This calls all currently ready callbacks, polls for I/O, + schedules the resulting callbacks, and finally schedules + 'call_later' callbacks. + 'u'Run one full iteration of the event loop. + + This calls all currently ready callbacks, polls for I/O, + schedules the resulting callbacks, and finally schedules + 'call_later' callbacks. + 'b'Executing %s took %.3f seconds'u'Executing %s took %.3f seconds'u'asyncio.base_events'u'base_events'format_helpers_PENDING_CANCELLED_FINISHEDCheck for a Future. + + This returns True when obj is a Future instance or is advertising + itself as duck-type compatible by setting _asyncio_future_blocking. + See comment in Future for more details. 
+ _format_callbackshelper function for Future.__repr__format_cb_format_callback_source{}, {}{}, <{} more>, {}cb=[_repr_running_future_repr_infoexception=result=created at # States for Future.# bpo-42183: _repr_running is needed for repr protection# when a Future or Task result contains itself directly or indirectly.# The logic is borrowed from @reprlib.recursive_repr decorator.# Unfortunately, the direct decorator usage is impossible because of# AttributeError: '_asyncio.Task' object has no attribute '__module__' error.# After fixing this thing we can return to the decorator based approach.# (Future) -> str# use reprlib to limit the length of the output, especially# for very long stringsb'Check for a Future. + + This returns True when obj is a Future instance or is advertising + itself as duck-type compatible by setting _asyncio_future_blocking. + See comment in Future for more details. + 'u'Check for a Future. + + This returns True when obj is a Future instance or is advertising + itself as duck-type compatible by setting _asyncio_future_blocking. + See comment in Future for more details. + 'b'_asyncio_future_blocking'u'_asyncio_future_blocking'b'helper function for Future.__repr__'u'helper function for Future.__repr__'b'{}, {}'u'{}, {}'b'{}, <{} more>, {}'u'{}, <{} more>, {}'b'cb=['u'cb=['b'exception='u'exception='b'result='u'result='b'created at 'u'created at 'u'asyncio.base_futures'u'base_futures'BaseSubprocessTransportSubprocessTransport_protocol_proc_pid_returncode_exit_waiters_pending_calls_pipes_finished_extraprocess %r created: pid %s_connect_pipespid=returncode=not started<{}>pollClose running child process: kill %rkillunclosed transport get_pidget_returncodeget_pipe_transport_check_procsend_signalWriteSubprocessPipeProtoReadSubprocessPipeProto_pipe_connection_lostpipe_connection_lost_try_finish_pipe_data_receivedpipe_data_received_process_exited%r exited with return code %rprocess_exited_waitWait until the process exit and return the process return code. + + This method is a coroutine.disconnected_call_connection_lostBaseProtocol fd= pipe=# Create the child process: set the _proc attribute# has the child process finished?# the child process has finished, but the# transport hasn't been notified yet?# Don't clear the _proc reference yet: _post_init() may still run# asyncio uses a child watcher: copy the status into the Popen# object. On Python 3.6, it is required to avoid a ResourceWarning.# wake up futures waiting for wait()b'process %r created: pid %s'u'process %r created: pid %s'b'closed'u'closed'b'pid='u'pid='b'returncode='u'returncode='b'not started'u'not started'b'<{}>'u'<{}>'b'Close running child process: kill %r'u'Close running child process: kill %r'b'unclosed transport 'u'unclosed transport 'b'%r exited with return code %r'u'%r exited with return code %r'b'Wait until the process exit and return the process return code. + + This method is a coroutine.'u'Wait until the process exit and return the process return code. + + This method is a coroutine.'b' fd='u' fd='b' pipe='u' pipe='u'asyncio.base_subprocess'u'base_subprocess'linecachebase_futures_task_repr_infocancellingname=%r_format_coroutinecoro=", generated in interactive + mode, are returned unchanged. + Set values of attributes as ready to start debugging.botframe_set_stopinfotrace_dispatchDispatch a trace function for debugged frames based on the event. + + This function is installed as the trace function for debugged + frames. Its return value is the new trace function, which is + usually itself. 
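The duck-typed Future check described above can be exercised directly with asyncio.isfuture(); DuckFuture below is a hypothetical class used only to show the _asyncio_future_blocking protocol:

```python
import asyncio


class DuckFuture:
    _asyncio_future_blocking = False  # advertises Future compatibility


async def main():
    fut = asyncio.get_running_loop().create_future()
    task = asyncio.create_task(asyncio.sleep(0))
    print(asyncio.isfuture(fut))           # True
    print(asyncio.isfuture(task))          # True (Task is a Future subclass)
    print(asyncio.isfuture(DuckFuture()))  # True via duck typing
    print(asyncio.isfuture(object()))      # False
    await task

# asyncio.run(main())
```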
The default implementation decides how to + dispatch a frame, depending on the type of event (passed in as a + string) that is about to be executed. + + The event can be one of the following: + line: A new line of code is going to be executed. + call: A function is about to be called or another code block + is entered. + return: A function or other code block is about to return. + exception: An exception has occurred. + c_call: A C function is about to be called. + c_return: A C function has returned. + c_exception: A C function has raised an exception. + + For the Python events, specialized functions (see the dispatch_*() + methods) are called. For the C events, no action is taken. + + The arg parameter depends on the previous event. + quittingdispatch_linedispatch_callreturndispatch_returndispatch_exceptionc_callc_exceptionc_returnbdb.Bdb.dispatch: unknown debugging event:Invoke user function and return trace function for line event. + + If the debugger stops on the current line, invoke + self.user_line(). Raise BdbQuit if self.quitting is set. + Return self.trace_dispatch to continue tracing in this scope. + stop_herebreak_hereuser_lineInvoke user function and return trace function for call event. + + If the debugger stops on this function call, invoke + self.user_call(). Raise BbdQuit if self.quitting is set. + Return self.trace_dispatch to continue tracing in this scope. + break_anywherestopframeco_flagsuser_callInvoke user function and return trace function for return event. + + If the debugger stops on this function return, invoke + self.user_return(). Raise BdbQuit if self.quitting is set. + Return self.trace_dispatch to continue tracing in this scope. + returnframeuser_returnstoplinenoInvoke user function and return trace function for exception event. + + If the debugger stops on this exception, invoke + self.user_exception(). Raise BdbQuit if self.quitting is set. + Return self.trace_dispatch to continue tracing in this scope. + user_exceptionis_skipped_moduleReturn True if module_name matches any skip pattern.Return True if frame is below the starting frame in the stack.Return True if there is an effective breakpoint for this line. + + Check for line or function breakpoint and if in effect. + Delete temporary breakpoints if effective() says to. + co_firstlinenoeffectivebpcurrentbptemporarydo_clearRemove temporary breakpoint. + + Must implement in derived classes or get NotImplementedError. + subclass of bdb must implement do_clear()Return True if there is any breakpoint for frame's filename. + argument_listCalled if we might stop in a function.Called when we stop or break at a line.return_valueCalled when a return trap is set here.Called when we stop on an exception.Set the attributes for stopping. + + If stoplineno is greater than or equal to 0, then stop at line + greater than or equal to the stopline. If stoplineno is -1, then + don't stop at all. + set_untilStop when the line with the lineno greater than the current one is + reached or when returning from current frame.set_stepStop after one line of code.caller_framef_traceset_nextStop on the next line in or below the given frame.set_returnStop when returning from the given frame.set_traceStart debugging from frame. + + If frame is not specified, debugging starts from caller's frame. + set_continueStop only at breakpoints or when finished. + + If there are no breakpoints, set the system trace function to None. + set_quitSet quitting attribute to True. + + Raises BdbQuit exception in the next call to a dispatch_*() method. 
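A minimal tracer sketched from the dispatch_*/user_* hooks documented above, in the spirit of the module's own Tdb test class: subclass Bdb, override the user_* callbacks, and drive it with run(). The demo() function is an illustrative stand-in:

```python
import bdb


class Tracer(bdb.Bdb):
    def user_call(self, frame, argument_list):
        print("call", frame.f_code.co_name)

    def user_line(self, frame):
        print("line", frame.f_code.co_name, frame.f_lineno)

    def user_return(self, frame, return_value):
        print("return", frame.f_code.co_name, "->", return_value)


def demo(n):
    total = 0
    for i in range(n):
        total += i
    return total


Tracer().run("demo(3)", globals())
```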
+ set_breakcondfuncnameSet a new breakpoint for filename:lineno. + + If lineno doesn't exist for the filename, return an error message. + The filename should be in canonical form. + Line %s:%d does not exist_prune_breaksPrune breakpoints for filename:lineno. + + A list of breakpoints is maintained in the Bdb instance and in + the Breakpoint class. If a breakpoint in the Bdb instance no + longer exists in the Breakpoint class, then it's removed from the + Bdb instance. + bplistclear_breakDelete breakpoints for filename:lineno. + + If no breakpoints were set, return an error message. + There are no breakpoints in %sThere is no breakpoint at %s:%ddeleteMeclear_bpbynumberDelete a breakpoint by its index in Breakpoint.bpbynumber. + + If arg is invalid, return an error message. + get_bpbynumberclear_all_file_breaksDelete all breakpoints in filename. + + If none were set, return an error message. + blistclear_all_breaksDelete all existing breakpoints. + + If none were set, return an error message. + There are no breakpointsbpbynumberReturn a breakpoint by its index in Breakpoint.bybpnumber. + + For invalid arg values or if the breakpoint doesn't exist, + raise a ValueError. + Breakpoint number expectedNon-numeric breakpoint number %sBreakpoint number %d out of rangeBreakpoint %d already deletedget_breakReturn True if there is a breakpoint for filename:lineno.get_breaksReturn all breakpoints for filename:lineno. + + If no breakpoints are set, return an empty list. + get_file_breaksReturn all lines with breakpoints for filename. + + If no breakpoints are set, return an empty list. + get_all_breaksReturn all breakpoints that are set.Return a list of (frame, lineno) in a stack trace and a size. + + List starts with original calling frame, if there is one. + Size may be number of frames above or below f. + tb_linenoformat_stack_entryframe_linenolprefixReturn a string with information about a stack entry. + + The stack entry frame_lineno is a (frame, lineno) tuple. The + return string contains the canonical filename, the function name + or '', the input arguments, the return value, and the + line of code (if it exists). + + __return__f_locals->Debug a statement executed via the exec() function. + + globals defaults to __main__.dict; locals defaults to globals. + runevalDebug an expression executed via the eval() function. + + globals defaults to __main__.dict; locals defaults to globals. + runctxFor backwards-compatibility. Defers to run().runcallDebug a single function call. + + Return the result of the function call. + descriptor 'runcall' of 'Bdb' object needs an argument"descriptor 'runcall' of 'Bdb' object "Passing 'func' as keyword argument is deprecatedruncall expected at least 1 positional argument, got %d'runcall expected at least 1 positional argument, '($self, func, /, *args, **kwds)Start debugging with a Bdb instance from the caller's frame.Breakpoint class. + + Implements temporary breakpoints, ignore counts, disabling and + (re)-enabling, and conditionals. + + Breakpoints are indexed by number through bpbynumber and by + the (file, line) tuple using bplist. The former points to a + single instance of class Breakpoint. The latter points to a + list of such instances since there may be more than one + breakpoint per line. + + When creating a breakpoint, its associated filename should be + in canonical form. If funcname is defined, a breakpoint hit will be + counted when the first line of that function is executed. A + conditional breakpoint always counts a hit. 
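A sketch of the breakpoint bookkeeping described above (set_break, get_all_breaks, clear_all_breaks), assuming the snippet is saved as a real file so __file__ resolves and the requested line number exists; set_break() returns an error string on failure and None on success:

```python
import bdb

db = bdb.Bdb()
err = db.set_break(__file__, 3)        # breakpoint on line 3 of this script
print(err or "breakpoint set")
print(db.get_all_breaks())             # {canonical_filename: [3]}
print(db.clear_all_breaks() or "all breakpoints cleared")
```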
+ func_first_executable_linehitsDelete the breakpoint from the list associated to a file:line. + + If it is the last breakpoint in that position, it also deletes + the entry for the file:line. + Mark the breakpoint as enabled.Mark the breakpoint as disabled.bpprintPrint the output of bpformat(). + + The optional out argument directs where the output is sent + and defaults to standard output. + bpformatReturn a string with information about the breakpoint. + + The information includes the breakpoint number, temporary + status, file:line position, break condition, number of times to + ignore, and number of times hit. + + del dispkeep yes no %-4dbreakpoint %s at %s:%d + stop only if %s + ignore next %d hitsss + breakpoint already hit %d time%sReturn a condensed description of the breakpoint.breakpoint %s at %s:%scheckfuncnameReturn True if break should happen here. + + Whether a break should happen depends on the way that b (the breakpoint) + was set. If it was set via line number, check if b.line is the same as + the one in the frame. If it was set via function name, check if this is + the right function and if it is on the first executable line. + Determine which breakpoint for this file:line is to be acted upon. + + Called only if we know there is a breakpoint at this location. Return + the breakpoint that was triggered and a boolean that indicates if it is + ok to delete a temporary breakpoint. Return (None, None) if there is no + matching breakpoint. + possiblesTdb???+++ call+++retval+++ returnexc_stuff+++ exceptionfoofoo(barbar returnedbar(import bdb; bdb.foo(10)# None# XXX 'arg' is no longer used# First call of dispatch since reset()# (CT) Note that this may also be None!# No need to trace this function# Ignore call events in generator except when stepping.# Ignore return events in generator except when stepping.# The user issued a 'next' or 'until' command.# When stepping with next/until/return in a generator frame, skip# the internal StopIteration exception (with no traceback)# triggered by a subiterator run with the 'yield from' statement.# Stop at the StopIteration or GeneratorExit exception when the user# has set stopframe in a generator by issuing a return command, or a# next/until command at the last statement in the generator before the# exception.# Normally derived classes don't override the following# methods, but they may if they want to redefine the# definition of stopping and breakpoints.# some modules do not have names# (CT) stopframe may now also be None, see dispatch_call.# (CT) the former test for None is therefore removed from here.# The line itself has no breakpoint, but maybe the line is the# first line of a function with breakpoint set by function name.# flag says ok to delete temp. bp# Derived classes should override the user_* methods# to gain control.# stoplineno >= 0 means: stop at line >= the stoplineno# stoplineno -1 means: don't stop at all# Derived classes and clients can call the following methods# to affect the stepping state.# the name "until" is borrowed from gdb# Issue #13183: pdb skips frames after hitting a breakpoint and running# step commands.# Restore the trace function in the caller (that may not have been set# for performance reasons) when returning from the current frame.# Don't stop except at breakpoints or when finished# no breakpoints; run without debugger overhead# to manipulate breakpoints. 
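The Breakpoint records created by set_break() can be inspected through the global Breakpoint.bpbynumber registry and described with bpformat(), as outlined above; the line number and condition are illustrative, and the snippet assumes it lives in a real file:

```python
import bdb

db = bdb.Bdb()
db.set_break(__file__, 5, cond="x > 10")
bp = bdb.Breakpoint.bpbynumber[-1]     # most recently created breakpoint
print(bp.bpformat())                   # number, keep/del, file:line, condition
bp.deleteMe()                          # drop it from the global registry
```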
These methods return an# error message if something went wrong, None if all is well.# Set_break prints out the breakpoint line and file:lineno.# Call self.get_*break*() to see the breakpoints or better# for bp in Breakpoint.bpbynumber: if bp: bp.bpprint().# Import as late as possible# If there's only one bp in the list for that file,line# pair, then remove the breaks entry# Derived classes and clients can call the following method# to get a data structure representing a stack trace.# The following methods can be called by clients to use# a debugger to debug a statement or an expression.# Both can be given as a string, or a code object.# B/W compatibility# This method is more useful to debug a single function call.# XXX Keeping state in the class is a mistake -- this means# you cannot have more than one active Bdb instance.# Next bp to be assigned# indexed by (file, lineno) tuple# Each entry is None or an instance of Bpt# index 0 is unused, except for marking an# effective break .... see effective()# Needed if funcname is not None.# This better be in canonical form!# Build the two lists# No longer in list# No more bp for this f:l combo# -----------end of Breakpoint class----------# Breakpoint was set via line number.# Breakpoint was set at a line with a def statement and the function# defined is called: don't break.# Breakpoint set via function name.# It's not a function call, but rather execution of def statement.# We are in the right frame.# The function is entered for the 1st time.# But we are not at the first line number: don't break.# Determines if there is an effective (active) breakpoint at this# line of code. Returns breakpoint number or 0 if none# Count every hit when bp is enabled# If unconditional, and ignoring go on to next, else break# breakpoint and marker that it's ok to delete if temporary# Conditional bp.# Ignore count applies only to those bpt hits where the# condition evaluates to true.# continue# else:# continue# if eval fails, most conservative thing is to stop on# breakpoint regardless of ignore count. Don't delete# temporary, as another hint to user.# -------------------- testing --------------------b'Debugger basics'u'Debugger basics'b'BdbQuit'u'BdbQuit'b'Bdb'u'Bdb'b'Breakpoint'u'Breakpoint'b'Exception to give up completely.'u'Exception to give up completely.'b'Generic Python debugger base class. + + This class takes care of details of the trace facility; + a derived class should implement user interaction. + The standard debugger class (pdb.Pdb) is an example. + + The optional skip argument must be an iterable of glob-style + module name patterns. The debugger will not step into frames + that originate in a module that matches one of these patterns. + Whether a frame is considered to originate in a certain module + is determined by the __name__ in the frame globals. + 'u'Generic Python debugger base class. + + This class takes care of details of the trace facility; + a derived class should implement user interaction. + The standard debugger class (pdb.Pdb) is an example. + + The optional skip argument must be an iterable of glob-style + module name patterns. The debugger will not step into frames + that originate in a module that matches one of these patterns. + Whether a frame is considered to originate in a certain module + is determined by the __name__ in the frame globals. + 'b'Return canonical form of filename. + + For real filenames, the canonical form is a case-normalized (on + case insensitive filesystems) absolute path. 
'Filenames' with + angle brackets, such as "", generated in interactive + mode, are returned unchanged. + 'u'Return canonical form of filename. + + For real filenames, the canonical form is a case-normalized (on + case insensitive filesystems) absolute path. 'Filenames' with + angle brackets, such as "", generated in interactive + mode, are returned unchanged. + 'b'Set values of attributes as ready to start debugging.'u'Set values of attributes as ready to start debugging.'b'Dispatch a trace function for debugged frames based on the event. + + This function is installed as the trace function for debugged + frames. Its return value is the new trace function, which is + usually itself. The default implementation decides how to + dispatch a frame, depending on the type of event (passed in as a + string) that is about to be executed. + + The event can be one of the following: + line: A new line of code is going to be executed. + call: A function is about to be called or another code block + is entered. + return: A function or other code block is about to return. + exception: An exception has occurred. + c_call: A C function is about to be called. + c_return: A C function has returned. + c_exception: A C function has raised an exception. + + For the Python events, specialized functions (see the dispatch_*() + methods) are called. For the C events, no action is taken. + + The arg parameter depends on the previous event. + 'u'Dispatch a trace function for debugged frames based on the event. + + This function is installed as the trace function for debugged + frames. Its return value is the new trace function, which is + usually itself. The default implementation decides how to + dispatch a frame, depending on the type of event (passed in as a + string) that is about to be executed. + + The event can be one of the following: + line: A new line of code is going to be executed. + call: A function is about to be called or another code block + is entered. + return: A function or other code block is about to return. + exception: An exception has occurred. + c_call: A C function is about to be called. + c_return: A C function has returned. + c_exception: A C function has raised an exception. + + For the Python events, specialized functions (see the dispatch_*() + methods) are called. For the C events, no action is taken. + + The arg parameter depends on the previous event. + 'b'call'u'call'b'return'u'return'b'c_call'u'c_call'b'c_exception'u'c_exception'b'c_return'u'c_return'b'bdb.Bdb.dispatch: unknown debugging event:'u'bdb.Bdb.dispatch: unknown debugging event:'b'Invoke user function and return trace function for line event. + + If the debugger stops on the current line, invoke + self.user_line(). Raise BdbQuit if self.quitting is set. + Return self.trace_dispatch to continue tracing in this scope. + 'u'Invoke user function and return trace function for line event. + + If the debugger stops on the current line, invoke + self.user_line(). Raise BdbQuit if self.quitting is set. + Return self.trace_dispatch to continue tracing in this scope. + 'b'Invoke user function and return trace function for call event. + + If the debugger stops on this function call, invoke + self.user_call(). Raise BbdQuit if self.quitting is set. + Return self.trace_dispatch to continue tracing in this scope. + 'u'Invoke user function and return trace function for call event. + + If the debugger stops on this function call, invoke + self.user_call(). Raise BbdQuit if self.quitting is set. 
+ Return self.trace_dispatch to continue tracing in this scope. + 'b'Invoke user function and return trace function for return event. + + If the debugger stops on this function return, invoke + self.user_return(). Raise BdbQuit if self.quitting is set. + Return self.trace_dispatch to continue tracing in this scope. + 'u'Invoke user function and return trace function for return event. + + If the debugger stops on this function return, invoke + self.user_return(). Raise BdbQuit if self.quitting is set. + Return self.trace_dispatch to continue tracing in this scope. + 'b'Invoke user function and return trace function for exception event. + + If the debugger stops on this exception, invoke + self.user_exception(). Raise BdbQuit if self.quitting is set. + Return self.trace_dispatch to continue tracing in this scope. + 'u'Invoke user function and return trace function for exception event. + + If the debugger stops on this exception, invoke + self.user_exception(). Raise BdbQuit if self.quitting is set. + Return self.trace_dispatch to continue tracing in this scope. + 'b'Return True if module_name matches any skip pattern.'u'Return True if module_name matches any skip pattern.'b'Return True if frame is below the starting frame in the stack.'u'Return True if frame is below the starting frame in the stack.'b'Return True if there is an effective breakpoint for this line. + + Check for line or function breakpoint and if in effect. + Delete temporary breakpoints if effective() says to. + 'u'Return True if there is an effective breakpoint for this line. + + Check for line or function breakpoint and if in effect. + Delete temporary breakpoints if effective() says to. + 'b'Remove temporary breakpoint. + + Must implement in derived classes or get NotImplementedError. + 'u'Remove temporary breakpoint. + + Must implement in derived classes or get NotImplementedError. + 'b'subclass of bdb must implement do_clear()'u'subclass of bdb must implement do_clear()'b'Return True if there is any breakpoint for frame's filename. + 'u'Return True if there is any breakpoint for frame's filename. + 'b'Called if we might stop in a function.'u'Called if we might stop in a function.'b'Called when we stop or break at a line.'u'Called when we stop or break at a line.'b'Called when a return trap is set here.'u'Called when a return trap is set here.'b'Called when we stop on an exception.'u'Called when we stop on an exception.'b'Set the attributes for stopping. + + If stoplineno is greater than or equal to 0, then stop at line + greater than or equal to the stopline. If stoplineno is -1, then + don't stop at all. + 'u'Set the attributes for stopping. + + If stoplineno is greater than or equal to 0, then stop at line + greater than or equal to the stopline. If stoplineno is -1, then + don't stop at all. + 'b'Stop when the line with the lineno greater than the current one is + reached or when returning from current frame.'u'Stop when the line with the lineno greater than the current one is + reached or when returning from current frame.'b'Stop after one line of code.'u'Stop after one line of code.'b'Stop on the next line in or below the given frame.'u'Stop on the next line in or below the given frame.'b'Stop when returning from the given frame.'u'Stop when returning from the given frame.'b'Start debugging from frame. + + If frame is not specified, debugging starts from caller's frame. + 'u'Start debugging from frame. + + If frame is not specified, debugging starts from caller's frame. 
+ 'b'Stop only at breakpoints or when finished. + + If there are no breakpoints, set the system trace function to None. + 'u'Stop only at breakpoints or when finished. + + If there are no breakpoints, set the system trace function to None. + 'b'Set quitting attribute to True. + + Raises BdbQuit exception in the next call to a dispatch_*() method. + 'u'Set quitting attribute to True. + + Raises BdbQuit exception in the next call to a dispatch_*() method. + 'b'Set a new breakpoint for filename:lineno. + + If lineno doesn't exist for the filename, return an error message. + The filename should be in canonical form. + 'u'Set a new breakpoint for filename:lineno. + + If lineno doesn't exist for the filename, return an error message. + The filename should be in canonical form. + 'b'Line %s:%d does not exist'u'Line %s:%d does not exist'b'Prune breakpoints for filename:lineno. + + A list of breakpoints is maintained in the Bdb instance and in + the Breakpoint class. If a breakpoint in the Bdb instance no + longer exists in the Breakpoint class, then it's removed from the + Bdb instance. + 'u'Prune breakpoints for filename:lineno. + + A list of breakpoints is maintained in the Bdb instance and in + the Breakpoint class. If a breakpoint in the Bdb instance no + longer exists in the Breakpoint class, then it's removed from the + Bdb instance. + 'b'Delete breakpoints for filename:lineno. + + If no breakpoints were set, return an error message. + 'u'Delete breakpoints for filename:lineno. + + If no breakpoints were set, return an error message. + 'b'There are no breakpoints in %s'u'There are no breakpoints in %s'b'There is no breakpoint at %s:%d'u'There is no breakpoint at %s:%d'b'Delete a breakpoint by its index in Breakpoint.bpbynumber. + + If arg is invalid, return an error message. + 'u'Delete a breakpoint by its index in Breakpoint.bpbynumber. + + If arg is invalid, return an error message. + 'b'Delete all breakpoints in filename. + + If none were set, return an error message. + 'u'Delete all breakpoints in filename. + + If none were set, return an error message. + 'b'Delete all existing breakpoints. + + If none were set, return an error message. + 'u'Delete all existing breakpoints. + + If none were set, return an error message. + 'b'There are no breakpoints'u'There are no breakpoints'b'Return a breakpoint by its index in Breakpoint.bybpnumber. + + For invalid arg values or if the breakpoint doesn't exist, + raise a ValueError. + 'u'Return a breakpoint by its index in Breakpoint.bybpnumber. + + For invalid arg values or if the breakpoint doesn't exist, + raise a ValueError. + 'b'Breakpoint number expected'u'Breakpoint number expected'b'Non-numeric breakpoint number %s'u'Non-numeric breakpoint number %s'b'Breakpoint number %d out of range'u'Breakpoint number %d out of range'b'Breakpoint %d already deleted'u'Breakpoint %d already deleted'b'Return True if there is a breakpoint for filename:lineno.'u'Return True if there is a breakpoint for filename:lineno.'b'Return all breakpoints for filename:lineno. + + If no breakpoints are set, return an empty list. + 'u'Return all breakpoints for filename:lineno. + + If no breakpoints are set, return an empty list. + 'b'Return all lines with breakpoints for filename. + + If no breakpoints are set, return an empty list. + 'u'Return all lines with breakpoints for filename. + + If no breakpoints are set, return an empty list. 
+ 'b'Return all breakpoints that are set.'u'Return all breakpoints that are set.'b'Return a list of (frame, lineno) in a stack trace and a size. + + List starts with original calling frame, if there is one. + Size may be number of frames above or below f. + 'u'Return a list of (frame, lineno) in a stack trace and a size. + + List starts with original calling frame, if there is one. + Size may be number of frames above or below f. + 'b'Return a string with information about a stack entry. + + The stack entry frame_lineno is a (frame, lineno) tuple. The + return string contains the canonical filename, the function name + or '', the input arguments, the return value, and the + line of code (if it exists). + + 'u'Return a string with information about a stack entry. + + The stack entry frame_lineno is a (frame, lineno) tuple. The + return string contains the canonical filename, the function name + or '', the input arguments, the return value, and the + line of code (if it exists). + + 'b''u''b'__return__'u'__return__'b'->'u'->'b'Debug a statement executed via the exec() function. + + globals defaults to __main__.dict; locals defaults to globals. + 'u'Debug a statement executed via the exec() function. + + globals defaults to __main__.dict; locals defaults to globals. + 'b'Debug an expression executed via the eval() function. + + globals defaults to __main__.dict; locals defaults to globals. + 'u'Debug an expression executed via the eval() function. + + globals defaults to __main__.dict; locals defaults to globals. + 'b'For backwards-compatibility. Defers to run().'u'For backwards-compatibility. Defers to run().'b'Debug a single function call. + + Return the result of the function call. + 'u'Debug a single function call. + + Return the result of the function call. + 'b'descriptor 'runcall' of 'Bdb' object needs an argument'u'descriptor 'runcall' of 'Bdb' object needs an argument'b'func'b'Passing 'func' as keyword argument is deprecated'u'Passing 'func' as keyword argument is deprecated'b'runcall expected at least 1 positional argument, got %d'u'runcall expected at least 1 positional argument, got %d'b'($self, func, /, *args, **kwds)'u'($self, func, /, *args, **kwds)'b'Start debugging with a Bdb instance from the caller's frame.'u'Start debugging with a Bdb instance from the caller's frame.'b'Breakpoint class. + + Implements temporary breakpoints, ignore counts, disabling and + (re)-enabling, and conditionals. + + Breakpoints are indexed by number through bpbynumber and by + the (file, line) tuple using bplist. The former points to a + single instance of class Breakpoint. The latter points to a + list of such instances since there may be more than one + breakpoint per line. + + When creating a breakpoint, its associated filename should be + in canonical form. If funcname is defined, a breakpoint hit will be + counted when the first line of that function is executed. A + conditional breakpoint always counts a hit. + 'u'Breakpoint class. + + Implements temporary breakpoints, ignore counts, disabling and + (re)-enabling, and conditionals. + + Breakpoints are indexed by number through bpbynumber and by + the (file, line) tuple using bplist. The former points to a + single instance of class Breakpoint. The latter points to a + list of such instances since there may be more than one + breakpoint per line. + + When creating a breakpoint, its associated filename should be + in canonical form. If funcname is defined, a breakpoint hit will be + counted when the first line of that function is executed. 
A + conditional breakpoint always counts a hit. + 'b'Delete the breakpoint from the list associated to a file:line. + + If it is the last breakpoint in that position, it also deletes + the entry for the file:line. + 'u'Delete the breakpoint from the list associated to a file:line. + + If it is the last breakpoint in that position, it also deletes + the entry for the file:line. + 'b'Mark the breakpoint as enabled.'u'Mark the breakpoint as enabled.'b'Mark the breakpoint as disabled.'u'Mark the breakpoint as disabled.'b'Print the output of bpformat(). + + The optional out argument directs where the output is sent + and defaults to standard output. + 'u'Print the output of bpformat(). + + The optional out argument directs where the output is sent + and defaults to standard output. + 'b'Return a string with information about the breakpoint. + + The information includes the breakpoint number, temporary + status, file:line position, break condition, number of times to + ignore, and number of times hit. + + 'u'Return a string with information about the breakpoint. + + The information includes the breakpoint number, temporary + status, file:line position, break condition, number of times to + ignore, and number of times hit. + + 'b'del 'u'del 'b'keep 'u'keep 'b'yes 'u'yes 'b'no 'u'no 'b'%-4dbreakpoint %s at %s:%d'u'%-4dbreakpoint %s at %s:%d'b' + stop only if %s'u' + stop only if %s'b' + ignore next %d hits'u' + ignore next %d hits'b' + breakpoint already hit %d time%s'u' + breakpoint already hit %d time%s'b'Return a condensed description of the breakpoint.'u'Return a condensed description of the breakpoint.'b'breakpoint %s at %s:%s'u'breakpoint %s at %s:%s'b'Return True if break should happen here. + + Whether a break should happen depends on the way that b (the breakpoint) + was set. If it was set via line number, check if b.line is the same as + the one in the frame. If it was set via function name, check if this is + the right function and if it is on the first executable line. + 'u'Return True if break should happen here. + + Whether a break should happen depends on the way that b (the breakpoint) + was set. If it was set via line number, check if b.line is the same as + the one in the frame. If it was set via function name, check if this is + the right function and if it is on the first executable line. + 'b'Determine which breakpoint for this file:line is to be acted upon. + + Called only if we know there is a breakpoint at this location. Return + the breakpoint that was triggered and a boolean that indicates if it is + ok to delete a temporary breakpoint. Return (None, None) if there is no + matching breakpoint. + 'u'Determine which breakpoint for this file:line is to be acted upon. + + Called only if we know there is a breakpoint at this location. Return + the breakpoint that was triggered and a boolean that indicates if it is + ok to delete a temporary breakpoint. Return (None, None) if there is no + matching breakpoint. 
+ 'b'???'u'???'b'+++ call'u'+++ call'b'+++'u'+++'b'+++ return'u'+++ return'b'+++ exception'u'+++ exception'b'foo('u'foo('b'bar returned'u'bar returned'b'bar('u'bar('b'import bdb; bdb.foo(10)'u'import bdb; bdb.foo(10)'u'bdb'u'binascii'binascii.Erroru'Incomplete.__weakref__'binascii.IncompleteIncompleteu'Conversion between binary data and ASCII'u'/Users/pwntester/.pyenv/versions/3.8.13/lib/python3.8/lib-dynload/binascii.cpython-38-darwin.so'a2b_hexa2b_hqxa2b_qpa2b_uub2a_hexb2a_hqxb2a_qpb2a_uucrc32crc_hqxrlecode_hqxrledecode_hqxBisection algorithms.lohiInsert item x in list a, and keep it sorted assuming a is sorted. + + If x is already in a, insert it to the right of the rightmost x. + + Optional args lo (default 0) and hi (default len(a)) bound the + slice of a to be searched. + Return the index where to insert item x in list a, assuming a is sorted. + + The return value i is such that all e in a[:i] have e <= x, and all e in + a[i:] have e > x. So if x already appears in the list, a.insert(x) will + insert just after the rightmost x already there. + + Optional args lo (default 0) and hi (default len(a)) bound the + slice of a to be searched. + lo must be non-negativemidInsert item x in list a, and keep it sorted assuming a is sorted. + + If x is already in a, insert it to the left of the leftmost x. + + Optional args lo (default 0) and hi (default len(a)) bound the + slice of a to be searched. + Return the index where to insert item x in list a, assuming a is sorted. + + The return value i is such that all e in a[:i] have e < x, and all e in + a[i:] have e >= x. So if x already appears in the list, a.insert(x) will + insert just before the leftmost x already there. + + Optional args lo (default 0) and hi (default len(a)) bound the + slice of a to be searched. + bisectinsort# Overwrite above definitions with a fast C implementation# Create aliasesb'Bisection algorithms.'u'Bisection algorithms.'b'Insert item x in list a, and keep it sorted assuming a is sorted. + + If x is already in a, insert it to the right of the rightmost x. + + Optional args lo (default 0) and hi (default len(a)) bound the + slice of a to be searched. + 'u'Insert item x in list a, and keep it sorted assuming a is sorted. + + If x is already in a, insert it to the right of the rightmost x. + + Optional args lo (default 0) and hi (default len(a)) bound the + slice of a to be searched. + 'b'Return the index where to insert item x in list a, assuming a is sorted. + + The return value i is such that all e in a[:i] have e <= x, and all e in + a[i:] have e > x. So if x already appears in the list, a.insert(x) will + insert just after the rightmost x already there. + + Optional args lo (default 0) and hi (default len(a)) bound the + slice of a to be searched. + 'u'Return the index where to insert item x in list a, assuming a is sorted. + + The return value i is such that all e in a[:i] have e <= x, and all e in + a[i:] have e > x. So if x already appears in the list, a.insert(x) will + insert just after the rightmost x already there. + + Optional args lo (default 0) and hi (default len(a)) bound the + slice of a to be searched. + 'b'lo must be non-negative'u'lo must be non-negative'b'Insert item x in list a, and keep it sorted assuming a is sorted. + + If x is already in a, insert it to the left of the leftmost x. + + Optional args lo (default 0) and hi (default len(a)) bound the + slice of a to be searched. + 'u'Insert item x in list a, and keep it sorted assuming a is sorted. 
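A quick round-trip through the binascii conversions named above: hex encode and decode, plus a CRC-32 checksum over the same bytes:

```python
import binascii

data = b"codeql"
hexed = binascii.b2a_hex(data)          # b'636f6465716c'
assert binascii.a2b_hex(hexed) == data  # round-trips back to the original
print(hexed, binascii.crc32(data))
```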
+ + If x is already in a, insert it to the left of the leftmost x. + + Optional args lo (default 0) and hi (default len(a)) bound the + slice of a to be searched. + 'b'Return the index where to insert item x in list a, assuming a is sorted. + + The return value i is such that all e in a[:i] have e < x, and all e in + a[i:] have e >= x. So if x already appears in the list, a.insert(x) will + insert just before the leftmost x already there. + + Optional args lo (default 0) and hi (default len(a)) bound the + slice of a to be searched. + 'u'Return the index where to insert item x in list a, assuming a is sorted. + + The return value i is such that all e in a[:i] have e < x, and all e in + a[i:] have e >= x. So if x already appears in the list, a.insert(x) will + insert just before the leftmost x already there. + + Optional args lo (default 0) and hi (default len(a)) bound the + slice of a to be searched. + 'u'bisect'A bottom-up tree matching algorithm implementation meant to speed +up 2to3's matching process. After the tree patterns are reduced to +their rarest linear path, a linear Aho-Corasick automaton is +created. The linear automaton traverses the linear paths from the +leaves to the root of the AST and returns a set of nodes for further +matching. This reduces significantly the number of candidate nodes.George Boutsioukis pytreebtm_utilsreduce_treeBMNodeClass for a node of the Aho-Corasick automaton used in matchingtransition_tablefixerscontentBottomMatcherThe main matcher class. After instantiating the patterns should + be added using the add_fixer methodnodesRefactoringTooladd_fixerfixerReduces a fixer's pattern tree to a linear path and adds it + to the matcher(a common Aho-Corasick automaton). The fixer is + appended on the matching states and called when they are + reachedpattern_treeget_linear_subpatternlinearmatch_nodesmatch_nodeRecursively adds a linear pattern to the AC automatonalternativeend_nodesnext_nodeleavesThe main interface with the bottom matcher. The tree is + traversed from the bottom using the constructed + automaton. Nodes are only checked once as the tree is + retraversed. When the automaton fails, we give it one more + shot(in case the above tree matches as a whole with the + rejected leaf), then we break for the next leaf. There is the + special case of multiple arguments(see code comments) where we + recheck the nodes + + Args: + The leaves of the AST tree to be matched + + Returns: + A dictionary of node matches with fixers as the keys + current_ac_nodeleafcurrent_ast_nodewas_checkedLeafnode_tokenprint_acPrints a graphviz diagram of the BM automaton(for debugging)digraph g{print_nodesubnode_keysubnode%d -> %d [label=%s] //%stype_repr_type_reprstype_numpygrampython_symbols#print("adding pattern", pattern, "to", start)#print("empty pattern")#alternatives#print("alternatives")#add all alternatives, and add the rest of the pattern#to each end node#single token#not last#transition did not exist, create new#transition exists already, follow# multiple statements, recheck#name#token matches#matching failed, reset automaton#the rest of the tree upwards has been checked, next leaf#recheck the rejected node once from the root# taken from pytree.py for debugging; only used by print_ac# printing tokens is possible but not as useful# from .pgen2 import token // token.__dict__.items():b'A bottom-up tree matching algorithm implementation meant to speed +up 2to3's matching process. 
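A worked example of the bisection helpers documented above: bisect_right returns the insertion point after equal elements, bisect_left before them, and insort keeps the list sorted as items are added:

```python
import bisect

a = [1, 2, 4, 4, 5]
print(bisect.bisect_right(a, 4))   # 4: insert after the rightmost 4
print(bisect.bisect_left(a, 4))    # 2: insert before the leftmost 4
bisect.insort(a, 3)
print(a)                           # [1, 2, 3, 4, 4, 5]
```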
After the tree patterns are reduced to +their rarest linear path, a linear Aho-Corasick automaton is +created. The linear automaton traverses the linear paths from the +leaves to the root of the AST and returns a set of nodes for further +matching. This reduces significantly the number of candidate nodes.'u'A bottom-up tree matching algorithm implementation meant to speed +up 2to3's matching process. After the tree patterns are reduced to +their rarest linear path, a linear Aho-Corasick automaton is +created. The linear automaton traverses the linear paths from the +leaves to the root of the AST and returns a set of nodes for further +matching. This reduces significantly the number of candidate nodes.'b'George Boutsioukis 'u'George Boutsioukis 'b'Class for a node of the Aho-Corasick automaton used in matching'u'Class for a node of the Aho-Corasick automaton used in matching'b'The main matcher class. After instantiating the patterns should + be added using the add_fixer method'u'The main matcher class. After instantiating the patterns should + be added using the add_fixer method'b'RefactoringTool'u'RefactoringTool'b'Reduces a fixer's pattern tree to a linear path and adds it + to the matcher(a common Aho-Corasick automaton). The fixer is + appended on the matching states and called when they are + reached'u'Reduces a fixer's pattern tree to a linear path and adds it + to the matcher(a common Aho-Corasick automaton). The fixer is + appended on the matching states and called when they are + reached'b'Recursively adds a linear pattern to the AC automaton'u'Recursively adds a linear pattern to the AC automaton'b'The main interface with the bottom matcher. The tree is + traversed from the bottom using the constructed + automaton. Nodes are only checked once as the tree is + retraversed. When the automaton fails, we give it one more + shot(in case the above tree matches as a whole with the + rejected leaf), then we break for the next leaf. There is the + special case of multiple arguments(see code comments) where we + recheck the nodes + + Args: + The leaves of the AST tree to be matched + + Returns: + A dictionary of node matches with fixers as the keys + 'u'The main interface with the bottom matcher. The tree is + traversed from the bottom using the constructed + automaton. Nodes are only checked once as the tree is + retraversed. When the automaton fails, we give it one more + shot(in case the above tree matches as a whole with the + rejected leaf), then we break for the next leaf. There is the + special case of multiple arguments(see code comments) where we + recheck the nodes + + Args: + The leaves of the AST tree to be matched + + Returns: + A dictionary of node matches with fixers as the keys + 'b'Prints a graphviz diagram of the BM automaton(for debugging)'u'Prints a graphviz diagram of the BM automaton(for debugging)'b'digraph g{'u'digraph g{'b'%d -> %d [label=%s] //%s'u'%d -> %d [label=%s] //%s'u'lib2to3.btm_matcher'u'btm_matcher'Utility functions used by the btm_matcher modulepgen2grammarpattern_symbolssymspysymsopmaptokenstoken_labelsTYPE_ANYTYPE_ALTERNATIVESTYPE_GROUPMinNodeThis class serves as an intermediate representation of the + pattern tree during the conversion to sets of leaf-to-root + subpatternsalternativesleaf_to_rootInternal method. Returns a characteristic path of the + pattern tree. This method must be run for all leaves until the + linear subpatterns are merged into a singlesubpget_characteristic_subpatternNAMEDrives the leaf_to_root method. 
The reason that + leaf_to_root must be run multiple times is because we need to + reject 'group' matches; for example the alternative form + (a | b c) creates a group [b c] that needs to be matched. Since + matching multiple linear patterns overcomes the automaton's + capabilities, leaf_to_root merges each group into a single + choice based on 'characteristic'ity, + + i.e. (a|b c) -> (a|b) if b more characteristic than c + + Returns: The most 'characteristic'(as defined by + get_characteristic_subpattern) path for the compiled pattern + tree. + Generator that returns the leaves of the tree + Internal function. Reduces a compiled pattern tree to an + intermediate representation suitable for feeding the + automaton. This also trims off any optional pattern elements(like + [a], a*). + AlternativesreducedAlternativeUnitdetails_nodealternatives_nodehas_repeaterrepeater_nodehas_variable_nameDetailsRepeatername_leafSTRINGsubpatternsPicks the most characteristic from a list of linear patterns + Current order used is: + names > common_names > common_chars + subpatterns_with_namessubpatterns_with_common_namesforifnotcommon_namessubpatterns_with_common_chars[]().,:common_charssubpatternrec_testtest_funcTests test_func on all items of sequence and items of included + sub-iterables#last alternative#probably should check the number of leaves#in case of type=name, use the name instead#switch on the node type#skip#2 cases#just a single 'Alternative', skip this node#real alternatives#skip odd children('|' tokens)# delete the group if all of the children were reduced to None#skip parentheses#skip whole unit if its optional# variable name#skip variable name#skip variable name, '='# skip parenthesis#set node type#(python) non-name or wildcard#(python) name or character; remove the apostrophes from#the string value#handle repeaters#reduce to None#reduce to a single occurrence i.e. do nothing#TODO: handle {min, max} repeaters#add children#skip '<', '>' markers# first pick out the ones containing variable names# of the remaining subpatterns pick out the longest oneb'Utility functions used by the btm_matcher module'u'Utility functions used by the btm_matcher module'b'This class serves as an intermediate representation of the + pattern tree during the conversion to sets of leaf-to-root + subpatterns'u'This class serves as an intermediate representation of the + pattern tree during the conversion to sets of leaf-to-root + subpatterns'b'Internal method. Returns a characteristic path of the + pattern tree. This method must be run for all leaves until the + linear subpatterns are merged into a single'u'Internal method. Returns a characteristic path of the + pattern tree. This method must be run for all leaves until the + linear subpatterns are merged into a single'b'Drives the leaf_to_root method. The reason that + leaf_to_root must be run multiple times is because we need to + reject 'group' matches; for example the alternative form + (a | b c) creates a group [b c] that needs to be matched. Since + matching multiple linear patterns overcomes the automaton's + capabilities, leaf_to_root merges each group into a single + choice based on 'characteristic'ity, + + i.e. (a|b c) -> (a|b) if b more characteristic than c + + Returns: The most 'characteristic'(as defined by + get_characteristic_subpattern) path for the compiled pattern + tree. + 'u'Drives the leaf_to_root method. 
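The bottom-up matching described here (each fixer's rarest linear leaf-to-root path stored in an Aho-Corasick style automaton, then every AST leaf walked upward through its parents) can be illustrated with a generic sketch. This is not the lib2to3 BottomMatcher API; the State class and token names are made up for illustration:

```python
class State:
    def __init__(self):
        self.transitions = {}   # token/type -> State
        self.fixers = []        # fixers that fire when this state is reached


def add_linear_pattern(root, path, fixer):
    # path is ordered leaf -> root, mirroring the reduced linear subpattern
    node = root
    for token in path:
        node = node.transitions.setdefault(token, State())
    node.fixers.append(fixer)


def match_leaf(root, leaf_to_root_tokens):
    node, hits = root, []
    for token in leaf_to_root_tokens:   # walk upward from the leaf
        node = node.transitions.get(token)
        if node is None:
            break                       # automaton failed for this leaf
        hits.extend(node.fixers)
    return hits


root = State()
add_linear_pattern(root, ["name", "power", "simple_stmt"], "fix_example")
print(match_leaf(root, ["name", "power", "simple_stmt", "file_input"]))
```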
The reason that + leaf_to_root must be run multiple times is because we need to + reject 'group' matches; for example the alternative form + (a | b c) creates a group [b c] that needs to be matched. Since + matching multiple linear patterns overcomes the automaton's + capabilities, leaf_to_root merges each group into a single + choice based on 'characteristic'ity, + + i.e. (a|b c) -> (a|b) if b more characteristic than c + + Returns: The most 'characteristic'(as defined by + get_characteristic_subpattern) path for the compiled pattern + tree. + 'b'Generator that returns the leaves of the tree'u'Generator that returns the leaves of the tree'b' + Internal function. Reduces a compiled pattern tree to an + intermediate representation suitable for feeding the + automaton. This also trims off any optional pattern elements(like + [a], a*). + 'u' + Internal function. Reduces a compiled pattern tree to an + intermediate representation suitable for feeding the + automaton. This also trims off any optional pattern elements(like + [a], a*). + 'b'any'u'any'b'Picks the most characteristic from a list of linear patterns + Current order used is: + names > common_names > common_chars + 'u'Picks the most characteristic from a list of linear patterns + Current order used is: + names > common_names > common_chars + 'b'for'u'for'b'if'u'if'b'not'u'not'b'None'u'None'b'[]().,:'u'[]().,:'b'Tests test_func on all items of sequence and items of included + sub-iterables'u'Tests test_func on all items of sequence and items of included + sub-iterables'u'lib2to3.btm_utils'u'btm_utils'Interface to the libbzip2 compression library. + +This module provides a file interface, classes for incremental +(de)compression, and functions for one-shot (de)compression. +BZ2FileNadeem Vawda _builtin_open_compression_MODE_CLOSED_MODE_READ_MODE_WRITE_sentinelA file object providing transparent bzip2 (de)compression. + + A BZ2File can act as a wrapper for an existing file object, or refer + directly to a named file on disk. + + Note that BZ2File provides a *binary* file interface - data read is + returned as bytes, and data to be written should be given as bytes. + bufferingcompresslevelOpen a bzip2-compressed file. + + If filename is a str, bytes, or PathLike object, it gives the + name of the file to be opened. Otherwise, it should be a file + object, which will be used to read or write the compressed data. + + mode can be 'r' for reading (default), 'w' for (over)writing, + 'x' for creating exclusively, or 'a' for appending. These can + equivalently be given as 'rb', 'wb', 'xb', and 'ab'. + + buffering is ignored since Python 3.0. Its use is deprecated. + + If mode is 'w', 'x' or 'a', compresslevel can be a number between 1 + and 9 specifying the level of compression: 1 produces the least + compression, and 9 (default) produces the most compression. + + If mode is 'r', the input file may be the concatenation of + multiple compressed streams. + _closefpUse of 'buffering' argument is deprecated and ignored since Python 3.0."Use of 'buffering' argument is deprecated and ignored ""since Python 3.0."compresslevel must be between 1 and 9mode_code_compressorxbabInvalid mode: %rPathLikefilename must be a str, bytes, file or PathLike object_bufferFlush and close the file. + + May be called more than once without error. Once the file is + closed, any other operation on it will raise a ValueError. 
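A much-simplified, standalone sketch of the leaf-to-root strategy described in the btm_matcher/btm_utils material at the start of this section: patterns are reduced to linear label paths, the paths are stored in a trie-like automaton, and leaf-to-root label sequences are walked through it to collect candidate matches. The class and method names below are invented for illustration; this is not the lib2to3 BottomMatcher implementation, which uses a full Aho-Corasick-style automaton.

class _State:
    def __init__(self):
        self.transitions = {}   # label -> _State
        self.fixers = []        # "fixers" reported when this state is reached

class LinearPathMatcher:
    def __init__(self):
        self.root = _State()

    def add_pattern(self, labels, fixer):
        """Add one linear pattern (a sequence of labels, leaf first)."""
        state = self.root
        for label in labels:
            state = state.transitions.setdefault(label, _State())
        state.fixers.append(fixer)

    def run(self, leaf_paths):
        """leaf_paths: iterable of leaf-to-root label sequences."""
        matches = {}
        for path in leaf_paths:
            state = self.root
            for label in path:
                nxt = state.transitions.get(label)
                if nxt is None:
                    # Automaton failed: give it one more shot from the root,
                    # otherwise give up on this leaf.
                    nxt = self.root.transitions.get(label)
                    if nxt is None:
                        break
                state = nxt
                for fixer in state.fixers:
                    matches.setdefault(fixer, []).append(path)
        return matches

matcher = LinearPathMatcher()
matcher.add_pattern(["NAME", "power", "trailer"], fixer="fix_example")
print(matcher.run([["NAME", "power", "trailer", "expr_stmt"]]))

Only nodes whose leaf-to-root paths reach an accepting state become candidates for full pattern matching, which is what cuts down the candidate set.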
+ True if this file is closed.Return the file descriptor for the underlying file.Return whether the file supports seeking.Return whether the file was opened for reading.Return whether the file was opened for writing.Return buffered data without advancing the file position. + + Always returns at least one byte of data, unless at EOF. + The exact number of bytes returned is unspecified. + Read up to size uncompressed bytes from the file. + + If size is negative or omitted, read until EOF is reached. + Returns b'' if the file is already at EOF. + Read up to size uncompressed bytes, while trying to avoid + making multiple reads from the underlying stream. Reads up to a + buffer's worth of data if size is negative. + + Returns b'' if the file is at EOF. + Read bytes into b. + + Returns the number of bytes read (0 for EOF). + Read a line of uncompressed bytes from the file. + + The terminating newline (if present) is retained. If size is + non-negative, no more than size bytes will be read (in which + case the line may be incomplete). Returns b'' if already at EOF. + Integer argument expectedRead a list of lines of uncompressed bytes from the file. + + size can be specified to control the number of lines read: no + further lines will be read once the total size of the lines read + so far equals or exceeds size. + Write a byte string to the file. + + Returns the number of uncompressed bytes written, which is + always len(data). Note that due to buffering, the file on disk + may not reflect the data written until close() is called. + compressedWrite a sequence of byte strings to the file. + + Returns the number of uncompressed bytes written. + seq can be any iterable yielding byte strings. + + Line separators are not added between the written byte strings. + Change the file position. + + The new position is specified by offset, relative to the + position indicated by whence. Values for whence are: + + 0: start of stream (default); offset must not be negative + 1: current stream position + 2: end of stream; offset must not be positive + + Returns the new file position. + + Note that seeking is emulated, so depending on the parameters, + this operation may be extremely slow. + Open a bzip2-compressed file in binary or text mode. + + The filename argument can be an actual filename (a str, bytes, or + PathLike object), or an existing file object to read from or write + to. + + The mode argument can be "r", "rb", "w", "wb", "x", "xb", "a" or + "ab" for binary mode, or "rt", "wt", "xt" or "at" for text mode. + The default mode is "rb", and the default compresslevel is 9. + + For binary mode, this function is equivalent to the BZ2File + constructor: BZ2File(filename, mode, compresslevel). In this case, + the encoding, errors and newline arguments must not be provided. + + For text mode, a BZ2File object is created, and wrapped in an + io.TextIOWrapper instance with the specified encoding, error + handling behavior, and line ending(s). + + Argument 'encoding' not supported in binary modeArgument 'errors' not supported in binary modeArgument 'newline' not supported in binary modebz_modebinary_fileCompress a block of data. + + compresslevel, if given, must be a number between 1 and 9. + + For incremental compression, use a BZ2Compressor object instead. + compDecompress a block of data. + + For incremental decompression, use a BZ2Decompressor object instead. 
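A short round trip covering the one-shot bz2.compress()/bz2.decompress() functions and the incremental BZ2Compressor/BZ2Decompressor objects that the docstrings above point to:

import bz2

data = b"hello world " * 1000

# One-shot: compresslevel may be 1 (fastest) through 9 (best, the default).
blob = bz2.compress(data, compresslevel=9)
assert bz2.decompress(blob) == data

# Incremental: feed chunks to a BZ2Compressor, then flush(); mirror it
# with a BZ2Decompressor on the way back.
comp = bz2.BZ2Compressor(9)
out = comp.compress(data[:6000]) + comp.compress(data[6000:]) + comp.flush()

decomp = bz2.BZ2Decompressor()
assert decomp.decompress(out) == data
assert decomp.eof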
+ decompCompressed data ended before the end-of-stream marker was reached"Compressed data ended before the "# Value 2 no longer used# This lock must be recursive, so that BufferedIOBase's# writelines() does not deadlock.# Relies on the undocumented fact that BufferedReader.peek()# always returns at least one byte (except at EOF), independent# of the value of n# Leftover data is not a valid bzip2 stream; ignore it.# Error on the first iteration; bail out.b'Interface to the libbzip2 compression library. + +This module provides a file interface, classes for incremental +(de)compression, and functions for one-shot (de)compression. +'u'Interface to the libbzip2 compression library. + +This module provides a file interface, classes for incremental +(de)compression, and functions for one-shot (de)compression. +'b'BZ2File'u'BZ2File'b'BZ2Compressor'u'BZ2Compressor'b'BZ2Decompressor'u'BZ2Decompressor'b'open'u'open'b'compress'u'compress'b'decompress'u'decompress'b'Nadeem Vawda 'u'Nadeem Vawda 'b'A file object providing transparent bzip2 (de)compression. + + A BZ2File can act as a wrapper for an existing file object, or refer + directly to a named file on disk. + + Note that BZ2File provides a *binary* file interface - data read is + returned as bytes, and data to be written should be given as bytes. + 'u'A file object providing transparent bzip2 (de)compression. + + A BZ2File can act as a wrapper for an existing file object, or refer + directly to a named file on disk. + + Note that BZ2File provides a *binary* file interface - data read is + returned as bytes, and data to be written should be given as bytes. + 'b'Open a bzip2-compressed file. + + If filename is a str, bytes, or PathLike object, it gives the + name of the file to be opened. Otherwise, it should be a file + object, which will be used to read or write the compressed data. + + mode can be 'r' for reading (default), 'w' for (over)writing, + 'x' for creating exclusively, or 'a' for appending. These can + equivalently be given as 'rb', 'wb', 'xb', and 'ab'. + + buffering is ignored since Python 3.0. Its use is deprecated. + + If mode is 'w', 'x' or 'a', compresslevel can be a number between 1 + and 9 specifying the level of compression: 1 produces the least + compression, and 9 (default) produces the most compression. + + If mode is 'r', the input file may be the concatenation of + multiple compressed streams. + 'u'Open a bzip2-compressed file. + + If filename is a str, bytes, or PathLike object, it gives the + name of the file to be opened. Otherwise, it should be a file + object, which will be used to read or write the compressed data. + + mode can be 'r' for reading (default), 'w' for (over)writing, + 'x' for creating exclusively, or 'a' for appending. These can + equivalently be given as 'rb', 'wb', 'xb', and 'ab'. + + buffering is ignored since Python 3.0. Its use is deprecated. + + If mode is 'w', 'x' or 'a', compresslevel can be a number between 1 + and 9 specifying the level of compression: 1 produces the least + compression, and 9 (default) produces the most compression. + + If mode is 'r', the input file may be the concatenation of + multiple compressed streams. 
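The BZ2File constructor docstring above notes that in 'r' mode the input may be the concatenation of multiple compressed streams. A minimal sketch, using an in-memory file object since BZ2File accepts file objects as well as filenames:

import bz2, io

# Two independent bzip2 streams, concatenated back to back.
raw = bz2.compress(b"first stream\n") + bz2.compress(b"second stream\n")

# In 'r' mode BZ2File reads across the stream boundary transparently.
with bz2.BZ2File(io.BytesIO(raw), "rb") as f:
    assert f.read() == b"first stream\nsecond stream\n"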
+ 'b'Use of 'buffering' argument is deprecated and ignored since Python 3.0.'u'Use of 'buffering' argument is deprecated and ignored since Python 3.0.'b'compresslevel must be between 1 and 9'u'compresslevel must be between 1 and 9'b'xb'u'xb'b'ab'u'ab'b'Invalid mode: %r'u'Invalid mode: %r'b'filename must be a str, bytes, file or PathLike object'u'filename must be a str, bytes, file or PathLike object'b'Flush and close the file. + + May be called more than once without error. Once the file is + closed, any other operation on it will raise a ValueError. + 'u'Flush and close the file. + + May be called more than once without error. Once the file is + closed, any other operation on it will raise a ValueError. + 'b'True if this file is closed.'u'True if this file is closed.'b'Return the file descriptor for the underlying file.'u'Return the file descriptor for the underlying file.'b'Return whether the file supports seeking.'u'Return whether the file supports seeking.'b'Return whether the file was opened for reading.'u'Return whether the file was opened for reading.'b'Return whether the file was opened for writing.'u'Return whether the file was opened for writing.'b'Return buffered data without advancing the file position. + + Always returns at least one byte of data, unless at EOF. + The exact number of bytes returned is unspecified. + 'u'Return buffered data without advancing the file position. + + Always returns at least one byte of data, unless at EOF. + The exact number of bytes returned is unspecified. + 'b'Read up to size uncompressed bytes from the file. + + If size is negative or omitted, read until EOF is reached. + Returns b'' if the file is already at EOF. + 'u'Read up to size uncompressed bytes from the file. + + If size is negative or omitted, read until EOF is reached. + Returns b'' if the file is already at EOF. + 'b'Read up to size uncompressed bytes, while trying to avoid + making multiple reads from the underlying stream. Reads up to a + buffer's worth of data if size is negative. + + Returns b'' if the file is at EOF. + 'u'Read up to size uncompressed bytes, while trying to avoid + making multiple reads from the underlying stream. Reads up to a + buffer's worth of data if size is negative. + + Returns b'' if the file is at EOF. + 'b'Read bytes into b. + + Returns the number of bytes read (0 for EOF). + 'u'Read bytes into b. + + Returns the number of bytes read (0 for EOF). + 'b'Read a line of uncompressed bytes from the file. + + The terminating newline (if present) is retained. If size is + non-negative, no more than size bytes will be read (in which + case the line may be incomplete). Returns b'' if already at EOF. + 'u'Read a line of uncompressed bytes from the file. + + The terminating newline (if present) is retained. If size is + non-negative, no more than size bytes will be read (in which + case the line may be incomplete). Returns b'' if already at EOF. + 'b'__index__'u'__index__'b'Integer argument expected'u'Integer argument expected'b'Read a list of lines of uncompressed bytes from the file. + + size can be specified to control the number of lines read: no + further lines will be read once the total size of the lines read + so far equals or exceeds size. + 'u'Read a list of lines of uncompressed bytes from the file. + + size can be specified to control the number of lines read: no + further lines will be read once the total size of the lines read + so far equals or exceeds size. + 'b'Write a byte string to the file. 
+ + Returns the number of uncompressed bytes written, which is + always len(data). Note that due to buffering, the file on disk + may not reflect the data written until close() is called. + 'u'Write a byte string to the file. + + Returns the number of uncompressed bytes written, which is + always len(data). Note that due to buffering, the file on disk + may not reflect the data written until close() is called. + 'b'Write a sequence of byte strings to the file. + + Returns the number of uncompressed bytes written. + seq can be any iterable yielding byte strings. + + Line separators are not added between the written byte strings. + 'u'Write a sequence of byte strings to the file. + + Returns the number of uncompressed bytes written. + seq can be any iterable yielding byte strings. + + Line separators are not added between the written byte strings. + 'b'Change the file position. + + The new position is specified by offset, relative to the + position indicated by whence. Values for whence are: + + 0: start of stream (default); offset must not be negative + 1: current stream position + 2: end of stream; offset must not be positive + + Returns the new file position. + + Note that seeking is emulated, so depending on the parameters, + this operation may be extremely slow. + 'u'Change the file position. + + The new position is specified by offset, relative to the + position indicated by whence. Values for whence are: + + 0: start of stream (default); offset must not be negative + 1: current stream position + 2: end of stream; offset must not be positive + + Returns the new file position. + + Note that seeking is emulated, so depending on the parameters, + this operation may be extremely slow. + 'b'Open a bzip2-compressed file in binary or text mode. + + The filename argument can be an actual filename (a str, bytes, or + PathLike object), or an existing file object to read from or write + to. + + The mode argument can be "r", "rb", "w", "wb", "x", "xb", "a" or + "ab" for binary mode, or "rt", "wt", "xt" or "at" for text mode. + The default mode is "rb", and the default compresslevel is 9. + + For binary mode, this function is equivalent to the BZ2File + constructor: BZ2File(filename, mode, compresslevel). In this case, + the encoding, errors and newline arguments must not be provided. + + For text mode, a BZ2File object is created, and wrapped in an + io.TextIOWrapper instance with the specified encoding, error + handling behavior, and line ending(s). + + 'u'Open a bzip2-compressed file in binary or text mode. + + The filename argument can be an actual filename (a str, bytes, or + PathLike object), or an existing file object to read from or write + to. + + The mode argument can be "r", "rb", "w", "wb", "x", "xb", "a" or + "ab" for binary mode, or "rt", "wt", "xt" or "at" for text mode. + The default mode is "rb", and the default compresslevel is 9. + + For binary mode, this function is equivalent to the BZ2File + constructor: BZ2File(filename, mode, compresslevel). In this case, + the encoding, errors and newline arguments must not be provided. + + For text mode, a BZ2File object is created, and wrapped in an + io.TextIOWrapper instance with the specified encoding, error + handling behavior, and line ending(s). 
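The bz2.open() docstring above distinguishes binary modes (equivalent to the BZ2File constructor) from text modes, where the BZ2File is wrapped in an io.TextIOWrapper. A small sketch writing and reading a .bz2 file in text mode; the path is a throwaway temporary file:

import bz2, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "example.txt.bz2")

# "wt"/"rt" select text mode; encoding/errors/newline are only valid there.
with bz2.open(path, "wt", encoding="utf-8") as f:
    f.write("línea uno\nlínea dos\n")

with bz2.open(path, "rt", encoding="utf-8") as f:
    print(f.readlines())   # ['línea uno\n', 'línea dos\n']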
+ + 'b'Argument 'encoding' not supported in binary mode'u'Argument 'encoding' not supported in binary mode'b'Argument 'errors' not supported in binary mode'u'Argument 'errors' not supported in binary mode'b'Argument 'newline' not supported in binary mode'u'Argument 'newline' not supported in binary mode'b'Compress a block of data. + + compresslevel, if given, must be a number between 1 and 9. + + For incremental compression, use a BZ2Compressor object instead. + 'u'Compress a block of data. + + compresslevel, if given, must be a number between 1 and 9. + + For incremental compression, use a BZ2Compressor object instead. + 'b'Decompress a block of data. + + For incremental decompression, use a BZ2Decompressor object instead. + 'u'Decompress a block of data. + + For incremental decompression, use a BZ2Decompressor object instead. + 'b'Compressed data ended before the end-of-stream marker was reached'u'Compressed data ended before the end-of-stream marker was reached'Calendar printing functions + +Note when comparing these calendars to the ones printed by cal(1): By +default, these calendars have Monday as the first day of the week, and +Sunday as the last (the European convention). Use setfirstweekday() to +set the first day of the week (0=Monday, 6=Sunday).IllegalMonthErrorIllegalWeekdayErrorsetfirstweekdayfirstweekdayisleapleapdaysmonthrangemonthcalendarprmonthprcalmonth_namemonth_abbrday_nameday_abbrCalendarTextCalendarHTMLCalendarLocaleTextCalendarLocaleHTMLCalendarweekheaderbad month number %r; must be 1-12bad weekday number %r; must be 0 (Monday) to 6 (Sunday)JanuaryFebruarymdays_localized_month2001_monthsfuncs_localized_day_days%a%BMONDAYTUESDAYWEDNESDAYTHURSDAYFRIDAYSATURDAYSUNDAYReturn True for leap years, False for non-leap years.Return number of leap years in range [y1, y2). + Assume y1 <= y2.Return weekday (0-6 ~ Mon-Sun) for year, month (1-12), day (1-31).Return weekday (0-6 ~ Mon-Sun) and number of days (28-31) for + year, month.day1ndays_monthlen_prevmonth_nextmonth + Base calendar class. This class doesn't do any formatting. It simply + provides data to subclasses. + getfirstweekday_firstweekdayiterweekdays + Return an iterator for one week of weekday numbers starting with the + configured first one. + itermonthdates + Return an iterator for one month. The iterator will yield datetime.date + values and will always iterate through complete weeks, so it will yield + dates outside the specified month. + itermonthdays3itermonthdays + Like itermonthdates(), but will yield day numbers. For days outside + the specified month the day number is 0. + days_beforedays_afteritermonthdays2 + Like itermonthdates(), but will yield (day number, weekday number) + tuples. For days outside the specified month the day number is 0. + + Like itermonthdates(), but will yield (year, month, day) tuples. Can be + used for dates outside of datetime.date range. + itermonthdays4 + Like itermonthdates(), but will yield (year, month, day, day_of_week) tuples. + Can be used for dates outside of datetime.date range. + monthdatescalendar + Return a matrix (list of lists) representing a month's calendar. + Each row represents a week; week entries are datetime.date values. + datesmonthdays2calendar + Return a matrix representing a month's calendar. + Each row represents a week; week entries are + (day number, weekday number) tuples. Day numbers outside this month + are zero. + monthdayscalendar + Return a matrix representing a month's calendar. + Each row represents a week; days outside this month are zero. 
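A quick tour of the module-level calendar functions documented above (isleap, leapdays, weekday, monthrange, monthcalendar) plus the cal(1)-style plain-text formatting. Monday is the default first weekday; TextCalendar(firstweekday=6) switches to Sunday-first:

import calendar

print(calendar.isleap(2024))            # True
print(calendar.leapdays(2000, 2100))    # 25 leap years in [2000, 2100)
print(calendar.weekday(2024, 2, 29))    # 3  (0=Monday ... 6=Sunday)
print(calendar.monthrange(2024, 2))     # (3, 29): first weekday, days in month

# monthcalendar() returns week rows; days outside the month are 0.
for week in calendar.monthcalendar(2024, 2):
    print(week)

# Plain-text month, similar to the UNIX cal program:
print(calendar.TextCalendar(firstweekday=6).formatmonth(2024, 2))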
+ yeardatescalendar + Return the data for the specified year ready for formatting. The return + value is a list of month rows. Each month row contains up to width months. + Each month contains between 4 and 6 weeks and each week contains 1-7 + days. Days are datetime.date objects. + monthsyeardays2calendar + Return the data for the specified year ready for formatting (similar to + yeardatescalendar()). Entries in the week lists are + (day number, weekday number) tuples. Day numbers outside this month are + zero. + yeardayscalendar + Return the data for the specified year ready for formatting (similar to + yeardatescalendar()). Entries in the week lists are day numbers. + Day numbers outside this month are zero. + + Subclass of Calendar that outputs a calendar as a simple plain text + similar to the UNIX program cal. + prweektheweek + Print a single week (no newline). + formatweekformatday + Returns a formatted day. + %2i + Returns a single week in a string (no newline). + wdformatweekday + Returns a formatted week day name. + formatweekheader + Return a header for a week. + formatmonthnametheyearthemonthwithyear + Return a formatted month name. + %s %r + Print a month's calendar. + formatmonth + Return a month's calendar string (multi-line). + weekformatyear + Returns a year's calendar as a multi-line string. + colwidthformatstringcalweekspryearPrint a year's calendar. + This calendar returns complete HTML pages. + cssclassescssclasses_weekday_headnodaycssclass_nodaycssclass_month_headcssclass_monthcssclass_year_headcssclass_year + Return a day as a table cell. +  %d + Return a complete week as a table row. + %s + Return a weekday name as a table header. + %s + Return a header for a week as a table row. + + Return a month name as a table row. + %s + Return a formatted month as a table. +
+ Return a formatted year as a table of tables. + %sformatyearpagecalendar.csscss + Return a formatted year as a complete HTML page. + + + + + + +Calendar for %d + + + + +different_localegetlocaleoldlocale + This class can be passed a locale name in the constructor and will return + month and weekday names in the specified locale. If this locale includes + an encoding all strings containing month and weekday names will be returned + as unicode. + %s_colwidth_spacingcolsspacingPrints multi-column formatting for year calendarsReturns a string formatted from n strings, centered within n columns.1970EPOCH_EPOCH_ORDUnrelated but handy function to calculate Unix timestamp from GMT.hoursminutesargparsetext only argumentstextgrouphtml only argumentshtmlgroup-w--widthwidth of date column (default 2)-l--linesnumber of lines for each week (default 1)-s--spacingspacing between months (default 6)-m--monthsmonths per row (default 3)-c--cssCSS to use for page-L--localelocale to be used from month and weekday names--encodingencoding to use for output--typeoutput type (text or html)year number (1-9999)month number (1-12, text only)if --locale is specified --encoding is requiredoptdictincorrect number of arguments# Exception raised for bad input (with string parameter for details)# Exceptions raised for bad input# Constants for months referenced later# Number of days per month (except for February in leap years)# This module used to have hard-coded lists of day and month names, as# English strings. The classes following emulate a read-only version of# that, but supply localized names. Note that the values are computed# fresh on each call, in case the user changes locale between calls.# January 1, 2001, was a Monday.# Full and abbreviated names of weekdays# Full and abbreviated names of months (1-based arrays!!!)# Constants for weekdays# 0 = Monday, 6 = Sunday# right-align single-digit days# months in this row# max number of weeks for this row# CSS classes for the day s# CSS classes for the day s# CSS class for the days before and after current month# CSS class for the month's head# CSS class for the month# CSS class for the year's table head# CSS class for the whole year table# day outside month# Support for old module level interface# Spacing of month columns for multi-column year calendar# Amount printed by prweek()# Number of spaces between columnsb'Calendar printing functions + +Note when comparing these calendars to the ones printed by cal(1): By +default, these calendars have Monday as the first day of the week, and +Sunday as the last (the European convention). Use setfirstweekday() to +set the first day of the week (0=Monday, 6=Sunday).'u'Calendar printing functions + +Note when comparing these calendars to the ones printed by cal(1): By +default, these calendars have Monday as the first day of the week, and +Sunday as the last (the European convention). 
Use setfirstweekday() to +set the first day of the week (0=Monday, 6=Sunday).'b'IllegalMonthError'u'IllegalMonthError'b'IllegalWeekdayError'u'IllegalWeekdayError'b'setfirstweekday'u'setfirstweekday'b'firstweekday'u'firstweekday'b'isleap'u'isleap'b'leapdays'u'leapdays'b'weekday'u'weekday'b'monthrange'u'monthrange'b'monthcalendar'u'monthcalendar'b'prmonth'u'prmonth'b'month'u'month'b'prcal'u'prcal'b'calendar'u'calendar'b'timegm'u'timegm'b'month_name'u'month_name'b'month_abbr'u'month_abbr'b'day_name'u'day_name'b'day_abbr'u'day_abbr'b'Calendar'u'Calendar'b'TextCalendar'u'TextCalendar'b'HTMLCalendar'u'HTMLCalendar'b'LocaleTextCalendar'u'LocaleTextCalendar'b'LocaleHTMLCalendar'u'LocaleHTMLCalendar'b'weekheader'u'weekheader'b'bad month number %r; must be 1-12'u'bad month number %r; must be 1-12'b'bad weekday number %r; must be 0 (Monday) to 6 (Sunday)'u'bad weekday number %r; must be 0 (Monday) to 6 (Sunday)'b'%a'u'%a'b'%B'u'%B'b'Return True for leap years, False for non-leap years.'u'Return True for leap years, False for non-leap years.'b'Return number of leap years in range [y1, y2). + Assume y1 <= y2.'u'Return number of leap years in range [y1, y2). + Assume y1 <= y2.'b'Return weekday (0-6 ~ Mon-Sun) for year, month (1-12), day (1-31).'u'Return weekday (0-6 ~ Mon-Sun) for year, month (1-12), day (1-31).'b'Return weekday (0-6 ~ Mon-Sun) and number of days (28-31) for + year, month.'u'Return weekday (0-6 ~ Mon-Sun) and number of days (28-31) for + year, month.'b' + Base calendar class. This class doesn't do any formatting. It simply + provides data to subclasses. + 'u' + Base calendar class. This class doesn't do any formatting. It simply + provides data to subclasses. + 'b' + Return an iterator for one week of weekday numbers starting with the + configured first one. + 'u' + Return an iterator for one week of weekday numbers starting with the + configured first one. + 'b' + Return an iterator for one month. The iterator will yield datetime.date + values and will always iterate through complete weeks, so it will yield + dates outside the specified month. + 'u' + Return an iterator for one month. The iterator will yield datetime.date + values and will always iterate through complete weeks, so it will yield + dates outside the specified month. + 'b' + Like itermonthdates(), but will yield day numbers. For days outside + the specified month the day number is 0. + 'u' + Like itermonthdates(), but will yield day numbers. For days outside + the specified month the day number is 0. + 'b' + Like itermonthdates(), but will yield (day number, weekday number) + tuples. For days outside the specified month the day number is 0. + 'u' + Like itermonthdates(), but will yield (day number, weekday number) + tuples. For days outside the specified month the day number is 0. + 'b' + Like itermonthdates(), but will yield (year, month, day) tuples. Can be + used for dates outside of datetime.date range. + 'u' + Like itermonthdates(), but will yield (year, month, day) tuples. Can be + used for dates outside of datetime.date range. + 'b' + Like itermonthdates(), but will yield (year, month, day, day_of_week) tuples. + Can be used for dates outside of datetime.date range. + 'u' + Like itermonthdates(), but will yield (year, month, day, day_of_week) tuples. + Can be used for dates outside of datetime.date range. + 'b' + Return a matrix (list of lists) representing a month's calendar. + Each row represents a week; week entries are datetime.date values. 
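The Calendar class methods described above always yield complete weeks, so the returned rows include the leading and trailing days that belong to adjacent months; HTMLCalendar renders the same data as a table. A brief sketch:

import calendar

cal = calendar.Calendar(firstweekday=calendar.SUNDAY)

# monthdatescalendar() yields full weeks of datetime.date objects,
# including spill-over days from the neighbouring months.
for week in cal.monthdatescalendar(2024, 2):
    print([d.isoformat() for d in week])

# The same month rendered as an HTML table:
html = calendar.HTMLCalendar(firstweekday=calendar.SUNDAY).formatmonth(2024, 2)
print(html[:60], "...")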
+ 'u' + Return a matrix (list of lists) representing a month's calendar. + Each row represents a week; week entries are datetime.date values. + 'b' + Return a matrix representing a month's calendar. + Each row represents a week; week entries are + (day number, weekday number) tuples. Day numbers outside this month + are zero. + 'u' + Return a matrix representing a month's calendar. + Each row represents a week; week entries are + (day number, weekday number) tuples. Day numbers outside this month + are zero. + 'b' + Return a matrix representing a month's calendar. + Each row represents a week; days outside this month are zero. + 'u' + Return a matrix representing a month's calendar. + Each row represents a week; days outside this month are zero. + 'b' + Return the data for the specified year ready for formatting. The return + value is a list of month rows. Each month row contains up to width months. + Each month contains between 4 and 6 weeks and each week contains 1-7 + days. Days are datetime.date objects. + 'u' + Return the data for the specified year ready for formatting. The return + value is a list of month rows. Each month row contains up to width months. + Each month contains between 4 and 6 weeks and each week contains 1-7 + days. Days are datetime.date objects. + 'b' + Return the data for the specified year ready for formatting (similar to + yeardatescalendar()). Entries in the week lists are + (day number, weekday number) tuples. Day numbers outside this month are + zero. + 'u' + Return the data for the specified year ready for formatting (similar to + yeardatescalendar()). Entries in the week lists are + (day number, weekday number) tuples. Day numbers outside this month are + zero. + 'b' + Return the data for the specified year ready for formatting (similar to + yeardatescalendar()). Entries in the week lists are day numbers. + Day numbers outside this month are zero. + 'u' + Return the data for the specified year ready for formatting (similar to + yeardatescalendar()). Entries in the week lists are day numbers. + Day numbers outside this month are zero. + 'b' + Subclass of Calendar that outputs a calendar as a simple plain text + similar to the UNIX program cal. + 'u' + Subclass of Calendar that outputs a calendar as a simple plain text + similar to the UNIX program cal. + 'b' + Print a single week (no newline). + 'u' + Print a single week (no newline). + 'b' + Returns a formatted day. + 'u' + Returns a formatted day. + 'b'%2i'u'%2i'b' + Returns a single week in a string (no newline). + 'u' + Returns a single week in a string (no newline). + 'b' + Returns a formatted week day name. + 'u' + Returns a formatted week day name. + 'b' + Return a header for a week. + 'u' + Return a header for a week. + 'b' + Return a formatted month name. + 'u' + Return a formatted month name. + 'b'%s %r'u'%s %r'b' + Print a month's calendar. + 'u' + Print a month's calendar. + 'b' + Return a month's calendar string (multi-line). + 'u' + Return a month's calendar string (multi-line). + 'b' + Returns a year's calendar as a multi-line string. + 'u' + Returns a year's calendar as a multi-line string. + 'b'Print a year's calendar.'u'Print a year's calendar.'b' + This calendar returns complete HTML pages. + 'u' + This calendar returns complete HTML pages. + 'b'noday'u'noday'b'year'u'year'b' + Return a day as a table cell. + 'u' + Return a day as a table cell. + 'b' 'u' 'b'%d'u'%d'b' + Return a complete week as a table row. + 'u' + Return a complete week as a table row. 
+ 'b'%s'u'%s'b' + Return a weekday name as a table header. + 'u' + Return a weekday name as a table header. + 'b'%s'u'%s'b' + Return a header for a week as a table row. + 'u' + Return a header for a week as a table row. + 'b' + Return a month name as a table row. + 'u' + Return a month name as a table row. + 'b'%s'u'%s'b' + Return a formatted month as a table. + 'u' + Return a formatted month as a table. + 'b''u'
'b'
'u''b' + Return a formatted year as a table of tables. + 'u' + Return a formatted year as a table of tables. + 'b'%s'u'%s'b''u''b''u''b''u''b''u''b'calendar.css'u'calendar.css'b' + Return a formatted year as a complete HTML page. + 'u' + Return a formatted year as a complete HTML page. + 'b' +'u' +'b' +'u' +'b' +'u' +'b' +'u' +'b' +'u' +'b' +'u' +'b'Calendar for %d +'u'Calendar for %d +'b' +'u' +'b' +'u' +'b' +'u' +'b' +'u' +'b' + This class can be passed a locale name in the constructor and will return + month and weekday names in the specified locale. If this locale includes + an encoding all strings containing month and weekday names will be returned + as unicode. + 'u' + This class can be passed a locale name in the constructor and will return + month and weekday names in the specified locale. If this locale includes + an encoding all strings containing month and weekday names will be returned + as unicode. + 'b'%s'u'%s'b'Prints multi-column formatting for year calendars'u'Prints multi-column formatting for year calendars'b'Returns a string formatted from n strings, centered within n columns.'u'Returns a string formatted from n strings, centered within n columns.'b'Unrelated but handy function to calculate Unix timestamp from GMT.'u'Unrelated but handy function to calculate Unix timestamp from GMT.'b'text only arguments'u'text only arguments'b'html only arguments'u'html only arguments'b'-w'u'-w'b'--width'u'--width'b'width of date column (default 2)'u'width of date column (default 2)'b'-l'u'-l'b'--lines'u'--lines'b'number of lines for each week (default 1)'u'number of lines for each week (default 1)'b'-s'u'-s'b'--spacing'u'--spacing'b'spacing between months (default 6)'u'spacing between months (default 6)'b'-m'u'-m'b'--months'u'--months'b'months per row (default 3)'u'months per row (default 3)'b'-c'b'--css'u'--css'b'CSS to use for page'u'CSS to use for page'b'-L'u'-L'b'--locale'u'--locale'b'locale to be used from month and weekday names'u'locale to be used from month and weekday names'b'--encoding'u'--encoding'b'encoding to use for output'u'encoding to use for output'b'--type'u'--type'b'output type (text or html)'u'output type (text or html)'b'year number (1-9999)'u'year number (1-9999)'b'month number (1-12, text only)'u'month number (1-12, text only)'b'if --locale is specified --encoding is required'u'if --locale is specified --encoding is required'b'incorrect number of arguments'u'incorrect number of arguments'Test case implementationdifflibpprintstrclasssafe_repr_count_diff_all_purpose_count_diff_hashable_common_shorten_repr_subtest_msg_sentinel +Diff is %s characters long. Set self.maxDiff to None to see it.'\nDiff is %s characters long. ''Set self.maxDiff to None to see it.'DIFF_OMITTED + Raise this exception in a test to skip it. + + Usually you can use TestCase.skipTest() or one of the skipping decorators + instead of raising this directly. + _ShouldStop + The test should stop. + _UnexpectedSuccess + The test was supposed to fail, but it didn't! + _Outcomeexpecting_failureaddSubTestresult_supports_subtestssuccesstestPartExecutorisTestold_success_module_cleanupsSame as addCleanup, except the cleanup items are called even if + setUpModule fails (unlike tearDownModule).doModuleCleanupsExecute all module cleanup functions. Normally called for you after + tearDownModule. + Unconditionally skip a test. + test_itemskip_wrapper__unittest_skip____unittest_skip_why__ + Skip a test if the condition is true. + + Skip a test unless the condition is true. 
+ __unittest_expecting_failure___is_subtypebasetype_BaseTestCaseContext_raiseFailurestandardMsg_formatMessagefailureException_AssertRaisesBaseContextexpected_regexobj_name + If args is empty, assertRaises/Warns is being used as a + context manager, so check for a 'msg' kwarg and return self. + If args is not empty, call a callable passing positional and keyword + arguments. + _base_type%s() arg 1 must be %s_base_type_str%r is an invalid keyword argument for this function'%r is an invalid keyword argument for ''this function'callable_obj_AssertRaisesContextA context manager used to implement TestCase.assertRaises* methods.an exception type or tuple of exception typesexc_name{} not raised by {}{} not raisedclear_frames"{}" does not match "{}"_AssertWarnsContextA context manager used to implement TestCase.assertWarns* methods.a warning type or tuple of warning typeswarnings_managerfirst_matching{} not triggered by {}{} not triggered_LoggingWatcher_CapturingHandler + A logging handler capturing all (raw and formatted) logging output. + watcher_AssertLogsContextA context manager used to implement TestCase.assertLogs().LOGGING_FORMATlogger_nameold_handlersold_levelold_propagateno logs of level {} or higher triggered on {}_OrderedChainMapA class whose instances are single test cases. + + By default, the test code itself should be placed in a method named + 'runTest'. + + If the fixture may be used for many test cases, create as + many test methods as are needed. When instantiating such a TestCase + subclass, specify in the constructor arguments the name of the test method + that the instance is to execute. + + Test authors should subclass TestCase for their own tests. Construction + and deconstruction of the test's environment ('fixture') can be + implemented by overriding the 'setUp' and 'tearDown' methods respectively. + + If it is necessary to override the __init__ method, the base class + __init__ method must always be called. It is important that subclasses + should not change the signature of their __init__ method, since instances + of the classes are instantiated automatically by parts of the framework + in order to be run. + + When subclassing TestCase, you can set these attributes: + * failureException: determines which exception will be raised when + the instance's assertion methods fail; test methods raising this + exception will be deemed to have 'failed' rather than 'errored'. + * longMessage: determines whether long messages (including repr of + objects used in assert methods) will be printed on failure in *addition* + to any explicit message passed. + * maxDiff: sets the maximum length of a diff in failure messages + by assert methods using difflib. It is looked up as an instance + attribute so can be configured by individual tests if required. + longMessagemaxDiff_diffThreshold_classSetupFailed_class_cleanupsCreate an instance of the class that will use the named test + method when executed. Raises a ValueError if the instance does + not have a method with the specified name. + _testMethodName_outcomeNo test_testMethodDoctestMethodno such test method in %s: %s_cleanups_subtest_type_equality_funcsaddTypeEqualityFuncassertDictEqualassertListEqualassertTupleEqualassertSetEqualassertMultiLineEqualtypeobjAdd a type specific assertEqual style function to compare a type. + + This method is for use by TestCase subclasses that need to register + their own type equality functions to provide nicer error messages. 
+ + Args: + typeobj: The data type to call this function on when both values + are of the same type in assertEqual(). + function: The callable taking two arguments and an optional + msg= argument that raises self.failureException with a + useful error message when the two arguments are not equal. + Add a function, with arguments, to be called when the test is + completed. Functions added are called on a LIFO basis and are + called after tearDown on test failure or success. + + Cleanup items are called even if setUp fails (unlike tearDown).descriptor 'addCleanup' of 'TestCase' object needs an argument"descriptor 'addCleanup' of 'TestCase' object "Passing 'function' as keyword argument is deprecatedaddCleanup expected at least 1 positional argument, got %d'addCleanup expected at least 1 positional ''argument, got %d'($self, function, /, *args, **kwargs)addClassCleanupSame as addCleanup, except the cleanup items are called even if + setUpClass fails (unlike tearDownClass).Hook method for setting up the test fixture before exercising it.Hook method for deconstructing the test fixture after testing it.setUpClassHook method for setting up class fixture before running tests in the class.tearDownClassHook method for deconstructing the class fixture after running all tests in the class.countTestCasesdefaultTestResultshortDescriptionReturns a one-line description of the test, or None if no + description has been provided. + + The default implementation of this method returns the first line of + the specified test method's docstring. + %s.%s%s (%s)<%s testMethod=%s>_addSkipaddSkipTestResult has no addSkip method, skips not reportedaddSuccesssubTestReturn a context manager that will return the enclosed block + of code in a subtest identified by the optional message and + keyword parameters. A failure in the subtest marks the test + case as failed but resumes execution at the end of the enclosed + block, allowing further test code to be executed. + params_map_SubTest_feedErrorsToResultaddFailureaddError_addExpectedFailureaddExpectedFailureTestResult has no addExpectedFailure method, reporting as passes_addUnexpectedSuccessaddUnexpectedSuccessTestResult has no addUnexpectedSuccess method, reporting as failureorig_resultstartTestRunstartTestskip_whystopTestexpecting_failure_methodexpecting_failure_classoutcomedoCleanupsstopTestRunExecute all cleanup functions. Normally called for you after + tearDown.doClassCleanupsExecute all class cleanup functions. Normally called for you after + tearDownClass.tearDown_exceptionsRun the test without collecting errors in a TestResultskipTestSkip this test.failFail immediately, with the given message.assertFalseCheck that the expression is false.%s is not falseCheck that the expression is true.%s is not trueHonour the longMessage attribute when generating failure messages. + If longMessage is False this means: + * Use only an explicit message if it is provided + * Otherwise use the standard message for the assert + + If longMessage is True: + * Use the standard message + * If an explicit message is provided, plus ' : ' and the explicit message + %s : %sexpected_exceptionFail unless an exception of class expected_exception is raised + by the callable when invoked with specified positional and + keyword arguments. If a different type of exception is + raised, it will not be caught, and the test case will be + deemed to have suffered an error, exactly as for an + unexpected exception. 
+ + If called with the callable and arguments omitted, will return a + context object used like this:: + + with self.assertRaises(SomeException): + do_something() + + An optional keyword argument 'msg' can be provided when assertRaises + is used as a context object. + + The context manager keeps a reference to the exception as + the 'exception' attribute. This allows you to inspect the + exception after the assertion:: + + with self.assertRaises(SomeException) as cm: + do_something() + the_exception = cm.exception + self.assertEqual(the_exception.error_code, 3) + assertWarnsexpected_warningFail unless a warning of class warnClass is triggered + by the callable when invoked with specified positional and + keyword arguments. If a different type of warning is + triggered, it will not be handled: depending on the other + warning filtering rules in effect, it might be silenced, printed + out, or raised as an exception. + + If called with the callable and arguments omitted, will return a + context object used like this:: + + with self.assertWarns(SomeWarning): + do_something() + + An optional keyword argument 'msg' can be provided when assertWarns + is used as a context object. + + The context manager keeps a reference to the first matching + warning as the 'warning' attribute; similarly, the 'filename' + and 'lineno' attributes give you information about the line + of Python code from which the warning was triggered. + This allows you to inspect the warning after the assertion:: + + with self.assertWarns(SomeWarning) as cm: + do_something() + the_warning = cm.warning + self.assertEqual(the_warning.some_attribute, 147) + assertLogsFail unless a log message of level *level* or higher is emitted + on *logger_name* or its children. If omitted, *level* defaults to + INFO and *logger* defaults to the root logger. + + This method must be used as a context manager, and will yield + a recording object with two attributes: `output` and `records`. + At the end of the context manager, the `output` attribute will + be a list of the matching formatted log messages and the + `records` attribute will be a list of the corresponding LogRecord + objects. + + Example:: + + with self.assertLogs('foo', level='INFO') as cm: + logging.getLogger('foo').info('first message') + logging.getLogger('foo.bar').error('second message') + self.assertEqual(cm.output, ['INFO:foo:first message', + 'ERROR:foo.bar:second message']) + _getAssertEqualityFuncGet a detailed comparison function for the types of the two args. + + Returns: A callable accepting (first, second, msg=None) that will + raise a failure exception if first != second with a useful human + readable error message for those types. + asserter_baseAssertEqualThe default assertEqual implementation, not type specific.%s != %sFail if the two objects are unequal as determined by the '==' + operator. + assertion_funcassertNotEqualFail if the two objects are equal as determined by the '!=' + operator. + %s == %sassertAlmostEqualFail if the two objects are unequal as determined by their + difference rounded to the given number of decimal places + (default 7) and comparing to zero, or by comparing that the + difference between the two objects is more than the given + delta. + + Note that decimal places (from zero) are usually not the same + as significant digits (measured from the most significant digit). + + If the two objects compare equal then they will automatically + compare almost equal. 
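A small illustrative test case exercising two of the assertions documented above: assertRaises used as a context manager (keeping the exception on the 'exception' attribute) and assertAlmostEqual with its places/delta variants:

import unittest

class ExampleAsserts(unittest.TestCase):
    def test_raises_as_context_manager(self):
        with self.assertRaises(ZeroDivisionError) as cm:
            1 / 0
        self.assertIsInstance(cm.exception, ZeroDivisionError)

    def test_almost_equal(self):
        # places counts decimal places (default 7); delta is an absolute bound.
        self.assertAlmostEqual(3.14159, 3.1416, places=3)
        self.assertAlmostEqual(100.0, 100.4, delta=0.5)

if __name__ == "__main__":
    unittest.main()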
+ specify delta or places not bothdiff%s != %s within %s delta (%s difference)%s != %s within %r places (%s difference)assertNotAlmostEqualFail if the two objects are equal as determined by their + difference rounded to the given number of decimal places + (default 7) and comparing to zero, or by comparing that the + difference between the two objects is less than the given delta. + + Note that decimal places (from zero) are usually not the same + as significant digits (measured from the most significant digit). + + Objects that are equal automatically fail. + %s == %s within %s delta (%s difference)%s == %s within %r placesassertSequenceEqualseq1seq2seq_typeAn equality assertion for ordered sequences (like lists and tuples). + + For the purposes of this function, a valid ordered sequence type is one + which can be indexed, has a length, and has an equality operator. + + Args: + seq1: The first sequence to compare. + seq2: The second sequence to compare. + seq_type: The expected datatype of the sequences, or None if no + datatype should be enforced. + msg: Optional message to use on failure instead of a list of + differences. + seq_type_nameFirst sequence is not a %s: %sSecond sequence is not a %s: %sdifferinglen1First %s has no length. Non-sequence?len2Second %s has no length. Non-sequence?%ss differ: %s != %s +item1 +Unable to index element %d of first %s +item2 +Unable to index element %d of second %s + +First differing element %d: +%s +%s + +First %s contains %d additional elements. +'\nFirst %s contains %d additional ''elements.\n'First extra element %d: +%s +Unable to index element %d of first %s +'Unable to index element %d ''of first %s\n' +Second %s contains %d additional elements. +'\nSecond %s contains %d additional 'Unable to index element %d of second %s +'of second %s\n'ndiffpformatdiffMsg_truncateMessagemax_difflist1list2A list-specific equality assertion. + + Args: + list1: The first list to compare. + list2: The second list to compare. + msg: Optional message to use on failure instead of a list of + differences. + + tuple1tuple2A tuple-specific equality assertion. + + Args: + tuple1: The first tuple to compare. + tuple2: The second tuple to compare. + msg: Optional message to use on failure instead of a list of + differences. + set1set2A set-specific equality assertion. + + Args: + set1: The first set to compare. + set2: The second set to compare. + msg: Optional message to use on failure instead of a list of + differences. + + assertSetEqual uses ducktyping to support different types of sets, and + is optimized for sets specifically (parameters must support a + difference method). 
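As the docstrings above describe, assertEqual() dispatches same-typed operands to the registered type-specific assertions (assertListEqual, assertTupleEqual, assertSetEqual, assertDictEqual), which produce element-level diffs on failure, while assertSequenceEqual compares element by element without enforcing a container type unless asked to. A minimal illustration:

import unittest

class ContainerAsserts(unittest.TestCase):
    def test_type_specific_dispatch(self):
        # assertEqual() routes these to assertListEqual, assertSetEqual
        # and assertDictEqual automatically.
        self.assertEqual([1, 2, 3], [1, 2, 3])
        self.assertSetEqual({"a", "b"}, {"b", "a"})
        self.assertDictEqual({"k": 1}, {"k": 1})

    def test_sequence_equal_ignores_container_type(self):
        # With no seq_type given, only the elements and lengths must match.
        self.assertSequenceEqual([1, 2, 3], (1, 2, 3))

if __name__ == "__main__":
    unittest.main()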
+ difference1invalid type when attempting set difference: %sfirst argument does not support set difference: %sdifference2second argument does not support set difference: %sItems in the first set but not the second:Items in the second set but not the first:assertInJust like self.assertTrue(a in b), but with a nicer default message.%s not found in %sassertNotInJust like self.assertTrue(a not in b), but with a nicer default message.%s unexpectedly found in %sassertIsexpr1expr2Just like self.assertTrue(a is b), but with a nicer default message.%s is not %sassertIsNotJust like self.assertTrue(a is not b), but with a nicer default message.unexpectedly identical: %sd1assertIsInstanceFirst argument is not a dictionarySecond argument is not a dictionaryassertDictContainsSubsetsubsetdictionaryChecks whether dictionary is a superset of subset.assertDictContainsSubset is deprecatedmismatched%s, expected: %s, actual: %sMissing: %s; Mismatched values: %sAsserts that two iterables have the same elements, the same number of + times, without regard to order. + + self.assertEqual(Counter(list(first)), + Counter(list(second))) + + Example: + - [0, 1, 1] and [1, 0, 1] compare equal. + - [0, 0, 1] and [0, 1] compare unequal. + + first_seqsecond_seqdifferencesElement counts were not equal: +First has %d, Second has %d: %rAssert that two multi-line strings are equal.First argument is not a stringSecond argument is not a stringfirstlinessecondlinesassertLessJust like self.assertTrue(a < b), but with a nicer default message.%s not less than %sassertLessEqualJust like self.assertTrue(a <= b), but with a nicer default message.%s not less than or equal to %sassertGreaterJust like self.assertTrue(a > b), but with a nicer default message.%s not greater than %sassertGreaterEqualJust like self.assertTrue(a >= b), but with a nicer default message.%s not greater than or equal to %sassertIsNoneSame as self.assertTrue(obj is None), with a nicer default message.%s is not NoneIncluded for symmetry with assertIsNone.unexpectedly NoneSame as self.assertTrue(isinstance(obj, cls)), with a nicer + default message.%s is not an instance of %rassertNotIsInstanceIncluded for symmetry with assertIsInstance.%s is an instance of %rAsserts that the message in a raised exception matches a regex. + + Args: + expected_exception: Exception class expected to be raised. + expected_regex: Regex (re.Pattern object or string) expected + to be found in error message. + args: Function to be called and extra positional args. + kwargs: Extra kwargs. + msg: Optional message used in case of failure. Can only be used + when assertRaisesRegex is used as a context manager. + assertWarnsRegexAsserts that the message in a triggered warning matches a regexp. + Basic functioning is similar to assertWarns() with the addition + that only warnings whose messages also match the regular expression + are considered successful matches. + + Args: + expected_warning: Warning class expected to be triggered. + expected_regex: Regex (re.Pattern object or string) expected + to be found in error message. + args: Function to be called and extra positional args. + kwargs: Extra kwargs. + msg: Optional message used in case of failure. Can only be used + when assertWarnsRegex is used as a context manager. 
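The regex-based assertions above match the raised exception's or triggered warning's message against a pattern, and assertCountEqual compares element multisets regardless of order (the [0, 1, 1] vs [1, 0, 1] example in the docstring). A short illustrative test case:

import unittest, warnings

class RegexAsserts(unittest.TestCase):
    def test_raises_regex(self):
        with self.assertRaisesRegex(ValueError, r"invalid literal"):
            int("not a number")

    def test_warns_regex(self):
        with self.assertWarnsRegex(UserWarning, r"deprecat"):
            warnings.warn("this flag is deprecated", UserWarning)

    def test_count_equal(self):
        # Order is ignored, but multiplicity matters.
        self.assertCountEqual([0, 1, 1], [1, 0, 1])

if __name__ == "__main__":
    unittest.main()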
+ Fail the test unless the text matches the regular expression.expected_regex must not be empty.Regex didn't match: %r not found in %rassertNotRegexunexpected_regexFail the test if the text matches the regular expression.Regex matched: %r matches %r in %r_deprecateoriginal_funcdeprecated_funcPlease use {0} instead.failUnlessEqualassertEqualsfailIfEqualassertNotEqualsfailUnlessAlmostEqualassertAlmostEqualsfailIfAlmostEqualassertNotAlmostEqualsfailUnlessassert_failUnlessRaisesfailIfassertRaisesRegexpassertRegexpMatchesassertNotRegexpMatchesA test case that wraps a test function. + + This is useful for slipping pre-existing test functions into the + unittest framework. Optionally, set-up and tidy-up functions can be + supplied. As with TestCase, the tidy-up ('tearDown') function will + always be called if the set-up ('setUp') function ran successfully. + testFunc_setUpFunc_tearDownFunc_testFunc_description<%s tec=%s>_messagesubtests cannot be run directly_subDescription[{}]params_desc({})(){} {}Returns a one-line description of the subtest, or None if no + description has been provided. + # explicitly break a reference cycle:# exc_info -> frame -> exc_info# Swallows all but first exception. If a multi-exception handler# gets written we should use that here instead.# bpo-23890: manually break a reference cycle# let unexpected exceptions pass through# store exception, without traceback, for later retrieval# The __warningregistry__'s need to be in a pristine state for tests# to work properly.# store warning for later retrieval# Now we simply try to choose a helpful failure message# If a string is longer than _diffThreshold, use normal comparison instead# of difflib. See #11763.# Attribute used by TestSuite for classSetUp# we allow instantiation with no explicit method name# but not an *incorrect* or missing method name# Map types to custom assertEqual functions that will compare# instances of said type in more detail to generate a more useful# error message.# If the test is expecting a failure, we really want to# stop now and register the expected failure.# We need to pass an actual exception and traceback to addFailure,# otherwise the legacy result can choke.# If the class or method was skipped.# explicitly break reference cycles:# outcome.errors -> frame -> outcome -> outcome.errors# outcome.expectedFailure -> frame -> outcome -> outcome.expectedFailure# clear the outcome, no more needed# return this for backwards compatibility# even though we no longer use it internally# don't switch to '{}' formatting in Python 2.X# it changes the way unicode input is handled# NOTE(gregory.p.smith): I considered isinstance(first, type(second))# and vice versa. I opted for the conservative approach in case# subclasses are not intended to be compared in detail to their super# class instances using a type equality func. This means testing# subtypes won't automagically use the detailed comparison. Callers# should use their type specific assertSpamEqual method to compare# subclasses if the detailed comparison is desired and appropriate.# See the discussion in http://bugs.python.org/issue2578.# shortcut# The sequences are the same, but have differing types.# Handle case with unhashable elements# don't use difflib if the strings are too long# _formatMessage ensures the longMessage option is respected# see #9424b'Test case implementation'u'Test case implementation'b' +Diff is %s characters long. Set self.maxDiff to None to see it.'u' +Diff is %s characters long. 
Set self.maxDiff to None to see it.'b' + Raise this exception in a test to skip it. + + Usually you can use TestCase.skipTest() or one of the skipping decorators + instead of raising this directly. + 'u' + Raise this exception in a test to skip it. + + Usually you can use TestCase.skipTest() or one of the skipping decorators + instead of raising this directly. + 'b' + The test should stop. + 'u' + The test should stop. + 'b' + The test was supposed to fail, but it didn't! + 'u' + The test was supposed to fail, but it didn't! + 'b'addSubTest'u'addSubTest'b'Same as addCleanup, except the cleanup items are called even if + setUpModule fails (unlike tearDownModule).'u'Same as addCleanup, except the cleanup items are called even if + setUpModule fails (unlike tearDownModule).'b'Execute all module cleanup functions. Normally called for you after + tearDownModule.'u'Execute all module cleanup functions. Normally called for you after + tearDownModule.'b' + Unconditionally skip a test. + 'u' + Unconditionally skip a test. + 'b' + Skip a test if the condition is true. + 'u' + Skip a test if the condition is true. + 'b' + Skip a test unless the condition is true. + 'u' + Skip a test unless the condition is true. + 'b' + If args is empty, assertRaises/Warns is being used as a + context manager, so check for a 'msg' kwarg and return self. + If args is not empty, call a callable passing positional and keyword + arguments. + 'u' + If args is empty, assertRaises/Warns is being used as a + context manager, so check for a 'msg' kwarg and return self. + If args is not empty, call a callable passing positional and keyword + arguments. + 'b'%s() arg 1 must be %s'u'%s() arg 1 must be %s'b'%r is an invalid keyword argument for this function'u'%r is an invalid keyword argument for this function'b'A context manager used to implement TestCase.assertRaises* methods.'u'A context manager used to implement TestCase.assertRaises* methods.'b'an exception type or tuple of exception types'u'an exception type or tuple of exception types'b'{} not raised by {}'u'{} not raised by {}'b'{} not raised'u'{} not raised'b'"{}" does not match "{}"'u'"{}" does not match "{}"'b'A context manager used to implement TestCase.assertWarns* methods.'u'A context manager used to implement TestCase.assertWarns* methods.'b'a warning type or tuple of warning types'u'a warning type or tuple of warning types'b'{} not triggered by {}'u'{} not triggered by {}'b'{} not triggered'u'{} not triggered'b'_LoggingWatcher'u'_LoggingWatcher'b'records'u'records'b'output'u'output'b' + A logging handler capturing all (raw and formatted) logging output. + 'u' + A logging handler capturing all (raw and formatted) logging output. + 'b'A context manager used to implement TestCase.assertLogs().'u'A context manager used to implement TestCase.assertLogs().'b'no logs of level {} or higher triggered on {}'u'no logs of level {} or higher triggered on {}'b'A class whose instances are single test cases. + + By default, the test code itself should be placed in a method named + 'runTest'. + + If the fixture may be used for many test cases, create as + many test methods as are needed. When instantiating such a TestCase + subclass, specify in the constructor arguments the name of the test method + that the instance is to execute. + + Test authors should subclass TestCase for their own tests. Construction + and deconstruction of the test's environment ('fixture') can be + implemented by overriding the 'setUp' and 'tearDown' methods respectively. 
+ + If it is necessary to override the __init__ method, the base class + __init__ method must always be called. It is important that subclasses + should not change the signature of their __init__ method, since instances + of the classes are instantiated automatically by parts of the framework + in order to be run. + + When subclassing TestCase, you can set these attributes: + * failureException: determines which exception will be raised when + the instance's assertion methods fail; test methods raising this + exception will be deemed to have 'failed' rather than 'errored'. + * longMessage: determines whether long messages (including repr of + objects used in assert methods) will be printed on failure in *addition* + to any explicit message passed. + * maxDiff: sets the maximum length of a diff in failure messages + by assert methods using difflib. It is looked up as an instance + attribute so can be configured by individual tests if required. + 'u'A class whose instances are single test cases. + + By default, the test code itself should be placed in a method named + 'runTest'. + + If the fixture may be used for many test cases, create as + many test methods as are needed. When instantiating such a TestCase + subclass, specify in the constructor arguments the name of the test method + that the instance is to execute. + + Test authors should subclass TestCase for their own tests. Construction + and deconstruction of the test's environment ('fixture') can be + implemented by overriding the 'setUp' and 'tearDown' methods respectively. + + If it is necessary to override the __init__ method, the base class + __init__ method must always be called. It is important that subclasses + should not change the signature of their __init__ method, since instances + of the classes are instantiated automatically by parts of the framework + in order to be run. + + When subclassing TestCase, you can set these attributes: + * failureException: determines which exception will be raised when + the instance's assertion methods fail; test methods raising this + exception will be deemed to have 'failed' rather than 'errored'. + * longMessage: determines whether long messages (including repr of + objects used in assert methods) will be printed on failure in *addition* + to any explicit message passed. + * maxDiff: sets the maximum length of a diff in failure messages + by assert methods using difflib. It is looked up as an instance + attribute so can be configured by individual tests if required. + 'b'Create an instance of the class that will use the named test + method when executed. Raises a ValueError if the instance does + not have a method with the specified name. + 'u'Create an instance of the class that will use the named test + method when executed. Raises a ValueError if the instance does + not have a method with the specified name. + 'b'No test'u'No test'b'no such test method in %s: %s'u'no such test method in %s: %s'b'assertDictEqual'u'assertDictEqual'b'assertListEqual'u'assertListEqual'b'assertTupleEqual'u'assertTupleEqual'b'assertSetEqual'u'assertSetEqual'b'assertMultiLineEqual'u'assertMultiLineEqual'b'Add a type specific assertEqual style function to compare a type. + + This method is for use by TestCase subclasses that need to register + their own type equality functions to provide nicer error messages. + + Args: + typeobj: The data type to call this function on when both values + are of the same type in assertEqual(). 
+ function: The callable taking two arguments and an optional + msg= argument that raises self.failureException with a + useful error message when the two arguments are not equal. + 'u'Add a type specific assertEqual style function to compare a type. + + This method is for use by TestCase subclasses that need to register + their own type equality functions to provide nicer error messages. + + Args: + typeobj: The data type to call this function on when both values + are of the same type in assertEqual(). + function: The callable taking two arguments and an optional + msg= argument that raises self.failureException with a + useful error message when the two arguments are not equal. + 'b'Add a function, with arguments, to be called when the test is + completed. Functions added are called on a LIFO basis and are + called after tearDown on test failure or success. + + Cleanup items are called even if setUp fails (unlike tearDown).'u'Add a function, with arguments, to be called when the test is + completed. Functions added are called on a LIFO basis and are + called after tearDown on test failure or success. + + Cleanup items are called even if setUp fails (unlike tearDown).'b'descriptor 'addCleanup' of 'TestCase' object needs an argument'u'descriptor 'addCleanup' of 'TestCase' object needs an argument'b'function'u'function'b'Passing 'function' as keyword argument is deprecated'u'Passing 'function' as keyword argument is deprecated'b'addCleanup expected at least 1 positional argument, got %d'u'addCleanup expected at least 1 positional argument, got %d'b'($self, function, /, *args, **kwargs)'u'($self, function, /, *args, **kwargs)'b'Same as addCleanup, except the cleanup items are called even if + setUpClass fails (unlike tearDownClass).'u'Same as addCleanup, except the cleanup items are called even if + setUpClass fails (unlike tearDownClass).'b'Hook method for setting up the test fixture before exercising it.'u'Hook method for setting up the test fixture before exercising it.'b'Hook method for deconstructing the test fixture after testing it.'u'Hook method for deconstructing the test fixture after testing it.'b'Hook method for setting up class fixture before running tests in the class.'u'Hook method for setting up class fixture before running tests in the class.'b'Hook method for deconstructing the class fixture after running all tests in the class.'u'Hook method for deconstructing the class fixture after running all tests in the class.'b'Returns a one-line description of the test, or None if no + description has been provided. + + The default implementation of this method returns the first line of + the specified test method's docstring. + 'u'Returns a one-line description of the test, or None if no + description has been provided. + + The default implementation of this method returns the first line of + the specified test method's docstring. + 'b'%s.%s'u'%s.%s'b'%s (%s)'u'%s (%s)'b'<%s testMethod=%s>'u'<%s testMethod=%s>'b'addSkip'u'addSkip'b'TestResult has no addSkip method, skips not reported'u'TestResult has no addSkip method, skips not reported'b'Return a context manager that will return the enclosed block + of code in a subtest identified by the optional message and + keyword parameters. A failure in the subtest marks the test + case as failed but resumes execution at the end of the enclosed + block, allowing further test code to be executed. 
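The preceding strings document addCleanup (LIFO cleanups that run even when setUp fails) and the subTest context manager (a failing subtest marks the test failed but lets execution resume after the block). A minimal sketch combining the two; the temporary-directory fixture is an assumed example, not taken from the dump.

import shutil
import tempfile
import unittest

class CleanupAndSubtests(unittest.TestCase):
    def setUp(self):
        self.workdir = tempfile.mkdtemp()
        # Cleanups run LIFO after tearDown, and still run for anything
        # registered before a later setUp failure.
        self.addCleanup(shutil.rmtree, self.workdir)

    def test_even_numbers(self):
        for n in (0, 2, 4):
            # Each failure is reported per subtest; execution resumes
            # at the end of the with-block.
            with self.subTest(n=n):
                self.assertEqual(n % 2, 0)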
+ 'u'Return a context manager that will return the enclosed block + of code in a subtest identified by the optional message and + keyword parameters. A failure in the subtest marks the test + case as failed but resumes execution at the end of the enclosed + block, allowing further test code to be executed. + 'b'TestResult has no addExpectedFailure method, reporting as passes'u'TestResult has no addExpectedFailure method, reporting as passes'b'TestResult has no addUnexpectedSuccess method, reporting as failure'u'TestResult has no addUnexpectedSuccess method, reporting as failure'b'startTestRun'u'startTestRun'b'__unittest_skip__'u'__unittest_skip__'b'__unittest_skip_why__'u'__unittest_skip_why__'b'__unittest_expecting_failure__'u'__unittest_expecting_failure__'b'stopTestRun'u'stopTestRun'b'Execute all cleanup functions. Normally called for you after + tearDown.'u'Execute all cleanup functions. Normally called for you after + tearDown.'b'Execute all class cleanup functions. Normally called for you after + tearDownClass.'u'Execute all class cleanup functions. Normally called for you after + tearDownClass.'b'Run the test without collecting errors in a TestResult'u'Run the test without collecting errors in a TestResult'b'Skip this test.'u'Skip this test.'b'Fail immediately, with the given message.'u'Fail immediately, with the given message.'b'Check that the expression is false.'u'Check that the expression is false.'b'%s is not false'u'%s is not false'b'Check that the expression is true.'u'Check that the expression is true.'b'%s is not true'u'%s is not true'b'Honour the longMessage attribute when generating failure messages. + If longMessage is False this means: + * Use only an explicit message if it is provided + * Otherwise use the standard message for the assert + + If longMessage is True: + * Use the standard message + * If an explicit message is provided, plus ' : ' and the explicit message + 'u'Honour the longMessage attribute when generating failure messages. + If longMessage is False this means: + * Use only an explicit message if it is provided + * Otherwise use the standard message for the assert + + If longMessage is True: + * Use the standard message + * If an explicit message is provided, plus ' : ' and the explicit message + 'b'%s : %s'u'%s : %s'b'Fail unless an exception of class expected_exception is raised + by the callable when invoked with specified positional and + keyword arguments. If a different type of exception is + raised, it will not be caught, and the test case will be + deemed to have suffered an error, exactly as for an + unexpected exception. + + If called with the callable and arguments omitted, will return a + context object used like this:: + + with self.assertRaises(SomeException): + do_something() + + An optional keyword argument 'msg' can be provided when assertRaises + is used as a context object. + + The context manager keeps a reference to the exception as + the 'exception' attribute. This allows you to inspect the + exception after the assertion:: + + with self.assertRaises(SomeException) as cm: + do_something() + the_exception = cm.exception + self.assertEqual(the_exception.error_code, 3) + 'u'Fail unless an exception of class expected_exception is raised + by the callable when invoked with specified positional and + keyword arguments. If a different type of exception is + raised, it will not be caught, and the test case will be + deemed to have suffered an error, exactly as for an + unexpected exception. 
+ + If called with the callable and arguments omitted, will return a + context object used like this:: + + with self.assertRaises(SomeException): + do_something() + + An optional keyword argument 'msg' can be provided when assertRaises + is used as a context object. + + The context manager keeps a reference to the exception as + the 'exception' attribute. This allows you to inspect the + exception after the assertion:: + + with self.assertRaises(SomeException) as cm: + do_something() + the_exception = cm.exception + self.assertEqual(the_exception.error_code, 3) + 'b'assertRaises'u'assertRaises'b'Fail unless a warning of class warnClass is triggered + by the callable when invoked with specified positional and + keyword arguments. If a different type of warning is + triggered, it will not be handled: depending on the other + warning filtering rules in effect, it might be silenced, printed + out, or raised as an exception. + + If called with the callable and arguments omitted, will return a + context object used like this:: + + with self.assertWarns(SomeWarning): + do_something() + + An optional keyword argument 'msg' can be provided when assertWarns + is used as a context object. + + The context manager keeps a reference to the first matching + warning as the 'warning' attribute; similarly, the 'filename' + and 'lineno' attributes give you information about the line + of Python code from which the warning was triggered. + This allows you to inspect the warning after the assertion:: + + with self.assertWarns(SomeWarning) as cm: + do_something() + the_warning = cm.warning + self.assertEqual(the_warning.some_attribute, 147) + 'u'Fail unless a warning of class warnClass is triggered + by the callable when invoked with specified positional and + keyword arguments. If a different type of warning is + triggered, it will not be handled: depending on the other + warning filtering rules in effect, it might be silenced, printed + out, or raised as an exception. + + If called with the callable and arguments omitted, will return a + context object used like this:: + + with self.assertWarns(SomeWarning): + do_something() + + An optional keyword argument 'msg' can be provided when assertWarns + is used as a context object. + + The context manager keeps a reference to the first matching + warning as the 'warning' attribute; similarly, the 'filename' + and 'lineno' attributes give you information about the line + of Python code from which the warning was triggered. + This allows you to inspect the warning after the assertion:: + + with self.assertWarns(SomeWarning) as cm: + do_something() + the_warning = cm.warning + self.assertEqual(the_warning.some_attribute, 147) + 'b'assertWarns'u'assertWarns'b'Fail unless a log message of level *level* or higher is emitted + on *logger_name* or its children. If omitted, *level* defaults to + INFO and *logger* defaults to the root logger. + + This method must be used as a context manager, and will yield + a recording object with two attributes: `output` and `records`. + At the end of the context manager, the `output` attribute will + be a list of the matching formatted log messages and the + `records` attribute will be a list of the corresponding LogRecord + objects. 
+ + Example:: + + with self.assertLogs('foo', level='INFO') as cm: + logging.getLogger('foo').info('first message') + logging.getLogger('foo.bar').error('second message') + self.assertEqual(cm.output, ['INFO:foo:first message', + 'ERROR:foo.bar:second message']) + 'u'Fail unless a log message of level *level* or higher is emitted + on *logger_name* or its children. If omitted, *level* defaults to + INFO and *logger* defaults to the root logger. + + This method must be used as a context manager, and will yield + a recording object with two attributes: `output` and `records`. + At the end of the context manager, the `output` attribute will + be a list of the matching formatted log messages and the + `records` attribute will be a list of the corresponding LogRecord + objects. + + Example:: + + with self.assertLogs('foo', level='INFO') as cm: + logging.getLogger('foo').info('first message') + logging.getLogger('foo.bar').error('second message') + self.assertEqual(cm.output, ['INFO:foo:first message', + 'ERROR:foo.bar:second message']) + 'b'Get a detailed comparison function for the types of the two args. + + Returns: A callable accepting (first, second, msg=None) that will + raise a failure exception if first != second with a useful human + readable error message for those types. + 'u'Get a detailed comparison function for the types of the two args. + + Returns: A callable accepting (first, second, msg=None) that will + raise a failure exception if first != second with a useful human + readable error message for those types. + 'b'The default assertEqual implementation, not type specific.'u'The default assertEqual implementation, not type specific.'b'%s != %s'u'%s != %s'b'Fail if the two objects are unequal as determined by the '==' + operator. + 'u'Fail if the two objects are unequal as determined by the '==' + operator. + 'b'Fail if the two objects are equal as determined by the '!=' + operator. + 'u'Fail if the two objects are equal as determined by the '!=' + operator. + 'b'%s == %s'u'%s == %s'b'Fail if the two objects are unequal as determined by their + difference rounded to the given number of decimal places + (default 7) and comparing to zero, or by comparing that the + difference between the two objects is more than the given + delta. + + Note that decimal places (from zero) are usually not the same + as significant digits (measured from the most significant digit). + + If the two objects compare equal then they will automatically + compare almost equal. + 'u'Fail if the two objects are unequal as determined by their + difference rounded to the given number of decimal places + (default 7) and comparing to zero, or by comparing that the + difference between the two objects is more than the given + delta. + + Note that decimal places (from zero) are usually not the same + as significant digits (measured from the most significant digit). + + If the two objects compare equal then they will automatically + compare almost equal. + 'b'specify delta or places not both'u'specify delta or places not both'b'%s != %s within %s delta (%s difference)'u'%s != %s within %s delta (%s difference)'b'%s != %s within %r places (%s difference)'u'%s != %s within %r places (%s difference)'b'Fail if the two objects are equal as determined by their + difference rounded to the given number of decimal places + (default 7) and comparing to zero, or by comparing that the + difference between the two objects is less than the given delta. 
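The assertAlmostEqual/assertNotAlmostEqual docstrings above distinguish rounding the difference to a number of decimal places from bounding the absolute difference with delta, and note that the two options are mutually exclusive. A small sketch of both forms; the numeric values are arbitrary.

import unittest

class AlmostEqualExamples(unittest.TestCase):
    def test_places(self):
        # The difference is rounded to 7 decimal places (the default)
        # and compared to zero.
        self.assertAlmostEqual(0.1 + 0.2, 0.3, places=7)

    def test_delta(self):
        # Bound the absolute difference directly instead; passing both
        # places and delta raises TypeError.
        self.assertAlmostEqual(100.0, 100.4, delta=0.5)
        self.assertNotAlmostEqual(100.0, 102.0, delta=0.5)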
+ + Note that decimal places (from zero) are usually not the same + as significant digits (measured from the most significant digit). + + Objects that are equal automatically fail. + 'u'Fail if the two objects are equal as determined by their + difference rounded to the given number of decimal places + (default 7) and comparing to zero, or by comparing that the + difference between the two objects is less than the given delta. + + Note that decimal places (from zero) are usually not the same + as significant digits (measured from the most significant digit). + + Objects that are equal automatically fail. + 'b'%s == %s within %s delta (%s difference)'u'%s == %s within %s delta (%s difference)'b'%s == %s within %r places'u'%s == %s within %r places'b'An equality assertion for ordered sequences (like lists and tuples). + + For the purposes of this function, a valid ordered sequence type is one + which can be indexed, has a length, and has an equality operator. + + Args: + seq1: The first sequence to compare. + seq2: The second sequence to compare. + seq_type: The expected datatype of the sequences, or None if no + datatype should be enforced. + msg: Optional message to use on failure instead of a list of + differences. + 'u'An equality assertion for ordered sequences (like lists and tuples). + + For the purposes of this function, a valid ordered sequence type is one + which can be indexed, has a length, and has an equality operator. + + Args: + seq1: The first sequence to compare. + seq2: The second sequence to compare. + seq_type: The expected datatype of the sequences, or None if no + datatype should be enforced. + msg: Optional message to use on failure instead of a list of + differences. + 'b'First sequence is not a %s: %s'u'First sequence is not a %s: %s'b'Second sequence is not a %s: %s'u'Second sequence is not a %s: %s'b'sequence'u'sequence'b'First %s has no length. Non-sequence?'u'First %s has no length. Non-sequence?'b'Second %s has no length. Non-sequence?'u'Second %s has no length. Non-sequence?'b'%ss differ: %s != %s +'u'%ss differ: %s != %s +'b' +Unable to index element %d of first %s +'u' +Unable to index element %d of first %s +'b' +Unable to index element %d of second %s +'u' +Unable to index element %d of second %s +'b' +First differing element %d: +%s +%s +'u' +First differing element %d: +%s +%s +'b' +First %s contains %d additional elements. +'u' +First %s contains %d additional elements. +'b'First extra element %d: +%s +'u'First extra element %d: +%s +'b'Unable to index element %d of first %s +'u'Unable to index element %d of first %s +'b' +Second %s contains %d additional elements. +'u' +Second %s contains %d additional elements. +'b'Unable to index element %d of second %s +'u'Unable to index element %d of second %s +'b'A list-specific equality assertion. + + Args: + list1: The first list to compare. + list2: The second list to compare. + msg: Optional message to use on failure instead of a list of + differences. + + 'u'A list-specific equality assertion. + + Args: + list1: The first list to compare. + list2: The second list to compare. + msg: Optional message to use on failure instead of a list of + differences. + + 'b'A tuple-specific equality assertion. + + Args: + tuple1: The first tuple to compare. + tuple2: The second tuple to compare. + msg: Optional message to use on failure instead of a list of + differences. + 'u'A tuple-specific equality assertion. + + Args: + tuple1: The first tuple to compare. + tuple2: The second tuple to compare. 
+ msg: Optional message to use on failure instead of a list of + differences. + 'b'A set-specific equality assertion. + + Args: + set1: The first set to compare. + set2: The second set to compare. + msg: Optional message to use on failure instead of a list of + differences. + + assertSetEqual uses ducktyping to support different types of sets, and + is optimized for sets specifically (parameters must support a + difference method). + 'u'A set-specific equality assertion. + + Args: + set1: The first set to compare. + set2: The second set to compare. + msg: Optional message to use on failure instead of a list of + differences. + + assertSetEqual uses ducktyping to support different types of sets, and + is optimized for sets specifically (parameters must support a + difference method). + 'b'invalid type when attempting set difference: %s'u'invalid type when attempting set difference: %s'b'first argument does not support set difference: %s'u'first argument does not support set difference: %s'b'second argument does not support set difference: %s'u'second argument does not support set difference: %s'b'Items in the first set but not the second:'u'Items in the first set but not the second:'b'Items in the second set but not the first:'u'Items in the second set but not the first:'b'Just like self.assertTrue(a in b), but with a nicer default message.'u'Just like self.assertTrue(a in b), but with a nicer default message.'b'%s not found in %s'u'%s not found in %s'b'Just like self.assertTrue(a not in b), but with a nicer default message.'u'Just like self.assertTrue(a not in b), but with a nicer default message.'b'%s unexpectedly found in %s'u'%s unexpectedly found in %s'b'Just like self.assertTrue(a is b), but with a nicer default message.'u'Just like self.assertTrue(a is b), but with a nicer default message.'b'%s is not %s'u'%s is not %s'b'Just like self.assertTrue(a is not b), but with a nicer default message.'u'Just like self.assertTrue(a is not b), but with a nicer default message.'b'unexpectedly identical: %s'u'unexpectedly identical: %s'b'First argument is not a dictionary'u'First argument is not a dictionary'b'Second argument is not a dictionary'u'Second argument is not a dictionary'b'Checks whether dictionary is a superset of subset.'u'Checks whether dictionary is a superset of subset.'b'assertDictContainsSubset is deprecated'u'assertDictContainsSubset is deprecated'b'%s, expected: %s, actual: %s'u'%s, expected: %s, actual: %s'b'Missing: %s'u'Missing: %s'b'; 'u'; 'b'Mismatched values: %s'u'Mismatched values: %s'b'Asserts that two iterables have the same elements, the same number of + times, without regard to order. + + self.assertEqual(Counter(list(first)), + Counter(list(second))) + + Example: + - [0, 1, 1] and [1, 0, 1] compare equal. + - [0, 0, 1] and [0, 1] compare unequal. + + 'u'Asserts that two iterables have the same elements, the same number of + times, without regard to order. + + self.assertEqual(Counter(list(first)), + Counter(list(second))) + + Example: + - [0, 1, 1] and [1, 0, 1] compare equal. + - [0, 0, 1] and [0, 1] compare unequal. 
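The surrounding strings describe assertSetEqual (which duck-types on a difference() method), the membership assertions assertIn/assertNotIn, and assertCountEqual (order-insensitive but multiplicity-sensitive). A brief illustration; the data values are made up.

import unittest

class ContainerAssertions(unittest.TestCase):
    def test_sets_and_membership(self):
        # assertSetEqual only needs a difference() method on each argument,
        # so set and frozenset mix freely.
        self.assertSetEqual({1, 2, 3}, frozenset({3, 2, 1}))
        self.assertIn("key", {"key": 1})
        self.assertNotIn(4, [1, 2, 3])

    def test_count_equal(self):
        # Order is ignored, but element counts must match.
        self.assertCountEqual([0, 1, 1], [1, 0, 1])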
+ + 'b'Element counts were not equal: +'u'Element counts were not equal: +'b'First has %d, Second has %d: %r'u'First has %d, Second has %d: %r'b'Assert that two multi-line strings are equal.'u'Assert that two multi-line strings are equal.'b'First argument is not a string'u'First argument is not a string'b'Second argument is not a string'u'Second argument is not a string'b'Just like self.assertTrue(a < b), but with a nicer default message.'u'Just like self.assertTrue(a < b), but with a nicer default message.'b'%s not less than %s'u'%s not less than %s'b'Just like self.assertTrue(a <= b), but with a nicer default message.'u'Just like self.assertTrue(a <= b), but with a nicer default message.'b'%s not less than or equal to %s'u'%s not less than or equal to %s'b'Just like self.assertTrue(a > b), but with a nicer default message.'u'Just like self.assertTrue(a > b), but with a nicer default message.'b'%s not greater than %s'u'%s not greater than %s'b'Just like self.assertTrue(a >= b), but with a nicer default message.'u'Just like self.assertTrue(a >= b), but with a nicer default message.'b'%s not greater than or equal to %s'u'%s not greater than or equal to %s'b'Same as self.assertTrue(obj is None), with a nicer default message.'u'Same as self.assertTrue(obj is None), with a nicer default message.'b'%s is not None'u'%s is not None'b'Included for symmetry with assertIsNone.'u'Included for symmetry with assertIsNone.'b'unexpectedly None'u'unexpectedly None'b'Same as self.assertTrue(isinstance(obj, cls)), with a nicer + default message.'u'Same as self.assertTrue(isinstance(obj, cls)), with a nicer + default message.'b'%s is not an instance of %r'u'%s is not an instance of %r'b'Included for symmetry with assertIsInstance.'u'Included for symmetry with assertIsInstance.'b'%s is an instance of %r'u'%s is an instance of %r'b'Asserts that the message in a raised exception matches a regex. + + Args: + expected_exception: Exception class expected to be raised. + expected_regex: Regex (re.Pattern object or string) expected + to be found in error message. + args: Function to be called and extra positional args. + kwargs: Extra kwargs. + msg: Optional message used in case of failure. Can only be used + when assertRaisesRegex is used as a context manager. + 'u'Asserts that the message in a raised exception matches a regex. + + Args: + expected_exception: Exception class expected to be raised. + expected_regex: Regex (re.Pattern object or string) expected + to be found in error message. + args: Function to be called and extra positional args. + kwargs: Extra kwargs. + msg: Optional message used in case of failure. Can only be used + when assertRaisesRegex is used as a context manager. + 'b'assertRaisesRegex'u'assertRaisesRegex'b'Asserts that the message in a triggered warning matches a regexp. + Basic functioning is similar to assertWarns() with the addition + that only warnings whose messages also match the regular expression + are considered successful matches. + + Args: + expected_warning: Warning class expected to be triggered. + expected_regex: Regex (re.Pattern object or string) expected + to be found in error message. + args: Function to be called and extra positional args. + kwargs: Extra kwargs. + msg: Optional message used in case of failure. Can only be used + when assertWarnsRegex is used as a context manager. + 'u'Asserts that the message in a triggered warning matches a regexp. 
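The assertRaisesRegex and assertWarnsRegex docstrings above add a message-pattern check on top of assertRaises/assertWarns but carry no usage example. A minimal sketch using only the standard library; the messages and patterns are illustrative.

import unittest
import warnings

class RegexRaisesExamples(unittest.TestCase):
    def test_raises_regex(self):
        # The exception must be raised AND its message must match the regex.
        with self.assertRaisesRegex(ValueError, r"invalid literal .* 'abc'"):
            int("abc")

    def test_warns_regex(self):
        # Likewise, only warnings whose message matches count as a success.
        with self.assertWarnsRegex(DeprecationWarning, r"use .* instead"):
            warnings.warn("use new_api() instead", DeprecationWarning)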
+ Basic functioning is similar to assertWarns() with the addition + that only warnings whose messages also match the regular expression + are considered successful matches. + + Args: + expected_warning: Warning class expected to be triggered. + expected_regex: Regex (re.Pattern object or string) expected + to be found in error message. + args: Function to be called and extra positional args. + kwargs: Extra kwargs. + msg: Optional message used in case of failure. Can only be used + when assertWarnsRegex is used as a context manager. + 'b'assertWarnsRegex'u'assertWarnsRegex'b'Fail the test unless the text matches the regular expression.'u'Fail the test unless the text matches the regular expression.'b'expected_regex must not be empty.'u'expected_regex must not be empty.'b'Regex didn't match: %r not found in %r'u'Regex didn't match: %r not found in %r'b'Fail the test if the text matches the regular expression.'u'Fail the test if the text matches the regular expression.'b'Regex matched: %r matches %r in %r'u'Regex matched: %r matches %r in %r'b'Please use {0} instead.'u'Please use {0} instead.'b'A test case that wraps a test function. + + This is useful for slipping pre-existing test functions into the + unittest framework. Optionally, set-up and tidy-up functions can be + supplied. As with TestCase, the tidy-up ('tearDown') function will + always be called if the set-up ('setUp') function ran successfully. + 'u'A test case that wraps a test function. + + This is useful for slipping pre-existing test functions into the + unittest framework. Optionally, set-up and tidy-up functions can be + supplied. As with TestCase, the tidy-up ('tearDown') function will + always be called if the set-up ('setUp') function ran successfully. + 'b'<%s tec=%s>'u'<%s tec=%s>'b'subtests cannot be run directly'u'subtests cannot be run directly'b'[{}]'u'[{}]'b'({})'u'({})'b'()'u'()'b'{} {}'u'{} {}'b'Returns a one-line description of the subtest, or None if no + description has been provided. + 'u'Returns a one-line description of the subtest, or None if no + description has been provided. + 'u'unittest.case'u'case'distutils.ccompiler + +Contains CCompiler, an abstract base class that defines the interface +for the Distutils compiler abstraction model.distutils.errorsdistutils.spawndistutils.file_utilmove_filedistutils.dir_utilmkpathdistutils.dep_utilnewer_pairwisenewer_groupdistutils.utilsplit_quotedexecuteCCompilerAbstract base class to define the interface that must be implemented + by real compiler classes. Also has some utility methods used by + several compiler classes. + + The basic idea behind a compiler abstraction class is that each + instance can be used for all the compile/link steps in building a + single project. Thus, attributes common to all of those compile and + link steps -- include directories, macros to define, libraries to link + against, etc. -- are attributes of the compiler instance. To allow for + variability in how individual files are treated, most of those + attributes may be varied on a per-compilation or per-link basis. + compiler_typesrc_extensionsobj_extensionstatic_lib_extensionshared_lib_extensionstatic_lib_formatshared_lib_formatexe_extension.cc++.cc.cpp.cxxobjc.mlanguage_maplanguage_orderoutput_dirmacrosinclude_dirslibrarieslibrary_dirsruntime_library_dirsset_executableset_executablesDefine the executables (and options for them) that will be run + to perform the various stages of compilation. 
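A little above, the dump carries the FunctionTestCase docstring: a wrapper for slipping pre-existing test functions into the unittest framework, with optional set-up and tidy-up callables. A hedged sketch of that wrapper; the three functions and the description string are placeholders.

import unittest

def check_addition():
    assert 1 + 1 == 2

def make_fixture():
    pass          # stand-in set-up

def drop_fixture():
    pass          # stand-in tidy-up; runs whenever make_fixture succeeded

case = unittest.FunctionTestCase(check_addition,
                                 setUp=make_fixture,
                                 tearDown=drop_fixture,
                                 description="legacy check")
unittest.TextTestRunner().run(unittest.TestSuite([case]))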
The exact set of + executables that may be specified here depends on the compiler + class (via the 'executables' class attribute), but most will have: + compiler the C/C++ compiler + linker_so linker used to create shared objects and libraries + linker_exe linker used to create binary executables + archiver static library creator + + On platforms with a command-line (Unix, DOS/Windows), each of these + is a string that will be split into executable name and (optional) + list of arguments. (Splitting the string is done similarly to how + Unix shells operate: words are delimited by spaces, but quotes and + backslashes can override this. See + 'distutils.util.split_quoted()'.) + unknown executable '%s' for class %s_find_macrodefn_check_macro_definitionsdefinitionsEnsures that every element of 'definitions' is a valid macro + definition, ie. either (name,value) 2-tuple or a (name,) tuple. Do + nothing if all definitions are OK, raise TypeError otherwise. + invalid macro definition '%s': must be tuple (string,), (string, string), or (string, None)define_macroDefine a preprocessor macro for all compilations driven by this + compiler object. The optional parameter 'value' should be a + string; if it is not supplied, then the macro will be defined + without an explicit value and the exact outcome depends on the + compiler used (XXX true? does ANSI say anything about this?) + undefine_macroUndefine a preprocessor macro for all compilations driven by + this compiler object. If the same macro is defined by + 'define_macro()' and undefined by 'undefine_macro()' the last call + takes precedence (including multiple redefinitions or + undefinitions). If the macro is redefined/undefined on a + per-compilation basis (ie. in the call to 'compile()'), then that + takes precedence. + undefnadd_include_dirAdd 'dir' to the list of directories that will be searched for + header files. The compiler is instructed to search directories in + the order in which they are supplied by successive calls to + 'add_include_dir()'. + set_include_dirsdirsSet the list of directories that will be searched to 'dirs' (a + list of strings). Overrides any preceding calls to + 'add_include_dir()'; subsequence calls to 'add_include_dir()' add + to the list passed to 'set_include_dirs()'. This does not affect + any list of standard include directories that the compiler may + search by default. + add_libraryAdd 'libname' to the list of libraries that will be included in + all links driven by this compiler object. Note that 'libname' + should *not* be the name of a file containing a library, but the + name of the library itself: the actual filename will be inferred by + the linker, the compiler, or the compiler class (depending on the + platform). + + The linker will be instructed to link against libraries in the + order they were supplied to 'add_library()' and/or + 'set_libraries()'. It is perfectly valid to duplicate library + names; the linker will be instructed to link against libraries as + many times as they are mentioned. + set_librariesSet the list of libraries to be included in all links driven by + this compiler object to 'libnames' (a list of strings). This does + not affect any standard system libraries that the linker may + include by default. + add_library_dirAdd 'dir' to the list of directories that will be searched for + libraries specified to 'add_library()' and 'set_libraries()'. The + linker will be instructed to search for libraries in the order they + are supplied to 'add_library_dir()' and/or 'set_library_dirs()'. 
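The CCompiler bookkeeping methods described above (define_macro, undefine_macro, add_include_dir, add_library, add_library_dir, ...) set state that applies to every subsequent compile and link step. A sketch of typical configuration; note that distutils is deprecated (removed in Python 3.12) and the macro names, directories, and library names below are placeholders.

from distutils.ccompiler import new_compiler

cc = new_compiler()
cc.define_macro("NDEBUG")            # -DNDEBUG
cc.define_macro("VERSION", "1.2")    # -DVERSION=1.2
cc.undefine_macro("DEBUG")           # -UDEBUG
cc.add_include_dir("include")        # header search path, in call order
cc.add_library("m")                  # link against libm in every link
cc.add_library_dir("build/lib")      # where bare library names are resolved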
+ set_library_dirsSet the list of library search directories to 'dirs' (a list of + strings). This does not affect any standard library search path + that the linker may search by default. + add_runtime_library_dirAdd 'dir' to the list of directories that will be searched for + shared libraries at runtime. + set_runtime_library_dirsSet the list of directories to search for shared libraries at + runtime to 'dirs' (a list of strings). This does not affect any + standard search path that the runtime linker may search by + default. + add_link_objectAdd 'object' to the list of object files (or analogues, such as + explicitly named library files or the output of "resource + compilers") to be included in every link driven by this compiler + object. + set_link_objectsSet the list of object files (or analogues) to be included in + every link to 'objects'. This does not affect any standard object + files that the linker may include by default (such as system + libraries). + _setup_compileoutdirincdirssourcesdependsProcess arguments and decide which source files to compile.'output_dir' must be a string or None'macros' (if supplied) must be a list of tuples'include_dirs' (if supplied) must be a list of stringsobject_filenamesstrip_dirgen_preprocess_optionspp_optsbuildsrc_get_cc_args-g_fix_compile_argsTypecheck and fix-up some of the arguments to the 'compile()' + method, and return fixed-up values. Specifically: if 'output_dir' + is None, replaces it with 'self.output_dir'; ensures that 'macros' + is a list, and augments it with 'self.macros'; ensures that + 'include_dirs' is a list, and augments it with 'self.include_dirs'. + Guarantees that the returned values are of the correct type, + i.e. for 'output_dir' either string or None, and for 'macros' and + 'include_dirs' either list or None. + _prep_compileDecide which souce files must be recompiled. + + Determine the list of object files corresponding to 'sources', + and figure out which ones really need to be recompiled. + Return a list of all object files and a dictionary telling + which source files can be skipped. + _fix_object_argsTypecheck and fix up some arguments supplied to various methods. + Specifically: ensure that 'objects' is a list; if output_dir is + None, replace with self.output_dir. Return fixed versions of + 'objects' and 'output_dir'. + 'objects' must be a list or tuple of strings_fix_lib_argsTypecheck and fix up some of the arguments supplied to the + 'link_*' methods. Specifically: ensure that all arguments are + lists, and augment them with their permanent versions + (eg. 'self.libraries' augments 'libraries'). Return a tuple with + fixed versions of all arguments. + 'libraries' (if supplied) must be a list of strings'library_dirs' (if supplied) must be a list of strings'runtime_library_dirs' (if supplied) must be a list of strings"'runtime_library_dirs' (if supplied) ""must be a list of strings"_need_linkoutput_fileReturn true if we need to relink the files listed in 'objects' + to recreate 'output_file'. + newerdetect_languageDetect the language of a given file, or list of files. Uses + language_map, and language_order to do the job. + extlangextindexpreprocessextra_preargsextra_postargsPreprocess a single C/C++ source file, named in 'source'. + Output will be written to file named 'output_file', or stdout if + 'output_file' not supplied. 'macros' is a list of macro + definitions as for 'compile()', which will augment the macros set + with 'define_macro()' and 'undefine_macro()'. 
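detect_language(), mentioned above, maps source filenames through language_map and uses language_order to decide precedence when source types are mixed (C++ outranks C, for example). A tiny sketch; the filenames are hypothetical.

from distutils.ccompiler import new_compiler

cc = new_compiler()
print(cc.detect_language(["foo.c", "bar.cpp"]))   # 'c++' (C++ wins over C)
print(cc.detect_language("foo.m"))                # 'objc'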
'include_dirs' is a + list of directory names that will be added to the default list. + + Raises PreprocessError on failure. + Compile one or more source files. + + 'sources' must be a list of filenames, most likely C/C++ + files, but in reality anything that can be handled by a + particular compiler and compiler class (eg. MSVCCompiler can + handle resource files in 'sources'). Return a list of object + filenames, one per source filename in 'sources'. Depending on + the implementation, not all source files will necessarily be + compiled, but all corresponding object filenames will be + returned. + + If 'output_dir' is given, object files will be put under it, while + retaining their original path component. That is, "foo/bar.c" + normally compiles to "foo/bar.o" (for a Unix implementation); if + 'output_dir' is "build", then it would compile to + "build/foo/bar.o". + + 'macros', if given, must be a list of macro definitions. A macro + definition is either a (name, value) 2-tuple or a (name,) 1-tuple. + The former defines a macro; if the value is None, the macro is + defined without an explicit value. The 1-tuple case undefines a + macro. Later definitions/redefinitions/ undefinitions take + precedence. + + 'include_dirs', if given, must be a list of strings, the + directories to add to the default include file search path for this + compilation only. + + 'debug' is a boolean; if true, the compiler will be instructed to + output debug symbols in (or alongside) the object file(s). + + 'extra_preargs' and 'extra_postargs' are implementation- dependent. + On platforms that have the notion of a command-line (e.g. Unix, + DOS/Windows), they are most likely lists of strings: extra + command-line arguments to prepend/append to the compiler command + line. On other platforms, consult the implementation class + documentation. In any event, they are intended as an escape hatch + for those occasions when the abstract compiler framework doesn't + cut the mustard. + + 'depends', if given, is a list of filenames that all targets + depend on. If a source file is older than any file in + depends, then the source file will be recompiled. This + supports dependency tracking, but only at a coarse + granularity. + + Raises CompileError on failure. + _compileCompile 'src' to product 'obj'.create_static_liboutput_libnametarget_langLink a bunch of stuff together to create a static library file. + The "bunch of stuff" consists of the list of object files supplied + as 'objects', the extra object files supplied to + 'add_link_object()' and/or 'set_link_objects()', the libraries + supplied to 'add_library()' and/or 'set_libraries()', and the + libraries supplied as 'libraries' (if any). + + 'output_libname' should be a library name, not a filename; the + filename will be inferred from the library name. 'output_dir' is + the directory where the library file will be put. + + 'debug' is a boolean; if true, debugging information will be + included in the library (note that on most platforms, it is the + compile step where this matters: the 'debug' flag is included here + just for consistency). + + 'target_lang' is the target language for which the given objects + are being compiled. This allows specific linkage time treatment of + certain languages. + + Raises LibError on failure. + shared_objectSHARED_OBJECTshared_librarySHARED_LIBRARYEXECUTABLEtarget_descoutput_filenameexport_symbolsbuild_tempLink a bunch of stuff together to create an executable or + shared library file. 
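The compile() contract spelled out above (per-source object files under output_dir, 2-tuple macros to define, 1-tuple macros to undefine) pairs naturally with create_static_lib(). A hedged sketch assuming a working C toolchain on a Unix-like platform; 'src/foo.c' and the directories are placeholders.

from distutils.ccompiler import new_compiler
from distutils.sysconfig import customize_compiler

cc = new_compiler()
customize_compiler(cc)   # pick up CC/CFLAGS the way setup.py build does
# "src/foo.c" keeps its path component: it compiles to build/src/foo.o
# with a Unix-style compiler class.
objects = cc.compile(["src/foo.c"],
                     output_dir="build",
                     macros=[("NDEBUG", None), ("DEBUG",)],  # define / undefine
                     include_dirs=["include"])
cc.create_static_lib(objects, "foo", output_dir="build")
# -> build/libfoo.a on Unix-like platforms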
+ + The "bunch of stuff" consists of the list of object files supplied + as 'objects'. 'output_filename' should be a filename. If + 'output_dir' is supplied, 'output_filename' is relative to it + (i.e. 'output_filename' can provide directory components if + needed). + + 'libraries' is a list of libraries to link against. These are + library names, not filenames, since they're translated into + filenames in a platform-specific way (eg. "foo" becomes "libfoo.a" + on Unix and "foo.lib" on DOS/Windows). However, they can include a + directory component, which means the linker will look in that + specific directory rather than searching all the normal locations. + + 'library_dirs', if supplied, should be a list of directories to + search for libraries that were specified as bare library names + (ie. no directory component). These are on top of the system + default and those supplied to 'add_library_dir()' and/or + 'set_library_dirs()'. 'runtime_library_dirs' is a list of + directories that will be embedded into the shared library and used + to search for other shared libraries that *it* depends on at + run-time. (This may only be relevant on Unix.) + + 'export_symbols' is a list of symbols that the shared library will + export. (This appears to be relevant only on Windows.) + + 'debug' is as for 'compile()' and 'create_static_lib()', with the + slight distinction that it actually matters on most platforms (as + opposed to 'create_static_lib()', which includes a 'debug' flag + mostly for form's sake). + + 'extra_preargs' and 'extra_postargs' are as for 'compile()' (except + of course that they supply command-line arguments for the + particular linker being used). + + 'target_lang' is the target language for which the given objects + are being compiled. This allows specific linkage time treatment of + certain languages. + + Raises LinkError on failure. + link_shared_liblibrary_filenamelib_typelink_shared_objectlink_executableoutput_prognameexecutable_filenamelibrary_dir_optionReturn the compiler option to add 'dir' to the list of + directories searched for libraries. + runtime_library_dir_optionReturn the compiler option to add 'dir' to the list of + directories searched for runtime libraries. + library_optionReturn the compiler option to add 'lib' to the list of libraries + linked into the shared library or executable. + has_functionReturn a boolean indicating whether funcname is supported on + the current platform. The optional arguments can be used to + augment the compilation environment. + fnamefdopenincl#include "%s" +int main (int argc, char **argv) { + %s(); + return 0; +} +CompileErrora.outLinkErrorfind_library_fileSearch the specified list of directories for a static or shared + library file 'lib' and return the full path to that file. If + 'debug' true, look for a debugging version (if that makes sense on + the current platform). Return None if 'lib' wasn't found in any of + the specified directories. + source_filenamesobj_namessrc_namesplitdriveUnknownFileErrorunknown file type '%s' (from '%s')shared_object_filenamestaticdylibxcode_stub'lib_type' must be "static", "shared", "dylib", or "xcode_stub"_lib_format_lib_extensionannouncedebug_printdistutils.debugwarning: %s +0o777cygwin.*unixmsvc_default_compilersget_default_compilerDetermine the default compiler to use for the given platform. + + osname should be one of the standard Python OS names (i.e. the + ones returned by os.name) and platform the common value + returned by sys.platform for the platform in question. 
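The link() and has_function() descriptions above take bare library names ("m", not "libm.a") and translate them per platform. A sketch under the assumption of a Unix-like toolchain; the object file and directories are placeholders.

from distutils.ccompiler import new_compiler
from distutils.sysconfig import customize_compiler

cc = new_compiler()
customize_compiler(cc)
if cc.has_function("pow", libraries=["m"]):
    print("libm provides pow()")

cc.link_shared_object(["build/foo.o"],
                      "build/foo.so",
                      libraries=["m"],
                      library_dirs=["/usr/local/lib"],
                      runtime_library_dirs=["/usr/local/lib"])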
+ + The default values are os.name and sys.platform in case the + parameters are not given. + unixccompilerUnixCCompilerstandard UNIX-style compiler_msvccompilerMSVCCompilerMicrosoft Visual C++cygwinccompilerCygwinCCompilerCygwin port of GNU C Compiler for Win32Mingw32CCompilerMingw32 port of GNU C Compiler for Win32mingw32bcppcompilerBCPPCompilerBorland C++ Compilerbcppcompiler_classshow_compilersPrint list of available compilers (used by the "--help-compiler" + options to "build", "build_ext", "build_clib"). + distutils.fancy_getoptFancyGetoptcompilerscompiler=pretty_printerList of available compilers:platGenerate an instance of some CCompiler subclass for the supplied + platform/compiler combination. 'plat' defaults to 'os.name' + (eg. 'posix', 'nt'), and 'compiler' defaults to the default compiler + for that platform. Currently only 'posix' and 'nt' are supported, and + the default compilers are "traditional Unix interface" (UnixCCompiler + class) and Visual C++ (MSVCCompiler class). Note that it's perfectly + possible to ask for a Unix compiler object under Windows, and a + Microsoft compiler object under Unix -- if you supply a value for + 'compiler', 'plat' is ignored. + class_namelong_descriptiondon't know how to compile C/C++ code on platform '%s' with '%s' compilerDistutilsPlatformErrordistutils.DistutilsModuleErrorcan't compile C/C++ code: unable to load module '%s'can't compile C/C++ code: unable to find class '%s' in module '%s'"can't compile C/C++ code: unable to find class '%s' ""in module '%s'"Generate C pre-processor options (-D, -U, -I) as used by at least + two types of compilers: the typical Unix compiler and Visual C++. + 'macros' is the usual thing, a list of 1- or 2-tuples, where (name,) + means undefine (-U) macro 'name', and (name,value) means define (-D) + macro 'name' to 'value'. 'include_dirs' is just a list of directory + names to be added to the header file search path (-I). Returns a list + of command-line options suitable for either Unix compilers or Visual + C++. + macrobad macro definition '%s': each element of 'macros' list must be a 1- or 2-tuple"bad macro definition '%s': ""each element of 'macros' list must be a 1- or 2-tuple"-U%s-D%s-D%s=%s-I%sgen_lib_optionsGenerate linker options for searching library directories and + linking with specific libraries. 'libraries' and 'library_dirs' are, + respectively, lists of library names (not filenames!) and search + directories. Returns a list of command-line options suitable for use + with some compiler (depending on the two format strings passed in). + lib_optslib_dirlib_namelib_fileno library file corresponding to '%s' found (skipping)"no library file corresponding to ""'%s' found (skipping)"# 'compiler_type' is a class attribute that identifies this class. It# keeps code that wants to know what kind of compiler it's dealing with# from having to import all possible compiler classes just to do an# 'isinstance'. In concrete CCompiler subclasses, 'compiler_type'# should really, really be one of the keys of the 'compiler_class'# dictionary (see below -- used by the 'new_compiler()' factory# function) -- authors of new compiler interface classes are# responsible for updating 'compiler_class'!# XXX things not handled by this compiler abstraction model:# * client can't provide additional options for a compiler,# e.g. warning, optimization, debugging flags. Perhaps this# should be the domain of concrete compiler abstraction classes# (UnixCCompiler, MSVCCompiler, etc.) 
-- or perhaps the base# class should have methods for the common ones.# * can't completely override the include or library searchg# path, ie. no "cc -I -Idir1 -Idir2" or "cc -L -Ldir1 -Ldir2".# I'm not sure how widely supported this is even by Unix# compilers, much less on other platforms. And I'm even less# sure how useful it is; maybe for cross-compiling, but# support for that is a ways off. (And anyways, cross# compilers probably have a dedicated binary with the# right paths compiled in. I hope.)# * can't do really freaky things with the library list/library# dirs, e.g. "-Ldir1 -lfoo -Ldir2 -lfoo" to link against# different versions of libfoo.a in different locations. I# think this is useless without the ability to null out the# library search path anyways.# Subclasses that rely on the standard filename generation methods# implemented below should override these; see the comment near# those methods ('object_filenames()' et. al.) for details:# list of strings# string# format string# prob. same as static_lib_format# Default language settings. language_map is used to detect a source# file or Extension target language, checking source filenames.# language_order is used to detect the language precedence, when deciding# what language to use when mixing source types. For example, if some# extension has two files with ".c" extension, and one with ".cpp", it# is still linked as c++.# 'output_dir': a common output directory for object, library,# shared object, and shared library files# 'macros': a list of macro definitions (or undefinitions). A# macro definition is a 2-tuple (name, value), where the value is# either a string or None (no explicit value). A macro# undefinition is a 1-tuple (name,).# 'include_dirs': a list of directories to search for include files# 'libraries': a list of libraries to include in any link# (library names, not filenames: eg. "foo" not "libfoo.a")# 'library_dirs': a list of directories to search for libraries# 'runtime_library_dirs': a list of directories to search for# shared libraries/objects at runtime# 'objects': a list of object files (or similar, such as explicitly# named library files) to include on any link# Note that some CCompiler implementation classes will define class# attributes 'cpp', 'cc', etc. with hard-coded executable names;# this is appropriate when a compiler class is for exactly one# compiler/OS combination (eg. MSVCCompiler). 
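The module-level helpers catalogued a little above — get_default_compiler(), new_compiler() (driven by the compiler_class map), and gen_preprocess_options() with its -D/-U/-I conventions — compose as follows. A sketch only; the macro names and include directories are invented for illustration.

from distutils.ccompiler import (new_compiler, get_default_compiler,
                                 gen_preprocess_options)

print(get_default_compiler())   # e.g. 'unix' on POSIX, 'msvc' on Windows
cc = new_compiler(verbose=1)    # class chosen via the compiler_class mapping

opts = gen_preprocess_options(
    [("NDEBUG", None), ("VERSION", "1.2"), ("DEBUG",)],
    ["include", "third_party/include"])
print(opts)
# ['-DNDEBUG', '-DVERSION=1.2', '-UDEBUG', '-Iinclude', '-Ithird_party/include']
print(cc.object_filenames(["src/foo.c"]))             # ['src/foo.o'] with the Unix class
print(cc.library_filename("foo", lib_type="shared"))  # 'libfoo.so' on Linux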
Other compiler# classes (UnixCCompiler, in particular) are driven by information# discovered at run-time, since there are many different ways to do# basically the same things with Unix C compilers.# -- Bookkeeping methods -------------------------------------------# Delete from the list of macro definitions/undefinitions if# already there (so that this one will take precedence).# -- Private utility methods --------------------------------------# (here for the convenience of subclasses)# Helper method to prep compiler in subclass compile() methods# Get the list of expected output (object) files# works for unixccompiler, cygwinccompiler# Return an empty dict for the "which source files can be skipped"# return value to preserve API compatibility.# -- Worker methods ------------------------------------------------# (must be implemented by subclasses)# A concrete compiler class can either override this method# entirely or implement _compile().# Return *all* object filenames, not just the ones we just built.# A concrete compiler class that does not override compile()# should implement _compile().# values for target_desc parameter in link()# Old 'link_*()' methods, rewritten to use the new 'link()' method.# -- Miscellaneous methods -----------------------------------------# These are all used by the 'gen_lib_options() function; there is# no appropriate default implementation so subclasses should# implement all of these.# this can't be included at module scope because it tries to# import math which might not be available at that point - maybe# the necessary logic should just be inlined?# -- Filename generation methods -----------------------------------# The default implementation of the filename generating methods are# prejudiced towards the Unix/DOS/Windows view of the world:# * object files are named by replacing the source file extension# (eg. .c/.cpp -> .o/.obj)# * library files (shared or static) are named by plugging the# library name and extension into a format string, eg.# "lib%s.%s" % (lib_name, ".a") for Unix static libraries# * executables are named by appending an extension (possibly# empty) to the program name: eg. progname + ".exe" for# Windows# To reduce redundant code, these methods expect to find# several attributes in the current object (presumably defined# as class attributes):# * src_extensions -# list of C/C++ source file extensions, eg. ['.c', '.cpp']# * obj_extension -# object file extension, eg. '.o' or '.obj'# * static_lib_extension -# extension for static library files, eg. '.a' or '.lib'# * shared_lib_extension -# extension for shared library/object files, eg. '.so', '.dll'# * static_lib_format -# format string for generating static library filenames,# eg. 'lib%s.%s' or '%s.%s'# * shared_lib_format# format string for generating shared library filenames# (probably same as static_lib_format, since the extension# is one of the intended parameters to the format string)# * exe_extension -# extension for executable files, eg. '' or '.exe'# Chop off the drive# If abs, chop off leading /# or 'shared'# -- Utility methods -----------------------------------------------# Map a sys.platform/os.name ('posix', 'nt') to the default compiler# type for that platform. Keys are interpreted as re match# patterns. Order is important; platform mappings are preferred over# OS names.# Platform string mappings# on a cygwin built python we can use gcc like an ordinary UNIXish# compiler# OS name mappings# Default to Unix compiler# Map compiler types to (module_name, class_name) pairs -- ie. 
where to# find the code that implements an interface to this compiler. (The module# is assumed to be in the 'distutils' package.)# XXX this "knows" that the compiler option it's describing is# "--compiler", which just happens to be the case for the three# commands that use it.# XXX The None is necessary to preserve backwards compatibility# with classes that expect verbose to be the first positional# argument.# XXX it would be nice (mainly aesthetic, and so we don't generate# stupid-looking command lines) to go over 'macros' and eliminate# redundant definitions/undefinitions (ie. ensure that only the# latest mention of a particular macro winds up on the command# line). I don't think it's essential, though, since most (all?)# Unix C compilers only pay attention to the latest -D or -U# mention of a macro on their command line. Similar situation for# 'include_dirs'. I'm punting on both for now. Anyways, weeding out# redundancies like this should probably be the province of# CCompiler, since the data structures used are inherited from it# and therefore common to all CCompiler classes.# undefine this macro# define with no explicit value# XXX *don't* need to be clever about quoting the# macro value here, because we're going to avoid the# shell at all costs when we spawn the command!# XXX it's important that we *not* remove redundant library mentions!# sometimes you really do have to say "-lfoo -lbar -lfoo" in order to# resolve all symbols. I just hope we never have to say "-lfoo obj.o# -lbar" to get things to work -- that's certainly a possibility, but a# pretty nasty way to arrange your C code.b'distutils.ccompiler + +Contains CCompiler, an abstract base class that defines the interface +for the Distutils compiler abstraction model.'u'distutils.ccompiler + +Contains CCompiler, an abstract base class that defines the interface +for the Distutils compiler abstraction model.'b'Abstract base class to define the interface that must be implemented + by real compiler classes. Also has some utility methods used by + several compiler classes. + + The basic idea behind a compiler abstraction class is that each + instance can be used for all the compile/link steps in building a + single project. Thus, attributes common to all of those compile and + link steps -- include directories, macros to define, libraries to link + against, etc. -- are attributes of the compiler instance. To allow for + variability in how individual files are treated, most of those + attributes may be varied on a per-compilation or per-link basis. + 'u'Abstract base class to define the interface that must be implemented + by real compiler classes. Also has some utility methods used by + several compiler classes. + + The basic idea behind a compiler abstraction class is that each + instance can be used for all the compile/link steps in building a + single project. Thus, attributes common to all of those compile and + link steps -- include directories, macros to define, libraries to link + against, etc. -- are attributes of the compiler instance. To allow for + variability in how individual files are treated, most of those + attributes may be varied on a per-compilation or per-link basis. + 'b'.c'u'.c'b'c++'u'c++'b'.cc'u'.cc'b'.cpp'u'.cpp'b'.cxx'u'.cxx'b'objc'u'objc'b'.m'u'.m'b'Define the executables (and options for them) that will be run + to perform the various stages of compilation. 
The exact set of + executables that may be specified here depends on the compiler + class (via the 'executables' class attribute), but most will have: + compiler the C/C++ compiler + linker_so linker used to create shared objects and libraries + linker_exe linker used to create binary executables + archiver static library creator + + On platforms with a command-line (Unix, DOS/Windows), each of these + is a string that will be split into executable name and (optional) + list of arguments. (Splitting the string is done similarly to how + Unix shells operate: words are delimited by spaces, but quotes and + backslashes can override this. See + 'distutils.util.split_quoted()'.) + 'u'Define the executables (and options for them) that will be run + to perform the various stages of compilation. The exact set of + executables that may be specified here depends on the compiler + class (via the 'executables' class attribute), but most will have: + compiler the C/C++ compiler + linker_so linker used to create shared objects and libraries + linker_exe linker used to create binary executables + archiver static library creator + + On platforms with a command-line (Unix, DOS/Windows), each of these + is a string that will be split into executable name and (optional) + list of arguments. (Splitting the string is done similarly to how + Unix shells operate: words are delimited by spaces, but quotes and + backslashes can override this. See + 'distutils.util.split_quoted()'.) + 'b'unknown executable '%s' for class %s'u'unknown executable '%s' for class %s'b'Ensures that every element of 'definitions' is a valid macro + definition, ie. either (name,value) 2-tuple or a (name,) tuple. Do + nothing if all definitions are OK, raise TypeError otherwise. + 'u'Ensures that every element of 'definitions' is a valid macro + definition, ie. either (name,value) 2-tuple or a (name,) tuple. Do + nothing if all definitions are OK, raise TypeError otherwise. + 'b'invalid macro definition '%s': 'u'invalid macro definition '%s': 'b'must be tuple (string,), (string, string), or 'u'must be tuple (string,), (string, string), or 'b'(string, None)'u'(string, None)'b'Define a preprocessor macro for all compilations driven by this + compiler object. The optional parameter 'value' should be a + string; if it is not supplied, then the macro will be defined + without an explicit value and the exact outcome depends on the + compiler used (XXX true? does ANSI say anything about this?) + 'u'Define a preprocessor macro for all compilations driven by this + compiler object. The optional parameter 'value' should be a + string; if it is not supplied, then the macro will be defined + without an explicit value and the exact outcome depends on the + compiler used (XXX true? does ANSI say anything about this?) + 'b'Undefine a preprocessor macro for all compilations driven by + this compiler object. If the same macro is defined by + 'define_macro()' and undefined by 'undefine_macro()' the last call + takes precedence (including multiple redefinitions or + undefinitions). If the macro is redefined/undefined on a + per-compilation basis (ie. in the call to 'compile()'), then that + takes precedence. + 'u'Undefine a preprocessor macro for all compilations driven by + this compiler object. If the same macro is defined by + 'define_macro()' and undefined by 'undefine_macro()' the last call + takes precedence (including multiple redefinitions or + undefinitions). If the macro is redefined/undefined on a + per-compilation basis (ie. 
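The set_executables() docstring above says each command string is split into an executable name plus arguments the way a Unix shell would split it, via distutils.util.split_quoted(). A quick sketch of that splitting; the command line itself is invented:

from distutils.util import split_quoted

print(split_quoted('gcc -pthread "-DMSG=hello world" -O2'))
# -> ['gcc', '-pthread', '-DMSG=hello world', '-O2']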
in the call to 'compile()'), then that + takes precedence. + 'b'Add 'dir' to the list of directories that will be searched for + header files. The compiler is instructed to search directories in + the order in which they are supplied by successive calls to + 'add_include_dir()'. + 'u'Add 'dir' to the list of directories that will be searched for + header files. The compiler is instructed to search directories in + the order in which they are supplied by successive calls to + 'add_include_dir()'. + 'b'Set the list of directories that will be searched to 'dirs' (a + list of strings). Overrides any preceding calls to + 'add_include_dir()'; subsequence calls to 'add_include_dir()' add + to the list passed to 'set_include_dirs()'. This does not affect + any list of standard include directories that the compiler may + search by default. + 'u'Set the list of directories that will be searched to 'dirs' (a + list of strings). Overrides any preceding calls to + 'add_include_dir()'; subsequence calls to 'add_include_dir()' add + to the list passed to 'set_include_dirs()'. This does not affect + any list of standard include directories that the compiler may + search by default. + 'b'Add 'libname' to the list of libraries that will be included in + all links driven by this compiler object. Note that 'libname' + should *not* be the name of a file containing a library, but the + name of the library itself: the actual filename will be inferred by + the linker, the compiler, or the compiler class (depending on the + platform). + + The linker will be instructed to link against libraries in the + order they were supplied to 'add_library()' and/or + 'set_libraries()'. It is perfectly valid to duplicate library + names; the linker will be instructed to link against libraries as + many times as they are mentioned. + 'u'Add 'libname' to the list of libraries that will be included in + all links driven by this compiler object. Note that 'libname' + should *not* be the name of a file containing a library, but the + name of the library itself: the actual filename will be inferred by + the linker, the compiler, or the compiler class (depending on the + platform). + + The linker will be instructed to link against libraries in the + order they were supplied to 'add_library()' and/or + 'set_libraries()'. It is perfectly valid to duplicate library + names; the linker will be instructed to link against libraries as + many times as they are mentioned. + 'b'Set the list of libraries to be included in all links driven by + this compiler object to 'libnames' (a list of strings). This does + not affect any standard system libraries that the linker may + include by default. + 'u'Set the list of libraries to be included in all links driven by + this compiler object to 'libnames' (a list of strings). This does + not affect any standard system libraries that the linker may + include by default. + 'b'Add 'dir' to the list of directories that will be searched for + libraries specified to 'add_library()' and 'set_libraries()'. The + linker will be instructed to search for libraries in the order they + are supplied to 'add_library_dir()' and/or 'set_library_dirs()'. + 'u'Add 'dir' to the list of directories that will be searched for + libraries specified to 'add_library()' and 'set_libraries()'. The + linker will be instructed to search for libraries in the order they + are supplied to 'add_library_dir()' and/or 'set_library_dirs()'. + 'b'Set the list of library search directories to 'dirs' (a list of + strings). 
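A minimal sketch of the per-instance bookkeeping methods documented above: macro definitions where the last mention wins, include directories searched in the order added, and libraries given by name rather than filename. Directory and library names here are placeholders.

from distutils.ccompiler import new_compiler

cc = new_compiler()
cc.define_macro('NDEBUG')              # defined with no explicit value
cc.define_macro('VERSION', '"1.0"')    # (name, value) pair
cc.undefine_macro('DEBUG')             # later calls take precedence
cc.add_include_dir('include')          # searched in the order supplied
cc.add_library('m')                    # a library *name*, not a filename
cc.add_library_dir('/usr/local/lib')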
This does not affect any standard library search path + that the linker may search by default. + 'u'Set the list of library search directories to 'dirs' (a list of + strings). This does not affect any standard library search path + that the linker may search by default. + 'b'Add 'dir' to the list of directories that will be searched for + shared libraries at runtime. + 'u'Add 'dir' to the list of directories that will be searched for + shared libraries at runtime. + 'b'Set the list of directories to search for shared libraries at + runtime to 'dirs' (a list of strings). This does not affect any + standard search path that the runtime linker may search by + default. + 'u'Set the list of directories to search for shared libraries at + runtime to 'dirs' (a list of strings). This does not affect any + standard search path that the runtime linker may search by + default. + 'b'Add 'object' to the list of object files (or analogues, such as + explicitly named library files or the output of "resource + compilers") to be included in every link driven by this compiler + object. + 'u'Add 'object' to the list of object files (or analogues, such as + explicitly named library files or the output of "resource + compilers") to be included in every link driven by this compiler + object. + 'b'Set the list of object files (or analogues) to be included in + every link to 'objects'. This does not affect any standard object + files that the linker may include by default (such as system + libraries). + 'u'Set the list of object files (or analogues) to be included in + every link to 'objects'. This does not affect any standard object + files that the linker may include by default (such as system + libraries). + 'b'Process arguments and decide which source files to compile.'u'Process arguments and decide which source files to compile.'b''output_dir' must be a string or None'u''output_dir' must be a string or None'b''macros' (if supplied) must be a list of tuples'u''macros' (if supplied) must be a list of tuples'b''include_dirs' (if supplied) must be a list of strings'u''include_dirs' (if supplied) must be a list of strings'b'-g'u'-g'b'Typecheck and fix-up some of the arguments to the 'compile()' + method, and return fixed-up values. Specifically: if 'output_dir' + is None, replaces it with 'self.output_dir'; ensures that 'macros' + is a list, and augments it with 'self.macros'; ensures that + 'include_dirs' is a list, and augments it with 'self.include_dirs'. + Guarantees that the returned values are of the correct type, + i.e. for 'output_dir' either string or None, and for 'macros' and + 'include_dirs' either list or None. + 'u'Typecheck and fix-up some of the arguments to the 'compile()' + method, and return fixed-up values. Specifically: if 'output_dir' + is None, replaces it with 'self.output_dir'; ensures that 'macros' + is a list, and augments it with 'self.macros'; ensures that + 'include_dirs' is a list, and augments it with 'self.include_dirs'. + Guarantees that the returned values are of the correct type, + i.e. for 'output_dir' either string or None, and for 'macros' and + 'include_dirs' either list or None. + 'b'Decide which souce files must be recompiled. + + Determine the list of object files corresponding to 'sources', + and figure out which ones really need to be recompiled. + Return a list of all object files and a dictionary telling + which source files can be skipped. + 'u'Decide which souce files must be recompiled. 
+ + Determine the list of object files corresponding to 'sources', + and figure out which ones really need to be recompiled. + Return a list of all object files and a dictionary telling + which source files can be skipped. + 'b'Typecheck and fix up some arguments supplied to various methods. + Specifically: ensure that 'objects' is a list; if output_dir is + None, replace with self.output_dir. Return fixed versions of + 'objects' and 'output_dir'. + 'u'Typecheck and fix up some arguments supplied to various methods. + Specifically: ensure that 'objects' is a list; if output_dir is + None, replace with self.output_dir. Return fixed versions of + 'objects' and 'output_dir'. + 'b''objects' must be a list or tuple of strings'u''objects' must be a list or tuple of strings'b'Typecheck and fix up some of the arguments supplied to the + 'link_*' methods. Specifically: ensure that all arguments are + lists, and augment them with their permanent versions + (eg. 'self.libraries' augments 'libraries'). Return a tuple with + fixed versions of all arguments. + 'u'Typecheck and fix up some of the arguments supplied to the + 'link_*' methods. Specifically: ensure that all arguments are + lists, and augment them with their permanent versions + (eg. 'self.libraries' augments 'libraries'). Return a tuple with + fixed versions of all arguments. + 'b''libraries' (if supplied) must be a list of strings'u''libraries' (if supplied) must be a list of strings'b''library_dirs' (if supplied) must be a list of strings'u''library_dirs' (if supplied) must be a list of strings'b''runtime_library_dirs' (if supplied) must be a list of strings'u''runtime_library_dirs' (if supplied) must be a list of strings'b'Return true if we need to relink the files listed in 'objects' + to recreate 'output_file'. + 'u'Return true if we need to relink the files listed in 'objects' + to recreate 'output_file'. + 'b'newer'u'newer'b'Detect the language of a given file, or list of files. Uses + language_map, and language_order to do the job. + 'u'Detect the language of a given file, or list of files. Uses + language_map, and language_order to do the job. + 'b'Preprocess a single C/C++ source file, named in 'source'. + Output will be written to file named 'output_file', or stdout if + 'output_file' not supplied. 'macros' is a list of macro + definitions as for 'compile()', which will augment the macros set + with 'define_macro()' and 'undefine_macro()'. 'include_dirs' is a + list of directory names that will be added to the default list. + + Raises PreprocessError on failure. + 'u'Preprocess a single C/C++ source file, named in 'source'. + Output will be written to file named 'output_file', or stdout if + 'output_file' not supplied. 'macros' is a list of macro + definitions as for 'compile()', which will augment the macros set + with 'define_macro()' and 'undefine_macro()'. 'include_dirs' is a + list of directory names that will be added to the default list. + + Raises PreprocessError on failure. + 'b'Compile one or more source files. + + 'sources' must be a list of filenames, most likely C/C++ + files, but in reality anything that can be handled by a + particular compiler and compiler class (eg. MSVCCompiler can + handle resource files in 'sources'). Return a list of object + filenames, one per source filename in 'sources'. Depending on + the implementation, not all source files will necessarily be + compiled, but all corresponding object filenames will be + returned. 
+ + If 'output_dir' is given, object files will be put under it, while + retaining their original path component. That is, "foo/bar.c" + normally compiles to "foo/bar.o" (for a Unix implementation); if + 'output_dir' is "build", then it would compile to + "build/foo/bar.o". + + 'macros', if given, must be a list of macro definitions. A macro + definition is either a (name, value) 2-tuple or a (name,) 1-tuple. + The former defines a macro; if the value is None, the macro is + defined without an explicit value. The 1-tuple case undefines a + macro. Later definitions/redefinitions/ undefinitions take + precedence. + + 'include_dirs', if given, must be a list of strings, the + directories to add to the default include file search path for this + compilation only. + + 'debug' is a boolean; if true, the compiler will be instructed to + output debug symbols in (or alongside) the object file(s). + + 'extra_preargs' and 'extra_postargs' are implementation- dependent. + On platforms that have the notion of a command-line (e.g. Unix, + DOS/Windows), they are most likely lists of strings: extra + command-line arguments to prepend/append to the compiler command + line. On other platforms, consult the implementation class + documentation. In any event, they are intended as an escape hatch + for those occasions when the abstract compiler framework doesn't + cut the mustard. + + 'depends', if given, is a list of filenames that all targets + depend on. If a source file is older than any file in + depends, then the source file will be recompiled. This + supports dependency tracking, but only at a coarse + granularity. + + Raises CompileError on failure. + 'u'Compile one or more source files. + + 'sources' must be a list of filenames, most likely C/C++ + files, but in reality anything that can be handled by a + particular compiler and compiler class (eg. MSVCCompiler can + handle resource files in 'sources'). Return a list of object + filenames, one per source filename in 'sources'. Depending on + the implementation, not all source files will necessarily be + compiled, but all corresponding object filenames will be + returned. + + If 'output_dir' is given, object files will be put under it, while + retaining their original path component. That is, "foo/bar.c" + normally compiles to "foo/bar.o" (for a Unix implementation); if + 'output_dir' is "build", then it would compile to + "build/foo/bar.o". + + 'macros', if given, must be a list of macro definitions. A macro + definition is either a (name, value) 2-tuple or a (name,) 1-tuple. + The former defines a macro; if the value is None, the macro is + defined without an explicit value. The 1-tuple case undefines a + macro. Later definitions/redefinitions/ undefinitions take + precedence. + + 'include_dirs', if given, must be a list of strings, the + directories to add to the default include file search path for this + compilation only. + + 'debug' is a boolean; if true, the compiler will be instructed to + output debug symbols in (or alongside) the object file(s). + + 'extra_preargs' and 'extra_postargs' are implementation- dependent. + On platforms that have the notion of a command-line (e.g. Unix, + DOS/Windows), they are most likely lists of strings: extra + command-line arguments to prepend/append to the compiler command + line. On other platforms, consult the implementation class + documentation. In any event, they are intended as an escape hatch + for those occasions when the abstract compiler framework doesn't + cut the mustard. 
+ + 'depends', if given, is a list of filenames that all targets + depend on. If a source file is older than any file in + depends, then the source file will be recompiled. This + supports dependency tracking, but only at a coarse + granularity. + + Raises CompileError on failure. + 'b'Compile 'src' to product 'obj'.'u'Compile 'src' to product 'obj'.'b'Link a bunch of stuff together to create a static library file. + The "bunch of stuff" consists of the list of object files supplied + as 'objects', the extra object files supplied to + 'add_link_object()' and/or 'set_link_objects()', the libraries + supplied to 'add_library()' and/or 'set_libraries()', and the + libraries supplied as 'libraries' (if any). + + 'output_libname' should be a library name, not a filename; the + filename will be inferred from the library name. 'output_dir' is + the directory where the library file will be put. + + 'debug' is a boolean; if true, debugging information will be + included in the library (note that on most platforms, it is the + compile step where this matters: the 'debug' flag is included here + just for consistency). + + 'target_lang' is the target language for which the given objects + are being compiled. This allows specific linkage time treatment of + certain languages. + + Raises LibError on failure. + 'u'Link a bunch of stuff together to create a static library file. + The "bunch of stuff" consists of the list of object files supplied + as 'objects', the extra object files supplied to + 'add_link_object()' and/or 'set_link_objects()', the libraries + supplied to 'add_library()' and/or 'set_libraries()', and the + libraries supplied as 'libraries' (if any). + + 'output_libname' should be a library name, not a filename; the + filename will be inferred from the library name. 'output_dir' is + the directory where the library file will be put. + + 'debug' is a boolean; if true, debugging information will be + included in the library (note that on most platforms, it is the + compile step where this matters: the 'debug' flag is included here + just for consistency). + + 'target_lang' is the target language for which the given objects + are being compiled. This allows specific linkage time treatment of + certain languages. + + Raises LibError on failure. + 'b'shared_object'u'shared_object'b'shared_library'u'shared_library'b'executable'u'executable'b'Link a bunch of stuff together to create an executable or + shared library file. + + The "bunch of stuff" consists of the list of object files supplied + as 'objects'. 'output_filename' should be a filename. If + 'output_dir' is supplied, 'output_filename' is relative to it + (i.e. 'output_filename' can provide directory components if + needed). + + 'libraries' is a list of libraries to link against. These are + library names, not filenames, since they're translated into + filenames in a platform-specific way (eg. "foo" becomes "libfoo.a" + on Unix and "foo.lib" on DOS/Windows). However, they can include a + directory component, which means the linker will look in that + specific directory rather than searching all the normal locations. + + 'library_dirs', if supplied, should be a list of directories to + search for libraries that were specified as bare library names + (ie. no directory component). These are on top of the system + default and those supplied to 'add_library_dir()' and/or + 'set_library_dirs()'. 
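The compile() contract above, as a short sketch: a 2-tuple defines a macro (a None value means no explicit value), a 1-tuple undefines one, and object files land under output_dir with their path component kept. The source path is hypothetical and the calls only succeed where a C toolchain is installed.

from distutils.ccompiler import new_compiler

cc = new_compiler()
objects = cc.compile(
    ['src/util.c'],                         # hypothetical source file
    output_dir='build',                     # e.g. src/util.c -> build/src/util.o
    macros=[('NDEBUG', None), ('TRACE',)],  # define NDEBUG, undefine TRACE
    include_dirs=['include'],
    debug=True,
)
cc.create_static_lib(objects, 'util', output_dir='build')   # e.g. build/libutil.a on Unix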
'runtime_library_dirs' is a list of + directories that will be embedded into the shared library and used + to search for other shared libraries that *it* depends on at + run-time. (This may only be relevant on Unix.) + + 'export_symbols' is a list of symbols that the shared library will + export. (This appears to be relevant only on Windows.) + + 'debug' is as for 'compile()' and 'create_static_lib()', with the + slight distinction that it actually matters on most platforms (as + opposed to 'create_static_lib()', which includes a 'debug' flag + mostly for form's sake). + + 'extra_preargs' and 'extra_postargs' are as for 'compile()' (except + of course that they supply command-line arguments for the + particular linker being used). + + 'target_lang' is the target language for which the given objects + are being compiled. This allows specific linkage time treatment of + certain languages. + + Raises LinkError on failure. + 'u'Link a bunch of stuff together to create an executable or + shared library file. + + The "bunch of stuff" consists of the list of object files supplied + as 'objects'. 'output_filename' should be a filename. If + 'output_dir' is supplied, 'output_filename' is relative to it + (i.e. 'output_filename' can provide directory components if + needed). + + 'libraries' is a list of libraries to link against. These are + library names, not filenames, since they're translated into + filenames in a platform-specific way (eg. "foo" becomes "libfoo.a" + on Unix and "foo.lib" on DOS/Windows). However, they can include a + directory component, which means the linker will look in that + specific directory rather than searching all the normal locations. + + 'library_dirs', if supplied, should be a list of directories to + search for libraries that were specified as bare library names + (ie. no directory component). These are on top of the system + default and those supplied to 'add_library_dir()' and/or + 'set_library_dirs()'. 'runtime_library_dirs' is a list of + directories that will be embedded into the shared library and used + to search for other shared libraries that *it* depends on at + run-time. (This may only be relevant on Unix.) + + 'export_symbols' is a list of symbols that the shared library will + export. (This appears to be relevant only on Windows.) + + 'debug' is as for 'compile()' and 'create_static_lib()', with the + slight distinction that it actually matters on most platforms (as + opposed to 'create_static_lib()', which includes a 'debug' flag + mostly for form's sake). + + 'extra_preargs' and 'extra_postargs' are as for 'compile()' (except + of course that they supply command-line arguments for the + particular linker being used). + + 'target_lang' is the target language for which the given objects + are being compiled. This allows specific linkage time treatment of + certain languages. + + Raises LinkError on failure. + 'b'shared'u'shared'b'Return the compiler option to add 'dir' to the list of + directories searched for libraries. + 'u'Return the compiler option to add 'dir' to the list of + directories searched for libraries. + 'b'Return the compiler option to add 'dir' to the list of + directories searched for runtime libraries. + 'u'Return the compiler option to add 'dir' to the list of + directories searched for runtime libraries. + 'b'Return the compiler option to add 'lib' to the list of libraries + linked into the shared library or executable. 
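A sketch of the link() interface described above, using the link_shared_object() convenience wrapper (which passes CCompiler.SHARED_OBJECT to link()). File and library names are illustrative, and as with compile() this only runs where a toolchain is available.

from distutils.ccompiler import new_compiler

cc = new_compiler()
objects = cc.compile(['src/plugin.c'], output_dir='build')   # hypothetical source
cc.link_shared_object(
    objects, 'plugin.so', output_dir='build',
    libraries=['m'],                          # names, not filenames: -lm on Unix
    library_dirs=['/usr/local/lib'],          # searched in addition to the defaults
    runtime_library_dirs=['/usr/local/lib'],  # embedded into the shared object
)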
+ 'u'Return the compiler option to add 'lib' to the list of libraries + linked into the shared library or executable. + 'b'Return a boolean indicating whether funcname is supported on + the current platform. The optional arguments can be used to + augment the compilation environment. + 'u'Return a boolean indicating whether funcname is supported on + the current platform. The optional arguments can be used to + augment the compilation environment. + 'b'#include "%s" +'u'#include "%s" +'b'int main (int argc, char **argv) { + %s(); + return 0; +} +'u'int main (int argc, char **argv) { + %s(); + return 0; +} +'b'a.out'u'a.out'b'Search the specified list of directories for a static or shared + library file 'lib' and return the full path to that file. If + 'debug' true, look for a debugging version (if that makes sense on + the current platform). Return None if 'lib' wasn't found in any of + the specified directories. + 'u'Search the specified list of directories for a static or shared + library file 'lib' and return the full path to that file. If + 'debug' true, look for a debugging version (if that makes sense on + the current platform). Return None if 'lib' wasn't found in any of + the specified directories. + 'b'unknown file type '%s' (from '%s')'u'unknown file type '%s' (from '%s')'b'static'u'static'b'dylib'u'dylib'b'xcode_stub'u'xcode_stub'b''lib_type' must be "static", "shared", "dylib", or "xcode_stub"'u''lib_type' must be "static", "shared", "dylib", or "xcode_stub"'b'_lib_format'u'_lib_format'b'_lib_extension'u'_lib_extension'b'warning: %s +'u'warning: %s +'b'cygwin.*'u'cygwin.*'b'unix'u'unix'b'msvc'u'msvc'b'Determine the default compiler to use for the given platform. + + osname should be one of the standard Python OS names (i.e. the + ones returned by os.name) and platform the common value + returned by sys.platform for the platform in question. + + The default values are os.name and sys.platform in case the + parameters are not given. + 'u'Determine the default compiler to use for the given platform. + + osname should be one of the standard Python OS names (i.e. the + ones returned by os.name) and platform the common value + returned by sys.platform for the platform in question. + + The default values are os.name and sys.platform in case the + parameters are not given. + 'b'unixccompiler'u'unixccompiler'b'UnixCCompiler'u'UnixCCompiler'b'standard UNIX-style compiler'u'standard UNIX-style compiler'b'_msvccompiler'u'_msvccompiler'b'MSVCCompiler'u'MSVCCompiler'b'Microsoft Visual C++'u'Microsoft Visual C++'b'cygwinccompiler'u'cygwinccompiler'b'CygwinCCompiler'u'CygwinCCompiler'b'Cygwin port of GNU C Compiler for Win32'u'Cygwin port of GNU C Compiler for Win32'b'Mingw32CCompiler'u'Mingw32CCompiler'b'Mingw32 port of GNU C Compiler for Win32'u'Mingw32 port of GNU C Compiler for Win32'b'mingw32'u'mingw32'b'bcppcompiler'u'bcppcompiler'b'BCPPCompiler'u'BCPPCompiler'b'Borland C++ Compiler'u'Borland C++ Compiler'b'bcpp'u'bcpp'b'Print list of available compilers (used by the "--help-compiler" + options to "build", "build_ext", "build_clib"). + 'u'Print list of available compilers (used by the "--help-compiler" + options to "build", "build_ext", "build_clib"). + 'b'compiler='u'compiler='b'List of available compilers:'u'List of available compilers:'b'Generate an instance of some CCompiler subclass for the supplied + platform/compiler combination. 'plat' defaults to 'os.name' + (eg. 'posix', 'nt'), and 'compiler' defaults to the default compiler + for that platform. 
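The has_function() and find_library_file() utilities summarized above, in a minimal sketch; the function and library names are only examples and the results depend on the host system.

from distutils.ccompiler import new_compiler

cc = new_compiler()
if cc.has_function('clock_gettime', libraries=['rt']):   # compiles and links a tiny probe
    print('clock_gettime is available')

path = cc.find_library_file(['/usr/lib', '/usr/local/lib'], 'ssl')
print(path)    # full path to the library file, or None if not found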
Currently only 'posix' and 'nt' are supported, and + the default compilers are "traditional Unix interface" (UnixCCompiler + class) and Visual C++ (MSVCCompiler class). Note that it's perfectly + possible to ask for a Unix compiler object under Windows, and a + Microsoft compiler object under Unix -- if you supply a value for + 'compiler', 'plat' is ignored. + 'u'Generate an instance of some CCompiler subclass for the supplied + platform/compiler combination. 'plat' defaults to 'os.name' + (eg. 'posix', 'nt'), and 'compiler' defaults to the default compiler + for that platform. Currently only 'posix' and 'nt' are supported, and + the default compilers are "traditional Unix interface" (UnixCCompiler + class) and Visual C++ (MSVCCompiler class). Note that it's perfectly + possible to ask for a Unix compiler object under Windows, and a + Microsoft compiler object under Unix -- if you supply a value for + 'compiler', 'plat' is ignored. + 'b'don't know how to compile C/C++ code on platform '%s''u'don't know how to compile C/C++ code on platform '%s''b' with '%s' compiler'u' with '%s' compiler'b'distutils.'u'distutils.'b'can't compile C/C++ code: unable to load module '%s''u'can't compile C/C++ code: unable to load module '%s''b'can't compile C/C++ code: unable to find class '%s' in module '%s''u'can't compile C/C++ code: unable to find class '%s' in module '%s''b'Generate C pre-processor options (-D, -U, -I) as used by at least + two types of compilers: the typical Unix compiler and Visual C++. + 'macros' is the usual thing, a list of 1- or 2-tuples, where (name,) + means undefine (-U) macro 'name', and (name,value) means define (-D) + macro 'name' to 'value'. 'include_dirs' is just a list of directory + names to be added to the header file search path (-I). Returns a list + of command-line options suitable for either Unix compilers or Visual + C++. + 'u'Generate C pre-processor options (-D, -U, -I) as used by at least + two types of compilers: the typical Unix compiler and Visual C++. + 'macros' is the usual thing, a list of 1- or 2-tuples, where (name,) + means undefine (-U) macro 'name', and (name,value) means define (-D) + macro 'name' to 'value'. 'include_dirs' is just a list of directory + names to be added to the header file search path (-I). Returns a list + of command-line options suitable for either Unix compilers or Visual + C++. + 'b'bad macro definition '%s': each element of 'macros' list must be a 1- or 2-tuple'u'bad macro definition '%s': each element of 'macros' list must be a 1- or 2-tuple'b'-U%s'u'-U%s'b'-D%s'u'-D%s'b'-D%s=%s'u'-D%s=%s'b'-I%s'u'-I%s'b'Generate linker options for searching library directories and + linking with specific libraries. 'libraries' and 'library_dirs' are, + respectively, lists of library names (not filenames!) and search + directories. Returns a list of command-line options suitable for use + with some compiler (depending on the two format strings passed in). + 'u'Generate linker options for searching library directories and + linking with specific libraries. 'libraries' and 'library_dirs' are, + respectively, lists of library names (not filenames!) and search + directories. Returns a list of command-line options suitable for use + with some compiler (depending on the two format strings passed in). 
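The gen_preprocess_options() and gen_lib_options() helpers documented above turn macro, include and library lists into command-line flags. A sketch, with the flags expected from the Unix compiler class shown in comments:

from distutils.ccompiler import new_compiler, gen_preprocess_options, gen_lib_options

macros = [('NDEBUG', None), ('VERSION', '1.0'), ('TRACE',)]
print(gen_preprocess_options(macros, ['include']))
# -> ['-DNDEBUG', '-DVERSION=1.0', '-UTRACE', '-Iinclude']

cc = new_compiler()
print(gen_lib_options(cc, ['/opt/lib'], [], ['m', 'ssl']))
# -> ['-L/opt/lib', '-lm', '-lssl'] with the Unix compiler class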
+ 'b'no library file corresponding to '%s' found (skipping)'u'no library file corresponding to '%s' found (skipping)'u'distutils.ccompiler'u'ccompiler'Charsetadd_aliasadd_charsetadd_codecemail.base64mimeemail.quoprimimeemail.encodersencode_7or8bitQPBASE64SHORTESTRFC2047_CHROME_LENDEFAULT_CHARSETiso-8859-2iso-8859-3iso-8859-4iso-8859-9iso-8859-10iso-8859-13iso-8859-14iso-8859-15iso-8859-16windows-1252visciiiso-2022-jpeuc-jpkoi8-rCHARSETSlatin-1latin_2latin-2latin_3latin-3latin_4latin-4latin_5latin-5latin_6latin-6latin_7latin-7latin_8latin-8latin_9latin-9latin_10latin-10ks_c_5601-1987euc-krALIASESCODEC_MAPheader_encbody_encoutput_charsetAdd character set properties to the global registry. + + charset is the input character set, and must be the canonical name of a + character set. + + Optional header_enc and body_enc is either Charset.QP for + quoted-printable, Charset.BASE64 for base64 encoding, Charset.SHORTEST for + the shortest of qp or base64 encoding, or None for no encoding. SHORTEST + is only valid for header_enc. It describes how message headers and + message bodies in the input charset are to be encoded. Default is no + encoding. + + Optional output_charset is the character set that the output should be + in. Conversions will proceed from input charset, to Unicode, to the + output charset when the method Charset.convert() is called. The default + is to output in the same character set as the input. + + Both input_charset and output_charset must have Unicode codec entries in + the module's charset-to-codec mapping; use add_codec(charset, codecname) + to add codecs the module does not know about. See the codecs module's + documentation for more information. + SHORTEST not allowed for body_encAdd a character set alias. + + alias is the alias name, e.g. latin-1 + canonical is the character set's canonical name, e.g. iso-8859-1 + codecnameAdd a codec that map characters in the given charset to/from Unicode. + + charset is the canonical name of a character set. codecname is the name + of a Python codec, as appropriate for the second argument to the unicode() + built-in, or to the encode() method of a Unicode string. + _encodecodecMap character sets to their email properties. + + This class provides information about the requirements imposed on email + for a specific character set. It also provides convenience routines for + converting between character sets, given the availability of the + applicable codecs. Given a character set, it will do its best to provide + information on how to use that character set in an email in an + RFC-compliant way. + + Certain character sets must be encoded with quoted-printable or base64 + when used in email headers or bodies. Certain character sets must be + converted outright, and are not allowed in email. Instances of this + module expose the following information about a character set: + + input_charset: The initial character set specified. Common aliases + are converted to their `official' email names (e.g. latin_1 + is converted to iso-8859-1). Defaults to 7-bit us-ascii. + + header_encoding: If the character set must be encoded before it can be + used in an email header, this attribute will be set to + Charset.QP (for quoted-printable), Charset.BASE64 (for + base64 encoding), or Charset.SHORTEST for the shortest of + QP or BASE64 encoding. Otherwise, it will be None. + + body_encoding: Same as header_encoding, but describes the encoding for the + mail message's body, which indeed may be different than the + header encoding. 
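The add_charset()/add_alias()/add_codec() registry functions described above let the email package learn about a charset it does not already know. A hypothetical registration sketch: 'x-example' is an invented name, and QP/BASE64 are the module's encoding constants.

from email.charset import add_charset, add_alias, add_codec, Charset, QP, BASE64

add_charset('x-example', QP, BASE64, 'utf-8')   # header enc, body enc, output conversion
add_alias('example', 'x-example')
add_codec('x-example', 'utf-8')                 # Python codec used for the conversion

c = Charset('example')                          # the alias resolves to x-example
print(c.get_output_charset())                   # -> 'utf-8'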
Charset.SHORTEST is not allowed for + body_encoding. + + output_charset: Some character sets must be converted before they can be + used in email headers or bodies. If the input_charset is + one of them, this attribute will contain the name of the + charset output will be converted to. Otherwise, it will + be None. + + input_codec: The name of the Python codec used to convert the + input_charset to Unicode. If no conversion codec is + necessary, this attribute will be None. + + output_codec: The name of the Python codec used to convert Unicode + to the output_charset. If no conversion codec is necessary, + this attribute will have the same value as the input_codec. + input_charsethencbencheader_encodingbody_encodinginput_codecoutput_codecget_body_encodingReturn the content-transfer-encoding used for body encoding. + + This is either the string `quoted-printable' or `base64' depending on + the encoding used, or it is a function in which case you should call + the function with a single argument, the Message object being + encoded. The function should then set the Content-Transfer-Encoding + header itself to whatever is appropriate. + + Returns "quoted-printable" if self.body_encoding is QP. + Returns "base64" if self.body_encoding is BASE64. + Returns conversion function otherwise. + quoted-printableget_output_charsetReturn the output character set. + + This is self.output_charset if that is not None, otherwise it is + self.input_charset. + Header-encode a string by converting it first to bytes. + + The type of encoding (base64 or quoted-printable) will be based on + this charset's `header_encoding`. + + :param string: A unicode string for the header. It must be possible + to encode this string to bytes using the character set's + output codec. + :return: The encoded string, with RFC 2047 chrome. + _get_encoderencoder_moduleheader_encode_linesmaxlengthsHeader-encode a string by converting it first to bytes. + + This is similar to `header_encode()` except that the string is fit + into maximum line lengths as given by the argument. + + :param string: A unicode string for the header. It must be possible + to encode this string to bytes using the character set's + output codec. + :param maxlengths: Maximum line length iterator. Each element + returned from this iterator will provide the next maximum line + length. This parameter is used as an argument to built-in next() + and should never be exhausted. The maximum line lengths should + not count the RFC 2047 chrome. These line lengths are only a + hint; the splitter does the best it can. + :return: Lines of encoded strings, each with RFC 2047 chrome. + encodercurrent_linethis_linejoined_linelen64lenqpBody-encode a string by converting it first to bytes. + + The type of encoding (base64 or quoted-printable) will be based on + self.body_encoding. If body_encoding is None, we assume the + output charset is a 7bit encoding, so re-encoding the decoded + string using the ascii codec produces the correct string version + of the content. 
+ # Author: Ben Gertzfield, Barry Warsaw# Flags for types of header encodings# Quoted-Printable# the shorter of QP and base64, but only for headers# In "=?charset?q?hello_world?=", the =?, ?q?, and ?= add up to 7# Defaults# input header enc body enc output conv# iso-8859-5 is Cyrillic, and not especially used# iso-8859-6 is Arabic, also not particularly used# iso-8859-7 is Greek, QP will not make it readable# iso-8859-8 is Hebrew, QP will not make it readable# iso-8859-11 is Thai, QP will not make it readable# Aliases for other commonly-used names for character sets. Map# them to the real ones used in email.# Map charsets to their Unicode codec strings.# Hack: We don't want *any* conversion for stuff marked us-ascii, as all# sorts of garbage might be sent to us in the guise of 7-bit us-ascii.# Let that stuff pass through without conversion to/from Unicode.# Convenience functions for extending the above mappings# Convenience function for encoding strings, taking into account# that they might be unknown-8bit (ie: have surrogate-escaped bytes)# RFC 2046, $4.1.2 says charsets are not case sensitive. We coerce to# unicode because its .lower() is locale insensitive. If the argument# is already a unicode, we leave it at that, but ensure that the# charset is ASCII, as the standard (RFC XXX) requires.# Set the input charset after filtering through the aliases# We can try to guess which encoding and conversion to use by the# charset_map dictionary. Try that first, but let the user override# it.# Set the attributes, allowing the arguments to override the default.# Now set the codecs. If one isn't defined for input_charset,# guess and try a Unicode codec with the same name as input_codec.# 7bit/8bit encodings return the string unchanged (modulo conversions)# See which encoding we should use.# Calculate the number of characters that the RFC 2047 chrome will# contribute to each line.# Now comes the hard part. We must encode bytes but we can't split on# bytes because some character sets are variable length and each# encoded word must stand on its own. So the problem is you have to# encode to bytes to figure out this word's length, but you must split# on characters. This causes two problems: first, we don't know how# many octets a specific substring of unicode characters will get# encoded to, and second, we don't know how many ASCII characters# those octets will get encoded to. Unless we try it. Which seems# inefficient. In the interest of being correct rather than fast (and# in the hope that there will be few encoded headers in any such# message), brute force it. :(# This last character doesn't fit so pop it off.# Does nothing fit on the first line?# quopromime.body_encode takes a string, but operates on it as if# it were a list of byte codes. For a (minimal) history on why# this is so, see changeset 0cf700464177. 
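A short sketch of the Charset instance API summarized above, using iso-8859-1, which defaults to quoted-printable for both headers and bodies; the encoded strings in the comments are what the quoted-printable codec is expected to produce.

from email.charset import Charset

c = Charset('iso-8859-1')
print(c.get_body_encoding())      # -> 'quoted-printable'
print(c.get_output_charset())     # -> 'iso-8859-1'
print(c.header_encode('Café'))    # e.g. '=?iso-8859-1?q?Caf=E9?='
print(c.body_encode('Café\n'))    # quoted-printable body, e.g. 'Caf=E9\n'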
To correctly encode a# character set, then, we must turn it into pseudo bytes via the# latin1 charset, which will encode any byte as a single code point# between 0 and 255, which is what body_encode is expecting.b'Charset'u'Charset'b'add_alias'u'add_alias'b'add_charset'u'add_charset'b'add_codec'u'add_codec'b'iso-8859-2'u'iso-8859-2'b'iso-8859-3'u'iso-8859-3'b'iso-8859-4'u'iso-8859-4'b'iso-8859-9'u'iso-8859-9'b'iso-8859-10'u'iso-8859-10'b'iso-8859-13'u'iso-8859-13'b'iso-8859-14'u'iso-8859-14'b'iso-8859-15'u'iso-8859-15'b'iso-8859-16'u'iso-8859-16'b'windows-1252'u'windows-1252'b'viscii'u'viscii'b'iso-2022-jp'u'iso-2022-jp'b'euc-jp'u'euc-jp'b'koi8-r'u'koi8-r'b'latin-1'u'latin-1'b'latin_2'u'latin_2'b'latin-2'u'latin-2'b'latin_3'u'latin_3'b'latin-3'u'latin-3'b'latin_4'u'latin_4'b'latin-4'u'latin-4'b'latin_5'u'latin_5'b'latin-5'u'latin-5'b'latin_6'u'latin_6'b'latin-6'u'latin-6'b'latin_7'u'latin_7'b'latin-7'u'latin-7'b'latin_8'u'latin_8'b'latin-8'u'latin-8'b'latin_9'u'latin_9'b'latin-9'u'latin-9'b'latin_10'u'latin_10'b'latin-10'u'latin-10'b'ks_c_5601-1987'u'ks_c_5601-1987'b'euc-kr'u'euc-kr'b'Add character set properties to the global registry. + + charset is the input character set, and must be the canonical name of a + character set. + + Optional header_enc and body_enc is either Charset.QP for + quoted-printable, Charset.BASE64 for base64 encoding, Charset.SHORTEST for + the shortest of qp or base64 encoding, or None for no encoding. SHORTEST + is only valid for header_enc. It describes how message headers and + message bodies in the input charset are to be encoded. Default is no + encoding. + + Optional output_charset is the character set that the output should be + in. Conversions will proceed from input charset, to Unicode, to the + output charset when the method Charset.convert() is called. The default + is to output in the same character set as the input. + + Both input_charset and output_charset must have Unicode codec entries in + the module's charset-to-codec mapping; use add_codec(charset, codecname) + to add codecs the module does not know about. See the codecs module's + documentation for more information. + 'u'Add character set properties to the global registry. + + charset is the input character set, and must be the canonical name of a + character set. + + Optional header_enc and body_enc is either Charset.QP for + quoted-printable, Charset.BASE64 for base64 encoding, Charset.SHORTEST for + the shortest of qp or base64 encoding, or None for no encoding. SHORTEST + is only valid for header_enc. It describes how message headers and + message bodies in the input charset are to be encoded. Default is no + encoding. + + Optional output_charset is the character set that the output should be + in. Conversions will proceed from input charset, to Unicode, to the + output charset when the method Charset.convert() is called. The default + is to output in the same character set as the input. + + Both input_charset and output_charset must have Unicode codec entries in + the module's charset-to-codec mapping; use add_codec(charset, codecname) + to add codecs the module does not know about. See the codecs module's + documentation for more information. + 'b'SHORTEST not allowed for body_enc'u'SHORTEST not allowed for body_enc'b'Add a character set alias. + + alias is the alias name, e.g. latin-1 + canonical is the character set's canonical name, e.g. iso-8859-1 + 'u'Add a character set alias. + + alias is the alias name, e.g. latin-1 + canonical is the character set's canonical name, e.g. 
iso-8859-1 + 'b'Add a codec that map characters in the given charset to/from Unicode. + + charset is the canonical name of a character set. codecname is the name + of a Python codec, as appropriate for the second argument to the unicode() + built-in, or to the encode() method of a Unicode string. + 'u'Add a codec that map characters in the given charset to/from Unicode. + + charset is the canonical name of a character set. codecname is the name + of a Python codec, as appropriate for the second argument to the unicode() + built-in, or to the encode() method of a Unicode string. + 'b'Map character sets to their email properties. + + This class provides information about the requirements imposed on email + for a specific character set. It also provides convenience routines for + converting between character sets, given the availability of the + applicable codecs. Given a character set, it will do its best to provide + information on how to use that character set in an email in an + RFC-compliant way. + + Certain character sets must be encoded with quoted-printable or base64 + when used in email headers or bodies. Certain character sets must be + converted outright, and are not allowed in email. Instances of this + module expose the following information about a character set: + + input_charset: The initial character set specified. Common aliases + are converted to their `official' email names (e.g. latin_1 + is converted to iso-8859-1). Defaults to 7-bit us-ascii. + + header_encoding: If the character set must be encoded before it can be + used in an email header, this attribute will be set to + Charset.QP (for quoted-printable), Charset.BASE64 (for + base64 encoding), or Charset.SHORTEST for the shortest of + QP or BASE64 encoding. Otherwise, it will be None. + + body_encoding: Same as header_encoding, but describes the encoding for the + mail message's body, which indeed may be different than the + header encoding. Charset.SHORTEST is not allowed for + body_encoding. + + output_charset: Some character sets must be converted before they can be + used in email headers or bodies. If the input_charset is + one of them, this attribute will contain the name of the + charset output will be converted to. Otherwise, it will + be None. + + input_codec: The name of the Python codec used to convert the + input_charset to Unicode. If no conversion codec is + necessary, this attribute will be None. + + output_codec: The name of the Python codec used to convert Unicode + to the output_charset. If no conversion codec is necessary, + this attribute will have the same value as the input_codec. + 'u'Map character sets to their email properties. + + This class provides information about the requirements imposed on email + for a specific character set. It also provides convenience routines for + converting between character sets, given the availability of the + applicable codecs. Given a character set, it will do its best to provide + information on how to use that character set in an email in an + RFC-compliant way. + + Certain character sets must be encoded with quoted-printable or base64 + when used in email headers or bodies. Certain character sets must be + converted outright, and are not allowed in email. Instances of this + module expose the following information about a character set: + + input_charset: The initial character set specified. Common aliases + are converted to their `official' email names (e.g. latin_1 + is converted to iso-8859-1). Defaults to 7-bit us-ascii. 
+ + header_encoding: If the character set must be encoded before it can be + used in an email header, this attribute will be set to + Charset.QP (for quoted-printable), Charset.BASE64 (for + base64 encoding), or Charset.SHORTEST for the shortest of + QP or BASE64 encoding. Otherwise, it will be None. + + body_encoding: Same as header_encoding, but describes the encoding for the + mail message's body, which indeed may be different than the + header encoding. Charset.SHORTEST is not allowed for + body_encoding. + + output_charset: Some character sets must be converted before they can be + used in email headers or bodies. If the input_charset is + one of them, this attribute will contain the name of the + charset output will be converted to. Otherwise, it will + be None. + + input_codec: The name of the Python codec used to convert the + input_charset to Unicode. If no conversion codec is + necessary, this attribute will be None. + + output_codec: The name of the Python codec used to convert Unicode + to the output_charset. If no conversion codec is necessary, + this attribute will have the same value as the input_codec. + 'b'Return the content-transfer-encoding used for body encoding. + + This is either the string `quoted-printable' or `base64' depending on + the encoding used, or it is a function in which case you should call + the function with a single argument, the Message object being + encoded. The function should then set the Content-Transfer-Encoding + header itself to whatever is appropriate. + + Returns "quoted-printable" if self.body_encoding is QP. + Returns "base64" if self.body_encoding is BASE64. + Returns conversion function otherwise. + 'u'Return the content-transfer-encoding used for body encoding. + + This is either the string `quoted-printable' or `base64' depending on + the encoding used, or it is a function in which case you should call + the function with a single argument, the Message object being + encoded. The function should then set the Content-Transfer-Encoding + header itself to whatever is appropriate. + + Returns "quoted-printable" if self.body_encoding is QP. + Returns "base64" if self.body_encoding is BASE64. + Returns conversion function otherwise. + 'b'quoted-printable'u'quoted-printable'b'Return the output character set. + + This is self.output_charset if that is not None, otherwise it is + self.input_charset. + 'u'Return the output character set. + + This is self.output_charset if that is not None, otherwise it is + self.input_charset. + 'b'Header-encode a string by converting it first to bytes. + + The type of encoding (base64 or quoted-printable) will be based on + this charset's `header_encoding`. + + :param string: A unicode string for the header. It must be possible + to encode this string to bytes using the character set's + output codec. + :return: The encoded string, with RFC 2047 chrome. + 'u'Header-encode a string by converting it first to bytes. + + The type of encoding (base64 or quoted-printable) will be based on + this charset's `header_encoding`. + + :param string: A unicode string for the header. It must be possible + to encode this string to bytes using the character set's + output codec. + :return: The encoded string, with RFC 2047 chrome. + 'b'Header-encode a string by converting it first to bytes. + + This is similar to `header_encode()` except that the string is fit + into maximum line lengths as given by the argument. + + :param string: A unicode string for the header. 
It must be possible + to encode this string to bytes using the character set's + output codec. + :param maxlengths: Maximum line length iterator. Each element + returned from this iterator will provide the next maximum line + length. This parameter is used as an argument to built-in next() + and should never be exhausted. The maximum line lengths should + not count the RFC 2047 chrome. These line lengths are only a + hint; the splitter does the best it can. + :return: Lines of encoded strings, each with RFC 2047 chrome. + 'u'Header-encode a string by converting it first to bytes. + + This is similar to `header_encode()` except that the string is fit + into maximum line lengths as given by the argument. + + :param string: A unicode string for the header. It must be possible + to encode this string to bytes using the character set's + output codec. + :param maxlengths: Maximum line length iterator. Each element + returned from this iterator will provide the next maximum line + length. This parameter is used as an argument to built-in next() + and should never be exhausted. The maximum line lengths should + not count the RFC 2047 chrome. These line lengths are only a + hint; the splitter does the best it can. + :return: Lines of encoded strings, each with RFC 2047 chrome. + 'b'Body-encode a string by converting it first to bytes. + + The type of encoding (base64 or quoted-printable) will be based on + self.body_encoding. If body_encoding is None, we assume the + output charset is a 7bit encoding, so re-encoding the decoded + string using the ascii codec produces the correct string version + of the content. + 'u'Body-encode a string by converting it first to bytes. + + The type of encoding (base64 or quoted-printable) will be based on + self.body_encoding. If body_encoding is None, we assume the + output charset is a 7bit encoding, so re-encoding the decoded + string using the ascii codec produces the correct string version + of the content. + 'u'email.charset' +An XML-RPC client interface for Python. + +The marshalling and response parser code can also be used to +implement XML-RPC servers. + +Exported exceptions: + + Error Base class for client errors + ProtocolError Indicates an HTTP protocol error + ResponseError Indicates a broken response package + Fault Indicates an XML-RPC fault package + +Exported classes: + + ServerProxy Represents a logical connection to an XML-RPC server + + MultiCall Executor of boxcared xmlrpc requests + DateTime dateTime wrapper for an ISO 8601 string or time tuple or + localtime integer value to generate a "dateTime.iso8601" + XML-RPC value + Binary binary data wrapper + + Marshaller Generate an XML-RPC params chunk from a Python data structure + Unmarshaller Unmarshal an XML-RPC response from incoming XML event message + Transport Handles an HTTP transaction to an XML-RPC server + SafeTransport Handles an HTTPS transaction to an XML-RPC server + +Exported constants: + + (none) + +Exported functions: + + getparser Create instance of the fastest available parser & attach + to an unmarshalling object + dumps Convert an argument tuple or a Fault instance to an XML-RPC + request (or response, if the methodresponse option is used). + loads Convert an XML-RPC packet to unmarshalled data plus a method + name (None if not present). 
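header_encode_lines() takes an iterator of per-line length budgets that must never run out; itertools makes that easy to provide. A minimal sketch: the subject text is made up, and with utf-8 the SHORTEST rule picks quoted-printable or base64 for each chunk.

import itertools
from email.charset import Charset

c = Charset('utf-8')
maxlengths = itertools.chain([40], itertools.repeat(70))   # first line gets less room
subject = 'A subject long enough to need folding, three times over'
for line in c.header_encode_lines(subject, maxlengths):
    print(line)   # each chunk arrives with its own RFC 2047 chrome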
+httpMAXINTMININT32700PARSE_ERROR32600SERVER_ERROR32500APPLICATION_ERROR32400SYSTEM_ERROR32300TRANSPORT_ERRORNOT_WELLFORMED_ERROR32701UNSUPPORTED_ENCODING32702INVALID_ENCODING_CHARINVALID_XMLRPC32601METHOD_NOT_FOUND32602INVALID_METHOD_PARAMS32603INTERNAL_ERRORBase class for client errors.ProtocolErrorIndicates an HTTP protocol error.errcodeerrmsg<%s for %s: %s %s>ResponseErrorIndicates a broken response package.FaultIndicates an XML-RPC fault package.faultCodefaultString<%s %s: %r>Boolean_day00001_iso8601_format%Y%m%dT%H:%M:%S%4Y%4Y%m%dT%H:%M:%S_strftimestruct_time%04d%02d%02dT%02d:%02d:%02dDateTimeDateTime wrapper for an ISO 8601 string or time tuple or + localtime integer value to generate 'dateTime.iso8601' XML-RPC + value. + make_comparableotypeCan't compare %s and %s +_datetime_typeBinaryWrapper for binary data.expected bytes or bytearray, not %s + +_binaryWRAPPERSExpatParserMarshallerGenerate an XML-RPC params chunk from a Python data structure. + + Create a Marshaller instance for each set of parameters, and use + the "dumps" method to convert your data (represented as a tuple) + to an XML-RPC params chunk. To write a fault response, pass a + Fault instance instead. You may prefer to use the "dumps" module + function for this purpose. + allow_nonedispatch__dump + + + + + +cannot marshal %s objects_arbitrary_instancedump_nilcannot marshal None unless allow_none is enableddump_bool +dump_longint exceeds XML-RPC limits +dump_intdump_double +dump_unicode +dump_bytesdump_arraycannot marshal recursive sequences + +dump_structcannot marshal recursive dictionaries + +dictionary key must be string%s + + +dump_datetimedump_instanceUnmarshallerUnmarshal an XML-RPC response, based on incoming XML event + messages (start, data, end). Call close() to get the resulting + data structure. + + Note that this reader is fairly tolerant, and gladly accepts bogus + XML-RPC data without complaining (but not bogus XML). + use_datetimeuse_builtin_types_type_stack_marks_value_methodname_use_datetime_use_bytesfaultgetmethodnamestandaloneunknown tag %rend_dispatchend_nilnilend_booleanbad boolean valueend_inti1i2i4i8bigintegerend_doubleend_bigdecimalbigdecimalend_stringend_arrayend_structend_base64end_dateTimedateTime.iso8601end_valueend_paramsend_faultend_methodName_MultiCallMethodcall_list__call_list__nameMultiCallIteratorIterates over the results of a multicall. Exceptions are + raised in response to xmlrpc faults.unexpected type in multicall resultMultiCallserver -> an object used to boxcar method calls + + server should be a ServerProxy object. + + Methods can be added to the MultiCall using normal + method call syntax e.g.: + + multicall = MultiCall(server_proxy) + multicall.add(2,3) + multicall.get_address("Guido") + + To execute the multicall, call the MultiCall object e.g.: + + add_result, address = multicall() + __server<%s at %#x>marshalled_listmulticallFastMarshallerFastParserFastUnmarshallergetparsergetparser() -> parser, unmarshaller + + Create an instance of the fastest available parser, and attach it + to an unmarshalling object. Return both objects. + mkdatetimemkbytesmethodnamemethodresponsedata [,options] -> marshalled data + + Convert an argument tuple or a Fault instance to an XML-RPC + request (or response, if the methodresponse option is used). + + In addition to the data object, the following options can be given + as keyword arguments: + + methodname: the method name for a methodCall packet + + methodresponse: true to create a methodResponse packet. 
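The MultiCall docstring above already shows the intended pattern: method calls are recorded and shipped as a single boxcarred request when the object is called. A sketch against a hypothetical server, reusing the docstring's own endpoint and method names:

from xmlrpc.client import ServerProxy, MultiCall

proxy = ServerProxy('http://localhost:8000')   # hypothetical endpoint
multicall = MultiCall(proxy)
multicall.add(2, 3)
multicall.get_address('Guido')
add_result, address = multicall()              # one HTTP round trip for both calls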
+ If this option is used with a tuple, the tuple must be + a singleton (i.e. it can contain only one element). + + encoding: the packet encoding (default is UTF-8) + + All byte strings in the data structure are assumed to use the + packet encoding. Unicode strings are automatically converted, + where necessary. + argument must be tuple or Fault instanceresponse tuple must be a singletonxmlheader + +"\n""" + + + +data -> unmarshalled data, method name + + Convert an XML-RPC packet to unmarshalled data plus a method + name (None if not present). + + If the XML-RPC packet represents a fault condition, this function + raises a Fault exception. + gzip_encodedata -> gzip encoded data + + Encode data using the gzip content encoding as described in RFC 1952 + gzfgzip_decode20971520max_decodegzip encoded data -> unencoded data + + Decode data using the gzip content encoding as described in RFC 1952 + invalid datamax gzipped payload length exceededGzipDecodedResponsea file-like object to decode a response encoded with the gzip + method, as described in RFC 1952. + response_Method__sendTransportHandles an HTTP transaction to an XML-RPC server.Python-xmlrpc/%suser_agentaccept_gzip_encodingencode_threshold_use_builtin_types_connection_headers_extra_headersrequest_bodysingle_requestRemoteDisconnectedECONNABORTEDEPIPEsend_requesthttp_conngetresponserespparse_responsegetheadercontent-lengthgetheadersget_host_infox509_splituserauthunquote_to_bytesAuthorizationBasic extra_headersmake_connectionchostHTTPConnectionconnectionset_debuglevelputrequestPOSTskip_accept_encodingContent-Typetext/xmlUser-Agentsend_headerssend_contentputheaderContent-Lengthendheadersbody:SafeTransportHandles an HTTPS transaction to an XML-RPC server.HTTPSConnectionyour version of http.client doesn't support HTTPSServerProxyuri [,options] -> a logical connection to an XML-RPC server + + uri is the connection point on the server, given as + scheme://host/target. + + The standard implementation always supports the "http" scheme. If + SSL socket support is available (Python 2.0), it also supports + "https". + + If the target part and the slash preceding it are both omitted, + "/RPC2" is assumed. + + The following options can be given as keyword arguments: + + transport: a transport factory + encoding: the request encoding (default is UTF-8) + + All 8-bit strings passed to the server proxy are assumed to use + the given encoding. 
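The dumps()/loads() pair described above can be checked with a small round trip. This is a sketch of the documented behaviour only: loads() re-raises a marshalled Fault as a Fault exception, and the fault code used here is the METHOD_NOT_FOUND constant listed earlier in the dump.

from xmlrpc.client import dumps, loads, Fault

# Marshal a methodCall packet and unmarshal it again.
request = dumps((2, 3), methodname="add")        # XML string for a methodCall
params, method = loads(request)                  # -> ((2, 3), "add")
assert params == (2, 3) and method == "add"

# A Fault instance marshals to a fault response; loads() raises it back.
packet = dumps(Fault(32601, "method not found"))  # 32601 == METHOD_NOT_FOUND
try:
    loads(packet)
except Fault as fault:
    print(fault.faultCode, fault.faultString)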
+ _splittypehttpsunsupported XML-RPC protocol_splithost__host__handler/RPC2extra_kwargs__transport__encoding__verbose__allow_none__close__request<%s for %s%s>A workaround to get special attributes on the ServerProxy + without interfering with the magic __getattr__ + Attribute %r not foundhttp://localhost:8000currentTimegetCurrentTimemultigetData# XML-RPC CLIENT LIBRARY# $Id$# an XML-RPC client interface for Python.# the marshalling and response parser code can also be used to# implement XML-RPC servers.# Notes:# this version is designed to work with Python 2.1 or newer.# History:# 1999-01-14 fl Created# 1999-01-15 fl Changed dateTime to use localtime# 1999-01-16 fl Added Binary/base64 element, default to RPC2 service# 1999-01-19 fl Fixed array data element (from Skip Montanaro)# 1999-01-21 fl Fixed dateTime constructor, etc.# 1999-02-02 fl Added fault handling, handle empty sequences, etc.# 1999-02-10 fl Fixed problem with empty responses (from Skip Montanaro)# 1999-06-20 fl Speed improvements, pluggable parsers/transports (0.9.8)# 2000-11-28 fl Changed boolean to check the truth value of its argument# 2001-02-24 fl Added encoding/Unicode/SafeTransport patches# 2001-02-26 fl Added compare support to wrappers (0.9.9/1.0b1)# 2001-03-28 fl Make sure response tuple is a singleton# 2001-03-29 fl Don't require empty params element (from Nicholas Riley)# 2001-06-10 fl Folded in _xmlrpclib accelerator support (1.0b2)# 2001-08-20 fl Base xmlrpclib.Error on built-in Exception (from Paul Prescod)# 2001-09-03 fl Allow Transport subclass to override getparser# 2001-09-10 fl Lazy import of urllib, cgi, xmllib (20x import speedup)# 2001-10-01 fl Remove containers from memo cache when done with them# 2001-10-01 fl Use faster escape method (80% dumps speedup)# 2001-10-02 fl More dumps microtuning# 2001-10-04 fl Make sure import expat gets a parser (from Guido van Rossum)# 2001-10-10 sm Allow long ints to be passed as ints if they don't overflow# 2001-10-17 sm Test for int and long overflow (allows use on 64-bit systems)# 2001-11-12 fl Use repr() to marshal doubles (from Paul Felix)# 2002-03-17 fl Avoid buffered read when possible (from James Rucker)# 2002-04-07 fl Added pythondoc comments# 2002-04-16 fl Added __str__ methods to datetime/binary wrappers# 2002-05-15 fl Added error constants (from Andrew Kuchling)# 2002-06-27 fl Merged with Python CVS version# 2002-10-22 fl Added basic authentication (based on code from Phillip Eby)# 2003-01-22 sm Add support for the bool type# 2003-02-27 gvr Remove apply calls# 2003-04-24 sm Use cStringIO if available# 2003-04-25 ak Add support for nil# 2003-06-15 gn Add support for time.struct_time# 2003-07-12 gp Correct marshalling of Faults# 2003-10-31 mvl Add multicall support# 2004-08-20 mvl Bump minimum supported Python version to 2.1# 2014-12-02 ch/doko Add workaround for gzip bomb vulnerability# Copyright (c) 1999-2002 by Secret Labs AB.# Copyright (c) 1999-2002 by Fredrik Lundh.# info@pythonware.com# The XML-RPC client interface is# Copyright (c) 1999-2002 by Secret Labs AB# Copyright (c) 1999-2002 by Fredrik Lundh#python can be built without zlib/gzip support# Internal stuff# used in User-Agent header sent# xmlrpc integer limits# Error constants (from Dan Libby's specification at# http://xmlrpc-epi.sourceforge.net/specs/rfc.fault_codes.php)# Ranges of errors# Specific errors# Base class for all kinds of client-side errors.# Indicates an HTTP-level protocol error. 
This is raised by the HTTP# transport layer, if the server returns an error code other than 200# (OK).# @param url The target URL.# @param errcode The HTTP error code.# @param errmsg The HTTP error message.# @param headers The HTTP header dictionary.# Indicates a broken XML-RPC response package. This exception is# raised by the unmarshalling layer, if the XML-RPC response is# malformed.# Indicates an XML-RPC fault response package. This exception is# raised by the unmarshalling layer, if the XML-RPC response contains# a fault string. This exception can also be used as a class, to# generate a fault XML-RPC message.# @param faultCode The XML-RPC fault code.# @param faultString The XML-RPC fault string.# Special values# Backwards compatibility# Wrapper for XML-RPC DateTime values. This converts a time value to# the format used by XML-RPC.#

# The value can be given as a datetime object, as a string in the# format "yyyymmddThh:mm:ss", as a 9-item time tuple (as returned by# time.localtime()), or an integer value (as returned by time.time()).# The wrapper uses time.localtime() to convert an integer to a time# tuple.# @param value The time, given as a datetime object, an ISO 8601 string,# a time tuple, or an integer time value.# Issue #13305: different format codes across platforms# Mac OS X# Linux# Get date/time value.# @return Date/time value, as an ISO 8601 string.# decode xml element contents into a DateTime structure.# Wrapper for binary data. This can be used to transport any kind# of binary data over XML-RPC, using BASE64 encoding.# @param data An 8-bit string containing arbitrary data.# Make a copy of the bytes!# Get buffer contents.# @return Buffer contents, as an 8-bit string.# XXX encoding?!# decode xml element contents into a Binary structure# XML parsers# fast expat parser for Python 2.0 and later.# XML-RPC marshalling and unmarshalling code# XML-RPC marshaller.# @param encoding Default encoding for 8-bit strings. The default# value is None (interpreted as UTF-8).# @see dumps# by the way, if you don't understand what's going on in here,# that's perfectly ok.# fault instance# parameter block# FIXME: the xml-rpc specification allows us to leave out# the entire block if there are no parameters.# however, changing this may break older code (including# old versions of xmlrpclib.py), so this is better left as# is for now. See @XMLRPC3 for more information. /F# check if this object can be marshalled as a structure# check if this class is a sub-class of a basic type,# because we don't know how to marshal these types# (e.g. a string sub-class)# XXX(twouters): using "_arbitrary_instance" as key as a quick-fix# for the p3yk merge, this should probably be fixed more neatly.# backward compatible# check for special wrappers# store instance attributes as a struct (really?)# XML-RPC unmarshaller.# @see loads# and again, if you don't understand what's going on in here,# return response tuple and target method# event handlers# FIXME: assert standalone == 1 ???# prepare to handle this element# call the appropriate end tag handler# unknown tag ?# accelerator support# dispatch data# element decoders# struct keys are always strings# map arrays to Python lists# map structs to Python dictionaries# if we stumble upon a value element with no internal# elements, treat it as a string element# no params## Multicall support# some lesser magic to store calls made to a MultiCall object# for batch execution# convenience functions# Create a parser object, and connect it to an unmarshalling instance.# This function picks the fastest available XML parser.# return A (parser, unmarshaller) tuple.# Convert a Python tuple or a Fault instance to an XML-RPC packet.# @def dumps(params, **options)# @param params A tuple or Fault instance.# @keyparam methodname If given, create a methodCall request for# this method name.# @keyparam methodresponse If given, create a methodResponse packet.# If used with a tuple, the tuple must be a singleton (that is,# it must contain exactly one element).# @keyparam encoding The packet encoding.# @return A string containing marshalled data.# utf-8 is default# standard XML-RPC wrappings# a method call# a method response, or a fault structure# return as is# Convert an XML-RPC packet to a Python object. 
If the XML-RPC packet# represents a fault condition, this function raises a Fault exception.# @param data An XML-RPC packet, given as an 8-bit string.# @return A tuple containing the unpacked data, and the method name# (None if not present).# @see Fault# Encode a string using the gzip content encoding such as specified by the# Content-Encoding: gzip# in the HTTP header, as described in RFC 1952# @param data the unencoded data# @return the encoded data# Decode a string using the gzip content encoding such as specified by the# @param data The encoded data# @keyparam max_decode Maximum bytes to decode (20 MiB default), use negative# values for unlimited decoding# @return the unencoded data# @raises ValueError if data is not correctly coded.# @raises ValueError if max gzipped payload length exceeded# no limit# Return a decoded file-like object for the gzip encoding# as described in RFC 1952.# @param response A stream supporting a read() method# @return a file-like object that the decoded data can be read() from#response doesn't support tell() and read(), required by#GzipFile# request dispatcher# some magic to bind an XML-RPC method to an RPC server.# supports "nested" methods (e.g. examples.getStateName)# Standard transport class for XML-RPC over HTTP.# You can create custom transports by subclassing this method, and# overriding selected methods.# client identifier (may be overridden)#if true, we'll request gzip encoding# if positive, encode request using gzip if it exceeds this threshold# note that many servers will get confused, so only use it if you know# that they can decode such a request#None = don't encode# Send a complete request, and parse the response.# Retry request if a cached connection has disconnected.# @param host Target host.# @param handler Target PRC handler.# @param request_body XML-RPC request body.# @param verbose Debugging flag.# @return Parsed response.#retry request once if cached connection has gone cold# issue XML-RPC request#All unexpected errors leave connection in# a strange state, so we clear it.#We got an error response.#Discard any response data and raise exception# Create parser.# @return A 2-tuple containing a parser and an unmarshaller.# get parser and unmarshaller# Get authorization info from host parameter# Host may be a string, or a (host, x509-dict) tuple; if a string,# it is checked for a "user:pw@host" format, and a "Basic# Authentication" header is added if appropriate.# @param host Host descriptor (URL or (URL, x509 info) tuple).# @return A 3-tuple containing (actual host, extra headers,# x509 info). The header and x509 fields may be None.# get rid of whitespace# Connect to server.# @return An HTTPConnection object#return an existing connection if possible. 
This allows#HTTP/1.1 keep-alive.# create a HTTP connection object from a host descriptor# Clear any cached connection object.# Used in the event of socket errors.# Send HTTP request.# @param handler Target RPC handler (a path relative to host)# @param request_body The XML-RPC request body# @param debug Enable debugging if debug is true.# @return An HTTPConnection.# Send request headers.# This function provides a useful hook for subclassing# @param connection httpConnection.# @param headers list of key,value pairs for HTTP headers# Send request body.#optionally encode the request# Parse response.# @param file Stream.# @return Response tuple and target method.# read response data from httpresponse, and parse it# Check for new http response object, otherwise it is a file object.# Standard transport class for XML-RPC over HTTPS.# FIXME: mostly untested# create a HTTPS connection object from a host descriptor# host may be a string, or a (host, x509-dict) tuple# Standard server proxy. This class establishes a virtual connection# to an XML-RPC server.# This class is available as ServerProxy and Server. New code should# use ServerProxy, to avoid confusion.# @def ServerProxy(uri, **options)# @param uri The connection point on the server.# @keyparam transport A transport factory, compatible with the# standard transport class.# @keyparam encoding The default encoding used for 8-bit strings# (default is UTF-8).# @keyparam verbose Use a true value to enable debugging output.# (printed to standard output).# @see Transport# establish a "logical" server connection# get the url# call a method on the remote server# magic method dispatcher# note: to call a remote object with a non-standard name, use# result getattr(server, "strange-python-name")(args)# test code# simple test program (from the XML-RPC specification)# local server, available from Lib/xmlrpc/server.pyb' +An XML-RPC client interface for Python. + +The marshalling and response parser code can also be used to +implement XML-RPC servers. + +Exported exceptions: + + Error Base class for client errors + ProtocolError Indicates an HTTP protocol error + ResponseError Indicates a broken response package + Fault Indicates an XML-RPC fault package + +Exported classes: + + ServerProxy Represents a logical connection to an XML-RPC server + + MultiCall Executor of boxcared xmlrpc requests + DateTime dateTime wrapper for an ISO 8601 string or time tuple or + localtime integer value to generate a "dateTime.iso8601" + XML-RPC value + Binary binary data wrapper + + Marshaller Generate an XML-RPC params chunk from a Python data structure + Unmarshaller Unmarshal an XML-RPC response from incoming XML event message + Transport Handles an HTTP transaction to an XML-RPC server + SafeTransport Handles an HTTPS transaction to an XML-RPC server + +Exported constants: + + (none) + +Exported functions: + + getparser Create instance of the fastest available parser & attach + to an unmarshalling object + dumps Convert an argument tuple or a Fault instance to an XML-RPC + request (or response, if the methodresponse option is used). + loads Convert an XML-RPC packet to unmarshalled data plus a method + name (None if not present). +'u' +An XML-RPC client interface for Python. + +The marshalling and response parser code can also be used to +implement XML-RPC servers. 
+ + The following options can be given as keyword arguments: + + transport: a transport factory + encoding: the request encoding (default is UTF-8) + + All 8-bit strings passed to the server proxy are assumed to use + the given encoding. + 'u'uri [,options] -> a logical connection to an XML-RPC server + + uri is the connection point on the server, given as + scheme://host/target. + + The standard implementation always supports the "http" scheme. If + SSL socket support is available (Python 2.0), it also supports + "https". + + If the target part and the slash preceding it are both omitted, + "/RPC2" is assumed. + + The following options can be given as keyword arguments: + + transport: a transport factory + encoding: the request encoding (default is UTF-8) + + All 8-bit strings passed to the server proxy are assumed to use + the given encoding. + 'b'http'b'https'u'https'b'unsupported XML-RPC protocol'u'unsupported XML-RPC protocol'b'/RPC2'u'/RPC2'b'<%s for %s%s>'u'<%s for %s%s>'b'A workaround to get special attributes on the ServerProxy + without interfering with the magic __getattr__ + 'u'A workaround to get special attributes on the ServerProxy + without interfering with the magic __getattr__ + 'b'transport'u'transport'b'Attribute %r not found'u'Attribute %r not found'b'http://localhost:8000'u'http://localhost:8000'HTTP/1.1 client library + + + + +HTTPConnection goes through a number of "states", which define when a client +may legally make another request or fetch the response for a particular +request. This diagram details these state transitions: + + (null) + | + | HTTPConnection() + v + Idle + | + | putrequest() + v + Request-started + | + | ( putheader() )* endheaders() + v + Request-sent + |\_____________________________ + | | getresponse() raises + | response = getresponse() | ConnectionError + v v + Unread-response Idle + [Response-headers-read] + |\____________________ + | | + | response.read() | putrequest() + v v + Idle Req-started-unread-response + ______/| + / | + response.read() | | ( putheader() )* endheaders() + v v + Request-started Req-sent-unread-response + | + | response.read() + v + Request-sent + +This diagram presents the following rules: + -- a second request may not be started until {response-headers-read} + -- a response [object] cannot be retrieved until {request-sent} + -- there is no differentiation between an unread response body and a + partially read response body + +Note: this enforcement is applied by the HTTPConnection class. The + HTTPResponse class does not enforce this state machine, which + implies sophisticated clients may accelerate the request/response + pipeline. Caution should be taken, though: accelerating the states + beyond the above pattern may imply knowledge of the server's + connection-close behavior for certain requests. For example, it + is impossible to tell whether the server will close the connection + UNTIL the response headers have been read; this means that further + requests cannot be placed into the pipeline until it is known that + the server will NOT be closing the connection. 
+ +Logical State __state __response +------------- ------- ---------- +Idle _CS_IDLE None +Request-started _CS_REQ_STARTED None +Request-sent _CS_REQ_SENT None +Unread-response _CS_IDLE +Req-started-unread-response _CS_REQ_STARTED +Req-sent-unread-response _CS_REQ_SENT +email.messageurlsplitHTTPResponseHTTPExceptionNotConnectedUnknownProtocolUnknownTransferEncodingUnimplementedFileModeIncompleteReadInvalidURLImproperConnectionStateCannotSendRequestCannotSendHeaderResponseNotReadyBadStatusLineLineTooLongresponsesHTTP_PORT443HTTPS_PORTUNKNOWN_UNKNOWNIdle_CS_IDLERequest-started_CS_REQ_STARTEDRequest-sent_CS_REQ_SENT__members___MAXLINE_MAXHEADERS[^:\s][^:\r\n]*rb'_is_legal_header_name\n(?![ \t])|\r(?![ \t\n])_is_illegal_header_value[- ]_contains_disallowed_url_pchar_re[-]_contains_disallowed_method_pchar_rePATCHPUT_METHODS_EXPECTING_BODYCall data.encode("latin-1") but show a better error message.%s (%.20r) is not valid Latin-1. Use %s.encode('utf-8') if you want to send it encoded in UTF-8."%s (%.20r) is not valid Latin-1. Use %s.encode('utf-8') ""if you want to send it encoded in UTF-8."HTTPMessagegetallmatchingheadersFind all header lines matching a given header name. + + Look through the list of headers and find all lines matching a given + header name (and their continuation lines). A list of the lines is + returned, without interpretation. If the header does not occur, an + empty list is returned. If the header occurs multiple times, all + occurrences are returned. Case is not important in the header name. + + hit_read_headersReads potential header lines into a list from a file pointer. + + Length of line is limited by _MAXLINE, and number of + headers is limited by _MAXHEADERS. + header linegot more than %d headersparse_headers_classParses only RFC2822 headers from a file pointer. + + email Parser wants to see strings rather than bytes. + But a TextIOWrapper around self.rfile would buffer too many bytes + from the stream, bytes which we later need to read as bytes. + So we read the correct bytes here, as bytes, for email Parser + to parse. + + hstringdebuglevelmakefile_methodchunkedchunk_leftwill_close_read_statusstatus linereply:Remote end closed connection without response"Remote end closed connection without"" response"HTTP/_close_conn999beginskipped_headersheaders:HTTP/1.0HTTP/0.9HTTP/1.hdrheader:transfer-encodingtr_enc_check_closeHEADconnkeep-aliveproxy-connectionpconnAlways returns TrueisclosedTrue if the connection is closed.amt_readall_chunked_safe_readRead up to len(b) bytes into bytearray b and return the number + of bytes read. + _readinto_chunked_read_next_chunk_sizechunk size_read_and_discard_trailertrailer line_get_chunk_lefttotal_bytesmvb_safe_readintotemp_mvbRead the number of bytes requested. + + This function should be used when bytes "should" be present for + reading. If the bytes are truly not available (due to EOF), then the + IncompleteRead exception can be used to detect the problem. + Same as _safe_read, but for reading into a buffer.Read with at most one underlying system call. If at least one + byte is buffered, return that instead. + _read1_chunked_peek_chunkedReturns the value of the header matching *name*. + + If there are multiple matching headers, the values are + combined into a single string separated by commas and spaces. + + If no matching header is found, returns *default* or None if + the *default* is not specified. + + If the headers are unknown, raises http.client.ResponseNotReady. 
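The state transitions in the diagram above correspond to the putrequest()/putheader()/endheaders()/getresponse() sequence sketched below. The host is a placeholder; the body is read in full so the connection returns to the Idle state and can be reused for the next request.

import http.client

conn = http.client.HTTPConnection("example.com", 80, timeout=10)  # placeholder host
conn.putrequest("GET", "/")              # Idle -> Request-started
conn.putheader("Accept", "text/html")
conn.endheaders()                        # Request-started -> Request-sent
resp = conn.getresponse()                # Request-sent -> Unread-response
print(resp.status, resp.reason, resp.getheader("Content-Type"))
body = resp.read()                       # drain the body: back to Idle, connection reusable
conn.close()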
+ + get_allReturn list of (header, value) tuples.Returns an instance of the class mimetools.Message containing + meta-information associated with the URL. + + When the method is HTTP, these headers are those returned by + the server at the head of the retrieved HTML page (including + Content-Length and Content-Type). + + When the method is FTP, a Content-Length header will be + present if (as is now usual) the server passed back a file + length in response to the FTP retrieval request. A + Content-Type header will be present if the MIME type can be + guessed. + + When the method is local-file, returned headers will include + a Date representing the file's last-modified time, a + Content-Length giving file size, and a Content-Type + containing a guess at the file's type. See also the + description of the mimetools module. + + geturlReturn the real URL of the page. + + In some cases, the HTTP server redirects a client to another + URL. The urlopen() function handles this transparently, but in + some cases the caller needs to know which URL the client was + redirected to. The geturl() method can be used to get at this + redirected URL. + + getcodeReturn the HTTP status code that was sent with the response, + or None if the URL is not an HTTP URL. + + _http_vsnHTTP/1.1_http_vsn_strresponse_classdefault_portauto_open_is_textIOTest whether a file-like object is a text or a binary stream. + TextIOBase_get_content_lengthGet the content-length based on the body. + + If the body is None, we set Content-Length: 0 for methods that expect + a body (RFC 7230, Section 3.3.2). We also set the Content-Length for + any method if the body is a str or bytes-like object and not a file. + mv_GLOBAL_DEFAULT_TIMEOUTsource_address__response__state_tunnel_host_tunnel_port_tunnel_headers_get_hostport_validate_host_create_connectionset_tunnelSet up host and port for HTTP CONNECT tunnelling. + + In a connection that uses HTTP CONNECT tunneling, the host passed to the + constructor is used as a proxy server that relays all communication to + the endpoint passed to `set_tunnel`. This done by sending an HTTP + CONNECT request to the proxy server when the connection is established. + + This method must be called before the HTTP connection has been + established. + + The headers argument should be a mapping of extra HTTP headers to send + with the CONNECT request. + Can't set up tunnel for established connectionnonnumeric port: '%s'_tunnelCONNECT %s:%d HTTP/1.0 +connect_strconnect_bytes%s: %s +header_strTunnel connection failed: %d %sConnect to the host and port specified in __init__.Close the connection to the HTTP server.Send `data' to the server. + ``data`` can be a string object, a bytes object, an array object, a + file-like object that supports a .read() method, or an iterable object. + send:sendIng a read()ableencoding file using iso-8859-1datablockdata should be a bytes-like object or an iterable, got %r"data should be a bytes-like object ""or an iterable, got %r"_outputAdd a line of output to the current request buffer. + + Assumes that the line does *not* end with \r\n. + _read_readable_send_outputmessage_bodyencode_chunkedSend the currently buffered request and clear the buffer. + + Appends an extra \r\n to the buffer. + A message_body may be specified, to be appended to the request. + message_body should be a bytes-like object or an iterable, got %r"message_body should be a bytes-like ""object or an iterable, got %r"Zero length chunk ignored0 + +skip_hostSend a request to the server. 
+ + `method' specifies an HTTP request method, e.g. 'GET'. + `url' specifies the object being requested, e.g. '/index.html'. + `skip_host' if True does not add automatically a 'Host:' header + `skip_accept_encoding' if True does not add automatically an + 'Accept-Encoding:' header + _validate_method_validate_path%s %s %s_encode_requestnetlocnetloc_encHosthost_encValidate a method name for putrequest.method can't contain control characters. (found at least "(found at least "Validate a url for putrequest.URL can't contain control characters. Validate a host so it doesn't contain control characters.Send a request header line to the server. + + For example: h.putheader('Accept', 'text/html') + Invalid header name %rone_valueInvalid header value %r + Indicate that the last header line has been sent to the server. + + This method sends the request to the server. The optional message_body + argument can be used to pass a message body associated with the + request. + Send a complete request to the server._send_requestheader_namesskipsaccept-encodingcontent_lengthUnable to determine size of %rTransfer-EncodingGet the response from the server. + + If the HTTPConnection is in the correct state, returns an + instance of HTTPResponse or of whatever object is returned by + the response_class variable. + + If a request has not been sent or if a previous response has + not be handled, ResponseNotReady is raised. If the HTTP + response indicates that the connection should be closed, then + it will be closed before the response is returned. When the + connection is closed, the underlying socket is closed. + This class allows communication via SSL.key_filecert_filekey_file, cert_file and check_hostname are deprecated, use a custom context instead."key_file, cert_file and check_hostname are ""deprecated, use a custom context instead."_create_default_https_contextwill_verifycheck_hostname needs a SSL context with either CERT_OPTIONAL or CERT_REQUIRED"check_hostname needs a SSL context with ""either CERT_OPTIONAL or CERT_REQUIRED"_contextConnect to a host on a given (SSL) port.wrap_socket, %i more expected%s(%i bytes read%s)line_typegot more than %d bytes when reading %s# HTTPMessage, parse_headers(), and the HTTP status code constants are# intentionally omitted for simplicity# connection states# hack to maintain backwards compatibility# another hack to maintain backwards compatibility# Mapping status codes to official W3C names# maximal line length when calling readline().# Header name/value ABNF (http://tools.ietf.org/html/rfc7230#section-3.2)# VCHAR = %x21-7E# obs-text = %x80-FF# header-field = field-name ":" OWS field-value OWS# field-name = token# field-value = *( field-content / obs-fold )# field-content = field-vchar [ 1*( SP / HTAB ) field-vchar ]# field-vchar = VCHAR / obs-text# obs-fold = CRLF 1*( SP / HTAB )# ; obsolete line folding# ; see Section 3.2.4# token = 1*tchar# tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*"# / "+" / "-" / "." / "^" / "_" / "`" / "|" / "~"# / DIGIT / ALPHA# ; any VCHAR, except delimiters# VCHAR defined in http://tools.ietf.org/html/rfc5234#appendix-B.1# the patterns for both name and value are more lenient than RFC# definitions to allow for backwards compatibility# These characters are not allowed within HTTP URL paths.# See https://tools.ietf.org/html/rfc3986#section-3.3 and the# https://tools.ietf.org/html/rfc3986#appendix-A pchar definition.# Prevents CVE-2019-9740. 
Includes control characters such as \r\n.# We don't restrict chars above \x7f as putrequest() limits us to ASCII.# Arguably only these _should_ allowed:# _is_allowed_url_pchars_re = re.compile(r"^[/!$&'()*+,;=:@%a-zA-Z0-9._~-]+$")# We are more lenient for assumed real world compatibility purposes.# These characters are not allowed within HTTP method names# to prevent http header injection.# We always set the Content-Length header for these methods because some# servers will otherwise respond with a 411# XXX The only usage of this method is in# http.server.CGIHTTPRequestHandler. Maybe move the code there so# that it doesn't need to be part of the public API. The API has# never been defined so this could cause backwards compatibility# issues.# See RFC 2616 sec 19.6 and RFC 1945 sec 6 for details.# The bytes from the socket object are iso-8859-1 strings.# See RFC 2616 sec 2.2 which notes an exception for MIME-encoded# text following RFC 2047. The basic status line parsing only# accepts iso-8859-1.# If the response includes a content-length header, we need to# make sure that the client doesn't read more than the# specified number of bytes. If it does, it will block until# the server times out and closes the connection. This will# happen if a self.fp.read() is done (without a size) whether# self.fp is buffered or not. So, no self.fp.read() by# clients unless they know what they are doing.# The HTTPResponse object is returned via urllib. The clients# of http and urllib expect different attributes for the# headers. headers is used here and supports urllib. msg is# provided as a backwards compatibility layer for http# clients.# from the Status-Line of the response# HTTP-Version# Status-Code# Reason-Phrase# is "chunked" being used?# bytes left to read in current chunk# number of bytes left in response# conn will close at end of response# Presumably, the server closed the connection before# sending a valid response.# empty version will cause next test to fail.# The status code is a three-digit number# we've already started reading the response# read until we get a non-100 response# skip the header from the 100 response# Some servers might still return "0.9", treat it as 1.0 anyway# use HTTP/1.1 code for HTTP/1.x where x>=1# are we using the chunked-style of transfer encoding?# will the connection close at the end of the response?# do we have a Content-Length?# NOTE: RFC 2616, S4.4, #3 says we ignore this if tr_enc is "chunked"# ignore nonsensical negative lengths# does the body have a fixed length? (of zero)# 1xx codes# if the connection remains open, and we aren't using chunked, and# a content-length was not provided, then assume that the connection# WILL close.# An HTTP/1.1 proxy is assumed to stay open unless# explicitly closed.# Some HTTP/1.0 implementations have support for persistent# connections, using rules different than HTTP/1.1.# For older HTTP, Keep-Alive indicates persistent connection.# At least Akamai returns a "Connection: Keep-Alive" header,# which was supposed to be sent by the client.# Proxy-Connection is a netscape hack.# otherwise, assume it will close# set "closed" flag# These implementations are for the benefit of io.BufferedReader.# XXX This class should probably be revised to act more like# the "raw stream" that BufferedReader expects.# End of "raw stream" methods# NOTE: it is possible that we will not ever call self.close(). 
This# case occurs when will_close is TRUE, length is None, and we# read up to the last byte, but NOT past it.# IMPLIES: if will_close is FALSE, then self.close() will ALWAYS be# called, meaning self.isclosed() is meaningful.# Amount is given, implement using readinto# Amount is not given (unbounded read) so we must check self.length# and self.chunked# we read everything# clip the read to the "end of response"# we do not use _safe_read() here because this may be a .will_close# connection, and the user is reading more bytes than will be provided# (for example, reading in 1k chunks)# Ideally, we would raise IncompleteRead if the content-length# wasn't satisfied, but it might break compatibility.# Read the next chunk size from the file# strip chunk-extensions# close the connection as protocol synchronisation is# probably lost# read and discard trailer up to the CRLF terminator### note: we shouldn't have any trailers!# a vanishingly small number of sites EOF without# sending the trailer# return self.chunk_left, reading a new chunk if necessary.# chunk_left == 0: at the end of the current chunk, need to close it# chunk_left == None: No current chunk, should read next.# This function returns non-zero or None if the last chunk has# been read.# Can be 0 or None# We are at the end of chunk, discard chunk end# toss the CRLF at the end of the chunk# last chunk: 1*("0") [ chunk-extension ] CRLF# we read everything; close the "file"# Having this enables IOBase.readline() to read more than one# byte at a time# Fallback to IOBase readline which uses peek() and read()# Strictly speaking, _get_chunk_left() may cause more than one read,# but that is ok, since that is to satisfy the chunked protocol.# if n is negative or larger than chunk_left# peek doesn't worry about protocol# eof# peek is allowed to return more than requested. Just request the# entire chunk, and truncate what we get.# We override IOBase.__iter__ so that it doesn't check for closed-ness# For compatibility with old-style urllib responses.# do an explicit check for not None here to distinguish# between unset and set but empty# file-like object.# does it implement the buffer protocol (bytes, bytearray, array)?# This is stored as an instance variable to allow unit# tests to replace it with a suitable mockup# ipv6 addresses have [...]# http://foo.com:/ == http://foo.com/# for sites which EOF without sending a trailer# close it manually... there may be other refs# create a consistent interface to message_body# Let file-like take precedence over byte-like. This# is needed to allow the current position of mmap'ed# files to be taken into account.# this is solely to check to see if message_body# implements the buffer API. it /would/ be easier# to capture if PyObject_CheckBuffer was exposed# to Python.# the object implements the buffer interface and# can be passed directly into socket methods# chunked encoding# end chunked transfer# if a prior response has been completed, then forget about it.# in certain cases, we cannot issue another request on this connection.# this occurs when:# 1) we are in the process of sending a request. (_CS_REQ_STARTED)# 2) a response to a previous request has signalled that it is going# to close the connection upon completion.# 3) the headers for the previous response have not been read, thus# we cannot determine whether point (2) is true. 
(_CS_REQ_SENT)# if there is no prior response, then we can request at will.# if point (2) is true, then we will have passed the socket to the# response (effectively meaning, "there is no prior response"), and# will open a new one when a new request is made.# Note: if a prior response exists, then we *can* start a new request.# We are not allowed to begin fetching the response to this new# request, however, until that prior response is complete.# Save the method for use later in the response phase# Issue some standard headers for better HTTP/1.1 compliance# this header is issued *only* for HTTP/1.1# connections. more specifically, this means it is# only issued when the client uses the new# HTTPConnection() class. backwards-compat clients# will be using HTTP/1.0 and those clients may be# issuing this header themselves. we should NOT issue# it twice; some web servers (such as Apache) barf# when they see two Host: headers# If we need a non-standard port,include it in the# header. If the request is going through a proxy,# but the host of the actual URL, not the host of the# proxy.# As per RFC 273, IPv6 address should be wrapped with []# when used as Host header# note: we are assuming that clients will not attempt to set these# headers since *this* library must deal with the# consequences. this also means that when the supporting# libraries are updated to recognize other forms, then this# code should be changed (removed or updated).# we only want a Content-Encoding of "identity" since we don't# support encodings such as x-gzip or x-deflate.# we can accept "chunked" Transfer-Encodings, but no others# NOTE: no TE header implies *only* "chunked"#self.putheader('TE', 'chunked')# if TE is supplied in the header, then it must appear in a# Connection header.#self.putheader('Connection', 'TE')# For HTTP/1.0, the server will assume "not chunked"# ASCII also helps prevent CVE-2019-9740.# prevent http header injection# Prevent CVE-2019-9740.# Prevent CVE-2019-18348.# Honor explicitly requested Host: and Accept-Encoding: headers.# chunked encoding will happen if HTTP/1.1 is used and either# the caller passes encode_chunked=True or the following# conditions hold:# 1. content-length has not been explicitly set# 2. the body is a file or iterable, but not a str or bytes-like# 3. 
Transfer-Encoding has NOT been explicitly set by the caller# only chunk body if not explicitly set for backwards# compatibility, assuming the client code is already handling the# chunking# if content-length cannot be automatically determined, fall# back to chunked encoding# RFC 2616 Section 3.7.1 says that text default has a# default charset of iso-8859-1.# if a prior response exists, then it must be completed (otherwise, we# cannot read this response's header to determine the connection-close# behavior)# note: if a prior response existed, but was connection-close, then the# socket and response were made independent of this HTTPConnection# object since a new request requires that we open a whole new# connection# this means the prior response had one of two states:# 1) will_close: this connection was reset and the prior socket and# response operate independently# 2) persistent: the response was retained and we await its# isclosed() status to become true.# this effectively passes the connection to the response# remember this, so we can tell when it is complete# XXX Should key_file and cert_file be deprecated in favour of context?# enable PHA for TLS 1.3 connections if available# cert and key file means the user wants to authenticate.# enable TLS 1.3 PHA implicitly even for custom contexts.# Subclasses that define an __init__ must call Exception.__init__# or define self.args. Otherwise, str() will fail.# for backwards compatibilityb'HTTP/1.1 client library + + + + +HTTPConnection goes through a number of "states", which define when a client +may legally make another request or fetch the response for a particular +request. This diagram details these state transitions: + + (null) + | + | HTTPConnection() + v + Idle + | + | putrequest() + v + Request-started + | + | ( putheader() )* endheaders() + v + Request-sent + |\_____________________________ + | | getresponse() raises + | response = getresponse() | ConnectionError + v v + Unread-response Idle + [Response-headers-read] + |\____________________ + | | + | response.read() | putrequest() + v v + Idle Req-started-unread-response + ______/| + / | + response.read() | | ( putheader() )* endheaders() + v v + Request-started Req-sent-unread-response + | + | response.read() + v + Request-sent + +This diagram presents the following rules: + -- a second request may not be started until {response-headers-read} + -- a response [object] cannot be retrieved until {request-sent} + -- there is no differentiation between an unread response body and a + partially read response body + +Note: this enforcement is applied by the HTTPConnection class. The + HTTPResponse class does not enforce this state machine, which + implies sophisticated clients may accelerate the request/response + pipeline. Caution should be taken, though: accelerating the states + beyond the above pattern may imply knowledge of the server's + connection-close behavior for certain requests. For example, it + is impossible to tell whether the server will close the connection + UNTIL the response headers have been read; this means that further + requests cannot be placed into the pipeline until it is known that + the server will NOT be closing the connection. 
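The comments above about SSL contexts versus the deprecated key_file/cert_file arguments apply to HTTPSConnection. The sketch below shows only the context-based form; the host is a placeholder and the default context (which verifies certificates) is assumed to be acceptable.

import http.client
import ssl

context = ssl.create_default_context()   # verifies server certificates by default
conn = http.client.HTTPSConnection("example.com", 443, context=context)
conn.request("GET", "/")
resp = conn.getresponse()
print(resp.status, resp.reason)
resp.read()
conn.close()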
+ +Logical State __state __response +------------- ------- ---------- +Idle _CS_IDLE None +Request-started _CS_REQ_STARTED None +Request-sent _CS_REQ_SENT None +Unread-response _CS_IDLE +Req-started-unread-response _CS_REQ_STARTED +Req-sent-unread-response _CS_REQ_SENT +'u'HTTP/1.1 client library + + + + +HTTPConnection goes through a number of "states", which define when a client +may legally make another request or fetch the response for a particular +request. This diagram details these state transitions: + + (null) + | + | HTTPConnection() + v + Idle + | + | putrequest() + v + Request-started + | + | ( putheader() )* endheaders() + v + Request-sent + |\_____________________________ + | | getresponse() raises + | response = getresponse() | ConnectionError + v v + Unread-response Idle + [Response-headers-read] + |\____________________ + | | + | response.read() | putrequest() + v v + Idle Req-started-unread-response + ______/| + / | + response.read() | | ( putheader() )* endheaders() + v v + Request-started Req-sent-unread-response + | + | response.read() + v + Request-sent + +This diagram presents the following rules: + -- a second request may not be started until {response-headers-read} + -- a response [object] cannot be retrieved until {request-sent} + -- there is no differentiation between an unread response body and a + partially read response body + +Note: this enforcement is applied by the HTTPConnection class. The + HTTPResponse class does not enforce this state machine, which + implies sophisticated clients may accelerate the request/response + pipeline. Caution should be taken, though: accelerating the states + beyond the above pattern may imply knowledge of the server's + connection-close behavior for certain requests. For example, it + is impossible to tell whether the server will close the connection + UNTIL the response headers have been read; this means that further + requests cannot be placed into the pipeline until it is known that + the server will NOT be closing the connection. + +Logical State __state __response +------------- ------- ---------- +Idle _CS_IDLE None +Request-started _CS_REQ_STARTED None +Request-sent _CS_REQ_SENT None +Unread-response _CS_IDLE +Req-started-unread-response _CS_REQ_STARTED +Req-sent-unread-response _CS_REQ_SENT +'b'HTTPResponse'u'HTTPResponse'b'HTTPConnection'u'HTTPConnection'b'HTTPException'u'HTTPException'b'NotConnected'u'NotConnected'b'UnknownProtocol'u'UnknownProtocol'b'UnknownTransferEncoding'u'UnknownTransferEncoding'b'UnimplementedFileMode'u'UnimplementedFileMode'b'IncompleteRead'u'IncompleteRead'b'InvalidURL'u'InvalidURL'b'ImproperConnectionState'u'ImproperConnectionState'b'CannotSendRequest'u'CannotSendRequest'b'CannotSendHeader'u'CannotSendHeader'b'ResponseNotReady'u'ResponseNotReady'b'BadStatusLine'u'BadStatusLine'b'LineTooLong'u'LineTooLong'b'RemoteDisconnected'u'RemoteDisconnected'b'responses'u'responses'b'UNKNOWN'u'UNKNOWN'b'Idle'u'Idle'b'Request-started'u'Request-started'b'Request-sent'u'Request-sent'b'[^:\s][^:\r\n]*'b'\n(?![ \t])|\r(?![ \t\n])'b'[- ]'u'[- ]'b'[-]'u'[-]'b'PATCH'u'PATCH'b'PUT'u'PUT'b'Call data.encode("latin-1") but show a better error message.'u'Call data.encode("latin-1") but show a better error message.'b'%s (%.20r) is not valid Latin-1. Use %s.encode('utf-8') if you want to send it encoded in UTF-8.'u'%s (%.20r) is not valid Latin-1. Use %s.encode('utf-8') if you want to send it encoded in UTF-8.'b'Find all header lines matching a given header name. 
+ + Look through the list of headers and find all lines matching a given + header name (and their continuation lines). A list of the lines is + returned, without interpretation. If the header does not occur, an + empty list is returned. If the header occurs multiple times, all + occurrences are returned. Case is not important in the header name. + + 'u'Find all header lines matching a given header name. + + Look through the list of headers and find all lines matching a given + header name (and their continuation lines). A list of the lines is + returned, without interpretation. If the header does not occur, an + empty list is returned. If the header occurs multiple times, all + occurrences are returned. Case is not important in the header name. + + 'b'Reads potential header lines into a list from a file pointer. + + Length of line is limited by _MAXLINE, and number of + headers is limited by _MAXHEADERS. + 'u'Reads potential header lines into a list from a file pointer. + + Length of line is limited by _MAXLINE, and number of + headers is limited by _MAXHEADERS. + 'b'header line'u'header line'b'got more than %d headers'u'got more than %d headers'b'Parses only RFC2822 headers from a file pointer. + + email Parser wants to see strings rather than bytes. + But a TextIOWrapper around self.rfile would buffer too many bytes + from the stream, bytes which we later need to read as bytes. + So we read the correct bytes here, as bytes, for email Parser + to parse. + + 'u'Parses only RFC2822 headers from a file pointer. + + email Parser wants to see strings rather than bytes. + But a TextIOWrapper around self.rfile would buffer too many bytes + from the stream, bytes which we later need to read as bytes. + So we read the correct bytes here, as bytes, for email Parser + to parse. + + 'b'status line'u'status line'b'reply:'u'reply:'b'Remote end closed connection without response'u'Remote end closed connection without response'b'HTTP/'u'HTTP/'b'headers:'u'headers:'b'HTTP/1.0'u'HTTP/1.0'b'HTTP/0.9'u'HTTP/0.9'b'HTTP/1.'u'HTTP/1.'b'header:'u'header:'b'transfer-encoding'u'transfer-encoding'b'chunked'u'chunked'b'HEAD'u'HEAD'b'connection'u'connection'b'keep-alive'u'keep-alive'b'proxy-connection'u'proxy-connection'b'Always returns True'u'Always returns True'b'True if the connection is closed.'u'True if the connection is closed.'b'Read up to len(b) bytes into bytearray b and return the number + of bytes read. + 'u'Read up to len(b) bytes into bytearray b and return the number + of bytes read. + 'b'chunk size'u'chunk size'b'trailer line'u'trailer line'b'Read the number of bytes requested. + + This function should be used when bytes "should" be present for + reading. If the bytes are truly not available (due to EOF), then the + IncompleteRead exception can be used to detect the problem. + 'u'Read the number of bytes requested. + + This function should be used when bytes "should" be present for + reading. If the bytes are truly not available (due to EOF), then the + IncompleteRead exception can be used to detect the problem. + 'b'Same as _safe_read, but for reading into a buffer.'u'Same as _safe_read, but for reading into a buffer.'b'Read with at most one underlying system call. If at least one + byte is buffered, return that instead. + 'u'Read with at most one underlying system call. If at least one + byte is buffered, return that instead. + 'b'Returns the value of the header matching *name*. + + If there are multiple matching headers, the values are + combined into a single string separated by commas and spaces. 
+ + If no matching header is found, returns *default* or None if + the *default* is not specified. + + If the headers are unknown, raises http.client.ResponseNotReady. + + 'u'Returns the value of the header matching *name*. + + If there are multiple matching headers, the values are + combined into a single string separated by commas and spaces. + + If no matching header is found, returns *default* or None if + the *default* is not specified. + + If the headers are unknown, raises http.client.ResponseNotReady. + + 'b'Return list of (header, value) tuples.'u'Return list of (header, value) tuples.'b'Returns an instance of the class mimetools.Message containing + meta-information associated with the URL. + + When the method is HTTP, these headers are those returned by + the server at the head of the retrieved HTML page (including + Content-Length and Content-Type). + + When the method is FTP, a Content-Length header will be + present if (as is now usual) the server passed back a file + length in response to the FTP retrieval request. A + Content-Type header will be present if the MIME type can be + guessed. + + When the method is local-file, returned headers will include + a Date representing the file's last-modified time, a + Content-Length giving file size, and a Content-Type + containing a guess at the file's type. See also the + description of the mimetools module. + + 'u'Returns an instance of the class mimetools.Message containing + meta-information associated with the URL. + + When the method is HTTP, these headers are those returned by + the server at the head of the retrieved HTML page (including + Content-Length and Content-Type). + + When the method is FTP, a Content-Length header will be + present if (as is now usual) the server passed back a file + length in response to the FTP retrieval request. A + Content-Type header will be present if the MIME type can be + guessed. + + When the method is local-file, returned headers will include + a Date representing the file's last-modified time, a + Content-Length giving file size, and a Content-Type + containing a guess at the file's type. See also the + description of the mimetools module. + + 'b'Return the real URL of the page. + + In some cases, the HTTP server redirects a client to another + URL. The urlopen() function handles this transparently, but in + some cases the caller needs to know which URL the client was + redirected to. The geturl() method can be used to get at this + redirected URL. + + 'u'Return the real URL of the page. + + In some cases, the HTTP server redirects a client to another + URL. The urlopen() function handles this transparently, but in + some cases the caller needs to know which URL the client was + redirected to. The geturl() method can be used to get at this + redirected URL. + + 'b'Return the HTTP status code that was sent with the response, + or None if the URL is not an HTTP URL. + + 'u'Return the HTTP status code that was sent with the response, + or None if the URL is not an HTTP URL. + + 'b'HTTP/1.1'u'HTTP/1.1'b'Test whether a file-like object is a text or a binary stream. + 'u'Test whether a file-like object is a text or a binary stream. + 'b'Get the content-length based on the body. + + If the body is None, we set Content-Length: 0 for methods that expect + a body (RFC 7230, Section 3.3.2). We also set the Content-Length for + any method if the body is a str or bytes-like object and not a file. + 'u'Get the content-length based on the body. 
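The docstrings above cover the response metadata accessors (getheaders(), info(), geturl(), the HTTP status code). A hedged sketch using urllib.request, which returns these http.client response objects; it assumes network access and that example.com answers:

from urllib.request import urlopen

with urlopen("http://example.com/") as resp:
    print(resp.status)                    # HTTP status code, e.g. 200
    print(resp.geturl())                  # final URL after any redirects
    print(resp.headers["Content-Type"])   # parsed header access
    print(resp.getheaders()[:3])          # list of (header, value) tuples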
+ + If the body is None, we set Content-Length: 0 for methods that expect + a body (RFC 7230, Section 3.3.2). We also set the Content-Length for + any method if the body is a str or bytes-like object and not a file. + 'b'Set up host and port for HTTP CONNECT tunnelling. + + In a connection that uses HTTP CONNECT tunneling, the host passed to the + constructor is used as a proxy server that relays all communication to + the endpoint passed to `set_tunnel`. This done by sending an HTTP + CONNECT request to the proxy server when the connection is established. + + This method must be called before the HTTP connection has been + established. + + The headers argument should be a mapping of extra HTTP headers to send + with the CONNECT request. + 'u'Set up host and port for HTTP CONNECT tunnelling. + + In a connection that uses HTTP CONNECT tunneling, the host passed to the + constructor is used as a proxy server that relays all communication to + the endpoint passed to `set_tunnel`. This done by sending an HTTP + CONNECT request to the proxy server when the connection is established. + + This method must be called before the HTTP connection has been + established. + + The headers argument should be a mapping of extra HTTP headers to send + with the CONNECT request. + 'b'Can't set up tunnel for established connection'u'Can't set up tunnel for established connection'b'nonnumeric port: '%s''u'nonnumeric port: '%s''b'CONNECT %s:%d HTTP/1.0 +'u'CONNECT %s:%d HTTP/1.0 +'b'%s: %s +'u'%s: %s +'b'Tunnel connection failed: %d %s'u'Tunnel connection failed: %d %s'b'Connect to the host and port specified in __init__.'u'Connect to the host and port specified in __init__.'b'Close the connection to the HTTP server.'u'Close the connection to the HTTP server.'b'Send `data' to the server. + ``data`` can be a string object, a bytes object, an array object, a + file-like object that supports a .read() method, or an iterable object. + 'u'Send `data' to the server. + ``data`` can be a string object, a bytes object, an array object, a + file-like object that supports a .read() method, or an iterable object. + 'b'send:'u'send:'b'sendIng a read()able'u'sendIng a read()able'b'encoding file using iso-8859-1'u'encoding file using iso-8859-1'b'data should be a bytes-like object or an iterable, got %r'u'data should be a bytes-like object or an iterable, got %r'b'Add a line of output to the current request buffer. + + Assumes that the line does *not* end with \r\n. + 'u'Add a line of output to the current request buffer. + + Assumes that the line does *not* end with \r\n. + 'b'Send the currently buffered request and clear the buffer. + + Appends an extra \r\n to the buffer. + A message_body may be specified, to be appended to the request. + 'u'Send the currently buffered request and clear the buffer. + + Appends an extra \r\n to the buffer. + A message_body may be specified, to be appended to the request. + 'b'message_body should be a bytes-like object or an iterable, got %r'u'message_body should be a bytes-like object or an iterable, got %r'b'Zero length chunk ignored'u'Zero length chunk ignored'b'0 + +'b'Send a request to the server. + + `method' specifies an HTTP request method, e.g. 'GET'. + `url' specifies the object being requested, e.g. '/index.html'. + `skip_host' if True does not add automatically a 'Host:' header + `skip_accept_encoding' if True does not add automatically an + 'Accept-Encoding:' header + 'u'Send a request to the server. + + `method' specifies an HTTP request method, e.g. 'GET'. 
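The set_tunnel() docstring above describes HTTP CONNECT tunnelling: the host given to the constructor acts as a proxy, and set_tunnel() names the real endpoint before the connection is made. A sketch under the assumption that a CONNECT-capable proxy is reachable at proxy.example.com:3128 (a placeholder, not a real endpoint):

import http.client

conn = http.client.HTTPSConnection("proxy.example.com", 3128)
# Must be called before the connection is established; an optional
# headers= mapping adds extra headers to the CONNECT request.
conn.set_tunnel("www.python.org", 443)
conn.request("HEAD", "/")
print(conn.getresponse().status)
conn.close()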
+ `url' specifies the object being requested, e.g. '/index.html'. + `skip_host' if True does not add automatically a 'Host:' header + `skip_accept_encoding' if True does not add automatically an + 'Accept-Encoding:' header + 'b'%s %s %s'u'%s %s %s'b'Host'u'Host'b'identity'u'identity'b'Validate a method name for putrequest.'u'Validate a method name for putrequest.'b'method can't contain control characters. 'u'method can't contain control characters. 'b' (found at least 'u' (found at least 'b'Validate a url for putrequest.'u'Validate a url for putrequest.'b'URL can't contain control characters. 'u'URL can't contain control characters. 'b'Validate a host so it doesn't contain control characters.'u'Validate a host so it doesn't contain control characters.'b'Send a request header line to the server. + + For example: h.putheader('Accept', 'text/html') + 'u'Send a request header line to the server. + + For example: h.putheader('Accept', 'text/html') + 'b'Invalid header name %r'u'Invalid header name %r'b'Invalid header value %r'u'Invalid header value %r'b' + 'b'Indicate that the last header line has been sent to the server. + + This method sends the request to the server. The optional message_body + argument can be used to pass a message body associated with the + request. + 'u'Indicate that the last header line has been sent to the server. + + This method sends the request to the server. The optional message_body + argument can be used to pass a message body associated with the + request. + 'b'Send a complete request to the server.'u'Send a complete request to the server.'b'host'u'host'b'skip_host'u'skip_host'b'accept-encoding'u'accept-encoding'b'skip_accept_encoding'u'skip_accept_encoding'b'Unable to determine size of %r'u'Unable to determine size of %r'b'Transfer-Encoding'u'Transfer-Encoding'b'body'b'Get the response from the server. + + If the HTTPConnection is in the correct state, returns an + instance of HTTPResponse or of whatever object is returned by + the response_class variable. + + If a request has not been sent or if a previous response has + not be handled, ResponseNotReady is raised. If the HTTP + response indicates that the connection should be closed, then + it will be closed before the response is returned. When the + connection is closed, the underlying socket is closed. + 'u'Get the response from the server. + + If the HTTPConnection is in the correct state, returns an + instance of HTTPResponse or of whatever object is returned by + the response_class variable. + + If a request has not been sent or if a previous response has + not be handled, ResponseNotReady is raised. If the HTTP + response indicates that the connection should be closed, then + it will be closed before the response is returned. When the + connection is closed, the underlying socket is closed. 
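The strings above belong to the lower-level request-building methods (putrequest, putheader, endheaders, getresponse). A sketch of driving them directly instead of the one-shot request(); it assumes network access to example.com:

import http.client

conn = http.client.HTTPConnection("example.com", 80)
conn.putrequest("GET", "/")              # request line; adds Host: automatically
conn.putheader("Accept", "text/html")    # one header line per call
conn.putheader("User-Agent", "demo/0.1")
conn.endheaders()                        # sends the blank line (optionally a body)
resp = conn.getresponse()
print(resp.status, resp.reason)
conn.close()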
+ 'b'This class allows communication via SSL.'u'This class allows communication via SSL.'b'key_file, cert_file and check_hostname are deprecated, use a custom context instead.'u'key_file, cert_file and check_hostname are deprecated, use a custom context instead.'b'check_hostname needs a SSL context with either CERT_OPTIONAL or CERT_REQUIRED'u'check_hostname needs a SSL context with either CERT_OPTIONAL or CERT_REQUIRED'b'Connect to a host on a given (SSL) port.'u'Connect to a host on a given (SSL) port.'b', %i more expected'u', %i more expected'b'%s(%i bytes read%s)'u'%s(%i bytes read%s)'b'got more than %d bytes when reading %s'u'got more than %d bytes when reading %s'A generic class to build line-oriented command interpreters. + +Interpreters constructed with this class obey the following conventions: + +1. End of file on input is processed as the command 'EOF'. +2. A command is parsed out of each line by collecting the prefix composed + of characters in the identchars member. +3. A command `foo' is dispatched to a method 'do_foo()'; the do_ method + is passed a single argument consisting of the remainder of the line. +4. Typing an empty line repeats the last command. (Actually, it calls the + method `emptyline', which may be overridden in a subclass.) +5. There is a predefined `help' method. Given an argument `topic', it + calls the command `help_topic'. With no arguments, it lists all topics + with defined help_ functions, broken into up to three topics; documented + commands, miscellaneous help topics, and undocumented commands. +6. The command '?' is a synonym for `help'. The command '!' is a synonym + for `shell', if a do_shell method exists. +7. If completion is enabled, completing commands will be done automatically, + and completing of commands args is done by calling complete_foo() with + arguments text, line, begidx, endidx. text is string we are matching + against, all returned matches must begin with it. line is the current + input line (lstripped), begidx and endidx are the beginning and end + indexes of the text being matched, which could be used to provide + different completion depending upon which position the argument is in. + +The `default' method may be overridden to intercept commands for which there +is no do_ method. + +The `completedefault' method may be overridden to intercept completions for +commands that have no complete_ method. + +The data member `self.ruler' sets the character used to draw separator lines +in the help messages. If empty, no ruler line is drawn. It defaults to "=". + +If the value of `self.intro' is nonempty when the cmdloop method is called, +it is printed out on interpreter startup. This value may be overridden +via an optional argument to the cmdloop() method. + +The data members `self.doc_header', `self.misc_header', and +`self.undoc_header' set the headers used for the help function's +listings of documented functions, miscellaneous topics, and undocumented +functions respectively. +Cmd(Cmd) PROMPTIDENTCHARSA simple framework for writing line-oriented command interpreters. + + These are often useful for test harnesses, administrative tools, and + prototypes that will later be wrapped in a more sophisticated interface. + + A Cmd instance or subclass instance is a line-oriented interpreter + framework. There is no good reason to instantiate Cmd itself; rather, + it's useful as a superclass of an interpreter class you define yourself + in order to inherit Cmd's methods and encapsulate action methods. 
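The HTTPSConnection strings above note that key_file, cert_file and check_hostname are deprecated in favour of a custom context. A sketch passing an ssl.SSLContext instead; it assumes network access to www.python.org:

import http.client
import ssl

ctx = ssl.create_default_context()       # verifies certificates and hostnames
conn = http.client.HTTPSConnection("www.python.org", 443, context=ctx)
conn.request("HEAD", "/")
print(conn.getresponse().status)
conn.close()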
+ + promptidentcharsrulerlastcmdintrodoc_leaderDocumented commands (type help ):doc_headerMiscellaneous help topics:misc_headerUndocumented commands:undoc_header*** No help on %snohelpuse_rawinputtabcompletekeyInstantiate a line-oriented interpreter framework. + + The optional argument 'completekey' is the readline name of a + completion key; it defaults to the Tab key. If completekey is + not None and the readline module is available, command completion + is done automatically. The optional arguments stdin and stdout + specify alternate input and output file objects; if not specified, + sys.stdin and sys.stdout are used. + + cmdqueuecmdloopRepeatedly issue a prompt, accept input, parse an initial prefix + off the received input, and dispatch to action methods, passing them + the remainder of the line as argument. + + preloopget_completerold_completerset_completercompleteparse_and_bind: completeEOFprecmdonecmdpostcmdpostloopHook method executed just before the command line is + interpreted, but after the input prompt is generated and issued. + + Hook method executed just after a command dispatch is finished.Hook method executed once when the cmdloop() method is called.Hook method executed once when the cmdloop() method is about to + return. + + parselineParse the line into a command name and a string containing + the arguments. Returns a tuple containing (command, args, line). + 'command' and 'args' may be None if the line couldn't be parsed. + help do_shellshell Interpret the argument as though it had been typed in response + to the prompt. + + This may be overridden, but should not normally need to be; + see the precmd() and postcmd() methods for useful execution hooks. + The return value is a flag indicating whether interpretation of + commands by the interpreter should stop. + + emptylinedo_Called when an empty line is entered in response to the prompt. + + If this method is not overridden, it repeats the last nonempty + command entered. + + Called on an input line when the command prefix is not recognized. + + If this method is not overridden, it prints an error message and + returns. + + *** Unknown syntax: %s +completedefaultignoredMethod called to complete an input line when no command-specific + complete_*() method is available. + + By default, it returns an empty list. + + completenamesdotextget_namesReturn the next possible completion for 'text'. + + If a command has not been entered, then complete against command list. + Otherwise try to call complete_ to get list of completions. + get_line_bufferoriglinestrippedget_begidxbegidxget_endidxendidxcompfunccomplete_completion_matchescomplete_helphelp_topicsdo_helpList available commands with "help" or detailed help with "help cmd".%s +cmds_doccmds_undocprevnameprint_topicscmdscmdlenmaxcolcolumnizedisplaywidthDisplay a list of strings as a compact set of columns. + + Each column is only as wide as necessary. + Columns are separated by two spaces (one was not legible enough). + +nonstringslist[i] not a string for i in %snrowsncolscolwidthstotwidthtexts # This method used to pull in base class attributes# at a time dir() didn't do it yet.# XXX check arg syntax# There can be duplicates if routines overridden# Try every row count from 1 upwardsb'A generic class to build line-oriented command interpreters. + +Interpreters constructed with this class obey the following conventions: + +1. End of file on input is processed as the command 'EOF'. +2. 
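The cmd module docstring and Cmd attributes above spell out the do_*/help_*/complete_* conventions, the prompt, and cmdloop(). A minimal interpreter sketch following those conventions; the command names are made up for illustration:

import cmd

class Shell(cmd.Cmd):
    intro = "Type help or ? to list commands."
    prompt = "(demo) "

    def do_greet(self, arg):
        """Greet the given name: greet NAME"""
        print(f"Hello, {arg or 'world'}!")

    def complete_greet(self, text, line, begidx, endidx):
        # Offer a couple of canned names that start with the typed prefix.
        return [n for n in ("alice", "bob") if n.startswith(text)]

    def do_EOF(self, arg):
        """Exit on end-of-file (Ctrl-D)."""
        return True          # a true return value stops cmdloop()

if __name__ == "__main__":
    Shell().cmdloop()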
A command is parsed out of each line by collecting the prefix composed + of characters in the identchars member. +3. A command `foo' is dispatched to a method 'do_foo()'; the do_ method + is passed a single argument consisting of the remainder of the line. +4. Typing an empty line repeats the last command. (Actually, it calls the + method `emptyline', which may be overridden in a subclass.) +5. There is a predefined `help' method. Given an argument `topic', it + calls the command `help_topic'. With no arguments, it lists all topics + with defined help_ functions, broken into up to three topics; documented + commands, miscellaneous help topics, and undocumented commands. +6. The command '?' is a synonym for `help'. The command '!' is a synonym + for `shell', if a do_shell method exists. +7. If completion is enabled, completing commands will be done automatically, + and completing of commands args is done by calling complete_foo() with + arguments text, line, begidx, endidx. text is string we are matching + against, all returned matches must begin with it. line is the current + input line (lstripped), begidx and endidx are the beginning and end + indexes of the text being matched, which could be used to provide + different completion depending upon which position the argument is in. + +The `default' method may be overridden to intercept commands for which there +is no do_ method. + +The `completedefault' method may be overridden to intercept completions for +commands that have no complete_ method. + +The data member `self.ruler' sets the character used to draw separator lines +in the help messages. If empty, no ruler line is drawn. It defaults to "=". + +If the value of `self.intro' is nonempty when the cmdloop method is called, +it is printed out on interpreter startup. This value may be overridden +via an optional argument to the cmdloop() method. + +The data members `self.doc_header', `self.misc_header', and +`self.undoc_header' set the headers used for the help function's +listings of documented functions, miscellaneous topics, and undocumented +functions respectively. +'u'A generic class to build line-oriented command interpreters. + +Interpreters constructed with this class obey the following conventions: + +1. End of file on input is processed as the command 'EOF'. +2. A command is parsed out of each line by collecting the prefix composed + of characters in the identchars member. +3. A command `foo' is dispatched to a method 'do_foo()'; the do_ method + is passed a single argument consisting of the remainder of the line. +4. Typing an empty line repeats the last command. (Actually, it calls the + method `emptyline', which may be overridden in a subclass.) +5. There is a predefined `help' method. Given an argument `topic', it + calls the command `help_topic'. With no arguments, it lists all topics + with defined help_ functions, broken into up to three topics; documented + commands, miscellaneous help topics, and undocumented commands. +6. The command '?' is a synonym for `help'. The command '!' is a synonym + for `shell', if a do_shell method exists. +7. If completion is enabled, completing commands will be done automatically, + and completing of commands args is done by calling complete_foo() with + arguments text, line, begidx, endidx. text is string we are matching + against, all returned matches must begin with it. 
line is the current + input line (lstripped), begidx and endidx are the beginning and end + indexes of the text being matched, which could be used to provide + different completion depending upon which position the argument is in. + +The `default' method may be overridden to intercept commands for which there +is no do_ method. + +The `completedefault' method may be overridden to intercept completions for +commands that have no complete_ method. + +The data member `self.ruler' sets the character used to draw separator lines +in the help messages. If empty, no ruler line is drawn. It defaults to "=". + +If the value of `self.intro' is nonempty when the cmdloop method is called, +it is printed out on interpreter startup. This value may be overridden +via an optional argument to the cmdloop() method. + +The data members `self.doc_header', `self.misc_header', and +`self.undoc_header' set the headers used for the help function's +listings of documented functions, miscellaneous topics, and undocumented +functions respectively. +'b'Cmd'u'Cmd'b'(Cmd) 'u'(Cmd) 'b'A simple framework for writing line-oriented command interpreters. + + These are often useful for test harnesses, administrative tools, and + prototypes that will later be wrapped in a more sophisticated interface. + + A Cmd instance or subclass instance is a line-oriented interpreter + framework. There is no good reason to instantiate Cmd itself; rather, + it's useful as a superclass of an interpreter class you define yourself + in order to inherit Cmd's methods and encapsulate action methods. + + 'u'A simple framework for writing line-oriented command interpreters. + + These are often useful for test harnesses, administrative tools, and + prototypes that will later be wrapped in a more sophisticated interface. + + A Cmd instance or subclass instance is a line-oriented interpreter + framework. There is no good reason to instantiate Cmd itself; rather, + it's useful as a superclass of an interpreter class you define yourself + in order to inherit Cmd's methods and encapsulate action methods. + + 'b'Documented commands (type help ):'u'Documented commands (type help ):'b'Miscellaneous help topics:'u'Miscellaneous help topics:'b'Undocumented commands:'u'Undocumented commands:'b'*** No help on %s'u'*** No help on %s'b'tab'u'tab'b'Instantiate a line-oriented interpreter framework. + + The optional argument 'completekey' is the readline name of a + completion key; it defaults to the Tab key. If completekey is + not None and the readline module is available, command completion + is done automatically. The optional arguments stdin and stdout + specify alternate input and output file objects; if not specified, + sys.stdin and sys.stdout are used. + + 'u'Instantiate a line-oriented interpreter framework. + + The optional argument 'completekey' is the readline name of a + completion key; it defaults to the Tab key. If completekey is + not None and the readline module is available, command completion + is done automatically. The optional arguments stdin and stdout + specify alternate input and output file objects; if not specified, + sys.stdin and sys.stdout are used. + + 'b'Repeatedly issue a prompt, accept input, parse an initial prefix + off the received input, and dispatch to action methods, passing them + the remainder of the line as argument. + + 'u'Repeatedly issue a prompt, accept input, parse an initial prefix + off the received input, and dispatch to action methods, passing them + the remainder of the line as argument. 
+ + 'b': complete'u': complete'b'EOF'u'EOF'b'Hook method executed just before the command line is + interpreted, but after the input prompt is generated and issued. + + 'u'Hook method executed just before the command line is + interpreted, but after the input prompt is generated and issued. + + 'b'Hook method executed just after a command dispatch is finished.'u'Hook method executed just after a command dispatch is finished.'b'Hook method executed once when the cmdloop() method is called.'u'Hook method executed once when the cmdloop() method is called.'b'Hook method executed once when the cmdloop() method is about to + return. + + 'u'Hook method executed once when the cmdloop() method is about to + return. + + 'b'Parse the line into a command name and a string containing + the arguments. Returns a tuple containing (command, args, line). + 'command' and 'args' may be None if the line couldn't be parsed. + 'u'Parse the line into a command name and a string containing + the arguments. Returns a tuple containing (command, args, line). + 'command' and 'args' may be None if the line couldn't be parsed. + 'b'help 'u'help 'u'!'b'do_shell'u'do_shell'b'shell 'u'shell 'b'Interpret the argument as though it had been typed in response + to the prompt. + + This may be overridden, but should not normally need to be; + see the precmd() and postcmd() methods for useful execution hooks. + The return value is a flag indicating whether interpretation of + commands by the interpreter should stop. + + 'u'Interpret the argument as though it had been typed in response + to the prompt. + + This may be overridden, but should not normally need to be; + see the precmd() and postcmd() methods for useful execution hooks. + The return value is a flag indicating whether interpretation of + commands by the interpreter should stop. + + 'b'do_'u'do_'b'Called when an empty line is entered in response to the prompt. + + If this method is not overridden, it repeats the last nonempty + command entered. + + 'u'Called when an empty line is entered in response to the prompt. + + If this method is not overridden, it repeats the last nonempty + command entered. + + 'b'Called on an input line when the command prefix is not recognized. + + If this method is not overridden, it prints an error message and + returns. + + 'u'Called on an input line when the command prefix is not recognized. + + If this method is not overridden, it prints an error message and + returns. + + 'b'*** Unknown syntax: %s +'u'*** Unknown syntax: %s +'b'Method called to complete an input line when no command-specific + complete_*() method is available. + + By default, it returns an empty list. + + 'u'Method called to complete an input line when no command-specific + complete_*() method is available. + + By default, it returns an empty list. + + 'b'Return the next possible completion for 'text'. + + If a command has not been entered, then complete against command list. + Otherwise try to call complete_ to get list of completions. + 'u'Return the next possible completion for 'text'. + + If a command has not been entered, then complete against command list. + Otherwise try to call complete_ to get list of completions. + 'b'complete_'u'complete_'b'help_'u'help_'b'List available commands with "help" or detailed help with "help cmd".'u'List available commands with "help" or detailed help with "help cmd".'b'%s +'u'%s +'b'Display a list of strings as a compact set of columns. + + Each column is only as wide as necessary. 
+ Columns are separated by two spaces (one was not legible enough). + 'u'Display a list of strings as a compact set of columns. + + Each column is only as wide as necessary. + Columns are separated by two spaces (one was not legible enough). + 'b' +'u' +'b'list[i] not a string for i in %s'u'list[i] not a string for i in %s'b' 'u' 'u'cmd'Utilities needed to emulate Python's interactive interpreter. + +codeopCommandCompilercompile_commandInteractiveInterpreterInteractiveConsoleinteractBase class for InteractiveConsole. + + This class deals with parsing and interpreter state (the user's + namespace); it doesn't deal with input buffering or prompting or + input file naming (the filename is always passed in explicitly). + + Constructor. + + The optional 'locals' argument specifies the dictionary in + which code will be executed; it defaults to a newly created + dictionary with key "__name__" set to "__console__" and key + "__doc__" set to None. + + __console__runsourcesinglesymbolCompile and run some source in the interpreter. + + Arguments are as for compile_command(). + + One of several things can happen: + + 1) The input is incorrect; compile_command() raised an + exception (SyntaxError or OverflowError). A syntax traceback + will be printed by calling the showsyntaxerror() method. + + 2) The input is incomplete, and more input is required; + compile_command() returned None. Nothing happens. + + 3) The input is complete; compile_command() returned a code + object. The code is executed by calling self.runcode() (which + also handles run-time exceptions, except for SystemExit). + + The return value is True in case 2, False in the other cases (unless + an exception is raised). The return value can be used to + decide whether to use sys.ps1 or sys.ps2 to prompt the next + line. + + showsyntaxerrorruncodeExecute a code object. + + When an exception occurs, self.showtraceback() is called to + display a traceback. All exceptions are caught except + SystemExit, which is reraised. + + A note about KeyboardInterrupt: this exception may occur + elsewhere in this code, and may not always be caught. The + caller should be prepared to deal with it. + + showtracebackDisplay the syntax error that just occurred. + + This doesn't display a stack trace because there isn't one. + + If a filename is given, it is stuffed in the exception instead + of what was there before (because Python's parser always uses + "" when reading from a string). + + The output is written by self.write(), below. + + dummy_filenameDisplay the exception that just occurred. + + We remove the first stack item because it is our own code. + + The output is written by self.write(), below. + + last_tbformat_exceptionWrite a string. + + The base implementation writes to sys.stderr; a subclass may + replace this with a different implementation. + + Closely emulate the behavior of the interactive Python interpreter. + + This class builds on InteractiveInterpreter and adds prompting + using the familiar sys.ps1 and sys.ps2, and input buffering. + + Constructor. + + The optional locals argument will be passed to the + InteractiveInterpreter base class. + + The optional filename argument should specify the (file)name + of the input stream; it will show up in tracebacks. + + resetbufferReset the input buffer.bannerexitmsgClosely emulate the interactive Python console. 
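The code module docstrings above describe InteractiveInterpreter.runsource() and its three outcomes: complete input is executed, incomplete input asks for more, and invalid input prints a syntax traceback. A short sketch exercising all three cases:

import code

interp = code.InteractiveInterpreter()
print(interp.runsource("x = 40 + 2; print(x)"))  # executes, prints 42, returns False
print(interp.runsource("def f():"))              # incomplete input, returns True
print(interp.runsource("1 +* 2"))                # syntax traceback shown, returns False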
+ + The optional banner argument specifies the banner to print + before the first interaction; by default it prints a banner + similar to the one printed by the real Python interpreter, + followed by the current class name in parentheses (so as not + to confuse this with the real interpreter -- since it's so + close!). + + The optional exitmsg argument specifies the exit message + printed when exiting. Pass the empty string to suppress + printing an exit message. If exitmsg is not given or None, + a default message is printed. + + ps1>>> ps2... Type "help", "copyright", "credits" or "license" for more information.cprtPython %s on %s +%s +(%s) +moreraw_inputpush +KeyboardInterrupt +now exiting %s... +Push a line to the interpreter. + + The line should not have a trailing newline; it may have + internal newlines. The line is appended to a buffer and the + interpreter's runsource() method is called with the + concatenated contents of the buffer as source. If this + indicates that the command was executed or invalid, the buffer + is reset; otherwise, the command is incomplete, and the buffer + is left as it was after the line was appended. The return + value is 1 if more input is required, 0 if the line was dealt + with in some way (this is the same as runsource()). + + Write a prompt and read a line. + + The returned line does not include the trailing newline. + When the user enters the EOF key sequence, EOFError is raised. + + The base implementation uses the built-in function + input(); a subclass may replace this with a different + implementation. + + readfuncClosely emulate the interactive Python interpreter. + + This is a backwards compatible interface to the InteractiveConsole + class. When readfunc is not specified, it attempts to import the + readline module to enable GNU readline if it is available. + + Arguments (all optional, all default to None): + + banner -- passed to InteractiveConsole.interact() + readfunc -- if not None, replaces InteractiveConsole.raw_input() + local -- passed to InteractiveInterpreter.__init__() + exitmsg -- passed to InteractiveConsole.interact() + + console-qdon't print version and copyright messages# Inspired by similar code by Jeff Epler and Fredrik Lundh.# Case 1# Case 2# Case 3# Work hard to stuff the correct filename in the exception# Not the format we expect; leave it alone# Stuff in the right filename# If someone has set sys.excepthook, we let that take precedence# over self.writeb'Utilities needed to emulate Python's interactive interpreter. + +'u'Utilities needed to emulate Python's interactive interpreter. + +'b'InteractiveInterpreter'u'InteractiveInterpreter'b'InteractiveConsole'u'InteractiveConsole'b'interact'u'interact'b'compile_command'u'compile_command'b'Base class for InteractiveConsole. + + This class deals with parsing and interpreter state (the user's + namespace); it doesn't deal with input buffering or prompting or + input file naming (the filename is always passed in explicitly). + + 'u'Base class for InteractiveConsole. + + This class deals with parsing and interpreter state (the user's + namespace); it doesn't deal with input buffering or prompting or + input file naming (the filename is always passed in explicitly). + + 'b'Constructor. + + The optional 'locals' argument specifies the dictionary in + which code will be executed; it defaults to a newly created + dictionary with key "__name__" set to "__console__" and key + "__doc__" set to None. + + 'u'Constructor. 
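The docstrings above describe InteractiveConsole.push(): lines accumulate in a buffer, and the return value signals whether more input is required before the buffered source can run. A sketch of that buffering contract:

import code

console = code.InteractiveConsole()
print(console.push("for i in range(2):"))  # true value: more input required
print(console.push("    print(i)"))        # still inside the indented block
print(console.push(""))                    # blank line ends the block; it runs and prints 0, 1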
+ + The optional 'locals' argument specifies the dictionary in + which code will be executed; it defaults to a newly created + dictionary with key "__name__" set to "__console__" and key + "__doc__" set to None. + + 'b'__console__'u'__console__'b''u''b'single'u'single'b'Compile and run some source in the interpreter. + + Arguments are as for compile_command(). + + One of several things can happen: + + 1) The input is incorrect; compile_command() raised an + exception (SyntaxError or OverflowError). A syntax traceback + will be printed by calling the showsyntaxerror() method. + + 2) The input is incomplete, and more input is required; + compile_command() returned None. Nothing happens. + + 3) The input is complete; compile_command() returned a code + object. The code is executed by calling self.runcode() (which + also handles run-time exceptions, except for SystemExit). + + The return value is True in case 2, False in the other cases (unless + an exception is raised). The return value can be used to + decide whether to use sys.ps1 or sys.ps2 to prompt the next + line. + + 'u'Compile and run some source in the interpreter. + + Arguments are as for compile_command(). + + One of several things can happen: + + 1) The input is incorrect; compile_command() raised an + exception (SyntaxError or OverflowError). A syntax traceback + will be printed by calling the showsyntaxerror() method. + + 2) The input is incomplete, and more input is required; + compile_command() returned None. Nothing happens. + + 3) The input is complete; compile_command() returned a code + object. The code is executed by calling self.runcode() (which + also handles run-time exceptions, except for SystemExit). + + The return value is True in case 2, False in the other cases (unless + an exception is raised). The return value can be used to + decide whether to use sys.ps1 or sys.ps2 to prompt the next + line. + + 'b'Execute a code object. + + When an exception occurs, self.showtraceback() is called to + display a traceback. All exceptions are caught except + SystemExit, which is reraised. + + A note about KeyboardInterrupt: this exception may occur + elsewhere in this code, and may not always be caught. The + caller should be prepared to deal with it. + + 'u'Execute a code object. + + When an exception occurs, self.showtraceback() is called to + display a traceback. All exceptions are caught except + SystemExit, which is reraised. + + A note about KeyboardInterrupt: this exception may occur + elsewhere in this code, and may not always be caught. The + caller should be prepared to deal with it. + + 'b'Display the syntax error that just occurred. + + This doesn't display a stack trace because there isn't one. + + If a filename is given, it is stuffed in the exception instead + of what was there before (because Python's parser always uses + "" when reading from a string). + + The output is written by self.write(), below. + + 'u'Display the syntax error that just occurred. + + This doesn't display a stack trace because there isn't one. + + If a filename is given, it is stuffed in the exception instead + of what was there before (because Python's parser always uses + "" when reading from a string). + + The output is written by self.write(), below. + + 'b'Display the exception that just occurred. + + We remove the first stack item because it is our own code. + + The output is written by self.write(), below. + + 'u'Display the exception that just occurred. + + We remove the first stack item because it is our own code. 
+ + The output is written by self.write(), below. + + 'b'Write a string. + + The base implementation writes to sys.stderr; a subclass may + replace this with a different implementation. + + 'u'Write a string. + + The base implementation writes to sys.stderr; a subclass may + replace this with a different implementation. + + 'b'Closely emulate the behavior of the interactive Python interpreter. + + This class builds on InteractiveInterpreter and adds prompting + using the familiar sys.ps1 and sys.ps2, and input buffering. + + 'u'Closely emulate the behavior of the interactive Python interpreter. + + This class builds on InteractiveInterpreter and adds prompting + using the familiar sys.ps1 and sys.ps2, and input buffering. + + 'b''u''b'Constructor. + + The optional locals argument will be passed to the + InteractiveInterpreter base class. + + The optional filename argument should specify the (file)name + of the input stream; it will show up in tracebacks. + + 'u'Constructor. + + The optional locals argument will be passed to the + InteractiveInterpreter base class. + + The optional filename argument should specify the (file)name + of the input stream; it will show up in tracebacks. + + 'b'Reset the input buffer.'u'Reset the input buffer.'b'Closely emulate the interactive Python console. + + The optional banner argument specifies the banner to print + before the first interaction; by default it prints a banner + similar to the one printed by the real Python interpreter, + followed by the current class name in parentheses (so as not + to confuse this with the real interpreter -- since it's so + close!). + + The optional exitmsg argument specifies the exit message + printed when exiting. Pass the empty string to suppress + printing an exit message. If exitmsg is not given or None, + a default message is printed. + + 'u'Closely emulate the interactive Python console. + + The optional banner argument specifies the banner to print + before the first interaction; by default it prints a banner + similar to the one printed by the real Python interpreter, + followed by the current class name in parentheses (so as not + to confuse this with the real interpreter -- since it's so + close!). + + The optional exitmsg argument specifies the exit message + printed when exiting. Pass the empty string to suppress + printing an exit message. If exitmsg is not given or None, + a default message is printed. + + 'b'>>> 'u'>>> 'b'... 'u'... 'b'Type "help", "copyright", "credits" or "license" for more information.'u'Type "help", "copyright", "credits" or "license" for more information.'b'Python %s on %s +%s +(%s) +'u'Python %s on %s +%s +(%s) +'b' +KeyboardInterrupt +'u' +KeyboardInterrupt +'b'now exiting %s... +'u'now exiting %s... +'b'Push a line to the interpreter. + + The line should not have a trailing newline; it may have + internal newlines. The line is appended to a buffer and the + interpreter's runsource() method is called with the + concatenated contents of the buffer as source. If this + indicates that the command was executed or invalid, the buffer + is reset; otherwise, the command is incomplete, and the buffer + is left as it was after the line was appended. The return + value is 1 if more input is required, 0 if the line was dealt + with in some way (this is the same as runsource()). + + 'u'Push a line to the interpreter. + + The line should not have a trailing newline; it may have + internal newlines. 
The line is appended to a buffer and the + interpreter's runsource() method is called with the + concatenated contents of the buffer as source. If this + indicates that the command was executed or invalid, the buffer + is reset; otherwise, the command is incomplete, and the buffer + is left as it was after the line was appended. The return + value is 1 if more input is required, 0 if the line was dealt + with in some way (this is the same as runsource()). + + 'b'Write a prompt and read a line. + + The returned line does not include the trailing newline. + When the user enters the EOF key sequence, EOFError is raised. + + The base implementation uses the built-in function + input(); a subclass may replace this with a different + implementation. + + 'u'Write a prompt and read a line. + + The returned line does not include the trailing newline. + When the user enters the EOF key sequence, EOFError is raised. + + The base implementation uses the built-in function + input(); a subclass may replace this with a different + implementation. + + 'b'Closely emulate the interactive Python interpreter. + + This is a backwards compatible interface to the InteractiveConsole + class. When readfunc is not specified, it attempts to import the + readline module to enable GNU readline if it is available. + + Arguments (all optional, all default to None): + + banner -- passed to InteractiveConsole.interact() + readfunc -- if not None, replaces InteractiveConsole.raw_input() + local -- passed to InteractiveInterpreter.__init__() + exitmsg -- passed to InteractiveConsole.interact() + + 'u'Closely emulate the interactive Python interpreter. + + This is a backwards compatible interface to the InteractiveConsole + class. When readfunc is not specified, it attempts to import the + readline module to enable GNU readline if it is available. + + Arguments (all optional, all default to None): + + banner -- passed to InteractiveConsole.interact() + readfunc -- if not None, replaces InteractiveConsole.raw_input() + local -- passed to InteractiveInterpreter.__init__() + exitmsg -- passed to InteractiveConsole.interact() + + 'b'-q'u'-q'b'don't print version and copyright messages'u'don't print version and copyright messages'u'code' codecs -- Python Codec Registry, API and helpers. + + +Written by Marc-Andre Lemburg (mal@lemburg.com). + +(c) Copyright CNRI, All Rights Reserved. NO WARRANTY. + +whyFailed to load the builtin codecs: %sEncodedFileBOMBOM_BEBOM_LEBOM32_BEBOM32_LEBOM64_BEBOM64_LEBOM_UTF8BOM_UTF16BOM_UTF16_LEBOM_UTF16_BEBOM_UTF32BOM_UTF32_LEBOM_UTF32_BECodecIncrementalEncoderIncrementalDecoderStreamReaderStreamWriterStreamReaderWriterStreamRecodergetencodergetdecodergetincrementalencodergetincrementaldecodergetreadergetwriteriterencodeiterdecodestrict_errorsignore_errorsreplace_errorsxmlcharrefreplace_errorsbackslashreplace_errorsnamereplace_errorsÿþþÿÿþþÿCodec details when looking up the codec registry_is_text_encodingstreamreaderstreamwriterincrementalencoderincrementaldecoder<%s.%s object for encoding %s at %#x> Defines the interface for stateless encoders/decoders. + + The .encode()/.decode() methods may use different error + handling schemes by providing the errors argument. These + string values are predefined: + + 'strict' - raise a ValueError error (or a subclass) + 'ignore' - ignore the character and continue with the next + 'replace' - replace with a suitable replacement character; + Python will use the official U+FFFD REPLACEMENT + CHARACTER for the builtin Unicode codecs on + decoding and '?' 
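The codecs docstrings above list the predefined error-handling schemes ('strict', 'ignore', 'replace', 'xmlcharrefreplace', 'backslashreplace', 'namereplace') and the CodecInfo record returned by lookup(). A sketch showing a few handlers in action; the sample string is arbitrary:

import codecs

s = "caf\u00e9 costs 5\u20ac"

print(s.encode("ascii", "replace"))            # unencodable characters become '?'
print(s.encode("ascii", "xmlcharrefreplace"))  # &#233; and &#8364; character references
print(s.encode("ascii", "backslashreplace"))   # \xe9 and \u20ac escape sequences
print(s.encode("ascii", "namereplace"))        # \N{LATIN SMALL LETTER E WITH ACUTE} style escapes

info = codecs.lookup("utf-8")                  # CodecInfo with encoder, decoder, stream and incremental entries
print(info.name)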
on encoding. + 'surrogateescape' - replace with private code points U+DCnn. + 'xmlcharrefreplace' - Replace with the appropriate XML + character reference (only for encoding). + 'backslashreplace' - Replace with backslashed escape sequences. + 'namereplace' - Replace with \N{...} escape sequences + (only for encoding). + + The set of allowed values can be extended via register_error. + + Encodes the object input and returns a tuple (output + object, length consumed). + + errors defines the error handling to apply. It defaults to + 'strict' handling. + + The method may not store state in the Codec instance. Use + StreamWriter for codecs which have to keep state in order to + make encoding efficient. + + The encoder must be able to handle zero length input and + return an empty object of the output object type in this + situation. + + Decodes the object input and returns a tuple (output + object, length consumed). + + input must be an object which provides the bf_getreadbuf + buffer slot. Python strings, buffer objects and memory + mapped files are examples of objects providing this slot. + + errors defines the error handling to apply. It defaults to + 'strict' handling. + + The method may not store state in the Codec instance. Use + StreamReader for codecs which have to keep state in order to + make decoding efficient. + + The decoder must be able to handle zero length input and + return an empty object of the output object type in this + situation. + + + An IncrementalEncoder encodes an input in multiple steps. The input can + be passed piece by piece to the encode() method. The IncrementalEncoder + remembers the state of the encoding process between calls to encode(). + + Creates an IncrementalEncoder instance. + + The IncrementalEncoder may use different error handling schemes by + providing the errors keyword argument. See the module docstring + for a list of possible values. + + Encodes input and returns the resulting object. + + Resets the encoder to the initial state. + + Return the current state of the encoder. + + Set the current state of the encoder. state must have been + returned by getstate(). + BufferedIncrementalEncoder + This subclass of IncrementalEncoder can be used as the baseclass for an + incremental encoder if the encoder must keep some of the output in a + buffer between calls to encode(). + _buffer_encodeconsumed + An IncrementalDecoder decodes an input in multiple steps. The input can + be passed piece by piece to the decode() method. The IncrementalDecoder + remembers the state of the decoding process between calls to decode(). + + Create an IncrementalDecoder instance. + + The IncrementalDecoder may use different error handling schemes by + providing the errors keyword argument. See the module docstring + for a list of possible values. + + Decode input and returns the resulting object. + + Reset the decoder to the initial state. + + Return the current state of the decoder. + + This must be a (buffered_input, additional_state_info) tuple. + buffered_input must be a bytes object containing bytes that + were passed to decode() that have not yet been converted. + additional_state_info must be a non-negative integer + representing the state of the decoder WITHOUT yet having + processed the contents of buffered_input. In the initial state + and after reset(), getstate() must return (b"", 0). + + Set the current state of the decoder. + + state must have been returned by getstate(). The effect of + setstate((b"", 0)) must be equivalent to reset(). 
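The IncrementalEncoder/IncrementalDecoder docstrings above describe feeding input piece by piece while state, including incomplete byte sequences, is kept between calls. A sketch using the registered UTF-8 incremental decoder:

import codecs

dec = codecs.getincrementaldecoder("utf-8")()
data = "h\u00e9llo \u20ac".encode("utf-8")

out = []
for i in range(0, len(data), 3):            # feed three bytes at a time
    out.append(dec.decode(data[i:i + 3]))   # partial sequences are buffered
out.append(dec.decode(b"", final=True))     # flush at end of input
print("".join(out))                         # héllo €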
+ BufferedIncrementalDecoder + This subclass of IncrementalDecoder can be used as the baseclass for an + incremental decoder if the decoder must be able to handle incomplete + byte sequences. + _buffer_decode Creates a StreamWriter instance. + + stream must be a file-like object open for writing. + + The StreamWriter may use different error handling + schemes by providing the errors keyword argument. These + parameters are predefined: + + 'strict' - raise a ValueError (or a subclass) + 'ignore' - ignore the character and continue with the next + 'replace'- replace with a suitable replacement character + 'xmlcharrefreplace' - Replace with the appropriate XML + character reference. + 'backslashreplace' - Replace with backslashed escape + sequences. + 'namereplace' - Replace with \N{...} escape sequences. + + The set of allowed parameter values can be extended via + register_error. + Writes the object's contents encoded to self.stream. + Writes the concatenated list of strings to the stream + using .write(). + Flushes and resets the codec buffers used for keeping state. + + Calling this method should ensure that the data on the + output is put into a clean state, that allows appending + of new fresh data without having to rescan the whole + stream to recover state. + + Inherit all other methods from the underlying stream. + charbuffertype Creates a StreamReader instance. + + stream must be a file-like object open for reading. + + The StreamReader may use different error handling + schemes by providing the errors keyword argument. These + parameters are predefined: + + 'strict' - raise a ValueError (or a subclass) + 'ignore' - ignore the character and continue with the next + 'replace'- replace with a suitable replacement character + 'backslashreplace' - Replace with backslashed escape sequences; + + The set of allowed parameter values can be extended via + register_error. + bytebuffer_empty_charbuffercharbufferlinebufferfirstline Decodes data from the stream self.stream and returns the + resulting object. + + chars indicates the number of decoded code points or bytes to + return. read() will never return more data than requested, + but it might return less, if there is not enough available. + + size indicates the approximate maximum number of decoded + bytes or code points to read for decoding. The decoder + can modify this setting as appropriate. The default value + -1 indicates to read and decode as much as possible. size + is intended to prevent having to decode huge files in one + step. + + If firstline is true, and a UnicodeDecodeError happens + after the first line terminator in the input only the first line + will be returned, the rest of the input will be kept until the + next call to read(). + + The method should use a greedy read strategy, meaning that + it should read as much data as is allowed within the + definition of the encoding and the given size, e.g. if + optional encoding endings or state markers are available + on the stream, these should be read too. + newdatanewcharsdecodedbytes Read one line from the input stream and return the + decoded data. + + size, if given, is passed as size argument to the + read() method. + + 72readsizeline0withendline0withoutend8000sizehint Read all lines available on the input stream + and return them as a list. + + Line breaks are implemented using the codec's decoder + method and are included in the list entries. + + sizehint, if given, is ignored since there is no efficient + way to finding the true end-of-line. 
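The StreamReader docstrings above describe greedy read() and decoded readline() over an underlying byte stream. A sketch wrapping an in-memory stream with codecs.getreader():

import codecs
import io

raw = io.BytesIO("line \u00f6ne\nline two\n".encode("utf-8"))
reader = codecs.getreader("utf-8")(raw)

print(reader.readline())   # 'line öne\n' (line break included)
print(reader.read())       # 'line two\n' (rest of the stream, decoded)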
+ + Resets the codec buffers used for keeping state. + + Note that no stream repositioning should take place. + This method is primarily intended to be able to recover + from decoding errors. + + Set the input stream's current position. + + Resets the codec buffers used for keeping state. + Return the next decoded line from the input stream. StreamReaderWriter instances allow wrapping streams which + work in both read and write modes. + + The design is such that one can use the factory functions + returned by the codec.lookup() function to construct the + instance. + + unknownReaderWriter Creates a StreamReaderWriter instance. + + stream must be a Stream-like object. + + Reader, Writer must be factory functions or classes + providing the StreamReader, StreamWriter interface resp. + + Error handling is done in the same way as defined for the + StreamWriter/Readers. + + readerwriter StreamRecoder instances translate data from one encoding to another. + + They use the complete set of APIs returned by the + codecs.lookup() function to implement their task. + + Data written to the StreamRecoder is first decoded into an + intermediate format (depending on the "decode" codec) and then + written to the underlying stream using an instance of the provided + Writer class. + + In the other direction, data is read from the underlying stream using + a Reader instance and then encoded and returned to the caller. + + data_encodingfile_encoding Creates a StreamRecoder instance which implements a two-way + conversion: encode and decode work on the frontend (the + data visible to .read() and .write()) while Reader and Writer + work on the backend (the data in stream). + + You can use these objects to do transparent + transcodings from e.g. latin-1 to utf-8 and back. + + stream must be a file-like object. + + encode and decode must adhere to the Codec interface; Reader and + Writer must be factory functions or classes providing the + StreamReader and StreamWriter interfaces resp. + + Error handling is done in the same way as defined for the + StreamWriter/Readers. + + bytesencodedbytesdecoded Open an encoded file using the given mode and return + a wrapped version providing transparent encoding/decoding. + + Note: The wrapped version will only accept the object format + defined by the codecs, i.e. Unicode objects for most builtin + codecs. Output is also codec dependent and will usually be + Unicode as well. + + Underlying encoded files are always opened in binary mode. + The default file mode is 'r', meaning to open the file in read mode. + + encoding specifies the encoding which is to be used for the + file. + + errors may be given to define the error handling. It defaults + to 'strict' which causes ValueErrors to be raised in case an + encoding error occurs. + + buffering has the same meaning as for the builtin open() API. + It defaults to -1 which means that the default buffer size will + be used. + + The returned wrapped file object provides an extra attribute + .encoding which allows querying the used encoding. This + attribute is only available if an encoding was specified as + parameter. + + srw Return a wrapped version of file which provides transparent + encoding translation. + + Data written to the wrapped file is decoded according + to the given data_encoding and then encoded to the underlying + file using file_encoding. The intermediate data type + will usually be Unicode but depends on the specified codecs. 
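The open()/EncodedFile docstrings above explain that the underlying file is always opened in binary mode and that the wrapper exposes an extra .encoding attribute. A sketch using codecs.open() on a temporary file (plain built-in open() is the usual choice in new code; this only illustrates the codecs API):

import codecs
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "demo.txt")

    with codecs.open(path, "w", encoding="utf-8") as f:
        f.write("gr\u00fc\u00dfe\n")

    with codecs.open(path, "r", encoding="utf-8") as f:
        print(f.read())        # grüße
        print(f.encoding)      # 'utf-8', the extra attribute mentioned above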
+ + Bytes read from the file are decoded using file_encoding and then + passed back to the caller encoded using data_encoding. + + If file_encoding is not given, it defaults to data_encoding. + + errors may be given to define the error handling. It defaults + to 'strict' which causes ValueErrors to be raised in case an + encoding error occurs. + + The returned wrapped file object provides two extra attributes + .data_encoding and .file_encoding which reflect the given + parameters of the same name. The attributes can be used for + introspection by Python programs. + + data_infofile_infosr Lookup up the codec for the given encoding and return + its encoder function. + + Raises a LookupError in case the encoding cannot be found. + + Lookup up the codec for the given encoding and return + its decoder function. + + Raises a LookupError in case the encoding cannot be found. + + Lookup up the codec for the given encoding and return + its IncrementalEncoder class or factory function. + + Raises a LookupError in case the encoding cannot be found + or the codecs doesn't provide an incremental encoder. + + Lookup up the codec for the given encoding and return + its IncrementalDecoder class or factory function. + + Raises a LookupError in case the encoding cannot be found + or the codecs doesn't provide an incremental decoder. + + decoder Lookup up the codec for the given encoding and return + its StreamReader class or factory function. + + Raises a LookupError in case the encoding cannot be found. + + Lookup up the codec for the given encoding and return + its StreamWriter class or factory function. + + Raises a LookupError in case the encoding cannot be found. + + + Encoding iterator. + + Encodes the input strings from the iterator using an IncrementalEncoder. + + errors and kwargs are passed through to the IncrementalEncoder + constructor. + + Decoding iterator. + + Decodes the input strings from the iterator using an IncrementalDecoder. + + errors and kwargs are passed through to the IncrementalDecoder + constructor. + make_identity_dictrng make_identity_dict(rng) -> dict + + Return a dictionary where elements of the rng sequence are + mapped to themselves. + + make_encoding_mapdecoding_map Creates an encoding map from a decoding map. + + If a target mapping in the decoding map occurs multiple + times, then that target is mapped to None (undefined mapping), + causing an exception when encountered by the charmap codec + during translation. + + One example where this happens is cp875.py which decodes + multiple character to \u001a. + + backslashreplacenamereplace_false### Registry and builtin stateless codec functions### Constants# Byte Order Mark (BOM = ZERO WIDTH NO-BREAK SPACE = U+FEFF)# and its possible byte string values# for UTF8/UTF16/UTF32 output and little/big endian machines# UTF-8# UTF-16, little endian# UTF-16, big endian# UTF-32, little endian# UTF-32, big endian# UTF-16, native endianness# UTF-32, native endianness# Old broken names (don't use in new code)### Codec base classes (defining the API)# Private API to allow Python 3.4 to blacklist the known non-Unicode# codecs in the standard library. 
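The iterencode()/iterdecode() docstrings above describe encoding and decoding an iterable of chunks lazily with an incremental codec. A short round-trip sketch; the chunk contents are arbitrary:

import codecs

chunks = ["p\u00e4rt one, ", "p\u00e4rt two"]
encoded = list(codecs.iterencode(chunks, "utf-8"))
print(encoded)                                   # UTF-8 byte chunks, roughly one per input chunk

decoded = "".join(codecs.iterdecode(encoded, "utf-8"))
print(decoded)                                   # pärt one, pärt two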
A more general mechanism to# reliably distinguish test encodings from other codecs will hopefully# be defined for Python 3.5# See http://bugs.python.org/issue19619# Assume codecs are text encodings by default# unencoded input that is kept between calls to encode()# Overwrite this method in subclasses: It must encode input# and return an (output, length consumed) tuple# encode input (taking the buffer into account)# keep unencoded input until the next call# undecoded input that is kept between calls to decode()# Overwrite this method in subclasses: It must decode input# decode input (taking the buffer into account)# keep undecoded input until the next call# additional state info is always 0# ignore additional state info# The StreamWriter and StreamReader class provide generic working# interfaces which can be used to implement new encoding submodules# very easily. See encodings/utf_8.py for an example on how this is# done.#### If we have lines cached, first merge them back into characters# For compatibility with other read() methods that take a# single argument# read until we get the required number of characters (if available)# can the request be satisfied from the character buffer?# we need more data# decode bytes (those remaining from the last call included)# keep undecoded bytes until the next call# put new characters in the character buffer# there was no data available# Return everything we've got# Return the first chars characters# If we have lines cached from an earlier read, return# them unconditionally# revert to charbuffer mode; we might need more data# next time# If size is given, we call read() only once# If we're at a "\r" read one extra character (which might# be a "\n") to get a proper line ending. If the stream is# temporarily exhausted we return the wrong line ending.# More than one line result; the first line is a full line# to return# cache the remaining lines# only one remaining line, put it back into charbuffer# We really have a line end# Put the rest back together and keep it until the next call# we didn't get anything or this was our only try# Optional attributes set by the file wrappers below# these are needed to make "with StreamReaderWriter(...)" work properly# Seeks must be propagated to both the readers and writers# as they might need to reset their internal buffers.### Shortcuts# Force opening of the file in binary mode# Add attributes to simplify introspection### Helpers for codec lookup### Helpers for charmap-based codecs### error handlers# In --disable-unicode builds, these error handler are missing# Tell modulefinder that using codecs probably needs the encodings# package### Tests# Make stdout translate Latin-1 output into UTF-8 output# Have stdin translate Latin-1 input into UTF-8 inputb' codecs -- Python Codec Registry, API and helpers. + + +Written by Marc-Andre Lemburg (mal@lemburg.com). + +(c) Copyright CNRI, All Rights Reserved. NO WARRANTY. + +'u' codecs -- Python Codec Registry, API and helpers. + + +Written by Marc-Andre Lemburg (mal@lemburg.com). + +(c) Copyright CNRI, All Rights Reserved. NO WARRANTY. 
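The comments above note that undecoded input is kept between calls to decode(). A small sketch of that buffering with the UTF-8 incremental decoder; the chunk contents are made up for illustration.

import codecs

decoder = codecs.getincrementaldecoder('utf-8')()
chunks = [b'caf\xc3', b'\xa9 au lait']                   # the two-byte sequence for 'é' is split across chunks
text = ''.join(decoder.decode(chunk) for chunk in chunks)
text += decoder.decode(b'', final=True)                  # flush; raises if bytes are still pending
print(text)                                              # 'café au lait'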
+ +'b'Failed to load the builtin codecs: %s'u'Failed to load the builtin codecs: %s'b'register'u'register'b'lookup'u'lookup'b'EncodedFile'u'EncodedFile'b'BOM'u'BOM'b'BOM_BE'u'BOM_BE'b'BOM_LE'u'BOM_LE'b'BOM32_BE'u'BOM32_BE'b'BOM32_LE'u'BOM32_LE'b'BOM64_BE'u'BOM64_BE'b'BOM64_LE'u'BOM64_LE'b'BOM_UTF8'u'BOM_UTF8'b'BOM_UTF16'u'BOM_UTF16'b'BOM_UTF16_LE'u'BOM_UTF16_LE'b'BOM_UTF16_BE'u'BOM_UTF16_BE'b'BOM_UTF32'u'BOM_UTF32'b'BOM_UTF32_LE'u'BOM_UTF32_LE'b'BOM_UTF32_BE'u'BOM_UTF32_BE'b'CodecInfo'u'CodecInfo'b'Codec'u'Codec'b'IncrementalEncoder'u'IncrementalEncoder'b'IncrementalDecoder'u'IncrementalDecoder'b'StreamReader'u'StreamReader'b'StreamWriter'u'StreamWriter'b'StreamReaderWriter'u'StreamReaderWriter'b'StreamRecoder'u'StreamRecoder'b'getencoder'u'getencoder'b'getdecoder'u'getdecoder'b'getincrementalencoder'u'getincrementalencoder'b'getincrementaldecoder'u'getincrementaldecoder'b'getreader'u'getreader'b'getwriter'u'getwriter'b'iterencode'u'iterencode'b'iterdecode'u'iterdecode'b'strict_errors'u'strict_errors'b'ignore_errors'u'ignore_errors'b'replace_errors'u'replace_errors'b'xmlcharrefreplace_errors'u'xmlcharrefreplace_errors'b'backslashreplace_errors'u'backslashreplace_errors'b'namereplace_errors'u'namereplace_errors'b'register_error'u'register_error'b'lookup_error'u'lookup_error'b'ÿþ'b'þÿ'b'ÿþ'b'þÿ'b'Codec details when looking up the codec registry'u'Codec details when looking up the codec registry'b'<%s.%s object for encoding %s at %#x>'u'<%s.%s object for encoding %s at %#x>'b' Defines the interface for stateless encoders/decoders. + + The .encode()/.decode() methods may use different error + handling schemes by providing the errors argument. These + string values are predefined: + + 'strict' - raise a ValueError error (or a subclass) + 'ignore' - ignore the character and continue with the next + 'replace' - replace with a suitable replacement character; + Python will use the official U+FFFD REPLACEMENT + CHARACTER for the builtin Unicode codecs on + decoding and '?' on encoding. + 'surrogateescape' - replace with private code points U+DCnn. + 'xmlcharrefreplace' - Replace with the appropriate XML + character reference (only for encoding). + 'backslashreplace' - Replace with backslashed escape sequences. + 'namereplace' - Replace with \N{...} escape sequences + (only for encoding). + + The set of allowed values can be extended via register_error. + + 'u' Defines the interface for stateless encoders/decoders. + + The .encode()/.decode() methods may use different error + handling schemes by providing the errors argument. These + string values are predefined: + + 'strict' - raise a ValueError error (or a subclass) + 'ignore' - ignore the character and continue with the next + 'replace' - replace with a suitable replacement character; + Python will use the official U+FFFD REPLACEMENT + CHARACTER for the builtin Unicode codecs on + decoding and '?' on encoding. + 'surrogateescape' - replace with private code points U+DCnn. + 'xmlcharrefreplace' - Replace with the appropriate XML + character reference (only for encoding). + 'backslashreplace' - Replace with backslashed escape sequences. + 'namereplace' - Replace with \N{...} escape sequences + (only for encoding). + + The set of allowed values can be extended via register_error. + + 'b' Encodes the object input and returns a tuple (output + object, length consumed). + + errors defines the error handling to apply. It defaults to + 'strict' handling. + + The method may not store state in the Codec instance. 
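The Codec docstring above enumerates the predefined error handlers. A short sketch of how they behave, using str.encode and bytes.decode, which go through the same registered error handlers; the sample text is arbitrary.

text = 'naïve café'

print(text.encode('ascii', 'xmlcharrefreplace'))     # b'na&#239;ve caf&#233;'
print(text.encode('ascii', 'backslashreplace'))      # b'na\\xefve caf\\xe9'
print(text.encode('ascii', 'namereplace'))           # b'na\\N{LATIN SMALL LETTER I WITH DIAERESIS}ve ...'
print(b'caf\xff'.decode('utf-8', 'replace'))         # 'caf\ufffd' (U+FFFD on decoding)
print(b'caf\xff'.decode('utf-8', 'ignore'))          # 'caf'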
Use + StreamWriter for codecs which have to keep state in order to + make encoding efficient. + + The encoder must be able to handle zero length input and + return an empty object of the output object type in this + situation. + + 'u' Encodes the object input and returns a tuple (output + object, length consumed). + + errors defines the error handling to apply. It defaults to + 'strict' handling. + + The method may not store state in the Codec instance. Use + StreamWriter for codecs which have to keep state in order to + make encoding efficient. + + The encoder must be able to handle zero length input and + return an empty object of the output object type in this + situation. + + 'b' Decodes the object input and returns a tuple (output + object, length consumed). + + input must be an object which provides the bf_getreadbuf + buffer slot. Python strings, buffer objects and memory + mapped files are examples of objects providing this slot. + + errors defines the error handling to apply. It defaults to + 'strict' handling. + + The method may not store state in the Codec instance. Use + StreamReader for codecs which have to keep state in order to + make decoding efficient. + + The decoder must be able to handle zero length input and + return an empty object of the output object type in this + situation. + + 'u' Decodes the object input and returns a tuple (output + object, length consumed). + + input must be an object which provides the bf_getreadbuf + buffer slot. Python strings, buffer objects and memory + mapped files are examples of objects providing this slot. + + errors defines the error handling to apply. It defaults to + 'strict' handling. + + The method may not store state in the Codec instance. Use + StreamReader for codecs which have to keep state in order to + make decoding efficient. + + The decoder must be able to handle zero length input and + return an empty object of the output object type in this + situation. + + 'b' + An IncrementalEncoder encodes an input in multiple steps. The input can + be passed piece by piece to the encode() method. The IncrementalEncoder + remembers the state of the encoding process between calls to encode(). + 'u' + An IncrementalEncoder encodes an input in multiple steps. The input can + be passed piece by piece to the encode() method. The IncrementalEncoder + remembers the state of the encoding process between calls to encode(). + 'b' + Creates an IncrementalEncoder instance. + + The IncrementalEncoder may use different error handling schemes by + providing the errors keyword argument. See the module docstring + for a list of possible values. + 'u' + Creates an IncrementalEncoder instance. + + The IncrementalEncoder may use different error handling schemes by + providing the errors keyword argument. See the module docstring + for a list of possible values. + 'b' + Encodes input and returns the resulting object. + 'u' + Encodes input and returns the resulting object. + 'b' + Resets the encoder to the initial state. + 'u' + Resets the encoder to the initial state. + 'b' + Return the current state of the encoder. + 'u' + Return the current state of the encoder. + 'b' + Set the current state of the encoder. state must have been + returned by getstate(). + 'u' + Set the current state of the encoder. state must have been + returned by getstate(). + 'b' + This subclass of IncrementalEncoder can be used as the baseclass for an + incremental encoder if the encoder must keep some of the output in a + buffer between calls to encode(). 
+ 'u' + This subclass of IncrementalEncoder can be used as the baseclass for an + incremental encoder if the encoder must keep some of the output in a + buffer between calls to encode(). + 'b' + An IncrementalDecoder decodes an input in multiple steps. The input can + be passed piece by piece to the decode() method. The IncrementalDecoder + remembers the state of the decoding process between calls to decode(). + 'u' + An IncrementalDecoder decodes an input in multiple steps. The input can + be passed piece by piece to the decode() method. The IncrementalDecoder + remembers the state of the decoding process between calls to decode(). + 'b' + Create an IncrementalDecoder instance. + + The IncrementalDecoder may use different error handling schemes by + providing the errors keyword argument. See the module docstring + for a list of possible values. + 'u' + Create an IncrementalDecoder instance. + + The IncrementalDecoder may use different error handling schemes by + providing the errors keyword argument. See the module docstring + for a list of possible values. + 'b' + Decode input and returns the resulting object. + 'u' + Decode input and returns the resulting object. + 'b' + Reset the decoder to the initial state. + 'u' + Reset the decoder to the initial state. + 'b' + Return the current state of the decoder. + + This must be a (buffered_input, additional_state_info) tuple. + buffered_input must be a bytes object containing bytes that + were passed to decode() that have not yet been converted. + additional_state_info must be a non-negative integer + representing the state of the decoder WITHOUT yet having + processed the contents of buffered_input. In the initial state + and after reset(), getstate() must return (b"", 0). + 'u' + Return the current state of the decoder. + + This must be a (buffered_input, additional_state_info) tuple. + buffered_input must be a bytes object containing bytes that + were passed to decode() that have not yet been converted. + additional_state_info must be a non-negative integer + representing the state of the decoder WITHOUT yet having + processed the contents of buffered_input. In the initial state + and after reset(), getstate() must return (b"", 0). + 'b' + Set the current state of the decoder. + + state must have been returned by getstate(). The effect of + setstate((b"", 0)) must be equivalent to reset(). + 'u' + Set the current state of the decoder. + + state must have been returned by getstate(). The effect of + setstate((b"", 0)) must be equivalent to reset(). + 'b' + This subclass of IncrementalDecoder can be used as the baseclass for an + incremental decoder if the decoder must be able to handle incomplete + byte sequences. + 'u' + This subclass of IncrementalDecoder can be used as the baseclass for an + incremental decoder if the decoder must be able to handle incomplete + byte sequences. + 'b' Creates a StreamWriter instance. + + stream must be a file-like object open for writing. + + The StreamWriter may use different error handling + schemes by providing the errors keyword argument. These + parameters are predefined: + + 'strict' - raise a ValueError (or a subclass) + 'ignore' - ignore the character and continue with the next + 'replace'- replace with a suitable replacement character + 'xmlcharrefreplace' - Replace with the appropriate XML + character reference. + 'backslashreplace' - Replace with backslashed escape + sequences. + 'namereplace' - Replace with \N{...} escape sequences. 
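The getstate()/setstate() contract described above requires (b"", 0) in the initial state and the still-unconverted bytes otherwise. A sketch against the UTF-8 incremental decoder; the exact buffered value assumes the standard buffered decoder behaviour.

import codecs

decoder = codecs.getincrementaldecoder('utf-8')()
assert decoder.getstate() == (b'', 0)        # initial state, per the contract above

decoder.decode(b'\xc3')                      # first byte of a two-byte sequence
buffered, extra = decoder.getstate()
print(buffered)                              # b'\xc3' is kept until the rest arrives

decoder.setstate((b'', 0))                   # same effect as reset()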
+ + The set of allowed parameter values can be extended via + register_error. + 'u' Creates a StreamWriter instance. + + stream must be a file-like object open for writing. + + The StreamWriter may use different error handling + schemes by providing the errors keyword argument. These + parameters are predefined: + + 'strict' - raise a ValueError (or a subclass) + 'ignore' - ignore the character and continue with the next + 'replace'- replace with a suitable replacement character + 'xmlcharrefreplace' - Replace with the appropriate XML + character reference. + 'backslashreplace' - Replace with backslashed escape + sequences. + 'namereplace' - Replace with \N{...} escape sequences. + + The set of allowed parameter values can be extended via + register_error. + 'b' Writes the object's contents encoded to self.stream. + 'u' Writes the object's contents encoded to self.stream. + 'b' Writes the concatenated list of strings to the stream + using .write(). + 'u' Writes the concatenated list of strings to the stream + using .write(). + 'b' Flushes and resets the codec buffers used for keeping state. + + Calling this method should ensure that the data on the + output is put into a clean state, that allows appending + of new fresh data without having to rescan the whole + stream to recover state. + + 'u' Flushes and resets the codec buffers used for keeping state. + + Calling this method should ensure that the data on the + output is put into a clean state, that allows appending + of new fresh data without having to rescan the whole + stream to recover state. + + 'b' Inherit all other methods from the underlying stream. + 'u' Inherit all other methods from the underlying stream. + 'b' Creates a StreamReader instance. + + stream must be a file-like object open for reading. + + The StreamReader may use different error handling + schemes by providing the errors keyword argument. These + parameters are predefined: + + 'strict' - raise a ValueError (or a subclass) + 'ignore' - ignore the character and continue with the next + 'replace'- replace with a suitable replacement character + 'backslashreplace' - Replace with backslashed escape sequences; + + The set of allowed parameter values can be extended via + register_error. + 'u' Creates a StreamReader instance. + + stream must be a file-like object open for reading. + + The StreamReader may use different error handling + schemes by providing the errors keyword argument. These + parameters are predefined: + + 'strict' - raise a ValueError (or a subclass) + 'ignore' - ignore the character and continue with the next + 'replace'- replace with a suitable replacement character + 'backslashreplace' - Replace with backslashed escape sequences; + + The set of allowed parameter values can be extended via + register_error. + 'b' Decodes data from the stream self.stream and returns the + resulting object. + + chars indicates the number of decoded code points or bytes to + return. read() will never return more data than requested, + but it might return less, if there is not enough available. + + size indicates the approximate maximum number of decoded + bytes or code points to read for decoding. The decoder + can modify this setting as appropriate. The default value + -1 indicates to read and decode as much as possible. size + is intended to prevent having to decode huge files in one + step. 
+ + If firstline is true, and a UnicodeDecodeError happens + after the first line terminator in the input only the first line + will be returned, the rest of the input will be kept until the + next call to read(). + + The method should use a greedy read strategy, meaning that + it should read as much data as is allowed within the + definition of the encoding and the given size, e.g. if + optional encoding endings or state markers are available + on the stream, these should be read too. + 'u' Decodes data from the stream self.stream and returns the + resulting object. + + chars indicates the number of decoded code points or bytes to + return. read() will never return more data than requested, + but it might return less, if there is not enough available. + + size indicates the approximate maximum number of decoded + bytes or code points to read for decoding. The decoder + can modify this setting as appropriate. The default value + -1 indicates to read and decode as much as possible. size + is intended to prevent having to decode huge files in one + step. + + If firstline is true, and a UnicodeDecodeError happens + after the first line terminator in the input only the first line + will be returned, the rest of the input will be kept until the + next call to read(). + + The method should use a greedy read strategy, meaning that + it should read as much data as is allowed within the + definition of the encoding and the given size, e.g. if + optional encoding endings or state markers are available + on the stream, these should be read too. + 'b' Read one line from the input stream and return the + decoded data. + + size, if given, is passed as size argument to the + read() method. + + 'u' Read one line from the input stream and return the + decoded data. + + size, if given, is passed as size argument to the + read() method. + + 'b' Read all lines available on the input stream + and return them as a list. + + Line breaks are implemented using the codec's decoder + method and are included in the list entries. + + sizehint, if given, is ignored since there is no efficient + way to finding the true end-of-line. + + 'u' Read all lines available on the input stream + and return them as a list. + + Line breaks are implemented using the codec's decoder + method and are included in the list entries. + + sizehint, if given, is ignored since there is no efficient + way to finding the true end-of-line. + + 'b' Resets the codec buffers used for keeping state. + + Note that no stream repositioning should take place. + This method is primarily intended to be able to recover + from decoding errors. + + 'u' Resets the codec buffers used for keeping state. + + Note that no stream repositioning should take place. + This method is primarily intended to be able to recover + from decoding errors. + + 'b' Set the input stream's current position. + + Resets the codec buffers used for keeping state. + 'u' Set the input stream's current position. + + Resets the codec buffers used for keeping state. + 'b' Return the next decoded line from the input stream.'u' Return the next decoded line from the input stream.'b' StreamReaderWriter instances allow wrapping streams which + work in both read and write modes. + + The design is such that one can use the factory functions + returned by the codec.lookup() function to construct the + instance. + + 'u' StreamReaderWriter instances allow wrapping streams which + work in both read and write modes. 
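The StreamReader read()/readline()/readlines() behaviour described above can be exercised through codecs.getreader() over an in-memory byte stream; the sample lines below are placeholders.

import codecs
import io

raw = io.BytesIO('première ligne\nseconde ligne\n'.encode('utf-8'))
reader = codecs.getreader('utf-8')(raw)
print(reader.readline())                     # 'première ligne\n' (line ending included)
print(reader.readlines())                    # ['seconde ligne\n']
reader.reset()                               # drops buffered state; does not reposition the stream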
+ + The design is such that one can use the factory functions + returned by the codec.lookup() function to construct the + instance. + + 'b'unknown'u'unknown'b' Creates a StreamReaderWriter instance. + + stream must be a Stream-like object. + + Reader, Writer must be factory functions or classes + providing the StreamReader, StreamWriter interface resp. + + Error handling is done in the same way as defined for the + StreamWriter/Readers. + + 'u' Creates a StreamReaderWriter instance. + + stream must be a Stream-like object. + + Reader, Writer must be factory functions or classes + providing the StreamReader, StreamWriter interface resp. + + Error handling is done in the same way as defined for the + StreamWriter/Readers. + + 'b' StreamRecoder instances translate data from one encoding to another. + + They use the complete set of APIs returned by the + codecs.lookup() function to implement their task. + + Data written to the StreamRecoder is first decoded into an + intermediate format (depending on the "decode" codec) and then + written to the underlying stream using an instance of the provided + Writer class. + + In the other direction, data is read from the underlying stream using + a Reader instance and then encoded and returned to the caller. + + 'u' StreamRecoder instances translate data from one encoding to another. + + They use the complete set of APIs returned by the + codecs.lookup() function to implement their task. + + Data written to the StreamRecoder is first decoded into an + intermediate format (depending on the "decode" codec) and then + written to the underlying stream using an instance of the provided + Writer class. + + In the other direction, data is read from the underlying stream using + a Reader instance and then encoded and returned to the caller. + + 'b' Creates a StreamRecoder instance which implements a two-way + conversion: encode and decode work on the frontend (the + data visible to .read() and .write()) while Reader and Writer + work on the backend (the data in stream). + + You can use these objects to do transparent + transcodings from e.g. latin-1 to utf-8 and back. + + stream must be a file-like object. + + encode and decode must adhere to the Codec interface; Reader and + Writer must be factory functions or classes providing the + StreamReader and StreamWriter interfaces resp. + + Error handling is done in the same way as defined for the + StreamWriter/Readers. + + 'u' Creates a StreamRecoder instance which implements a two-way + conversion: encode and decode work on the frontend (the + data visible to .read() and .write()) while Reader and Writer + work on the backend (the data in stream). + + You can use these objects to do transparent + transcodings from e.g. latin-1 to utf-8 and back. + + stream must be a file-like object. + + encode and decode must adhere to the Codec interface; Reader and + Writer must be factory functions or classes providing the + StreamReader and StreamWriter interfaces resp. + + Error handling is done in the same way as defined for the + StreamWriter/Readers. + + 'b' Open an encoded file using the given mode and return + a wrapped version providing transparent encoding/decoding. + + Note: The wrapped version will only accept the object format + defined by the codecs, i.e. Unicode objects for most builtin + codecs. Output is also codec dependent and will usually be + Unicode as well. + + Underlying encoded files are always opened in binary mode. + The default file mode is 'r', meaning to open the file in read mode. 
+ + encoding specifies the encoding which is to be used for the + file. + + errors may be given to define the error handling. It defaults + to 'strict' which causes ValueErrors to be raised in case an + encoding error occurs. + + buffering has the same meaning as for the builtin open() API. + It defaults to -1 which means that the default buffer size will + be used. + + The returned wrapped file object provides an extra attribute + .encoding which allows querying the used encoding. This + attribute is only available if an encoding was specified as + parameter. + + 'u' Open an encoded file using the given mode and return + a wrapped version providing transparent encoding/decoding. + + Note: The wrapped version will only accept the object format + defined by the codecs, i.e. Unicode objects for most builtin + codecs. Output is also codec dependent and will usually be + Unicode as well. + + Underlying encoded files are always opened in binary mode. + The default file mode is 'r', meaning to open the file in read mode. + + encoding specifies the encoding which is to be used for the + file. + + errors may be given to define the error handling. It defaults + to 'strict' which causes ValueErrors to be raised in case an + encoding error occurs. + + buffering has the same meaning as for the builtin open() API. + It defaults to -1 which means that the default buffer size will + be used. + + The returned wrapped file object provides an extra attribute + .encoding which allows querying the used encoding. This + attribute is only available if an encoding was specified as + parameter. + + 'b' Return a wrapped version of file which provides transparent + encoding translation. + + Data written to the wrapped file is decoded according + to the given data_encoding and then encoded to the underlying + file using file_encoding. The intermediate data type + will usually be Unicode but depends on the specified codecs. + + Bytes read from the file are decoded using file_encoding and then + passed back to the caller encoded using data_encoding. + + If file_encoding is not given, it defaults to data_encoding. + + errors may be given to define the error handling. It defaults + to 'strict' which causes ValueErrors to be raised in case an + encoding error occurs. + + The returned wrapped file object provides two extra attributes + .data_encoding and .file_encoding which reflect the given + parameters of the same name. The attributes can be used for + introspection by Python programs. + + 'u' Return a wrapped version of file which provides transparent + encoding translation. + + Data written to the wrapped file is decoded according + to the given data_encoding and then encoded to the underlying + file using file_encoding. The intermediate data type + will usually be Unicode but depends on the specified codecs. + + Bytes read from the file are decoded using file_encoding and then + passed back to the caller encoded using data_encoding. + + If file_encoding is not given, it defaults to data_encoding. + + errors may be given to define the error handling. It defaults + to 'strict' which causes ValueErrors to be raised in case an + encoding error occurs. + + The returned wrapped file object provides two extra attributes + .data_encoding and .file_encoding which reflect the given + parameters of the same name. The attributes can be used for + introspection by Python programs. + + 'b' Lookup up the codec for the given encoding and return + its encoder function. + + Raises a LookupError in case the encoding cannot be found. 
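A sketch of the open() behaviour documented above: the underlying file is binary, the wrapper accepts and returns str, and the .encoding attribute is set when an encoding is given. The temporary path is illustrative.

import codecs
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'sample.txt')

with codecs.open(path, 'w', encoding='utf-8') as f:
    f.write('Grüße\n')                       # str in, UTF-8 bytes on disk

with codecs.open(path, encoding='utf-8') as f:
    print(f.read())                          # 'Grüße\n'
    print(f.encoding)                        # 'utf-8', only present because encoding= was given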
+ + 'u' Lookup up the codec for the given encoding and return + its encoder function. + + Raises a LookupError in case the encoding cannot be found. + + 'b' Lookup up the codec for the given encoding and return + its decoder function. + + Raises a LookupError in case the encoding cannot be found. + + 'u' Lookup up the codec for the given encoding and return + its decoder function. + + Raises a LookupError in case the encoding cannot be found. + + 'b' Lookup up the codec for the given encoding and return + its IncrementalEncoder class or factory function. + + Raises a LookupError in case the encoding cannot be found + or the codecs doesn't provide an incremental encoder. + + 'u' Lookup up the codec for the given encoding and return + its IncrementalEncoder class or factory function. + + Raises a LookupError in case the encoding cannot be found + or the codecs doesn't provide an incremental encoder. + + 'b' Lookup up the codec for the given encoding and return + its IncrementalDecoder class or factory function. + + Raises a LookupError in case the encoding cannot be found + or the codecs doesn't provide an incremental decoder. + + 'u' Lookup up the codec for the given encoding and return + its IncrementalDecoder class or factory function. + + Raises a LookupError in case the encoding cannot be found + or the codecs doesn't provide an incremental decoder. + + 'b' Lookup up the codec for the given encoding and return + its StreamReader class or factory function. + + Raises a LookupError in case the encoding cannot be found. + + 'u' Lookup up the codec for the given encoding and return + its StreamReader class or factory function. + + Raises a LookupError in case the encoding cannot be found. + + 'b' Lookup up the codec for the given encoding and return + its StreamWriter class or factory function. + + Raises a LookupError in case the encoding cannot be found. + + 'u' Lookup up the codec for the given encoding and return + its StreamWriter class or factory function. + + Raises a LookupError in case the encoding cannot be found. + + 'b' + Encoding iterator. + + Encodes the input strings from the iterator using an IncrementalEncoder. + + errors and kwargs are passed through to the IncrementalEncoder + constructor. + 'u' + Encoding iterator. + + Encodes the input strings from the iterator using an IncrementalEncoder. + + errors and kwargs are passed through to the IncrementalEncoder + constructor. + 'b' + Decoding iterator. + + Decodes the input strings from the iterator using an IncrementalDecoder. + + errors and kwargs are passed through to the IncrementalDecoder + constructor. + 'u' + Decoding iterator. + + Decodes the input strings from the iterator using an IncrementalDecoder. + + errors and kwargs are passed through to the IncrementalDecoder + constructor. + 'b' make_identity_dict(rng) -> dict + + Return a dictionary where elements of the rng sequence are + mapped to themselves. + + 'u' make_identity_dict(rng) -> dict + + Return a dictionary where elements of the rng sequence are + mapped to themselves. + + 'b' Creates an encoding map from a decoding map. + + If a target mapping in the decoding map occurs multiple + times, then that target is mapped to None (undefined mapping), + causing an exception when encountered by the charmap codec + during translation. + + One example where this happens is cp875.py which decodes + multiple character to \u001a. + + 'u' Creates an encoding map from a decoding map. 
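iterencode()/iterdecode(), documented above, drive an incremental codec over an iterator of strings or bytes. A minimal round trip; the sample data is arbitrary.

import codecs

parts = ['alpha ', 'beta ', 'gamma']
encoded = list(codecs.iterencode(parts, 'utf-8'))
print(encoded)                                              # [b'alpha ', b'beta ', b'gamma']
print(''.join(codecs.iterdecode(encoded, 'utf-8')))         # 'alpha beta gamma'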
+ + If a target mapping in the decoding map occurs multiple + times, then that target is mapped to None (undefined mapping), + causing an exception when encountered by the charmap codec + during translation. + + One example where this happens is cp875.py which decodes + multiple character to \u001a. + + 'b'backslashreplace'u'backslashreplace'b'namereplace'u'namereplace'u'codecs'Utilities to compile possibly incomplete Python source code. + +This module provides two interfaces, broadly similar to the builtin +function compile(), which take program text, a filename and a 'mode' +and: + +- Return code object if the command is complete and valid +- Return None if the command is incomplete +- Raise SyntaxError, ValueError or OverflowError if the command is a + syntax error (OverflowError and ValueError can be produced by + malformed literals). + +Approach: + +First, check if the source consists entirely of blank lines and +comments; if so, replace it with 'pass', because the built-in +parser doesn't always do the right thing for these. + +Compile three times: as is, with \n, and with \n\n appended. If it +compiles as is, it's complete. If it compiles with one \n appended, +we expect more. If it doesn't compile either way, we compare the +error we get when compiling with \n or \n\n appended. If the errors +are the same, the code is broken. But if the errors are different, we +expect more. Not intuitive; not even guaranteed to hold in future +releases; but this matches the compiler's behavior from Python 1.4 +through 2.2, at least. + +Caveat: + +It is possible (but not likely) that the parser stops parsing with a +successful outcome before reaching the end of the source; in this +case, trailing symbols may be ignored instead of causing an error. +For example, a backslash followed by two newlines may be followed by +arbitrary garbage. This will be fixed once the API for the parser is +better. + +The two interfaces are: + +compile_command(source, filename, symbol): + + Compiles a single command in the manner described above. + +CommandCompiler(): + + Instances of this class have __call__ methods identical in + signature to compile_command; the difference is that if the + instance compiles program text containing a __future__ statement, + the instance 'remembers' and compiles all subsequent program texts + with the statement in force. + +The module also provides another class: + +Compile(): + + Instances of this class act like the built-in function compile, + but with 'memory' in the sense described above. +__future___featuresCompile0x200PyCF_DONT_IMPLY_DEDENT_maybe_compilepasserr1err2code1code2Compile a command and determine whether it is incomplete. + + Arguments: + + source -- the source string; may contain \n characters + filename -- optional filename from which source was read; default + "" + symbol -- optional grammar start symbol; "single" (default), "exec" + or "eval" + + Return value / exceptions raised: + + - Return a code object if the command is complete and valid + - Return None if the command is incomplete + - Raise SyntaxError, ValueError or OverflowError if the command is a + syntax error (OverflowError and ValueError can be produced by + malformed literals). 
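The compile_command() contract just described has three outcomes: a code object for complete input, None for incomplete input, and an exception for broken input. A short sketch exercising all three; the sample sources are illustrative.

import codeop

print(codeop.compile_command('x = 1'))           # a code object: complete and valid
print(codeop.compile_command('if x:'))           # None: the command is incomplete
try:
    codeop.compile_command('x = )')
except SyntaxError as exc:
    print('syntax error:', exc.msg)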
+ Instances of this class behave much like the built-in compile + function, but if one is used to compile text containing a future + statement, it "remembers" and compiles all subsequent program texts + with the statement in force.codeobfeatureInstances of this class have __call__ methods identical in + signature to compile_command; the difference is that if the + instance compiles program text containing a __future__ statement, + the instance 'remembers' and compiles all subsequent program texts + with the statement in force.Compile a command and determine whether it is incomplete. + + Arguments: + + source -- the source string; may contain \n characters + filename -- optional filename from which source was read; + default "" + symbol -- optional grammar start symbol; "single" (default) or + "eval" + + Return value / exceptions raised: + + - Return a code object if the command is complete and valid + - Return None if the command is incomplete + - Raise SyntaxError, ValueError or OverflowError if the command is a + syntax error (OverflowError and ValueError can be produced by + malformed literals). + # Matches pythonrun.h# Check for source consisting of only blank lines and comments# Leave it alone# Replace it with a 'pass' statement# Catch syntax warnings after the first compile# to emit warnings (SyntaxWarning, DeprecationWarning) at most once.b'Utilities to compile possibly incomplete Python source code. + +This module provides two interfaces, broadly similar to the builtin +function compile(), which take program text, a filename and a 'mode' +and: + +- Return code object if the command is complete and valid +- Return None if the command is incomplete +- Raise SyntaxError, ValueError or OverflowError if the command is a + syntax error (OverflowError and ValueError can be produced by + malformed literals). + +Approach: + +First, check if the source consists entirely of blank lines and +comments; if so, replace it with 'pass', because the built-in +parser doesn't always do the right thing for these. + +Compile three times: as is, with \n, and with \n\n appended. If it +compiles as is, it's complete. If it compiles with one \n appended, +we expect more. If it doesn't compile either way, we compare the +error we get when compiling with \n or \n\n appended. If the errors +are the same, the code is broken. But if the errors are different, we +expect more. Not intuitive; not even guaranteed to hold in future +releases; but this matches the compiler's behavior from Python 1.4 +through 2.2, at least. + +Caveat: + +It is possible (but not likely) that the parser stops parsing with a +successful outcome before reaching the end of the source; in this +case, trailing symbols may be ignored instead of causing an error. +For example, a backslash followed by two newlines may be followed by +arbitrary garbage. This will be fixed once the API for the parser is +better. + +The two interfaces are: + +compile_command(source, filename, symbol): + + Compiles a single command in the manner described above. + +CommandCompiler(): + + Instances of this class have __call__ methods identical in + signature to compile_command; the difference is that if the + instance compiles program text containing a __future__ statement, + the instance 'remembers' and compiles all subsequent program texts + with the statement in force. + +The module also provides another class: + +Compile(): + + Instances of this class act like the built-in function compile, + but with 'memory' in the sense described above. 
+'u'Utilities to compile possibly incomplete Python source code. + +This module provides two interfaces, broadly similar to the builtin +function compile(), which take program text, a filename and a 'mode' +and: + +- Return code object if the command is complete and valid +- Return None if the command is incomplete +- Raise SyntaxError, ValueError or OverflowError if the command is a + syntax error (OverflowError and ValueError can be produced by + malformed literals). + +Approach: + +First, check if the source consists entirely of blank lines and +comments; if so, replace it with 'pass', because the built-in +parser doesn't always do the right thing for these. + +Compile three times: as is, with \n, and with \n\n appended. If it +compiles as is, it's complete. If it compiles with one \n appended, +we expect more. If it doesn't compile either way, we compare the +error we get when compiling with \n or \n\n appended. If the errors +are the same, the code is broken. But if the errors are different, we +expect more. Not intuitive; not even guaranteed to hold in future +releases; but this matches the compiler's behavior from Python 1.4 +through 2.2, at least. + +Caveat: + +It is possible (but not likely) that the parser stops parsing with a +successful outcome before reaching the end of the source; in this +case, trailing symbols may be ignored instead of causing an error. +For example, a backslash followed by two newlines may be followed by +arbitrary garbage. This will be fixed once the API for the parser is +better. + +The two interfaces are: + +compile_command(source, filename, symbol): + + Compiles a single command in the manner described above. + +CommandCompiler(): + + Instances of this class have __call__ methods identical in + signature to compile_command; the difference is that if the + instance compiles program text containing a __future__ statement, + the instance 'remembers' and compiles all subsequent program texts + with the statement in force. + +The module also provides another class: + +Compile(): + + Instances of this class act like the built-in function compile, + but with 'memory' in the sense described above. +'b'Compile'u'Compile'b'CommandCompiler'u'CommandCompiler'b'pass'u'pass'b'Compile a command and determine whether it is incomplete. + + Arguments: + + source -- the source string; may contain \n characters + filename -- optional filename from which source was read; default + "" + symbol -- optional grammar start symbol; "single" (default), "exec" + or "eval" + + Return value / exceptions raised: + + - Return a code object if the command is complete and valid + - Return None if the command is incomplete + - Raise SyntaxError, ValueError or OverflowError if the command is a + syntax error (OverflowError and ValueError can be produced by + malformed literals). + 'u'Compile a command and determine whether it is incomplete. + + Arguments: + + source -- the source string; may contain \n characters + filename -- optional filename from which source was read; default + "" + symbol -- optional grammar start symbol; "single" (default), "exec" + or "eval" + + Return value / exceptions raised: + + - Return a code object if the command is complete and valid + - Return None if the command is incomplete + - Raise SyntaxError, ValueError or OverflowError if the command is a + syntax error (OverflowError and ValueError can be produced by + malformed literals). 
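CommandCompiler, described earlier, is the stateful variant used by interactive loops: lines accumulate until compilation succeeds. A minimal read-eval sketch under that model, with the "typed" lines hard-coded for illustration.

import codeop

compiler = codeop.CommandCompiler()
buffer = []
for line in ['for i in range(2):', '    print(i)', '']:    # stand-in for interactive input
    buffer.append(line)
    code = compiler('\n'.join(buffer), '<input>', 'single')
    if code is not None:                                    # complete: run it and start a new command
        exec(code)
        buffer = []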
+ 'b'Instances of this class behave much like the built-in compile + function, but if one is used to compile text containing a future + statement, it "remembers" and compiles all subsequent program texts + with the statement in force.'u'Instances of this class behave much like the built-in compile + function, but if one is used to compile text containing a future + statement, it "remembers" and compiles all subsequent program texts + with the statement in force.'b'Instances of this class have __call__ methods identical in + signature to compile_command; the difference is that if the + instance compiles program text containing a __future__ statement, + the instance 'remembers' and compiles all subsequent program texts + with the statement in force.'u'Instances of this class have __call__ methods identical in + signature to compile_command; the difference is that if the + instance compiles program text containing a __future__ statement, + the instance 'remembers' and compiles all subsequent program texts + with the statement in force.'b'Compile a command and determine whether it is incomplete. + + Arguments: + + source -- the source string; may contain \n characters + filename -- optional filename from which source was read; + default "" + symbol -- optional grammar start symbol; "single" (default) or + "eval" + + Return value / exceptions raised: + + - Return a code object if the command is complete and valid + - Return None if the command is incomplete + - Raise SyntaxError, ValueError or OverflowError if the command is a + syntax error (OverflowError and ValueError can be produced by + malformed literals). + 'u'Compile a command and determine whether it is incomplete. + + Arguments: + + source -- the source string; may contain \n characters + filename -- optional filename from which source was read; + default "" + symbol -- optional grammar start symbol; "single" (default) or + "eval" + + Return value / exceptions raised: + + - Return a code object if the command is complete and valid + - Return None if the command is incomplete + - Raise SyntaxError, ValueError or OverflowError if the command is a + syntax error (OverflowError and ValueError can be produced by + malformed literals). + 'u'codeop'ClientListenerPipereductionForkingPickler_ForkingPicklerWAIT_OBJECT_0WAIT_ABANDONED_0WAIT_TIMEOUTINFINITEBUFSIZE20.020.CONNECTION_TIMEOUT_mmap_counterdefault_familyfamiliesAF_PIPE_init_timeout_check_timeoutarbitrary_address + Return an arbitrary free address for the given family + mktemplistener-get_temp_dir\\.\pipe\pyc-%d-%d-unrecognized family_validate_family + Checks if the family is valid for the current environment. + Family %s is not recognized.address_type + Return the types of the address + + This can be 'AF_INET', 'AF_UNIX', or 'AF_PIPE' + is_abstract_socket_namespaceaddress type of %r unrecognized_ConnectionBaseinvalid handleat least one of `readable` and `writable` must be True_readable_writable_closehandle is closed_check_readableconnection is write-only_check_writableconnection is read-only_bad_message_lengthbad message lengthTrue if the connection is closedTrue if the connection is readableTrue if the connection is writableFile descriptor or handle of the connectionClose the connectionsend_bytesSend the bytes data from a bytes-like objectoffset is negativebuffer length < offsetsize is negativebuffer length < offset + size_send_bytesSend a (picklable) objectrecv_bytesmaxlength + Receive bytes data as a bytes object. 
+ negative maxlength_recv_bytesrecv_bytes_into + Receive bytes data into a writeable bytes-like object. + Return the number of bytes read. + bytesizenegative offsetoffset too largeReceive a (picklable) objectWhether there is any input available to be read_pollPipeConnection + Connection class based on a Windows named pipe. + Overlapped I/O is used, so the handles must have been created + with FILE_FLAG_OVERLAPPED. + _got_empty_messageCloseHandle_CloseHandleWriteFileoverlappedovERROR_IO_PENDINGWaitForMultipleObjectswaitresGetOverlappedResultnwrittenbsizeReadFilenreadERROR_MORE_DATA_get_more_datawinerrorERROR_BROKEN_PIPEshouldn't get here; expected KeyboardInterruptPeekNamedPiperbytes + Connection class based on an arbitrary file descriptor (Unix only), or + a socket handle (Windows). + closesocket_read_sendremaining_recvgot end of file during message0x7fffffff!ipre_header!Q + Returns a listener object. + + This is a wrapper for a bound socket which is 'listening' for + connections, or for a Windows named pipe. + authkeyPipeListener_listenerSocketListenerauthkey should be a byte string_authkey + Accept a connection on the bound socket or named pipe of `self`. + + Returns a `Connection` object. + listener is closeddeliver_challengeanswer_challenge + Close the bound socket or named pipe of `self`. + listener_addresslast_accepted_last_accepted + Returns a connection to the address of a `Listener` + PipeClientSocketClientduplex + Returns pair of connection objects at either end of a pipe + c1c2fd1PIPE_ACCESS_DUPLEXopenmodeGENERIC_READGENERIC_WRITEaccessobsizeibsizePIPE_ACCESS_INBOUNDCreateNamedPipeFILE_FLAG_OVERLAPPEDFILE_FLAG_FIRST_PIPE_INSTANCEPIPE_TYPE_MESSAGEPIPE_READMODE_MESSAGEPIPE_WAITNMPWAIT_WAIT_FOREVERNULLh1CreateFileOPEN_EXISTINGh2SetNamedPipeHandleStateConnectNamedPipe + Representation of a socket which is bound to an address and listening + _familyFinalizeexitpriority + Return a connection object connected to the socket given by `address` + + Representation of a named pipe + _new_handle_handle_queuesub_debuglistener created with address=%r_finalize_pipe_listenerPIPE_UNLIMITED_INSTANCESERROR_NO_DATAclosing listener with address=%r + Return a connection object connected to the pipe given by `address` + WaitNamedPipeERROR_SEM_TIMEOUTERROR_PIPE_BUSYMESSAGE_LENGTH#CHALLENGE#CHALLENGE#WELCOME#WELCOME#FAILURE#FAILUREhmacAuthkey must be bytes, not {0!s}urandomdigest received was wrongmessage = %rdigest sent was rejectedConnectionWrapper_conn_dumps_loads_xml_dumps_xml_loadsXmlListenerXmlClient_exhaustive_waithandlesreadyShould not get hereERROR_NETNAME_DELETED_ready_errorsobject_list + Wait till an object in object_list is ready/readable. + + Returns list of those objects in object_list which are ready/readable. + waithandle_to_objov_listready_objectsready_handlesERROR_OPERATION_ABORTEDselectorsPollSelector_WaitSelectorSelectSelectorEVENT_READreduce_connectionresource_sharerDupSocketdsrebuild_connectionreduce_pipe_connectionFILE_GENERIC_READFILE_GENERIC_WRITEDupHandledhrebuild_pipe_connectionDupFddf# A higher level module for using sockets (or Windows named pipes)# multiprocessing/connection.py# A very generous timeout when it comes to local connections...# double check# Connection classes# XXX should we use util.Finalize instead of a __del__?# HACK for byte-indexing of non-bytewise buffers (e.g. 
array.array)# Get bytesize of arbitrary buffer# Message can fit in dest# For wire compatibility with 3.7 and lower# The payload is large so Nagle's algorithm won't be triggered# and we'd better avoid the cost of concatenation.# Issue #20540: concatenate before sending, to avoid delays due# to Nagle's algorithm on a TCP socket.# Also note we want to avoid sending a 0-length buffer separately,# to avoid "broken pipe" errors if the other end closed the pipe.# Public functions# default security descriptor: the handle cannot be inherited# Definitions for connections based on sockets# SO_REUSEADDR has different semantics on Windows (issue #2550).# Linux abstract socket namespaces do not need to be explicitly unlinked# Definitions for connections based on named pipes# ERROR_NO_DATA can occur if a client has already connected,# written data and then disconnected -- see Issue 14725.# Authentication stuff# reject large message# Support for using xmlrpclib for serialization# Wait# Return ALL handles which are currently signalled. (Only# returning the first signalled might create starvation issues.)# start an overlapped read of length zero# If o.fileno() is an overlapped pipe handle and# err == 0 then there is a zero length message# in the pipe, but it HAS NOT been consumed...# ... except on Windows 8 and later, where# the message HAS been consumed.# request that overlapped reads stop# wait for all overlapped reads to stop# If o.fileno() is an overlapped pipe handle then# a zero length message HAS been consumed.# poll/select have the advantage of not requiring any extra file# descriptor, contrarily to epoll/kqueue (also, they require a single# syscall).# Make connection and socket objects sharable if possibleb'Client'u'Client'b'Listener'u'Listener'b'Pipe'u'Pipe'b'AF_INET'u'AF_INET'b'AF_PIPE'u'AF_PIPE'b' + Return an arbitrary free address for the given family + 'u' + Return an arbitrary free address for the given family + 'b'listener-'u'listener-'b'\\.\pipe\pyc-%d-%d-'u'\\.\pipe\pyc-%d-%d-'b'unrecognized family'u'unrecognized family'b' + Checks if the family is valid for the current environment. + 'u' + Checks if the family is valid for the current environment. 
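Pipe() and the Connection API described above (send/recv for picklable objects, poll to test for input) in a minimal parent/child sketch; the worker payload is made up, and the __main__ guard is needed on spawn-based platforms.

from multiprocessing import Pipe, Process

def worker(conn):
    conn.send({'status': 'ok', 'answer': 42})    # any picklable object
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()             # duplex by default
    p = Process(target=worker, args=(child_conn,))
    p.start()
    if parent_conn.poll(timeout=5):              # wait up to 5 seconds for input
        print(parent_conn.recv())                # {'status': 'ok', 'answer': 42}
    p.join()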
+ 'b'Family %s is not recognized.'u'Family %s is not recognized.'b' + Return the types of the address + + This can be 'AF_INET', 'AF_UNIX', or 'AF_PIPE' + 'u' + Return the types of the address + + This can be 'AF_INET', 'AF_UNIX', or 'AF_PIPE' + 'b'address type of %r unrecognized'u'address type of %r unrecognized'b'invalid handle'u'invalid handle'b'at least one of `readable` and `writable` must be True'u'at least one of `readable` and `writable` must be True'b'handle is closed'u'handle is closed'b'connection is write-only'u'connection is write-only'b'connection is read-only'u'connection is read-only'b'bad message length'u'bad message length'b'True if the connection is closed'u'True if the connection is closed'b'True if the connection is readable'u'True if the connection is readable'b'True if the connection is writable'u'True if the connection is writable'b'File descriptor or handle of the connection'u'File descriptor or handle of the connection'b'Close the connection'u'Close the connection'b'Send the bytes data from a bytes-like object'u'Send the bytes data from a bytes-like object'b'offset is negative'u'offset is negative'b'buffer length < offset'u'buffer length < offset'b'size is negative'u'size is negative'b'buffer length < offset + size'u'buffer length < offset + size'b'Send a (picklable) object'u'Send a (picklable) object'b' + Receive bytes data as a bytes object. + 'u' + Receive bytes data as a bytes object. + 'b'negative maxlength'u'negative maxlength'b' + Receive bytes data into a writeable bytes-like object. + Return the number of bytes read. + 'u' + Receive bytes data into a writeable bytes-like object. + Return the number of bytes read. + 'b'negative offset'u'negative offset'b'offset too large'u'offset too large'b'Receive a (picklable) object'u'Receive a (picklable) object'b'Whether there is any input available to be read'u'Whether there is any input available to be read'b' + Connection class based on a Windows named pipe. + Overlapped I/O is used, so the handles must have been created + with FILE_FLAG_OVERLAPPED. + 'u' + Connection class based on a Windows named pipe. + Overlapped I/O is used, so the handles must have been created + with FILE_FLAG_OVERLAPPED. + 'b'shouldn't get here; expected KeyboardInterrupt'u'shouldn't get here; expected KeyboardInterrupt'b' + Connection class based on an arbitrary file descriptor (Unix only), or + a socket handle (Windows). + 'u' + Connection class based on an arbitrary file descriptor (Unix only), or + a socket handle (Windows). + 'b'got end of file during message'u'got end of file during message'b'!i'u'!i'b'!Q'u'!Q'b' + Returns a listener object. + + This is a wrapper for a bound socket which is 'listening' for + connections, or for a Windows named pipe. + 'u' + Returns a listener object. + + This is a wrapper for a bound socket which is 'listening' for + connections, or for a Windows named pipe. + 'b'authkey should be a byte string'u'authkey should be a byte string'b' + Accept a connection on the bound socket or named pipe of `self`. + + Returns a `Connection` object. + 'u' + Accept a connection on the bound socket or named pipe of `self`. + + Returns a `Connection` object. + 'b'listener is closed'u'listener is closed'b' + Close the bound socket or named pipe of `self`. + 'u' + Close the bound socket or named pipe of `self`. 
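Listener and Client, described above, wrap a listening socket or named pipe; when authkey is given, the HMAC challenge (deliver_challenge/answer_challenge) runs automatically on accept/connect. A sketch over a localhost TCP address, using a thread so it stays self-contained; the port and key are placeholders.

import threading
from multiprocessing.connection import Client, Listener

ADDRESS = ('localhost', 6000)                    # placeholder port
AUTHKEY = b'secret-key'                          # placeholder key; both ends must agree

listener = Listener(ADDRESS, authkey=AUTHKEY)    # bound and listening from here on

def talk():
    with Client(ADDRESS, authkey=AUTHKEY) as conn:   # a wrong key raises AuthenticationError
        conn.send('hello from the client')

threading.Thread(target=talk).start()

with listener.accept() as conn:                  # blocks until the client connects
    print('connection from', listener.last_accepted)
    print(conn.recv())                           # 'hello from the client'
listener.close()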
+ 'b' + Returns a connection to the address of a `Listener` + 'u' + Returns a connection to the address of a `Listener` + 'b' + Returns pair of connection objects at either end of a pipe + 'u' + Returns pair of connection objects at either end of a pipe + 'b' + Representation of a socket which is bound to an address and listening + 'u' + Representation of a socket which is bound to an address and listening + 'b' + Return a connection object connected to the socket given by `address` + 'u' + Return a connection object connected to the socket given by `address` + 'b' + Representation of a named pipe + 'u' + Representation of a named pipe + 'b'listener created with address=%r'u'listener created with address=%r'b'closing listener with address=%r'u'closing listener with address=%r'b' + Return a connection object connected to the pipe given by `address` + 'u' + Return a connection object connected to the pipe given by `address` + 'b'#CHALLENGE#'b'#WELCOME#'b'#FAILURE#'b'Authkey must be bytes, not {0!s}'u'Authkey must be bytes, not {0!s}'b'md5'u'md5'b'digest received was wrong'u'digest received was wrong'b'message = %r'u'message = %r'b'digest sent was rejected'u'digest sent was rejected'b'fileno'u'fileno'b'poll'u'poll'b'recv_bytes'u'recv_bytes'b'send_bytes'u'send_bytes'b'Should not get here'u'Should not get here'b' + Wait till an object in object_list is ready/readable. + + Returns list of those objects in object_list which are ready/readable. + 'u' + Wait till an object in object_list is ready/readable. + + Returns list of those objects in object_list which are ready/readable. + 'b'_got_empty_message'u'_got_empty_message'b'PollSelector'u'PollSelector'NOFALSEOFFYESTRUEONSnwNWswSWNEseSENSEWnsewNSEWCENTERNONEbothBOTHLEFTtopTOPRIGHTbottomBOTTOMraisedsunkenSUNKENflatFLATridgeRIDGEgrooveGROOVEsolidSOLIDhorizontalHORIZONTALverticalVERTICALnumericNUMERICCHARWORDbaselineBASELINEinsideINSIDEoutsideOUTSIDEselSELsel.firstSEL_FIRSTsel.lastSEL_LASTENDINSERTCURRENTANCHORALLnormalNORMALDISABLEDactiveACTIVEhiddenHIDDENCASCADECHECKBUTTONCOMMANDRADIOBUTTONSEPARATORSINGLEbrowseBROWSEmultipleMULTIPLEextendedEXTENDEDdotboxDOTBOXunderlineUNDERLINEpieslicePIESLICEchordCHORDARCFIRSTLASTbuttBUTTprojectingPROJECTINGROUNDbevelBEVELmiterMITERMOVETOSCROLLunitsUNITSpagesPAGES# Symbolic constants for Tk# Booleans# -anchor and -sticky# -fill# -side# -relief# -orient# -tabs# -wrap# -align# -bordermode# Special tags, marks and insert positions# e.g. 
Canvas.delete(ALL)# Text widget and button states# Canvas state# Menu item types# Selection modes for list boxes# Activestyle for list boxes# NONE='none' is also valid# Various canvas styles# Arguments to xview/yviewb'nw'u'nw'b'sw'u'sw'b'ne'u'ne'b'se'u'se'b'ns'u'ns'b'ew'u'ew'b'nsew'u'nsew'b'center'u'center'b'both'u'both'b'left'b'top'u'top'b'right'b'bottom'u'bottom'b'raised'u'raised'b'sunken'u'sunken'b'flat'u'flat'b'ridge'u'ridge'b'groove'u'groove'b'solid'u'solid'b'horizontal'u'horizontal'b'vertical'u'vertical'b'numeric'u'numeric'b'word'u'word'b'baseline'u'baseline'b'inside'u'inside'b'outside'u'outside'b'sel'u'sel'b'sel.first'u'sel.first'b'sel.last'u'sel.last'b'normal'u'normal'b'disabled'u'disabled'b'active'u'active'b'hidden'u'hidden'b'browse'u'browse'b'multiple'u'multiple'b'extended'u'extended'b'dotbox'u'dotbox'b'underline'u'underline'b'pieslice'u'pieslice'b'chord'u'chord'b'first'u'first'b'butt'u'butt'b'projecting'u'projecting'b'round'u'round'b'bevel'u'bevel'b'miter'u'miter'b'units'u'units'b'pages'u'pages'u'constants'LOG_THRESHOLD_FOR_CONNLOST_WRITESACCEPT_RETRY_DELAYSSL_HANDSHAKE_TIMEOUTautoFALLBACK# After the connection is lost, log warnings after this many write()s.# Seconds to wait before retrying accept().# Number of stack entries to capture in debug mode.# The larger the number, the slower the operation in debug mode# (see extract_stack() in format_helpers.py).# Number of seconds to wait for SSL handshake to complete# The default timeout matches that of Nginx.# Used in sendfile fallback code. We use fallback for platforms# that don't support sendfile, or for TLS connections.# The enum should be here to break circular dependencies between# base_events and sslprotou'asyncio.constants'BaseContextparent_processactive_childrencpu_countReturns the number of CPUs in the systemcannot determine number of cpusReturns a manager associated with a running server process + + The managers methods such as `Lock()`, `Condition()` and `Queue()` + can be used to create shared objects. + managersSyncManagerget_contextReturns two connection object connected by a pipeReturns a non-recursive lock objectReturns a recursive lock objectReturns a condition objectSemaphoreReturns a semaphore objectBoundedSemaphoreReturns a bounded semaphore objectReturns an event objectBarrierpartiesReturns a barrier objectReturns a queue objectJoinableQueuePoolprocessesinitializerinitargsmaxtasksperchildReturns a process pool objectpoolRawValuetypecode_or_typeReturns a shared objectsharedctypesRawArraysize_or_initializerReturns a shared arrayValueReturns a synchronized shared objectReturns a synchronized shared arrayfreeze_supportCheck whether this is a fake forked process in a frozen executable. + If so then run code specified by commandline and exit. + get_loggerReturn package logger -- if it does not already exist then + it is created. + log_to_stderrTurn on logging and add a handler which prints to stderrallow_connection_picklingInstall support for sending connections and sockets + between processes + Sets the path to a python.exe or pythonw.exe binary used to run + child processes instead of sys.executable when using the 'spawn' + start method. Useful for people embedding Python. + set_forkserver_preloadSet list of module names to try to load in forkserver process. + This is really just a hint. 
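The context API above exposes the factory methods (Queue, Process, Pool, Value, ...) on a context object, so a fixed start method can be chosen without touching the global default. A sketch using the 'spawn' context; the worker and its arguments are illustrative, and the available start methods depend on the platform.

import multiprocessing as mp

def square(q, n):
    q.put(n * n)

if __name__ == '__main__':
    print(mp.get_all_start_methods())        # e.g. ['fork', 'spawn', 'forkserver'] on Linux
    ctx = mp.get_context('spawn')            # fixed-start-method context; global default untouched
    q = ctx.Queue()
    p = ctx.Process(target=square, args=(q, 7))
    p.start()
    print(q.get())                           # 49
    p.join()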
+ forkserver_concrete_contextscannot find context for %r_check_availableget_start_methodset_start_methodcannot set start method of concrete contextreducerControls how objects will be reduced to a form that can be + shared with other processes.BaseProcess_start_method_Popenprocess_obj_actual_contextcontext has already been setget_all_start_methodsforkHAVE_SEND_HANDLEForkProcesspopen_forkSpawnProcesspopen_spawn_posixForkServerProcesspopen_forkserverForkContextSpawnContextForkServerContextforkserver start method not availablepopen_spawn_win32_force_start_method_tlsget_spawning_popenspawning_popenset_spawning_popenpopenassert_spawning%s objects should only be shared between processes through inheritance'%s objects should only be shared between processes'' through inheritance'# Base type for contexts. Bound methods of an instance of this type are included in __all__ of __init__.py# This is undocumented. In previous versions of multiprocessing# its only effect was to make socket objects inheritable on Windows.# Type of default context -- underlying context can be set at most once# Context types for fixed start method# bpo-33725: running arbitrary code after fork() is no longer reliable# on macOS since macOS 10.14 (Mojave). Use spawn by default instead.# Force the start method# Check that the current thread is spawning a child processb'Returns the number of CPUs in the system'u'Returns the number of CPUs in the system'b'cannot determine number of cpus'u'cannot determine number of cpus'b'Returns a manager associated with a running server process + + The managers methods such as `Lock()`, `Condition()` and `Queue()` + can be used to create shared objects. + 'u'Returns a manager associated with a running server process + + The managers methods such as `Lock()`, `Condition()` and `Queue()` + can be used to create shared objects. + 'b'Returns two connection object connected by a pipe'u'Returns two connection object connected by a pipe'b'Returns a non-recursive lock object'u'Returns a non-recursive lock object'b'Returns a recursive lock object'u'Returns a recursive lock object'b'Returns a condition object'u'Returns a condition object'b'Returns a semaphore object'u'Returns a semaphore object'b'Returns a bounded semaphore object'u'Returns a bounded semaphore object'b'Returns an event object'u'Returns an event object'b'Returns a barrier object'u'Returns a barrier object'b'Returns a queue object'u'Returns a queue object'b'Returns a process pool object'u'Returns a process pool object'b'Returns a shared object'u'Returns a shared object'b'Returns a shared array'u'Returns a shared array'b'Returns a synchronized shared object'u'Returns a synchronized shared object'b'Returns a synchronized shared array'u'Returns a synchronized shared array'b'Check whether this is a fake forked process in a frozen executable. + If so then run code specified by commandline and exit. + 'u'Check whether this is a fake forked process in a frozen executable. + If so then run code specified by commandline and exit. + 'b'Return package logger -- if it does not already exist then + it is created. + 'u'Return package logger -- if it does not already exist then + it is created. 
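The fragments above describe the start-method machinery (ForkContext/SpawnContext/ForkServerContext, get_context, set_start_method, and the bpo-33725 note about spawn becoming the macOS default). A sketch of selecting a start method explicitly; the worker and process name are illustrative, not from the dump.

# Sketch of choosing a start method via get_context(), which returns a
# context object without touching the one-shot global set_start_method().
import multiprocessing as mp

def worker(q):
    q.put(mp.current_process().name)

if __name__ == "__main__":
    print(mp.get_all_start_methods())   # e.g. ['fork', 'spawn', 'forkserver'] on Linux

    ctx = mp.get_context("spawn")       # explicit start method for this context only
    q = ctx.Queue()
    p = ctx.Process(target=worker, args=(q,), name="spawned-child")
    p.start()
    print(q.get())                      # 'spawned-child'
    p.join()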
+ 'b'Turn on logging and add a handler which prints to stderr'u'Turn on logging and add a handler which prints to stderr'b'Install support for sending connections and sockets + between processes + 'u'Install support for sending connections and sockets + between processes + 'b'Sets the path to a python.exe or pythonw.exe binary used to run + child processes instead of sys.executable when using the 'spawn' + start method. Useful for people embedding Python. + 'u'Sets the path to a python.exe or pythonw.exe binary used to run + child processes instead of sys.executable when using the 'spawn' + start method. Useful for people embedding Python. + 'b'Set list of module names to try to load in forkserver process. + This is really just a hint. + 'u'Set list of module names to try to load in forkserver process. + This is really just a hint. + 'b'cannot find context for %r'u'cannot find context for %r'b'cannot set start method of concrete context'u'cannot set start method of concrete context'b'Controls how objects will be reduced to a form that can be + shared with other processes.'u'Controls how objects will be reduced to a form that can be + shared with other processes.'b'reduction'u'reduction'b'context has already been set'u'context has already been set'b'spawn'u'spawn'b'fork'u'fork'b'forkserver'u'forkserver'b'forkserver start method not available'u'forkserver start method not available'b'spawning_popen'u'spawning_popen'b'%s objects should only be shared between processes through inheritance'u'%s objects should only be shared between processes through inheritance'Utilities for with-statement contexts. See PEP 343.asynccontextmanagernullcontextAbstractContextManagerAbstractAsyncContextManagerAsyncExitStackContextDecoratorredirect_stdoutredirect_stderrsuppressAn abstract base class for context managers.Return `self` upon entering the runtime context.Raise any exception triggered within the runtime context.An abstract base class for asynchronous context managers.__aenter____aexit__A base class or mixin that enables context managers to work as decorators._recreate_cmReturn a recreated instance of self. + + Allows an otherwise one-shot context manager like + _GeneratorContextManager to support use as + a decorator via implicit recreation. + + This is a private interface just for _GeneratorContextManager. + See issue #11647 for details. + _GeneratorContextManagerBaseShared functionality for @contextmanager and @asynccontextmanager.gen_GeneratorContextManagerHelper for @contextmanager decorator.generator didn't yieldgenerator didn't stopgenerator didn't stop after throw()_AsyncGeneratorContextManagerHelper for @asynccontextmanager.generator didn't stop after athrow()@contextmanager decorator. + + Typical usage: + + @contextmanager + def some_generator(): + + try: + yield + finally: + + + This makes this: + + with some_generator() as : + + + equivalent to this: + + + try: + = + + finally: + + helper@asynccontextmanager decorator. + + Typical usage: + + @asynccontextmanager + async def some_async_generator(): + + try: + yield + finally: + + + This makes this: + + async with some_async_generator() as : + + + equivalent to this: + + + try: + = + + finally: + + Context to automatically close something at the end of a block. + + Code like this: + + with closing(.open()) as f: + + + is equivalent to this: + + f = .open() + try: + + finally: + f.close() + + thing_RedirectStream_streamnew_target_new_target_old_targetsexctypeexcinstexctbContext manager for temporarily redirecting stdout to another file. 
+ + # How to send help() to stderr + with redirect_stdout(sys.stderr): + help(dir) + + # How to write help() to a file + with open('help.txt', 'w') as f: + with redirect_stdout(f): + help(pow) + Context manager for temporarily redirecting stderr to another file.Context manager to suppress specified exceptions + + After the exception is suppressed, execution proceeds with the next + statement following the with statement. + + with suppress(FileNotFoundError): + os.remove(somefile) + # Execution still resumes here if the file was already removed + _exceptions_BaseExitStackA base class for ExitStack and AsyncExitStack._create_exit_wrappercm_exit_create_cb_wrapper_exit_wrapper_exit_callbackspop_allPreserve the context stack by transferring it to a new instance.new_stackRegisters a callback with the standard __exit__ method signature. + + Can suppress exceptions the same way __exit__ method can. + Also accepts any object with an __exit__ method (registering a call + to the method instead of the object itself). + _cb_typeexit_method_push_cm_exit_push_exit_callbackenter_contextEnters the supplied context manager. + + If successful, also pushes its __exit__ method as a callback and + returns the result of the __enter__ method. + _cm_typeRegisters an arbitrary callback and arguments. + + Cannot suppress exceptions. + descriptor 'callback' of '_BaseExitStack' object needs an argument"descriptor 'callback' of '_BaseExitStack' object "Passing 'callback' as keyword argument is deprecatedcallback expected at least 1 positional argument, got %d'callback expected at least 1 positional argument, '__wrapped__($self, callback, /, *args, **kwds)Helper to correctly register callbacks to __exit__ methods.is_syncContext manager for dynamic management of a stack of exit callbacks. + + For example: + with ExitStack() as stack: + files = [stack.enter_context(open(fname)) for fname in filenames] + # All opened files will automatically be closed at the end of + # the with statement, even if attempts to open files later + # in the list raise an exception. + received_excframe_exc_fix_exception_contextnew_excold_excexc_contextsuppressed_excpending_raisenew_exc_detailsfixed_ctxImmediately unwind the context stack.Async context manager for dynamic management of a stack of exit + callbacks. + + For example: + async with AsyncExitStack() as stack: + connections = [await stack.enter_async_context(get_connection()) + for i in range(5)] + # All opened connections will automatically be released at the + # end of the async with statement, even if attempts to open a + # connection later in the list raise an exception. + _create_async_exit_wrapper_create_async_cb_wrapperenter_async_contextEnters the supplied async context manager. + + If successful, also pushes its __aexit__ method as a callback and + returns the result of the __aenter__ method. + _push_async_cm_exitpush_async_exitRegisters a coroutine function with the standard __aexit__ method + signature. + + Can suppress exceptions the same way __aexit__ method can. + Also accepts any object with an __aexit__ method (registering a call + to the method instead of the object itself). + push_async_callbackRegisters an arbitrary coroutine function and arguments. + + Cannot suppress exceptions. 
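The contextlib docstrings recovered above cover @contextmanager, redirect_stdout, suppress, and ExitStack. A combined sketch of those pieces, not part of the dump; the generator, file names, and messages are illustrative.

# Sketch combining the contextlib pieces documented above.
import io
import os
from contextlib import contextmanager, suppress, redirect_stdout, ExitStack

@contextmanager
def tagged(label):
    # Code before the yield runs on __enter__, code after it on __exit__.
    print(f"<{label}>")
    try:
        yield label
    finally:
        print(f"</{label}>")

buf = io.StringIO()
with redirect_stdout(buf), tagged("demo") as name:
    print("inside", name)
print(buf.getvalue())                # '<demo>\ninside demo\n</demo>\n'

with suppress(FileNotFoundError):
    os.remove("no-such-file")        # ignored if the file is absent

filenames = []                       # empty so the sketch stays runnable
with ExitStack() as stack:
    files = [stack.enter_context(open(fname)) for fname in filenames]
    stack.callback(print, "registered callbacks run LIFO on exit")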
+ descriptor 'push_async_callback' of 'AsyncExitStack' object needs an argument"descriptor 'push_async_callback' of ""'AsyncExitStack' object needs an argument"push_async_callback expected at least 1 positional argument, got %d'push_async_callback expected at least 1 ''positional argument, got %d'Helper to correctly register coroutine function to __aexit__ + method.cb_suppressContext manager that does no additional processing. + + Used as a stand-in for a normal context manager, when a particular + block of code is only sometimes used with a normal context manager: + + cm = optional_cm if condition else nullcontext() + with cm: + # Perform operation, using optional_cm if condition is True + enter_resultexcinfo# Issue 19330: ensure context manager instances have good docstrings# Unfortunately, this still doesn't provide good help output when# inspecting the created context manager instances, since pydoc# currently bypasses the instance docstring and shows the docstring# for the class instead.# See http://bugs.python.org/issue19404 for more details.# _GCM instances are one-shot context managers, so the# CM must be recreated each time a decorated function is# called# do not keep args and kwds alive unnecessarily# they are only needed for recreation, which is not possible anymore# Need to force instantiation so we can reliably# tell if we get the same exception back# Suppress StopIteration *unless* it's the same exception that# was passed to throw(). This prevents a StopIteration# raised inside the "with" statement from being suppressed.# Don't re-raise the passed in exception. (issue27122)# Likewise, avoid suppressing if a StopIteration exception# was passed to throw() and later wrapped into a RuntimeError# (see PEP 479).# only re-raise if it's *not* the exception that was# passed to throw(), because __exit__() must not raise# an exception unless __exit__() itself failed. But throw()# has to raise the exception to signal propagation, so this# fixes the impedance mismatch between the throw() protocol# and the __exit__() protocol.# This cannot use 'except BaseException as exc' (as in the# async implementation) to maintain compatibility with# Python 2, where old-style class exceptions are not caught# by 'except BaseException'.# See _GeneratorContextManager.__exit__ for comments on subtleties# in this implementation# Avoid suppressing if a StopIteration exception# (see PEP 479 for sync generators; async generators also# have this behavior). But do this only if the exception wrapped# by the RuntimeError is actully Stop(Async)Iteration (see# issue29692).# We use a list of old targets to make this CM re-entrant# Unlike isinstance and issubclass, CPython exception handling# currently only looks at the concrete type hierarchy (ignoring# the instance and subclass checking hooks). While Guido considers# that a bug rather than a feature, it's a fairly hard one to fix# due to various internal implementation details. 
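The fragments above also include the AsyncExitStack and nullcontext docstrings. A short async sketch of both, not part of the dump; the "connection" context manager and its names are illustrative.

# Sketch of the async counterparts documented above: asynccontextmanager,
# AsyncExitStack, and nullcontext. Resource names are illustrative.
import asyncio
from contextlib import asynccontextmanager, AsyncExitStack, nullcontext

@asynccontextmanager
async def connection(name):
    print("open", name)
    try:
        yield name
    finally:
        print("close", name)

async def main():
    async with AsyncExitStack() as stack:
        conns = [await stack.enter_async_context(connection(f"conn-{i}"))
                 for i in range(2)]
        # nullcontext() is the do-nothing stand-in described above; sync
        # context managers can be mixed in via enter_context().
        placeholder = stack.enter_context(nullcontext("no-op"))
        print(conns, placeholder)
    # Exit callbacks ran here in LIFO order: conn-1 closed before conn-0.

asyncio.run(main())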
suppress provides# the simpler issubclass based semantics, rather than trying to# exactly reproduce the limitations of the CPython interpreter.# See http://bugs.python.org/issue12029 for more details# We use an unbound method rather than a bound method to follow# the standard lookup behaviour for special methods.# Not a context manager, so assume it's a callable.# Allow use as a decorator.# We look up the special methods on the type to match the with# statement.# We changed the signature, so using @wraps is not appropriate, but# setting __wrapped__ may still help with introspection.# Allow use as a decorator# Inspired by discussions on http://bugs.python.org/issue13585# We manipulate the exception state so it behaves as though# we were actually nesting multiple with statements# Context may not be correct, so find the end of the chain# Context is already set correctly (see issue 20317)# Change the end of the chain to point to the exception# we expect it to reference# Callbacks are invoked in LIFO order to match the behaviour of# nested context managers# simulate the stack of exceptions by setting the context# bare "raise exc_details[1]" replaces our carefully# set-up context# Inspired by discussions on https://bugs.python.org/issue29302# Not an async context manager, so assume it's a coroutine functionb'Utilities for with-statement contexts. See PEP 343.'u'Utilities for with-statement contexts. See PEP 343.'b'asynccontextmanager'u'asynccontextmanager'b'contextmanager'u'contextmanager'b'closing'u'closing'b'nullcontext'u'nullcontext'b'AbstractContextManager'u'AbstractContextManager'b'AbstractAsyncContextManager'u'AbstractAsyncContextManager'b'AsyncExitStack'u'AsyncExitStack'b'ContextDecorator'u'ContextDecorator'b'ExitStack'u'ExitStack'b'redirect_stdout'u'redirect_stdout'b'redirect_stderr'u'redirect_stderr'b'suppress'u'suppress'b'An abstract base class for context managers.'u'An abstract base class for context managers.'b'Return `self` upon entering the runtime context.'u'Return `self` upon entering the runtime context.'b'Raise any exception triggered within the runtime context.'u'Raise any exception triggered within the runtime context.'b'__enter__'u'__enter__'b'__exit__'u'__exit__'b'An abstract base class for asynchronous context managers.'u'An abstract base class for asynchronous context managers.'b'__aenter__'u'__aenter__'b'__aexit__'u'__aexit__'b'A base class or mixin that enables context managers to work as decorators.'u'A base class or mixin that enables context managers to work as decorators.'b'Return a recreated instance of self. + + Allows an otherwise one-shot context manager like + _GeneratorContextManager to support use as + a decorator via implicit recreation. + + This is a private interface just for _GeneratorContextManager. + See issue #11647 for details. + 'u'Return a recreated instance of self. + + Allows an otherwise one-shot context manager like + _GeneratorContextManager to support use as + a decorator via implicit recreation. + + This is a private interface just for _GeneratorContextManager. + See issue #11647 for details. 
+ 'b'Shared functionality for @contextmanager and @asynccontextmanager.'u'Shared functionality for @contextmanager and @asynccontextmanager.'b'Helper for @contextmanager decorator.'u'Helper for @contextmanager decorator.'b'generator didn't yield'u'generator didn't yield'b'generator didn't stop'u'generator didn't stop'b'generator didn't stop after throw()'u'generator didn't stop after throw()'b'Helper for @asynccontextmanager.'u'Helper for @asynccontextmanager.'b'generator didn't stop after athrow()'u'generator didn't stop after athrow()'b'@contextmanager decorator. + + Typical usage: + + @contextmanager + def some_generator(): + + try: + yield + finally: + + + This makes this: + + with some_generator() as : + + + equivalent to this: + + + try: + = + + finally: + + 'u'@contextmanager decorator. + + Typical usage: + + @contextmanager + def some_generator(): + + try: + yield + finally: + + + This makes this: + + with some_generator() as : + + + equivalent to this: + + + try: + = + + finally: + + 'b'@asynccontextmanager decorator. + + Typical usage: + + @asynccontextmanager + async def some_async_generator(): + + try: + yield + finally: + + + This makes this: + + async with some_async_generator() as : + + + equivalent to this: + + + try: + = + + finally: + + 'u'@asynccontextmanager decorator. + + Typical usage: + + @asynccontextmanager + async def some_async_generator(): + + try: + yield + finally: + + + This makes this: + + async with some_async_generator() as : + + + equivalent to this: + + + try: + = + + finally: + + 'b'Context to automatically close something at the end of a block. + + Code like this: + + with closing(.open()) as f: + + + is equivalent to this: + + f = .open() + try: + + finally: + f.close() + + 'u'Context to automatically close something at the end of a block. + + Code like this: + + with closing(.open()) as f: + + + is equivalent to this: + + f = .open() + try: + + finally: + f.close() + + 'b'Context manager for temporarily redirecting stdout to another file. + + # How to send help() to stderr + with redirect_stdout(sys.stderr): + help(dir) + + # How to write help() to a file + with open('help.txt', 'w') as f: + with redirect_stdout(f): + help(pow) + 'u'Context manager for temporarily redirecting stdout to another file. + + # How to send help() to stderr + with redirect_stdout(sys.stderr): + help(dir) + + # How to write help() to a file + with open('help.txt', 'w') as f: + with redirect_stdout(f): + help(pow) + 'b'Context manager for temporarily redirecting stderr to another file.'u'Context manager for temporarily redirecting stderr to another file.'b'Context manager to suppress specified exceptions + + After the exception is suppressed, execution proceeds with the next + statement following the with statement. + + with suppress(FileNotFoundError): + os.remove(somefile) + # Execution still resumes here if the file was already removed + 'u'Context manager to suppress specified exceptions + + After the exception is suppressed, execution proceeds with the next + statement following the with statement. + + with suppress(FileNotFoundError): + os.remove(somefile) + # Execution still resumes here if the file was already removed + 'b'A base class for ExitStack and AsyncExitStack.'u'A base class for ExitStack and AsyncExitStack.'b'Preserve the context stack by transferring it to a new instance.'u'Preserve the context stack by transferring it to a new instance.'b'Registers a callback with the standard __exit__ method signature. 
+ + Can suppress exceptions the same way __exit__ method can. + Also accepts any object with an __exit__ method (registering a call + to the method instead of the object itself). + 'u'Registers a callback with the standard __exit__ method signature. + + Can suppress exceptions the same way __exit__ method can. + Also accepts any object with an __exit__ method (registering a call + to the method instead of the object itself). + 'b'Enters the supplied context manager. + + If successful, also pushes its __exit__ method as a callback and + returns the result of the __enter__ method. + 'u'Enters the supplied context manager. + + If successful, also pushes its __exit__ method as a callback and + returns the result of the __enter__ method. + 'b'Registers an arbitrary callback and arguments. + + Cannot suppress exceptions. + 'u'Registers an arbitrary callback and arguments. + + Cannot suppress exceptions. + 'b'descriptor 'callback' of '_BaseExitStack' object needs an argument'u'descriptor 'callback' of '_BaseExitStack' object needs an argument'b'callback'u'callback'b'Passing 'callback' as keyword argument is deprecated'u'Passing 'callback' as keyword argument is deprecated'b'callback expected at least 1 positional argument, got %d'u'callback expected at least 1 positional argument, got %d'b'($self, callback, /, *args, **kwds)'u'($self, callback, /, *args, **kwds)'b'Helper to correctly register callbacks to __exit__ methods.'u'Helper to correctly register callbacks to __exit__ methods.'b'Context manager for dynamic management of a stack of exit callbacks. + + For example: + with ExitStack() as stack: + files = [stack.enter_context(open(fname)) for fname in filenames] + # All opened files will automatically be closed at the end of + # the with statement, even if attempts to open files later + # in the list raise an exception. + 'u'Context manager for dynamic management of a stack of exit callbacks. + + For example: + with ExitStack() as stack: + files = [stack.enter_context(open(fname)) for fname in filenames] + # All opened files will automatically be closed at the end of + # the with statement, even if attempts to open files later + # in the list raise an exception. + 'b'Immediately unwind the context stack.'u'Immediately unwind the context stack.'b'Async context manager for dynamic management of a stack of exit + callbacks. + + For example: + async with AsyncExitStack() as stack: + connections = [await stack.enter_async_context(get_connection()) + for i in range(5)] + # All opened connections will automatically be released at the + # end of the async with statement, even if attempts to open a + # connection later in the list raise an exception. + 'u'Async context manager for dynamic management of a stack of exit + callbacks. + + For example: + async with AsyncExitStack() as stack: + connections = [await stack.enter_async_context(get_connection()) + for i in range(5)] + # All opened connections will automatically be released at the + # end of the async with statement, even if attempts to open a + # connection later in the list raise an exception. + 'b'Enters the supplied async context manager. + + If successful, also pushes its __aexit__ method as a callback and + returns the result of the __aenter__ method. + 'u'Enters the supplied async context manager. + + If successful, also pushes its __aexit__ method as a callback and + returns the result of the __aenter__ method. + 'b'Registers a coroutine function with the standard __aexit__ method + signature. 
+ + Can suppress exceptions the same way __aexit__ method can. + Also accepts any object with an __aexit__ method (registering a call + to the method instead of the object itself). + 'u'Registers a coroutine function with the standard __aexit__ method + signature. + + Can suppress exceptions the same way __aexit__ method can. + Also accepts any object with an __aexit__ method (registering a call + to the method instead of the object itself). + 'b'Registers an arbitrary coroutine function and arguments. + + Cannot suppress exceptions. + 'u'Registers an arbitrary coroutine function and arguments. + + Cannot suppress exceptions. + 'b'descriptor 'push_async_callback' of 'AsyncExitStack' object needs an argument'u'descriptor 'push_async_callback' of 'AsyncExitStack' object needs an argument'b'push_async_callback expected at least 1 positional argument, got %d'u'push_async_callback expected at least 1 positional argument, got %d'b'Helper to correctly register coroutine function to __aexit__ + method.'u'Helper to correctly register coroutine function to __aexit__ + method.'b'Context manager that does no additional processing. + + Used as a stand-in for a normal context manager, when a particular + block of code is only sometimes used with a normal context manager: + + cm = optional_cm if condition else nullcontext() + with cm: + # Perform operation, using optional_cm if condition is True + 'u'Context manager that does no additional processing. + + Used as a stand-in for a normal context manager, when a particular + block of code is only sometimes used with a normal context manager: + + cm = optional_cm if condition else nullcontext() + with cm: + # Perform operation, using optional_cm if condition is True + 'u'contextlib'b'ContextVar'u'ContextVar'b'Token'u'Token'b'copy_context'u'copy_context'u'contextvars'Generic (shallow and deep) copying operations. + +Interface summary: + + import copy + + x = copy.copy(y) # make a shallow copy of y + x = copy.deepcopy(y) # make a deep copy of y + +For module specific errors, copy.Error is raised. + +The difference between shallow and deep copying is only relevant for +compound objects (objects that contain other objects, like lists or +class instances). + +- A shallow copy constructs a new compound object and then (to the + extent possible) inserts *the same objects* into it that the + original contains. + +- A deep copy constructs a new compound object and then, recursively, + inserts *copies* into it of the objects found in the original. + +Two problems often exist with deep copy operations that don't exist +with shallow copy operations: + + a) recursive objects (compound objects that, directly or indirectly, + contain a reference to themselves) may cause a recursive loop + + b) because deep copy copies *everything* it may copy too much, e.g. + administrative data structures that should be shared even between + copies + +Python's deep copy operation avoids these problems by: + + a) keeping a table of objects already copied during the current + copying pass + + b) letting user-defined classes override the copying operation or the + set of components copied + +This version does not copy types like module, class, function, method, +nor stack trace, stack frame, nor file, socket, window, nor array, nor +any similar types. + +Classes can use the same interfaces to control copying that they use +to control pickling: they can define methods called __getinitargs__(), +__getstate__() and __setstate__(). 
See the documentation for module +"pickle" for information on these methods. +org.python.corePyStringMapdeepcopyShallow copy operation on arbitrary Python objects. + + See the module's __doc__ string for more info. + _copy_dispatchcopier_copy_immutablereductorun(shallow)copyable object of type %s_reconstructCodeType_nilDeep copy operation on arbitrary Python objects. + + See the module's __doc__ string for more info. + _deepcopy_dispatch_deepcopy_atomicun(deep)copyable object of type %s_keep_alive_deepcopy_list_deepcopy_tuple_deepcopy_dict_deepcopy_methodKeeps a reference to the object x in the memo. + + Because we remember objects by their id, we have + to assure that possibly temporary objects are kept + alive by referencing them. + We store a reference at the id of the memo, which should + normally not be used unless someone tries to deepcopy + the memo itself... + listiterdictiterdeepslotstate# backward compatibility# treat it as a regular class:# If is its own copy, don't memoize.# Make sure x lives at least as long as d# We're not going to put the tuple in the memo, but it's still important we# check for it, in case the tuple contains recursive mutable structures.# Copy instance methods# aha, this is the first one :-)b'Generic (shallow and deep) copying operations. + +Interface summary: + + import copy + + x = copy.copy(y) # make a shallow copy of y + x = copy.deepcopy(y) # make a deep copy of y + +For module specific errors, copy.Error is raised. + +The difference between shallow and deep copying is only relevant for +compound objects (objects that contain other objects, like lists or +class instances). + +- A shallow copy constructs a new compound object and then (to the + extent possible) inserts *the same objects* into it that the + original contains. + +- A deep copy constructs a new compound object and then, recursively, + inserts *copies* into it of the objects found in the original. + +Two problems often exist with deep copy operations that don't exist +with shallow copy operations: + + a) recursive objects (compound objects that, directly or indirectly, + contain a reference to themselves) may cause a recursive loop + + b) because deep copy copies *everything* it may copy too much, e.g. + administrative data structures that should be shared even between + copies + +Python's deep copy operation avoids these problems by: + + a) keeping a table of objects already copied during the current + copying pass + + b) letting user-defined classes override the copying operation or the + set of components copied + +This version does not copy types like module, class, function, method, +nor stack trace, stack frame, nor file, socket, window, nor array, nor +any similar types. + +Classes can use the same interfaces to control copying that they use +to control pickling: they can define methods called __getinitargs__(), +__getstate__() and __setstate__(). See the documentation for module +"pickle" for information on these methods. +'u'Generic (shallow and deep) copying operations. + +Interface summary: + + import copy + + x = copy.copy(y) # make a shallow copy of y + x = copy.deepcopy(y) # make a deep copy of y + +For module specific errors, copy.Error is raised. + +The difference between shallow and deep copying is only relevant for +compound objects (objects that contain other objects, like lists or +class instances). + +- A shallow copy constructs a new compound object and then (to the + extent possible) inserts *the same objects* into it that the + original contains. 
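The fragments above implement copy.copy()/copy.deepcopy(), including the memo dict and _keep_alive used to handle recursive structures. A sketch of the observable shallow-vs-deep difference the module docstring describes; not part of the dump, and the data is illustrative.

# Sketch of the shallow-vs-deep behaviour described in the copy docstring.
import copy

original = [[1, 2], [3, 4]]

shallow = copy.copy(original)        # new outer list, same inner lists
deep = copy.deepcopy(original)       # inner lists copied recursively

original[0].append(99)
print(shallow[0])    # [1, 2, 99] -- shares the inner list
print(deep[0])       # [1, 2]     -- fully independent copy

# deepcopy handles recursive structures via the memo dict mentioned above.
cycle = []
cycle.append(cycle)
copied = copy.deepcopy(cycle)
print(copied[0] is copied)           # True: the cycle is preserved, not re-entered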
+ +- A deep copy constructs a new compound object and then, recursively, + inserts *copies* into it of the objects found in the original. + +Two problems often exist with deep copy operations that don't exist +with shallow copy operations: + + a) recursive objects (compound objects that, directly or indirectly, + contain a reference to themselves) may cause a recursive loop + + b) because deep copy copies *everything* it may copy too much, e.g. + administrative data structures that should be shared even between + copies + +Python's deep copy operation avoids these problems by: + + a) keeping a table of objects already copied during the current + copying pass + + b) letting user-defined classes override the copying operation or the + set of components copied + +This version does not copy types like module, class, function, method, +nor stack trace, stack frame, nor file, socket, window, nor array, nor +any similar types. + +Classes can use the same interfaces to control copying that they use +to control pickling: they can define methods called __getinitargs__(), +__getstate__() and __setstate__(). See the documentation for module +"pickle" for information on these methods. +'b'deepcopy'u'deepcopy'b'Shallow copy operation on arbitrary Python objects. + + See the module's __doc__ string for more info. + 'u'Shallow copy operation on arbitrary Python objects. + + See the module's __doc__ string for more info. + 'b'__copy__'u'__copy__'b'__reduce_ex__'u'__reduce_ex__'b'__reduce__'u'__reduce__'b'un(shallow)copyable object of type %s'u'un(shallow)copyable object of type %s'b'CodeType'u'CodeType'b'Deep copy operation on arbitrary Python objects. + + See the module's __doc__ string for more info. + 'u'Deep copy operation on arbitrary Python objects. + + See the module's __doc__ string for more info. + 'b'__deepcopy__'u'__deepcopy__'b'un(deep)copyable object of type %s'u'un(deep)copyable object of type %s'b'Keeps a reference to the object x in the memo. + + Because we remember objects by their id, we have + to assure that possibly temporary objects are kept + alive by referencing them. + We store a reference at the id of the memo, which should + normally not be used unless someone tries to deepcopy + the memo itself... + 'u'Keeps a reference to the object x in the memo. + + Because we remember objects by their id, we have + to assure that possibly temporary objects are kept + alive by referencing them. + We store a reference at the id of the memo, which should + normally not be used unless someone tries to deepcopy + the memo itself... + 'b'__setstate__'u'__setstate__'Helper to provide extensibility for pickle. + +This is only useful to add pickle support for extension types defined in +C, not for instances of user-defined classes. +constructoradd_extensionremove_extensionclear_extension_cacheob_typepickle_functionconstructor_obreduction functions must be callableconstructors must be callablepickle_complex_reconstructor_HEAPTYPE_reduce_excannot pickle object object: a class that defines __slots__ without defining __getstate__ cannot be pickled with protocol " object: ""a class that defines __slots__ without ""defining __getstate__ cannot be pickled ""with protocol "__newobj____newobj_ex__Used by pickle protocol 4, instead of __newobj__ to allow classes with + keyword-only arguments to be pickled correctly. + _slotnamesReturn a list of slot names for a given class. + + This needs to find slots defined by the class and its bases, so we + can't simply return the __slots__ attribute. 
We must walk down + the Method Resolution Order and concatenate the __slots__ of each + class found there. (This assumes classes don't modify their + __slots__ attribute to misrepresent their slots after the class is + defined.) + __slotnames__slots_%s%s_extension_registry_inverted_registry_extension_cacheRegister an extension code.code out of rangekey %s is already registered with code %scode %s is already in use for key %sUnregister an extension code. For testing only.key %s is not registered with code %s# The constructor_ob function is a vestige of safe for unpickling.# There is no reason for the caller to pass it anymore.# Example: provide pickling support for complex numbers.# Support for pickling new-style objects# Python code for object.__reduce_ex__ for protocols 0 and 1# not really reachable# Helper for __reduce_ex__ protocol 2# Get the value from a cache in the class if possible# Not cached -- calculate the value# This class has no slots# Slots found -- gather slot names from all base classes# if class has a single slot, it can be given as a string# special descriptors# mangled names# Cache the outcome in the class if at all possible# But don't die if we can't# A registry of extension codes. This is an ad-hoc compression# mechanism. Whenever a global reference to , is about# to be pickled, the (, ) tuple is looked up here to see# if it is a registered extension code for it. Extension codes are# universal, so that the meaning of a pickle does not depend on# context. (There are also some codes reserved for local use that# don't have this restriction.) Codes are positive ints; 0 is# reserved.# key -> code# code -> key# code -> object# Don't ever rebind those names: pickling grabs a reference to them when# it's initialized, and won't see a rebinding.# Redundant registrations are benign# Standard extension code assignments# Reserved ranges# First Last Count Purpose# 1 127 127 Reserved for Python standard library# 128 191 64 Reserved for Zope# 192 239 48 Reserved for 3rd parties# 240 255 16 Reserved for private use (will never be assigned)# 256 Inf Inf Reserved for future assignment# Extension codes are assigned by the Python Software Foundation.b'Helper to provide extensibility for pickle. + +This is only useful to add pickle support for extension types defined in +C, not for instances of user-defined classes. +'u'Helper to provide extensibility for pickle. + +This is only useful to add pickle support for extension types defined in +C, not for instances of user-defined classes. +'b'constructor'u'constructor'b'add_extension'u'add_extension'b'remove_extension'u'remove_extension'b'clear_extension_cache'u'clear_extension_cache'b'reduction functions must be callable'u'reduction functions must be callable'b'constructors must be callable'u'constructors must be callable'b'__flags__'u'__flags__'b'cannot pickle 'u'cannot pickle 'b' object'u' object'b' object: a class that defines __slots__ without defining __getstate__ cannot be pickled with protocol 'u' object: a class that defines __slots__ without defining __getstate__ cannot be pickled with protocol 'b'Used by pickle protocol 4, instead of __newobj__ to allow classes with + keyword-only arguments to be pickled correctly. + 'u'Used by pickle protocol 4, instead of __newobj__ to allow classes with + keyword-only arguments to be pickled correctly. + 'b'Return a list of slot names for a given class. + + This needs to find slots defined by the class and its bases, so we + can't simply return the __slots__ attribute. 
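The copyreg fragments above document pickle(ob_type, pickle_function, constructor_ob) and mirror the module's own complex-number example. A sketch of registering a reduction function the same way; not part of the dump, and the Point class is illustrative.

# Sketch of copyreg.pickle() as documented above; Point is illustrative.
import copyreg
import pickle

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

def pickle_point(p):
    # Return (callable, args) so pickle can rebuild the object.
    return Point, (p.x, p.y)

copyreg.pickle(Point, pickle_point)

data = pickle.dumps(Point(1, 2))
restored = pickle.loads(data)
print(restored.x, restored.y)        # 1 2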
We must walk down + the Method Resolution Order and concatenate the __slots__ of each + class found there. (This assumes classes don't modify their + __slots__ attribute to misrepresent their slots after the class is + defined.) + 'u'Return a list of slot names for a given class. + + This needs to find slots defined by the class and its bases, so we + can't simply return the __slots__ attribute. We must walk down + the Method Resolution Order and concatenate the __slots__ of each + class found there. (This assumes classes don't modify their + __slots__ attribute to misrepresent their slots after the class is + defined.) + 'b'__slotnames__'u'__slotnames__'b'_%s%s'u'_%s%s'b'Register an extension code.'u'Register an extension code.'b'code out of range'u'code out of range'b'key %s is already registered with code %s'u'key %s is already registered with code %s'b'code %s is already in use for key %s'u'code %s is already in use for key %s'b'Unregister an extension code. For testing only.'u'Unregister an extension code. For testing only.'b'key %s is not registered with code %s'u'key %s is not registered with code %s'PYTHONASYNCIODEBUG_DEBUGCoroWrapperisgeneratorextract_stackcoro_repr, created at f_lasti was never yielded from +Coroutine object created at (most recent call last, truncated to '\nCoroutine object created at ''(most recent call last, truncated to ' last lines): +Decorator to mark coroutines. + + If the coroutine is not yielded from before it is destroyed, + an error message is logged. + "@coroutine" decorator is deprecated since Python 3.8, use "async def" insteadisgeneratorfunctionawait_meth_is_coroutineReturn True if func is a decorated coroutine function.CoroutineTypeGeneratorType_COROUTINE_TYPES_iscoroutine_typecacheReturn True if obj is a coroutine object.is_corowrapper_format_callbackcoro_name without __name__>cr_runningcoro_codecr_code runningcoro_frame_get_function_source done, defined at running, defined at running at # If you set _DEBUG to true, @coroutine will wrap the resulting# generator objects in a CoroWrapper instance (defined below). That# instance will log a message when the generator is never iterated# over, which may happen when you forget to use "await" or "yield from"# with a coroutine call.# Note that the value of the _DEBUG flag is taken# when the decorator is used, so to be of any use it must be set# before you define your coroutines. A downside of using this feature# is that tracebacks show entries for the CoroWrapper.__next__ method# when _DEBUG is true.# Wrapper for coroutine object in _DEBUG mode.# Used to unwrap @coroutine decorator# Be careful accessing self.gen.frame -- self.gen might not exist.# In Python 3.5 that's all we need to do for coroutines# defined with "async def".# If 'res' is an awaitable, run it.# Python < 3.5 does not implement __qualname__# on generator objects, so we set it manually.# We use getattr as some callables (such as# functools.partial may lack __qualname__).# For iscoroutinefunction().# A marker for iscoroutinefunction.# Prioritize native coroutine check to speed-up# asyncio.iscoroutine.# Just in case we don't want to cache more than 100# positive types. That shouldn't ever happen, unless# someone stressing the system on purpose.# Coroutines compiled with Cython sometimes don't have# proper __qualname__ or __name__. 
While that is a bug# in Cython, asyncio shouldn't crash with an AttributeError# in its __repr__ functions.# Stop masking Cython bugs, expose them in a friendly way.# Built-in types might not have __qualname__ or __name__.# If Cython's coroutine has a fake code object without proper# co_filename -- expose that.b'coroutine'u'coroutine'b'iscoroutinefunction'u'iscoroutinefunction'b'iscoroutine'u'iscoroutine'b'PYTHONASYNCIODEBUG'u'PYTHONASYNCIODEBUG'b', created at 'u', created at 'b'gen'u'gen'b' was never yielded from'u' was never yielded from'b'_source_traceback'u'_source_traceback'b' +Coroutine object created at (most recent call last, truncated to 'u' +Coroutine object created at (most recent call last, truncated to 'b' last lines): +'u' last lines): +'b'Decorator to mark coroutines. + + If the coroutine is not yielded from before it is destroyed, + an error message is logged. + 'u'Decorator to mark coroutines. + + If the coroutine is not yielded from before it is destroyed, + an error message is logged. + 'b'"@coroutine" decorator is deprecated since Python 3.8, use "async def" instead'u'"@coroutine" decorator is deprecated since Python 3.8, use "async def" instead'b'Return True if func is a decorated coroutine function.'u'Return True if func is a decorated coroutine function.'b'_is_coroutine'u'_is_coroutine'b'Return True if obj is a coroutine object.'u'Return True if obj is a coroutine object.'b' without __name__>'u' without __name__>'b'cr_code'u'cr_code'b'gi_code'u'gi_code'b' running'u' running'b''u''b' done, defined at 'u' done, defined at 'b' running, defined at 'u' running, defined at 'b' running at 'u' running at 'u'asyncio.coroutines'u'coroutines'Concrete date/time and related types. + +See http://www.iana.org/time-zones/repository/tz-link.html for +time zone and DST data sources. 
+_time3652059_MAXORDINAL_DAYS_IN_MONTH_DAYS_BEFORE_MONTHdim_is_leapyear -> 1 if leap year, else 0._days_before_yearyear -> number of days before January 1st of year.365_days_in_monthyear, month -> number of days in that month in that year._days_before_monthyear, month -> number of days in year preceding first day of month.month must be in 1..12_ymd2ordyear, month, day -> ordinal, considering 01-Jan-0001 as day 1.day must be in 1..%d_DI400Y_DI100Y_DI4Y_ord2ymdordinal -> (year, month, day), considering 01-Jan-0001 as day 1.n400n100n4n1leapyearprecedingJanFebMarAprMayJunJulAugSepOctNovDec_MONTHNAMESMonTueWedThuFriSatSun_DAYNAMES_build_struct_timehhdstflagwdaydnum_format_timetimespec{:02d}{:02d}:{:02d}{:02d}:{:02d}:{:02d}{:02d}:{:02d}:{:02d}.{:03d}milliseconds{:02d}:{:02d}:{:02d}.{:06d}specsUnknown timespec value_format_offsetoff%s%02d:%02d:%02d.%06d_wrap_strftimefreplacezreplaceZreplacenewformatch%06d%c%02d%02d%02d.%06d%c%02d%02d%02d%c%02d%02d%%_parse_isoformat_datedtstrInvalid date separator: %sInvalid date separator_parse_hh_mm_ss_fftstrlen_strtime_compsIncomplete time componentnext_charInvalid time separator: %cInvalid microsecond componentlen_remainder_parse_isoformat_timeIsoformat time too shorttz_postimestrtzitzstrMalformed time zone stringtz_compstd_check_tznametzinfo.tzname() must return None or string, not '%s'"tzinfo.tzname() must return None or string, ""not '%s'"_check_utc_offsettzinfo.%s() must return None or timedelta, not '%s'"tzinfo.%s() must return None ""or timedelta, not '%s'"%s()=%s, must be strictly between -timedelta(hours=24) and timedelta(hours=24)"%s()=%s, must be strictly between ""-timedelta(hours=24) and timedelta(hours=24)"_check_int_fieldinteger argument expected, got float__index__ returned non-int (type %s)orig__int__ returned non-int (type %s)an integer is required (got type %s)_check_date_fieldsyear must be in %d..%d_check_time_fieldshour must be in 0..23minute must be in 0..59second must be in 0..59microsecond must be in 0..999999fold must be either 0 or 1_check_tzinfo_argtzinfo argument must be None or of a tzinfo subclass_cmperrorcan't compare '%s' to '%s'_divide_and_rounddivide a by b and round result to the nearest integer + + When the ratio is exactly half-way between two integers, + the even integer is returned. + greater_than_halfRepresent the difference between two datetime objects. + + Supported operators: + + - add, subtract timedelta + - unary plus, minus, abs + - compare to timedelta + - multiply, divide by int + + In addition, datetime supports subtraction of two datetime objects + returning a timedelta, and addition or subtraction of a datetime + and a timedelta giving a datetime. + + Representation: (days, seconds, microseconds). Why? Because I + felt like it. + _seconds_microseconds_hashcodemodfdayfrac24.024.3600.03600.daysecondsfracdaysecondswholesecondsfrac2.01000000.01e6usdouble2100000.02.1e610000003100000.03.1e6999999999timedelta # of days is too large: %ddays=%dseconds=%dmicroseconds=%d%s.%s(%s)%d:%02d:%02dplural%d day%s, Total seconds in the duration.86400_to_microsecondsusec_getstateConcrete date type. + + Constructors: + + __new__() + fromtimestamp() + today() + fromordinal() + + Operators: + + __repr__, __str__ + __eq__, __le__, __lt__, __ge__, __gt__, __hash__ + __add__, __radd__, __sub__ (add/radd only with timedelta arg) + + Methods: + + timetuple() + toordinal() + weekday() + isoweekday(), isocalendar(), isoformat() + ctime() + strftime() + + Properties (readonly): + year, month, day + _year_month_dayConstructor. 
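The fragments above are the datetime module's proleptic-Gregorian helpers (_ymd2ord/_ord2ymd), _divide_and_round, and the timedelta normalization to (days, seconds, microseconds). A sketch of the public behaviour those helpers produce; not part of the dump, and the dates are illustrative.

# Sketch of ordinal round-trips and timedelta normalization.
from datetime import date, timedelta

d = date(2010, 1, 1)
n = d.toordinal()                    # days since 0001-01-01, with that day == 1
print(n, date.fromordinal(n))        # 733773 2010-01-01

# timedelta normalizes to (days, seconds, microseconds), as described above.
td = timedelta(hours=25, minutes=61, seconds=-30)
print(td.days, td.seconds, td.microseconds)   # 1 7230 0
print(td.total_seconds())                     # 93630.0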
+ + Arguments: + + year, month, day (required, base 1) + Failed to encode latin1 string when unpickling a date object. pickle.load(data, encoding='latin1') is assumed."Failed to encode latin1 string when unpickling ""a date object. ""pickle.load(data, encoding='latin1') is assumed."__setstateConstruct a date from a POSIX timestamp (like time.time()).jdayConstruct a date from time.time().Construct a date from a proleptic Gregorian ordinal. + + January 1 of year 1 is day 1. Only the year, month and day are + non-zero in the result. + date_stringConstruct a date from the output of date.isoformat().fromisoformat: argument must be strInvalid isoformat string: Construct a date from the ISO year, week number and weekday. + + This is the inverse of the date.isocalendar() functionYear is out of range: out_of_rangefirst_weekdayInvalid week: Invalid weekday: (range is [1, 7])day_offset_isoweek1mondayday_1ord_dayConvert to formal string, for repr(). + + >>> dt = datetime(2010, 1, 1) + >>> repr(dt) + 'datetime.datetime(2010, 1, 1, 0, 0)' + + >>> dt = datetime(2010, 1, 1, tzinfo=timezone.utc) + >>> repr(dt) + 'datetime.datetime(2010, 1, 1, 0, 0, tzinfo=datetime.timezone.utc)' + %s.%s(%d, %d, %d)Return ctime() style string.%s %s %2d 00:00:00 %04dFormat using strftime().must be str, not %sReturn the date formatted according to ISO. + + This is 'YYYY-MM-DD'. + + References: + - http://www.w3.org/TR/NOTE-datetime + - http://www.cl.cam.ac.uk/~mgk25/iso-time.html + %04d-%02d-%02dyear (1-9999)month (1-12)day (1-31)Return local time tuple compatible with time.localtime().Return proleptic Gregorian ordinal for the year, month and day. + + January 1 of year 1 is day 1. Only the year, month and day values + contribute to the result. + Return a new date with new values for the specified fields.m2Hash.Add a date to a timedelta.result out of rangeSubtract two dates, or a date and a timedelta.days1days2Return day of the week, where Monday == 0 ... Sunday == 6.Return day of the week, where Monday == 1 ... Sunday == 7.Return a 3-tuple containing ISO year, week number, and weekday. + + The first ISO week of the year is the (Mon-Sun) week + containing the year's first Thursday; everything else derives + from that. + + The first week is 1; Monday is 1 ... Sunday is 7. + + ISO calendar algorithm taken from + http://www.phys.uu.nl/~vgent/calendar/isocalendar.htm + (used with permission) + week1mondayyhiylo_date_classAbstract base class for time zone info classes. + + Subclasses must override the name(), utcoffset() and dst() methods. + datetime -> string name of time zone.tzinfo subclass must override tzname()datetime -> timedelta, positive for east of UTC, negative for west of UTCtzinfo subclass must override utcoffset()datetime -> DST offset as timedelta, positive for east of UTC. + + Return 0 if DST not in effect. utcoffset() must include the DST + offset. + tzinfo subclass must override dst()datetime in UTC -> datetime in local time.fromutc() requires a datetime argumentdt.tzinfo is not selfdtofffromutc() requires a non-None utcoffset() result"fromutc() requires a non-None utcoffset() ""result"dtdstfromutc() requires a non-None dst() resultfromutc(): dt.dst gave inconsistent results; cannot convert"fromutc(): dt.dst gave inconsistent ""results; cannot convert"getinitargs_tzinfo_classTime with time zone. 
+ + Constructors: + + __new__() + + Operators: + + __repr__, __str__ + __eq__, __le__, __lt__, __ge__, __gt__, __hash__ + + Methods: + + strftime() + isoformat() + utcoffset() + tzname() + dst() + + Properties (readonly): + hour, minute, second, microsecond, tzinfo, fold + _hour_minute_second_microsecond_tzinfoConstructor. + + Arguments: + + hour, minute (required) + second, microsecond (default to zero) + tzinfo (default to None) + fold (keyword only, default to zero) + 0x7FFailed to encode latin1 string when unpickling a time object. pickle.load(data, encoding='latin1') is assumed."a time object. "hour (0-23)minute (0-59)second (0-59)microsecond (0-999999)timezone info objectallow_mixedmytzottzmyoffotoffbase_comparecannot compare naive and aware timesmyhhmmothhmmtzoffwhole minute_tzstrReturn formatted timezone offset (+xx:xx) or an empty string.Convert to formal string, for repr()., %d, %d, %d%s.%s(%d, %d%s), tzinfo=%r, fold=1)Return the time formatted according to ISO. + + The full format is 'HH:MM:SS.mmmmmm+zz:zz'. By default, the fractional + part is omitted if self.microsecond == 0. + + The optional argument timespec specifies the number of additional + terms of the time to include. Valid options are 'auto', 'hours', + 'minutes', 'seconds', 'milliseconds' and 'microseconds'. + time_stringConstruct a time from the output of isoformat().Format using strftime(). The date part of the timestamp passed + to underlying strftime should not be used. + Return the timezone offset as timedelta, positive east of UTC + (negative west of UTC).Return the timezone name. + + Note that the name is 100% informational -- there's no requirement that + it mean anything in particular. For example, "GMT", "UTC", "-500", + "-5:00", "EDT", "US/Eastern", "America/New York" are all valid replies. + Return 0 if DST is not in effect, or the DST offset (as timedelta + positive eastward) if DST is in effect. + + This is purely informational; the DST offset has already been added to + the UTC offset returned by utcoffset() if applicable, so there's no + need to consult dst() unless you're interested in displaying the DST + info. + Return a new time with new values for the specified fields.us2us3us1basestatebad tzinfo state arg_time_classdatetime(year, month, day[, hour[, minute[, second[, microsecond[,tzinfo]]]]]) + + The year, month and day arguments are required. tzinfo may be None, or an + instance of a tzinfo subclass. The remaining arguments may be ints. + Failed to encode latin1 string when unpickling a datetime object. pickle.load(data, encoding='latin1') is assumed."a datetime object. "_fromtimestampConstruct a datetime from a POSIX timestamp (like time.time()). + + A timezone info object may be passed in as well. 
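The fragments above cover datetime.fromtimestamp(), utcfromtimestamp(), combine(), and the epoch-based timestamp() machinery. A sketch of the round-trip they implement; not part of the dump, and the instants are illustrative.

# Sketch of the timestamp round-trip: timestamp()/fromtimestamp() are exact
# inverses for timezone-aware datetimes.
from datetime import datetime, timezone, date, time

dt = datetime(2010, 1, 1, 12, 0, tzinfo=timezone.utc)
ts = dt.timestamp()                                   # seconds since the POSIX epoch
print(ts)                                             # 1262347200.0
print(datetime.fromtimestamp(ts, tz=timezone.utc))    # 2010-01-01 12:00:00+00:00

# combine() builds a datetime from separate date and time parts.
print(datetime.combine(date(2010, 1, 1), time(12, 0), tzinfo=timezone.utc))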
+ gmtimemax_fold_secondsprobe1transprobe2Construct a naive UTC datetime from a POSIX timestamp.Construct a datetime from time.time() and optional time zone info.Construct a UTC datetime from time.time().Construct a datetime from a given date and a given time.date argument must be a date instancetime argument must be a time instanceConstruct a datetime from the output of datetime.isoformat().dstrdate_componentstime_components_mktimeReturn integer POSIX timestamp.epochu1t1t2Return POSIX timestamp as float_EPOCHReturn UTC time tuple compatible with time.gmtime().Return the date part.Return the time part, with tzinfo None.Return the time part, with same tzinfo.Return a new datetime with new values for the specified fields._local_timezonetslocaltmtm_gmtoffgmtoffzonetz argument must be an instance of tzinfomyoffset%s %s %2d %02d:%02d:%02d %04dReturn the time formatted according to ISO. + + The full format looks like 'YYYY-MM-DD HH:MM:SS.mmmmmm'. + By default, the fractional part is omitted if self.microsecond == 0. + + If self.tzinfo is not None, the UTC offset is also attached, giving + giving a full format of 'YYYY-MM-DD HH:MM:SS.mmmmmm+HH:MM'. + + Optional argument sep specifies the separator between date and + time, default 'T'. + + The optional argument timespec specifies the number of additional + terms of the time to include. Valid options are 'auto', 'hours', + 'minutes', 'seconds', 'milliseconds' and 'microseconds'. + %04d-%02d-%02d%cConvert to string, for str().string, format -> new datetime parsed from a string (like time.strptime())._strptime_strptime_datetimeReturn the timezone offset as timedelta positive east of UTC (negative west of + UTC).cannot compare naive and aware datetimesAdd a datetime and a timedelta.Subtract two datetimes, or a datetime and a timedelta.secs1secs2cannot mix naive and timezone-aware timefirstday_offset_Omittedoffset must be a timedelta_minoffset_maxoffsetoffset must be a timedelta strictly between -timedelta(hours=24) and timedelta(hours=24)."offset must be a timedelta ""strictly between -timedelta(hours=24) and ""timedelta(hours=24)."pickle supportConvert to formal string, for repr(). + + >>> tz = timezone.utc + >>> repr(tz) + 'datetime.timezone.utc' + >>> tz = timezone(timedelta(hours=-5), 'EST') + >>> repr(tz) + "datetime.timezone(datetime.timedelta(-1, 68400), 'EST')" + datetime.timezone.utc%s.%s(%r)%s.%s(%r, %r)utcoffset() argument must be a datetime instance or None"utcoffset() argument must be a datetime instance"" or None"_name_from_offsettzname() argument must be a datetime instance or None"tzname() argument must be a datetime instance"dst() argument must be a datetime instance or None"dst() argument must be a datetime instance"fromutc: dt.tzinfo is not self"fromutc: dt.tzinfo ""is not self"fromutc() argument must be a datetime instance or None"fromutc() argument must be a datetime instance"'.'# date.max.toordinal()# Utility functions, adapted from Python's Demo/classes/Dates.py, which# also assumes the current Gregorian calendar indefinitely extended in# both directions. Difference: Dates.py calls January 1 of year 0 day# number 1. The code here calls January 1 of year 1 day number 1. This is# to match the definition of the "proleptic Gregorian" calendar in Dershowitz# and Reingold's "Calendrical Calculations", where it's the base calendar# for all computations. 
See the book for algorithms for converting between# proleptic Gregorian ordinals and many other calendar systems.# -1 is a placeholder for indexing purposes.# number of days in 400 years# " " " " 100 "# " " " " 4 "# A 4-year cycle has an extra leap day over what we'd get from pasting# together 4 single years.# Similarly, a 400-year cycle has an extra leap day over what we'd get from# pasting together 4 100-year cycles.# OTOH, a 100-year cycle has one fewer leap day than we'd get from# pasting together 25 4-year cycles.# n is a 1-based index, starting at 1-Jan-1. The pattern of leap years# repeats exactly every 400 years. The basic strategy is to find the# closest 400-year boundary at or before n, then work with the offset# from that boundary to n. Life is much clearer if we subtract 1 from# n first -- then the values of n at 400-year boundaries are exactly# those divisible by _DI400Y:# D M Y n n-1# -- --- ---- ---------- ----------------# 31 Dec -400 -_DI400Y -_DI400Y -1# 1 Jan -399 -_DI400Y +1 -_DI400Y 400-year boundary# ...# 30 Dec 000 -1 -2# 31 Dec 000 0 -1# 1 Jan 001 1 0 400-year boundary# 2 Jan 001 2 1# 3 Jan 001 3 2# 31 Dec 400 _DI400Y _DI400Y -1# 1 Jan 401 _DI400Y +1 _DI400Y 400-year boundary# ..., -399, 1, 401, ...# Now n is the (non-negative) offset, in days, from January 1 of year, to# the desired date. Now compute how many 100-year cycles precede n.# Note that it's possible for n100 to equal 4! In that case 4 full# 100-year cycles precede the desired day, which implies the desired# day is December 31 at the end of a 400-year cycle.# Now compute how many 4-year cycles precede it.# And now how many single years. Again n1 can be 4, and again meaning# that the desired day is December 31 at the end of the 4-year cycle.# Now the year is correct, and n is the offset from January 1. We find# the month via an estimate that's either exact or one too large.# estimate is too large# Now the year and month are correct, and n is the offset from the# start of that month: we're done!# Month and day names. For localized versions, see the calendar module.# Skip trailing microseconds when us==0.# Correctly substitute for %z and %Z escapes in strftime formats.# Don't call utcoffset() or tzname() unless actually needed.# the string to use for %f# the string to use for %z# the string to use for %Z# Scan format for %z and %Z escapes, replacing as needed.# strftime is going to have at this: escape %# Helpers for parsing the result of isoformat()# It is assumed that this function will only be called with a# string of length exactly 10, and (though this is not used) ASCII-only# Parses things of the form HH[:MM[:SS[.fff[fff]]]]# Format supported is HH[:MM[:SS[.fff[fff]]]][+HH:MM[:SS[.ffffff]]]# This is equivalent to re.search('[+-]', tstr), but faster# Valid time zone strings are:# HH:MM len: 5# HH:MM:SS len: 8# HH:MM:SS.ffffff len: 15# Just raise TypeError if the arg isn't None or a string.# name is the offset-producing method, "utcoffset" or "dst".# offset is what it returned.# If offset isn't None or timedelta, raises TypeError.# If offset is None, returns None.# Else offset is checked for being in range.# If it is, its integer value is returned. 
Else ValueError is raised.# Based on the reference implementation for divmod_near# in Objects/longobject.c.# round up if either r / b > 0.5, or r / b == 0.5 and q is odd.# The expression r / b > 0.5 is equivalent to 2 * r > b if b is# positive, 2 * r < b if b negative.# Doing this efficiently and accurately in C is going to be difficult# and error-prone, due to ubiquitous overflow possibilities, and that# C double doesn't have enough bits of precision to represent# microseconds over 10K years faithfully. The code here tries to make# explicit where go-fast assumptions can be relied on, in order to# guide the C implementation; it's way more convoluted than speed-# ignoring auto-overflow-to-long idiomatic Python could be.# XXX Check that all inputs are ints or floats.# Final values, all integer.# s and us fit in 32-bit signed ints; d isn't bounded.# Normalize everything to days, seconds, microseconds.# Get rid of all fractions, and normalize s and us.# Take a deep breath .# can't overflow# days isn't referenced again before redefinition# daysecondsfrac isn't referenced again# seconds isn't referenced again before redefinition# exact value not critical# secondsfrac isn't referenced again# Just a little bit of carrying possible for microseconds and seconds.# Read-only field accessors# for CPython compatibility, we cannot use# our __class__ here, but need a real timedelta# Comparisons of timedelta objects with other.# Pickle support.# Pickle support# More informative error message.# Additional constructors# Year is bounded this way because 9999-12-31 is (9999, 52, 5)# ISO years have 53 weeks in them on years starting with a# Thursday and leap years starting on a Wednesday# Now compute the offset from (Y, 1, 1) in days:# Calculate the ordinal day for monday, week 1# Conversions to string# XXX These shouldn't depend on time.localtime(), because that# clips the usable dates to [1970 .. 2038). At least ctime() is# easily done without using strftime() -- that's better too because# strftime("%c", ...) is locale specific.# Standard conversions, __eq__, __le__, __lt__, __ge__, __gt__,# __hash__ (and helpers)# Comparisons of date objects with other.# Computations# Day-of-the-week and week-of-the-year, according to ISO# 1-Jan-0001 is a Monday# Internally, week and day have origin 0# so functions w/ args named "date" can get at the class# See the long comment block at the end of this file for an# explanation of this algorithm.# Standard conversions, __hash__ (and helpers)# Comparisons of time objects with other.# arbitrary non-zero value# zero or None# Conversion to string# The year must be >= 1000 else Python's strftime implementation# can raise a bogus exception.# Timezone functions# so functions w/ args named "time" can get at the class# clamp out leap seconds if the platform has them# As of version 2015f max fold in IANA database is# 23 hours at 1969-09-30 13:00:00 in Kwajalein.# Let's probe 24 hours in the past to detect a transition:# On Windows localtime_s throws an OSError for negative values,# thus we can't perform fold detection for values of time less# than the max time fold. See comments in _datetimemodule's# version of this method for more details.# Split this at the separator# Our goal is to solve t = local(u) for u.# We found one solution, but it may not be the one we need.# Look for an earlier solution (if `fold` is 0), or a# later one (if `fold` is 1).# We have found both offsets a and b, but neither t - a nor t - b is# a solution. 
This means t is in the gap.# Extract TZ data# Convert self to UTC, and attach the new time zone object.# Convert from UTC to tz's local time.# Ways to produce a string.# These are never zero# Comparisons of datetime objects with other.# Assume that allow_mixed means that we are called from __eq__# XXX What follows could be done more efficiently...# this will take offsets into account# Helper to calculate the day number of the Monday starting week 1# XXX This could be done more efficiently# See weekday() above# Sentinel value to disallow None# bpo-37642: These attributes are rounded to the nearest minute for backwards# compatibility, even though the constructor will accept a wider range of# values. This may change in the future.# Some time zone algebra. For a datetime x, let# x.n = x stripped of its timezone -- its naive time.# x.o = x.utcoffset(), and assuming that doesn't raise an exception or# return None# x.d = x.dst(), and assuming that doesn't raise an exception or# x.s = x's standard offset, x.o - x.d# Now some derived rules, where k is a duration (timedelta).# 1. x.o = x.s + x.d# This follows from the definition of x.s.# 2. If x and y have the same tzinfo member, x.s = y.s.# This is actually a requirement, an assumption we need to make about# sane tzinfo classes.# 3. The naive UTC time corresponding to x is x.n - x.o.# This is again a requirement for a sane tzinfo class.# 4. (x+k).s = x.s# This follows from #2, and that datetime.timetz+timedelta preserves tzinfo.# 5. (x+k).n = x.n + k# Again follows from how arithmetic is defined.# Now we can explain tz.fromutc(x). Let's assume it's an interesting case# (meaning that the various tzinfo methods exist, and don't blow up or return# None when called).# The function wants to return a datetime y with timezone tz, equivalent to x.# x is already in UTC.# By #3, we want# y.n - y.o = x.n [1]# The algorithm starts by attaching tz to x.n, and calling that y. So# x.n = y.n at the start. Then it wants to add a duration k to y, so that [1]# becomes true; in effect, we want to solve [2] for k:# (y+k).n - (y+k).o = x.n [2]# By #1, this is the same as# (y+k).n - ((y+k).s + (y+k).d) = x.n [3]# By #5, (y+k).n = y.n + k, which equals x.n + k because x.n=y.n at the start.# Substituting that into [3],# x.n + k - (y+k).s - (y+k).d = x.n; the x.n terms cancel, leaving# k - (y+k).s - (y+k).d = 0; rearranging,# k = (y+k).s - (y+k).d; by #4, (y+k).s == y.s, so# k = y.s - (y+k).d# On the RHS, (y+k).d can't be computed directly, but y.s can be, and we# approximate k by ignoring the (y+k).d term at first. Note that k can't be# very large, since all offset-returning methods return a duration of magnitude# less than 24 hours. For that reason, if y is firmly in std time, (y+k).d must# be 0, so ignoring it has no consequence then.# In any case, the new value is# z = y + y.s [4]# It's helpful to step back at look at [4] from a higher level: it's simply# mapping from UTC to tz's standard time.# At this point, if# z.n - z.o = x.n [5]# we have an equivalent time, and are almost done. The insecurity here is# at the start of daylight time. Picture US Eastern for concreteness. The wall# time jumps from 1:59 to 3:00, and wall hours of the form 2:MM don't make good# sense then. The docs ask that an Eastern tzinfo class consider such a time to# be EDT (because it's "after 2"), which is a redundant spelling of 1:MM EST# on the day DST starts. 
We want to return the 1:MM EST spelling because that's# the only spelling that makes sense on the local wall clock.# In fact, if [5] holds at this point, we do have the standard-time spelling,# but that takes a bit of proof. We first prove a stronger result. What's the# difference between the LHS and RHS of [5]? Let# diff = x.n - (z.n - z.o) [6]# Now# z.n = by [4]# (y + y.s).n = by #5# y.n + y.s = since y.n = x.n# x.n + y.s = since z and y are have the same tzinfo member,# y.s = z.s by #2# x.n + z.s# Plugging that back into [6] gives# diff =# x.n - ((x.n + z.s) - z.o) = expanding# x.n - x.n - z.s + z.o = cancelling# - z.s + z.o = by #2# z.d# So diff = z.d.# If [5] is true now, diff = 0, so z.d = 0 too, and we have the standard-time# spelling we wanted in the endcase described above. We're done. Contrarily,# if z.d = 0, then we have a UTC equivalent, and are also done.# If [5] is not true now, diff = z.d != 0, and z.d is the offset we need to# add to z (in effect, z is in tz's standard time, and we need to shift the# local clock into tz's daylight time).# Let# z' = z + z.d = z + diff [7]# and we can again ask whether# z'.n - z'.o = x.n [8]# If so, we're done. If not, the tzinfo class is insane, according to the# assumptions we've made. This also requires a bit of proof. As before, let's# compute the difference between the LHS and RHS of [8] (and skipping some of# the justifications for the kinds of substitutions we've done several times# already):# diff' = x.n - (z'.n - z'.o) = replacing z'.n via [7]# x.n - (z.n + diff - z'.o) = replacing diff via [6]# x.n - (z.n + x.n - (z.n - z.o) - z'.o) =# x.n - z.n - x.n + z.n - z.o + z'.o = cancel x.n# - z.n + z.n - z.o + z'.o = cancel z.n# - z.o + z'.o = #1 twice# -z.s - z.d + z'.s + z'.d = z and z' have same tzinfo# z'.d - z.d# So z' is UTC-equivalent to x iff z'.d = z.d at this point. If they are equal,# we've found the UTC-equivalent so are done. In fact, we stop with [7] and# return z', not bothering to compute z'.d.# How could z.d and z'd differ? z' = z + z.d [7], so merely moving z' by# a dst() offset, and starting *from* a time already in DST (we know z.d != 0),# would have to change the result dst() returns: we start in DST, and moving# a little further into it takes us out of DST.# There isn't a sane case where this can happen. The closest it gets is at# the end of DST, where there's an hour in UTC with no spelling in a hybrid# tzinfo class. In US Eastern, that's 5:MM UTC = 0:MM EST = 1:MM EDT. During# that hour, on an Eastern clock 1:MM is taken as being in standard time (6:MM# UTC) because the docs insist on that, but 0:MM is taken as being in daylight# time (4:MM UTC). There is no local time mapping to 5:MM UTC. The local# clock jumps from 1:59 back to 1:00 again, and repeats the 1:MM hour in# standard time. Since that's what the local clock *does*, we want to map both# UTC hours 5:MM and 6:MM to 1:MM Eastern. The result is ambiguous# in local time, but so it goes -- it's the way the local clock works.# When x = 5:MM UTC is the input to this algorithm, x.o=0, y.o=-5 and y.d=0,# so z=0:MM. 
z.d=60 (minutes) then, so [5] doesn't hold and we keep going.# z' = z + z.d = 1:MM then, and z'.d=0, and z'.d - z.d = -60 != 0 so [8]# (correctly) concludes that z' is not UTC-equivalent to x.# Because we know z.d said z was in daylight time (else [5] would have held and# we would have stopped then), and we know z.d != z'.d (else [8] would have held# and we have stopped then), and there are only 2 possible values dst() can# return in Eastern, it follows that z'.d must be 0 (which it is in the example,# but the reasoning doesn't depend on the example -- it depends on there being# two possible dst() outcomes, one zero and the other non-zero). Therefore# z' must be in standard time, and is the spelling we want in this case.# Note again that z' is not UTC-equivalent as far as the hybrid tzinfo class is# concerned (because it takes z' as being in standard time rather than the# daylight time we intend here), but returning it gives the real-life "local# clock repeats an hour" behavior when mapping the "unspellable" UTC hour into# tz.# When the input is 6:MM, z=1:MM and z.d=0, and we stop at once, again with# the 1:MM standard time spelling we want.# So how can this break? One of the assumptions must be violated. Two# possibilities:# 1) [2] effectively says that y.s is invariant across all y belong to a given# time zone. This isn't true if, for political reasons or continental drift,# a region decides to change its base offset from UTC.# 2) There may be versions of "double daylight" time where the tail end of# the analysis gives up a step too early. I haven't thought about that# enough to say.# In any case, it's clear that the default fromutc() is strong enough to handle# "almost all" time zones: so long as the standard offset is invariant, it# doesn't matter if daylight time transition points change from year to year, or# if daylight time is skipped in some years; it doesn't matter how large or# small dst() may get within its bounds; and it doesn't even matter if some# perverse time zone returns a negative dst()). So a breaking case must be# pretty bizarre, and a tzinfo subclass can override fromutc() if it is.# Clean up unused names# XXX Since import * above excludes names that start with _,# docstring does not get overwritten. In the future, it may be# appropriate to maintain a single module level docstring and# remove the following line.b'Concrete date/time and related types. + +See http://www.iana.org/time-zones/repository/tz-link.html for +time zone and DST data sources. +'u'Concrete date/time and related types. + +See http://www.iana.org/time-zones/repository/tz-link.html for +time zone and DST data sources. 
+'b'year -> 1 if leap year, else 0.'u'year -> 1 if leap year, else 0.'b'year -> number of days before January 1st of year.'u'year -> number of days before January 1st of year.'b'year, month -> number of days in that month in that year.'u'year, month -> number of days in that month in that year.'b'year, month -> number of days in year preceding first day of month.'u'year, month -> number of days in year preceding first day of month.'b'month must be in 1..12'u'month must be in 1..12'b'year, month, day -> ordinal, considering 01-Jan-0001 as day 1.'u'year, month, day -> ordinal, considering 01-Jan-0001 as day 1.'b'day must be in 1..%d'u'day must be in 1..%d'b'ordinal -> (year, month, day), considering 01-Jan-0001 as day 1.'u'ordinal -> (year, month, day), considering 01-Jan-0001 as day 1.'b'Jan'u'Jan'b'Feb'u'Feb'b'Mar'u'Mar'b'Apr'u'Apr'b'May'u'May'b'Jun'u'Jun'b'Jul'u'Jul'b'Aug'u'Aug'b'Sep'u'Sep'b'Oct'u'Oct'b'Nov'u'Nov'b'Dec'u'Dec'b'Mon'u'Mon'b'Tue'u'Tue'b'Wed'u'Wed'b'Thu'u'Thu'b'Fri'u'Fri'b'Sat'u'Sat'b'Sun'u'Sun'b'auto'u'auto'b'{:02d}'u'{:02d}'b'hours'u'hours'b'{:02d}:{:02d}'u'{:02d}:{:02d}'b'minutes'u'minutes'b'{:02d}:{:02d}:{:02d}'u'{:02d}:{:02d}:{:02d}'b'seconds'u'seconds'b'{:02d}:{:02d}:{:02d}.{:03d}'u'{:02d}:{:02d}:{:02d}.{:03d}'b'milliseconds'u'milliseconds'b'{:02d}:{:02d}:{:02d}.{:06d}'u'{:02d}:{:02d}:{:02d}.{:06d}'b'microseconds'u'microseconds'b'Unknown timespec value'u'Unknown timespec value'b'%s%02d:%02d'u'%s%02d:%02d'b':%02d'u':%02d'b'.%06d'u'.%06d'b'%06d'u'%06d'b'microsecond'u'microsecond'b'utcoffset'u'utcoffset'b'%c%02d%02d%02d.%06d'u'%c%02d%02d%02d.%06d'b'%c%02d%02d%02d'u'%c%02d%02d%02d'b'%c%02d%02d'u'%c%02d%02d'b'tzname'u'tzname'b'%%'u'%%'b'Invalid date separator: %s'u'Invalid date separator: %s'b'Invalid date separator'u'Invalid date separator'b'Incomplete time component'u'Incomplete time component'b'Invalid time separator: %c'u'Invalid time separator: %c'b'Invalid microsecond component'u'Invalid microsecond component'b'Isoformat time too short'u'Isoformat time too short'b'Malformed time zone string'u'Malformed time zone string'b'tzinfo.tzname() must return None or string, not '%s''u'tzinfo.tzname() must return None or string, not '%s''b'dst'u'dst'b'tzinfo.%s() must return None or timedelta, not '%s''u'tzinfo.%s() must return None or timedelta, not '%s''b'%s()=%s, must be strictly between -timedelta(hours=24) and timedelta(hours=24)'u'%s()=%s, must be strictly between -timedelta(hours=24) and timedelta(hours=24)'b'integer argument expected, got float'u'integer argument expected, got float'b'__index__ returned non-int (type %s)'u'__index__ returned non-int (type %s)'b'__int__ returned non-int (type %s)'u'__int__ returned non-int (type %s)'b'an integer is required (got type %s)'u'an integer is required (got type %s)'b'year must be in %d..%d'u'year must be in %d..%d'b'hour must be in 0..23'u'hour must be in 0..23'b'minute must be in 0..59'u'minute must be in 0..59'b'second must be in 0..59'u'second must be in 0..59'b'microsecond must be in 0..999999'u'microsecond must be in 0..999999'b'fold must be either 0 or 1'u'fold must be either 0 or 1'b'tzinfo argument must be None or of a tzinfo subclass'u'tzinfo argument must be None or of a tzinfo subclass'b'can't compare '%s' to '%s''u'can't compare '%s' to '%s''b'divide a by b and round result to the nearest integer + + When the ratio is exactly half-way between two integers, + the even integer is returned. 
+ 'u'divide a by b and round result to the nearest integer + + When the ratio is exactly half-way between two integers, + the even integer is returned. + 'b'Represent the difference between two datetime objects. + + Supported operators: + + - add, subtract timedelta + - unary plus, minus, abs + - compare to timedelta + - multiply, divide by int + + In addition, datetime supports subtraction of two datetime objects + returning a timedelta, and addition or subtraction of a datetime + and a timedelta giving a datetime. + + Representation: (days, seconds, microseconds). Why? Because I + felt like it. + 'u'Represent the difference between two datetime objects. + + Supported operators: + + - add, subtract timedelta + - unary plus, minus, abs + - compare to timedelta + - multiply, divide by int + + In addition, datetime supports subtraction of two datetime objects + returning a timedelta, and addition or subtraction of a datetime + and a timedelta giving a datetime. + + Representation: (days, seconds, microseconds). Why? Because I + felt like it. + 'b'_days'u'_days'b'_seconds'u'_seconds'b'_microseconds'u'_microseconds'b'_hashcode'u'_hashcode'b'timedelta # of days is too large: %d'u'timedelta # of days is too large: %d'b'days=%d'u'days=%d'b'seconds=%d'u'seconds=%d'b'microseconds=%d'u'microseconds=%d'b'%s.%s(%s)'u'%s.%s(%s)'b'%d:%02d:%02d'u'%d:%02d:%02d'b'%d day%s, 'u'%d day%s, 'b'Total seconds in the duration.'u'Total seconds in the duration.'b'days'u'days'b'Concrete date type. + + Constructors: + + __new__() + fromtimestamp() + today() + fromordinal() + + Operators: + + __repr__, __str__ + __eq__, __le__, __lt__, __ge__, __gt__, __hash__ + __add__, __radd__, __sub__ (add/radd only with timedelta arg) + + Methods: + + timetuple() + toordinal() + weekday() + isoweekday(), isocalendar(), isoformat() + ctime() + strftime() + + Properties (readonly): + year, month, day + 'u'Concrete date type. + + Constructors: + + __new__() + fromtimestamp() + today() + fromordinal() + + Operators: + + __repr__, __str__ + __eq__, __le__, __lt__, __ge__, __gt__, __hash__ + __add__, __radd__, __sub__ (add/radd only with timedelta arg) + + Methods: + + timetuple() + toordinal() + weekday() + isoweekday(), isocalendar(), isoformat() + ctime() + strftime() + + Properties (readonly): + year, month, day + 'b'_year'u'_year'b'_month'u'_month'b'_day'u'_day'b'Constructor. + + Arguments: + + year, month, day (required, base 1) + 'u'Constructor. + + Arguments: + + year, month, day (required, base 1) + 'b'Failed to encode latin1 string when unpickling a date object. pickle.load(data, encoding='latin1') is assumed.'u'Failed to encode latin1 string when unpickling a date object. pickle.load(data, encoding='latin1') is assumed.'b'Construct a date from a POSIX timestamp (like time.time()).'u'Construct a date from a POSIX timestamp (like time.time()).'b'Construct a date from time.time().'u'Construct a date from time.time().'b'Construct a date from a proleptic Gregorian ordinal. + + January 1 of year 1 is day 1. Only the year, month and day are + non-zero in the result. + 'u'Construct a date from a proleptic Gregorian ordinal. + + January 1 of year 1 is day 1. Only the year, month and day are + non-zero in the result. + 'b'Construct a date from the output of date.isoformat().'u'Construct a date from the output of date.isoformat().'b'fromisoformat: argument must be str'u'fromisoformat: argument must be str'b'Invalid isoformat string: 'u'Invalid isoformat string: 'b'Construct a date from the ISO year, week number and weekday. 
+ + This is the inverse of the date.isocalendar() function'u'Construct a date from the ISO year, week number and weekday. + + This is the inverse of the date.isocalendar() function'b'Year is out of range: 'u'Year is out of range: 'b'Invalid week: 'u'Invalid week: 'b'Invalid weekday: 'u'Invalid weekday: 'b' (range is [1, 7])'u' (range is [1, 7])'b'Convert to formal string, for repr(). + + >>> dt = datetime(2010, 1, 1) + >>> repr(dt) + 'datetime.datetime(2010, 1, 1, 0, 0)' + + >>> dt = datetime(2010, 1, 1, tzinfo=timezone.utc) + >>> repr(dt) + 'datetime.datetime(2010, 1, 1, 0, 0, tzinfo=datetime.timezone.utc)' + 'u'Convert to formal string, for repr(). + + >>> dt = datetime(2010, 1, 1) + >>> repr(dt) + 'datetime.datetime(2010, 1, 1, 0, 0)' + + >>> dt = datetime(2010, 1, 1, tzinfo=timezone.utc) + >>> repr(dt) + 'datetime.datetime(2010, 1, 1, 0, 0, tzinfo=datetime.timezone.utc)' + 'b'%s.%s(%d, %d, %d)'u'%s.%s(%d, %d, %d)'b'Return ctime() style string.'u'Return ctime() style string.'b'%s %s %2d 00:00:00 %04d'u'%s %s %2d 00:00:00 %04d'b'Format using strftime().'u'Format using strftime().'b'must be str, not %s'u'must be str, not %s'b'Return the date formatted according to ISO. + + This is 'YYYY-MM-DD'. + + References: + - http://www.w3.org/TR/NOTE-datetime + - http://www.cl.cam.ac.uk/~mgk25/iso-time.html + 'u'Return the date formatted according to ISO. + + This is 'YYYY-MM-DD'. + + References: + - http://www.w3.org/TR/NOTE-datetime + - http://www.cl.cam.ac.uk/~mgk25/iso-time.html + 'b'%04d-%02d-%02d'u'%04d-%02d-%02d'b'year (1-9999)'u'year (1-9999)'b'month (1-12)'u'month (1-12)'b'day (1-31)'u'day (1-31)'b'Return local time tuple compatible with time.localtime().'u'Return local time tuple compatible with time.localtime().'b'Return proleptic Gregorian ordinal for the year, month and day. + + January 1 of year 1 is day 1. Only the year, month and day values + contribute to the result. + 'u'Return proleptic Gregorian ordinal for the year, month and day. + + January 1 of year 1 is day 1. Only the year, month and day values + contribute to the result. + 'b'Return a new date with new values for the specified fields.'u'Return a new date with new values for the specified fields.'b'Hash.'u'Hash.'b'Add a date to a timedelta.'u'Add a date to a timedelta.'b'result out of range'u'result out of range'b'Subtract two dates, or a date and a timedelta.'u'Subtract two dates, or a date and a timedelta.'b'Return day of the week, where Monday == 0 ... Sunday == 6.'u'Return day of the week, where Monday == 0 ... Sunday == 6.'b'Return day of the week, where Monday == 1 ... Sunday == 7.'u'Return day of the week, where Monday == 1 ... Sunday == 7.'b'Return a 3-tuple containing ISO year, week number, and weekday. + + The first ISO week of the year is the (Mon-Sun) week + containing the year's first Thursday; everything else derives + from that. + + The first week is 1; Monday is 1 ... Sunday is 7. + + ISO calendar algorithm taken from + http://www.phys.uu.nl/~vgent/calendar/isocalendar.htm + (used with permission) + 'u'Return a 3-tuple containing ISO year, week number, and weekday. + + The first ISO week of the year is the (Mon-Sun) week + containing the year's first Thursday; everything else derives + from that. + + The first week is 1; Monday is 1 ... Sunday is 7. + + ISO calendar algorithm taken from + http://www.phys.uu.nl/~vgent/calendar/isocalendar.htm + (used with permission) + 'b'Abstract base class for time zone info classes. + + Subclasses must override the name(), utcoffset() and dst() methods. 
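A minimal sketch of such a subclass, assuming a constant UTC offset (FixedOffset is an illustrative name; the concrete timezone class already covers this case):

    from datetime import datetime, timedelta, tzinfo

    class FixedOffset(tzinfo):
        # Toy tzinfo subclass: constant offset, no daylight saving time.
        def __init__(self, hours, name):
            self._offset = timedelta(hours=hours)
            self._name = name
        def utcoffset(self, dt):
            return self._offset
        def tzname(self, dt):
            return self._name
        def dst(self, dt):
            return timedelta(0)

    dt = datetime(2010, 1, 1, 12, 0, tzinfo=FixedOffset(-5, "EST"))
    print(dt.isoformat())        # 2010-01-01T12:00:00-05:00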
+ 'u'Abstract base class for time zone info classes. + + Subclasses must override the name(), utcoffset() and dst() methods. + 'b'datetime -> string name of time zone.'u'datetime -> string name of time zone.'b'tzinfo subclass must override tzname()'u'tzinfo subclass must override tzname()'b'datetime -> timedelta, positive for east of UTC, negative for west of UTC'u'datetime -> timedelta, positive for east of UTC, negative for west of UTC'b'tzinfo subclass must override utcoffset()'u'tzinfo subclass must override utcoffset()'b'datetime -> DST offset as timedelta, positive for east of UTC. + + Return 0 if DST not in effect. utcoffset() must include the DST + offset. + 'u'datetime -> DST offset as timedelta, positive for east of UTC. + + Return 0 if DST not in effect. utcoffset() must include the DST + offset. + 'b'tzinfo subclass must override dst()'u'tzinfo subclass must override dst()'b'datetime in UTC -> datetime in local time.'u'datetime in UTC -> datetime in local time.'b'fromutc() requires a datetime argument'u'fromutc() requires a datetime argument'b'dt.tzinfo is not self'u'dt.tzinfo is not self'b'fromutc() requires a non-None utcoffset() result'u'fromutc() requires a non-None utcoffset() result'b'fromutc() requires a non-None dst() result'u'fromutc() requires a non-None dst() result'b'fromutc(): dt.dst gave inconsistent results; cannot convert'u'fromutc(): dt.dst gave inconsistent results; cannot convert'b'__getinitargs__'u'__getinitargs__'b'__getstate__'u'__getstate__'b'Time with time zone. + + Constructors: + + __new__() + + Operators: + + __repr__, __str__ + __eq__, __le__, __lt__, __ge__, __gt__, __hash__ + + Methods: + + strftime() + isoformat() + utcoffset() + tzname() + dst() + + Properties (readonly): + hour, minute, second, microsecond, tzinfo, fold + 'u'Time with time zone. + + Constructors: + + __new__() + + Operators: + + __repr__, __str__ + __eq__, __le__, __lt__, __ge__, __gt__, __hash__ + + Methods: + + strftime() + isoformat() + utcoffset() + tzname() + dst() + + Properties (readonly): + hour, minute, second, microsecond, tzinfo, fold + 'b'_hour'u'_hour'b'_minute'u'_minute'b'_second'u'_second'b'_microsecond'u'_microsecond'b'_tzinfo'u'_tzinfo'b'_fold'u'_fold'b'Constructor. + + Arguments: + + hour, minute (required) + second, microsecond (default to zero) + tzinfo (default to None) + fold (keyword only, default to zero) + 'u'Constructor. + + Arguments: + + hour, minute (required) + second, microsecond (default to zero) + tzinfo (default to None) + fold (keyword only, default to zero) + 'b'Failed to encode latin1 string when unpickling a time object. pickle.load(data, encoding='latin1') is assumed.'u'Failed to encode latin1 string when unpickling a time object. pickle.load(data, encoding='latin1') is assumed.'b'hour (0-23)'u'hour (0-23)'b'minute (0-59)'u'minute (0-59)'b'second (0-59)'u'second (0-59)'b'microsecond (0-999999)'u'microsecond (0-999999)'b'timezone info object'u'timezone info object'b'cannot compare naive and aware times'u'cannot compare naive and aware times'b'whole minute'u'whole minute'b'Return formatted timezone offset (+xx:xx) or an empty string.'u'Return formatted timezone offset (+xx:xx) or an empty string.'b'Convert to formal string, for repr().'u'Convert to formal string, for repr().'b', %d, %d'u', %d, %d'b', %d'u', %d'b'%s.%s(%d, %d%s)'u'%s.%s(%d, %d%s)'b', tzinfo=%r'u', tzinfo=%r'b', fold=1)'u', fold=1)'b'Return the time formatted according to ISO. + + The full format is 'HH:MM:SS.mmmmmm+zz:zz'. 
By default, the fractional + part is omitted if self.microsecond == 0. + + The optional argument timespec specifies the number of additional + terms of the time to include. Valid options are 'auto', 'hours', + 'minutes', 'seconds', 'milliseconds' and 'microseconds'. + 'u'Return the time formatted according to ISO. + + The full format is 'HH:MM:SS.mmmmmm+zz:zz'. By default, the fractional + part is omitted if self.microsecond == 0. + + The optional argument timespec specifies the number of additional + terms of the time to include. Valid options are 'auto', 'hours', + 'minutes', 'seconds', 'milliseconds' and 'microseconds'. + 'b'Construct a time from the output of isoformat().'u'Construct a time from the output of isoformat().'b'Format using strftime(). The date part of the timestamp passed + to underlying strftime should not be used. + 'u'Format using strftime(). The date part of the timestamp passed + to underlying strftime should not be used. + 'b'Return the timezone offset as timedelta, positive east of UTC + (negative west of UTC).'u'Return the timezone offset as timedelta, positive east of UTC + (negative west of UTC).'b'Return the timezone name. + + Note that the name is 100% informational -- there's no requirement that + it mean anything in particular. For example, "GMT", "UTC", "-500", + "-5:00", "EDT", "US/Eastern", "America/New York" are all valid replies. + 'u'Return the timezone name. + + Note that the name is 100% informational -- there's no requirement that + it mean anything in particular. For example, "GMT", "UTC", "-500", + "-5:00", "EDT", "US/Eastern", "America/New York" are all valid replies. + 'b'Return 0 if DST is not in effect, or the DST offset (as timedelta + positive eastward) if DST is in effect. + + This is purely informational; the DST offset has already been added to + the UTC offset returned by utcoffset() if applicable, so there's no + need to consult dst() unless you're interested in displaying the DST + info. + 'u'Return 0 if DST is not in effect, or the DST offset (as timedelta + positive eastward) if DST is in effect. + + This is purely informational; the DST offset has already been added to + the UTC offset returned by utcoffset() if applicable, so there's no + need to consult dst() unless you're interested in displaying the DST + info. + 'b'Return a new time with new values for the specified fields.'u'Return a new time with new values for the specified fields.'b'bad tzinfo state arg'u'bad tzinfo state arg'b'datetime(year, month, day[, hour[, minute[, second[, microsecond[,tzinfo]]]]]) + + The year, month and day arguments are required. tzinfo may be None, or an + instance of a tzinfo subclass. The remaining arguments may be ints. + 'u'datetime(year, month, day[, hour[, minute[, second[, microsecond[,tzinfo]]]]]) + + The year, month and day arguments are required. tzinfo may be None, or an + instance of a tzinfo subclass. The remaining arguments may be ints. + 'b'Failed to encode latin1 string when unpickling a datetime object. pickle.load(data, encoding='latin1') is assumed.'u'Failed to encode latin1 string when unpickling a datetime object. pickle.load(data, encoding='latin1') is assumed.'b'Construct a datetime from a POSIX timestamp (like time.time()). + + A timezone info object may be passed in as well. + 'u'Construct a datetime from a POSIX timestamp (like time.time()). + + A timezone info object may be passed in as well. 
+ 'b'Construct a naive UTC datetime from a POSIX timestamp.'u'Construct a naive UTC datetime from a POSIX timestamp.'b'Construct a datetime from time.time() and optional time zone info.'u'Construct a datetime from time.time() and optional time zone info.'b'Construct a UTC datetime from time.time().'u'Construct a UTC datetime from time.time().'b'Construct a datetime from a given date and a given time.'u'Construct a datetime from a given date and a given time.'b'date argument must be a date instance'u'date argument must be a date instance'b'time argument must be a time instance'u'time argument must be a time instance'b'Construct a datetime from the output of datetime.isoformat().'u'Construct a datetime from the output of datetime.isoformat().'b'Return integer POSIX timestamp.'u'Return integer POSIX timestamp.'b'Return POSIX timestamp as float'u'Return POSIX timestamp as float'b'Return UTC time tuple compatible with time.gmtime().'u'Return UTC time tuple compatible with time.gmtime().'b'Return the date part.'u'Return the date part.'b'Return the time part, with tzinfo None.'u'Return the time part, with tzinfo None.'b'Return the time part, with same tzinfo.'u'Return the time part, with same tzinfo.'b'Return a new datetime with new values for the specified fields.'u'Return a new datetime with new values for the specified fields.'b'tz argument must be an instance of tzinfo'u'tz argument must be an instance of tzinfo'b'%s %s %2d %02d:%02d:%02d %04d'u'%s %s %2d %02d:%02d:%02d %04d'b'T'u'T'b'Return the time formatted according to ISO. + + The full format looks like 'YYYY-MM-DD HH:MM:SS.mmmmmm'. + By default, the fractional part is omitted if self.microsecond == 0. + + If self.tzinfo is not None, the UTC offset is also attached, giving + giving a full format of 'YYYY-MM-DD HH:MM:SS.mmmmmm+HH:MM'. + + Optional argument sep specifies the separator between date and + time, default 'T'. + + The optional argument timespec specifies the number of additional + terms of the time to include. Valid options are 'auto', 'hours', + 'minutes', 'seconds', 'milliseconds' and 'microseconds'. + 'u'Return the time formatted according to ISO. + + The full format looks like 'YYYY-MM-DD HH:MM:SS.mmmmmm'. + By default, the fractional part is omitted if self.microsecond == 0. + + If self.tzinfo is not None, the UTC offset is also attached, giving + giving a full format of 'YYYY-MM-DD HH:MM:SS.mmmmmm+HH:MM'. + + Optional argument sep specifies the separator between date and + time, default 'T'. + + The optional argument timespec specifies the number of additional + terms of the time to include. Valid options are 'auto', 'hours', + 'minutes', 'seconds', 'milliseconds' and 'microseconds'. 
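Usage of the sep and timespec arguments described here, shown with documented behaviour only:

    >>> from datetime import datetime
    >>> dt = datetime(2019, 5, 18, 15, 17, 8, 132263)
    >>> dt.isoformat(sep=' ', timespec='minutes')
    '2019-05-18 15:17'
    >>> dt.isoformat(timespec='milliseconds')
    '2019-05-18T15:17:08.132'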
+ 'b'%04d-%02d-%02d%c'u'%04d-%02d-%02d%c'b'Convert to string, for str().'u'Convert to string, for str().'b'string, format -> new datetime parsed from a string (like time.strptime()).'u'string, format -> new datetime parsed from a string (like time.strptime()).'b'Return the timezone offset as timedelta positive east of UTC (negative west of + UTC).'u'Return the timezone offset as timedelta positive east of UTC (negative west of + UTC).'b'cannot compare naive and aware datetimes'u'cannot compare naive and aware datetimes'b'Add a datetime and a timedelta.'u'Add a datetime and a timedelta.'b'Subtract two datetimes, or a datetime and a timedelta.'u'Subtract two datetimes, or a datetime and a timedelta.'b'cannot mix naive and timezone-aware time'u'cannot mix naive and timezone-aware time'b'_offset'u'_offset'b'_name'u'_name'b'offset must be a timedelta'u'offset must be a timedelta'b'offset must be a timedelta strictly between -timedelta(hours=24) and timedelta(hours=24).'u'offset must be a timedelta strictly between -timedelta(hours=24) and timedelta(hours=24).'b'pickle support'u'pickle support'b'Convert to formal string, for repr(). + + >>> tz = timezone.utc + >>> repr(tz) + 'datetime.timezone.utc' + >>> tz = timezone(timedelta(hours=-5), 'EST') + >>> repr(tz) + "datetime.timezone(datetime.timedelta(-1, 68400), 'EST')" + 'u'Convert to formal string, for repr(). + + >>> tz = timezone.utc + >>> repr(tz) + 'datetime.timezone.utc' + >>> tz = timezone(timedelta(hours=-5), 'EST') + >>> repr(tz) + "datetime.timezone(datetime.timedelta(-1, 68400), 'EST')" + 'b'datetime.timezone.utc'u'datetime.timezone.utc'b'%s.%s(%r)'u'%s.%s(%r)'b'%s.%s(%r, %r)'u'%s.%s(%r, %r)'b'utcoffset() argument must be a datetime instance or None'u'utcoffset() argument must be a datetime instance or None'b'tzname() argument must be a datetime instance or None'u'tzname() argument must be a datetime instance or None'b'dst() argument must be a datetime instance or None'u'dst() argument must be a datetime instance or None'b'fromutc: dt.tzinfo is not self'u'fromutc: dt.tzinfo is not self'b'fromutc() argument must be a datetime instance or None'u'fromutc() argument must be a datetime instance or None'u'datetime'DISTUTILS_DEBUG# If DISTUTILS_DEBUG is anything other than the empty string, we run in# debug mode.b'DISTUTILS_DEBUG'u'DISTUTILS_DEBUG'u'distutils.debug'_decimal_pydecimaldistutils.dep_util + +Utility functions for simple, timestamp-based dependency of files +and groups of files; also, function based entirely on such +timestamp dependency analysis.DistutilsFileErrorReturn true if 'source' exists and is more recently modified than + 'target', or if 'source' exists and 'target' doesn't. Return false if + both exist and 'target' is the same age or younger than 'source'. + Raise DistutilsFileError if 'source' does not exist. + file '%s' does not existmtime1mtime2targetsWalk two filename lists in parallel, testing if each source is newer + than its corresponding target. Return a pair of lists (sources, + targets) where source is newer than target, according to the semantics + of 'newer()'. + 'sources' and 'targets' must be same lengthn_sourcesn_targetsReturn true if 'target' is out-of-date with respect to any file + listed in 'sources'. In other words, if 'target' exists and is newer + than every file in 'sources', return false; otherwise return true. 
+ 'missing' controls what we do when a source file is missing; the + default ("error") is to blow up with an OSError from inside 'stat()'; + if it is "ignore", we silently drop any missing source files; if it is + "newer", any missing source files make us assume that 'target' is + out-of-date (this is handy in "dry-run" mode: it'll make you pretend to + carry out commands that wouldn't work because inputs are missing, but + that doesn't matter because you're not actually going to run the + commands). + target_mtime# newer ()# build a pair of lists (sources, targets) where source is newer# newer_pairwise ()# If the target doesn't even exist, then it's definitely out-of-date.# Otherwise we have to find out the hard way: if *any* source file# is more recent than 'target', then 'target' is out-of-date and# we can immediately return true. If we fall through to the end# of the loop, then 'target' is up-to-date and we return false.# blow up when we stat() the file# missing source dropped from# target's dependency list# missing source means target is# out-of-date# newer_group ()b'distutils.dep_util + +Utility functions for simple, timestamp-based dependency of files +and groups of files; also, function based entirely on such +timestamp dependency analysis.'u'distutils.dep_util + +Utility functions for simple, timestamp-based dependency of files +and groups of files; also, function based entirely on such +timestamp dependency analysis.'b'Return true if 'source' exists and is more recently modified than + 'target', or if 'source' exists and 'target' doesn't. Return false if + both exist and 'target' is the same age or younger than 'source'. + Raise DistutilsFileError if 'source' does not exist. + 'u'Return true if 'source' exists and is more recently modified than + 'target', or if 'source' exists and 'target' doesn't. Return false if + both exist and 'target' is the same age or younger than 'source'. + Raise DistutilsFileError if 'source' does not exist. + 'b'file '%s' does not exist'u'file '%s' does not exist'b'Walk two filename lists in parallel, testing if each source is newer + than its corresponding target. Return a pair of lists (sources, + targets) where source is newer than target, according to the semantics + of 'newer()'. + 'u'Walk two filename lists in parallel, testing if each source is newer + than its corresponding target. Return a pair of lists (sources, + targets) where source is newer than target, according to the semantics + of 'newer()'. + 'b''sources' and 'targets' must be same length'u''sources' and 'targets' must be same length'b'Return true if 'target' is out-of-date with respect to any file + listed in 'sources'. In other words, if 'target' exists and is newer + than every file in 'sources', return false; otherwise return true. + 'missing' controls what we do when a source file is missing; the + default ("error") is to blow up with an OSError from inside 'stat()'; + if it is "ignore", we silently drop any missing source files; if it is + "newer", any missing source files make us assume that 'target' is + out-of-date (this is handy in "dry-run" mode: it'll make you pretend to + carry out commands that wouldn't work because inputs are missing, but + that doesn't matter because you're not actually going to run the + commands). + 'u'Return true if 'target' is out-of-date with respect to any file + listed in 'sources'. In other words, if 'target' exists and is newer + than every file in 'sources', return false; otherwise return true. 
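A sketch of typical use of these helpers; the file names are hypothetical, and missing="newer" simply treats absent sources as forcing a rebuild:

    from distutils.dep_util import newer, newer_group

    sources = ["spam.c", "eggs.c"]          # hypothetical inputs
    target = "libspam.a"                    # hypothetical output

    if newer_group(sources, target, missing="newer"):
        print("rebuilding", target)         # some source is newer (or missing)

    if newer("setup.cfg", "build/stamp"):   # raises if setup.cfg does not exist
        print("configuration changed since the last build")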
+ 'missing' controls what we do when a source file is missing; the + default ("error") is to blow up with an OSError from inside 'stat()'; + if it is "ignore", we silently drop any missing source files; if it is + "newer", any missing source files make us assume that 'target' is + out-of-date (this is handy in "dry-run" mode: it'll make you pretend to + carry out commands that wouldn't work because inputs are missing, but + that doesn't matter because you're not actually going to run the + commands). + 'u'distutils.dep_util'u'dep_util' +Module difflib -- helpers for computing deltas between objects. + +Function get_close_matches(word, possibilities, n=3, cutoff=0.6): + Use SequenceMatcher to return list of the best "good enough" matches. + +Function context_diff(a, b): + For two lists of strings, return a delta in context diff format. + +Function ndiff(a, b): + Return a delta: the difference between `a` and `b` (lists of strings). + +Function restore(delta, which): + Return one of the two sequences that generated an ndiff delta. + +Function unified_diff(a, b): + For two lists of strings, return a delta in unified diff format. + +Class SequenceMatcher: + A flexible class for comparing pairs of sequences of any type. + +Class Differ: + For producing human-readable deltas from sequences of lines of text. + +Class HtmlDiff: + For producing HTML side by side comparison with change highlights. +get_close_matchesSequenceMatcherDifferIS_CHARACTER_JUNKIS_LINE_JUNKcontext_diffunified_diffdiff_bytesHtmlDiffMatch_nlargesta b size_calculate_ratio + SequenceMatcher is a flexible class for comparing pairs of sequences of + any type, so long as the sequence elements are hashable. The basic + algorithm predates, and is a little fancier than, an algorithm + published in the late 1980's by Ratcliff and Obershelp under the + hyperbolic name "gestalt pattern matching". The basic idea is to find + the longest contiguous matching subsequence that contains no "junk" + elements (R-O doesn't address junk). The same idea is then applied + recursively to the pieces of the sequences to the left and to the right + of the matching subsequence. This does not yield minimal edit + sequences, but does tend to yield matches that "look right" to people. + + SequenceMatcher tries to compute a "human-friendly diff" between two + sequences. Unlike e.g. UNIX(tm) diff, the fundamental notion is the + longest *contiguous* & junk-free matching subsequence. That's what + catches peoples' eyes. The Windows(tm) windiff has another interesting + notion, pairing up elements that appear uniquely in each sequence. + That, and the method here, appear to yield more intuitive difference + reports than does diff. This method appears to be the least vulnerable + to synching up on blocks of "junk lines", though (like blank lines in + ordinary text files, or maybe "

" lines in HTML files). That may be + because this is the only method of the 3 that has a *concept* of + "junk" . + + Example, comparing two strings, and considering blanks to be "junk": + + >>> s = SequenceMatcher(lambda x: x == " ", + ... "private Thread currentThread;", + ... "private volatile Thread currentThread;") + >>> + + .ratio() returns a float in [0, 1], measuring the "similarity" of the + sequences. As a rule of thumb, a .ratio() value over 0.6 means the + sequences are close matches: + + >>> print(round(s.ratio(), 3)) + 0.866 + >>> + + If you're only interested in where the sequences match, + .get_matching_blocks() is handy: + + >>> for block in s.get_matching_blocks(): + ... print("a[%d] and b[%d] match for %d elements" % block) + a[0] and b[0] match for 8 elements + a[8] and b[17] match for 21 elements + a[29] and b[38] match for 0 elements + + Note that the last tuple returned by .get_matching_blocks() is always a + dummy, (len(a), len(b), 0), and this is the only case in which the last + tuple element (number of elements matched) is 0. + + If you want to know how to change the first sequence into the second, + use .get_opcodes(): + + >>> for opcode in s.get_opcodes(): + ... print("%6s a[%d:%d] b[%d:%d]" % opcode) + equal a[0:8] b[0:8] + insert a[8:8] b[8:17] + equal a[8:29] b[17:38] + + See the Differ class for a fancy human-friendly file differencer, which + uses SequenceMatcher both to compare sequences of lines, and to compare + sequences of characters within similar (near-matching) lines. + + See also function get_close_matches() in this module, which shows how + simple code building on SequenceMatcher can be used to do useful work. + + Timing: Basic R-O is cubic time worst case and quadratic time expected + case. SequenceMatcher is quadratic time for the worst case and has + expected-case behavior dependent in a complicated way on how many + elements the sequences have in common; best case time is linear. + + Methods: + + __init__(isjunk=None, a='', b='') + Construct a SequenceMatcher. + + set_seqs(a, b) + Set the two sequences to be compared. + + set_seq1(a) + Set the first sequence to be compared. + + set_seq2(b) + Set the second sequence to be compared. + + find_longest_match(alo, ahi, blo, bhi) + Find longest matching block in a[alo:ahi] and b[blo:bhi]. + + get_matching_blocks() + Return list of triples describing matching subsequences. + + get_opcodes() + Return list of 5-tuples describing how to turn a into b. + + ratio() + Return a measure of the sequences' similarity (float in [0,1]). + + quick_ratio() + Return an upper bound on .ratio() relatively quickly. + + real_quick_ratio() + Return an upper bound on ratio() very quickly. + isjunkautojunkConstruct a SequenceMatcher. + + Optional arg isjunk is None (the default), or a one-argument + function that takes a sequence element and returns true iff the + element is junk. None is equivalent to passing "lambda x: 0", i.e. + no elements are considered to be junk. For example, pass + lambda x: x in " \t" + if you're comparing lines as sequences of characters, and don't + want to synch up on blanks or hard tabs. + + Optional arg a is the first of two sequences to be compared. By + default, an empty string. The elements of a must be hashable. See + also .set_seqs() and .set_seq1(). + + Optional arg b is the second of two sequences to be compared. By + default, an empty string. The elements of b must be hashable. See + also .set_seqs() and .set_seq2(). 
+ + Optional arg autojunk should be set to False to disable the + "automatic junk heuristic" that treats popular elements as junk + (see module documentation for more information). + set_seqsSet the two sequences to be compared. + + >>> s = SequenceMatcher() + >>> s.set_seqs("abcd", "bcde") + >>> s.ratio() + 0.75 + set_seq1set_seq2Set the first sequence to be compared. + + The second sequence to be compared is not changed. + + >>> s = SequenceMatcher(None, "abcd", "bcde") + >>> s.ratio() + 0.75 + >>> s.set_seq1("bcde") + >>> s.ratio() + 1.0 + >>> + + SequenceMatcher computes and caches detailed information about the + second sequence, so if you want to compare one sequence S against + many sequences, use .set_seq2(S) once and call .set_seq1(x) + repeatedly for each of the other sequences. + + See also set_seqs() and set_seq2(). + matching_blocksopcodesSet the second sequence to be compared. + + The first sequence to be compared is not changed. + + >>> s = SequenceMatcher(None, "abcd", "bcde") + >>> s.ratio() + 0.75 + >>> s.set_seq2("abcd") + >>> s.ratio() + 1.0 + >>> + + SequenceMatcher computes and caches detailed information about the + second sequence, so if you want to compare one sequence S against + many sequences, use .set_seq2(S) once and call .set_seq1(x) + repeatedly for each of the other sequences. + + See also set_seqs() and set_seq1(). + fullbcount__chain_bb2jbjunkjunkbpopularpopularntestidxsfind_longest_matchaloahiblobhiFind longest matching block in a[alo:ahi] and b[blo:bhi]. + + If isjunk is not defined: + + Return (i,j,k) such that a[i:i+k] is equal to b[j:j+k], where + alo <= i <= i+k <= ahi + blo <= j <= j+k <= bhi + and for all (i',j',k') meeting those conditions, + k >= k' + i <= i' + and if i == i', j <= j' + + In other words, of all maximal matching blocks, return one that + starts earliest in a, and of all those maximal matching blocks that + start earliest in a, return the one that starts earliest in b. + + >>> s = SequenceMatcher(None, " abcd", "abcd abcd") + >>> s.find_longest_match(0, 5, 0, 9) + Match(a=0, b=4, size=5) + + If isjunk is defined, first the longest matching block is + determined as above, but with the additional restriction that no + junk element appears in the block. Then that block is extended as + far as possible by matching (only) junk elements on both sides. So + the resulting block never matches on junk except as identical junk + happens to be adjacent to an "interesting" match. + + Here's the same example as before, but considering blanks to be + junk. That prevents " abcd" from matching the " abcd" at the tail + end of the second sequence directly. Instead only the "abcd" can + match, and matches the leftmost "abcd" in the second sequence: + + >>> s = SequenceMatcher(lambda x: x==" ", " abcd", "abcd abcd") + >>> s.find_longest_match(0, 5, 0, 9) + Match(a=1, b=0, size=4) + + If no blocks match, return (alo, blo, 0). + + >>> s = SequenceMatcher(None, "ab", "c") + >>> s.find_longest_match(0, 2, 0, 1) + Match(a=0, b=0, size=0) + isbjunkbestibestjbestsizej2lennothingj2lengetnewj2lenget_matching_blocksReturn list of triples describing matching subsequences. + + Each triple is of the form (i, j, n), and means that + a[i:i+n] == b[j:j+n]. The triples are monotonically increasing in + i and in j. New in Python 2.5, it's also guaranteed that if + (i, j, n) and (i', j', n') are adjacent triples in the list, and + the second is not the last triple in the list, then i+n != i' or + j+n != j'. IOW, adjacent triples never describe adjacent equal + blocks. 
+ + The last triple is a dummy, (len(a), len(b), 0), and is the only + triple with n==0. + + >>> s = SequenceMatcher(None, "abxcd", "abcd") + >>> list(s.get_matching_blocks()) + [Match(a=0, b=0, size=2), Match(a=3, b=2, size=2), Match(a=5, b=4, size=0)] + lalbj1k1non_adjacentj2k2get_opcodesReturn list of 5-tuples describing how to turn a into b. + + Each tuple is of the form (tag, i1, i2, j1, j2). The first tuple + has i1 == j1 == 0, and remaining tuples have i1 == the i2 from the + tuple preceding it, and likewise for j1 == the previous j2. + + The tags are strings, with these meanings: + + 'replace': a[i1:i2] should be replaced by b[j1:j2] + 'delete': a[i1:i2] should be deleted. + Note that j1==j2 in this case. + 'insert': b[j1:j2] should be inserted at a[i1:i1]. + Note that i1==i2 in this case. + 'equal': a[i1:i2] == b[j1:j2] + + >>> a = "qabxcd" + >>> b = "abycdf" + >>> s = SequenceMatcher(None, a, b) + >>> for tag, i1, i2, j1, j2 in s.get_opcodes(): + ... print(("%7s a[%d:%d] (%s) b[%d:%d] (%s)" % + ... (tag, i1, i2, a[i1:i2], j1, j2, b[j1:j2]))) + delete a[0:1] (q) b[0:0] () + equal a[1:3] (ab) b[0:2] (ab) + replace a[3:4] (x) b[2:3] (y) + equal a[4:6] (cd) b[3:5] (cd) + insert a[6:6] () b[5:6] (f) + answeraibjequalget_grouped_opcodes Isolate change clusters by eliminating ranges with no changes. + + Return a generator of groups with up to n lines of context. + Each group is in the same format as returned by get_opcodes(). + + >>> from pprint import pprint + >>> a = list(map(str, range(1,40))) + >>> b = a[:] + >>> b[8:8] = ['i'] # Make an insertion + >>> b[20] += 'x' # Make a replacement + >>> b[23:28] = [] # Make a deletion + >>> b[30] += 'y' # Make another replacement + >>> pprint(list(SequenceMatcher(None,a,b).get_grouped_opcodes())) + [[('equal', 5, 8, 5, 8), ('insert', 8, 8, 8, 9), ('equal', 8, 11, 9, 12)], + [('equal', 16, 19, 17, 20), + ('replace', 19, 20, 20, 21), + ('equal', 20, 22, 21, 23), + ('delete', 22, 27, 23, 23), + ('equal', 27, 30, 23, 26)], + [('equal', 31, 34, 27, 30), + ('replace', 34, 35, 30, 31), + ('equal', 35, 38, 31, 34)]] + codesnnratioReturn a measure of the sequences' similarity (float in [0,1]). + + Where T is the total number of elements in both sequences, and + M is the number of matches, this is 2.0*M / T. + Note that this is 1 if the sequences are identical, and 0 if + they have nothing in common. + + .ratio() is expensive to compute if you haven't already computed + .get_matching_blocks() or .get_opcodes(), in which case you may + want to try .quick_ratio() or .real_quick_ratio() first to get an + upper bound. + + >>> s = SequenceMatcher(None, "abcd", "bcde") + >>> s.ratio() + 0.75 + >>> s.quick_ratio() + 0.75 + >>> s.real_quick_ratio() + 1.0 + triplequick_ratioReturn an upper bound on ratio() relatively quickly. + + This isn't defined beyond that it is an upper bound on .ratio(), and + is faster to compute. + availavailhasnumbreal_quick_ratioReturn an upper bound on ratio() very quickly. + + This isn't defined beyond that it is an upper bound on .ratio(), and + is faster to compute than either .ratio() or .quick_ratio(). + 0.6possibilitiesUse SequenceMatcher to return list of the best "good enough" matches. + + word is a sequence for which close matches are desired (typically a + string). + + possibilities is a list of sequences against which to match word + (typically a list of strings). + + Optional arg n (default 3) is the maximum number of close matches to + return. n must be > 0. + + Optional arg cutoff (default 0.6) is a float in [0, 1]. 
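The cutoff is applied through a cascade of increasingly expensive similarity bounds; a sketch of that filtering pattern (close_enough is an illustrative name):

    from difflib import SequenceMatcher

    def close_enough(word, candidate, cutoff=0.6):
        # Cheap upper bounds first; the expensive ratio() runs only if
        # both bounds already clear the cutoff.
        s = SequenceMatcher(None, candidate, word)
        return (s.real_quick_ratio() >= cutoff and
                s.quick_ratio() >= cutoff and
                s.ratio() >= cutoff)

    print(close_enough("appel", "apple"))        # True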
Possibilities + that don't score at least that similar to word are ignored. + + The best (no more than n) matches among the possibilities are returned + in a list, sorted by similarity score, most similar first. + + >>> get_close_matches("appel", ["ape", "apple", "peach", "puppy"]) + ['apple', 'ape'] + >>> import keyword as _keyword + >>> get_close_matches("wheel", _keyword.kwlist) + ['while'] + >>> get_close_matches("Apple", _keyword.kwlist) + [] + >>> get_close_matches("accept", _keyword.kwlist) + ['except'] + n must be > 0: %rcutoff must be in [0.0, 1.0]: %rscore_keep_original_wstag_sReplace whitespace with the original whitespace characters in `s`tag_c + Differ is a class for comparing sequences of lines of text, and + producing human-readable differences or deltas. Differ uses + SequenceMatcher both to compare sequences of lines, and to compare + sequences of characters within similar (near-matching) lines. + + Each line of a Differ delta begins with a two-letter code: + + '- ' line unique to sequence 1 + '+ ' line unique to sequence 2 + ' ' line common to both sequences + '? ' line not present in either input sequence + + Lines beginning with '? ' attempt to guide the eye to intraline + differences, and were not present in either input sequence. These lines + can be confusing if the sequences contain tab characters. + + Note that Differ makes no claim to produce a *minimal* diff. To the + contrary, minimal diffs are often counter-intuitive, because they synch + up anywhere possible, sometimes accidental matches 100 pages apart. + Restricting synch points to contiguous matches preserves some notion of + locality, at the occasional cost of producing a longer diff. + + Example: Comparing two texts. + + First we set up the texts, sequences of individual single-line strings + ending with newlines (such sequences can also be obtained from the + `readlines()` method of file-like objects): + + >>> text1 = ''' 1. Beautiful is better than ugly. + ... 2. Explicit is better than implicit. + ... 3. Simple is better than complex. + ... 4. Complex is better than complicated. + ... '''.splitlines(keepends=True) + >>> len(text1) + 4 + >>> text1[0][-1] + '\n' + >>> text2 = ''' 1. Beautiful is better than ugly. + ... 3. Simple is better than complex. + ... 4. Complicated is better than complex. + ... 5. Flat is better than nested. + ... '''.splitlines(keepends=True) + + Next we instantiate a Differ object: + + >>> d = Differ() + + Note that when instantiating a Differ object we may pass functions to + filter out line and character 'junk'. See Differ.__init__ for details. + + Finally, we compare the two: + + >>> result = list(d.compare(text1, text2)) + + 'result' is a list of strings, so let's pretty-print it: + + >>> from pprint import pprint as _pprint + >>> _pprint(result) + [' 1. Beautiful is better than ugly.\n', + '- 2. Explicit is better than implicit.\n', + '- 3. Simple is better than complex.\n', + '+ 3. Simple is better than complex.\n', + '? ++\n', + '- 4. Complex is better than complicated.\n', + '? ^ ---- ^\n', + '+ 4. Complicated is better than complex.\n', + '? ++++ ^ ^\n', + '+ 5. Flat is better than nested.\n'] + + As a single multi-line string it looks like this: + + >>> print(''.join(result), end="") + 1. Beautiful is better than ugly. + - 2. Explicit is better than implicit. + - 3. Simple is better than complex. + + 3. Simple is better than complex. + ? ++ + - 4. Complex is better than complicated. + ? ^ ---- ^ + + 4. Complicated is better than complex. + ? ++++ ^ ^ + + 5. 
Flat is better than nested. + + Methods: + + __init__(linejunk=None, charjunk=None) + Construct a text differencer, with optional filters. + + compare(a, b) + Compare two sequences of lines; generate the resulting delta. + linejunkcharjunk + Construct a text differencer, with optional filters. + + The two optional keyword parameters are for filter functions: + + - `linejunk`: A function that should accept a single string argument, + and return true iff the string is junk. The module-level function + `IS_LINE_JUNK` may be used to filter out lines without visible + characters, except for at most one splat ('#'). It is recommended + to leave linejunk None; the underlying SequenceMatcher class has + an adaptive notion of "noise" lines that's better than any static + definition the author has ever been able to craft. + + - `charjunk`: A function that should accept a string of length 1. The + module-level function `IS_CHARACTER_JUNK` may be used to filter out + whitespace characters (a blank or tab; **note**: bad idea to include + newline in this!). Use of IS_CHARACTER_JUNK is recommended. + + Compare two sequences of lines; generate the resulting delta. + + Each sequence must contain individual single-line strings ending with + newlines. Such sequences can be obtained from the `readlines()` method + of file-like objects. The delta generated also consists of newline- + terminated strings, ready to be printed as-is via the writeline() + method of a file-like object. + + Example: + + >>> print(''.join(Differ().compare('one\ntwo\nthree\n'.splitlines(True), + ... 'ore\ntree\nemu\n'.splitlines(True))), + ... end="") + - one + ? ^ + + ore + ? ^ + - two + - three + ? - + + tree + + emu + cruncher_fancy_replace_dumpGenerate comparison results for a same-tagged range._plain_replace + When replacing one block of lines with another, search the blocks + for *similar* lines; the best-matching pair (if any) is used as a + synch point, and intraline difference marking is done on the + similar pair. Lots of work, but often worth it. + + Example: + + >>> d = Differ() + >>> results = d._fancy_replace(['abcDefghiJkl\n'], 0, 1, + ... ['abcdefGhijkl\n'], 0, 1) + >>> print(''.join(results), end="") + - abcDefghiJkl + ? ^ ^ ^ + + abcdefGhijkl + ? ^ ^ ^ + 0.74best_ratioeqieqjbest_ibest_j_fancy_helperaeltbeltatagsbtagsai1ai2bj1bj2_qformatalinebline + Format "?" output and deal with tabs. + + Example: + + >>> d = Differ() + >>> results = d._qformat('\tabcDefghiJkl\n', '\tabcdefGhijkl\n', + ... ' ^ ^ ^ ', ' ^ ^ ^ ') + >>> for line in results: print(repr(line)) + ... + '- \tabcDefghiJkl\n' + '? \t ^ ^ ^\n' + '+ \tabcdefGhijkl\n' + '? \t ^ ^ ^\n' + - ? + \s*(?:#\s*)?$pat + Return True for ignorable line: iff `line` is blank or contains a single '#'. + + Examples: + + >>> IS_LINE_JUNK('\n') + True + >>> IS_LINE_JUNK(' # \n') + True + >>> IS_LINE_JUNK('hello\n') + False + + Return True for ignorable character: iff `ch` is a space or tab. + + Examples: + + >>> IS_CHARACTER_JUNK(' ') + True + >>> IS_CHARACTER_JUNK('\t') + True + >>> IS_CHARACTER_JUNK('\n') + False + >>> IS_CHARACTER_JUNK('x') + False + _format_range_unifiedConvert range to the "ed" formatbeginning{},{}fromfiledatetofiledatelineterm + Compare two sequences of lines; generate the delta as a unified diff. + + Unified diffs are a compact way of showing line changes and a few + lines of context. The number of context lines is set by 'n' which + defaults to three. 
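A sketch of the common file-to-file case, with hypothetical file names; readlines() keeps the trailing newlines, so the generated delta can be written out unchanged:

    import difflib, sys

    with open("before.txt") as f:
        before = f.readlines()
    with open("after.txt") as f:
        after = f.readlines()

    sys.stdout.writelines(difflib.unified_diff(
        before, after, fromfile="before.txt", tofile="after.txt", n=3))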
+ + By default, the diff control lines (those with ---, +++, or @@) are + created with a trailing newline. This is helpful so that inputs + created from file.readlines() result in diffs that are suitable for + file.writelines() since both the inputs and outputs have trailing + newlines. + + For inputs that do not have trailing newlines, set the lineterm + argument to "" so that the output will be uniformly newline free. + + The unidiff format normally has a header for filenames and modification + times. Any or all of these may be specified using strings for + 'fromfile', 'tofile', 'fromfiledate', and 'tofiledate'. + The modification times are normally expressed in the ISO 8601 format. + + Example: + + >>> for line in unified_diff('one two three four'.split(), + ... 'zero one tree four'.split(), 'Original', 'Current', + ... '2005-01-26 23:30:50', '2010-04-02 10:20:52', + ... lineterm=''): + ... print(line) # doctest: +NORMALIZE_WHITESPACE + --- Original 2005-01-26 23:30:50 + +++ Current 2010-04-02 10:20:52 + @@ -1,4 +1,4 @@ + +zero + one + -two + -three + +tree + four + _check_types {}fromdatetodate--- {}{}{}+++ {}{}{}file1_rangefile2_range@@ -{} +{} @@{}_format_range_context + Compare two sequences of lines; generate the delta as a context diff. + + Context diffs are a compact way of showing line changes and a few + lines of context. The number of context lines is set by 'n' which + defaults to three. + + By default, the diff control lines (those with *** or ---) are + created with a trailing newline. This is helpful so that inputs + created from file.readlines() result in diffs that are suitable for + file.writelines() since both the inputs and outputs have trailing + newlines. + + For inputs that do not have trailing newlines, set the lineterm + argument to "" so that the output will be uniformly newline free. + + The context diff format normally has a header for filenames and + modification times. Any or all of these may be specified using + strings for 'fromfile', 'tofile', 'fromfiledate', and 'tofiledate'. + The modification times are normally expressed in the ISO 8601 format. + If not specified, the strings default to blanks. + + Example: + + >>> print(''.join(context_diff('one\ntwo\nthree\nfour\n'.splitlines(True), + ... 'zero\none\ntree\nfour\n'.splitlines(True), 'Original', 'Current')), + ... end="") + *** Original + --- Current + *************** + *** 1,4 **** + one + ! two + ! three + four + --- 1,4 ---- + + zero + one + ! tree + four + ! *** {}{}{}****************** {} ****{}--- {} ----{}lines to compare must be str, not %s (%r)all arguments must be str, not: %rdfunc + Compare `a` and `b`, two sequences of lines represented as bytes rather + than str. This is a wrapper for `dfunc`, which is typically either + unified_diff() or context_diff(). Inputs are losslessly converted to + strings so that `dfunc` only has to worry about strings, and encoded + back to bytes on return. This is necessary to compare files with + unknown or inconsistent encoding. All other inputs (except `n`) must be + bytes rather than str. + all arguments must be bytes, not %s (%r) + Compare `a` and `b` (lists of strings); return a `Differ`-style delta. + + Optional keyword parameters `linejunk` and `charjunk` are for filter + functions, or can be None: + + - linejunk: A function that should accept a single string argument and + return true iff the string is junk. The default is None, and is + recommended; the underlying SequenceMatcher class has an adaptive + notion of "noise" lines. 
+ + - charjunk: A function that accepts a character (string of length + 1), and returns true iff the character is junk. The default is + the module-level function IS_CHARACTER_JUNK, which filters out + whitespace characters (a blank or tab; note: it's a bad idea to + include newline in this!). + + Tools/scripts/ndiff.py is a command-line front-end to this function. + + Example: + + >>> diff = ndiff('one\ntwo\nthree\n'.splitlines(keepends=True), + ... 'ore\ntree\nemu\n'.splitlines(keepends=True)) + >>> print(''.join(diff), end="") + - one + ? ^ + + ore + ? ^ + - two + - three + ? - + + tree + + emu + _mdifffromlinestolinesReturns generator yielding marked up from/to side by side differences. + + Arguments: + fromlines -- list of text lines to compared to tolines + tolines -- list of text lines to be compared to fromlines + context -- number of context lines to display on each side of difference, + if None, all from/to text lines will be generated. + linejunk -- passed on to ndiff (see ndiff documentation) + charjunk -- passed on to ndiff (see ndiff documentation) + + This function returns an iterator which returns a tuple: + (from line tuple, to line tuple, boolean flag) + + from/to line tuple -- (line num, line text) + line num -- integer or None (to indicate a context separation) + line text -- original line text with following markers inserted: + '\0+' -- marks start of added text + '\0-' -- marks start of deleted text + '\0^' -- marks start of changed text + '\1' -- marks end of added/deleted/changed text + + boolean flag -- None indicates context separation, True indicates + either "from" or "to" line contains a change, otherwise False. + + This function/iterator was originally developed to generate side by side + file difference for making HTML pages (see HtmlDiff class for example + usage). + + Note, this function utilizes the ndiff function to generate the side by + side difference markup. Optional ndiff arguments may be passed to this + function and they in turn will be passed to ndiff. + (\++|\-+|\^+)change_rediff_lines_iterator_make_lineformat_keysidenum_linesReturns line of text with user's change markup and line formatting. + + lines -- list of lines from the ndiff generator to produce a line of + text from. When producing the line of text to return, the + lines used are removed from this list. + format_key -- '+' return first line in list with "add" markup around + the entire line. + '-' return first line in list with "delete" markup around + the entire line. + '?' return first line in list with add/delete/change + intraline markup (indices obtained from second line) + None return first line in list with no markup + side -- indice into the num_lines list (0=from,1=to) + num_lines -- from/to current line number. This is NOT intended to be a + passed parameter. It is present as a keyword argument to + maintain memory of the current line numbers between calls + of this function. + + Note, this function is purposefully not defined at the module scope so + that data it needs from its parent function (within whose context it + is defined) does not need to be of module scope. + markerssub_inforecord_sub_infomatch_objectspan_line_iteratorYields from/to lines of text with a change indication. + + This function is an iterator. It itself pulls lines from a + differencing iterator, processes them and yields them. When it can + it yields both a "from" and a "to" line, otherwise it will yield one + or the other. 
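_mdiff is an internal helper (note the leading underscore), so treat the following as a hedged sketch of its documented output shape rather than a supported API; the \0/\1 markers are the ones listed above.

import difflib

old = "one\ntwo\nthree\n".splitlines(keepends=True)
new = "one\ntree\nemu\n".splitlines(keepends=True)

# Each item is ((from_num, from_text), (to_num, to_text), changed_flag);
# '\0+', '\0-' and '\0^' open added/deleted/changed spans, '\1' closes them.
for from_info, to_info, changed in difflib._mdiff(old, new):
    print(from_info, to_info, changed)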
In addition to yielding the lines of from/to text, a + boolean flag is yielded to indicate if the text line(s) have + differences in them. + + Note, this function is purposefully not defined at the module scope so + that data it needs from its parent function (within whose context it + is defined) does not need to be of module scope. + num_blanks_pendingnum_blanks_to_yield-?+?--++--?+--+from_lineto_line-+?-?++--+-_line_pair_iteratorYields from/to lines of text with a change indication. + + This function is an iterator. It itself pulls lines from the line + iterator. Its difference from that iterator is that this function + always yields a pair of from/to text lines (with the change + indication). If necessary it will collect single from/to lines + until it has a matching pair from/to pair to yield. + + Note, this function is purposefully not defined at the module scope so + that data it needs from its parent function (within whose context it + is defined) does not need to be of module scope. + line_iteratorfound_difffromDiffto_diffline_pair_iteratorlines_to_writecontextLines + + + + + + + + + + + + %(table)s%(legend)s + + +_file_template + table.diff {font-family:Courier; border:medium;} + .diff_header {background-color:#e0e0e0} + td.diff_header {text-align:right} + .diff_next {background-color:#c0c0c0} + .diff_add {background-color:#aaffaa} + .diff_chg {background-color:#ffff77} + .diff_sub {background-color:#ffaaaa}_styles + + + + %(header_row)s + +%(data_rows)s +
_table_template
Legends: Colors (Added, Changed, Deleted); Links ((f)irst change, (n)ext change, (t)op)
_legendFor producing HTML side by side comparison with change highlights. + + This class can be used to create an HTML table (or a complete HTML file + containing the table) showing a side by side, line by line comparison + of text with inter-line and intra-line change highlights. The table can + be generated in either full or contextual difference mode. + + The following methods are provided for HTML generation: + + make_table -- generates HTML for a single side by side table + make_file -- generates complete HTML file with a single side by side table + + See tools/scripts/diff.py for an example usage of this class. + _default_prefixwrapcolumnHtmlDiff instance initializer + + Arguments: + tabsize -- tab stop spacing, defaults to 8. + wrapcolumn -- column number where lines are broken and wrapped, + defaults to None where lines are not wrapped. + linejunk,charjunk -- keyword arguments passed into ndiff() (used by + HtmlDiff() to generate the side by side HTML differences). See + ndiff() documentation for argument default values and descriptions. + _tabsize_wrapcolumn_linejunk_charjunkmake_filefromdesctodescnumlinesReturns HTML file of side by side comparison with change highlights + + Arguments: + fromlines -- list of "from" lines + tolines -- list of "to" lines + fromdesc -- "from" file column header string + todesc -- "to" file column header string + context -- set to True for contextual differences (defaults to False + which shows full differences). + numlines -- number of context lines. When context is set True, + controls number of lines displayed before and after the change. + When context is False, controls the number of lines to place + the "next" link anchors before the next change (so click of + "next" link jumps to just before the change). + charset -- charset of the HTML document + styleslegendmake_tabletable_tab_newline_replaceReturns from/to line lists with tabs expanded and newlines removed. + + Instead of tab characters being replaced by the number of spaces + needed to fill in to the next tab stop, this function will fill + the space with tab characters. This is done so that the difference + algorithms can identify changes in a file when tabs are replaced by + spaces and vice versa. At the end of the HTML generation, the tab + characters will be replaced with a nonbreakable space. + expand_tabs_split_linedata_listline_numBuilds list of text lines by splitting text lines at wrap point + + This function will determine if the input text line needs to be + wrapped (split) into separate lines. If so, the first wrap point + will be determined and the first line appended to the output + text line list. This function is used recursively to handle + the second part of the split line to further split it. + line1line2_line_wrapperdiffsReturns iterator that splits (wraps) mdiff text linesfromdatatodatafromlinefromtexttolinetotext_collect_linesCollects mdiff output into separate lists + + Before storing the mdiff from/to data into a list, it is converted + into a single line of text with HTML markup. 
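A minimal usage sketch for the make_file path described above; the file names are illustrative placeholders.

import difflib

old = "one\ntwo\nthree\n".splitlines(keepends=True)
new = "one\ntree\nemu\n".splitlines(keepends=True)

# Full side-by-side page; passing context=True plus a numlines value would show
# only the changed regions with that many lines of context around each change.
html = difflib.HtmlDiff(tabsize=4, wrapcolumn=72).make_file(
    old, new, fromdesc="before.txt", todesc="after.txt")
with open("diff.html", "w", encoding="utf-8") as fp:
    fp.write(html)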
+ flaglist_format_linelinenumReturns HTML markup of "from" / "to" text lines + + side -- 0 or 1 indicating "from" or "to" text + flag -- indicates if difference on line + linenum -- line number (used for line number column) + text -- line text to be marked up + %d id="%s%s"_prefix %s%s_make_prefixCreate unique anchor prefixesfrom%d_fromprefixto%d_toprefix_convert_flagsMakes list of "next" linksnext_idnext_hrefnum_chgin_change id="difflib_chg_%s_%d"n No Differences Found  Empty File ftReturns HTML table of side by side comparison with change highlights + + Arguments: + fromlines -- list of "from" lines + tolines -- list of "to" lines + fromdesc -- "from" file column header string + todesc -- "to" file column header string + context -- set to True for contextual differences (defaults to False + which shows full differences). + numlines -- number of context lines. When context is set True, + controls number of lines displayed before and after the change. + When context is False, controls the number of lines to place + the "next" link anchors before the next change (so click of + "next" link jumps to just before the change). + context_lines %s%s%s%s + + +%s%s%s%s
%sheader_rowdata_rows+-^which + Generate one of the two sequences that generated a delta. + + Given a `delta` produced by `Differ.compare()` or `ndiff()`, extract + lines originating from file 1 or 2 (parameter `which`), stripping off line + prefixes. + + Examples: + + >>> diff = ndiff('one\ntwo\nthree\n'.splitlines(keepends=True), + ... 'ore\ntree\nemu\n'.splitlines(keepends=True)) + >>> diff = list(diff) + >>> print(''.join(restore(diff, 1)), end="") + one + two + three + >>> print(''.join(restore(diff, 2)), end="") + ore + tree + emu + unknown delta choice (must be 1 or 2): %rprefixes# Members:# a# first sequence# b# second sequence; differences are computed as "what do# we need to do to 'a' to change it into 'b'?"# b2j# for x in b, b2j[x] is a list of the indices (into b)# at which x appears; junk and popular elements do not appear# fullbcount# for x in b, fullbcount[x] == the number of times x# appears in b; only materialized if really needed (used# only for computing quick_ratio())# matching_blocks# a list of (i, j, k) triples, where a[i:i+k] == b[j:j+k];# ascending & non-overlapping in i and in j; terminated by# a dummy (len(a), len(b), 0) sentinel# opcodes# a list of (tag, i1, i2, j1, j2) tuples, where tag is# one of# 'replace' a[i1:i2] should be replaced by b[j1:j2]# 'delete' a[i1:i2] should be deleted# 'insert' b[j1:j2] should be inserted# 'equal' a[i1:i2] == b[j1:j2]# isjunk# a user-supplied function taking a sequence element and# returning true iff the element is "junk" -- this has# subtle but helpful effects on the algorithm, which I'll# get around to writing up someday <0.9 wink>.# DON'T USE! Only __chain_b uses this. Use "in self.bjunk".# bjunk# the items in b for which isjunk is True.# bpopular# nonjunk items in b treated as junk by the heuristic (if used).# For each element x in b, set b2j[x] to a list of the indices in# b where x appears; the indices are in increasing order; note that# the number of times x appears in b is len(b2j[x]) ...# when self.isjunk is defined, junk elements don't show up in this# map at all, which stops the central find_longest_match method# from starting any matching block at a junk element ...# b2j also does not contain entries for "popular" elements, meaning# elements that account for more than 1 + 1% of the total elements, and# when the sequence is reasonably large (>= 200 elements); this can# be viewed as an adaptive notion of semi-junk, and yields an enormous# speedup when, e.g., comparing program files with hundreds of# instances of "return NULL;" ...# note that this is only called when b changes; so for cross-product# kinds of matches, it's best to call set_seq2 once, then set_seq1# repeatedly# Because isjunk is a user-defined (not C) function, and we test# for junk a LOT, it's important to minimize the number of calls.# Before the tricks described here, __chain_b was by far the most# time-consuming routine in the whole module! If anyone sees# Jim Roskind, thank him again for profile.py -- I never would# have guessed that.# The first trick is to build b2j ignoring the possibility# of junk. I.e., we don't call isjunk at all yet. Throwing# out the junk later is much cheaper than building b2j "right"# from the start.# Purge junk elements# separate loop avoids separate list of keys# Purge popular elements that are not junk# ditto; as fast for 1% deletion# CAUTION: stripping common prefix or suffix would be incorrect.# E.g.,# ab# acab# Longest matching block is "ab", but if common prefix is# stripped, it's "a" (tied with "b"). 
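The opcode tags listed in the comments above are enough to rebuild the second sequence from the first; a small sketch:

import difflib

a, b = "qabxcd", "abycdf"
sm = difflib.SequenceMatcher(None, a, b)

# Replay the opcodes: 'equal' copies from a, 'replace' and 'insert' take the
# new material from b, and 'delete' contributes nothing.
rebuilt = []
for tag, i1, i2, j1, j2 in sm.get_opcodes():
    if tag == "equal":
        rebuilt.append(a[i1:i2])
    elif tag in ("replace", "insert"):
        rebuilt.append(b[j1:j2])
assert "".join(rebuilt) == b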
UNIX(tm) diff does so# strip, so ends up claiming that ab is changed to acab by# inserting "ca" in the middle. That's minimal but unintuitive:# "it's obvious" that someone inserted "ac" at the front.# Windiff ends up at the same place as diff, but by pairing up# the unique 'b's and then matching the first two 'a's.# find longest junk-free match# during an iteration of the loop, j2len[j] = length of longest# junk-free match ending with a[i-1] and b[j]# look at all instances of a[i] in b; note that because# b2j has no junk keys, the loop is skipped if a[i] is junk# a[i] matches b[j]# Extend the best by non-junk elements on each end. In particular,# "popular" non-junk elements aren't in b2j, which greatly speeds# the inner loop above, but also means "the best" match so far# doesn't contain any junk *or* popular non-junk elements.# Now that we have a wholly interesting match (albeit possibly# empty!), we may as well suck up the matching junk on each# side of it too. Can't think of a good reason not to, and it# saves post-processing the (possibly considerable) expense of# figuring out what to do with it. In the case of an empty# interesting match, this is clearly the right thing to do,# because no other kind of match is possible in the regions.# This is most naturally expressed as a recursive algorithm, but# at least one user bumped into extreme use cases that exceeded# the recursion limit on their box. So, now we maintain a list# ('queue`) of blocks we still need to look at, and append partial# results to `matching_blocks` in a loop; the matches are sorted# at the end.# a[alo:i] vs b[blo:j] unknown# a[i:i+k] same as b[j:j+k]# a[i+k:ahi] vs b[j+k:bhi] unknown# if k is 0, there was no matching block# It's possible that we have adjacent equal blocks in the# matching_blocks list now. Starting with 2.5, this code was added# to collapse them.# Is this block adjacent to i1, j1, k1?# Yes, so collapse them -- this just increases the length of# the first block by the length of the second, and the first# block so lengthened remains the block to compare against.# Not adjacent. Remember the first block (k1==0 means it's# the dummy we started with), and make the second block the# new block to compare against.# invariant: we've pumped out correct diffs to change# a[:i] into b[:j], and the next matching block is# a[ai:ai+size] == b[bj:bj+size]. So we need to pump# out a diff to change a[i:ai] into b[j:bj], pump out# the matching block, and move (i,j) beyond the match# the list of matching blocks is terminated by a# sentinel with size 0# Fixup leading and trailing groups if they show no changes.# End the current group and start a new one whenever# there is a large range with no changes.# viewing a and b as multisets, set matches to the cardinality# of their intersection; this counts the number of matches# without regard to order, so is clearly an upper bound# avail[x] is the number of times x appears in 'b' less the# number of times we've seen it in 'a' so far ... 
kinda# can't have more matches than the number of elements in the# shorter sequence# Move the best scorers to head of list# Strip scores for the best n matches# dump the shorter block first -- reduces the burden on short-term# memory if the blocks are of very different sizes# don't synch up unless the lines have a similarity score of at# least cutoff; best_ratio tracks the best score seen so far# 1st indices of equal lines (if any)# search for the pair that matches best without being identical# (identical lines must be junk lines, & we don't want to synch up# on junk -- unless we have to)# computing similarity is expensive, so use the quick# upper bounds first -- have seen this speed up messy# compares by a factor of 3.# note that ratio() is only expensive to compute the first# time it's called on a sequence pair; the expensive part# of the computation is cached by cruncher# no non-identical "pretty close" pair# no identical pair either -- treat it as a straight replace# no close pair, but an identical pair -- synch up on that# there's a close pair, so forget the identical pair (if any)# a[best_i] very similar to b[best_j]; eqi is None iff they're not# identical# pump out diffs from before the synch point# do intraline marking on the synch pair# pump out a '-', '?', '+', '?' quad for the synched lines# the synch pair is identical# pump out diffs from after the synch point# With respect to junk, an earlier version of ndiff simply refused to# *start* a match with a junk element. The result was cases like this:# before: private Thread currentThread;# after: private volatile Thread currentThread;# If you consider whitespace to be junk, the longest contiguous match# not starting with junk is "e Thread currentThread". So ndiff reported# that "e volatil" was inserted between the 't' and the 'e' in "private".# While an accurate view, to people that's absurd. The current version# looks for matching blocks that are entirely junk-free, then extends the# longest one of those as far as possible but only with matching junk.# So now "currentThread" is matched, then extended to suck up the# preceding blank; then "private" is matched, and extended to suck up the# following blank; then "Thread" is matched; and finally ndiff reports# that "volatile " was inserted before "Thread". The only quibble# remaining is that perhaps it was really the case that " volatile"# was inserted after "private". I can live with that .### Unified Diff# Per the diff spec at http://www.unix.org/single_unix_specification/# lines start numbering with one# empty ranges begin at line just before the range### Context Diff# See http://www.unix.org/single_unix_specification/# Checking types is weird, but the alternative is garbled output when# someone passes mixed bytes and str to {unified,context}_diff(). 
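A sketch of the cheap-bounds-first pattern the comments above describe (the same cascade get_close_matches() runs for each candidate); the word and candidate list are just examples.

import difflib

word = "appel"
candidates = ["ape", "apple", "peach", "puppy"]
cutoff = 0.6

sm = difflib.SequenceMatcher()
sm.set_seq2(word)                     # cache the expensive analysis of `word`
scored = []
for cand in candidates:
    sm.set_seq1(cand)
    # real_quick_ratio() and quick_ratio() are upper bounds on ratio(), so a
    # candidate that fails either of them can be skipped without paying for
    # the full similarity computation.
    if (sm.real_quick_ratio() >= cutoff
            and sm.quick_ratio() >= cutoff
            and sm.ratio() >= cutoff):
        scored.append((sm.ratio(), cand))
print([cand for _, cand in sorted(scored, reverse=True)])  # ['apple', 'ape']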
E.g.# without this check, passing filenames as bytes results in output like# --- b'oldfile.txt'# +++ b'newfile.txt'# because of how str.format() incorporates bytes objects.# regular expression for finding intraline change indices# create the difference iterator to generate the differences# Handle case where no user markup is to be added, just return line of# text with user's line format to allow for usage of the line number.# Handle case of intraline changes# find intraline changes (store change type and indices in tuples)# process each tuple inserting our special marks that won't be# noticed by an xml/html escaper.# Handle case of add/delete entire line# if line of text is just a newline, insert a space so there is# something for the user to highlight and see.# insert marks that won't be noticed by an xml/html escaper.# Return line of text, first allow user's line formatter to do its# thing (such as adding the line number) then replace the special# marks with what the user's change markup.# Load up next 4 lines so we can look ahead, create strings which# are a concatenation of the first character of each of the 4 lines# so we can do some very readable comparisons.# When no more lines, pump out any remaining blank lines so the# corresponding add/delete lines get a matching blank line so# all line pairs get yielded at the next level.# simple intraline change# in delete block, add block coming: we do NOT want to get# caught up on blank lines yet, just process the delete line# in delete block and see an intraline change or unchanged line# coming: yield the delete line and then blanks# intraline change# delete FROM line# in add block, delete block coming: we do NOT want to get# caught up on blank lines yet, just process the add line# will be leaving an add block: yield blanks then add line# inside an add block, yield the add line# unchanged text, yield it to both sides# Catch up on the blank lines so when we yield the next from/to# pair, they are lined up.# Collecting lines of text until we have a from/to pair# Once we have a pair, remove them from the collection and yield it# Handle case where user does not want context differencing, just yield# them up without doing anything else with them.# Handle case where user wants context differencing. 
We must do some# storage of lines until we know for sure that they are to be yielded.# Store lines up until we find a difference, note use of a# circular queue because we only need to keep around what# we need for context.# Yield lines that we have collected so far, but first yield# the user's separator.# Now yield the context lines after the change# If another change within the context, extend the context# Catch exception from next() and return normally# hide real spaces# expand tabs into spaces# replace spaces from expanded tabs back into tab characters# (we'll replace them with markup after we do differencing)# if blank line or context separator, just add it to the output list# if line text doesn't need wrapping, just add it to the output list# scan text looking for the wrap point, keeping track if the wrap# point is inside markers# wrap point is inside text, break it up into separate lines# if wrap point is inside markers, place end marker at end of first# line and start marker at beginning of second line because each# line will have its own table tag markup around it.# tack on first line onto the output list# use this routine again to wrap the remaining text# pull from/to data and flags from mdiff iterator# check for context separators and pass them through# for each from/to line split it at the wrap column to form# list of text lines.# yield from/to line in pairs inserting blank lines as# necessary when one side has more wrapped lines# pull from/to data and flags from mdiff style iterator# store HTML markup of the lines into the lists# exceptions occur for lines where context separators go# handle blank lines where linenum is '>' or ''# replace those things that would get confused with HTML symbols# make space non-breakable so they don't get compressed or line wrapped# Generate a unique anchor prefix so multiple tables# can exist on the same HTML page without conflicts.# store prefixes so line format method has access# all anchor names will be generated using the unique "to" prefix# process change flags, generating middle column of next anchors/links# at the beginning of a change, drop an anchor a few lines# (the context lines) before the change for the previous# link# at the beginning of a change, drop a link to the next# change# check for cases where there is no content to avoid exceptions# if not a change on first line, drop a link# redo the last link to link to the top# make unique anchor prefixes so that multiple tables may exist# on the same page without conflict.# change tabs to spaces before it gets more difficult after we insert# markup# create diffs iterator which generates side by side from/to data# set up iterator to wrap lines that exceed desired width# collect up from/to lines and flags into lists (also format the lines)# mdiff yields None on separator lines skip the bogus ones# generated for the first lineb' +Module difflib -- helpers for computing deltas between objects. + +Function get_close_matches(word, possibilities, n=3, cutoff=0.6): + Use SequenceMatcher to return list of the best "good enough" matches. + +Function context_diff(a, b): + For two lists of strings, return a delta in context diff format. + +Function ndiff(a, b): + Return a delta: the difference between `a` and `b` (lists of strings). + +Function restore(delta, which): + Return one of the two sequences that generated an ndiff delta. + +Function unified_diff(a, b): + For two lists of strings, return a delta in unified diff format. 
+ +Class SequenceMatcher: + A flexible class for comparing pairs of sequences of any type. + +Class Differ: + For producing human-readable deltas from sequences of lines of text. + +Class HtmlDiff: + For producing HTML side by side comparison with change highlights. +'u' +Module difflib -- helpers for computing deltas between objects. + +Function get_close_matches(word, possibilities, n=3, cutoff=0.6): + Use SequenceMatcher to return list of the best "good enough" matches. + +Function context_diff(a, b): + For two lists of strings, return a delta in context diff format. + +Function ndiff(a, b): + Return a delta: the difference between `a` and `b` (lists of strings). + +Function restore(delta, which): + Return one of the two sequences that generated an ndiff delta. + +Function unified_diff(a, b): + For two lists of strings, return a delta in unified diff format. + +Class SequenceMatcher: + A flexible class for comparing pairs of sequences of any type. + +Class Differ: + For producing human-readable deltas from sequences of lines of text. + +Class HtmlDiff: + For producing HTML side by side comparison with change highlights. +'b'get_close_matches'u'get_close_matches'b'ndiff'u'ndiff'b'restore'u'restore'b'SequenceMatcher'u'SequenceMatcher'b'Differ'u'Differ'b'IS_CHARACTER_JUNK'u'IS_CHARACTER_JUNK'b'IS_LINE_JUNK'u'IS_LINE_JUNK'b'context_diff'u'context_diff'b'unified_diff'u'unified_diff'b'diff_bytes'u'diff_bytes'b'HtmlDiff'u'HtmlDiff'b'Match'u'Match'b'a b size'u'a b size'b' + SequenceMatcher is a flexible class for comparing pairs of sequences of + any type, so long as the sequence elements are hashable. The basic + algorithm predates, and is a little fancier than, an algorithm + published in the late 1980's by Ratcliff and Obershelp under the + hyperbolic name "gestalt pattern matching". The basic idea is to find + the longest contiguous matching subsequence that contains no "junk" + elements (R-O doesn't address junk). The same idea is then applied + recursively to the pieces of the sequences to the left and to the right + of the matching subsequence. This does not yield minimal edit + sequences, but does tend to yield matches that "look right" to people. + + SequenceMatcher tries to compute a "human-friendly diff" between two + sequences. Unlike e.g. UNIX(tm) diff, the fundamental notion is the + longest *contiguous* & junk-free matching subsequence. That's what + catches peoples' eyes. The Windows(tm) windiff has another interesting + notion, pairing up elements that appear uniquely in each sequence. + That, and the method here, appear to yield more intuitive difference + reports than does diff. This method appears to be the least vulnerable + to synching up on blocks of "junk lines", though (like blank lines in + ordinary text files, or maybe "
<P>
" lines in HTML files). That may be + because this is the only method of the 3 that has a *concept* of + "junk" . + + Example, comparing two strings, and considering blanks to be "junk": + + >>> s = SequenceMatcher(lambda x: x == " ", + ... "private Thread currentThread;", + ... "private volatile Thread currentThread;") + >>> + + .ratio() returns a float in [0, 1], measuring the "similarity" of the + sequences. As a rule of thumb, a .ratio() value over 0.6 means the + sequences are close matches: + + >>> print(round(s.ratio(), 3)) + 0.866 + >>> + + If you're only interested in where the sequences match, + .get_matching_blocks() is handy: + + >>> for block in s.get_matching_blocks(): + ... print("a[%d] and b[%d] match for %d elements" % block) + a[0] and b[0] match for 8 elements + a[8] and b[17] match for 21 elements + a[29] and b[38] match for 0 elements + + Note that the last tuple returned by .get_matching_blocks() is always a + dummy, (len(a), len(b), 0), and this is the only case in which the last + tuple element (number of elements matched) is 0. + + If you want to know how to change the first sequence into the second, + use .get_opcodes(): + + >>> for opcode in s.get_opcodes(): + ... print("%6s a[%d:%d] b[%d:%d]" % opcode) + equal a[0:8] b[0:8] + insert a[8:8] b[8:17] + equal a[8:29] b[17:38] + + See the Differ class for a fancy human-friendly file differencer, which + uses SequenceMatcher both to compare sequences of lines, and to compare + sequences of characters within similar (near-matching) lines. + + See also function get_close_matches() in this module, which shows how + simple code building on SequenceMatcher can be used to do useful work. + + Timing: Basic R-O is cubic time worst case and quadratic time expected + case. SequenceMatcher is quadratic time for the worst case and has + expected-case behavior dependent in a complicated way on how many + elements the sequences have in common; best case time is linear. + + Methods: + + __init__(isjunk=None, a='', b='') + Construct a SequenceMatcher. + + set_seqs(a, b) + Set the two sequences to be compared. + + set_seq1(a) + Set the first sequence to be compared. + + set_seq2(b) + Set the second sequence to be compared. + + find_longest_match(alo, ahi, blo, bhi) + Find longest matching block in a[alo:ahi] and b[blo:bhi]. + + get_matching_blocks() + Return list of triples describing matching subsequences. + + get_opcodes() + Return list of 5-tuples describing how to turn a into b. + + ratio() + Return a measure of the sequences' similarity (float in [0,1]). + + quick_ratio() + Return an upper bound on .ratio() relatively quickly. + + real_quick_ratio() + Return an upper bound on ratio() very quickly. + 'u' + SequenceMatcher is a flexible class for comparing pairs of sequences of + any type, so long as the sequence elements are hashable. The basic + algorithm predates, and is a little fancier than, an algorithm + published in the late 1980's by Ratcliff and Obershelp under the + hyperbolic name "gestalt pattern matching". The basic idea is to find + the longest contiguous matching subsequence that contains no "junk" + elements (R-O doesn't address junk). The same idea is then applied + recursively to the pieces of the sequences to the left and to the right + of the matching subsequence. This does not yield minimal edit + sequences, but does tend to yield matches that "look right" to people. + + SequenceMatcher tries to compute a "human-friendly diff" between two + sequences. Unlike e.g. 
UNIX(tm) diff, the fundamental notion is the + longest *contiguous* & junk-free matching subsequence. That's what + catches peoples' eyes. The Windows(tm) windiff has another interesting + notion, pairing up elements that appear uniquely in each sequence. + That, and the method here, appear to yield more intuitive difference + reports than does diff. This method appears to be the least vulnerable + to synching up on blocks of "junk lines", though (like blank lines in + ordinary text files, or maybe "
<P>
" lines in HTML files). That may be + because this is the only method of the 3 that has a *concept* of + "junk" . + + Example, comparing two strings, and considering blanks to be "junk": + + >>> s = SequenceMatcher(lambda x: x == " ", + ... "private Thread currentThread;", + ... "private volatile Thread currentThread;") + >>> + + .ratio() returns a float in [0, 1], measuring the "similarity" of the + sequences. As a rule of thumb, a .ratio() value over 0.6 means the + sequences are close matches: + + >>> print(round(s.ratio(), 3)) + 0.866 + >>> + + If you're only interested in where the sequences match, + .get_matching_blocks() is handy: + + >>> for block in s.get_matching_blocks(): + ... print("a[%d] and b[%d] match for %d elements" % block) + a[0] and b[0] match for 8 elements + a[8] and b[17] match for 21 elements + a[29] and b[38] match for 0 elements + + Note that the last tuple returned by .get_matching_blocks() is always a + dummy, (len(a), len(b), 0), and this is the only case in which the last + tuple element (number of elements matched) is 0. + + If you want to know how to change the first sequence into the second, + use .get_opcodes(): + + >>> for opcode in s.get_opcodes(): + ... print("%6s a[%d:%d] b[%d:%d]" % opcode) + equal a[0:8] b[0:8] + insert a[8:8] b[8:17] + equal a[8:29] b[17:38] + + See the Differ class for a fancy human-friendly file differencer, which + uses SequenceMatcher both to compare sequences of lines, and to compare + sequences of characters within similar (near-matching) lines. + + See also function get_close_matches() in this module, which shows how + simple code building on SequenceMatcher can be used to do useful work. + + Timing: Basic R-O is cubic time worst case and quadratic time expected + case. SequenceMatcher is quadratic time for the worst case and has + expected-case behavior dependent in a complicated way on how many + elements the sequences have in common; best case time is linear. + + Methods: + + __init__(isjunk=None, a='', b='') + Construct a SequenceMatcher. + + set_seqs(a, b) + Set the two sequences to be compared. + + set_seq1(a) + Set the first sequence to be compared. + + set_seq2(b) + Set the second sequence to be compared. + + find_longest_match(alo, ahi, blo, bhi) + Find longest matching block in a[alo:ahi] and b[blo:bhi]. + + get_matching_blocks() + Return list of triples describing matching subsequences. + + get_opcodes() + Return list of 5-tuples describing how to turn a into b. + + ratio() + Return a measure of the sequences' similarity (float in [0,1]). + + quick_ratio() + Return an upper bound on .ratio() relatively quickly. + + real_quick_ratio() + Return an upper bound on ratio() very quickly. + 'b'Construct a SequenceMatcher. + + Optional arg isjunk is None (the default), or a one-argument + function that takes a sequence element and returns true iff the + element is junk. None is equivalent to passing "lambda x: 0", i.e. + no elements are considered to be junk. For example, pass + lambda x: x in " \t" + if you're comparing lines as sequences of characters, and don't + want to synch up on blanks or hard tabs. + + Optional arg a is the first of two sequences to be compared. By + default, an empty string. The elements of a must be hashable. See + also .set_seqs() and .set_seq1(). + + Optional arg b is the second of two sequences to be compared. By + default, an empty string. The elements of b must be hashable. See + also .set_seqs() and .set_seq2(). 
+ + Optional arg autojunk should be set to False to disable the + "automatic junk heuristic" that treats popular elements as junk + (see module documentation for more information). + 'u'Construct a SequenceMatcher. + + Optional arg isjunk is None (the default), or a one-argument + function that takes a sequence element and returns true iff the + element is junk. None is equivalent to passing "lambda x: 0", i.e. + no elements are considered to be junk. For example, pass + lambda x: x in " \t" + if you're comparing lines as sequences of characters, and don't + want to synch up on blanks or hard tabs. + + Optional arg a is the first of two sequences to be compared. By + default, an empty string. The elements of a must be hashable. See + also .set_seqs() and .set_seq1(). + + Optional arg b is the second of two sequences to be compared. By + default, an empty string. The elements of b must be hashable. See + also .set_seqs() and .set_seq2(). + + Optional arg autojunk should be set to False to disable the + "automatic junk heuristic" that treats popular elements as junk + (see module documentation for more information). + 'b'Set the two sequences to be compared. + + >>> s = SequenceMatcher() + >>> s.set_seqs("abcd", "bcde") + >>> s.ratio() + 0.75 + 'u'Set the two sequences to be compared. + + >>> s = SequenceMatcher() + >>> s.set_seqs("abcd", "bcde") + >>> s.ratio() + 0.75 + 'b'Set the first sequence to be compared. + + The second sequence to be compared is not changed. + + >>> s = SequenceMatcher(None, "abcd", "bcde") + >>> s.ratio() + 0.75 + >>> s.set_seq1("bcde") + >>> s.ratio() + 1.0 + >>> + + SequenceMatcher computes and caches detailed information about the + second sequence, so if you want to compare one sequence S against + many sequences, use .set_seq2(S) once and call .set_seq1(x) + repeatedly for each of the other sequences. + + See also set_seqs() and set_seq2(). + 'u'Set the first sequence to be compared. + + The second sequence to be compared is not changed. + + >>> s = SequenceMatcher(None, "abcd", "bcde") + >>> s.ratio() + 0.75 + >>> s.set_seq1("bcde") + >>> s.ratio() + 1.0 + >>> + + SequenceMatcher computes and caches detailed information about the + second sequence, so if you want to compare one sequence S against + many sequences, use .set_seq2(S) once and call .set_seq1(x) + repeatedly for each of the other sequences. + + See also set_seqs() and set_seq2(). + 'b'Set the second sequence to be compared. + + The first sequence to be compared is not changed. + + >>> s = SequenceMatcher(None, "abcd", "bcde") + >>> s.ratio() + 0.75 + >>> s.set_seq2("abcd") + >>> s.ratio() + 1.0 + >>> + + SequenceMatcher computes and caches detailed information about the + second sequence, so if you want to compare one sequence S against + many sequences, use .set_seq2(S) once and call .set_seq1(x) + repeatedly for each of the other sequences. + + See also set_seqs() and set_seq1(). + 'u'Set the second sequence to be compared. + + The first sequence to be compared is not changed. + + >>> s = SequenceMatcher(None, "abcd", "bcde") + >>> s.ratio() + 0.75 + >>> s.set_seq2("abcd") + >>> s.ratio() + 1.0 + >>> + + SequenceMatcher computes and caches detailed information about the + second sequence, so if you want to compare one sequence S against + many sequences, use .set_seq2(S) once and call .set_seq1(x) + repeatedly for each of the other sequences. + + See also set_seqs() and set_seq1(). + 'b'Find longest matching block in a[alo:ahi] and b[blo:bhi]. 
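The autojunk heuristic described above has no example in the docstrings; the following hedged sketch shows how it can change scores on highly repetitive input.

import difflib

# Highly repetitive input: 300 identical lines plus one differing line.
a = ["x\n"] * 300 + ["old\n"]
b = ["x\n"] * 300 + ["new\n"]

default = difflib.SequenceMatcher(None, a, b).ratio()
no_heuristic = difflib.SequenceMatcher(None, a, b, autojunk=False).ratio()

# With the heuristic on, the popular "x\n" lines are treated as semi-junk, so
# the reported similarity can drop sharply compared to autojunk=False.
print(default, no_heuristic)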
+ + If isjunk is not defined: + + Return (i,j,k) such that a[i:i+k] is equal to b[j:j+k], where + alo <= i <= i+k <= ahi + blo <= j <= j+k <= bhi + and for all (i',j',k') meeting those conditions, + k >= k' + i <= i' + and if i == i', j <= j' + + In other words, of all maximal matching blocks, return one that + starts earliest in a, and of all those maximal matching blocks that + start earliest in a, return the one that starts earliest in b. + + >>> s = SequenceMatcher(None, " abcd", "abcd abcd") + >>> s.find_longest_match(0, 5, 0, 9) + Match(a=0, b=4, size=5) + + If isjunk is defined, first the longest matching block is + determined as above, but with the additional restriction that no + junk element appears in the block. Then that block is extended as + far as possible by matching (only) junk elements on both sides. So + the resulting block never matches on junk except as identical junk + happens to be adjacent to an "interesting" match. + + Here's the same example as before, but considering blanks to be + junk. That prevents " abcd" from matching the " abcd" at the tail + end of the second sequence directly. Instead only the "abcd" can + match, and matches the leftmost "abcd" in the second sequence: + + >>> s = SequenceMatcher(lambda x: x==" ", " abcd", "abcd abcd") + >>> s.find_longest_match(0, 5, 0, 9) + Match(a=1, b=0, size=4) + + If no blocks match, return (alo, blo, 0). + + >>> s = SequenceMatcher(None, "ab", "c") + >>> s.find_longest_match(0, 2, 0, 1) + Match(a=0, b=0, size=0) + 'u'Find longest matching block in a[alo:ahi] and b[blo:bhi]. + + If isjunk is not defined: + + Return (i,j,k) such that a[i:i+k] is equal to b[j:j+k], where + alo <= i <= i+k <= ahi + blo <= j <= j+k <= bhi + and for all (i',j',k') meeting those conditions, + k >= k' + i <= i' + and if i == i', j <= j' + + In other words, of all maximal matching blocks, return one that + starts earliest in a, and of all those maximal matching blocks that + start earliest in a, return the one that starts earliest in b. + + >>> s = SequenceMatcher(None, " abcd", "abcd abcd") + >>> s.find_longest_match(0, 5, 0, 9) + Match(a=0, b=4, size=5) + + If isjunk is defined, first the longest matching block is + determined as above, but with the additional restriction that no + junk element appears in the block. Then that block is extended as + far as possible by matching (only) junk elements on both sides. So + the resulting block never matches on junk except as identical junk + happens to be adjacent to an "interesting" match. + + Here's the same example as before, but considering blanks to be + junk. That prevents " abcd" from matching the " abcd" at the tail + end of the second sequence directly. Instead only the "abcd" can + match, and matches the leftmost "abcd" in the second sequence: + + >>> s = SequenceMatcher(lambda x: x==" ", " abcd", "abcd abcd") + >>> s.find_longest_match(0, 5, 0, 9) + Match(a=1, b=0, size=4) + + If no blocks match, return (alo, blo, 0). + + >>> s = SequenceMatcher(None, "ab", "c") + >>> s.find_longest_match(0, 2, 0, 1) + Match(a=0, b=0, size=0) + 'b'Return list of triples describing matching subsequences. + + Each triple is of the form (i, j, n), and means that + a[i:i+n] == b[j:j+n]. The triples are monotonically increasing in + i and in j. New in Python 2.5, it's also guaranteed that if + (i, j, n) and (i', j', n') are adjacent triples in the list, and + the second is not the last triple in the list, then i+n != i' or + j+n != j'. IOW, adjacent triples never describe adjacent equal + blocks. 
+ + The last triple is a dummy, (len(a), len(b), 0), and is the only + triple with n==0. + + >>> s = SequenceMatcher(None, "abxcd", "abcd") + >>> list(s.get_matching_blocks()) + [Match(a=0, b=0, size=2), Match(a=3, b=2, size=2), Match(a=5, b=4, size=0)] + 'u'Return list of triples describing matching subsequences. + + Each triple is of the form (i, j, n), and means that + a[i:i+n] == b[j:j+n]. The triples are monotonically increasing in + i and in j. New in Python 2.5, it's also guaranteed that if + (i, j, n) and (i', j', n') are adjacent triples in the list, and + the second is not the last triple in the list, then i+n != i' or + j+n != j'. IOW, adjacent triples never describe adjacent equal + blocks. + + The last triple is a dummy, (len(a), len(b), 0), and is the only + triple with n==0. + + >>> s = SequenceMatcher(None, "abxcd", "abcd") + >>> list(s.get_matching_blocks()) + [Match(a=0, b=0, size=2), Match(a=3, b=2, size=2), Match(a=5, b=4, size=0)] + 'b'Return list of 5-tuples describing how to turn a into b. + + Each tuple is of the form (tag, i1, i2, j1, j2). The first tuple + has i1 == j1 == 0, and remaining tuples have i1 == the i2 from the + tuple preceding it, and likewise for j1 == the previous j2. + + The tags are strings, with these meanings: + + 'replace': a[i1:i2] should be replaced by b[j1:j2] + 'delete': a[i1:i2] should be deleted. + Note that j1==j2 in this case. + 'insert': b[j1:j2] should be inserted at a[i1:i1]. + Note that i1==i2 in this case. + 'equal': a[i1:i2] == b[j1:j2] + + >>> a = "qabxcd" + >>> b = "abycdf" + >>> s = SequenceMatcher(None, a, b) + >>> for tag, i1, i2, j1, j2 in s.get_opcodes(): + ... print(("%7s a[%d:%d] (%s) b[%d:%d] (%s)" % + ... (tag, i1, i2, a[i1:i2], j1, j2, b[j1:j2]))) + delete a[0:1] (q) b[0:0] () + equal a[1:3] (ab) b[0:2] (ab) + replace a[3:4] (x) b[2:3] (y) + equal a[4:6] (cd) b[3:5] (cd) + insert a[6:6] () b[5:6] (f) + 'u'Return list of 5-tuples describing how to turn a into b. + + Each tuple is of the form (tag, i1, i2, j1, j2). The first tuple + has i1 == j1 == 0, and remaining tuples have i1 == the i2 from the + tuple preceding it, and likewise for j1 == the previous j2. + + The tags are strings, with these meanings: + + 'replace': a[i1:i2] should be replaced by b[j1:j2] + 'delete': a[i1:i2] should be deleted. + Note that j1==j2 in this case. + 'insert': b[j1:j2] should be inserted at a[i1:i1]. + Note that i1==i2 in this case. + 'equal': a[i1:i2] == b[j1:j2] + + >>> a = "qabxcd" + >>> b = "abycdf" + >>> s = SequenceMatcher(None, a, b) + >>> for tag, i1, i2, j1, j2 in s.get_opcodes(): + ... print(("%7s a[%d:%d] (%s) b[%d:%d] (%s)" % + ... (tag, i1, i2, a[i1:i2], j1, j2, b[j1:j2]))) + delete a[0:1] (q) b[0:0] () + equal a[1:3] (ab) b[0:2] (ab) + replace a[3:4] (x) b[2:3] (y) + equal a[4:6] (cd) b[3:5] (cd) + insert a[6:6] () b[5:6] (f) + 'b'equal'u'equal'b' Isolate change clusters by eliminating ranges with no changes. + + Return a generator of groups with up to n lines of context. + Each group is in the same format as returned by get_opcodes(). 
+ + >>> from pprint import pprint + >>> a = list(map(str, range(1,40))) + >>> b = a[:] + >>> b[8:8] = ['i'] # Make an insertion + >>> b[20] += 'x' # Make a replacement + >>> b[23:28] = [] # Make a deletion + >>> b[30] += 'y' # Make another replacement + >>> pprint(list(SequenceMatcher(None,a,b).get_grouped_opcodes())) + [[('equal', 5, 8, 5, 8), ('insert', 8, 8, 8, 9), ('equal', 8, 11, 9, 12)], + [('equal', 16, 19, 17, 20), + ('replace', 19, 20, 20, 21), + ('equal', 20, 22, 21, 23), + ('delete', 22, 27, 23, 23), + ('equal', 27, 30, 23, 26)], + [('equal', 31, 34, 27, 30), + ('replace', 34, 35, 30, 31), + ('equal', 35, 38, 31, 34)]] + 'u' Isolate change clusters by eliminating ranges with no changes. + + Return a generator of groups with up to n lines of context. + Each group is in the same format as returned by get_opcodes(). + + >>> from pprint import pprint + >>> a = list(map(str, range(1,40))) + >>> b = a[:] + >>> b[8:8] = ['i'] # Make an insertion + >>> b[20] += 'x' # Make a replacement + >>> b[23:28] = [] # Make a deletion + >>> b[30] += 'y' # Make another replacement + >>> pprint(list(SequenceMatcher(None,a,b).get_grouped_opcodes())) + [[('equal', 5, 8, 5, 8), ('insert', 8, 8, 8, 9), ('equal', 8, 11, 9, 12)], + [('equal', 16, 19, 17, 20), + ('replace', 19, 20, 20, 21), + ('equal', 20, 22, 21, 23), + ('delete', 22, 27, 23, 23), + ('equal', 27, 30, 23, 26)], + [('equal', 31, 34, 27, 30), + ('replace', 34, 35, 30, 31), + ('equal', 35, 38, 31, 34)]] + 'b'Return a measure of the sequences' similarity (float in [0,1]). + + Where T is the total number of elements in both sequences, and + M is the number of matches, this is 2.0*M / T. + Note that this is 1 if the sequences are identical, and 0 if + they have nothing in common. + + .ratio() is expensive to compute if you haven't already computed + .get_matching_blocks() or .get_opcodes(), in which case you may + want to try .quick_ratio() or .real_quick_ratio() first to get an + upper bound. + + >>> s = SequenceMatcher(None, "abcd", "bcde") + >>> s.ratio() + 0.75 + >>> s.quick_ratio() + 0.75 + >>> s.real_quick_ratio() + 1.0 + 'u'Return a measure of the sequences' similarity (float in [0,1]). + + Where T is the total number of elements in both sequences, and + M is the number of matches, this is 2.0*M / T. + Note that this is 1 if the sequences are identical, and 0 if + they have nothing in common. + + .ratio() is expensive to compute if you haven't already computed + .get_matching_blocks() or .get_opcodes(), in which case you may + want to try .quick_ratio() or .real_quick_ratio() first to get an + upper bound. + + >>> s = SequenceMatcher(None, "abcd", "bcde") + >>> s.ratio() + 0.75 + >>> s.quick_ratio() + 0.75 + >>> s.real_quick_ratio() + 1.0 + 'b'Return an upper bound on ratio() relatively quickly. + + This isn't defined beyond that it is an upper bound on .ratio(), and + is faster to compute. + 'u'Return an upper bound on ratio() relatively quickly. + + This isn't defined beyond that it is an upper bound on .ratio(), and + is faster to compute. + 'b'Return an upper bound on ratio() very quickly. + + This isn't defined beyond that it is an upper bound on .ratio(), and + is faster to compute than either .ratio() or .quick_ratio(). + 'u'Return an upper bound on ratio() very quickly. + + This isn't defined beyond that it is an upper bound on .ratio(), and + is faster to compute than either .ratio() or .quick_ratio(). + 'b'Use SequenceMatcher to return list of the best "good enough" matches. 
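The upper-bound promise in the quick_ratio() and real_quick_ratio() docs above can be sanity-checked with collections.Counter, which reproduces the order-free match count quick_ratio() is based on.

from collections import Counter
import difflib

a, b = "abcd", "bcde"
sm = difflib.SequenceMatcher(None, a, b)

# quick_ratio() counts matches without regard to order, i.e. the size of the
# multiset intersection of the two sequences.
matches = sum((Counter(a) & Counter(b)).values())
assert 2.0 * matches / (len(a) + len(b)) == sm.quick_ratio() == 0.75

# Both shortcuts are upper bounds on the full ratio().
assert sm.ratio() <= sm.quick_ratio() <= sm.real_quick_ratio()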
+ + word is a sequence for which close matches are desired (typically a + string). + + possibilities is a list of sequences against which to match word + (typically a list of strings). + + Optional arg n (default 3) is the maximum number of close matches to + return. n must be > 0. + + Optional arg cutoff (default 0.6) is a float in [0, 1]. Possibilities + that don't score at least that similar to word are ignored. + + The best (no more than n) matches among the possibilities are returned + in a list, sorted by similarity score, most similar first. + + >>> get_close_matches("appel", ["ape", "apple", "peach", "puppy"]) + ['apple', 'ape'] + >>> import keyword as _keyword + >>> get_close_matches("wheel", _keyword.kwlist) + ['while'] + >>> get_close_matches("Apple", _keyword.kwlist) + [] + >>> get_close_matches("accept", _keyword.kwlist) + ['except'] + 'u'Use SequenceMatcher to return list of the best "good enough" matches. + + word is a sequence for which close matches are desired (typically a + string). + + possibilities is a list of sequences against which to match word + (typically a list of strings). + + Optional arg n (default 3) is the maximum number of close matches to + return. n must be > 0. + + Optional arg cutoff (default 0.6) is a float in [0, 1]. Possibilities + that don't score at least that similar to word are ignored. + + The best (no more than n) matches among the possibilities are returned + in a list, sorted by similarity score, most similar first. + + >>> get_close_matches("appel", ["ape", "apple", "peach", "puppy"]) + ['apple', 'ape'] + >>> import keyword as _keyword + >>> get_close_matches("wheel", _keyword.kwlist) + ['while'] + >>> get_close_matches("Apple", _keyword.kwlist) + [] + >>> get_close_matches("accept", _keyword.kwlist) + ['except'] + 'b'n must be > 0: %r'u'n must be > 0: %r'b'cutoff must be in [0.0, 1.0]: %r'u'cutoff must be in [0.0, 1.0]: %r'b'Replace whitespace with the original whitespace characters in `s`'u'Replace whitespace with the original whitespace characters in `s`'b' + Differ is a class for comparing sequences of lines of text, and + producing human-readable differences or deltas. Differ uses + SequenceMatcher both to compare sequences of lines, and to compare + sequences of characters within similar (near-matching) lines. + + Each line of a Differ delta begins with a two-letter code: + + '- ' line unique to sequence 1 + '+ ' line unique to sequence 2 + ' ' line common to both sequences + '? ' line not present in either input sequence + + Lines beginning with '? ' attempt to guide the eye to intraline + differences, and were not present in either input sequence. These lines + can be confusing if the sequences contain tab characters. + + Note that Differ makes no claim to produce a *minimal* diff. To the + contrary, minimal diffs are often counter-intuitive, because they synch + up anywhere possible, sometimes accidental matches 100 pages apart. + Restricting synch points to contiguous matches preserves some notion of + locality, at the occasional cost of producing a longer diff. + + Example: Comparing two texts. + + First we set up the texts, sequences of individual single-line strings + ending with newlines (such sequences can also be obtained from the + `readlines()` method of file-like objects): + + >>> text1 = ''' 1. Beautiful is better than ugly. + ... 2. Explicit is better than implicit. + ... 3. Simple is better than complex. + ... 4. Complex is better than complicated. + ... 
'''.splitlines(keepends=True) + >>> len(text1) + 4 + >>> text1[0][-1] + '\n' + >>> text2 = ''' 1. Beautiful is better than ugly. + ... 3. Simple is better than complex. + ... 4. Complicated is better than complex. + ... 5. Flat is better than nested. + ... '''.splitlines(keepends=True) + + Next we instantiate a Differ object: + + >>> d = Differ() + + Note that when instantiating a Differ object we may pass functions to + filter out line and character 'junk'. See Differ.__init__ for details. + + Finally, we compare the two: + + >>> result = list(d.compare(text1, text2)) + + 'result' is a list of strings, so let's pretty-print it: + + >>> from pprint import pprint as _pprint + >>> _pprint(result) + [' 1. Beautiful is better than ugly.\n', + '- 2. Explicit is better than implicit.\n', + '- 3. Simple is better than complex.\n', + '+ 3. Simple is better than complex.\n', + '? ++\n', + '- 4. Complex is better than complicated.\n', + '? ^ ---- ^\n', + '+ 4. Complicated is better than complex.\n', + '? ++++ ^ ^\n', + '+ 5. Flat is better than nested.\n'] + + As a single multi-line string it looks like this: + + >>> print(''.join(result), end="") + 1. Beautiful is better than ugly. + - 2. Explicit is better than implicit. + - 3. Simple is better than complex. + + 3. Simple is better than complex. + ? ++ + - 4. Complex is better than complicated. + ? ^ ---- ^ + + 4. Complicated is better than complex. + ? ++++ ^ ^ + + 5. Flat is better than nested. + + Methods: + + __init__(linejunk=None, charjunk=None) + Construct a text differencer, with optional filters. + + compare(a, b) + Compare two sequences of lines; generate the resulting delta. + 'u' + Differ is a class for comparing sequences of lines of text, and + producing human-readable differences or deltas. Differ uses + SequenceMatcher both to compare sequences of lines, and to compare + sequences of characters within similar (near-matching) lines. + + Each line of a Differ delta begins with a two-letter code: + + '- ' line unique to sequence 1 + '+ ' line unique to sequence 2 + ' ' line common to both sequences + '? ' line not present in either input sequence + + Lines beginning with '? ' attempt to guide the eye to intraline + differences, and were not present in either input sequence. These lines + can be confusing if the sequences contain tab characters. + + Note that Differ makes no claim to produce a *minimal* diff. To the + contrary, minimal diffs are often counter-intuitive, because they synch + up anywhere possible, sometimes accidental matches 100 pages apart. + Restricting synch points to contiguous matches preserves some notion of + locality, at the occasional cost of producing a longer diff. + + Example: Comparing two texts. + + First we set up the texts, sequences of individual single-line strings + ending with newlines (such sequences can also be obtained from the + `readlines()` method of file-like objects): + + >>> text1 = ''' 1. Beautiful is better than ugly. + ... 2. Explicit is better than implicit. + ... 3. Simple is better than complex. + ... 4. Complex is better than complicated. + ... '''.splitlines(keepends=True) + >>> len(text1) + 4 + >>> text1[0][-1] + '\n' + >>> text2 = ''' 1. Beautiful is better than ugly. + ... 3. Simple is better than complex. + ... 4. Complicated is better than complex. + ... 5. Flat is better than nested. + ... 
'''.splitlines(keepends=True) + + Next we instantiate a Differ object: + + >>> d = Differ() + + Note that when instantiating a Differ object we may pass functions to + filter out line and character 'junk'. See Differ.__init__ for details. + + Finally, we compare the two: + + >>> result = list(d.compare(text1, text2)) + + 'result' is a list of strings, so let's pretty-print it: + + >>> from pprint import pprint as _pprint + >>> _pprint(result) + [' 1. Beautiful is better than ugly.\n', + '- 2. Explicit is better than implicit.\n', + '- 3. Simple is better than complex.\n', + '+ 3. Simple is better than complex.\n', + '? ++\n', + '- 4. Complex is better than complicated.\n', + '? ^ ---- ^\n', + '+ 4. Complicated is better than complex.\n', + '? ++++ ^ ^\n', + '+ 5. Flat is better than nested.\n'] + + As a single multi-line string it looks like this: + + >>> print(''.join(result), end="") + 1. Beautiful is better than ugly. + - 2. Explicit is better than implicit. + - 3. Simple is better than complex. + + 3. Simple is better than complex. + ? ++ + - 4. Complex is better than complicated. + ? ^ ---- ^ + + 4. Complicated is better than complex. + ? ++++ ^ ^ + + 5. Flat is better than nested. + + Methods: + + __init__(linejunk=None, charjunk=None) + Construct a text differencer, with optional filters. + + compare(a, b) + Compare two sequences of lines; generate the resulting delta. + 'b' + Construct a text differencer, with optional filters. + + The two optional keyword parameters are for filter functions: + + - `linejunk`: A function that should accept a single string argument, + and return true iff the string is junk. The module-level function + `IS_LINE_JUNK` may be used to filter out lines without visible + characters, except for at most one splat ('#'). It is recommended + to leave linejunk None; the underlying SequenceMatcher class has + an adaptive notion of "noise" lines that's better than any static + definition the author has ever been able to craft. + + - `charjunk`: A function that should accept a string of length 1. The + module-level function `IS_CHARACTER_JUNK` may be used to filter out + whitespace characters (a blank or tab; **note**: bad idea to include + newline in this!). Use of IS_CHARACTER_JUNK is recommended. + 'u' + Construct a text differencer, with optional filters. + + The two optional keyword parameters are for filter functions: + + - `linejunk`: A function that should accept a single string argument, + and return true iff the string is junk. The module-level function + `IS_LINE_JUNK` may be used to filter out lines without visible + characters, except for at most one splat ('#'). It is recommended + to leave linejunk None; the underlying SequenceMatcher class has + an adaptive notion of "noise" lines that's better than any static + definition the author has ever been able to craft. + + - `charjunk`: A function that should accept a string of length 1. The + module-level function `IS_CHARACTER_JUNK` may be used to filter out + whitespace characters (a blank or tab; **note**: bad idea to include + newline in this!). Use of IS_CHARACTER_JUNK is recommended. + 'b' + Compare two sequences of lines; generate the resulting delta. + + Each sequence must contain individual single-line strings ending with + newlines. Such sequences can be obtained from the `readlines()` method + of file-like objects. The delta generated also consists of newline- + terminated strings, ready to be printed as-is via the writeline() + method of a file-like object. 
+ + Example: + + >>> print(''.join(Differ().compare('one\ntwo\nthree\n'.splitlines(True), + ... 'ore\ntree\nemu\n'.splitlines(True))), + ... end="") + - one + ? ^ + + ore + ? ^ + - two + - three + ? - + + tree + + emu + 'u' + Compare two sequences of lines; generate the resulting delta. + + Each sequence must contain individual single-line strings ending with + newlines. Such sequences can be obtained from the `readlines()` method + of file-like objects. The delta generated also consists of newline- + terminated strings, ready to be printed as-is via the writeline() + method of a file-like object. + + Example: + + >>> print(''.join(Differ().compare('one\ntwo\nthree\n'.splitlines(True), + ... 'ore\ntree\nemu\n'.splitlines(True))), + ... end="") + - one + ? ^ + + ore + ? ^ + - two + - three + ? - + + tree + + emu + 'b'Generate comparison results for a same-tagged range.'u'Generate comparison results for a same-tagged range.'b' + When replacing one block of lines with another, search the blocks + for *similar* lines; the best-matching pair (if any) is used as a + synch point, and intraline difference marking is done on the + similar pair. Lots of work, but often worth it. + + Example: + + >>> d = Differ() + >>> results = d._fancy_replace(['abcDefghiJkl\n'], 0, 1, + ... ['abcdefGhijkl\n'], 0, 1) + >>> print(''.join(results), end="") + - abcDefghiJkl + ? ^ ^ ^ + + abcdefGhijkl + ? ^ ^ ^ + 'u' + When replacing one block of lines with another, search the blocks + for *similar* lines; the best-matching pair (if any) is used as a + synch point, and intraline difference marking is done on the + similar pair. Lots of work, but often worth it. + + Example: + + >>> d = Differ() + >>> results = d._fancy_replace(['abcDefghiJkl\n'], 0, 1, + ... ['abcdefGhijkl\n'], 0, 1) + >>> print(''.join(results), end="") + - abcDefghiJkl + ? ^ ^ ^ + + abcdefGhijkl + ? ^ ^ ^ + 'b' + Format "?" output and deal with tabs. + + Example: + + >>> d = Differ() + >>> results = d._qformat('\tabcDefghiJkl\n', '\tabcdefGhijkl\n', + ... ' ^ ^ ^ ', ' ^ ^ ^ ') + >>> for line in results: print(repr(line)) + ... + '- \tabcDefghiJkl\n' + '? \t ^ ^ ^\n' + '+ \tabcdefGhijkl\n' + '? \t ^ ^ ^\n' + 'u' + Format "?" output and deal with tabs. + + Example: + + >>> d = Differ() + >>> results = d._qformat('\tabcDefghiJkl\n', '\tabcdefGhijkl\n', + ... ' ^ ^ ^ ', ' ^ ^ ^ ') + >>> for line in results: print(repr(line)) + ... + '- \tabcDefghiJkl\n' + '? \t ^ ^ ^\n' + '+ \tabcdefGhijkl\n' + '? \t ^ ^ ^\n' + 'b'- 'u'- 'b'? 'u'? 'b'+ 'u'+ 'b'\s*(?:#\s*)?$'u'\s*(?:#\s*)?$'b' + Return True for ignorable line: iff `line` is blank or contains a single '#'. + + Examples: + + >>> IS_LINE_JUNK('\n') + True + >>> IS_LINE_JUNK(' # \n') + True + >>> IS_LINE_JUNK('hello\n') + False + 'u' + Return True for ignorable line: iff `line` is blank or contains a single '#'. + + Examples: + + >>> IS_LINE_JUNK('\n') + True + >>> IS_LINE_JUNK(' # \n') + True + >>> IS_LINE_JUNK('hello\n') + False + 'b' + Return True for ignorable character: iff `ch` is a space or tab. + + Examples: + + >>> IS_CHARACTER_JUNK(' ') + True + >>> IS_CHARACTER_JUNK('\t') + True + >>> IS_CHARACTER_JUNK('\n') + False + >>> IS_CHARACTER_JUNK('x') + False + 'u' + Return True for ignorable character: iff `ch` is a space or tab. 
+ + Examples: + + >>> IS_CHARACTER_JUNK(' ') + True + >>> IS_CHARACTER_JUNK('\t') + True + >>> IS_CHARACTER_JUNK('\n') + False + >>> IS_CHARACTER_JUNK('x') + False + 'b'Convert range to the "ed" format'u'Convert range to the "ed" format'b'{},{}'u'{},{}'b' + Compare two sequences of lines; generate the delta as a unified diff. + + Unified diffs are a compact way of showing line changes and a few + lines of context. The number of context lines is set by 'n' which + defaults to three. + + By default, the diff control lines (those with ---, +++, or @@) are + created with a trailing newline. This is helpful so that inputs + created from file.readlines() result in diffs that are suitable for + file.writelines() since both the inputs and outputs have trailing + newlines. + + For inputs that do not have trailing newlines, set the lineterm + argument to "" so that the output will be uniformly newline free. + + The unidiff format normally has a header for filenames and modification + times. Any or all of these may be specified using strings for + 'fromfile', 'tofile', 'fromfiledate', and 'tofiledate'. + The modification times are normally expressed in the ISO 8601 format. + + Example: + + >>> for line in unified_diff('one two three four'.split(), + ... 'zero one tree four'.split(), 'Original', 'Current', + ... '2005-01-26 23:30:50', '2010-04-02 10:20:52', + ... lineterm=''): + ... print(line) # doctest: +NORMALIZE_WHITESPACE + --- Original 2005-01-26 23:30:50 + +++ Current 2010-04-02 10:20:52 + @@ -1,4 +1,4 @@ + +zero + one + -two + -three + +tree + four + 'u' + Compare two sequences of lines; generate the delta as a unified diff. + + Unified diffs are a compact way of showing line changes and a few + lines of context. The number of context lines is set by 'n' which + defaults to three. + + By default, the diff control lines (those with ---, +++, or @@) are + created with a trailing newline. This is helpful so that inputs + created from file.readlines() result in diffs that are suitable for + file.writelines() since both the inputs and outputs have trailing + newlines. + + For inputs that do not have trailing newlines, set the lineterm + argument to "" so that the output will be uniformly newline free. + + The unidiff format normally has a header for filenames and modification + times. Any or all of these may be specified using strings for + 'fromfile', 'tofile', 'fromfiledate', and 'tofiledate'. + The modification times are normally expressed in the ISO 8601 format. + + Example: + + >>> for line in unified_diff('one two three four'.split(), + ... 'zero one tree four'.split(), 'Original', 'Current', + ... '2005-01-26 23:30:50', '2010-04-02 10:20:52', + ... lineterm=''): + ... print(line) # doctest: +NORMALIZE_WHITESPACE + --- Original 2005-01-26 23:30:50 + +++ Current 2010-04-02 10:20:52 + @@ -1,4 +1,4 @@ + +zero + one + -two + -three + +tree + four + 'b' {}'u' {}'b'--- {}{}{}'u'--- {}{}{}'b'+++ {}{}{}'u'+++ {}{}{}'b'@@ -{} +{} @@{}'u'@@ -{} +{} @@{}'b' + Compare two sequences of lines; generate the delta as a context diff. + + Context diffs are a compact way of showing line changes and a few + lines of context. The number of context lines is set by 'n' which + defaults to three. + + By default, the diff control lines (those with *** or ---) are + created with a trailing newline. This is helpful so that inputs + created from file.readlines() result in diffs that are suitable for + file.writelines() since both the inputs and outputs have trailing + newlines. 
+ + For inputs that do not have trailing newlines, set the lineterm + argument to "" so that the output will be uniformly newline free. + + The context diff format normally has a header for filenames and + modification times. Any or all of these may be specified using + strings for 'fromfile', 'tofile', 'fromfiledate', and 'tofiledate'. + The modification times are normally expressed in the ISO 8601 format. + If not specified, the strings default to blanks. + + Example: + + >>> print(''.join(context_diff('one\ntwo\nthree\nfour\n'.splitlines(True), + ... 'zero\none\ntree\nfour\n'.splitlines(True), 'Original', 'Current')), + ... end="") + *** Original + --- Current + *************** + *** 1,4 **** + one + ! two + ! three + four + --- 1,4 ---- + + zero + one + ! tree + four + 'u' + Compare two sequences of lines; generate the delta as a context diff. + + Context diffs are a compact way of showing line changes and a few + lines of context. The number of context lines is set by 'n' which + defaults to three. + + By default, the diff control lines (those with *** or ---) are + created with a trailing newline. This is helpful so that inputs + created from file.readlines() result in diffs that are suitable for + file.writelines() since both the inputs and outputs have trailing + newlines. + + For inputs that do not have trailing newlines, set the lineterm + argument to "" so that the output will be uniformly newline free. + + The context diff format normally has a header for filenames and + modification times. Any or all of these may be specified using + strings for 'fromfile', 'tofile', 'fromfiledate', and 'tofiledate'. + The modification times are normally expressed in the ISO 8601 format. + If not specified, the strings default to blanks. + + Example: + + >>> print(''.join(context_diff('one\ntwo\nthree\nfour\n'.splitlines(True), + ... 'zero\none\ntree\nfour\n'.splitlines(True), 'Original', 'Current')), + ... end="") + *** Original + --- Current + *************** + *** 1,4 **** + one + ! two + ! three + four + --- 1,4 ---- + + zero + one + ! tree + four + 'b'! 'u'! 'b'*** {}{}{}'u'*** {}{}{}'b'***************'u'***************'b'*** {} ****{}'u'*** {} ****{}'b'--- {} ----{}'u'--- {} ----{}'b'lines to compare must be str, not %s (%r)'u'lines to compare must be str, not %s (%r)'b'all arguments must be str, not: %r'u'all arguments must be str, not: %r'b' + Compare `a` and `b`, two sequences of lines represented as bytes rather + than str. This is a wrapper for `dfunc`, which is typically either + unified_diff() or context_diff(). Inputs are losslessly converted to + strings so that `dfunc` only has to worry about strings, and encoded + back to bytes on return. This is necessary to compare files with + unknown or inconsistent encoding. All other inputs (except `n`) must be + bytes rather than str. + 'u' + Compare `a` and `b`, two sequences of lines represented as bytes rather + than str. This is a wrapper for `dfunc`, which is typically either + unified_diff() or context_diff(). Inputs are losslessly converted to + strings so that `dfunc` only has to worry about strings, and encoded + back to bytes on return. This is necessary to compare files with + unknown or inconsistent encoding. All other inputs (except `n`) must be + bytes rather than str. + 'b'all arguments must be bytes, not %s (%r)'u'all arguments must be bytes, not %s (%r)'b' + Compare `a` and `b` (lists of strings); return a `Differ`-style delta. 
+ + Optional keyword parameters `linejunk` and `charjunk` are for filter + functions, or can be None: + + - linejunk: A function that should accept a single string argument and + return true iff the string is junk. The default is None, and is + recommended; the underlying SequenceMatcher class has an adaptive + notion of "noise" lines. + + - charjunk: A function that accepts a character (string of length + 1), and returns true iff the character is junk. The default is + the module-level function IS_CHARACTER_JUNK, which filters out + whitespace characters (a blank or tab; note: it's a bad idea to + include newline in this!). + + Tools/scripts/ndiff.py is a command-line front-end to this function. + + Example: + + >>> diff = ndiff('one\ntwo\nthree\n'.splitlines(keepends=True), + ... 'ore\ntree\nemu\n'.splitlines(keepends=True)) + >>> print(''.join(diff), end="") + - one + ? ^ + + ore + ? ^ + - two + - three + ? - + + tree + + emu + 'u' + Compare `a` and `b` (lists of strings); return a `Differ`-style delta. + + Optional keyword parameters `linejunk` and `charjunk` are for filter + functions, or can be None: + + - linejunk: A function that should accept a single string argument and + return true iff the string is junk. The default is None, and is + recommended; the underlying SequenceMatcher class has an adaptive + notion of "noise" lines. + + - charjunk: A function that accepts a character (string of length + 1), and returns true iff the character is junk. The default is + the module-level function IS_CHARACTER_JUNK, which filters out + whitespace characters (a blank or tab; note: it's a bad idea to + include newline in this!). + + Tools/scripts/ndiff.py is a command-line front-end to this function. + + Example: + + >>> diff = ndiff('one\ntwo\nthree\n'.splitlines(keepends=True), + ... 'ore\ntree\nemu\n'.splitlines(keepends=True)) + >>> print(''.join(diff), end="") + - one + ? ^ + + ore + ? ^ + - two + - three + ? - + + tree + + emu + 'b'Returns generator yielding marked up from/to side by side differences. + + Arguments: + fromlines -- list of text lines to compared to tolines + tolines -- list of text lines to be compared to fromlines + context -- number of context lines to display on each side of difference, + if None, all from/to text lines will be generated. + linejunk -- passed on to ndiff (see ndiff documentation) + charjunk -- passed on to ndiff (see ndiff documentation) + + This function returns an iterator which returns a tuple: + (from line tuple, to line tuple, boolean flag) + + from/to line tuple -- (line num, line text) + line num -- integer or None (to indicate a context separation) + line text -- original line text with following markers inserted: + '\0+' -- marks start of added text + '\0-' -- marks start of deleted text + '\0^' -- marks start of changed text + '\1' -- marks end of added/deleted/changed text + + boolean flag -- None indicates context separation, True indicates + either "from" or "to" line contains a change, otherwise False. + + This function/iterator was originally developed to generate side by side + file difference for making HTML pages (see HtmlDiff class for example + usage). + + Note, this function utilizes the ndiff function to generate the side by + side difference markup. Optional ndiff arguments may be passed to this + function and they in turn will be passed to ndiff. + 'u'Returns generator yielding marked up from/to side by side differences. 
+ + Arguments: + fromlines -- list of text lines to compared to tolines + tolines -- list of text lines to be compared to fromlines + context -- number of context lines to display on each side of difference, + if None, all from/to text lines will be generated. + linejunk -- passed on to ndiff (see ndiff documentation) + charjunk -- passed on to ndiff (see ndiff documentation) + + This function returns an iterator which returns a tuple: + (from line tuple, to line tuple, boolean flag) + + from/to line tuple -- (line num, line text) + line num -- integer or None (to indicate a context separation) + line text -- original line text with following markers inserted: + '\0+' -- marks start of added text + '\0-' -- marks start of deleted text + '\0^' -- marks start of changed text + '\1' -- marks end of added/deleted/changed text + + boolean flag -- None indicates context separation, True indicates + either "from" or "to" line contains a change, otherwise False. + + This function/iterator was originally developed to generate side by side + file difference for making HTML pages (see HtmlDiff class for example + usage). + + Note, this function utilizes the ndiff function to generate the side by + side difference markup. Optional ndiff arguments may be passed to this + function and they in turn will be passed to ndiff. + 'b'(\++|\-+|\^+)'u'(\++|\-+|\^+)'b'Returns line of text with user's change markup and line formatting. + + lines -- list of lines from the ndiff generator to produce a line of + text from. When producing the line of text to return, the + lines used are removed from this list. + format_key -- '+' return first line in list with "add" markup around + the entire line. + '-' return first line in list with "delete" markup around + the entire line. + '?' return first line in list with add/delete/change + intraline markup (indices obtained from second line) + None return first line in list with no markup + side -- indice into the num_lines list (0=from,1=to) + num_lines -- from/to current line number. This is NOT intended to be a + passed parameter. It is present as a keyword argument to + maintain memory of the current line numbers between calls + of this function. + + Note, this function is purposefully not defined at the module scope so + that data it needs from its parent function (within whose context it + is defined) does not need to be of module scope. + 'u'Returns line of text with user's change markup and line formatting. + + lines -- list of lines from the ndiff generator to produce a line of + text from. When producing the line of text to return, the + lines used are removed from this list. + format_key -- '+' return first line in list with "add" markup around + the entire line. + '-' return first line in list with "delete" markup around + the entire line. + '?' return first line in list with add/delete/change + intraline markup (indices obtained from second line) + None return first line in list with no markup + side -- indice into the num_lines list (0=from,1=to) + num_lines -- from/to current line number. This is NOT intended to be a + passed parameter. It is present as a keyword argument to + maintain memory of the current line numbers between calls + of this function. + + Note, this function is purposefully not defined at the module scope so + that data it needs from its parent function (within whose context it + is defined) does not need to be of module scope. + 'b''u''b'Yields from/to lines of text with a change indication. + + This function is an iterator. 
It itself pulls lines from a + differencing iterator, processes them and yields them. When it can + it yields both a "from" and a "to" line, otherwise it will yield one + or the other. In addition to yielding the lines of from/to text, a + boolean flag is yielded to indicate if the text line(s) have + differences in them. + + Note, this function is purposefully not defined at the module scope so + that data it needs from its parent function (within whose context it + is defined) does not need to be of module scope. + 'u'Yields from/to lines of text with a change indication. + + This function is an iterator. It itself pulls lines from a + differencing iterator, processes them and yields them. When it can + it yields both a "from" and a "to" line, otherwise it will yield one + or the other. In addition to yielding the lines of from/to text, a + boolean flag is yielded to indicate if the text line(s) have + differences in them. + + Note, this function is purposefully not defined at the module scope so + that data it needs from its parent function (within whose context it + is defined) does not need to be of module scope. + 'b'X'u'X'b'-?+?'u'-?+?'b'--++'u'--++'b'--?+'u'--?+'b'--+'u'--+'b'-+?'u'-+?'b'-?+'u'-?+'b'+--'u'+--'b'+-'u'+-'b'Yields from/to lines of text with a change indication. + + This function is an iterator. It itself pulls lines from the line + iterator. Its difference from that iterator is that this function + always yields a pair of from/to text lines (with the change + indication). If necessary it will collect single from/to lines + until it has a matching pair from/to pair to yield. + + Note, this function is purposefully not defined at the module scope so + that data it needs from its parent function (within whose context it + is defined) does not need to be of module scope. + 'u'Yields from/to lines of text with a change indication. + + This function is an iterator. It itself pulls lines from the line + iterator. Its difference from that iterator is that this function + always yields a pair of from/to text lines (with the change + indication). If necessary it will collect single from/to lines + until it has a matching pair from/to pair to yield. + + Note, this function is purposefully not defined at the module scope so + that data it needs from its parent function (within whose context it + is defined) does not need to be of module scope. + 'b' + + + + + + + + + + + + %(table)s%(legend)s + + +'u' + + + + + + + + + + + + %(table)s%(legend)s + + +'b' + table.diff {font-family:Courier; border:medium;} + .diff_header {background-color:#e0e0e0} + td.diff_header {text-align:right} + .diff_next {background-color:#c0c0c0} + .diff_add {background-color:#aaffaa} + .diff_chg {background-color:#ffff77} + .diff_sub {background-color:#ffaaaa}'u' + table.diff {font-family:Courier; border:medium;} + .diff_header {background-color:#e0e0e0} + td.diff_header {text-align:right} + .diff_next {background-color:#c0c0c0} + .diff_add {background-color:#aaffaa} + .diff_chg {background-color:#ffff77} + .diff_sub {background-color:#ffaaaa}'b' + + + + %(header_row)s + +%(data_rows)s +
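The difflib docstrings above (get_close_matches, Differ, unified_diff, context_diff, ndiff and the _mdiff helper that feeds the HTML templates) describe the module's public comparison entry points. As a quick orientation, here is a minimal sketch of driving those entry points; the sample strings and file labels are invented for illustration and are not taken from this dump.

import difflib

a = "one\ntwo\nthree\n".splitlines(keepends=True)
b = "ore\ntree\nemu\n".splitlines(keepends=True)

# Fuzzy matching of a word against a list of candidates (cutoff defaults to 0.6).
print(difflib.get_close_matches("appel", ["ape", "apple", "peach", "puppy"]))

# Similarity ratio of two sequences via SequenceMatcher (1.0 means identical).
print(difflib.SequenceMatcher(None, "".join(a), "".join(b)).ratio())

# Unified diff; lineterm="" keeps the ---/+++/@@ control lines free of trailing newlines.
for line in difflib.unified_diff(a, b, fromfile="before", tofile="after", lineterm=""):
    print(line, end="" if line.endswith("\n") else "\n")

# Differ-style delta with '?' guide lines marking intraline changes.
print("".join(difflib.ndiff(a, b)), end="")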
'b' Legends: Colors (Added / Changed / Deleted); Links ((f)irst change, (n)ext change, (t)op)
'b'For producing HTML side by side comparison with change highlights. + + This class can be used to create an HTML table (or a complete HTML file + containing the table) showing a side by side, line by line comparison + of text with inter-line and intra-line change highlights. The table can + be generated in either full or contextual difference mode. + + The following methods are provided for HTML generation: + + make_table -- generates HTML for a single side by side table + make_file -- generates complete HTML file with a single side by side table + + See tools/scripts/diff.py for an example usage of this class. + 'u'For producing HTML side by side comparison with change highlights. + + This class can be used to create an HTML table (or a complete HTML file + containing the table) showing a side by side, line by line comparison + of text with inter-line and intra-line change highlights. The table can + be generated in either full or contextual difference mode. + + The following methods are provided for HTML generation: + + make_table -- generates HTML for a single side by side table + make_file -- generates complete HTML file with a single side by side table + + See tools/scripts/diff.py for an example usage of this class. + 'b'HtmlDiff instance initializer + + Arguments: + tabsize -- tab stop spacing, defaults to 8. + wrapcolumn -- column number where lines are broken and wrapped, + defaults to None where lines are not wrapped. + linejunk,charjunk -- keyword arguments passed into ndiff() (used by + HtmlDiff() to generate the side by side HTML differences). See + ndiff() documentation for argument default values and descriptions. + 'u'HtmlDiff instance initializer + + Arguments: + tabsize -- tab stop spacing, defaults to 8. + wrapcolumn -- column number where lines are broken and wrapped, + defaults to None where lines are not wrapped. + linejunk,charjunk -- keyword arguments passed into ndiff() (used by + HtmlDiff() to generate the side by side HTML differences). See + ndiff() documentation for argument default values and descriptions. + 'b'Returns HTML file of side by side comparison with change highlights + + Arguments: + fromlines -- list of "from" lines + tolines -- list of "to" lines + fromdesc -- "from" file column header string + todesc -- "to" file column header string + context -- set to True for contextual differences (defaults to False + which shows full differences). + numlines -- number of context lines. When context is set True, + controls number of lines displayed before and after the change. + When context is False, controls the number of lines to place + the "next" link anchors before the next change (so click of + "next" link jumps to just before the change). + charset -- charset of the HTML document + 'u'Returns HTML file of side by side comparison with change highlights + + Arguments: + fromlines -- list of "from" lines + tolines -- list of "to" lines + fromdesc -- "from" file column header string + todesc -- "to" file column header string + context -- set to True for contextual differences (defaults to False + which shows full differences). + numlines -- number of context lines. When context is set True, + controls number of lines displayed before and after the change. + When context is False, controls the number of lines to place + the "next" link anchors before the next change (so click of + "next" link jumps to just before the change). + charset -- charset of the HTML document + 'b'Returns from/to line lists with tabs expanded and newlines removed. 
+ + Instead of tab characters being replaced by the number of spaces + needed to fill in to the next tab stop, this function will fill + the space with tab characters. This is done so that the difference + algorithms can identify changes in a file when tabs are replaced by + spaces and vice versa. At the end of the HTML generation, the tab + characters will be replaced with a nonbreakable space. + 'u'Returns from/to line lists with tabs expanded and newlines removed. + + Instead of tab characters being replaced by the number of spaces + needed to fill in to the next tab stop, this function will fill + the space with tab characters. This is done so that the difference + algorithms can identify changes in a file when tabs are replaced by + spaces and vice versa. At the end of the HTML generation, the tab + characters will be replaced with a nonbreakable space. + 'b'Builds list of text lines by splitting text lines at wrap point + + This function will determine if the input text line needs to be + wrapped (split) into separate lines. If so, the first wrap point + will be determined and the first line appended to the output + text line list. This function is used recursively to handle + the second part of the split line to further split it. + 'u'Builds list of text lines by splitting text lines at wrap point + + This function will determine if the input text line needs to be + wrapped (split) into separate lines. If so, the first wrap point + will be determined and the first line appended to the output + text line list. This function is used recursively to handle + the second part of the split line to further split it. + 'b'Returns iterator that splits (wraps) mdiff text lines'u'Returns iterator that splits (wraps) mdiff text lines'b'Collects mdiff output into separate lists + + Before storing the mdiff from/to data into a list, it is converted + into a single line of text with HTML markup. + 'u'Collects mdiff output into separate lists + + Before storing the mdiff from/to data into a list, it is converted + into a single line of text with HTML markup. + 'b'Returns HTML markup of "from" / "to" text lines + + side -- 0 or 1 indicating "from" or "to" text + flag -- indicates if difference on line + linenum -- line number (used for line number column) + text -- line text to be marked up + 'u'Returns HTML markup of "from" / "to" text lines + + side -- 0 or 1 indicating "from" or "to" text + flag -- indicates if difference on line + linenum -- line number (used for line number column) + text -- line text to be marked up + 'b'%d'u'%d'b' id="%s%s"'u' id="%s%s"'b' 'u' 'b'%s%s'u'%s%s'b'Create unique anchor prefixes'u'Create unique anchor prefixes'b'from%d_'u'from%d_'b'to%d_'u'to%d_'b'Makes list of "next" links'u'Makes list of "next" links'b' id="difflib_chg_%s_%d"'u' id="difflib_chg_%s_%d"'b'n'u'n'b' No Differences Found 'u' No Differences Found 'b' Empty File 'u' Empty File 'b'f'u'f'b't'u't'b'Returns HTML table of side by side comparison with change highlights + + Arguments: + fromlines -- list of "from" lines + tolines -- list of "to" lines + fromdesc -- "from" file column header string + todesc -- "to" file column header string + context -- set to True for contextual differences (defaults to False + which shows full differences). + numlines -- number of context lines. When context is set True, + controls number of lines displayed before and after the change. 
+ When context is False, controls the number of lines to place + the "next" link anchors before the next change (so click of + "next" link jumps to just before the change). + 'u'Returns HTML table of side by side comparison with change highlights + + Arguments: + fromlines -- list of "from" lines + tolines -- list of "to" lines + fromdesc -- "from" file column header string + todesc -- "to" file column header string + context -- set to True for contextual differences (defaults to False + which shows full differences). + numlines -- number of context lines. When context is set True, + controls number of lines displayed before and after the change. + When context is False, controls the number of lines to place + the "next" link anchors before the next change (so click of + "next" link jumps to just before the change). + 'b' %s%s'u' %s%s'b'%s%s +'u'%s%s +'b' + +'u' + +'b'%s%s%s%s'u'%s%s%s%s'b'
'u'
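The file, table and legend templates in this region are filled in by HtmlDiff.make_table() and make_file(), whose docstrings appear above. A minimal sketch of producing a standalone HTML comparison page follows; the input lines and file names are made up for illustration.

import difflib

fromlines = "one\ntwo\nthree\n".splitlines(keepends=True)
tolines = "one\ntree\nemu\n".splitlines(keepends=True)

hd = difflib.HtmlDiff(tabsize=4, wrapcolumn=60)

# Complete HTML page with inline CSS; context=True limits the table to the
# changed regions plus numlines lines of surrounding context.
page = hd.make_file(fromlines, tolines, fromdesc="before.txt", todesc="after.txt",
                    context=True, numlines=2)
with open("diff.html", "w", encoding="utf-8") as fh:
    fh.write(page)

# make_table() returns only the table element, for embedding in an existing page.
fragment = hd.make_table(fromlines, tolines, "before.txt", "after.txt")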
'b'%s'u'%s'b'+'u'+'b''u''b'-'u'-'b''u''b'^'u'^'b''u''b''u''b' + Generate one of the two sequences that generated a delta. + + Given a `delta` produced by `Differ.compare()` or `ndiff()`, extract + lines originating from file 1 or 2 (parameter `which`), stripping off line + prefixes. + + Examples: + + >>> diff = ndiff('one\ntwo\nthree\n'.splitlines(keepends=True), + ... 'ore\ntree\nemu\n'.splitlines(keepends=True)) + >>> diff = list(diff) + >>> print(''.join(restore(diff, 1)), end="") + one + two + three + >>> print(''.join(restore(diff, 2)), end="") + ore + tree + emu + 'u' + Generate one of the two sequences that generated a delta. + + Given a `delta` produced by `Differ.compare()` or `ndiff()`, extract + lines originating from file 1 or 2 (parameter `which`), stripping off line + prefixes. + + Examples: + + >>> diff = ndiff('one\ntwo\nthree\n'.splitlines(keepends=True), + ... 'ore\ntree\nemu\n'.splitlines(keepends=True)) + >>> diff = list(diff) + >>> print(''.join(restore(diff, 1)), end="") + one + two + three + >>> print(''.join(restore(diff, 2)), end="") + ore + tree + emu + 'b'unknown delta choice (must be 1 or 2): %r'u'unknown delta choice (must be 1 or 2): %r'u'difflib'distutils.dir_util + +Utility functions for manipulating directories and directory trees.DistutilsInternalError_path_createdCreate a directory and any missing ancestor directories. + + If the directory already exists (or if 'name' is the empty string, which + means the current directory, which of course exists), then do nothing. + Raise DistutilsFileError if unable to create some directory along the way + (eg. some sub-path exists, but is a file rather than a directory). + If 'verbose' is true, print a one-line summary of each mkdir to stdout. + Return the list of directories actually created. + mkpath: 'name' must be a string (got %r)normpathcreated_dirstailsabs_headcreating %sEEXISTcould not create '%s': %screate_treebase_dirfilesCreate all the empty directories under 'base_dir' needed to put 'files' + there. + + 'base_dir' is just the name of a directory which doesn't necessarily + exist yet; 'files' is a list of filenames to be interpreted relative to + 'base_dir'. 'base_dir' + the directory portion of every file in 'files' + will be created if it doesn't already exist. 'mode', 'verbose' and + 'dry_run' flags are as for 'mkpath()'. + need_dircopy_treepreserve_modepreserve_timespreserve_symlinksCopy an entire directory tree 'src' to a new location 'dst'. + + Both 'src' and 'dst' must be directory names. If 'src' is not a + directory, raise DistutilsFileError. If 'dst' does not exist, it is + created with 'mkpath()'. The end result of the copy is that every + file in 'src' is copied to 'dst', and directories under 'src' are + recursively copied to 'dst'. Return the list of files that were + copied or might have been copied, using their output name. The + return value is unaffected by 'update' or 'dry_run': it is simply + the list of all files under 'src', with the names changed to be + under 'dst'. + + 'preserve_mode' and 'preserve_times' are the same as for + 'copy_file'; note that they only apply to regular files, not to + directories. If 'preserve_symlinks' is true, symlinks will be + copied as symlinks (on platforms that support them!); otherwise + (the default), the destination of the symlink will be copied. + 'update' and 'verbose' are the same as for 'copy_file'. 
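The distutils.dir_util strings above document mkpath(), create_tree(), copy_tree() and remove_tree(). A minimal usage sketch follows; the paths are invented, and note that distutils is deprecated in recent Python releases, where os.makedirs() and shutil.copytree()/rmtree() cover the same ground.

from distutils.dir_util import mkpath, copy_tree, remove_tree

# Create build/pkg/data plus any missing ancestors; returns the dirs created.
created = mkpath("build/pkg/data", mode=0o777, verbose=1)

# Recursively copy src/ into build/pkg/, preserving modes and mtimes;
# returns the list of (possibly) copied output file names.
copied = copy_tree("src", "build/pkg", preserve_mode=1, preserve_times=1,
                   update=0, verbose=1, dry_run=0)

# Remove the whole tree again; errors are reported rather than raised.
remove_tree("build", verbose=1)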
+ copy_filecannot copy tree '%s': not a directoryerror listing files in '%s': %soutputsdst_name.nfsislinkreadlinklink_destlinking %s -> %s_build_cmdtuplecmdtuplesHelper for remove_tree().real_fremove_treeRecursively remove an entire directory tree. + + Any errors are ignored (apart from being reported to stdout if 'verbose' + is true). + removing '%s' (and everything under it)error removing %s: %sensure_relativeTake the full path 'path', and make it a relative path. + + This is useful to make 'path' the second argument to os.path.join(). + drive# cache for by mkpath() -- in addition to cheapening redundant calls,# eliminates redundant "creating /foo/bar/baz" messages in dry-run mode# I don't use os.makedirs because a) it's new to Python 1.5.2, and# b) it blows up if the directory already exists (I want to silently# succeed in that case).# Detect a common bug -- name is None# XXX what's the better way to handle verbosity? print as we create# each directory in the path (the current behaviour), or only announce# the creation of the whole path? (quite easy to do the latter since# we're not using a recursive algorithm)# stack of lone dirs to create# push next higher dir onto stack# now 'head' contains the deepest directory that already exists# (that is, the child of 'head' in 'name' is the highest directory# that does *not* exist)#print "head = %s, d = %s: " % (head, d),# First get the list of directories to create# Now create them# skip NFS rename files# remove dir from cache if it's already thereb'distutils.dir_util + +Utility functions for manipulating directories and directory trees.'u'distutils.dir_util + +Utility functions for manipulating directories and directory trees.'b'Create a directory and any missing ancestor directories. + + If the directory already exists (or if 'name' is the empty string, which + means the current directory, which of course exists), then do nothing. + Raise DistutilsFileError if unable to create some directory along the way + (eg. some sub-path exists, but is a file rather than a directory). + If 'verbose' is true, print a one-line summary of each mkdir to stdout. + Return the list of directories actually created. + 'u'Create a directory and any missing ancestor directories. + + If the directory already exists (or if 'name' is the empty string, which + means the current directory, which of course exists), then do nothing. + Raise DistutilsFileError if unable to create some directory along the way + (eg. some sub-path exists, but is a file rather than a directory). + If 'verbose' is true, print a one-line summary of each mkdir to stdout. + Return the list of directories actually created. + 'b'mkpath: 'name' must be a string (got %r)'u'mkpath: 'name' must be a string (got %r)'b'creating %s'u'creating %s'b'could not create '%s': %s'u'could not create '%s': %s'b'Create all the empty directories under 'base_dir' needed to put 'files' + there. + + 'base_dir' is just the name of a directory which doesn't necessarily + exist yet; 'files' is a list of filenames to be interpreted relative to + 'base_dir'. 'base_dir' + the directory portion of every file in 'files' + will be created if it doesn't already exist. 'mode', 'verbose' and + 'dry_run' flags are as for 'mkpath()'. + 'u'Create all the empty directories under 'base_dir' needed to put 'files' + there. + + 'base_dir' is just the name of a directory which doesn't necessarily + exist yet; 'files' is a list of filenames to be interpreted relative to + 'base_dir'. 
'base_dir' + the directory portion of every file in 'files' + will be created if it doesn't already exist. 'mode', 'verbose' and + 'dry_run' flags are as for 'mkpath()'. + 'b'Copy an entire directory tree 'src' to a new location 'dst'. + + Both 'src' and 'dst' must be directory names. If 'src' is not a + directory, raise DistutilsFileError. If 'dst' does not exist, it is + created with 'mkpath()'. The end result of the copy is that every + file in 'src' is copied to 'dst', and directories under 'src' are + recursively copied to 'dst'. Return the list of files that were + copied or might have been copied, using their output name. The + return value is unaffected by 'update' or 'dry_run': it is simply + the list of all files under 'src', with the names changed to be + under 'dst'. + + 'preserve_mode' and 'preserve_times' are the same as for + 'copy_file'; note that they only apply to regular files, not to + directories. If 'preserve_symlinks' is true, symlinks will be + copied as symlinks (on platforms that support them!); otherwise + (the default), the destination of the symlink will be copied. + 'update' and 'verbose' are the same as for 'copy_file'. + 'u'Copy an entire directory tree 'src' to a new location 'dst'. + + Both 'src' and 'dst' must be directory names. If 'src' is not a + directory, raise DistutilsFileError. If 'dst' does not exist, it is + created with 'mkpath()'. The end result of the copy is that every + file in 'src' is copied to 'dst', and directories under 'src' are + recursively copied to 'dst'. Return the list of files that were + copied or might have been copied, using their output name. The + return value is unaffected by 'update' or 'dry_run': it is simply + the list of all files under 'src', with the names changed to be + under 'dst'. + + 'preserve_mode' and 'preserve_times' are the same as for + 'copy_file'; note that they only apply to regular files, not to + directories. If 'preserve_symlinks' is true, symlinks will be + copied as symlinks (on platforms that support them!); otherwise + (the default), the destination of the symlink will be copied. + 'update' and 'verbose' are the same as for 'copy_file'. + 'b'cannot copy tree '%s': not a directory'u'cannot copy tree '%s': not a directory'b'error listing files in '%s': %s'u'error listing files in '%s': %s'b'.nfs'u'.nfs'b'linking %s -> %s'u'linking %s -> %s'b'Helper for remove_tree().'u'Helper for remove_tree().'b'Recursively remove an entire directory tree. + + Any errors are ignored (apart from being reported to stdout if 'verbose' + is true). + 'u'Recursively remove an entire directory tree. + + Any errors are ignored (apart from being reported to stdout if 'verbose' + is true). + 'b'removing '%s' (and everything under it)'u'removing '%s' (and everything under it)'b'error removing %s: %s'u'error removing %s: %s'b'Take the full path 'path', and make it a relative path. + + This is useful to make 'path' the second argument to os.path.join(). + 'u'Take the full path 'path', and make it a relative path. + + This is useful to make 'path' the second argument to os.path.join(). + 'u'distutils.dir_util'u'dir_util'Disassembler of Python byte code into mnemonics.opcode_opcodes_allcode_infodisdisassembledistbdiscofindlinestartsfindlabelsshow_codeget_instructionsInstructionBytecode_have_codeFORMAT_VALUEFORMAT_VALUE_CONVERTERSMAKE_FUNCTIONkwdefaultsclosureMAKE_FUNCTION_FLAGS_try_compileAttempts to compile the given source, first as an expression and + then as a statement if the first approach fails. 
+ + Utility function to accept strings in functions that otherwise + expect code objects + Disassemble classes, methods, functions, and other compiled objects. + + With no argument, disassemble the last traceback. + + Compiled objects currently include generator objects, async generator + objects, and coroutine objects, all of which store their code object + in a special attribute. + ag_codeDisassembly of %s:Sorry:co_code_disassemble_recursive_disassemble_bytes_disassemble_strdon't know how to disassemble %s objectsDisassemble a traceback (default: last traceback).no last traceback to disassembletb_lastiOPTIMIZEDNEWLOCALSVARARGSVARKEYWORDSNESTEDGENERATORNOFREECOROUTINEITERABLE_COROUTINEASYNC_GENERATORCOMPILER_FLAG_NAMESpretty_flagsReturn pretty representation of code flags._get_code_objectHelper to handle methods, compiled or raw code objects, and strings.Formatted details of methods, functions, or code._format_code_infoName: %sFilename: %sArgument count: %sco_argcountPositional-only arguments: %sco_posonlyargcountKw-only arguments: %sco_kwonlyargcountNumber of locals: %sco_nlocalsStack size: %sco_stacksizeFlags: %sco_constsConstants:i_c%4d: %rco_namesNames:i_n%4d: %sco_varnamesVariable names:co_freevarsFree variables:co_cellvarsCell variables:Print details of methods, functions, or code to *file*. + + If *file* is not provided, the output is printed on stdout. + _Instructionopname opcode arg argval argrepr offset starts_line is_jump_targetHuman readable name for operationopnameNumeric code for operationNumeric argument to operation (if any), otherwise NoneResolved arg value (if known), otherwise same as argargvalHuman readable description of operation argumentargreprStart index of operation within bytecode sequenceLine started by this opcode (if any), otherwise Nonestarts_lineTrue if other code jumps to here, otherwise Falseis_jump_target_OPNAME_WIDTH_OPARG_WIDTHDetails for a bytecode operation + + Defined fields: + opname - human readable name for operation + opcode - numeric code for operation + arg - numeric argument to operation (if any), otherwise None + argval - resolved arg value (if known), otherwise same as arg + argrepr - human readable description of operation argument + offset - start index of operation within bytecode sequence + starts_line - line started by this opcode (if any), otherwise None + is_jump_target - True if other code jumps to here, otherwise False + _disassemblelineno_widthmark_as_currentoffset_widthFormat instruction details for inclusion in disassembly output + + *lineno_width* sets the width of the line number field (0 omits it) + *mark_as_current* inserts a '-->' marker arrow as part of the line + *offset_width* sets the width of the instruction offset field + %%%ddlineno_fmt >>first_lineIterator for the opcodes in methods, functions or code + + Generates a series of Instruction named tuples giving the details of + each operations in the supplied code. + + If *first_line* is not None, it indicates the line number that should + be reported for the first source line in the disassembled code. + Otherwise, the source line information (if any) is taken directly from + the disassembled code object. + cell_nameslinestartsline_offset_get_instructions_bytes_get_const_infoconst_indexconst_listHelper to get optional details about const references + + Returns the dereferenced constant and its repr if the constant + list is defined. + Otherwise returns the constant index and its repr(). 
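The strings above belong to the dis module: code_info() and show_code() produce code-object summaries, and get_instructions() yields the Instruction named tuples whose fields (opname, opcode, arg, argval, argrepr, offset, starts_line, is_jump_target) are listed here. A small sketch of walking the instruction stream of a throwaway function (the function itself is invented for illustration):

import dis

def sample(x):
    return x * 2 + 1

# Iterate over Instruction named tuples for the compiled function.
for ins in dis.get_instructions(sample):
    print(f"{ins.offset:4d} {ins.opname:<20} {ins.argrepr}")

dis.dis(sample)               # classic line-per-opcode disassembly to stdout
print(dis.code_info(sample))  # formatted summary: argument counts, flags, constants, names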
+ _get_name_infoname_indexname_listHelper to get optional details about named references + + Returns the dereferenced name as both value and repr if the name + list is defined. + Otherwise returns the name index and its repr(). + varnamesIterate over the instructions in a bytecode string. + + Generates a sequence of Instruction namedtuples giving the details of each + opcode. Additional information about the code's runtime environment + (e.g. variable names, constants) can be specified using optional + arguments. + + labels_unpack_opargshasconsthasnamehasjrelto haslocalhascomparecmp_ophasfreewith formatlastiDisassemble a code object.Disassembly of %r:show_linenomaxlinenomaxoffset10000instrnew_source_lineis_current_instrCompile the source string, then disassemble the code object.extended_argHAVE_ARGUMENTEXTENDED_ARGDetect all offsets in a byte code which are jump targets. + + Return the list of offsets. + + hasjabsFind the offsets in a byte code which are start of lines in the source. + + Generate pairs (offset, lineno) as described in Python/compile.c. + + co_lnotabbyte_incrementsline_incrementsbytecode_lenlastlinenobyte_incrline_incr0x100The bytecode operations of a piece of code + + Instantiate this with a function, method, other compiled object, string of + code, or a code object (as returned by compile()). + + Iterating over this yields the bytecode operations as Instruction instances. + current_offsetcodeobj_line_offset_cell_names_linestarts_original_object{}({!r})from_traceback Construct a Bytecode from the given traceback Return formatted information about the code object.Return a formatted view of the bytecode operations.Simple test program to disassemble a file.infile# Extract functions from methods.# Extract compiled code objects from...# ...a function, or#...a generator object, or#...an asynchronous generator object, or#...a coroutine.# Perform the disassembly.# Class or module# Code object# Raw bytecode# Source code# The inspect module interrogates this dictionary to build its# list of CO_* constants. It is also used by pretty_flags to# turn the co_flags field into a human readable list.# Handle source code.# By now, if we don't have a code object, we can't disassemble x.# Column: Source code line number# Column: Current instruction indicator# Column: Jump target marker# Column: Instruction offset from start of code sequence# Column: Opcode name# Column: Opcode argument# Column: Opcode argument details# Set argval to the dereferenced value of the argument when# available, and argrepr to the string representation of argval.# _disassemble_bytes needs the string repr of the# raw name index for LOAD_GLOBAL, LOAD_CONST, etc.# Omit the line number column entirely if we have no line number info# XXX For backwards compatibility# The rest of the lnotab byte offsets are past the end of# the bytecode, so the lines were optimized away.# line_increments is an array of 8-bit signed integersb'Disassembler of Python byte code into mnemonics.'u'Disassembler of Python byte code into mnemonics.'b'code_info'u'code_info'b'dis'u'dis'b'disassemble'u'disassemble'b'distb'u'distb'b'disco'u'disco'b'findlinestarts'u'findlinestarts'b'findlabels'u'findlabels'b'show_code'u'show_code'b'get_instructions'u'get_instructions'b'Instruction'u'Instruction'b'Bytecode'u'Bytecode'b'FORMAT_VALUE'u'FORMAT_VALUE'b'MAKE_FUNCTION'u'MAKE_FUNCTION'b'defaults'b'kwdefaults'u'kwdefaults'b'closure'u'closure'b'Attempts to compile the given source, first as an expression and + then as a statement if the first approach fails. 
+ + Utility function to accept strings in functions that otherwise + expect code objects + 'u'Attempts to compile the given source, first as an expression and + then as a statement if the first approach fails. + + Utility function to accept strings in functions that otherwise + expect code objects + 'b'Disassemble classes, methods, functions, and other compiled objects. + + With no argument, disassemble the last traceback. + + Compiled objects currently include generator objects, async generator + objects, and coroutine objects, all of which store their code object + in a special attribute. + 'u'Disassemble classes, methods, functions, and other compiled objects. + + With no argument, disassemble the last traceback. + + Compiled objects currently include generator objects, async generator + objects, and coroutine objects, all of which store their code object + in a special attribute. + 'b'__func__'u'__func__'b'__code__'u'__code__'b'ag_code'u'ag_code'b'Disassembly of %s:'u'Disassembly of %s:'b'Sorry:'u'Sorry:'b'co_code'u'co_code'b'don't know how to disassemble %s objects'u'don't know how to disassemble %s objects'b'Disassemble a traceback (default: last traceback).'u'Disassemble a traceback (default: last traceback).'b'no last traceback to disassemble'u'no last traceback to disassemble'b'OPTIMIZED'u'OPTIMIZED'b'NEWLOCALS'u'NEWLOCALS'b'VARARGS'u'VARARGS'b'VARKEYWORDS'u'VARKEYWORDS'b'NESTED'u'NESTED'b'GENERATOR'u'GENERATOR'b'NOFREE'u'NOFREE'b'COROUTINE'u'COROUTINE'b'ITERABLE_COROUTINE'u'ITERABLE_COROUTINE'b'ASYNC_GENERATOR'u'ASYNC_GENERATOR'b'Return pretty representation of code flags.'u'Return pretty representation of code flags.'b'Helper to handle methods, compiled or raw code objects, and strings.'u'Helper to handle methods, compiled or raw code objects, and strings.'b''u''b'Formatted details of methods, functions, or code.'u'Formatted details of methods, functions, or code.'b'Name: %s'u'Name: %s'b'Filename: %s'u'Filename: %s'b'Argument count: %s'u'Argument count: %s'b'Positional-only arguments: %s'u'Positional-only arguments: %s'b'Kw-only arguments: %s'u'Kw-only arguments: %s'b'Number of locals: %s'u'Number of locals: %s'b'Stack size: %s'u'Stack size: %s'b'Flags: %s'u'Flags: %s'b'Constants:'u'Constants:'b'%4d: %r'u'%4d: %r'b'Names:'u'Names:'b'%4d: %s'u'%4d: %s'b'Variable names:'u'Variable names:'b'Free variables:'u'Free variables:'b'Cell variables:'u'Cell variables:'b'Print details of methods, functions, or code to *file*. + + If *file* is not provided, the output is printed on stdout. + 'u'Print details of methods, functions, or code to *file*. + + If *file* is not provided, the output is printed on stdout. 
+ 'b'_Instruction'u'_Instruction'b'opname opcode arg argval argrepr offset starts_line is_jump_target'u'opname opcode arg argval argrepr offset starts_line is_jump_target'b'Human readable name for operation'u'Human readable name for operation'b'Numeric code for operation'u'Numeric code for operation'b'Numeric argument to operation (if any), otherwise None'u'Numeric argument to operation (if any), otherwise None'b'Resolved arg value (if known), otherwise same as arg'u'Resolved arg value (if known), otherwise same as arg'b'Human readable description of operation argument'u'Human readable description of operation argument'b'Start index of operation within bytecode sequence'u'Start index of operation within bytecode sequence'b'Line started by this opcode (if any), otherwise None'u'Line started by this opcode (if any), otherwise None'b'True if other code jumps to here, otherwise False'u'True if other code jumps to here, otherwise False'b'Details for a bytecode operation + + Defined fields: + opname - human readable name for operation + opcode - numeric code for operation + arg - numeric argument to operation (if any), otherwise None + argval - resolved arg value (if known), otherwise same as arg + argrepr - human readable description of operation argument + offset - start index of operation within bytecode sequence + starts_line - line started by this opcode (if any), otherwise None + is_jump_target - True if other code jumps to here, otherwise False + 'u'Details for a bytecode operation + + Defined fields: + opname - human readable name for operation + opcode - numeric code for operation + arg - numeric argument to operation (if any), otherwise None + argval - resolved arg value (if known), otherwise same as arg + argrepr - human readable description of operation argument + offset - start index of operation within bytecode sequence + starts_line - line started by this opcode (if any), otherwise None + is_jump_target - True if other code jumps to here, otherwise False + 'b'Format instruction details for inclusion in disassembly output + + *lineno_width* sets the width of the line number field (0 omits it) + *mark_as_current* inserts a '-->' marker arrow as part of the line + *offset_width* sets the width of the instruction offset field + 'u'Format instruction details for inclusion in disassembly output + + *lineno_width* sets the width of the line number field (0 omits it) + *mark_as_current* inserts a '-->' marker arrow as part of the line + *offset_width* sets the width of the instruction offset field + 'b'%%%dd'u'%%%dd'b' 'u' 'b'>>'u'>>'b'Iterator for the opcodes in methods, functions or code + + Generates a series of Instruction named tuples giving the details of + each operations in the supplied code. + + If *first_line* is not None, it indicates the line number that should + be reported for the first source line in the disassembled code. + Otherwise, the source line information (if any) is taken directly from + the disassembled code object. + 'u'Iterator for the opcodes in methods, functions or code + + Generates a series of Instruction named tuples giving the details of + each operations in the supplied code. + + If *first_line* is not None, it indicates the line number that should + be reported for the first source line in the disassembled code. + Otherwise, the source line information (if any) is taken directly from + the disassembled code object. 
+ 'b'Helper to get optional details about const references + + Returns the dereferenced constant and its repr if the constant + list is defined. + Otherwise returns the constant index and its repr(). + 'u'Helper to get optional details about const references + + Returns the dereferenced constant and its repr if the constant + list is defined. + Otherwise returns the constant index and its repr(). + 'b'Helper to get optional details about named references + + Returns the dereferenced name as both value and repr if the name + list is defined. + Otherwise returns the name index and its repr(). + 'u'Helper to get optional details about named references + + Returns the dereferenced name as both value and repr if the name + list is defined. + Otherwise returns the name index and its repr(). + 'b'Iterate over the instructions in a bytecode string. + + Generates a sequence of Instruction namedtuples giving the details of each + opcode. Additional information about the code's runtime environment + (e.g. variable names, constants) can be specified using optional + arguments. + + 'u'Iterate over the instructions in a bytecode string. + + Generates a sequence of Instruction namedtuples giving the details of each + opcode. Additional information about the code's runtime environment + (e.g. variable names, constants) can be specified using optional + arguments. + + 'b'to 'u'to 'b'with format'u'with format'b'Disassemble a code object.'u'Disassemble a code object.'b'Disassembly of %r:'u'Disassembly of %r:'b'Compile the source string, then disassemble the code object.'u'Compile the source string, then disassemble the code object.'b''u''b'Detect all offsets in a byte code which are jump targets. + + Return the list of offsets. + + 'u'Detect all offsets in a byte code which are jump targets. + + Return the list of offsets. + + 'b'Find the offsets in a byte code which are start of lines in the source. + + Generate pairs (offset, lineno) as described in Python/compile.c. + + 'u'Find the offsets in a byte code which are start of lines in the source. + + Generate pairs (offset, lineno) as described in Python/compile.c. + + 'b'The bytecode operations of a piece of code + + Instantiate this with a function, method, other compiled object, string of + code, or a code object (as returned by compile()). + + Iterating over this yields the bytecode operations as Instruction instances. + 'u'The bytecode operations of a piece of code + + Instantiate this with a function, method, other compiled object, string of + code, or a code object (as returned by compile()). + + Iterating over this yields the bytecode operations as Instruction instances. + 'b'{}({!r})'u'{}({!r})'b' Construct a Bytecode from the given traceback 'u' Construct a Bytecode from the given traceback 'b'Return formatted information about the code object.'u'Return formatted information about the code object.'b'Return a formatted view of the bytecode operations.'u'Return a formatted view of the bytecode operations.'b'Simple test program to disassemble a file.'u'Simple test program to disassemble a file.'b'infile'u'infile'Module doctest -- a framework for running examples in docstrings. 
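The remaining dis strings above cover the Bytecode convenience class, which wraps the same machinery (info(), dis(), iteration over Instruction tuples) behind a single object. A brief sketch with an invented sample function:

import dis

def sample(n):
    return sum(i * i for i in range(n))

bc = dis.Bytecode(sample)
print(bc.info())              # same summary text as dis.code_info(sample)
print(bc.dis())               # disassembly returned as a string
for ins in bc:                # Bytecode objects iterate over Instruction tuples
    if ins.is_jump_target:
        print("jump target at offset", ins.offset)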
+ +In simplest use, end each module M to be tested with: + +def _test(): + import doctest + doctest.testmod() + +if __name__ == "__main__": + _test() + +Then running the module as a script will cause the examples in the +docstrings to get executed and verified: + +python M.py + +This won't display anything unless an example fails, in which case the +failing example(s) and the cause(s) of the failure(s) are printed to stdout +(why not stderr? because stderr is a lame hack <0.2 wink>), and the final +line of output is "Test failed.". + +Run it with the -v switch instead: + +python M.py -v + +and a detailed report of all examples tried is printed to stdout, along +with assorted summaries at the end. + +You can force verbose mode by passing "verbose=True" to testmod, or prohibit +it by passing "verbose=False". In either of those cases, sys.argv is not +examined by testmod. + +There are a variety of other ways to run doctests, including integration +with the unittest framework, and support for running non-Python text +files containing doctests. There are also many ways to override parts +of doctest's default behaviors. See the Library Reference Manual for +details. +reStructuredText en__docformat__register_optionflagDONT_ACCEPT_TRUE_FOR_1DONT_ACCEPT_BLANKLINENORMALIZE_WHITESPACEELLIPSISSKIPIGNORE_EXCEPTION_DETAILCOMPARISON_FLAGSREPORT_UDIFFREPORT_CDIFFREPORT_NDIFFREPORT_ONLY_FIRST_FAILUREREPORTING_FLAGSFAIL_FASTExampleDocTestDocTestParserDocTestFinderDocTestRunnerOutputCheckerDocTestFailureUnexpectedExceptionDebugRunnertestfilerun_docstring_examplesDocTestSuiteDocFileSuiteset_unittest_reportflagsscript_from_examplestestsourcedebug_srcpdbTestResultsfailed attemptedOPTIONFLAGS_BY_NAMEBLANKLINE_MARKERELLIPSIS_MARKER_extract_future_flagsglobs + Return the compiler-flags associated with the future features that + have been imported into the given namespace (globs). + _normalize_module + Return the module specified by `module`. In particular: + - If `module` is a module, then return module. + - If `module` is a string, then import and return the + module with that name. + - If `module` is None, then return the calling module. + The calling module is assumed to be the module of + the stack frame at the given depth in the call stack. + ismoduleExpected a module, string, or None_newline_convert_load_testfilemodule_relative_module_relative_pathfile_contents + Add the given number of space characters to the beginning of + every non-blank line in `s`, and return the result. + (?m)^(?!$)_exception_traceback + Return a string containing a traceback message for the given + exc_info tuple (as returned by sys.exc_info()). + excout_SpoofOut_ellipsis_matchwantgot + Essentially the only subtle case: + >>> _ellipsis_match('aa...aa', 'aaa') + False + startposendpos_comment_lineReturn a commented form of the given line_strip_exception_details_OutputRedirectingPdbPdb + A specialized version of the python debugger that redirects stdout + to a given stream when interacting with the user. Stdout is *not* + redirected when traced code is executed. + __out__debugger_usednosigintsave_stdouttest_pathExpected a module: %rModule-relative files may not have absolute pathsbasedirfullpathCan't resolve paths relative to the module %r (it has no __file__)"Can't resolve paths relative to the module ""%r (it has no __file__)" + A single doctest example, consisting of source code and expected + output. `Example` defines the following attributes: + + - source: A single Python statement, always ending with a newline. 
+ The constructor adds a newline if needed. + + - want: The expected output from running the source code (either + from stdout, or a traceback in case of exception). `want` ends + with a newline unless it's empty, in which case it's an empty + string. The constructor adds a newline if needed. + + - exc_msg: The exception message generated by the example, if + the example is expected to generate an exception; or `None` if + it is not expected to generate an exception. This exception + message is compared against the return value of + `traceback.format_exception_only()`. `exc_msg` ends with a + newline unless it's `None`. The constructor adds a newline + if needed. + + - lineno: The line number within the DocTest string containing + this Example where the Example begins. This line number is + zero-based, with respect to the beginning of the DocTest. + + - indent: The example's indentation in the DocTest string. + I.e., the number of space characters that precede the + example's first prompt. + + - options: A dictionary mapping from option flags to True or + False, which is used to override default options for this + example. Any option flags not contained in this dictionary + are left at their default value (as specified by the + DocTestRunner's optionflags). By default, no options are set. + exc_msg + A collection of doctest examples that should be run in a single + namespace. Each `DocTest` defines the following attributes: + + - examples: the list of examples. + + - globs: The namespace (aka globals) that the examples should + be run in. + + - name: A name identifying the DocTest (typically, the name of + the object whose docstring this DocTest was extracted from). + + - filename: The name of the file that this DocTest was extracted + from, or `None` if the filename is unknown. + + - lineno: The line number within filename where this DocTest + begins, or `None` if the line number is unavailable. This + line number is zero-based, with respect to the beginning of + the file. + + - docstring: The string that the examples were extracted from, + or `None` if the string is unavailable. + examplesdocstring + Create a new DocTest containing the given examples. The + DocTest's globals are initialized with a copy of `globs`. + DocTest no longer accepts str; use DocTestParser insteadno examples1 example%d examples<%s %s from %s:%s (%s)> + A class used to parse strings containing doctest examples. + + # Source consists of a PS1 line followed by zero or more PS2 lines. + (?P + (?:^(?P [ ]*) >>> .*) # PS1 line + (?:\n [ ]* \.\.\. .*)*) # PS2 lines + \n? + # Want consists of any non-blank lines that do not start with PS1. + (?P (?:(?![ ]*$) # Not a blank line + (?![ ]*>>>) # Not a line starting with PS1 + .+$\n? # But any other line + )*) + r'''MULTILINE_EXAMPLE_RE + # Grab the traceback header. Different versions of Python have + # said different things on the first traceback line. + ^(?P Traceback\ \( + (?: most\ recent\ call\ last + | innermost\ last + ) \) : + ) + \s* $ # toss trailing whitespace on the header. + (?P .*?) # don't blink: absorb stuff until... + ^ (?P \w+ .*) # a line *starts* with alphanum. + _EXCEPTION_RE^[ ]*(#.*)?$_IS_BLANK_OR_COMMENT + Divide the given string into examples and intervening text, + and return them as a list of alternating Examples and strings. + Line numbers for the Examples are 0-based. The optional + argument `name` is a name identifying this string, and is only + used for error messages. 
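A small illustrative sketch of the parse path the Example, DocTest and DocTestParser docstrings above describe (the sample string and the name 'demo' are made up): get_doctest turns a string into a DocTest whose Example objects carry source, want and a zero-based lineno.

import doctest

sample = """
Text around the example is kept as intervening prose.

>>> 2 + 2
4
"""
parser = doctest.DocTestParser()
test = parser.get_doctest(sample, globs={}, name='demo', filename=None, lineno=0)
for example in test.examples:
    # lineno is zero-based with respect to the start of the parsed string
    print(example.lineno, repr(example.source), '->', repr(example.want))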
+ _min_indentmin_indentcharno_parse_exampleget_doctest + Extract all doctest examples from the given string, and + collect them into a `DocTest` object. + + `globs`, `name`, `filename`, and `lineno` are attributes for + the new `DocTest` object. See the documentation for `DocTest` + for more information. + get_examples + Extract all doctest examples from the given string, and return + them as a list of `Example` objects. Line numbers are + 0-based, because it's most common in doctests that nothing + interesting appears on the same line as opening triple-quote, + and so the first interesting line is called "line 1" then. + + The optional argument `name` is a name identifying this + string, and is only used for error messages. + + Given a regular expression match from `_EXAMPLE_RE` (`m`), + return a pair `(source, want)`, where `source` is the matched + example's source code (with prompts and indentation stripped); + and `want` is the example's expected output (with indentation + stripped). + + `name` is the string's name, and `lineno` is the line number + where the example starts; both are used for error messages. + source_lines_check_prompt_blank_check_prefixslwant_lines *$wl_find_options#\s*doctest:\s*([^\n\'"]*)$_OPTION_DIRECTIVE_RE + Return a dictionary containing option overrides extracted from + option directives in the given source string. + + `name` is the string's name, and `lineno` is the line number + where the example starts; both are used for error messages. + line %r of the doctest for %s has an invalid option: %r'line %r of the doctest for %s ''has an invalid option: %r'line %r of the doctest for %s has an option directive on a line with no example: %r'line %r of the doctest for %s has an option ''directive on a line with no example: %r'^([ ]*)(?=\S)_INDENT_REReturn the minimum indentation of any non-blank line in `s`indents + Given the lines of a source string (including prompts and + leading indentation), check to make sure that every prompt is + followed by a space character. If any line is not followed by + a space character, then raise ValueError. + line %r of the docstring for %s lacks blank after %s: %r'line %r of the docstring for %s ''lacks blank after %s: %r' + Check that every line in the given list starts with the given + prefix; if any line does not, then raise a ValueError. + line %r of the docstring for %s has inconsistent leading whitespace: %r'line %r of the docstring for %s has ''inconsistent leading whitespace: %r' + A class used to extract the DocTests that are relevant to a given + object, from its docstring and the docstrings of its contained + objects. Doctests can currently be extracted from the following + object types: modules, functions, classes, methods, staticmethods, + classmethods, and properties. + recurseexclude_empty + Create a new doctest finder. + + The optional argument `parser` specifies a class or + function that should be used to create new DocTest objects (or + objects that implement the same interface as DocTest). The + signature for this factory function should match the signature + of the DocTest constructor. + + If the optional argument `recurse` is false, then `find` will + only examine the given object, and not any contained objects. + + If the optional argument `exclude_empty` is false, then `find` + will include tests for objects with empty docstrings. + _verbose_recurse_exclude_emptyextraglobs + Return a list of the DocTests that are defined by the given + object's docstring, or by any of its contained objects' + docstrings. 
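An illustrative sketch of the inline '# doctest:' directives parsed by _find_options above: the parsed flags land in Example.options and override the runner's defaults for that single example only.

import doctest

sample = """
>>> print(list(range(20)))    # doctest: +ELLIPSIS
[0, 1, 2, ..., 19]
"""
test = doctest.DocTestParser().get_doctest(sample, {}, 'ellipsis-demo', None, 0)
# The directive becomes a per-example override mapping doctest.ELLIPSIS to True
print(test.examples[0].options)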
+ + The optional parameter `module` is the module that contains + the given object. If the module is not specified or is None, then + the test finder will attempt to automatically determine the + correct module. The object's module is used: + + - As a default namespace, if `globs` is not specified. + - To prevent the DocTestFinder from extracting DocTests + from objects that are imported from other modules. + - To find the name of the file containing the object. + - To help find the line number of the object within its + file. + + Contained objects whose module does not match `module` are ignored. + + If `module` is False, no attempt to find the module will be made. + This is obscure, of use mostly in tests: if `module` is False, or + is None but cannot be found automatically, then all objects are + considered to belong to the (non-existent) module, so all contained + objects will (recursively) be searched for doctests. + + The globals for each DocTest is formed by combining `globs` + and `extraglobs` (bindings in `extraglobs` override bindings + in `globs`). A new copy of the globals dictionary is created + for each DocTest. If `globs` is not specified, then it + defaults to the module's `__dict__`, if specified, or {} + otherwise. If `extraglobs` is not specified, then it defaults + to {}. + + DocTestFinder.find: name must be given when obj.__name__ doesn't exist: %r"DocTestFinder.find: name must be given ""when obj.__name__ doesn't exist: %r"getmodulegetsourcefilegetfile<]>getlines_find_from_module + Return true if the given object is defined in the given + module. + isfunctionismethoddescriptorobj_modisclassobject must be a class or function + Find tests for the given object and any contained objects, and + add them to `tests`. + Finding tests in %s_get_testvalnameisroutineunwrap__test__DocTestFinder.find: __test__ keys must be strings: %r"DocTestFinder.find: __test__ keys ""must be strings: %r"DocTestFinder.find: __test__ values must be strings, functions, methods, classes, or modules: %r"DocTestFinder.find: __test__ values ""must be strings, functions, methods, ""classes, or modules: %r"%s.__test__.%s + Return a DocTest for the given object, if it defines a docstring; + otherwise, return None. + _find_lineno + Return a line number of the given object's docstring. Note: + this method assumes that the object has a docstring. + ^\s*class\s*%s\bismethodistracebackisframeiscode(^|.*:)\s*\w*("|\') + A class used to run DocTest test cases, and accumulate statistics. + The `run` method is used to process a single DocTest case. It + returns a tuple `(f, t)`, where `t` is the number of test cases + tried, and `f` is the number of test cases that failed. + + >>> tests = DocTestFinder().find(_TestClass) + >>> runner = DocTestRunner(verbose=False) + >>> tests.sort(key = lambda test: test.name) + >>> for test in tests: + ... print(test.name, '->', runner.run(test)) + _TestClass -> TestResults(failed=0, attempted=2) + _TestClass.__init__ -> TestResults(failed=0, attempted=2) + _TestClass.get -> TestResults(failed=0, attempted=2) + _TestClass.square -> TestResults(failed=0, attempted=1) + + The `summarize` method prints a summary of all the test cases that + have been run by the runner, and returns an aggregated `(f, t)` + tuple: + + >>> runner.summarize(verbose=1) + 4 items passed all tests: + 2 tests in _TestClass + 2 tests in _TestClass.__init__ + 2 tests in _TestClass.get + 1 tests in _TestClass.square + 7 tests in 4 items. + 7 passed and 0 failed. + Test passed. 
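A sketch of the find/run pipeline the DocTestFinder and DocTestRunner docstrings describe (square is a stand-in function): find() extracts DocTest objects, run() executes each and returns a TestResults tuple, summarize() aggregates across runs.

import doctest

def square(x):
    """
    >>> square(3)
    9
    """
    return x * x

finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner(verbose=False)
for test in finder.find(square):
    print(test.name, '->', runner.run(test))   # e.g. TestResults(failed=0, attempted=1)
print(runner.summarize(verbose=False))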
+ TestResults(failed=0, attempted=7) + + The aggregated number of tried examples and failed examples is + also available via the `tries` and `failures` attributes: + + >>> runner.tries + 7 + >>> runner.failures + 0 + + The comparison between expected outputs and actual outputs is done + by an `OutputChecker`. This comparison may be customized with a + number of option flags; see the documentation for `testmod` for + more information. If the option flags are insufficient, then the + comparison may also be customized by passing a subclass of + `OutputChecker` to the constructor. + + The test runner's display output can be controlled in two ways. + First, an output function (`out) can be passed to + `TestRunner.run`; this function will be called with strings that + should be displayed. It defaults to `sys.stdout.write`. If + capturing the output is not sufficient, then the display output + can be also customized by subclassing DocTestRunner, and + overriding the methods `report_start`, `report_success`, + `report_unexpected_exception`, and `report_failure`. + DIVIDERchecker + Create a new test runner. + + Optional keyword arg `checker` is the `OutputChecker` that + should be used to compare the expected outputs and actual + outputs of doctest examples. + + Optional keyword arg 'verbose' prints lots of stuff if true, + only failures if false; by default, it's true iff '-v' is in + sys.argv. + + Optional argument `optionflags` can be used to control how the + test runner compares expected output to actual output, and how + it displays failures. See the documentation for `testmod` for + more information. + _checker-voriginal_optionflagstries_name2ft_fakeoutreport_startexample + Report that the test runner is about to process the given + example. (Only displays a message if verbose=True) + Trying: +Expecting: +Expecting nothing +report_success + Report that the given example ran successfully. (Only + displays a message if verbose=True) + ok +report_failure + Report that the given example failed. + _failure_headeroutput_differencereport_unexpected_exception + Report that the given example raised an unexpected exception. + Exception raised: +File "%s", line %s, in %sLine %s, in %sFailed example:__runcompileflags + Run the examples in `test`. Write the outcome of each example + with one of the `DocTestRunner.report_*` methods, using the + writer function `out`. `compileflags` is the set of compiler + flags that should be used to execute examples. Return a tuple + `(f, t)`, where `t` is the number of examples tried, and `f` + is the number of examples that failed. The examples are run + in the namespace `test.globs`. + SUCCESSBOOMexamplenumoptionflagdebuggerunknown outcome__record_outcome + Record the fact that the given DocTest (`test`) generated `f` + failures out of `t` tried examples. + f2.+)\[(?P\d+)\]>$r'.+)'r'\[(?P\d+)\]>$'__LINECACHE_FILENAME_RE__patched_linecache_getlinesmodule_globalssave_linecache_getlinesclear_globs + Run the examples in `test`, and display the results using the + writer function `out`. + + The examples are run in the namespace `test.globs`. If + `clear_globs` is true (the default), then this namespace will + be cleared after the test runs, to help with garbage + collection. If you would like to examine the namespace after + the test completes, then use `clear_globs=False`. + + `compileflags` gives the set of flags that should be used by + the Python compiler when running the examples. 
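A sketch of the output hook mentioned in the run() docstring above: run() accepts a writer function via out, so the failure report can be captured instead of going to sys.stdout (the deliberately wrong expected output '3' forces a report).

import doctest

test = doctest.DocTestParser().get_doctest('>>> 1 + 1\n3\n', {}, 'demo', None, 0)
report = []
runner = doctest.DocTestRunner(verbose=False)
print(runner.run(test, out=report.append))   # TestResults(failed=1, attempted=1)
print(''.join(report))                       # the formatted failure report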
If not + specified, then it will default to the set of future-import + flags that apply to `globs`. + + The output of each example is checked using + `DocTestRunner.check_output`, and the results are formatted by + the `DocTestRunner.report_*` methods. + save_tracesave_set_tracesave_displayhooksummarize + Print a summary of all the test cases that have been run by + this DocTestRunner, and return a tuple `(f, t)`, where `f` is + the total number of failed examples, and `t` is the total + number of tried examples. + + The optional `verbose` argument controls how detailed the + summary is. If the verbosity is not specified, then the + DocTestRunner's verbosity is used. + notestspassedfailedtotalttotalfitems had no tests:items passed all tests: %3d tests in %sitems had failures: %3d of %3d in %stests initems.passed andfailed.***Test Failed***failures.Test passed.merge + A class used to check the whether the actual output from a doctest + example matches the expected output. `OutputChecker` defines two + methods: `check_output`, which compares a given pair of outputs, + and returns true if they match; and `output_difference`, which + returns a string describing the differences between two outputs. + _toAscii + Convert string to hex-escaped ASCII string. + + Return True iff the actual output from an example (`got`) + matches the expected output (`want`). These strings are + always considered to match if they are identical; but + depending on what option flags the test runner is using, + several non-exact match types are also possible. See the + documentation for `TestRunner` for more information about + option flags. + True +1 +False +0 +(?m)^%s\s*?$(?m)^[^\S\n]+$_do_a_fancy_diff + Return a string describing the differences between the + expected output for a given example (`example`) and the actual + output (`got`). `optionflags` is the set of option flags used + to compare `want` and `got`. + (?m)^[ ]*(?= +)got_linesunified diff with -expected +actualcontext diff with expected followed by actualenginendiff with -expected +actualBad diff optionDifferences (%s): +Expected: +%sGot: +%sExpected: +%sGot nothing +Expected nothing +Got: +%sExpected nothing +Got nothing +A DocTest example has failed in debugging mode. + + The exception instance has variables: + + - test: the DocTest object being run + + - example: the Example object that failed + + - got: the actual output + A DocTest example has encountered an unexpected exception + + The exception instance has variables: + + - test: the DocTest object being run + + - example: the Example object that failed + + - exc_info: the exception info + Run doc tests but raise an exception as soon as there is a failure. + + If an unexpected exception occurs, an UnexpectedException is raised. + It contains the test, the example, and the original exception: + + >>> runner = DebugRunner(verbose=False) + >>> test = DocTestParser().get_doctest('>>> raise KeyError\n42', + ... {}, 'foo', 'foo.py', 0) + >>> try: + ... runner.run(test) + ... except UnexpectedException as f: + ... failure = f + + >>> failure.test is test + True + + >>> failure.example.want + '42\n' + + >>> exc_info = failure.exc_info + >>> raise exc_info[1] # Already has the traceback + Traceback (most recent call last): + ... + KeyError + + We wrap the original exception to give the calling application + access to the test and example information. + + If the output doesn't match, then a DocTestFailure is raised: + + >>> test = DocTestParser().get_doctest(''' + ... >>> x = 1 + ... >>> x + ... 2 + ... 
''', {}, 'foo', 'foo.py', 0) + + >>> try: + ... runner.run(test) + ... except DocTestFailure as f: + ... failure = f + + DocTestFailure objects provide access to the test: + + >>> failure.test is test + True + + As well as to the example: + + >>> failure.example.want + '2\n' + + and the actual output: + + >>> failure.got + '1\n' + + If a failure or error occurs, the globals are left intact: + + >>> del test.globs['__builtins__'] + >>> test.globs + {'x': 1} + + >>> test = DocTestParser().get_doctest(''' + ... >>> x = 2 + ... >>> raise KeyError + ... ''', {}, 'foo', 'foo.py', 0) + + >>> runner.run(test) + Traceback (most recent call last): + ... + doctest.UnexpectedException: + + >>> del test.globs['__builtins__'] + >>> test.globs + {'x': 2} + + But the globals are cleared if there is no error: + + >>> test = DocTestParser().get_doctest(''' + ... >>> x = 2 + ... ''', {}, 'foo', 'foo.py', 0) + + >>> runner.run(test) + TestResults(failed=0, attempted=1) + + >>> test.globs + {} + + reportraise_on_errorm=None, name=None, globs=None, verbose=None, report=True, + optionflags=0, extraglobs=None, raise_on_error=False, + exclude_empty=False + + Test examples in docstrings in functions and classes reachable + from module m (or the current module if m is not supplied), starting + with m.__doc__. + + Also test examples reachable from dict m.__test__ if it exists and is + not None. m.__test__ maps names to functions, classes and strings; + function and class docstrings are tested even if the name is private; + strings are tested directly, as if they were docstrings. + + Return (#failures, #tests). + + See help(doctest) for an overview. + + Optional keyword arg "name" gives the name of the module; by default + use m.__name__. + + Optional keyword arg "globs" gives a dict to be used as the globals + when executing examples; by default, use m.__dict__. A copy of this + dict is actually used for each docstring, so that each docstring's + examples start with a clean slate. + + Optional keyword arg "extraglobs" gives a dictionary that should be + merged into the globals that are used to execute examples. By + default, no extra globals are used. This is new in 2.4. + + Optional keyword arg "verbose" prints lots of stuff if true, prints + only failures if false; by default, it's true iff "-v" is in sys.argv. + + Optional keyword arg "report" prints a summary at the end when true, + else prints nothing at the end. In verbose mode, the summary is + detailed, else very brief (in fact, empty if all tests passed). + + Optional keyword arg "optionflags" or's together module constants, + and defaults to 0. This is new in 2.3. Possible values (see the + docs for details): + + DONT_ACCEPT_TRUE_FOR_1 + DONT_ACCEPT_BLANKLINE + NORMALIZE_WHITESPACE + ELLIPSIS + SKIP + IGNORE_EXCEPTION_DETAIL + REPORT_UDIFF + REPORT_CDIFF + REPORT_NDIFF + REPORT_ONLY_FIRST_FAILURE + + Optional keyword arg "raise_on_error" raises an exception on the + first unexpected exception or failure. This allows failures to be + post-mortem debugged. + + Advanced tomfoolery: testmod runs methods of a local instance of + class doctest.Tester, then merges the results into (or creates) + global Tester instance doctest.master. Methods of doctest.master + can be called directly too, if you want to do something unusual. + Passing report=0 to testmod is especially useful then, to delay + displaying a summary. Invoke doctest.master.summarize(verbose) + when you're done fiddling. + testmod: module required; %r + Test examples in the given file. 
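A sketch of the testmod() entry point whose docstring appears above: option flags are OR'ed together, and the return value is the (#failures, #tests) pair as a TestResults namedtuple.

import doctest

if __name__ == "__main__":
    flags = doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE
    results = doctest.testmod(verbose=False, optionflags=flags)
    print(results)   # e.g. TestResults(failed=0, attempted=...)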
Return (#failures, #tests). + + Optional keyword arg "module_relative" specifies how filenames + should be interpreted: + + - If "module_relative" is True (the default), then "filename" + specifies a module-relative path. By default, this path is + relative to the calling module's directory; but if the + "package" argument is specified, then it is relative to that + package. To ensure os-independence, "filename" should use + "/" characters to separate path segments, and should not + be an absolute path (i.e., it may not begin with "/"). + + - If "module_relative" is False, then "filename" specifies an + os-specific path. The path may be absolute or relative (to + the current working directory). + + Optional keyword arg "name" gives the name of the test; by default + use the file's basename. + + Optional keyword argument "package" is a Python package or the + name of a Python package whose directory should be used as the + base directory for a module relative filename. If no package is + specified, then the calling module's directory is used as the base + directory for module relative filenames. It is an error to + specify "package" if "module_relative" is False. + + Optional keyword arg "globs" gives a dict to be used as the globals + when executing examples; by default, use {}. A copy of this dict + is actually used for each docstring, so that each docstring's + examples start with a clean slate. + + Optional keyword arg "extraglobs" gives a dictionary that should be + merged into the globals that are used to execute examples. By + default, no extra globals are used. + + Optional keyword arg "verbose" prints lots of stuff if true, prints + only failures if false; by default, it's true iff "-v" is in sys.argv. + + Optional keyword arg "report" prints a summary at the end when true, + else prints nothing at the end. In verbose mode, the summary is + detailed, else very brief (in fact, empty if all tests passed). + + Optional keyword arg "optionflags" or's together module constants, + and defaults to 0. Possible values (see the docs for details): + + DONT_ACCEPT_TRUE_FOR_1 + DONT_ACCEPT_BLANKLINE + NORMALIZE_WHITESPACE + ELLIPSIS + SKIP + IGNORE_EXCEPTION_DETAIL + REPORT_UDIFF + REPORT_CDIFF + REPORT_NDIFF + REPORT_ONLY_FIRST_FAILURE + + Optional keyword arg "raise_on_error" raises an exception on the + first unexpected exception or failure. This allows failures to be + post-mortem debugged. + + Optional keyword arg "parser" specifies a DocTestParser (or + subclass) that should be used to extract tests from the files. + + Optional keyword arg "encoding" specifies an encoding that should + be used to convert the file to unicode. + + Advanced tomfoolery: testmod runs methods of a local instance of + class doctest.Tester, then merges the results into (or creates) + global Tester instance doctest.master. Methods of doctest.master + can be called directly too, if you want to do something unusual. + Passing report=0 to testmod is especially useful then, to delay + displaying a summary. Invoke doctest.master.summarize(verbose) + when you're done fiddling. + Package may only be specified for module-relative paths."Package may only be specified for module-""relative paths."NoName + Test examples in the given object's docstring (`f`), using `globs` + as globals. Optional argument `name` is used in failure messages. + If the optional argument `verbose` is true, then generate output + even if there are no failures. 
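A sketch of testfile() as described above; "example.txt" is a placeholder path, and module_relative=False makes it an ordinary OS path rather than a package-relative one. The same file can also be checked from the shell with 'python -m doctest -v example.txt'.

import doctest

results = doctest.testfile("example.txt",          # hypothetical doctest file
                           module_relative=False,  # treat as an OS path
                           optionflags=doctest.REPORT_NDIFF)
print(results)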
+ + `compileflags` gives the set of flags that should be used by the + Python compiler when running the examples. If not specified, then + it will default to the set of future-import flags that apply to + `globs`. + + Optional keyword arg `optionflags` specifies options for the + testing and output. See the documentation for `testmod` for more + information. + _unittest_reportflagsSets the unittest option flags. + + The old flag is returned so that a runner could restore the old + value if it wished to: + + >>> import doctest + >>> old = doctest._unittest_reportflags + >>> doctest.set_unittest_reportflags(REPORT_NDIFF | + ... REPORT_ONLY_FIRST_FAILURE) == old + True + + >>> doctest._unittest_reportflags == (REPORT_NDIFF | + ... REPORT_ONLY_FIRST_FAILURE) + True + + Only reporting flags can be set: + + >>> doctest.set_unittest_reportflags(ELLIPSIS) + Traceback (most recent call last): + ... + ValueError: ('Only reporting flags allowed', 8) + + >>> doctest.set_unittest_reportflags(old) == (REPORT_NDIFF | + ... REPORT_ONLY_FIRST_FAILURE) + True + Only reporting flags allowedDocTestCase_dt_optionflags_dt_checker_dt_test_dt_setUp_dt_tearDownformat_failureunknown line numberlnameFailed doctest test for %s + File "%s", line %s, in %s + +%s'Failed doctest test for %s\n'' File "%s", line %s, in %s\n\n%s'Run the test case without results and without catching exceptions + + The unit test framework includes a debug method on test cases + and test suites to support post-mortem debugging. The test code + is run in such a way that errors are not caught. This way a + caller can catch the errors and initiate post-mortem debugging. + + The DocTestCase provides a debug method that raises + UnexpectedException errors if there is an unexpected + exception: + + >>> test = DocTestParser().get_doctest('>>> raise KeyError\n42', + ... {}, 'foo', 'foo.py', 0) + >>> case = DocTestCase(test) + >>> try: + ... case.debug() + ... except UnexpectedException as f: + ... failure = f + + The UnexpectedException contains the test, the example, and + the original exception: + + >>> failure.test is test + True + + >>> failure.example.want + '42\n' + + >>> exc_info = failure.exc_info + >>> raise exc_info[1] # Already has the traceback + Traceback (most recent call last): + ... + KeyError + + If the output doesn't match, then a DocTestFailure is raised: + + >>> test = DocTestParser().get_doctest(''' + ... >>> x = 1 + ... >>> x + ... 2 + ... ''', {}, 'foo', 'foo.py', 0) + >>> case = DocTestCase(test) + + >>> try: + ... case.debug() + ... except DocTestFailure as f: + ... failure = f + + DocTestFailure objects provide access to the test: + + >>> failure.test is test + True + + As well as to the example: + + >>> failure.example.want + '2\n' + + and the actual output: + + >>> failure.got + '1\n' + + Doctest: SkipDocTestCaseDocTestSuite will not work with -O2 and abovetest_skipSkipping tests from %s_DocTestSuite_removeTestAtIndextest_finder + Convert doctest tests for a module to a unittest test suite. + + This converts each documentation string in a module that + contains doctest tests to a unittest test case. If any of the + tests in a doc string fail, then the test case fails. An exception + is raised showing the name of the file containing the test and a + (sometimes approximate) line number. + + The `module` argument provides the module to be tested. The argument + can be either a module or a module name. + + If no argument is given, the calling module is used. 
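A sketch of the unittest integration DocTestSuite provides (my_module is a hypothetical module whose docstrings hold doctests): the suite turns each docstring into a test case, wired in through the standard load_tests hook.

import doctest
import unittest

import my_module   # hypothetical: any module containing doctests

def load_tests(loader, tests, ignore):
    tests.addTests(doctest.DocTestSuite(my_module))
    return tests

if __name__ == "__main__":
    unittest.main()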
+ + A number of options may be provided as keyword arguments: + + setUp + A set-up function. This is called before running the + tests in each file. The setUp function will be passed a DocTest + object. The setUp function can access the test globals as the + globs attribute of the test passed. + + tearDown + A tear-down function. This is called after running the + tests in each file. The tearDown function will be passed a DocTest + object. The tearDown function can access the test globals as the + globs attribute of the test passed. + + globs + A dictionary containing initial global variables for the tests. + + optionflags + A set of doctest option flags expressed as an integer. + DocFileCaseFailed doctest test for %s + File "%s", line 0 + +%sDocFileTestA unittest suite for one or more doctest files. + + The path to each doctest file is given as a string; the + interpretation of that string depends on the keyword argument + "module_relative". + + A number of options may be provided as keyword arguments: + + module_relative + If "module_relative" is True, then the given file paths are + interpreted as os-independent module-relative paths. By + default, these paths are relative to the calling module's + directory; but if the "package" argument is specified, then + they are relative to that package. To ensure os-independence, + "filename" should use "/" characters to separate path + segments, and may not be an absolute path (i.e., it may not + begin with "/"). + + If "module_relative" is False, then the given file paths are + interpreted as os-specific paths. These paths may be absolute + or relative (to the current working directory). + + package + A Python package or the name of a Python package whose directory + should be used as the base directory for module relative paths. + If "package" is not specified, then the calling module's + directory is used as the base directory for module relative + filenames. It is an error to specify "package" if + "module_relative" is False. + + setUp + A set-up function. This is called before running the + tests in each file. The setUp function will be passed a DocTest + object. The setUp function can access the test globals as the + globs attribute of the test passed. + + tearDown + A tear-down function. This is called after running the + tests in each file. The tearDown function will be passed a DocTest + object. The tearDown function can access the test globals as the + globs attribute of the test passed. + + globs + A dictionary containing initial global variables for the tests. + + optionflags + A set of doctest option flags expressed as an integer. + + parser + A DocTestParser (or subclass) that should be used to extract + tests from the files. + + encoding + An encoding that will be used to convert the files to unicode. + Extract script from text with examples. + + Converts text with examples to a Python script. Example input is + converted to regular code. Example output and all other words + are converted to comments: + + >>> text = ''' + ... Here are examples of simple math. + ... + ... Python has super accurate integer addition + ... + ... >>> 2 + 2 + ... 5 + ... + ... And very friendly error messages: + ... + ... >>> 1/0 + ... To Infinity + ... And + ... Beyond + ... + ... You can use logic if you want: + ... + ... >>> if 0: + ... ... blah + ... ... blah + ... ... + ... + ... Ho hum + ... ''' + + >>> print(script_from_examples(text)) + # Here are examples of simple math. 
+ # + # Python has super accurate integer addition + # + 2 + 2 + # Expected: + ## 5 + # + # And very friendly error messages: + # + 1/0 + # Expected: + ## To Infinity + ## And + ## Beyond + # + # You can use logic if you want: + # + if 0: + blah + blah + # + # Ho hum + + piece# Expected:## Extract the test sources from a doctest docstring as a script. + + Provide the module (or dotted name of the module) containing the + test to be debugged and the name (within the module) of the object + with the doc string with tests to be debugged. + not found in teststestsrcpmDebug a single doctest docstring, in argument `src`'debug_scriptDebug a test script. `src` is the script, as a string.interactionexec(%r)Debug a single doctest docstring. + + Provide the module (or dotted name of the module) containing the + test to be debugged and the name (within the module) of the object + with the docstring with tests to be debugged. + _TestClass + A pointless class, for sanity-checking of docstring testing. + + Methods: + square() + get() + + >>> _TestClass(13).get() + _TestClass(-12).get() + 1 + >>> hex(_TestClass(13).square().get()) + '0xa9' + val -> _TestClass object with associated value val. + + >>> t = _TestClass(123) + >>> print(t.get()) + 123 + squaresquare() -> square TestClass's associated value + + >>> _TestClass(13).square().get() + 169 + get() -> return TestClass's associated value. + + >>> x = _TestClass(-42) + >>> print(x.get()) + -42 + + Example of a string object, searched as-is. + >>> x = 1; y = 2 + >>> x + y, x * y + (3, 2) + + In 2.2, boolean expressions displayed + 0 or 1. By default, we still accept + them. This can be disabled by passing + DONT_ACCEPT_TRUE_FOR_1 to the new + optionflags argument. + >>> 4 == 4 + 1 + >>> 4 == 4 + True + >>> 4 > 4 + 0 + >>> 4 > 4 + False + bool-int equivalence + Blank lines can be marked with : + >>> print('foo\n\nbar\n') + foo + + bar + + blank lines + If the ellipsis flag is used, then '...' can be used to + elide substrings in the desired output: + >>> print(list(range(1000))) #doctest: +ELLIPSIS + [0, 1, 2, ..., 999] + + If the whitespace normalization flag is used, then + differences in whitespace are ignored. + >>> print(list(range(30))) #doctest: +NORMALIZE_WHITESPACE + [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, + 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, + 27, 28, 29] + whitespace normalizationdoctest runner--verboseprint very verbose output for all tests-o--optionspecify a doctest option flag to apply to the test run; may be specified more than once to apply multiple options'specify a doctest option flag to apply'' to the test run; may be specified more'' than once to apply multiple options'-f--fail-faststop running tests after first failure (this is a shorthand for -o FAIL_FAST, and is in addition to any other -o options)'stop running tests after first failure (this'' is a shorthand for -o FAIL_FAST, and is'' in addition to any other -o options)'file containing the tests to runtestfilesfail_fast# Module doctest.# Released to the public domain 16-Jan-2001, by Tim Peters (tim@python.org).# Major enhancements and refactoring by:# Jim Fulton# Edward Loper# Provided as-is; use at your own risk; no warranty; no promises; enjoy!# 0, Option Flags# 1. Utility Functions# 2. Example & DocTest# 3. Doctest Parser# 4. Doctest Finder# 5. Doctest Runner# 6. Test Functions# 7. Unittest Support# 8. 
Debugging Support# There are 4 basic classes:# - Example: a pair, plus an intra-docstring line number.# - DocTest: a collection of examples, parsed from a docstring, plus# info about where the docstring came from (name, filename, lineno).# - DocTestFinder: extracts DocTests from a given object's docstring and# its contained objects' docstrings.# - DocTestRunner: runs DocTest cases, and accumulates statistics.# So the basic picture is:# list of:# +------+ +---------+ +-------+# |object| --DocTestFinder-> | DocTest | --DocTestRunner-> |results|# | Example |# | ... |# +---------+# Option constants.# Create a new flag unless `name` is already known.# Special string markers for use in `want` strings:######################################################################## Table of Contents# 1. Utility Functions# 2. Example & DocTest -- store test cases# 3. DocTest Parser -- extracts examples from strings# 4. DocTest Finder -- extracts test cases from objects# 5. DocTest Runner -- runs test cases# 6. Test Functions -- convenient wrappers for testing# 7. Unittest Support# 8. Debugging Support# 9. Example Usage## 1. Utility Functions# We have two cases to cover and we need to make sure we do# them in the right order# get_data() opens files as 'rb', so one must do the equivalent# conversion as universal newlines would do.# This regexp matches the start of non-blank lines:# Get a traceback message.# Override some StringIO methods.# If anything at all was written, make sure there's a trailing# newline. There's no way for the expected output to indicate# that a trailing newline is missing.# Worst-case linear-time ellipsis matching.# Find "the real" strings.# Deal with exact matches possibly needed at one or both ends.# starts with exact match# ends with exact match# Exact end matches required more characters than we have, as in# _ellipsis_match('aa...aa', 'aaa')# For the rest, we only need to find the leftmost non-overlapping# match for each piece. If there's no overall match that way alone,# there's no overall match period.# w may be '' at times, if there are consecutive ellipses, or# due to an ellipsis at the start or end of `want`. That's OK.# Search for an empty string succeeds, and doesn't change startpos.# Support for IGNORE_EXCEPTION_DETAIL.# Get rid of everything except the exception name; in particular, drop# the possibly dotted module path (if any) and the exception message (if# any). We assume that a colon is never part of a dotted name, or of an# exception name.# E.g., given# "foo.bar.MyError: la di da"# return "MyError"# Or for "abc.def" or "abc.def:\n" return "def".# The exception name must appear on the first line.# retain up to the first colon (if any)# retain just the exception name# do not play signal games in the pdb# still use input() to get user input# Calling set_continue unconditionally would break unit test# coverage reporting, as Bdb.set_continue calls sys.settrace(None).# Redirect stdout to the given stream.# Call Pdb's trace dispatch method.# [XX] Normalize with respect to os.path.pardir?# Normalize the path. On Windows, replace "/" with "\".# Find the base directory for the path.# A normal module/package# An interactive session.# A module w/o __file__ (this includes builtins)# Combine the base directory and the test path.## 2. Example & DocTest## - An "example" is a pair, where "source" is a## fragment of source code, and "want" is the expected output for## "source." 
The Example class also includes information about## where the example was extracted from.## - A "doctest" is a collection of examples, typically extracted from## a string (such as an object's docstring). The DocTest class also## includes information about where the string was extracted from.# Normalize inputs.# Store properties.# This lets us sort tests by name:## 3. DocTestParser# This regular expression is used to find doctest examples in a# string. It defines three groups: `source` is the source code# (including leading indentation and prompts); `indent` is the# indentation of the first (PS1) line of the source code; and# `want` is the expected output (including leading indentation).# A regular expression for handling `want` strings that contain# expected exceptions. It divides `want` into three pieces:# - the traceback header line (`hdr`)# - the traceback stack (`stack`)# - the exception message (`msg`), as generated by# traceback.format_exception_only()# `msg` may have multiple lines. We assume/require that the# exception message is the first non-indented line starting with a word# character following the traceback header line.# A callable returning a true value iff its argument is a blank line# or contains a single comment.# If all lines begin with the same indentation, then strip it.# Find all doctest examples in the string:# Add the pre-example text to `output`.# Update lineno (lines before this example)# Extract info from the regexp match.# Create an Example, and add it to the list.# Update lineno (lines inside this example)# Update charno.# Add any remaining post-example text to `output`.# Get the example's indentation level.# Divide source into lines; check that they're properly# indented; and then strip their indentation & prompts.# Divide want into lines; check that it's properly indented; and# then strip the indentation. Spaces before the last newline should# be preserved, so plain rstrip() isn't good enough.# forget final newline & spaces after it# If `want` contains a traceback message, then extract it.# Extract options from the source.# This regular expression looks for option directives in the# source code of an example. Option directives are comments# starting with "doctest:". Warning: this may give false# positives for string-literals that contain the string# "#doctest:". Eliminating these false positives would require# actually parsing the string; but we limit them by ignoring any# line containing "#doctest:" that is *followed* by a quote mark.# (note: with the current regexp, this will match at most once:)# This regular expression finds the indentation of every non-blank# line in a string.## 4. DocTest Finder# If name was not specified, then extract it from the object.# Find the module that contains the given object (if obj is# a module, then module=obj.). Note: this may fail, in which# case module will be None.# Read the module's source code. This is used by# DocTestFinder._find_lineno to find the line number for a# given object's docstring.# Check to see if it's one of our special internal "files"# (see __patched_linecache_getlines).# Supply the module globals in case the module was# originally loaded via a PEP 302 loader and# file is not a valid filesystem path# No access to a loader, so assume it's a normal# filesystem path# Initialize globals, and merge in extraglobs.# provide a default module name# Recursively explore `obj`, extracting DocTests.# Sort the tests by alpha order of names, for consistency in# verbose-mode output. 
This was a feature of doctest in Pythons# <= 2.3 that got lost by accident in 2.4. It was repaired in# 2.4.4 and 2.5.# [XX] no easy way to tell otherwise# [XX] no way not be sure.# If we've already processed this object, then ignore it.# Find a test for this object, and add it to the list of tests.# Look for tests in a module's contained objects.# Recurse to functions & classes.# Look for tests in a module's __test__ dictionary.# Look for tests in a class's contained objects.# Special handling for staticmethod/classmethod.# Recurse to methods, properties, and nested classes.# Extract the object's docstring. If it doesn't have one,# then return None (no test for this object).# Find the docstring's location in the file.# Don't bother if the docstring is empty.# Return a DocTest for this object.# __file__ can be None for namespace packages.# Find the line number for modules.# Find the line number for classes.# Note: this could be fooled if a class is defined multiple# times in a single file.# Find the line number for functions & methods.# Find the line number where the docstring starts. Assume# that it's the first line that begins with a quote mark.# Note: this could be fooled by a multiline function# signature, where a continuation line begins with a quote# mark.# We couldn't find the line number.## 5. DocTest Runner# This divider string is used to separate failure messages, and to# separate sections of the summary.# Keep track of the examples we've run.# Create a fake output target for capturing doctest output.#/////////////////////////////////////////////////////////////////# Reporting methods# DocTest Running# Keep track of the number of failures and tries.# Save the option flags (since option directives can be used# to modify them).# `outcome` state# Process each example.# If REPORT_ONLY_FIRST_FAILURE is set, then suppress# reporting after the first failure.# Merge in the example's options.# If 'SKIP' is set, then skip this example.# Record that we started this example.# Use a special filename for compile(), so we can retrieve# the source code during interactive debugging (see# __patched_linecache_getlines).# Run the example in the given context (globs), and record# any exception that gets raised. (But don't intercept# keyboard interrupts.)# Don't blink! 
This is where the user's code gets run.# ==== Example Finished ====# the actual output# guilty until proved innocent or insane# If the example executed without raising any exceptions,# verify its output.# The example raised an exception: check if it was expected.# If `example.exc_msg` is None, then we weren't expecting# an exception.# We expected an exception: see whether it matches.# Another chance if they didn't care about the detail.# Report the outcome.# Restore the option flags (in case they were modified)# Record and return the number of failures and tries.# Use backslashreplace error handling on write# Patch pdb.set_trace to restore sys.stdout during interactive# debugging (so it's not still redirected to self._fakeout).# Note that the interactive output will go to *our*# save_stdout, even if that's not the real sys.stdout; this# allows us to write test cases for the set_trace behavior.# Patch linecache.getlines, so we can see the example's source# when we're inside the debugger.# Make sure sys.displayhook just prints the value to stdout# Summarization# Backward compatibility cruft to maintain doctest.master.# Don't print here by default, since doing# so breaks some of the buildbots#print("*** DocTestRunner.merge: '" + name + "' in both" \# " testers; summing outcomes.")# If `want` contains hex-escaped character such as "\u1234",# then `want` is a string of six characters(e.g. [\,u,1,2,3,4]).# On the other hand, `got` could be another sequence of# characters such as [\u1234], so `want` and `got` should# be folded to hex-escaped ASCII string to compare.# Handle the common case first, for efficiency:# if they're string-identical, always return true.# The values True and False replaced 1 and 0 as the return# value for boolean comparisons in Python 2.3.# can be used as a special sequence to signify a# blank line, unless the DONT_ACCEPT_BLANKLINE flag is used.# Replace in want with a blank line.# If a line in got contains only spaces, then remove the# spaces.# This flag causes doctest to ignore any differences in the# contents of whitespace strings. Note that this can be used# in conjunction with the ELLIPSIS flag.# The ELLIPSIS flag says to let the sequence "..." in `want`# match any substring in `got`.# We didn't find any match; return false.# Should we do a fancy diff?# Not unless they asked for a fancy diff.# If expected output uses ellipsis, a meaningful fancy diff is# too hard ... or maybe not. In two real-life failures Tim saw,# a diff was a major help anyway, so this is commented out.# [todo] _ellipsis_match() knows which pieces do and don't match,# and could be the basis for a kick-ass diff in this case.##if optionflags & ELLIPSIS and ELLIPSIS_MARKER in want:## return False# ndiff does intraline difference marking, so can be useful even# for 1-line differences.# The other diff types need at least a few lines to be helpful.# If s are being used, then replace blank lines# with in the actual output string.# Check if we should use diff.# Split want & got into lines.# Use difflib to find their differences.# strip the diff header# If we're not using diff, then simply list the expected# output followed by the actual output.## 6. 
Test Functions# These should be backwards compatible.# For backward compatibility, a global instance of a DocTestRunner# class, updated by testmod.# If no module was given, then use __main__.# DWA - m will still be None if this wasn't invoked from the command# line, in which case the following TypeError is about as good an error# as we should expect# Check that we were actually given a module.# If no name was given, then use the module's name.# Find, parse, and run all tests in the given module.# Relativize the path# If no name was given, then use the file's name.# Assemble the globals.# Read the file, convert it to a test, and run it.## 7. Unittest Support# The option flags don't include any reporting flags,# so add the default reporting flags# Skip doctests when running with -O2# Relativize the path.# Find the file and read it.# Convert it to a test, and wrap it in a DocFileCase.# We do this here so that _normalize_module is called at the right# level. If it were called in DocFileTest, then this function# would be the caller and we might guess the package incorrectly.## 8. Debugging Support# Add the example's source code (strip trailing NL)# Add the expected output:# Add non-example text.# Trim junk on both ends.# Combine the output, and return it.# Add a courtesy newline to prevent exec from choking (see bug #1172785)## 9. Example Usage# Verbose used to be handled by the "inspect argv" magic in DocTestRunner,# but since we are using argparse we are passing it manually now.# It is a module -- insert its dir into sys.path and try to# import it. If it is part of a package, that possibly# won't work because of package imports.b'Module doctest -- a framework for running examples in docstrings. + +In simplest use, end each module M to be tested with: + +def _test(): + import doctest + doctest.testmod() + +if __name__ == "__main__": + _test() + +Then running the module as a script will cause the examples in the +docstrings to get executed and verified: + +python M.py + +This won't display anything unless an example fails, in which case the +failing example(s) and the cause(s) of the failure(s) are printed to stdout +(why not stderr? because stderr is a lame hack <0.2 wink>), and the final +line of output is "Test failed.". + +Run it with the -v switch instead: + +python M.py -v + +and a detailed report of all examples tried is printed to stdout, along +with assorted summaries at the end. + +You can force verbose mode by passing "verbose=True" to testmod, or prohibit +it by passing "verbose=False". In either of those cases, sys.argv is not +examined by testmod. + +There are a variety of other ways to run doctests, including integration +with the unittest framework, and support for running non-Python text +files containing doctests. There are also many ways to override parts +of doctest's default behaviors. See the Library Reference Manual for +details. +'u'Module doctest -- a framework for running examples in docstrings. + +In simplest use, end each module M to be tested with: + +def _test(): + import doctest + doctest.testmod() + +if __name__ == "__main__": + _test() + +Then running the module as a script will cause the examples in the +docstrings to get executed and verified: + +python M.py + +This won't display anything unless an example fails, in which case the +failing example(s) and the cause(s) of the failure(s) are printed to stdout +(why not stderr? because stderr is a lame hack <0.2 wink>), and the final +line of output is "Test failed.". 
+ +Run it with the -v switch instead: + +python M.py -v + +and a detailed report of all examples tried is printed to stdout, along +with assorted summaries at the end. + +You can force verbose mode by passing "verbose=True" to testmod, or prohibit +it by passing "verbose=False". In either of those cases, sys.argv is not +examined by testmod. + +There are a variety of other ways to run doctests, including integration +with the unittest framework, and support for running non-Python text +files containing doctests. There are also many ways to override parts +of doctest's default behaviors. See the Library Reference Manual for +details. +'b'reStructuredText en'u'reStructuredText en'b'register_optionflag'u'register_optionflag'b'DONT_ACCEPT_TRUE_FOR_1'u'DONT_ACCEPT_TRUE_FOR_1'b'DONT_ACCEPT_BLANKLINE'u'DONT_ACCEPT_BLANKLINE'b'NORMALIZE_WHITESPACE'u'NORMALIZE_WHITESPACE'b'ELLIPSIS'u'ELLIPSIS'b'SKIP'u'SKIP'b'IGNORE_EXCEPTION_DETAIL'u'IGNORE_EXCEPTION_DETAIL'b'COMPARISON_FLAGS'u'COMPARISON_FLAGS'b'REPORT_UDIFF'u'REPORT_UDIFF'b'REPORT_CDIFF'u'REPORT_CDIFF'b'REPORT_NDIFF'u'REPORT_NDIFF'b'REPORT_ONLY_FIRST_FAILURE'u'REPORT_ONLY_FIRST_FAILURE'b'REPORTING_FLAGS'u'REPORTING_FLAGS'b'FAIL_FAST'u'FAIL_FAST'b'Example'u'Example'b'DocTest'u'DocTest'b'DocTestParser'u'DocTestParser'b'DocTestFinder'u'DocTestFinder'b'DocTestRunner'u'DocTestRunner'b'OutputChecker'u'OutputChecker'b'DocTestFailure'u'DocTestFailure'b'UnexpectedException'u'UnexpectedException'b'DebugRunner'u'DebugRunner'b'testmod'u'testmod'b'testfile'u'testfile'b'run_docstring_examples'u'run_docstring_examples'b'DocTestSuite'u'DocTestSuite'b'DocFileSuite'u'DocFileSuite'b'set_unittest_reportflags'u'set_unittest_reportflags'b'script_from_examples'u'script_from_examples'b'testsource'u'testsource'b'debug_src'u'debug_src'b'TestResults'u'TestResults'b'failed attempted'u'failed attempted'b''u''b' + Return the compiler-flags associated with the future features that + have been imported into the given namespace (globs). + 'u' + Return the compiler-flags associated with the future features that + have been imported into the given namespace (globs). + 'b' + Return the module specified by `module`. In particular: + - If `module` is a module, then return module. + - If `module` is a string, then import and return the + module with that name. + - If `module` is None, then return the calling module. + The calling module is assumed to be the module of + the stack frame at the given depth in the call stack. + 'u' + Return the module specified by `module`. In particular: + - If `module` is a module, then return module. + - If `module` is a string, then import and return the + module with that name. + - If `module` is None, then return the calling module. + The calling module is assumed to be the module of + the stack frame at the given depth in the call stack. + 'b'Expected a module, string, or None'u'Expected a module, string, or None'b'get_data'u'get_data'b' + Add the given number of space characters to the beginning of + every non-blank line in `s`, and return the result. + 'u' + Add the given number of space characters to the beginning of + every non-blank line in `s`, and return the result. + 'b'(?m)^(?!$)'u'(?m)^(?!$)'b' + Return a string containing a traceback message for the given + exc_info tuple (as returned by sys.exc_info()). + 'u' + Return a string containing a traceback message for the given + exc_info tuple (as returned by sys.exc_info()). 
+ 'b' + Essentially the only subtle case: + >>> _ellipsis_match('aa...aa', 'aaa') + False + 'u' + Essentially the only subtle case: + >>> _ellipsis_match('aa...aa', 'aaa') + False + 'b'Return a commented form of the given line'u'Return a commented form of the given line'b' + A specialized version of the python debugger that redirects stdout + to a given stream when interacting with the user. Stdout is *not* + redirected when traced code is executed. + 'u' + A specialized version of the python debugger that redirects stdout + to a given stream when interacting with the user. Stdout is *not* + redirected when traced code is executed. + 'b'Expected a module: %r'u'Expected a module: %r'b'Module-relative files may not have absolute paths'u'Module-relative files may not have absolute paths'b'Can't resolve paths relative to the module %r (it has no __file__)'u'Can't resolve paths relative to the module %r (it has no __file__)'b' + A single doctest example, consisting of source code and expected + output. `Example` defines the following attributes: + + - source: A single Python statement, always ending with a newline. + The constructor adds a newline if needed. + + - want: The expected output from running the source code (either + from stdout, or a traceback in case of exception). `want` ends + with a newline unless it's empty, in which case it's an empty + string. The constructor adds a newline if needed. + + - exc_msg: The exception message generated by the example, if + the example is expected to generate an exception; or `None` if + it is not expected to generate an exception. This exception + message is compared against the return value of + `traceback.format_exception_only()`. `exc_msg` ends with a + newline unless it's `None`. The constructor adds a newline + if needed. + + - lineno: The line number within the DocTest string containing + this Example where the Example begins. This line number is + zero-based, with respect to the beginning of the DocTest. + + - indent: The example's indentation in the DocTest string. + I.e., the number of space characters that precede the + example's first prompt. + + - options: A dictionary mapping from option flags to True or + False, which is used to override default options for this + example. Any option flags not contained in this dictionary + are left at their default value (as specified by the + DocTestRunner's optionflags). By default, no options are set. + 'u' + A single doctest example, consisting of source code and expected + output. `Example` defines the following attributes: + + - source: A single Python statement, always ending with a newline. + The constructor adds a newline if needed. + + - want: The expected output from running the source code (either + from stdout, or a traceback in case of exception). `want` ends + with a newline unless it's empty, in which case it's an empty + string. The constructor adds a newline if needed. + + - exc_msg: The exception message generated by the example, if + the example is expected to generate an exception; or `None` if + it is not expected to generate an exception. This exception + message is compared against the return value of + `traceback.format_exception_only()`. `exc_msg` ends with a + newline unless it's `None`. The constructor adds a newline + if needed. + + - lineno: The line number within the DocTest string containing + this Example where the Example begins. This line number is + zero-based, with respect to the beginning of the DocTest. 
+ + - indent: The example's indentation in the DocTest string. + I.e., the number of space characters that precede the + example's first prompt. + + - options: A dictionary mapping from option flags to True or + False, which is used to override default options for this + example. Any option flags not contained in this dictionary + are left at their default value (as specified by the + DocTestRunner's optionflags). By default, no options are set. + 'b' + A collection of doctest examples that should be run in a single + namespace. Each `DocTest` defines the following attributes: + + - examples: the list of examples. + + - globs: The namespace (aka globals) that the examples should + be run in. + + - name: A name identifying the DocTest (typically, the name of + the object whose docstring this DocTest was extracted from). + + - filename: The name of the file that this DocTest was extracted + from, or `None` if the filename is unknown. + + - lineno: The line number within filename where this DocTest + begins, or `None` if the line number is unavailable. This + line number is zero-based, with respect to the beginning of + the file. + + - docstring: The string that the examples were extracted from, + or `None` if the string is unavailable. + 'u' + A collection of doctest examples that should be run in a single + namespace. Each `DocTest` defines the following attributes: + + - examples: the list of examples. + + - globs: The namespace (aka globals) that the examples should + be run in. + + - name: A name identifying the DocTest (typically, the name of + the object whose docstring this DocTest was extracted from). + + - filename: The name of the file that this DocTest was extracted + from, or `None` if the filename is unknown. + + - lineno: The line number within filename where this DocTest + begins, or `None` if the line number is unavailable. This + line number is zero-based, with respect to the beginning of + the file. + + - docstring: The string that the examples were extracted from, + or `None` if the string is unavailable. + 'b' + Create a new DocTest containing the given examples. The + DocTest's globals are initialized with a copy of `globs`. + 'u' + Create a new DocTest containing the given examples. The + DocTest's globals are initialized with a copy of `globs`. + 'b'DocTest no longer accepts str; use DocTestParser instead'u'DocTest no longer accepts str; use DocTestParser instead'b'no examples'u'no examples'b'1 example'u'1 example'b'%d examples'u'%d examples'b'<%s %s from %s:%s (%s)>'u'<%s %s from %s:%s (%s)>'b' + A class used to parse strings containing doctest examples. + 'u' + A class used to parse strings containing doctest examples. + 'b' + # Source consists of a PS1 line followed by zero or more PS2 lines. + (?P + (?:^(?P [ ]*) >>> .*) # PS1 line + (?:\n [ ]* \.\.\. .*)*) # PS2 lines + \n? + # Want consists of any non-blank lines that do not start with PS1. + (?P (?:(?![ ]*$) # Not a blank line + (?![ ]*>>>) # Not a line starting with PS1 + .+$\n? # But any other line + )*) + 'u' + # Source consists of a PS1 line followed by zero or more PS2 lines. + (?P + (?:^(?P [ ]*) >>> .*) # PS1 line + (?:\n [ ]* \.\.\. .*)*) # PS2 lines + \n? + # Want consists of any non-blank lines that do not start with PS1. + (?P (?:(?![ ]*$) # Not a blank line + (?![ ]*>>>) # Not a line starting with PS1 + .+$\n? # But any other line + )*) + 'b' + # Grab the traceback header. Different versions of Python have + # said different things on the first traceback line. 
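As a small sketch of the attribute list above, an Example can also be built directly; the literal source and expected output here are purely illustrative.

    import doctest

    ex = doctest.Example(source="2 + 2", want="4")
    print(repr(ex.source))    # '2 + 2\n' -- a trailing newline is added if missing
    print(repr(ex.want))      # '4\n'
    print(ex.exc_msg, ex.lineno, ex.indent, ex.options)   # None 0 0 {}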
+ ^(?P Traceback\ \( + (?: most\ recent\ call\ last + | innermost\ last + ) \) : + ) + \s* $ # toss trailing whitespace on the header. + (?P .*?) # don't blink: absorb stuff until... + ^ (?P \w+ .*) # a line *starts* with alphanum. + 'u' + # Grab the traceback header. Different versions of Python have + # said different things on the first traceback line. + ^(?P Traceback\ \( + (?: most\ recent\ call\ last + | innermost\ last + ) \) : + ) + \s* $ # toss trailing whitespace on the header. + (?P .*?) # don't blink: absorb stuff until... + ^ (?P \w+ .*) # a line *starts* with alphanum. + 'b'^[ ]*(#.*)?$'u'^[ ]*(#.*)?$'b' + Divide the given string into examples and intervening text, + and return them as a list of alternating Examples and strings. + Line numbers for the Examples are 0-based. The optional + argument `name` is a name identifying this string, and is only + used for error messages. + 'u' + Divide the given string into examples and intervening text, + and return them as a list of alternating Examples and strings. + Line numbers for the Examples are 0-based. The optional + argument `name` is a name identifying this string, and is only + used for error messages. + 'b'indent'u'indent'b' + Extract all doctest examples from the given string, and + collect them into a `DocTest` object. + + `globs`, `name`, `filename`, and `lineno` are attributes for + the new `DocTest` object. See the documentation for `DocTest` + for more information. + 'u' + Extract all doctest examples from the given string, and + collect them into a `DocTest` object. + + `globs`, `name`, `filename`, and `lineno` are attributes for + the new `DocTest` object. See the documentation for `DocTest` + for more information. + 'b' + Extract all doctest examples from the given string, and return + them as a list of `Example` objects. Line numbers are + 0-based, because it's most common in doctests that nothing + interesting appears on the same line as opening triple-quote, + and so the first interesting line is called "line 1" then. + + The optional argument `name` is a name identifying this + string, and is only used for error messages. + 'u' + Extract all doctest examples from the given string, and return + them as a list of `Example` objects. Line numbers are + 0-based, because it's most common in doctests that nothing + interesting appears on the same line as opening triple-quote, + and so the first interesting line is called "line 1" then. + + The optional argument `name` is a name identifying this + string, and is only used for error messages. + 'b' + Given a regular expression match from `_EXAMPLE_RE` (`m`), + return a pair `(source, want)`, where `source` is the matched + example's source code (with prompts and indentation stripped); + and `want` is the example's expected output (with indentation + stripped). + + `name` is the string's name, and `lineno` is the line number + where the example starts; both are used for error messages. + 'u' + Given a regular expression match from `_EXAMPLE_RE` (`m`), + return a pair `(source, want)`, where `source` is the matched + example's source code (with prompts and indentation stripped); + and `want` is the example's expected output (with indentation + stripped). + + `name` is the string's name, and `lineno` is the line number + where the example starts; both are used for error messages. 
+ 'b'want'u'want'b' *$'u' *$'b'#\s*doctest:\s*([^\n\'"]*)$'u'#\s*doctest:\s*([^\n\'"]*)$'b' + Return a dictionary containing option overrides extracted from + option directives in the given source string. + + `name` is the string's name, and `lineno` is the line number + where the example starts; both are used for error messages. + 'u' + Return a dictionary containing option overrides extracted from + option directives in the given source string. + + `name` is the string's name, and `lineno` is the line number + where the example starts; both are used for error messages. + 'b'line %r of the doctest for %s has an invalid option: %r'u'line %r of the doctest for %s has an invalid option: %r'b'line %r of the doctest for %s has an option directive on a line with no example: %r'u'line %r of the doctest for %s has an option directive on a line with no example: %r'b'^([ ]*)(?=\S)'u'^([ ]*)(?=\S)'b'Return the minimum indentation of any non-blank line in `s`'u'Return the minimum indentation of any non-blank line in `s`'b' + Given the lines of a source string (including prompts and + leading indentation), check to make sure that every prompt is + followed by a space character. If any line is not followed by + a space character, then raise ValueError. + 'u' + Given the lines of a source string (including prompts and + leading indentation), check to make sure that every prompt is + followed by a space character. If any line is not followed by + a space character, then raise ValueError. + 'b'line %r of the docstring for %s lacks blank after %s: %r'u'line %r of the docstring for %s lacks blank after %s: %r'b' + Check that every line in the given list starts with the given + prefix; if any line does not, then raise a ValueError. + 'u' + Check that every line in the given list starts with the given + prefix; if any line does not, then raise a ValueError. + 'b'line %r of the docstring for %s has inconsistent leading whitespace: %r'u'line %r of the docstring for %s has inconsistent leading whitespace: %r'b' + A class used to extract the DocTests that are relevant to a given + object, from its docstring and the docstrings of its contained + objects. Doctests can currently be extracted from the following + object types: modules, functions, classes, methods, staticmethods, + classmethods, and properties. + 'u' + A class used to extract the DocTests that are relevant to a given + object, from its docstring and the docstrings of its contained + objects. Doctests can currently be extracted from the following + object types: modules, functions, classes, methods, staticmethods, + classmethods, and properties. + 'b' + Create a new doctest finder. + + The optional argument `parser` specifies a class or + function that should be used to create new DocTest objects (or + objects that implement the same interface as DocTest). The + signature for this factory function should match the signature + of the DocTest constructor. + + If the optional argument `recurse` is false, then `find` will + only examine the given object, and not any contained objects. + + If the optional argument `exclude_empty` is false, then `find` + will include tests for objects with empty docstrings. + 'u' + Create a new doctest finder. + + The optional argument `parser` specifies a class or + function that should be used to create new DocTest objects (or + objects that implement the same interface as DocTest). The + signature for this factory function should match the signature + of the DocTest constructor. 
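A minimal sketch of the parser interface documented above; the sample text and the names "demo" and "demo.txt" are placeholders.

    import doctest

    text = "Intervening prose is skipped.\n\n>>> 2 + 2\n4\n"

    parser = doctest.DocTestParser()
    for example in parser.get_examples(text, name="demo"):
        print(example.lineno, repr(example.source), repr(example.want))

    # get_doctest() wraps the same examples in a DocTest object.
    test = parser.get_doctest(text, globs={}, name="demo",
                              filename="demo.txt", lineno=0)
    print(test.name, len(test.examples))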
+ + If the optional argument `recurse` is false, then `find` will + only examine the given object, and not any contained objects. + + If the optional argument `exclude_empty` is false, then `find` + will include tests for objects with empty docstrings. + 'b' + Return a list of the DocTests that are defined by the given + object's docstring, or by any of its contained objects' + docstrings. + + The optional parameter `module` is the module that contains + the given object. If the module is not specified or is None, then + the test finder will attempt to automatically determine the + correct module. The object's module is used: + + - As a default namespace, if `globs` is not specified. + - To prevent the DocTestFinder from extracting DocTests + from objects that are imported from other modules. + - To find the name of the file containing the object. + - To help find the line number of the object within its + file. + + Contained objects whose module does not match `module` are ignored. + + If `module` is False, no attempt to find the module will be made. + This is obscure, of use mostly in tests: if `module` is False, or + is None but cannot be found automatically, then all objects are + considered to belong to the (non-existent) module, so all contained + objects will (recursively) be searched for doctests. + + The globals for each DocTest is formed by combining `globs` + and `extraglobs` (bindings in `extraglobs` override bindings + in `globs`). A new copy of the globals dictionary is created + for each DocTest. If `globs` is not specified, then it + defaults to the module's `__dict__`, if specified, or {} + otherwise. If `extraglobs` is not specified, then it defaults + to {}. + + 'u' + Return a list of the DocTests that are defined by the given + object's docstring, or by any of its contained objects' + docstrings. + + The optional parameter `module` is the module that contains + the given object. If the module is not specified or is None, then + the test finder will attempt to automatically determine the + correct module. The object's module is used: + + - As a default namespace, if `globs` is not specified. + - To prevent the DocTestFinder from extracting DocTests + from objects that are imported from other modules. + - To find the name of the file containing the object. + - To help find the line number of the object within its + file. + + Contained objects whose module does not match `module` are ignored. + + If `module` is False, no attempt to find the module will be made. + This is obscure, of use mostly in tests: if `module` is False, or + is None but cannot be found automatically, then all objects are + considered to belong to the (non-existent) module, so all contained + objects will (recursively) be searched for doctests. + + The globals for each DocTest is formed by combining `globs` + and `extraglobs` (bindings in `extraglobs` override bindings + in `globs`). A new copy of the globals dictionary is created + for each DocTest. If `globs` is not specified, then it + defaults to the module's `__dict__`, if specified, or {} + otherwise. If `extraglobs` is not specified, then it defaults + to {}. + + 'b'DocTestFinder.find: name must be given when obj.__name__ doesn't exist: %r'u'DocTestFinder.find: name must be given when obj.__name__ doesn't exist: %r'b'<]>'u'<]>'b' + Return true if the given object is defined in the given + module. + 'u' + Return true if the given object is defined in the given + module. 
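A small runnable sketch of find() as described above, scanning the doctest module itself rather than a hypothetical one:

    import doctest

    finder = doctest.DocTestFinder()
    for test in sorted(finder.find(doctest), key=lambda t: t.name):
        if test.examples:   # skip docstrings that contain no examples
            print(test.name, "->", len(test.examples), "examples")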
+ 'b'__objclass__'u'__objclass__'b'object must be a class or function'u'object must be a class or function'b' + Find tests for the given object and any contained objects, and + add them to `tests`. + 'u' + Find tests for the given object and any contained objects, and + add them to `tests`. + 'b'Finding tests in %s'u'Finding tests in %s'b'__test__'u'__test__'b'DocTestFinder.find: __test__ keys must be strings: %r'u'DocTestFinder.find: __test__ keys must be strings: %r'b'DocTestFinder.find: __test__ values must be strings, functions, methods, classes, or modules: %r'u'DocTestFinder.find: __test__ values must be strings, functions, methods, classes, or modules: %r'b'%s.__test__.%s'u'%s.__test__.%s'b' + Return a DocTest for the given object, if it defines a docstring; + otherwise, return None. + 'u' + Return a DocTest for the given object, if it defines a docstring; + otherwise, return None. + 'b' + Return a line number of the given object's docstring. Note: + this method assumes that the object has a docstring. + 'u' + Return a line number of the given object's docstring. Note: + this method assumes that the object has a docstring. + 'b'^\s*class\s*%s\b'u'^\s*class\s*%s\b'b'co_firstlineno'u'co_firstlineno'b'(^|.*:)\s*\w*("|\')'u'(^|.*:)\s*\w*("|\')'b' + A class used to run DocTest test cases, and accumulate statistics. + The `run` method is used to process a single DocTest case. It + returns a tuple `(f, t)`, where `t` is the number of test cases + tried, and `f` is the number of test cases that failed. + + >>> tests = DocTestFinder().find(_TestClass) + >>> runner = DocTestRunner(verbose=False) + >>> tests.sort(key = lambda test: test.name) + >>> for test in tests: + ... print(test.name, '->', runner.run(test)) + _TestClass -> TestResults(failed=0, attempted=2) + _TestClass.__init__ -> TestResults(failed=0, attempted=2) + _TestClass.get -> TestResults(failed=0, attempted=2) + _TestClass.square -> TestResults(failed=0, attempted=1) + + The `summarize` method prints a summary of all the test cases that + have been run by the runner, and returns an aggregated `(f, t)` + tuple: + + >>> runner.summarize(verbose=1) + 4 items passed all tests: + 2 tests in _TestClass + 2 tests in _TestClass.__init__ + 2 tests in _TestClass.get + 1 tests in _TestClass.square + 7 tests in 4 items. + 7 passed and 0 failed. + Test passed. + TestResults(failed=0, attempted=7) + + The aggregated number of tried examples and failed examples is + also available via the `tries` and `failures` attributes: + + >>> runner.tries + 7 + >>> runner.failures + 0 + + The comparison between expected outputs and actual outputs is done + by an `OutputChecker`. This comparison may be customized with a + number of option flags; see the documentation for `testmod` for + more information. If the option flags are insufficient, then the + comparison may also be customized by passing a subclass of + `OutputChecker` to the constructor. + + The test runner's display output can be controlled in two ways. + First, an output function (`out) can be passed to + `TestRunner.run`; this function will be called with strings that + should be displayed. It defaults to `sys.stdout.write`. If + capturing the output is not sufficient, then the display output + can be also customized by subclassing DocTestRunner, and + overriding the methods `report_start`, `report_success`, + `report_unexpected_exception`, and `report_failure`. + 'u' + A class used to run DocTest test cases, and accumulate statistics. 
+ The `run` method is used to process a single DocTest case. It + returns a tuple `(f, t)`, where `t` is the number of test cases + tried, and `f` is the number of test cases that failed. + + >>> tests = DocTestFinder().find(_TestClass) + >>> runner = DocTestRunner(verbose=False) + >>> tests.sort(key = lambda test: test.name) + >>> for test in tests: + ... print(test.name, '->', runner.run(test)) + _TestClass -> TestResults(failed=0, attempted=2) + _TestClass.__init__ -> TestResults(failed=0, attempted=2) + _TestClass.get -> TestResults(failed=0, attempted=2) + _TestClass.square -> TestResults(failed=0, attempted=1) + + The `summarize` method prints a summary of all the test cases that + have been run by the runner, and returns an aggregated `(f, t)` + tuple: + + >>> runner.summarize(verbose=1) + 4 items passed all tests: + 2 tests in _TestClass + 2 tests in _TestClass.__init__ + 2 tests in _TestClass.get + 1 tests in _TestClass.square + 7 tests in 4 items. + 7 passed and 0 failed. + Test passed. + TestResults(failed=0, attempted=7) + + The aggregated number of tried examples and failed examples is + also available via the `tries` and `failures` attributes: + + >>> runner.tries + 7 + >>> runner.failures + 0 + + The comparison between expected outputs and actual outputs is done + by an `OutputChecker`. This comparison may be customized with a + number of option flags; see the documentation for `testmod` for + more information. If the option flags are insufficient, then the + comparison may also be customized by passing a subclass of + `OutputChecker` to the constructor. + + The test runner's display output can be controlled in two ways. + First, an output function (`out) can be passed to + `TestRunner.run`; this function will be called with strings that + should be displayed. It defaults to `sys.stdout.write`. If + capturing the output is not sufficient, then the display output + can be also customized by subclassing DocTestRunner, and + overriding the methods `report_start`, `report_success`, + `report_unexpected_exception`, and `report_failure`. + 'b' + Create a new test runner. + + Optional keyword arg `checker` is the `OutputChecker` that + should be used to compare the expected outputs and actual + outputs of doctest examples. + + Optional keyword arg 'verbose' prints lots of stuff if true, + only failures if false; by default, it's true iff '-v' is in + sys.argv. + + Optional argument `optionflags` can be used to control how the + test runner compares expected output to actual output, and how + it displays failures. See the documentation for `testmod` for + more information. + 'u' + Create a new test runner. + + Optional keyword arg `checker` is the `OutputChecker` that + should be used to compare the expected outputs and actual + outputs of doctest examples. + + Optional keyword arg 'verbose' prints lots of stuff if true, + only failures if false; by default, it's true iff '-v' is in + sys.argv. + + Optional argument `optionflags` can be used to control how the + test runner compares expected output to actual output, and how + it displays failures. See the documentation for `testmod` for + more information. + 'b'-v'b' + Report that the test runner is about to process the given + example. (Only displays a message if verbose=True) + 'u' + Report that the test runner is about to process the given + example. 
(Only displays a message if verbose=True) + 'b'Trying: +'u'Trying: +'b'Expecting: +'u'Expecting: +'b'Expecting nothing +'u'Expecting nothing +'b' + Report that the given example ran successfully. (Only + displays a message if verbose=True) + 'u' + Report that the given example ran successfully. (Only + displays a message if verbose=True) + 'b'ok +'u'ok +'b' + Report that the given example failed. + 'u' + Report that the given example failed. + 'b' + Report that the given example raised an unexpected exception. + 'u' + Report that the given example raised an unexpected exception. + 'b'Exception raised: +'u'Exception raised: +'b'File "%s", line %s, in %s'u'File "%s", line %s, in %s'b'Line %s, in %s'u'Line %s, in %s'b'Failed example:'u'Failed example:'b' + Run the examples in `test`. Write the outcome of each example + with one of the `DocTestRunner.report_*` methods, using the + writer function `out`. `compileflags` is the set of compiler + flags that should be used to execute examples. Return a tuple + `(f, t)`, where `t` is the number of examples tried, and `f` + is the number of examples that failed. The examples are run + in the namespace `test.globs`. + 'u' + Run the examples in `test`. Write the outcome of each example + with one of the `DocTestRunner.report_*` methods, using the + writer function `out`. `compileflags` is the set of compiler + flags that should be used to execute examples. Return a tuple + `(f, t)`, where `t` is the number of examples tried, and `f` + is the number of examples that failed. The examples are run + in the namespace `test.globs`. + 'b''u''b'unknown outcome'u'unknown outcome'b' + Record the fact that the given DocTest (`test`) generated `f` + failures out of `t` tried examples. + 'u' + Record the fact that the given DocTest (`test`) generated `f` + failures out of `t` tried examples. + 'b'.+)\[(?P\d+)\]>$'u'.+)\[(?P\d+)\]>$'b'examplenum'u'examplenum'b' + Run the examples in `test`, and display the results using the + writer function `out`. + + The examples are run in the namespace `test.globs`. If + `clear_globs` is true (the default), then this namespace will + be cleared after the test runs, to help with garbage + collection. If you would like to examine the namespace after + the test completes, then use `clear_globs=False`. + + `compileflags` gives the set of flags that should be used by + the Python compiler when running the examples. If not + specified, then it will default to the set of future-import + flags that apply to `globs`. + + The output of each example is checked using + `DocTestRunner.check_output`, and the results are formatted by + the `DocTestRunner.report_*` methods. + 'u' + Run the examples in `test`, and display the results using the + writer function `out`. + + The examples are run in the namespace `test.globs`. If + `clear_globs` is true (the default), then this namespace will + be cleared after the test runs, to help with garbage + collection. If you would like to examine the namespace after + the test completes, then use `clear_globs=False`. + + `compileflags` gives the set of flags that should be used by + the Python compiler when running the examples. If not + specified, then it will default to the set of future-import + flags that apply to `globs`. + + The output of each example is checked using + `DocTestRunner.check_output`, and the results are formatted by + the `DocTestRunner.report_*` methods. 
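A brief sketch of run() with a custom writer function, matching the description above; the one-example test and the name "demo" are illustrative.

    import doctest

    test = doctest.DocTestParser().get_doctest(">>> 2 + 2\n4\n", globs={},
                                               name="demo", filename="demo.txt",
                                               lineno=0)
    log = []
    runner = doctest.DocTestRunner(verbose=False)
    results = runner.run(test, out=log.append)   # report_* output goes to `log`, not stdout
    print(results)                               # TestResults(failed=0, attempted=1)
    print(runner.summarize())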
+ 'b' + Print a summary of all the test cases that have been run by + this DocTestRunner, and return a tuple `(f, t)`, where `f` is + the total number of failed examples, and `t` is the total + number of tried examples. + + The optional `verbose` argument controls how detailed the + summary is. If the verbosity is not specified, then the + DocTestRunner's verbosity is used. + 'u' + Print a summary of all the test cases that have been run by + this DocTestRunner, and return a tuple `(f, t)`, where `f` is + the total number of failed examples, and `t` is the total + number of tried examples. + + The optional `verbose` argument controls how detailed the + summary is. If the verbosity is not specified, then the + DocTestRunner's verbosity is used. + 'b'items had no tests:'u'items had no tests:'b'items passed all tests:'u'items passed all tests:'b' %3d tests in %s'u' %3d tests in %s'b'items had failures:'u'items had failures:'b' %3d of %3d in %s'u' %3d of %3d in %s'b'tests in'u'tests in'b'items.'u'items.'b'passed and'u'passed and'b'failed.'u'failed.'b'***Test Failed***'u'***Test Failed***'b'failures.'u'failures.'b'Test passed.'u'Test passed.'b' + A class used to check the whether the actual output from a doctest + example matches the expected output. `OutputChecker` defines two + methods: `check_output`, which compares a given pair of outputs, + and returns true if they match; and `output_difference`, which + returns a string describing the differences between two outputs. + 'u' + A class used to check the whether the actual output from a doctest + example matches the expected output. `OutputChecker` defines two + methods: `check_output`, which compares a given pair of outputs, + and returns true if they match; and `output_difference`, which + returns a string describing the differences between two outputs. + 'b' + Convert string to hex-escaped ASCII string. + 'u' + Convert string to hex-escaped ASCII string. + 'b'ASCII'u'ASCII'b' + Return True iff the actual output from an example (`got`) + matches the expected output (`want`). These strings are + always considered to match if they are identical; but + depending on what option flags the test runner is using, + several non-exact match types are also possible. See the + documentation for `TestRunner` for more information about + option flags. + 'u' + Return True iff the actual output from an example (`got`) + matches the expected output (`want`). These strings are + always considered to match if they are identical; but + depending on what option flags the test runner is using, + several non-exact match types are also possible. See the + documentation for `TestRunner` for more information about + option flags. + 'b'True +'u'True +'b'1 +'u'1 +'b'False +'u'False +'b'0 +'u'0 +'b'(?m)^%s\s*?$'u'(?m)^%s\s*?$'b'(?m)^[^\S\n]+$'u'(?m)^[^\S\n]+$'b' + Return a string describing the differences between the + expected output for a given example (`example`) and the actual + output (`got`). `optionflags` is the set of option flags used + to compare `want` and `got`. + 'u' + Return a string describing the differences between the + expected output for a given example (`example`) and the actual + output (`got`). `optionflags` is the set of option flags used + to compare `want` and `got`. 
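A short sketch of the two OutputChecker methods documented above, exercised with the ELLIPSIS flag; the expected/actual strings are made up for illustration.

    import doctest

    checker = doctest.OutputChecker()
    want = "[0, 1, ..., 9]\n"
    got = repr(list(range(10))) + "\n"

    print(checker.check_output(want, got, doctest.ELLIPSIS))   # True: '...' elides the middle
    print(checker.check_output(want, got, 0))                  # False: exact match required
    example = doctest.Example("list(range(10))", want)
    print(checker.output_difference(example, got, 0))          # "Expected:/Got:" report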
+ 'b'(?m)^[ ]*(?= +)'u'(?m)^[ ]*(?= +)'b'unified diff with -expected +actual'u'unified diff with -expected +actual'b'context diff with expected followed by actual'u'context diff with expected followed by actual'b'ndiff with -expected +actual'u'ndiff with -expected +actual'b'Bad diff option'u'Bad diff option'b'Differences (%s): +'u'Differences (%s): +'b'Expected: +%sGot: +%s'u'Expected: +%sGot: +%s'b'Expected: +%sGot nothing +'u'Expected: +%sGot nothing +'b'Expected nothing +Got: +%s'u'Expected nothing +Got: +%s'b'Expected nothing +Got nothing +'u'Expected nothing +Got nothing +'b'A DocTest example has failed in debugging mode. + + The exception instance has variables: + + - test: the DocTest object being run + + - example: the Example object that failed + + - got: the actual output + 'u'A DocTest example has failed in debugging mode. + + The exception instance has variables: + + - test: the DocTest object being run + + - example: the Example object that failed + + - got: the actual output + 'b'A DocTest example has encountered an unexpected exception + + The exception instance has variables: + + - test: the DocTest object being run + + - example: the Example object that failed + + - exc_info: the exception info + 'u'A DocTest example has encountered an unexpected exception + + The exception instance has variables: + + - test: the DocTest object being run + + - example: the Example object that failed + + - exc_info: the exception info + 'b'Run doc tests but raise an exception as soon as there is a failure. + + If an unexpected exception occurs, an UnexpectedException is raised. + It contains the test, the example, and the original exception: + + >>> runner = DebugRunner(verbose=False) + >>> test = DocTestParser().get_doctest('>>> raise KeyError\n42', + ... {}, 'foo', 'foo.py', 0) + >>> try: + ... runner.run(test) + ... except UnexpectedException as f: + ... failure = f + + >>> failure.test is test + True + + >>> failure.example.want + '42\n' + + >>> exc_info = failure.exc_info + >>> raise exc_info[1] # Already has the traceback + Traceback (most recent call last): + ... + KeyError + + We wrap the original exception to give the calling application + access to the test and example information. + + If the output doesn't match, then a DocTestFailure is raised: + + >>> test = DocTestParser().get_doctest(''' + ... >>> x = 1 + ... >>> x + ... 2 + ... ''', {}, 'foo', 'foo.py', 0) + + >>> try: + ... runner.run(test) + ... except DocTestFailure as f: + ... failure = f + + DocTestFailure objects provide access to the test: + + >>> failure.test is test + True + + As well as to the example: + + >>> failure.example.want + '2\n' + + and the actual output: + + >>> failure.got + '1\n' + + If a failure or error occurs, the globals are left intact: + + >>> del test.globs['__builtins__'] + >>> test.globs + {'x': 1} + + >>> test = DocTestParser().get_doctest(''' + ... >>> x = 2 + ... >>> raise KeyError + ... ''', {}, 'foo', 'foo.py', 0) + + >>> runner.run(test) + Traceback (most recent call last): + ... + doctest.UnexpectedException: + + >>> del test.globs['__builtins__'] + >>> test.globs + {'x': 2} + + But the globals are cleared if there is no error: + + >>> test = DocTestParser().get_doctest(''' + ... >>> x = 2 + ... ''', {}, 'foo', 'foo.py', 0) + + >>> runner.run(test) + TestResults(failed=0, attempted=1) + + >>> test.globs + {} + + 'u'Run doc tests but raise an exception as soon as there is a failure. + + If an unexpected exception occurs, an UnexpectedException is raised. 
+ It contains the test, the example, and the original exception: + + >>> runner = DebugRunner(verbose=False) + >>> test = DocTestParser().get_doctest('>>> raise KeyError\n42', + ... {}, 'foo', 'foo.py', 0) + >>> try: + ... runner.run(test) + ... except UnexpectedException as f: + ... failure = f + + >>> failure.test is test + True + + >>> failure.example.want + '42\n' + + >>> exc_info = failure.exc_info + >>> raise exc_info[1] # Already has the traceback + Traceback (most recent call last): + ... + KeyError + + We wrap the original exception to give the calling application + access to the test and example information. + + If the output doesn't match, then a DocTestFailure is raised: + + >>> test = DocTestParser().get_doctest(''' + ... >>> x = 1 + ... >>> x + ... 2 + ... ''', {}, 'foo', 'foo.py', 0) + + >>> try: + ... runner.run(test) + ... except DocTestFailure as f: + ... failure = f + + DocTestFailure objects provide access to the test: + + >>> failure.test is test + True + + As well as to the example: + + >>> failure.example.want + '2\n' + + and the actual output: + + >>> failure.got + '1\n' + + If a failure or error occurs, the globals are left intact: + + >>> del test.globs['__builtins__'] + >>> test.globs + {'x': 1} + + >>> test = DocTestParser().get_doctest(''' + ... >>> x = 2 + ... >>> raise KeyError + ... ''', {}, 'foo', 'foo.py', 0) + + >>> runner.run(test) + Traceback (most recent call last): + ... + doctest.UnexpectedException: + + >>> del test.globs['__builtins__'] + >>> test.globs + {'x': 2} + + But the globals are cleared if there is no error: + + >>> test = DocTestParser().get_doctest(''' + ... >>> x = 2 + ... ''', {}, 'foo', 'foo.py', 0) + + >>> runner.run(test) + TestResults(failed=0, attempted=1) + + >>> test.globs + {} + + 'b'm=None, name=None, globs=None, verbose=None, report=True, + optionflags=0, extraglobs=None, raise_on_error=False, + exclude_empty=False + + Test examples in docstrings in functions and classes reachable + from module m (or the current module if m is not supplied), starting + with m.__doc__. + + Also test examples reachable from dict m.__test__ if it exists and is + not None. m.__test__ maps names to functions, classes and strings; + function and class docstrings are tested even if the name is private; + strings are tested directly, as if they were docstrings. + + Return (#failures, #tests). + + See help(doctest) for an overview. + + Optional keyword arg "name" gives the name of the module; by default + use m.__name__. + + Optional keyword arg "globs" gives a dict to be used as the globals + when executing examples; by default, use m.__dict__. A copy of this + dict is actually used for each docstring, so that each docstring's + examples start with a clean slate. + + Optional keyword arg "extraglobs" gives a dictionary that should be + merged into the globals that are used to execute examples. By + default, no extra globals are used. This is new in 2.4. + + Optional keyword arg "verbose" prints lots of stuff if true, prints + only failures if false; by default, it's true iff "-v" is in sys.argv. + + Optional keyword arg "report" prints a summary at the end when true, + else prints nothing at the end. In verbose mode, the summary is + detailed, else very brief (in fact, empty if all tests passed). + + Optional keyword arg "optionflags" or's together module constants, + and defaults to 0. This is new in 2.3. 
Possible values (see the + docs for details): + + DONT_ACCEPT_TRUE_FOR_1 + DONT_ACCEPT_BLANKLINE + NORMALIZE_WHITESPACE + ELLIPSIS + SKIP + IGNORE_EXCEPTION_DETAIL + REPORT_UDIFF + REPORT_CDIFF + REPORT_NDIFF + REPORT_ONLY_FIRST_FAILURE + + Optional keyword arg "raise_on_error" raises an exception on the + first unexpected exception or failure. This allows failures to be + post-mortem debugged. + + Advanced tomfoolery: testmod runs methods of a local instance of + class doctest.Tester, then merges the results into (or creates) + global Tester instance doctest.master. Methods of doctest.master + can be called directly too, if you want to do something unusual. + Passing report=0 to testmod is especially useful then, to delay + displaying a summary. Invoke doctest.master.summarize(verbose) + when you're done fiddling. + 'u'm=None, name=None, globs=None, verbose=None, report=True, + optionflags=0, extraglobs=None, raise_on_error=False, + exclude_empty=False + + Test examples in docstrings in functions and classes reachable + from module m (or the current module if m is not supplied), starting + with m.__doc__. + + Also test examples reachable from dict m.__test__ if it exists and is + not None. m.__test__ maps names to functions, classes and strings; + function and class docstrings are tested even if the name is private; + strings are tested directly, as if they were docstrings. + + Return (#failures, #tests). + + See help(doctest) for an overview. + + Optional keyword arg "name" gives the name of the module; by default + use m.__name__. + + Optional keyword arg "globs" gives a dict to be used as the globals + when executing examples; by default, use m.__dict__. A copy of this + dict is actually used for each docstring, so that each docstring's + examples start with a clean slate. + + Optional keyword arg "extraglobs" gives a dictionary that should be + merged into the globals that are used to execute examples. By + default, no extra globals are used. This is new in 2.4. + + Optional keyword arg "verbose" prints lots of stuff if true, prints + only failures if false; by default, it's true iff "-v" is in sys.argv. + + Optional keyword arg "report" prints a summary at the end when true, + else prints nothing at the end. In verbose mode, the summary is + detailed, else very brief (in fact, empty if all tests passed). + + Optional keyword arg "optionflags" or's together module constants, + and defaults to 0. This is new in 2.3. Possible values (see the + docs for details): + + DONT_ACCEPT_TRUE_FOR_1 + DONT_ACCEPT_BLANKLINE + NORMALIZE_WHITESPACE + ELLIPSIS + SKIP + IGNORE_EXCEPTION_DETAIL + REPORT_UDIFF + REPORT_CDIFF + REPORT_NDIFF + REPORT_ONLY_FIRST_FAILURE + + Optional keyword arg "raise_on_error" raises an exception on the + first unexpected exception or failure. This allows failures to be + post-mortem debugged. + + Advanced tomfoolery: testmod runs methods of a local instance of + class doctest.Tester, then merges the results into (or creates) + global Tester instance doctest.master. Methods of doctest.master + can be called directly too, if you want to do something unusual. + Passing report=0 to testmod is especially useful then, to delay + displaying a summary. Invoke doctest.master.summarize(verbose) + when you're done fiddling. + 'b'testmod: module required; %r'u'testmod: module required; %r'b' + Test examples in the given file. Return (#failures, #tests). 
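A compact sketch of the keyword arguments described above; the particular flag combination is just one plausible choice.

    import doctest

    # Programmatic equivalent of running the module with -v, plus two of the
    # comparison flags OR'ed together.
    failed, attempted = doctest.testmod(
        verbose=True,
        optionflags=doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE,
    )
    print(failed, attempted)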
+ + Optional keyword arg "module_relative" specifies how filenames + should be interpreted: + + - If "module_relative" is True (the default), then "filename" + specifies a module-relative path. By default, this path is + relative to the calling module's directory; but if the + "package" argument is specified, then it is relative to that + package. To ensure os-independence, "filename" should use + "/" characters to separate path segments, and should not + be an absolute path (i.e., it may not begin with "/"). + + - If "module_relative" is False, then "filename" specifies an + os-specific path. The path may be absolute or relative (to + the current working directory). + + Optional keyword arg "name" gives the name of the test; by default + use the file's basename. + + Optional keyword argument "package" is a Python package or the + name of a Python package whose directory should be used as the + base directory for a module relative filename. If no package is + specified, then the calling module's directory is used as the base + directory for module relative filenames. It is an error to + specify "package" if "module_relative" is False. + + Optional keyword arg "globs" gives a dict to be used as the globals + when executing examples; by default, use {}. A copy of this dict + is actually used for each docstring, so that each docstring's + examples start with a clean slate. + + Optional keyword arg "extraglobs" gives a dictionary that should be + merged into the globals that are used to execute examples. By + default, no extra globals are used. + + Optional keyword arg "verbose" prints lots of stuff if true, prints + only failures if false; by default, it's true iff "-v" is in sys.argv. + + Optional keyword arg "report" prints a summary at the end when true, + else prints nothing at the end. In verbose mode, the summary is + detailed, else very brief (in fact, empty if all tests passed). + + Optional keyword arg "optionflags" or's together module constants, + and defaults to 0. Possible values (see the docs for details): + + DONT_ACCEPT_TRUE_FOR_1 + DONT_ACCEPT_BLANKLINE + NORMALIZE_WHITESPACE + ELLIPSIS + SKIP + IGNORE_EXCEPTION_DETAIL + REPORT_UDIFF + REPORT_CDIFF + REPORT_NDIFF + REPORT_ONLY_FIRST_FAILURE + + Optional keyword arg "raise_on_error" raises an exception on the + first unexpected exception or failure. This allows failures to be + post-mortem debugged. + + Optional keyword arg "parser" specifies a DocTestParser (or + subclass) that should be used to extract tests from the files. + + Optional keyword arg "encoding" specifies an encoding that should + be used to convert the file to unicode. + + Advanced tomfoolery: testmod runs methods of a local instance of + class doctest.Tester, then merges the results into (or creates) + global Tester instance doctest.master. Methods of doctest.master + can be called directly too, if you want to do something unusual. + Passing report=0 to testmod is especially useful then, to delay + displaying a summary. Invoke doctest.master.summarize(verbose) + when you're done fiddling. + 'u' + Test examples in the given file. Return (#failures, #tests). + + Optional keyword arg "module_relative" specifies how filenames + should be interpreted: + + - If "module_relative" is True (the default), then "filename" + specifies a module-relative path. By default, this path is + relative to the calling module's directory; but if the + "package" argument is specified, then it is relative to that + package. 
To ensure os-independence, "filename" should use + "/" characters to separate path segments, and should not + be an absolute path (i.e., it may not begin with "/"). + + - If "module_relative" is False, then "filename" specifies an + os-specific path. The path may be absolute or relative (to + the current working directory). + + Optional keyword arg "name" gives the name of the test; by default + use the file's basename. + + Optional keyword argument "package" is a Python package or the + name of a Python package whose directory should be used as the + base directory for a module relative filename. If no package is + specified, then the calling module's directory is used as the base + directory for module relative filenames. It is an error to + specify "package" if "module_relative" is False. + + Optional keyword arg "globs" gives a dict to be used as the globals + when executing examples; by default, use {}. A copy of this dict + is actually used for each docstring, so that each docstring's + examples start with a clean slate. + + Optional keyword arg "extraglobs" gives a dictionary that should be + merged into the globals that are used to execute examples. By + default, no extra globals are used. + + Optional keyword arg "verbose" prints lots of stuff if true, prints + only failures if false; by default, it's true iff "-v" is in sys.argv. + + Optional keyword arg "report" prints a summary at the end when true, + else prints nothing at the end. In verbose mode, the summary is + detailed, else very brief (in fact, empty if all tests passed). + + Optional keyword arg "optionflags" or's together module constants, + and defaults to 0. Possible values (see the docs for details): + + DONT_ACCEPT_TRUE_FOR_1 + DONT_ACCEPT_BLANKLINE + NORMALIZE_WHITESPACE + ELLIPSIS + SKIP + IGNORE_EXCEPTION_DETAIL + REPORT_UDIFF + REPORT_CDIFF + REPORT_NDIFF + REPORT_ONLY_FIRST_FAILURE + + Optional keyword arg "raise_on_error" raises an exception on the + first unexpected exception or failure. This allows failures to be + post-mortem debugged. + + Optional keyword arg "parser" specifies a DocTestParser (or + subclass) that should be used to extract tests from the files. + + Optional keyword arg "encoding" specifies an encoding that should + be used to convert the file to unicode. + + Advanced tomfoolery: testmod runs methods of a local instance of + class doctest.Tester, then merges the results into (or creates) + global Tester instance doctest.master. Methods of doctest.master + can be called directly too, if you want to do something unusual. + Passing report=0 to testmod is especially useful then, to delay + displaying a summary. Invoke doctest.master.summarize(verbose) + when you're done fiddling. + 'b'Package may only be specified for module-relative paths.'u'Package may only be specified for module-relative paths.'b'NoName'u'NoName'b' + Test examples in the given object's docstring (`f`), using `globs` + as globals. Optional argument `name` is used in failure messages. + If the optional argument `verbose` is true, then generate output + even if there are no failures. + + `compileflags` gives the set of flags that should be used by the + Python compiler when running the examples. If not specified, then + it will default to the set of future-import flags that apply to + `globs`. + + Optional keyword arg `optionflags` specifies options for the + testing and output. See the documentation for `testmod` for more + information. + 'u' + Test examples in the given object's docstring (`f`), using `globs` + as globals. 
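A minimal sketch of testfile() with the two path modes described above; "howto.txt" is a hypothetical file of ">>>" examples.

    import doctest

    # Module-relative path (the default): resolved against the calling
    # module's directory, using '/' separators.
    doctest.testfile("howto.txt", optionflags=doctest.ELLIPSIS)

    # OS-specific path (absolute or relative to the current directory):
    # doctest.testfile("/tmp/howto.txt", module_relative=False)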
Optional argument `name` is used in failure messages. + If the optional argument `verbose` is true, then generate output + even if there are no failures. + + `compileflags` gives the set of flags that should be used by the + Python compiler when running the examples. If not specified, then + it will default to the set of future-import flags that apply to + `globs`. + + Optional keyword arg `optionflags` specifies options for the + testing and output. See the documentation for `testmod` for more + information. + 'b'Sets the unittest option flags. + + The old flag is returned so that a runner could restore the old + value if it wished to: + + >>> import doctest + >>> old = doctest._unittest_reportflags + >>> doctest.set_unittest_reportflags(REPORT_NDIFF | + ... REPORT_ONLY_FIRST_FAILURE) == old + True + + >>> doctest._unittest_reportflags == (REPORT_NDIFF | + ... REPORT_ONLY_FIRST_FAILURE) + True + + Only reporting flags can be set: + + >>> doctest.set_unittest_reportflags(ELLIPSIS) + Traceback (most recent call last): + ... + ValueError: ('Only reporting flags allowed', 8) + + >>> doctest.set_unittest_reportflags(old) == (REPORT_NDIFF | + ... REPORT_ONLY_FIRST_FAILURE) + True + 'u'Sets the unittest option flags. + + The old flag is returned so that a runner could restore the old + value if it wished to: + + >>> import doctest + >>> old = doctest._unittest_reportflags + >>> doctest.set_unittest_reportflags(REPORT_NDIFF | + ... REPORT_ONLY_FIRST_FAILURE) == old + True + + >>> doctest._unittest_reportflags == (REPORT_NDIFF | + ... REPORT_ONLY_FIRST_FAILURE) + True + + Only reporting flags can be set: + + >>> doctest.set_unittest_reportflags(ELLIPSIS) + Traceback (most recent call last): + ... + ValueError: ('Only reporting flags allowed', 8) + + >>> doctest.set_unittest_reportflags(old) == (REPORT_NDIFF | + ... REPORT_ONLY_FIRST_FAILURE) + True + 'b'Only reporting flags allowed'u'Only reporting flags allowed'b'unknown line number'u'unknown line number'b'Failed doctest test for %s + File "%s", line %s, in %s + +%s'u'Failed doctest test for %s + File "%s", line %s, in %s + +%s'b'Run the test case without results and without catching exceptions + + The unit test framework includes a debug method on test cases + and test suites to support post-mortem debugging. The test code + is run in such a way that errors are not caught. This way a + caller can catch the errors and initiate post-mortem debugging. + + The DocTestCase provides a debug method that raises + UnexpectedException errors if there is an unexpected + exception: + + >>> test = DocTestParser().get_doctest('>>> raise KeyError\n42', + ... {}, 'foo', 'foo.py', 0) + >>> case = DocTestCase(test) + >>> try: + ... case.debug() + ... except UnexpectedException as f: + ... failure = f + + The UnexpectedException contains the test, the example, and + the original exception: + + >>> failure.test is test + True + + >>> failure.example.want + '42\n' + + >>> exc_info = failure.exc_info + >>> raise exc_info[1] # Already has the traceback + Traceback (most recent call last): + ... + KeyError + + If the output doesn't match, then a DocTestFailure is raised: + + >>> test = DocTestParser().get_doctest(''' + ... >>> x = 1 + ... >>> x + ... 2 + ... ''', {}, 'foo', 'foo.py', 0) + >>> case = DocTestCase(test) + + >>> try: + ... case.debug() + ... except DocTestFailure as f: + ... 
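A small sketch of run_docstring_examples(), which checks a single object's docstring without scanning a whole module; the add() function is invented for the example.

    import doctest

    def add(a, b):
        """
        >>> add(2, 3)
        5
        """
        return a + b

    doctest.run_docstring_examples(add, {"add": add}, verbose=True, name="add")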
failure = f + + DocTestFailure objects provide access to the test: + + >>> failure.test is test + True + + As well as to the example: + + >>> failure.example.want + '2\n' + + and the actual output: + + >>> failure.got + '1\n' + + 'u'Run the test case without results and without catching exceptions + + The unit test framework includes a debug method on test cases + and test suites to support post-mortem debugging. The test code + is run in such a way that errors are not caught. This way a + caller can catch the errors and initiate post-mortem debugging. + + The DocTestCase provides a debug method that raises + UnexpectedException errors if there is an unexpected + exception: + + >>> test = DocTestParser().get_doctest('>>> raise KeyError\n42', + ... {}, 'foo', 'foo.py', 0) + >>> case = DocTestCase(test) + >>> try: + ... case.debug() + ... except UnexpectedException as f: + ... failure = f + + The UnexpectedException contains the test, the example, and + the original exception: + + >>> failure.test is test + True + + >>> failure.example.want + '42\n' + + >>> exc_info = failure.exc_info + >>> raise exc_info[1] # Already has the traceback + Traceback (most recent call last): + ... + KeyError + + If the output doesn't match, then a DocTestFailure is raised: + + >>> test = DocTestParser().get_doctest(''' + ... >>> x = 1 + ... >>> x + ... 2 + ... ''', {}, 'foo', 'foo.py', 0) + >>> case = DocTestCase(test) + + >>> try: + ... case.debug() + ... except DocTestFailure as f: + ... failure = f + + DocTestFailure objects provide access to the test: + + >>> failure.test is test + True + + As well as to the example: + + >>> failure.example.want + '2\n' + + and the actual output: + + >>> failure.got + '1\n' + + 'b'Doctest: 'u'Doctest: 'b'DocTestSuite will not work with -O2 and above'u'DocTestSuite will not work with -O2 and above'b'Skipping tests from %s'u'Skipping tests from %s'b' + Convert doctest tests for a module to a unittest test suite. + + This converts each documentation string in a module that + contains doctest tests to a unittest test case. If any of the + tests in a doc string fail, then the test case fails. An exception + is raised showing the name of the file containing the test and a + (sometimes approximate) line number. + + The `module` argument provides the module to be tested. The argument + can be either a module or a module name. + + If no argument is given, the calling module is used. + + A number of options may be provided as keyword arguments: + + setUp + A set-up function. This is called before running the + tests in each file. The setUp function will be passed a DocTest + object. The setUp function can access the test globals as the + globs attribute of the test passed. + + tearDown + A tear-down function. This is called after running the + tests in each file. The tearDown function will be passed a DocTest + object. The tearDown function can access the test globals as the + globs attribute of the test passed. + + globs + A dictionary containing initial global variables for the tests. + + optionflags + A set of doctest option flags expressed as an integer. + 'u' + Convert doctest tests for a module to a unittest test suite. + + This converts each documentation string in a module that + contains doctest tests to a unittest test case. If any of the + tests in a doc string fail, then the test case fails. An exception + is raised showing the name of the file containing the test and a + (sometimes approximate) line number. + + The `module` argument provides the module to be tested. 
The argument + can be either a module or a module name. + + If no argument is given, the calling module is used. + + A number of options may be provided as keyword arguments: + + setUp + A set-up function. This is called before running the + tests in each file. The setUp function will be passed a DocTest + object. The setUp function can access the test globals as the + globs attribute of the test passed. + + tearDown + A tear-down function. This is called after running the + tests in each file. The tearDown function will be passed a DocTest + object. The tearDown function can access the test globals as the + globs attribute of the test passed. + + globs + A dictionary containing initial global variables for the tests. + + optionflags + A set of doctest option flags expressed as an integer. + 'b'Failed doctest test for %s + File "%s", line 0 + +%s'u'Failed doctest test for %s + File "%s", line 0 + +%s'b'A unittest suite for one or more doctest files. + + The path to each doctest file is given as a string; the + interpretation of that string depends on the keyword argument + "module_relative". + + A number of options may be provided as keyword arguments: + + module_relative + If "module_relative" is True, then the given file paths are + interpreted as os-independent module-relative paths. By + default, these paths are relative to the calling module's + directory; but if the "package" argument is specified, then + they are relative to that package. To ensure os-independence, + "filename" should use "/" characters to separate path + segments, and may not be an absolute path (i.e., it may not + begin with "/"). + + If "module_relative" is False, then the given file paths are + interpreted as os-specific paths. These paths may be absolute + or relative (to the current working directory). + + package + A Python package or the name of a Python package whose directory + should be used as the base directory for module relative paths. + If "package" is not specified, then the calling module's + directory is used as the base directory for module relative + filenames. It is an error to specify "package" if + "module_relative" is False. + + setUp + A set-up function. This is called before running the + tests in each file. The setUp function will be passed a DocTest + object. The setUp function can access the test globals as the + globs attribute of the test passed. + + tearDown + A tear-down function. This is called after running the + tests in each file. The tearDown function will be passed a DocTest + object. The tearDown function can access the test globals as the + globs attribute of the test passed. + + globs + A dictionary containing initial global variables for the tests. + + optionflags + A set of doctest option flags expressed as an integer. + + parser + A DocTestParser (or subclass) that should be used to extract + tests from the files. + + encoding + An encoding that will be used to convert the files to unicode. + 'u'A unittest suite for one or more doctest files. + + The path to each doctest file is given as a string; the + interpretation of that string depends on the keyword argument + "module_relative". + + A number of options may be provided as keyword arguments: + + module_relative + If "module_relative" is True, then the given file paths are + interpreted as os-independent module-relative paths. By + default, these paths are relative to the calling module's + directory; but if the "package" argument is specified, then + they are relative to that package. 
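A typical unittest integration sketch for the two suite factories described above; "mypackage.mymodule" and "docs/usage.txt" are placeholders for your own module and text file.

    import doctest
    import unittest

    import mypackage.mymodule as mymodule   # placeholder module with doctests

    def load_tests(loader, tests, ignore):
        tests.addTests(doctest.DocTestSuite(mymodule))
        tests.addTests(doctest.DocFileSuite("docs/usage.txt",
                                            optionflags=doctest.ELLIPSIS))
        return tests

    if __name__ == "__main__":
        unittest.main()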
To ensure os-independence, + "filename" should use "/" characters to separate path + segments, and may not be an absolute path (i.e., it may not + begin with "/"). + + If "module_relative" is False, then the given file paths are + interpreted as os-specific paths. These paths may be absolute + or relative (to the current working directory). + + package + A Python package or the name of a Python package whose directory + should be used as the base directory for module relative paths. + If "package" is not specified, then the calling module's + directory is used as the base directory for module relative + filenames. It is an error to specify "package" if + "module_relative" is False. + + setUp + A set-up function. This is called before running the + tests in each file. The setUp function will be passed a DocTest + object. The setUp function can access the test globals as the + globs attribute of the test passed. + + tearDown + A tear-down function. This is called after running the + tests in each file. The tearDown function will be passed a DocTest + object. The tearDown function can access the test globals as the + globs attribute of the test passed. + + globs + A dictionary containing initial global variables for the tests. + + optionflags + A set of doctest option flags expressed as an integer. + + parser + A DocTestParser (or subclass) that should be used to extract + tests from the files. + + encoding + An encoding that will be used to convert the files to unicode. + 'b'module_relative'u'module_relative'b'package'u'package'b'Extract script from text with examples. + + Converts text with examples to a Python script. Example input is + converted to regular code. Example output and all other words + are converted to comments: + + >>> text = ''' + ... Here are examples of simple math. + ... + ... Python has super accurate integer addition + ... + ... >>> 2 + 2 + ... 5 + ... + ... And very friendly error messages: + ... + ... >>> 1/0 + ... To Infinity + ... And + ... Beyond + ... + ... You can use logic if you want: + ... + ... >>> if 0: + ... ... blah + ... ... blah + ... ... + ... + ... Ho hum + ... ''' + + >>> print(script_from_examples(text)) + # Here are examples of simple math. + # + # Python has super accurate integer addition + # + 2 + 2 + # Expected: + ## 5 + # + # And very friendly error messages: + # + 1/0 + # Expected: + ## To Infinity + ## And + ## Beyond + # + # You can use logic if you want: + # + if 0: + blah + blah + # + # Ho hum + + 'u'Extract script from text with examples. + + Converts text with examples to a Python script. Example input is + converted to regular code. Example output and all other words + are converted to comments: + + >>> text = ''' + ... Here are examples of simple math. + ... + ... Python has super accurate integer addition + ... + ... >>> 2 + 2 + ... 5 + ... + ... And very friendly error messages: + ... + ... >>> 1/0 + ... To Infinity + ... And + ... Beyond + ... + ... You can use logic if you want: + ... + ... >>> if 0: + ... ... blah + ... ... blah + ... ... + ... + ... Ho hum + ... ''' + + >>> print(script_from_examples(text)) + # Here are examples of simple math. + # + # Python has super accurate integer addition + # + 2 + 2 + # Expected: + ## 5 + # + # And very friendly error messages: + # + 1/0 + # Expected: + ## To Infinity + ## And + ## Beyond + # + # You can use logic if you want: + # + if 0: + blah + blah + # + # Ho hum + + 'b'# Expected:'u'# Expected:'b'## 'u'## 'b'Extract the test sources from a doctest docstring as a script. 
+ + Provide the module (or dotted name of the module) containing the + test to be debugged and the name (within the module) of the object + with the doc string with tests to be debugged. + 'u'Extract the test sources from a doctest docstring as a script. + + Provide the module (or dotted name of the module) containing the + test to be debugged and the name (within the module) of the object + with the doc string with tests to be debugged. + 'b'not found in tests'u'not found in tests'b'Debug a single doctest docstring, in argument `src`''u'Debug a single doctest docstring, in argument `src`''b'Debug a test script. `src` is the script, as a string.'u'Debug a test script. `src` is the script, as a string.'b'exec(%r)'u'exec(%r)'b'Debug a single doctest docstring. + + Provide the module (or dotted name of the module) containing the + test to be debugged and the name (within the module) of the object + with the docstring with tests to be debugged. + 'u'Debug a single doctest docstring. + + Provide the module (or dotted name of the module) containing the + test to be debugged and the name (within the module) of the object + with the docstring with tests to be debugged. + 'b' + A pointless class, for sanity-checking of docstring testing. + + Methods: + square() + get() + + >>> _TestClass(13).get() + _TestClass(-12).get() + 1 + >>> hex(_TestClass(13).square().get()) + '0xa9' + 'u' + A pointless class, for sanity-checking of docstring testing. + + Methods: + square() + get() + + >>> _TestClass(13).get() + _TestClass(-12).get() + 1 + >>> hex(_TestClass(13).square().get()) + '0xa9' + 'b'val -> _TestClass object with associated value val. + + >>> t = _TestClass(123) + >>> print(t.get()) + 123 + 'u'val -> _TestClass object with associated value val. + + >>> t = _TestClass(123) + >>> print(t.get()) + 123 + 'b'square() -> square TestClass's associated value + + >>> _TestClass(13).square().get() + 169 + 'u'square() -> square TestClass's associated value + + >>> _TestClass(13).square().get() + 169 + 'b'get() -> return TestClass's associated value. + + >>> x = _TestClass(-42) + >>> print(x.get()) + -42 + 'u'get() -> return TestClass's associated value. + + >>> x = _TestClass(-42) + >>> print(x.get()) + -42 + 'b'_TestClass'u'_TestClass'b' + Example of a string object, searched as-is. + >>> x = 1; y = 2 + >>> x + y, x * y + (3, 2) + 'u' + Example of a string object, searched as-is. + >>> x = 1; y = 2 + >>> x + y, x * y + (3, 2) + 'b' + In 2.2, boolean expressions displayed + 0 or 1. By default, we still accept + them. This can be disabled by passing + DONT_ACCEPT_TRUE_FOR_1 to the new + optionflags argument. + >>> 4 == 4 + 1 + >>> 4 == 4 + True + >>> 4 > 4 + 0 + >>> 4 > 4 + False + 'u' + In 2.2, boolean expressions displayed + 0 or 1. By default, we still accept + them. This can be disabled by passing + DONT_ACCEPT_TRUE_FOR_1 to the new + optionflags argument. + >>> 4 == 4 + 1 + >>> 4 == 4 + True + >>> 4 > 4 + 0 + >>> 4 > 4 + False + 'b'bool-int equivalence'u'bool-int equivalence'b' + Blank lines can be marked with : + >>> print('foo\n\nbar\n') + foo + + bar + + 'u' + Blank lines can be marked with : + >>> print('foo\n\nbar\n') + foo + + bar + + 'b'blank lines'u'blank lines'b' + If the ellipsis flag is used, then '...' can be used to + elide substrings in the desired output: + >>> print(list(range(1000))) #doctest: +ELLIPSIS + [0, 1, 2, ..., 999] + 'u' + If the ellipsis flag is used, then '...' 
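The doctest fragments recovered above describe the unittest integration (DocTestSuite, DocFileSuite and their setUp/tearDown/globs/optionflags keywords) and the debug helpers that raise UnexpectedException or DocTestFailure. A minimal, self-contained sketch of that integration, assuming a module with one docstring example; the function and flag choices here are illustrative, not taken from the dump:

# Sketch of the doctest/unittest glue described in the docstrings above.
import doctest
import sys
import unittest


def add(a, b):
    """Add two values.

    >>> add(2, 2)
    4
    >>> add('a', 'b')
    'ab'
    """
    return a + b


def load_tests(loader, tests, ignore):
    this_module = sys.modules[__name__]
    # DocTestSuite converts every docstring example in the module into a
    # unittest test case; the keyword options match the recovered docstring.
    tests.addTests(doctest.DocTestSuite(
        this_module,
        optionflags=doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE,
        setUp=lambda test: test.globs.update(answer=42),  # receives the DocTest
    ))
    return tests


if __name__ == '__main__':
    unittest.main()

For post-mortem debugging, doctest.debug(module, name) and DocTestCase.debug() run the same examples without catching exceptions, as the docstrings above describe.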
can be used to + elide substrings in the desired output: + >>> print(list(range(1000))) #doctest: +ELLIPSIS + [0, 1, 2, ..., 999] + 'b'ellipsis'u'ellipsis'b' + If the whitespace normalization flag is used, then + differences in whitespace are ignored. + >>> print(list(range(30))) #doctest: +NORMALIZE_WHITESPACE + [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, + 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, + 27, 28, 29] + 'u' + If the whitespace normalization flag is used, then + differences in whitespace are ignored. + >>> print(list(range(30))) #doctest: +NORMALIZE_WHITESPACE + [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, + 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, + 27, 28, 29] + 'b'whitespace normalization'u'whitespace normalization'b'doctest runner'u'doctest runner'b'--verbose'u'--verbose'b'print very verbose output for all tests'u'print very verbose output for all tests'b'-o'u'-o'b'--option'u'--option'b'specify a doctest option flag to apply to the test run; may be specified more than once to apply multiple options'u'specify a doctest option flag to apply to the test run; may be specified more than once to apply multiple options'b'-f'u'-f'b'--fail-fast'u'--fail-fast'b'stop running tests after first failure (this is a shorthand for -o FAIL_FAST, and is in addition to any other -o options)'u'stop running tests after first failure (this is a shorthand for -o FAIL_FAST, and is in addition to any other -o options)'b'file'u'file'b'file containing the tests to run'u'file containing the tests to run'u'doctest'Parser driver. + +This provides a high-level interface to parse a file into a syntax tree. + +Guido van Rossum Driverload_grammarpkgutilpgenconvertparse_tokensParse a series of tokens and return the syntax tree.setupline_textquintuples_linenos_columnCOMMENTOP%s %r (prefix=%r)tok_nameaddtokenStop.incomplete inputparse_stream_rawParse a stream and return the syntax tree.generate_tokensparse_streamparse_fileParse a file and return the syntax tree.parse_stringParse a string and return the syntax tree._generate_pickle_name.txt.pickleGrammar.txtgpLoad the grammar (maybe from a pickle)._newerGenerating grammar tables from %sgenerate_grammarWriting grammar tables to %sWriting failed: %sGrammarInquire whether file a was written since file b.getmtimeload_packaged_grammargrammar_sourceNormally, loads a pickled grammar by doing + pkgutil.get_data(package, pickled_grammar) + where *pickled_grammar* is computed from *grammar_source* by adding the + Python version and using a ``.pickle`` extension. + + However, if *grammar_source* is an extant file, load_grammar(grammar_source) + is called instead. This facilitates using a packaged grammar file when needed + but preserves load_grammar's automatic regeneration behavior when possible. + + pickled_nameMain program, when run as a script: produce grammar pickle files. + + Calls load_grammar for each argument, a path to a grammar text file. + # Modifications:# Copyright 2006 Google, Inc. All Rights Reserved.# Python imports# Pgen imports# XXX Move the prefix computation into a wrapper around tokenize.# We never broke out -- EOF is too soon (how can this happen???)b'Parser driver. + +This provides a high-level interface to parse a file into a syntax tree. + +'u'Parser driver. + +This provides a high-level interface to parse a file into a syntax tree. 
+ +'b'Guido van Rossum 'u'Guido van Rossum 'b'Driver'u'Driver'b'load_grammar'u'load_grammar'b'Parse a series of tokens and return the syntax tree.'u'Parse a series of tokens and return the syntax tree.'b'%s %r (prefix=%r)'u'%s %r (prefix=%r)'b'Stop.'u'Stop.'b'incomplete input'u'incomplete input'b'Parse a stream and return the syntax tree.'u'Parse a stream and return the syntax tree.'b'Parse a file and return the syntax tree.'u'Parse a file and return the syntax tree.'b'Parse a string and return the syntax tree.'u'Parse a string and return the syntax tree.'b'.txt'u'.txt'b'.pickle'u'.pickle'b'Grammar.txt'u'Grammar.txt'b'Load the grammar (maybe from a pickle).'u'Load the grammar (maybe from a pickle).'b'Generating grammar tables from %s'u'Generating grammar tables from %s'b'Writing grammar tables to %s'u'Writing grammar tables to %s'b'Writing failed: %s'u'Writing failed: %s'b'Inquire whether file a was written since file b.'u'Inquire whether file a was written since file b.'b'Normally, loads a pickled grammar by doing + pkgutil.get_data(package, pickled_grammar) + where *pickled_grammar* is computed from *grammar_source* by adding the + Python version and using a ``.pickle`` extension. + + However, if *grammar_source* is an extant file, load_grammar(grammar_source) + is called instead. This facilitates using a packaged grammar file when needed + but preserves load_grammar's automatic regeneration behavior when possible. + + 'u'Normally, loads a pickled grammar by doing + pkgutil.get_data(package, pickled_grammar) + where *pickled_grammar* is computed from *grammar_source* by adding the + Python version and using a ``.pickle`` extension. + + However, if *grammar_source* is an extant file, load_grammar(grammar_source) + is called instead. This facilitates using a packaged grammar file when needed + but preserves load_grammar's automatic regeneration behavior when possible. + + 'b'Main program, when run as a script: produce grammar pickle files. + + Calls load_grammar for each argument, a path to a grammar text file. + 'u'Main program, when run as a script: produce grammar pickle files. + + Calls load_grammar for each argument, a path to a grammar text file. + 'u'lib2to3.pgen2.driver'u'pgen2.driver'u'driver' +dyld emulation +ctypes.macholib.frameworkframework_infoctypes.macholib.dylibdylib_infodyld_findframework_findexpanduser~/Library/Frameworks/Library/Frameworks/Network/Library/Frameworks/System/Library/FrameworksDEFAULT_FRAMEWORK_FALLBACK~/lib/usr/local/lib/usr/libDEFAULT_LIBRARY_FALLBACKdyld_envrvaldyld_image_suffixDYLD_IMAGE_SUFFIXdyld_framework_pathDYLD_FRAMEWORK_PATHdyld_library_pathDYLD_LIBRARY_PATHdyld_fallback_framework_pathDYLD_FALLBACK_FRAMEWORK_PATHdyld_fallback_library_pathDYLD_FALLBACK_LIBRARY_PATHdyld_image_suffix_searchFor a potential path iterator, add DYLD_IMAGE_SUFFIX semantics_inject.dylibdyld_override_searchframeworkdyld_executable_path_searchexecutable_path@executable_path/dyld_default_searchfallback_framework_pathfallback_library_path + Find a library or framework using dyld semantics + dylib %s could not be found + Find a framework using dyld semantics in a very loose manner. 
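The fragments just above are from lib2to3.pgen2.driver (parse_string/parse_file/load_grammar and the pickled grammar tables). A hedged sketch of how that driver is typically instantiated; lib2to3 is deprecated and removed in recent Python releases, so treat this as illustrative only:

# Illustrative use of the pgen2 Driver whose docstrings appear above.
from lib2to3 import pygram, pytree
from lib2to3.pgen2 import driver

# pygram exposes grammars already loaded via load_grammar(); the driver
# converts the pgen2 parse into pytree nodes.
d = driver.Driver(pygram.python_grammar_no_print_statement,
                  convert=pytree.convert)

tree = d.parse_string("x = 1 + 2\n")   # the tokenizer expects a trailing newline
print(type(tree))
print(str(tree))                        # str() of the tree reproduces the source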
+ + Will take input such as: + Python + Python.framework + Python.framework/Versions/Current + .frameworkfmwk_indextest_dyld_findlibSystem.dylib/usr/lib/libSystem.dylibSystem.framework/System/System/Library/Frameworks/System.framework/System# These are the defaults as per man dyld(1)# If DYLD_FRAMEWORK_PATH is set and this dylib_name is a# framework name, use the first file that exists in the framework# path if any. If there is none go on to search the DYLD_LIBRARY_PATH# if any.# If DYLD_LIBRARY_PATH is set then use the first file that exists# in the path. If none use the original name.# If we haven't done any searching and found a library and the# dylib_name starts with "@executable_path/" then construct the# library name.b' +dyld emulation +'u' +dyld emulation +'b'dyld_find'u'dyld_find'b'framework_find'u'framework_find'b'framework_info'u'framework_info'b'dylib_info'u'dylib_info'b'~/Library/Frameworks'u'~/Library/Frameworks'b'/Library/Frameworks'u'/Library/Frameworks'b'/Network/Library/Frameworks'u'/Network/Library/Frameworks'b'/System/Library/Frameworks'u'/System/Library/Frameworks'b'~/lib'u'~/lib'b'/usr/local/lib'u'/usr/local/lib'b'/usr/lib'u'/usr/lib'b'DYLD_IMAGE_SUFFIX'u'DYLD_IMAGE_SUFFIX'b'DYLD_FRAMEWORK_PATH'u'DYLD_FRAMEWORK_PATH'b'DYLD_LIBRARY_PATH'u'DYLD_LIBRARY_PATH'b'DYLD_FALLBACK_FRAMEWORK_PATH'u'DYLD_FALLBACK_FRAMEWORK_PATH'b'DYLD_FALLBACK_LIBRARY_PATH'u'DYLD_FALLBACK_LIBRARY_PATH'b'For a potential path iterator, add DYLD_IMAGE_SUFFIX semantics'u'For a potential path iterator, add DYLD_IMAGE_SUFFIX semantics'b'.dylib'u'.dylib'b'@executable_path/'u'@executable_path/'b' + Find a library or framework using dyld semantics + 'u' + Find a library or framework using dyld semantics + 'b'dylib %s could not be found'u'dylib %s could not be found'b' + Find a framework using dyld semantics in a very loose manner. + + Will take input such as: + Python + Python.framework + Python.framework/Versions/Current + 'u' + Find a framework using dyld semantics in a very loose manner. + + Will take input such as: + Python + Python.framework + Python.framework/Versions/Current + 'b'.framework'u'.framework'b'libSystem.dylib'u'libSystem.dylib'b'/usr/lib/libSystem.dylib'u'/usr/lib/libSystem.dylib'b'System.framework/System'u'System.framework/System'b'/System/Library/Frameworks/System.framework/System'u'/System/Library/Frameworks/System.framework/System'u'ctypes.macholib.dyld'u'macholib.dyld'u'dyld' +Generic dylib path manipulation +(?x) +(?P^.*)(?:^|/) +(?P + (?P\w+?) + (?:\.(?P[^._]+))? + (?:_(?P[^._]+))? + \.dylib$ +) +DYLIB_RE + A dylib name can take one of the following four forms: + Location/Name.SomeVersion_Suffix.dylib + Location/Name.SomeVersion.dylib + Location/Name_Suffix.dylib + Location/Name.dylib + + returns None if not found or a mapping equivalent to: + dict( + location='Location', + name='Name.SomeVersion_Suffix.dylib', + shortname='Name', + version='SomeVersion', + suffix='Suffix', + ) + + Note that SomeVersion and Suffix are optional and may be None + if not present. + is_dylibtest_dylib_infoshortnamecompletely/invalidcompletely/invalide_debugP/Foo.dylibFoo.dylibFooP/Foo_debug.dylibFoo_debug.dylibP/Foo.A.dylibFoo.A.dylibP/Foo_debug.A.dylibFoo_debug.A.dylibFoo_debugP/Foo.A_debug.dylibFoo.A_debug.dylibb' +Generic dylib path manipulation +'u' +Generic dylib path manipulation +'b'(?x) +(?P^.*)(?:^|/) +(?P + (?P\w+?) + (?:\.(?P[^._]+))? + (?:_(?P[^._]+))? + \.dylib$ +) +'u'(?x) +(?P^.*)(?:^|/) +(?P + (?P\w+?) + (?:\.(?P[^._]+))? + (?:_(?P[^._]+))? 
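The "dyld emulation" fragments above come from the private ctypes.macholib.dyld module, which ctypes.util.find_library relies on under macOS. A hedged sketch of dyld_find and framework_find; results depend on the system (on recent macOS the libraries live in the dyld shared cache), and on non-macOS platforms the lookups simply fail with the ValueError shown in the dump:

# Sketch of the dyld search helpers whose docstrings appear above.
from ctypes.macholib.dyld import dyld_find, framework_find

for name in ("libSystem.dylib", "System.framework/System"):
    try:
        print(name, "->", dyld_find(name))
    except ValueError as exc:            # "dylib %s could not be found"
        print(name, "->", exc)

# framework_find() is looser: it accepts "Python", "Python.framework",
# or "Python.framework/Versions/Current" and expands them.
try:
    print(framework_find("Python"))
except ValueError as exc:
    print(exc)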
+ \.dylib$ +) +'b' + A dylib name can take one of the following four forms: + Location/Name.SomeVersion_Suffix.dylib + Location/Name.SomeVersion.dylib + Location/Name_Suffix.dylib + Location/Name.dylib + + returns None if not found or a mapping equivalent to: + dict( + location='Location', + name='Name.SomeVersion_Suffix.dylib', + shortname='Name', + version='SomeVersion', + suffix='Suffix', + ) + + Note that SomeVersion and Suffix are optional and may be None + if not present. + 'u' + A dylib name can take one of the following four forms: + Location/Name.SomeVersion_Suffix.dylib + Location/Name.SomeVersion.dylib + Location/Name_Suffix.dylib + Location/Name.dylib + + returns None if not found or a mapping equivalent to: + dict( + location='Location', + name='Name.SomeVersion_Suffix.dylib', + shortname='Name', + version='SomeVersion', + suffix='Suffix', + ) + + Note that SomeVersion and Suffix are optional and may be None + if not present. + 'b'completely/invalid'u'completely/invalid'b'completely/invalide_debug'u'completely/invalide_debug'b'P/Foo.dylib'u'P/Foo.dylib'b'Foo.dylib'u'Foo.dylib'b'Foo'u'Foo'b'P/Foo_debug.dylib'u'P/Foo_debug.dylib'b'Foo_debug.dylib'u'Foo_debug.dylib'b'P/Foo.A.dylib'u'P/Foo.A.dylib'b'Foo.A.dylib'u'Foo.A.dylib'b'P/Foo_debug.A.dylib'u'P/Foo_debug.A.dylib'b'Foo_debug.A.dylib'u'Foo_debug.A.dylib'b'Foo_debug'u'Foo_debug'b'P/Foo.A_debug.dylib'u'P/Foo.A_debug.dylib'b'Foo.A_debug.dylib'u'Foo.A_debug.dylib'u'ctypes.macholib.dylib'u'macholib.dylib'Encodings and related functions.encode_base64encode_noopencode_quopri_bencode_encodestring_qencodequotetabs=20Encode the message's payload in Base64. + + Also, add an appropriate Content-Transfer-Encoding header. + get_payloadencdataset_payloadContent-Transfer-EncodingEncode the message's payload in quoted-printable. + + Also, add an appropriate Content-Transfer-Encoding header. + Set the Content-Transfer-Encoding header to 7bit or 8bit.Do nothing.# Copyright (C) 2001-2006 Python Software Foundation# Must encode spaces, which quopri.encodestring() doesn't do# There's no payload. For backwards compatibility we use 7bit# We play a trick to make this go fast. If decoding from ASCII succeeds,# we know the data must be 7bit, otherwise treat it as 8bit.b'Encodings and related functions.'u'Encodings and related functions.'b'encode_7or8bit'u'encode_7or8bit'b'encode_base64'u'encode_base64'b'encode_noop'u'encode_noop'b'encode_quopri'u'encode_quopri'b'=20'b'Encode the message's payload in Base64. + + Also, add an appropriate Content-Transfer-Encoding header. + 'u'Encode the message's payload in Base64. + + Also, add an appropriate Content-Transfer-Encoding header. + 'b'Content-Transfer-Encoding'u'Content-Transfer-Encoding'b'Encode the message's payload in quoted-printable. + + Also, add an appropriate Content-Transfer-Encoding header. + 'u'Encode the message's payload in quoted-printable. + + Also, add an appropriate Content-Transfer-Encoding header. 
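The regex and the test table above belong to ctypes.macholib.dylib's dylib_info, which parses the four documented dylib name forms. A short sketch, with expected results taken from the test data recovered in the dump:

# dylib_info() parses Location/Name[.Version][_Suffix].dylib names.
from ctypes.macholib.dylib import dylib_info

print(dylib_info("completely/invalid"))    # None: not a .dylib name
print(dylib_info("P/Foo.dylib"))           # location 'P', shortname 'Foo',
                                           # version None, suffix None
print(dylib_info("P/Foo.A_debug.dylib"))   # version 'A', suffix 'debug'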
+ 'b'Set the Content-Transfer-Encoding header to 7bit or 8bit.'u'Set the Content-Transfer-Encoding header to 7bit or 8bit.'b'Do nothing.'u'Do nothing.'u'email.encoders'HTML character entity references.name2codepointcodepoint2nameentitydefs1980x00c6AElig1930x00c1Aacute1940x00c2Acirc1920x00c0Agrave9130x0391Alpha1970x00c5Aring1950x00c3Atilde1960x00c4Auml9140x0392Beta1990x00c7Ccedil9350x03a7Chi82250x2021Dagger9160x0394Delta0x00d0ETH0x00c9Eacute0x00caEcirc0x00c8Egrave9170x0395Epsilon9190x0397Eta0x00cbEuml9150x0393Gamma0x00cdIacute0x00ceIcirc0x00ccIgrave9210x0399Iota0x00cfIuml9220x039aKappa9230x039b9240x039cMu2090x00d1Ntilde9250x039dNu3380x0152OElig2110x00d3Oacute2120x00d4Ocirc2100x00d2Ograve9370x03a9Omega9270x039fOmicron2160x00d8Oslash2130x00d5Otilde2140x00d6Ouml9340x03a6Phi9280x03a0Pi82430x2033Prime0x03a8Psi9290x03a1Rho3520x0160Scaron9310x03a3Sigma2220x00deTHORN0x03a4Tau9200x0398Theta2180x00daUacute2190x00dbUcirc2170x00d9Ugrave9330x03a5Upsilon2200x00dcUuml9260x039eXi2210x00ddYacute3760x0178Yuml9180x0396Zeta2250x00e1aacute0x00e2acirc1800x00b4acute2300x00e6aelig2240x00e0agrave85010x2135alefsym9450x03b10x0026amp87430x2227and87360x2220ang2290x00e5aring87760x2248asymp2270x00e3atilde2280x00e4auml82220x201ebdquo9460x03b21660x00a6brvbar82260x2022bull87450x2229cap0x00e7ccedil1840x00b8cedil1620x00a2cent9670x03c7chi7100x02c6circ98270x2663clubs87730x2245cong1690x00a986290x21b5crarr87460x222acup1640x00a4curren86590x21d3dArr82240x2020dagger85950x2193darr1760x00b0deg9480x03b498300x2666diams2470x00f72330x00e9eacute2340x00eaecirc2320x00e8egrave87090x220581950x2003emsp81940x2002ensp0x03b588010x2261equiv9510x03b7eta2400x00f0eth2350x00ebeuml83640x20aceuro87070x2203exist0x0192fnof87040x2200forall1890x00bdfrac121880x00bcfrac141900x00befrac3482600x2044frasl9470x03b3gamma88050x2265620x003e86600x21d4hArr85960x2194harr98290x2665hearts82300x2026hellip2370x00ediacute2380x00eeicirc1610x00a1iexcl2360x00ecigrave84650x211187340x221einfin87470x222b9530x03b9iota1910x00bfiquest87120x2208isin2390x00efiuml9540x03bakappa86560x21d0lArr9550x03bb90010x23291710x00ablaquo85920x2190larr89680x2308lceil82200x201cldquo88040x226489700x230alfloor87270x2217lowast96740x25caloz82060x200elrm82490x2039lsaquo82160x2018lsquo0x003c1750x00afmacr82120x2014mdash1810x00b51830x00b7middot87220x22129560x03bcmu87110x2207nabla1600x00a0nbsp82110x2013ndash88000x226087150x220bni1720x00ac87130x2209notin88360x2284nsub2410x00f1ntilde9570x03bdnu2430x00f3oacute2440x00f4ocirc3390x0153oelig2420x00f2ograve82540x203eoline9690x03c9omega9590x03bfomicron88530x2295oplus87440x2228or1700x00aaordf1860x00baordm2480x00f8oslash2450x00f5otilde88550x2297otimes2460x00f6ouml1820x00b6para87060x220282400x2030permil88690x22a5perp9660x03c6phi9600x03c09820x03d6piv1770x00b1plusmn1630x00a3pound82420x2032prime87190x220fprod87330x221dprop9680x03c8psi0x002286580x21d2rArr87300x221aradic90020x232arang1870x00bbraquo85940x2192rarr89690x2309rceil82210x201drdquo84760x211c1740x00aereg89710x230brfloor9610x03c1rho82070x200frlm82500x203arsaquo82170x2019rsquo82180x201asbquo3530x0161scaron89010x22c5sdot1670x00a7sect1730x00adshy9630x03c3sigma9620x03c2sigmaf87640x223csim98240x2660spades88340x228288380x2286sube87210x221188350x22831850x00b9sup11780x00b2sup21790x00b3sup388390x2287supe2230x00dfszlig9640x03c4tau87560x2234there49520x03b8theta9770x03d1thetasym82010x2009thinsp2540x00fethorn7320x02dctilde2150x00d7times84820x2122trade86570x21d1uArr2500x00fauacute85930x2191uarr2510x00fbucirc2490x00f9ugrave1680x00a8uml9780x03d2upsih9650x03c5upsilon2520x00fcuuml84720x2118weierp9580x03bexi2530x00fdyacute1650x00a5yen0x00ff
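The encoder fragments above are email.encoders (encode_base64, encode_quopri, encode_7or8bit, encode_noop), which set a message's payload encoding and its Content-Transfer-Encoding header. A typical use with a binary MIME attachment:

# Typical use of the email.encoders helpers described above.
from email import encoders
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart

msg = MIMEMultipart()
part = MIMEBase("application", "octet-stream")
part.set_payload(b"\x00\x01binary payload\xff")
encoders.encode_base64(part)                 # base64-encodes and sets the header
part.add_header("Content-Disposition", "attachment", filename="blob.bin")
msg.attach(part)

print(part["Content-Transfer-Encoding"])     # base64
# encode_quopri() and encode_7or8bit() work the same way on text payloads;
# encode_noop() leaves the payload untouched.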
yuml0x03b6zeta82050x200dzwj82040x200czwnjÁáAacute;aacute;ĂAbreve;ăabreve;∾ac;∿acd;∾̳acE;ÂâAcirc;acirc;´acute;АAcy;аacy;ÆAElig;aelig;⁡af;Afr;afr;ÀàAgrave;agrave;ℵalefsym;aleph;ΑAlpha;αalpha;ĀAmacr;āamacr;⨿amalg;AMPAMP;amp;⩓And;∧and;⩕andand;⩜andd;⩘andslope;⩚andv;∠ang;⦤ange;angle;∡angmsd;⦨angmsdaa;⦩angmsdab;⦪angmsdac;⦫angmsdad;⦬angmsdae;⦭angmsdaf;⦮angmsdag;⦯angmsdah;∟angrt;⊾angrtvb;⦝angrtvbd;∢angsph;Åangst;⍼angzarr;ĄAogon;ąaogon;Aopf;aopf;≈ap;⩯apacir;⩰apE;≊ape;≋apid;apos;ApplyFunction;approx;approxeq;åAring;aring;Ascr;ascr;≔Assign;ast;asymp;≍asympeq;ÃãAtilde;atilde;ÄäAuml;auml;∳awconint;⨑awint;≌backcong;϶backepsilon;‵backprime;∽backsim;⋍backsimeq;∖Backslash;⫧Barv;⊽barvee;⌆Barwed;⌅barwed;barwedge;⎵bbrk;⎶bbrktbrk;bcong;БBcy;бbcy;bdquo;∵becaus;Because;because;⦰bemptyv;bepsi;ℬbernou;Bernoullis;ΒBeta;βbeta;ℶbeth;≬between;Bfr;bfr;⋂bigcap;◯bigcirc;⋃bigcup;⨀bigodot;⨁bigoplus;⨂bigotimes;⨆bigsqcup;★bigstar;▽bigtriangledown;△bigtriangleup;⨄biguplus;⋁bigvee;⋀bigwedge;⤍bkarow;⧫blacklozenge;▪blacksquare;▴blacktriangle;▾blacktriangledown;◂blacktriangleleft;▸blacktriangleright;␣blank;▒blk12;░blk14;▓blk34;█block;=⃥bne;≡⃥bnequiv;⫭bNot;⌐bnot;Bopf;bopf;⊥bot;bottom;⋈bowtie;⧉boxbox;╗boxDL;╖boxDl;╕boxdL;┐boxdl;╔boxDR;╓boxDr;╒boxdR;┌boxdr;═boxH;─boxh;╦boxHD;╤boxHd;╥boxhD;┬boxhd;╩boxHU;╧boxHu;╨boxhU;┴boxhu;⊟boxminus;⊞boxplus;⊠boxtimes;╝boxUL;╜boxUl;╛boxuL;┘boxul;╚boxUR;╙boxUr;╘boxuR;└boxur;║boxV;│boxv;╬boxVH;╫boxVh;╪boxvH;┼boxvh;╣boxVL;╢boxVl;╡boxvL;┤boxvl;╠boxVR;╟boxVr;╞boxvR;├boxvr;bprime;˘Breve;breve;¦brvbar;Bscr;bscr;⁏bsemi;bsim;bsime;bsol;⧅bsolb;⟈bsolhsub;bull;bullet;≎bump;⪮bumpE;≏bumpe;Bumpeq;bumpeq;ĆCacute;ćcacute;⋒Cap;∩cap;⩄capand;⩉capbrcup;⩋capcap;⩇capcup;⩀capdot;ⅅCapitalDifferentialD;∩︀caps;⁁caret;ˇcaron;ℭCayleys;⩍ccaps;ČCcaron;čccaron;ÇçCcedil;ccedil;ĈCcirc;ĉccirc;∰Cconint;⩌ccups;⩐ccupssm;ĊCdot;ċcdot;¸cedil;Cedilla;⦲cemptyv;¢cent;·CenterDot;centerdot;Cfr;cfr;ЧCHcy;чchcy;✓check;checkmark;ΧChi;χchi;○cir;circ;≗circeq;↺circlearrowleft;↻circlearrowright;⊛circledast;⊚circledcirc;⊝circleddash;⊙CircleDot;®circledR;ⓈcircledS;⊖CircleMinus;⊕CirclePlus;⊗CircleTimes;⧃cirE;cire;⨐cirfnint;⫯cirmid;⧂cirscir;∲ClockwiseContourIntegral;CloseCurlyDoubleQuote;CloseCurlyQuote;♣clubs;clubsuit;∷Colon;colon;⩴Colone;colone;coloneq;comma;commat;∁comp;∘compfn;complement;ℂcomplexes;≅cong;⩭congdot;≡Congruent;∯Conint;∮conint;ContourIntegral;Copf;copf;∐coprod;Coproduct;©COPYCOPY;copy;℗copysr;CounterClockwiseContourIntegral;↵crarr;⨯Cross;✗cross;Cscr;cscr;⫏csub;⫑csube;⫐csup;⫒csupe;⋯ctdot;⤸cudarrl;⤵cudarrr;⋞cuepr;⋟cuesc;↶cularr;⤽cularrp;⋓Cup;∪cup;⩈cupbrcap;CupCap;⩆cupcap;⩊cupcup;⊍cupdot;⩅cupor;∪︀cups;↷curarr;⤼curarrm;curlyeqprec;curlyeqsucc;⋎curlyvee;⋏curlywedge;¤curren;curvearrowleft;curvearrowright;cuvee;cuwed;cwconint;∱cwint;⌭cylcty;Dagger;dagger;ℸdaleth;↡Darr;⇓dArr;↓darr;‐dash;⫤Dashv;⊣dashv;⤏dbkarow;˝dblac;ĎDcaron;ďdcaron;ДDcy;дdcy;DD;ⅆdd;ddagger;⇊ddarr;⤑DDotrahd;⩷ddotseq;°deg;∇Del;ΔDelta;δdelta;⦱demptyv;⥿dfisht;Dfr;dfr;⥥dHar;⇃dharl;⇂dharr;DiacriticalAcute;˙DiacriticalDot;DiacriticalDoubleAcute;`DiacriticalGrave;DiacriticalTilde;⋄diam;Diamond;diamond;♦diamondsuit;diams;¨die;DifferentialD;ϝdigamma;⋲disin;÷div;divide;⋇divideontimes;divonx;ЂDJcy;ђdjcy;⌞dlcorn;⌍dlcrop;dollar;Dopf;dopf;Dot;dot;⃜DotDot;≐doteq;≑doteqdot;DotEqual;∸dotminus;∔dotplus;⊡dotsquare;doublebarwedge;DoubleContourIntegral;DoubleDot;DoubleDownArrow;⇐DoubleLeftArrow;⇔DoubleLeftRightArrow;DoubleLeftTee;⟸DoubleLongLeftArrow;⟺DoubleLongLeftRightArrow;⟹DoubleLongRightArrow;⇒DoubleRightArrow;⊨DoubleRightTee;⇑DoubleUpArrow;⇕DoubleUpDownArrow;∥DoubleVerticalBar;DownArrow;Downarrow;
downarrow;⤓DownArrowBar;⇵DownArrowUpArrow;̑DownBreve;downdownarrows;downharpoonleft;downharpoonright;⥐DownLeftRightVector;⥞DownLeftTeeVector;↽DownLeftVector;⥖DownLeftVectorBar;⥟DownRightTeeVector;⇁DownRightVector;⥗DownRightVectorBar;⊤DownTee;↧DownTeeArrow;⤐drbkarow;⌟drcorn;⌌drcrop;Dscr;dscr;ЅDScy;ѕdscy;⧶dsol;ĐDstrok;đdstrok;⋱dtdot;▿dtri;dtrif;duarr;⥯duhar;⦦dwangle;ЏDZcy;џdzcy;⟿dzigrarr;ÉéEacute;eacute;⩮easter;ĚEcaron;ěecaron;≖ecir;ÊêEcirc;ecirc;≕ecolon;ЭEcy;эecy;eDDot;ĖEdot;eDot;ėedot;ⅇee;≒efDot;Efr;efr;⪚eg;ÈèEgrave;egrave;⪖egs;⪘egsdot;⪙el;∈Element;⏧elinters;ℓell;⪕els;⪗elsdot;ĒEmacr;ēemacr;∅empty;emptyset;◻EmptySmallSquare;emptyv;▫EmptyVerySmallSquare; emsp13; emsp14; emsp;ŊENG;ŋeng; ensp;ĘEogon;ęeogon;Eopf;eopf;⋕epar;⧣eparsl;⩱eplus;εepsi;ΕEpsilon;epsilon;ϵepsiv;eqcirc;eqcolon;≂eqsim;eqslantgtr;eqslantless;⩵Equal;equals;EqualTilde;≟equest;⇌Equilibrium;equiv;⩸equivDD;⧥eqvparsl;⥱erarr;≓erDot;ℰEscr;ℯescr;esdot;⩳Esim;esim;ΗEta;ηeta;ÐðETH;eth;ËëEuml;euml;euro;excl;∃exist;Exists;expectation;ExponentialE;exponentiale;fallingdotseq;ФFcy;фfcy;♀female;ffiffilig;fffflig;fflffllig;Ffr;ffr;fifilig;◼FilledSmallSquare;FilledVerySmallSquare;fjfjlig;♭flat;flfllig;▱fltns;fnof;Fopf;fopf;∀ForAll;forall;⋔fork;⫙forkv;ℱFouriertrf;⨍fpartint;½frac12;⅓frac13;¼frac14;⅕frac15;⅙frac16;⅛frac18;⅔frac23;⅖frac25;¾frac34;⅗frac35;⅜frac38;⅘frac45;⅚frac56;⅝frac58;⅞frac78;⁄frasl;⌢frown;Fscr;fscr;ǵgacute;ΓGamma;γgamma;ϜGammad;gammad;⪆gap;ĞGbreve;ğgbreve;ĢGcedil;ĜGcirc;ĝgcirc;ГGcy;гgcy;ĠGdot;ġgdot;≧gE;≥ge;⪌gEl;⋛gel;geq;geqq;⩾geqslant;ges;⪩gescc;⪀gesdot;⪂gesdoto;⪄gesdotol;⋛︀gesl;⪔gesles;Gfr;gfr;⋙Gg;≫gg;ggg;ℷgimel;ЃGJcy;ѓgjcy;≷gl;⪥gla;⪒glE;⪤glj;⪊gnap;gnapprox;≩gnE;⪈gne;gneq;gneqq;⋧gnsim;Gopf;gopf;grave;GreaterEqual;GreaterEqualLess;GreaterFullEqual;⪢GreaterGreater;GreaterLess;GreaterSlantEqual;≳GreaterTilde;Gscr;ℊgscr;gsim;⪎gsime;⪐gsiml;GTGT;Gt;gt;⪧gtcc;⩺gtcir;⋗gtdot;⦕gtlPar;⩼gtquest;gtrapprox;⥸gtrarr;gtrdot;gtreqless;gtreqqless;gtrless;gtrsim;≩︀gvertneqq;gvnE;Hacek; 
hairsp;half;ℋhamilt;ЪHARDcy;ъhardcy;hArr;↔harr;⥈harrcir;↭harrw;Hat;ℏhbar;ĤHcirc;ĥhcirc;♥hearts;heartsuit;hellip;⊹hercon;ℌHfr;hfr;HilbertSpace;⤥hksearow;⤦hkswarow;⇿hoarr;∻homtht;↩hookleftarrow;↪hookrightarrow;ℍHopf;hopf;―horbar;HorizontalLine;Hscr;hscr;hslash;ĦHstrok;ħhstrok;HumpDownHump;HumpEqual;⁃hybull;hyphen;ÍíIacute;iacute;⁣ic;ÎîIcirc;icirc;ИIcy;иicy;Idot;ЕIEcy;еiecy;¡iexcl;iff;ℑIfr;ifr;ÌìIgrave;igrave;ⅈii;⨌iiiint;∭iiint;⧜iinfin;℩iiota;IJIJlig;ijijlig;Im;ĪImacr;īimacr;image;ImaginaryI;ℐimagline;imagpart;ıimath;⊷imof;Ƶimped;Implies;in;℅incare;∞infin;⧝infintie;inodot;∬Int;∫int;⊺intcal;ℤintegers;Integral;intercal;Intersection;⨗intlarhk;⨼intprod;InvisibleComma;⁢InvisibleTimes;ЁIOcy;ёiocy;ĮIogon;įiogon;Iopf;iopf;ΙIota;ιiota;iprod;¿iquest;Iscr;iscr;isin;⋵isindot;⋹isinE;⋴isins;⋳isinsv;isinv;it;ĨItilde;ĩitilde;ІIukcy;іiukcy;ÏïIuml;iuml;ĴJcirc;ĵjcirc;ЙJcy;йjcy;Jfr;jfr;ȷjmath;Jopf;jopf;Jscr;jscr;ЈJsercy;јjsercy;ЄJukcy;єjukcy;ΚKappa;κkappa;ϰkappav;ĶKcedil;ķkcedil;Kcy;кkcy;Kfr;kfr;ĸkgreen;ХKHcy;хkhcy;ЌKJcy;ќkjcy;Kopf;kopf;Kscr;kscr;⇚lAarr;ĹLacute;ĺlacute;⦴laemptyv;ℒlagran;ΛLambda;λlambda;⟪Lang;⟨lang;⦑langd;langle;⪅lap;Laplacetrf;«laquo;↞Larr;lArr;←larr;⇤larrb;⤟larrbfs;⤝larrfs;larrhk;↫larrlp;⤹larrpl;⥳larrsim;↢larrtl;⪫lat;⤛lAtail;⤙latail;⪭late;⪭︀lates;⤎lBarr;⤌lbarr;❲lbbrk;lbrace;lbrack;⦋lbrke;⦏lbrksld;⦍lbrkslu;ĽLcaron;ľlcaron;ĻLcedil;ļlcedil;⌈lceil;lcub;ЛLcy;лlcy;⤶ldca;ldquo;ldquor;⥧ldrdhar;⥋ldrushar;↲ldsh;≦lE;≤le;LeftAngleBracket;LeftArrow;Leftarrow;leftarrow;LeftArrowBar;⇆LeftArrowRightArrow;leftarrowtail;LeftCeiling;⟦LeftDoubleBracket;⥡LeftDownTeeVector;LeftDownVector;⥙LeftDownVectorBar;⌊LeftFloor;leftharpoondown;↼leftharpoonup;⇇leftleftarrows;LeftRightArrow;Leftrightarrow;leftrightarrow;leftrightarrows;⇋leftrightharpoons;leftrightsquigarrow;⥎LeftRightVector;LeftTee;↤LeftTeeArrow;⥚LeftTeeVector;⋋leftthreetimes;⊲LeftTriangle;⧏LeftTriangleBar;⊴LeftTriangleEqual;⥑LeftUpDownVector;⥠LeftUpTeeVector;↿LeftUpVector;⥘LeftUpVectorBar;LeftVector;⥒LeftVectorBar;⪋lEg;⋚leg;leq;leqq;⩽leqslant;les;⪨lescc;⩿lesdot;⪁lesdoto;⪃lesdotor;⋚︀lesg;⪓lesges;lessapprox;⋖lessdot;lesseqgtr;lesseqqgtr;LessEqualGreater;LessFullEqual;≶LessGreater;lessgtr;⪡LessLess;≲lesssim;LessSlantEqual;LessTilde;⥼lfisht;lfloor;Lfr;lfr;lg;⪑lgE;⥢lHar;lhard;lharu;⥪lharul;▄lhblk;ЉLJcy;љljcy;⋘Ll;≪ll;llarr;llcorner;Lleftarrow;⥫llhard;◺lltri;ĿLmidot;ŀlmidot;⎰lmoust;lmoustache;⪉lnap;lnapprox;≨lnE;⪇lne;lneq;lneqq;⋦lnsim;⟬loang;⇽loarr;lobrk;⟵LongLeftArrow;Longleftarrow;longleftarrow;⟷LongLeftRightArrow;Longleftrightarrow;longleftrightarrow;⟼longmapsto;⟶LongRightArrow;Longrightarrow;longrightarrow;looparrowleft;↬looparrowright;⦅lopar;Lopf;lopf;⨭loplus;⨴lotimes;∗lowast;lowbar;↙LowerLeftArrow;↘LowerRightArrow;◊loz;lozenge;lozf;lpar;⦓lparlt;lrarr;lrcorner;lrhar;⥭lrhard;‎lrm;⊿lrtri;lsaquo;Lscr;lscr;↰Lsh;lsh;lsim;⪍lsime;⪏lsimg;lsqb;lsquo;lsquor;Lstrok;łlstrok;LTLT;Lt;lt;⪦ltcc;⩹ltcir;ltdot;lthree;⋉ltimes;⥶ltlarr;⩻ltquest;◃ltri;ltrie;ltrif;⦖ltrPar;⥊lurdshar;⥦luruhar;≨︀lvertneqq;lvnE;¯macr;♂male;✠malt;maltese;⤅Map;↦map;mapsto;mapstodown;mapstoleft;↥mapstoup;▮marker;⨩mcomma;МMcy;мmcy;mdash;∺mDDot;measuredangle; 
MediumSpace;ℳMellintrf;Mfr;mfr;℧mho;µmicro;∣mid;midast;⫰midcir;middot;−minus;minusb;minusd;⨪minusdu;∓MinusPlus;⫛mlcp;mldr;mnplus;⊧models;Mopf;mopf;mp;Mscr;mscr;mstpos;ΜMu;μmu;⊸multimap;mumap;nabla;ŃNacute;ńnacute;∠⃒nang;≉nap;⩰̸napE;≋̸napid;ʼnnapos;napprox;♮natur;natural;ℕnaturals;nbsp;≎̸nbump;≏̸nbumpe;⩃ncap;ŇNcaron;ňncaron;ŅNcedil;ņncedil;≇ncong;⩭̸ncongdot;⩂ncup;НNcy;нncy;ndash;≠ne;⤤nearhk;⇗neArr;↗nearr;nearrow;≐̸nedot;​NegativeMediumSpace;NegativeThickSpace;NegativeThinSpace;NegativeVeryThinSpace;≢nequiv;⤨nesear;≂̸nesim;NestedGreaterGreater;NestedLessLess;NewLine;∄nexist;nexists;Nfr;nfr;≧̸ngE;≱nge;ngeq;ngeqq;⩾̸ngeqslant;nges;⋙̸nGg;≵ngsim;≫⃒nGt;≯ngt;ngtr;≫̸nGtv;⇎nhArr;↮nharr;⫲nhpar;∋ni;⋼nis;⋺nisd;niv;ЊNJcy;њnjcy;⇍nlArr;↚nlarr;‥nldr;≦̸nlE;≰nle;nLeftarrow;nleftarrow;nLeftrightarrow;nleftrightarrow;nleq;nleqq;⩽̸nleqslant;nles;≮nless;⋘̸nLl;≴nlsim;≪⃒nLt;nlt;⋪nltri;⋬nltrie;≪̸nLtv;∤nmid;⁠NoBreak;NonBreakingSpace;Nopf;nopf;¬⫬Not;not;NotCongruent;≭NotCupCap;∦NotDoubleVerticalBar;∉NotElement;NotEqual;NotEqualTilde;NotExists;NotGreater;NotGreaterEqual;NotGreaterFullEqual;NotGreaterGreater;≹NotGreaterLess;NotGreaterSlantEqual;NotGreaterTilde;NotHumpDownHump;NotHumpEqual;notin;⋵̸notindot;⋹̸notinE;notinva;⋷notinvb;⋶notinvc;NotLeftTriangle;⧏̸NotLeftTriangleBar;NotLeftTriangleEqual;NotLess;NotLessEqual;≸NotLessGreater;NotLessLess;NotLessSlantEqual;NotLessTilde;⪢̸NotNestedGreaterGreater;⪡̸NotNestedLessLess;∌notni;notniva;⋾notnivb;⋽notnivc;⊀NotPrecedes;⪯̸NotPrecedesEqual;⋠NotPrecedesSlantEqual;NotReverseElement;⋫NotRightTriangle;⧐̸NotRightTriangleBar;⋭NotRightTriangleEqual;⊏̸NotSquareSubset;⋢NotSquareSubsetEqual;⊐̸NotSquareSuperset;⋣NotSquareSupersetEqual;⊂⃒NotSubset;⊈NotSubsetEqual;⊁NotSucceeds;⪰̸NotSucceedsEqual;⋡NotSucceedsSlantEqual;≿̸NotSucceedsTilde;⊃⃒NotSuperset;⊉NotSupersetEqual;≁NotTilde;≄NotTildeEqual;NotTildeFullEqual;NotTildeTilde;NotVerticalBar;npar;nparallel;⫽⃥nparsl;∂̸npart;⨔npolint;npr;nprcue;npre;nprec;npreceq;⇏nrArr;↛nrarr;⤳̸nrarrc;↝̸nrarrw;nRightarrow;nrightarrow;nrtri;nrtrie;nsc;nsccue;nsce;Nscr;nscr;nshortmid;nshortparallel;nsim;nsime;nsimeq;nsmid;nspar;nsqsube;nsqsupe;⊄nsub;⫅̸nsubE;nsube;nsubset;nsubseteq;nsubseteqq;nsucc;nsucceq;⊅nsup;⫆̸nsupE;nsupe;nsupset;nsupseteq;nsupseteqq;ntgl;ÑñNtilde;ntilde;ntlg;ntriangleleft;ntrianglelefteq;ntriangleright;ntrianglerighteq;ΝNu;νnu;num;№numero; 
numsp;≍⃒nvap;⊯nVDash;⊮nVdash;⊭nvDash;⊬nvdash;≥⃒nvge;>⃒nvgt;⤄nvHarr;⧞nvinfin;⤂nvlArr;≤⃒nvle;<⃒nvlt;⊴⃒nvltrie;⤃nvrArr;⊵⃒nvrtrie;∼⃒nvsim;⤣nwarhk;⇖nwArr;↖nwarr;nwarrow;⤧nwnear;ÓóOacute;oacute;oast;ocir;ÔôOcirc;ocirc;ОOcy;оocy;odash;ŐOdblac;őodblac;⨸odiv;odot;⦼odsold;OElig;oelig;⦿ofcir;Ofr;ofr;˛ogon;ÒòOgrave;ograve;⧁ogt;⦵ohbar;Ωohm;oint;olarr;⦾olcir;⦻olcross;‾oline;⧀olt;ŌOmacr;ōomacr;Omega;ωomega;ΟOmicron;οomicron;⦶omid;ominus;Oopf;oopf;⦷opar;OpenCurlyDoubleQuote;OpenCurlyQuote;⦹operp;oplus;⩔Or;∨or;orarr;⩝ord;ℴorder;orderof;ªordf;ºordm;⊶origof;⩖oror;⩗orslope;⩛orv;oS;Oscr;oscr;ØøOslash;oslash;⊘osol;ÕõOtilde;otilde;⨷Otimes;otimes;⨶otimesas;ÖöOuml;ouml;⌽ovbar;OverBar;⏞OverBrace;⎴OverBracket;⏜OverParenthesis;par;¶para;parallel;⫳parsim;⫽parsl;∂part;PartialD;ПPcy;пpcy;percnt;period;permil;perp;‱pertenk;Pfr;pfr;ΦPhi;phi;ϕphiv;phmmat;☎phone;ΠPi;πpi;pitchfork;ϖpiv;planck;ℎplanckh;plankv;plus;⨣plusacir;plusb;⨢pluscir;plusdo;⨥plusdu;⩲pluse;±PlusMinus;plusmn;⨦plussim;⨧plustwo;pm;Poincareplane;⨕pointint;ℙPopf;popf;£pound;⪻Pr;≺pr;⪷prap;≼prcue;⪳prE;⪯pre;prec;precapprox;preccurlyeq;Precedes;PrecedesEqual;PrecedesSlantEqual;≾PrecedesTilde;preceq;⪹precnapprox;⪵precneqq;⋨precnsim;precsim;″Prime;′prime;primes;prnap;prnE;prnsim;∏prod;Product;⌮profalar;⌒profline;⌓profsurf;∝prop;Proportion;Proportional;propto;prsim;⊰prurel;Pscr;pscr;ΨPsi;ψpsi; puncsp;Qfr;qfr;qint;ℚQopf;qopf;⁗qprime;Qscr;qscr;quaternions;⨖quatint;quest;questeq;QUOTQUOT;quot;⇛rAarr;∽̱race;ŔRacute;ŕracute;√radic;⦳raemptyv;⟫Rang;⟩rang;⦒rangd;⦥range;rangle;»raquo;↠Rarr;rArr;→rarr;⥵rarrap;⇥rarrb;⤠rarrbfs;⤳rarrc;⤞rarrfs;rarrhk;rarrlp;⥅rarrpl;⥴rarrsim;⤖Rarrtl;↣rarrtl;↝rarrw;⤜rAtail;⤚ratail;∶ratio;rationals;RBarr;rBarr;rbarr;❳rbbrk;rbrace;rbrack;⦌rbrke;⦎rbrksld;⦐rbrkslu;ŘRcaron;řrcaron;ŖRcedil;ŗrcedil;⌉rceil;rcub;РRcy;рrcy;⤷rdca;⥩rdldhar;rdquo;rdquor;↳rdsh;ℜRe;real;ℛrealine;realpart;ℝreals;▭rect;REGREG;reg;ReverseElement;ReverseEquilibrium;ReverseUpEquilibrium;⥽rfisht;⌋rfloor;Rfr;rfr;⥤rHar;rhard;⇀rharu;⥬rharul;ΡRho;ρrho;ϱrhov;RightAngleBracket;RightArrow;Rightarrow;rightarrow;RightArrowBar;⇄RightArrowLeftArrow;rightarrowtail;RightCeiling;⟧RightDoubleBracket;⥝RightDownTeeVector;RightDownVector;⥕RightDownVectorBar;RightFloor;rightharpoondown;rightharpoonup;rightleftarrows;rightleftharpoons;⇉rightrightarrows;rightsquigarrow;⊢RightTee;RightTeeArrow;⥛RightTeeVector;⋌rightthreetimes;⊳RightTriangle;⧐RightTriangleBar;⊵RightTriangleEqual;⥏RightUpDownVector;⥜RightUpTeeVector;↾RightUpVector;⥔RightUpVectorBar;RightVector;⥓RightVectorBar;˚ring;risingdotseq;rlarr;rlhar;‏rlm;⎱rmoust;rmoustache;⫮rnmid;⟭roang;⇾roarr;robrk;⦆ropar;Ropf;ropf;⨮roplus;⨵rotimes;⥰RoundImplies;rpar;⦔rpargt;⨒rppolint;rrarr;Rrightarrow;rsaquo;Rscr;rscr;↱Rsh;rsh;rsqb;rsquo;rsquor;rthree;⋊rtimes;▹rtri;rtrie;rtrif;⧎rtriltri;⧴RuleDelayed;⥨ruluhar;℞rx;ŚSacute;śsacute;sbquo;⪼Sc;≻sc;⪸scap;Scaron;scaron;≽sccue;⪴scE;⪰sce;ŞScedil;şscedil;ŜScirc;ŝscirc;⪺scnap;⪶scnE;⋩scnsim;⨓scpolint;≿scsim;СScy;сscy;⋅sdot;sdotb;⩦sdote;searhk;⇘seArr;searr;searrow;§sect;semi;⤩seswar;setminus;setmn;✶sext;Sfr;sfr;sfrown;♯sharp;ЩSHCHcy;щshchcy;ШSHcy;шshcy;ShortDownArrow;ShortLeftArrow;shortmid;shortparallel;ShortRightArrow;↑ShortUpArrow;­shy;ΣSigma;σsigma;ςsigmaf;sigmav;∼sim;⩪simdot;≃sime;simeq;⪞simg;⪠simgE;⪝siml;⪟simlE;≆simne;⨤simplus;⥲simrarr;slarr;SmallCircle;smallsetminus;⨳smashp;⧤smeparsl;smid;⌣smile;⪪smt;⪬smte;⪬︀smtes;ЬSOFTcy;ьsoftcy;sol;⧄solb;⌿solbar;Sopf;sopf;♠spades;spadesuit;spar;⊓sqcap;⊓︀sqcaps;⊔sqcup;⊔︀sqcups;Sqrt;⊏sqsub;⊑sqsube;sqsubset;sqsubseteq;⊐sqsup;⊒sqsupe;sqsupset;sqsupseteq;□squ;Square;square;SquareIntersection;Squ
areSubset;SquareSubsetEqual;SquareSuperset;SquareSupersetEqual;SquareUnion;squarf;squf;srarr;Sscr;sscr;ssetmn;ssmile;⋆sstarf;Star;☆star;starf;straightepsilon;straightphi;strns;⋐Sub;⊂sub;⪽subdot;⫅subE;⊆sube;⫃subedot;⫁submult;⫋subnE;⊊subne;⪿subplus;⥹subrarr;Subset;subset;subseteq;subseteqq;SubsetEqual;subsetneq;subsetneqq;⫇subsim;⫕subsub;⫓subsup;succ;succapprox;succcurlyeq;Succeeds;SucceedsEqual;SucceedsSlantEqual;SucceedsTilde;succeq;succnapprox;succneqq;succnsim;succsim;SuchThat;∑Sum;sum;♪sung;¹sup1;²sup2;³sup3;⋑Sup;⊃sup;⪾supdot;⫘supdsub;⫆supE;⊇supe;⫄supedot;Superset;SupersetEqual;⟉suphsol;⫗suphsub;⥻suplarr;⫂supmult;⫌supnE;⊋supne;⫀supplus;Supset;supset;supseteq;supseteqq;supsetneq;supsetneqq;⫈supsim;⫔supsub;⫖supsup;swarhk;⇙swArr;swarr;swarrow;⤪swnwar;ßszlig;Tab;⌖target;ΤTau;τtau;tbrk;ŤTcaron;ťtcaron;ŢTcedil;ţtcedil;ТTcy;тtcy;⃛tdot;⌕telrec;Tfr;tfr;∴there4;Therefore;therefore;ΘTheta;θtheta;ϑthetasym;thetav;thickapprox;thicksim;  ThickSpace; thinsp;ThinSpace;thkap;thksim;ÞþTHORN;thorn;Tilde;tilde;TildeEqual;TildeFullEqual;TildeTilde;×times;timesb;⨱timesbar;⨰timesd;tint;toea;top;⌶topbot;⫱topcir;Topf;topf;⫚topfork;tosa;‴tprime;TRADE;trade;▵triangle;triangledown;triangleleft;trianglelefteq;≜triangleq;triangleright;trianglerighteq;◬tridot;trie;⨺triminus;TripleDot;⨹triplus;⧍trisb;⨻tritime;⏢trpezium;Tscr;tscr;ЦTScy;цtscy;ЋTSHcy;ћtshcy;ŦTstrok;ŧtstrok;twixt;twoheadleftarrow;twoheadrightarrow;ÚúUacute;uacute;↟Uarr;uArr;uarr;⥉Uarrocir;ЎUbrcy;ўubrcy;ŬUbreve;ŭubreve;ÛûUcirc;ucirc;УUcy;уucy;⇅udarr;ŰUdblac;űudblac;⥮udhar;⥾ufisht;Ufr;ufr;ÙùUgrave;ugrave;⥣uHar;uharl;uharr;▀uhblk;⌜ulcorn;ulcorner;⌏ulcrop;◸ultri;ŪUmacr;ūumacr;uml;UnderBar;⏟UnderBrace;UnderBracket;⏝UnderParenthesis;Union;⊎UnionPlus;ŲUogon;ųuogon;Uopf;uopf;UpArrow;Uparrow;uparrow;⤒UpArrowBar;UpArrowDownArrow;↕UpDownArrow;Updownarrow;updownarrow;UpEquilibrium;upharpoonleft;upharpoonright;uplus;UpperLeftArrow;UpperRightArrow;ϒUpsi;υupsi;upsih;ΥUpsilon;upsilon;UpTee;UpTeeArrow;⇈upuparrows;⌝urcorn;urcorner;⌎urcrop;ŮUring;ůuring;◹urtri;Uscr;uscr;⋰utdot;ŨUtilde;ũutilde;utri;utrif;uuarr;ÜüUuml;uuml;⦧uwangle;⦜vangrt;varepsilon;varkappa;varnothing;varphi;varpi;varpropto;vArr;varr;varrho;varsigma;⊊︀varsubsetneq;⫋︀varsubsetneqq;⊋︀varsupsetneq;⫌︀varsupsetneqq;vartheta;vartriangleleft;vartriangleright;⫫Vbar;⫨vBar;⫩vBarv;ВVcy;вvcy;⊫VDash;⊩Vdash;vDash;vdash;⫦Vdashl;Vee;vee;⊻veebar;≚veeeq;⋮vellip;‖Verbar;verbar;Vert;vert;VerticalBar;VerticalLine;❘VerticalSeparator;≀VerticalTilde;VeryThinSpace;Vfr;vfr;vltri;vnsub;vnsup;Vopf;vopf;vprop;vrtri;Vscr;vscr;vsubnE;vsubne;vsupnE;vsupne;⊪Vvdash;⦚vzigzag;ŴWcirc;ŵwcirc;⩟wedbar;Wedge;wedge;≙wedgeq;℘weierp;Wfr;wfr;Wopf;wopf;wp;wr;wreath;Wscr;wscr;xcap;xcirc;xcup;xdtri;Xfr;xfr;xhArr;xharr;ΞXi;ξxi;xlArr;xlarr;xmap;⋻xnis;xodot;Xopf;xopf;xoplus;xotime;xrArr;xrarr;Xscr;xscr;xsqcup;xuplus;xutri;xvee;xwedge;ÝýYacute;yacute;ЯYAcy;яyacy;ŶYcirc;ŷycirc;ЫYcy;ыycy;¥yen;Yfr;yfr;ЇYIcy;їyicy;Yopf;yopf;Yscr;yscr;ЮYUcy;юyucy;Yuml;yuml;ŹZacute;źzacute;Zcaron;zcaron;ЗZcy;зzcy;ŻZdot;żzdot;ℨzeetrf;ZeroWidthSpace;ΖZeta;ζzeta;Zfr;zfr;ЖZHcy;жzhcy;⇝zigrarr;Zopf;zopf;Zscr;zscr;‍zwj;‌zwnj;codepoint# maps the HTML entity name to the Unicode code point# latin capital letter AE = latin capital ligature AE, U+00C6 ISOlat1# latin capital letter A with acute, U+00C1 ISOlat1# latin capital letter A with circumflex, U+00C2 ISOlat1# latin capital letter A with grave = latin capital letter A grave, U+00C0 ISOlat1# greek capital letter alpha, U+0391# latin capital letter A with ring above = latin capital letter A ring, U+00C5 ISOlat1# latin capital letter A with 
tilde, U+00C3 ISOlat1# latin capital letter A with diaeresis, U+00C4 ISOlat1# greek capital letter beta, U+0392# latin capital letter C with cedilla, U+00C7 ISOlat1# greek capital letter chi, U+03A7# double dagger, U+2021 ISOpub# greek capital letter delta, U+0394 ISOgrk3# latin capital letter ETH, U+00D0 ISOlat1# latin capital letter E with acute, U+00C9 ISOlat1# latin capital letter E with circumflex, U+00CA ISOlat1# latin capital letter E with grave, U+00C8 ISOlat1# greek capital letter epsilon, U+0395# greek capital letter eta, U+0397# latin capital letter E with diaeresis, U+00CB ISOlat1# greek capital letter gamma, U+0393 ISOgrk3# latin capital letter I with acute, U+00CD ISOlat1# latin capital letter I with circumflex, U+00CE ISOlat1# latin capital letter I with grave, U+00CC ISOlat1# greek capital letter iota, U+0399# latin capital letter I with diaeresis, U+00CF ISOlat1# greek capital letter kappa, U+039A# greek capital letter lambda, U+039B ISOgrk3# greek capital letter mu, U+039C# latin capital letter N with tilde, U+00D1 ISOlat1# greek capital letter nu, U+039D# latin capital ligature OE, U+0152 ISOlat2# latin capital letter O with acute, U+00D3 ISOlat1# latin capital letter O with circumflex, U+00D4 ISOlat1# latin capital letter O with grave, U+00D2 ISOlat1# greek capital letter omega, U+03A9 ISOgrk3# greek capital letter omicron, U+039F# latin capital letter O with stroke = latin capital letter O slash, U+00D8 ISOlat1# latin capital letter O with tilde, U+00D5 ISOlat1# latin capital letter O with diaeresis, U+00D6 ISOlat1# greek capital letter phi, U+03A6 ISOgrk3# greek capital letter pi, U+03A0 ISOgrk3# double prime = seconds = inches, U+2033 ISOtech# greek capital letter psi, U+03A8 ISOgrk3# greek capital letter rho, U+03A1# latin capital letter S with caron, U+0160 ISOlat2# greek capital letter sigma, U+03A3 ISOgrk3# latin capital letter THORN, U+00DE ISOlat1# greek capital letter tau, U+03A4# greek capital letter theta, U+0398 ISOgrk3# latin capital letter U with acute, U+00DA ISOlat1# latin capital letter U with circumflex, U+00DB ISOlat1# latin capital letter U with grave, U+00D9 ISOlat1# greek capital letter upsilon, U+03A5 ISOgrk3# latin capital letter U with diaeresis, U+00DC ISOlat1# greek capital letter xi, U+039E ISOgrk3# latin capital letter Y with acute, U+00DD ISOlat1# latin capital letter Y with diaeresis, U+0178 ISOlat2# greek capital letter zeta, U+0396# latin small letter a with acute, U+00E1 ISOlat1# latin small letter a with circumflex, U+00E2 ISOlat1# acute accent = spacing acute, U+00B4 ISOdia# latin small letter ae = latin small ligature ae, U+00E6 ISOlat1# latin small letter a with grave = latin small letter a grave, U+00E0 ISOlat1# alef symbol = first transfinite cardinal, U+2135 NEW# greek small letter alpha, U+03B1 ISOgrk3# ampersand, U+0026 ISOnum# logical and = wedge, U+2227 ISOtech# angle, U+2220 ISOamso# latin small letter a with ring above = latin small letter a ring, U+00E5 ISOlat1# almost equal to = asymptotic to, U+2248 ISOamsr# latin small letter a with tilde, U+00E3 ISOlat1# latin small letter a with diaeresis, U+00E4 ISOlat1# double low-9 quotation mark, U+201E NEW# greek small letter beta, U+03B2 ISOgrk3# broken bar = broken vertical bar, U+00A6 ISOnum# bullet = black small circle, U+2022 ISOpub# intersection = cap, U+2229 ISOtech# latin small letter c with cedilla, U+00E7 ISOlat1# cedilla = spacing cedilla, U+00B8 ISOdia# cent sign, U+00A2 ISOnum# greek small letter chi, U+03C7 ISOgrk3# modifier letter circumflex accent, U+02C6 ISOpub# 
black club suit = shamrock, U+2663 ISOpub# approximately equal to, U+2245 ISOtech# copyright sign, U+00A9 ISOnum# downwards arrow with corner leftwards = carriage return, U+21B5 NEW# union = cup, U+222A ISOtech# currency sign, U+00A4 ISOnum# downwards double arrow, U+21D3 ISOamsa# dagger, U+2020 ISOpub# downwards arrow, U+2193 ISOnum# degree sign, U+00B0 ISOnum# greek small letter delta, U+03B4 ISOgrk3# black diamond suit, U+2666 ISOpub# division sign, U+00F7 ISOnum# latin small letter e with acute, U+00E9 ISOlat1# latin small letter e with circumflex, U+00EA ISOlat1# latin small letter e with grave, U+00E8 ISOlat1# empty set = null set = diameter, U+2205 ISOamso# em space, U+2003 ISOpub# en space, U+2002 ISOpub# greek small letter epsilon, U+03B5 ISOgrk3# identical to, U+2261 ISOtech# greek small letter eta, U+03B7 ISOgrk3# latin small letter eth, U+00F0 ISOlat1# latin small letter e with diaeresis, U+00EB ISOlat1# euro sign, U+20AC NEW# there exists, U+2203 ISOtech# latin small f with hook = function = florin, U+0192 ISOtech# for all, U+2200 ISOtech# vulgar fraction one half = fraction one half, U+00BD ISOnum# vulgar fraction one quarter = fraction one quarter, U+00BC ISOnum# vulgar fraction three quarters = fraction three quarters, U+00BE ISOnum# fraction slash, U+2044 NEW# greek small letter gamma, U+03B3 ISOgrk3# greater-than or equal to, U+2265 ISOtech# greater-than sign, U+003E ISOnum# left right double arrow, U+21D4 ISOamsa# left right arrow, U+2194 ISOamsa# black heart suit = valentine, U+2665 ISOpub# horizontal ellipsis = three dot leader, U+2026 ISOpub# latin small letter i with acute, U+00ED ISOlat1# latin small letter i with circumflex, U+00EE ISOlat1# inverted exclamation mark, U+00A1 ISOnum# latin small letter i with grave, U+00EC ISOlat1# blackletter capital I = imaginary part, U+2111 ISOamso# infinity, U+221E ISOtech# integral, U+222B ISOtech# greek small letter iota, U+03B9 ISOgrk3# inverted question mark = turned question mark, U+00BF ISOnum# element of, U+2208 ISOtech# latin small letter i with diaeresis, U+00EF ISOlat1# greek small letter kappa, U+03BA ISOgrk3# leftwards double arrow, U+21D0 ISOtech# greek small letter lambda, U+03BB ISOgrk3# left-pointing angle bracket = bra, U+2329 ISOtech# left-pointing double angle quotation mark = left pointing guillemet, U+00AB ISOnum# leftwards arrow, U+2190 ISOnum# left ceiling = apl upstile, U+2308 ISOamsc# left double quotation mark, U+201C ISOnum# less-than or equal to, U+2264 ISOtech# left floor = apl downstile, U+230A ISOamsc# asterisk operator, U+2217 ISOtech# lozenge, U+25CA ISOpub# left-to-right mark, U+200E NEW RFC 2070# single left-pointing angle quotation mark, U+2039 ISO proposed# left single quotation mark, U+2018 ISOnum# less-than sign, U+003C ISOnum# macron = spacing macron = overline = APL overbar, U+00AF ISOdia# em dash, U+2014 ISOpub# micro sign, U+00B5 ISOnum# middle dot = Georgian comma = Greek middle dot, U+00B7 ISOnum# minus sign, U+2212 ISOtech# greek small letter mu, U+03BC ISOgrk3# nabla = backward difference, U+2207 ISOtech# no-break space = non-breaking space, U+00A0 ISOnum# en dash, U+2013 ISOpub# not equal to, U+2260 ISOtech# contains as member, U+220B ISOtech# not sign, U+00AC ISOnum# not an element of, U+2209 ISOtech# not a subset of, U+2284 ISOamsn# latin small letter n with tilde, U+00F1 ISOlat1# greek small letter nu, U+03BD ISOgrk3# latin small letter o with acute, U+00F3 ISOlat1# latin small letter o with circumflex, U+00F4 ISOlat1# latin small ligature oe, U+0153 ISOlat2# latin small letter 
o with grave, U+00F2 ISOlat1# overline = spacing overscore, U+203E NEW# greek small letter omega, U+03C9 ISOgrk3# greek small letter omicron, U+03BF NEW# circled plus = direct sum, U+2295 ISOamsb# logical or = vee, U+2228 ISOtech# feminine ordinal indicator, U+00AA ISOnum# masculine ordinal indicator, U+00BA ISOnum# latin small letter o with stroke, = latin small letter o slash, U+00F8 ISOlat1# latin small letter o with tilde, U+00F5 ISOlat1# circled times = vector product, U+2297 ISOamsb# latin small letter o with diaeresis, U+00F6 ISOlat1# pilcrow sign = paragraph sign, U+00B6 ISOnum# partial differential, U+2202 ISOtech# per mille sign, U+2030 ISOtech# up tack = orthogonal to = perpendicular, U+22A5 ISOtech# greek small letter phi, U+03C6 ISOgrk3# greek small letter pi, U+03C0 ISOgrk3# greek pi symbol, U+03D6 ISOgrk3# plus-minus sign = plus-or-minus sign, U+00B1 ISOnum# pound sign, U+00A3 ISOnum# prime = minutes = feet, U+2032 ISOtech# n-ary product = product sign, U+220F ISOamsb# proportional to, U+221D ISOtech# greek small letter psi, U+03C8 ISOgrk3# quotation mark = APL quote, U+0022 ISOnum# rightwards double arrow, U+21D2 ISOtech# square root = radical sign, U+221A ISOtech# right-pointing angle bracket = ket, U+232A ISOtech# right-pointing double angle quotation mark = right pointing guillemet, U+00BB ISOnum# rightwards arrow, U+2192 ISOnum# right ceiling, U+2309 ISOamsc# right double quotation mark, U+201D ISOnum# blackletter capital R = real part symbol, U+211C ISOamso# registered sign = registered trade mark sign, U+00AE ISOnum# right floor, U+230B ISOamsc# greek small letter rho, U+03C1 ISOgrk3# right-to-left mark, U+200F NEW RFC 2070# single right-pointing angle quotation mark, U+203A ISO proposed# right single quotation mark, U+2019 ISOnum# single low-9 quotation mark, U+201A NEW# latin small letter s with caron, U+0161 ISOlat2# dot operator, U+22C5 ISOamsb# section sign, U+00A7 ISOnum# soft hyphen = discretionary hyphen, U+00AD ISOnum# greek small letter sigma, U+03C3 ISOgrk3# greek small letter final sigma, U+03C2 ISOgrk3# tilde operator = varies with = similar to, U+223C ISOtech# black spade suit, U+2660 ISOpub# subset of, U+2282 ISOtech# subset of or equal to, U+2286 ISOtech# n-ary summation, U+2211 ISOamsb# superset of, U+2283 ISOtech# superscript one = superscript digit one, U+00B9 ISOnum# superscript two = superscript digit two = squared, U+00B2 ISOnum# superscript three = superscript digit three = cubed, U+00B3 ISOnum# superset of or equal to, U+2287 ISOtech# latin small letter sharp s = ess-zed, U+00DF ISOlat1# greek small letter tau, U+03C4 ISOgrk3# therefore, U+2234 ISOtech# greek small letter theta, U+03B8 ISOgrk3# greek small letter theta symbol, U+03D1 NEW# thin space, U+2009 ISOpub# latin small letter thorn with, U+00FE ISOlat1# small tilde, U+02DC ISOdia# multiplication sign, U+00D7 ISOnum# trade mark sign, U+2122 ISOnum# upwards double arrow, U+21D1 ISOamsa# latin small letter u with acute, U+00FA ISOlat1# upwards arrow, U+2191 ISOnum# latin small letter u with circumflex, U+00FB ISOlat1# latin small letter u with grave, U+00F9 ISOlat1# diaeresis = spacing diaeresis, U+00A8 ISOdia# greek upsilon with hook symbol, U+03D2 NEW# greek small letter upsilon, U+03C5 ISOgrk3# latin small letter u with diaeresis, U+00FC ISOlat1# script capital P = power set = Weierstrass p, U+2118 ISOamso# greek small letter xi, U+03BE ISOgrk3# latin small letter y with acute, U+00FD ISOlat1# yen sign = yuan sign, U+00A5 ISOnum# latin small letter y with diaeresis, U+00FF ISOlat1# 
greek small letter zeta, U+03B6 ISOgrk3# zero width joiner, U+200D NEW RFC 2070# zero width non-joiner, U+200C NEW RFC 2070# maps the HTML5 named character references to the equivalent Unicode character(s)# maps the Unicode code point to the HTML entity name# maps the HTML entity name to the character# (or a character reference if the character is outside the Latin-1 range)b'HTML character entity references.'u'HTML character entity references.'b'html5'u'html5'b'name2codepoint'u'name2codepoint'b'codepoint2name'u'codepoint2name'b'entitydefs'u'entitydefs'b'AElig'u'AElig'b'Aacute'u'Aacute'b'Acirc'u'Acirc'b'Agrave'u'Agrave'b'Alpha'u'Alpha'b'Aring'u'Aring'b'Atilde'u'Atilde'b'Auml'u'Auml'b'Beta'u'Beta'b'Ccedil'u'Ccedil'b'Chi'u'Chi'b'Dagger'u'Dagger'b'Delta'u'Delta'b'ETH'u'ETH'b'Eacute'u'Eacute'b'Ecirc'u'Ecirc'b'Egrave'u'Egrave'b'Epsilon'u'Epsilon'b'Eta'u'Eta'b'Euml'u'Euml'b'Gamma'u'Gamma'b'Iacute'u'Iacute'b'Icirc'u'Icirc'b'Igrave'u'Igrave'b'Iota'u'Iota'b'Iuml'u'Iuml'b'Kappa'u'Kappa'b'Lambda'u'Lambda'b'Mu'u'Mu'b'Ntilde'u'Ntilde'b'Nu'u'Nu'b'OElig'u'OElig'b'Oacute'u'Oacute'b'Ocirc'u'Ocirc'b'Ograve'u'Ograve'b'Omega'u'Omega'b'Omicron'u'Omicron'b'Oslash'u'Oslash'b'Otilde'u'Otilde'b'Ouml'u'Ouml'b'Phi'u'Phi'b'Pi'u'Pi'b'Prime'u'Prime'b'Psi'u'Psi'b'Rho'u'Rho'b'Scaron'u'Scaron'b'Sigma'u'Sigma'b'THORN'u'THORN'b'Tau'u'Tau'b'Theta'u'Theta'b'Uacute'u'Uacute'b'Ucirc'u'Ucirc'b'Ugrave'u'Ugrave'b'Upsilon'u'Upsilon'b'Uuml'u'Uuml'b'Xi'u'Xi'b'Yacute'u'Yacute'b'Yuml'u'Yuml'b'Zeta'u'Zeta'b'aacute'u'aacute'b'acirc'u'acirc'b'acute'u'acute'b'aelig'u'aelig'b'agrave'u'agrave'b'alefsym'u'alefsym'b'amp'u'amp'b'and'u'and'b'ang'u'ang'b'aring'u'aring'b'asymp'u'asymp'b'atilde'u'atilde'b'auml'u'auml'b'bdquo'u'bdquo'b'brvbar'u'brvbar'b'bull'u'bull'b'cap'u'cap'b'ccedil'u'ccedil'b'cedil'u'cedil'b'cent'u'cent'b'chi'u'chi'b'circ'u'circ'b'clubs'u'clubs'b'cong'u'cong'b'crarr'u'crarr'b'cup'u'cup'b'curren'u'curren'b'dArr'u'dArr'b'dagger'u'dagger'b'darr'u'darr'b'deg'u'deg'b'diams'u'diams'b'divide'u'divide'b'eacute'u'eacute'b'ecirc'u'ecirc'b'egrave'u'egrave'b'empty'u'empty'b'emsp'u'emsp'b'ensp'u'ensp'b'epsilon'u'epsilon'b'equiv'u'equiv'b'eta'u'eta'b'eth'u'eth'b'euml'u'euml'b'euro'u'euro'b'exist'u'exist'b'fnof'u'fnof'b'forall'u'forall'b'frac12'u'frac12'b'frac14'u'frac14'b'frac34'u'frac34'b'frasl'u'frasl'b'gamma'u'gamma'b'ge'u'ge'b'gt'u'gt'b'hArr'u'hArr'b'harr'u'harr'b'hearts'u'hearts'b'hellip'u'hellip'b'iacute'u'iacute'b'icirc'u'icirc'b'iexcl'u'iexcl'b'igrave'u'igrave'b'infin'u'infin'b'iota'u'iota'b'iquest'u'iquest'b'isin'u'isin'b'iuml'u'iuml'b'kappa'u'kappa'b'lArr'u'lArr'b'lambda'u'lambda'b'lang'u'lang'b'laquo'u'laquo'b'larr'u'larr'b'lceil'u'lceil'b'ldquo'u'ldquo'b'le'u'le'b'lfloor'u'lfloor'b'lowast'u'lowast'b'loz'u'loz'b'lrm'u'lrm'b'lsaquo'u'lsaquo'b'lsquo'u'lsquo'b'lt'u'lt'b'macr'u'macr'b'mdash'u'mdash'b'micro'u'micro'b'middot'u'middot'b'minus'u'minus'b'mu'u'mu'b'nabla'u'nabla'b'nbsp'u'nbsp'b'ndash'u'ndash'b'ni'u'ni'b'notin'u'notin'b'nsub'u'nsub'b'ntilde'u'ntilde'b'nu'u'nu'b'oacute'u'oacute'b'ocirc'u'ocirc'b'oelig'u'oelig'b'ograve'u'ograve'b'oline'u'oline'b'omega'u'omega'b'omicron'u'omicron'b'oplus'u'oplus'b'or'u'or'b'ordf'u'ordf'b'ordm'u'ordm'b'oslash'u'oslash'b'otilde'u'otilde'b'otimes'u'otimes'b'ouml'u'ouml'b'para'u'para'b'part'u'part'b'permil'u'permil'b'perp'u'perp'b'phi'u'phi'b'piv'u'piv'b'plusmn'u'plusmn'b'pound'u'pound'b'prime'u'prime'b'prod'u'prod'b'prop'u'prop'b'psi'u'psi'b'quot'u'quot'b'rArr'u'rArr'b'radic'u'radic'b'rang'u'rang'b'raquo'u'raquo'b'rarr'u'rarr'b'rceil'u'rceil'b'rdquo'u'rdquo'b'real'u'real'b'reg'u'reg'b'rfloor'u'rfloor
'b'rho'u'rho'b'rlm'u'rlm'b'rsaquo'u'rsaquo'b'rsquo'u'rsquo'b'sbquo'u'sbquo'b'scaron'u'scaron'b'sdot'u'sdot'b'sect'u'sect'b'shy'u'shy'b'sigma'u'sigma'b'sigmaf'u'sigmaf'b'sim'u'sim'b'spades'u'spades'b'sub'u'sub'b'sube'u'sube'b'sum'u'sum'b'sup'u'sup'b'sup1'u'sup1'b'sup2'u'sup2'b'sup3'u'sup3'b'supe'u'supe'b'szlig'u'szlig'b'tau'u'tau'b'there4'u'there4'b'theta'u'theta'b'thetasym'u'thetasym'b'thinsp'u'thinsp'b'thorn'u'thorn'b'tilde'u'tilde'b'times'u'times'b'trade'u'trade'b'uArr'u'uArr'b'uacute'u'uacute'b'uarr'u'uarr'b'ucirc'u'ucirc'b'ugrave'u'ugrave'b'uml'u'uml'b'upsih'u'upsih'b'upsilon'u'upsilon'b'uuml'u'uuml'b'weierp'u'weierp'b'xi'u'xi'b'yacute'u'yacute'b'yen'u'yen'b'yuml'u'yuml'b'zeta'u'zeta'b'zwj'u'zwj'b'zwnj'u'zwnj'b'Á'u'Á'b'á'u'á'b'Aacute;'u'Aacute;'b'aacute;'u'aacute;'u'Ă'b'Abreve;'u'Abreve;'u'ă'b'abreve;'u'abreve;'u'∾'b'ac;'u'ac;'u'∿'b'acd;'u'acd;'u'∾̳'b'acE;'u'acE;'b'Â'u'Â'b'â'u'â'b'Acirc;'u'Acirc;'b'acirc;'u'acirc;'b'´'u'´'b'acute;'u'acute;'u'А'b'Acy;'u'Acy;'u'а'b'acy;'u'acy;'b'Æ'u'Æ'b'AElig;'u'AElig;'b'aelig;'u'aelig;'u'⁡'b'af;'u'af;'b'Afr;'u'Afr;'b'afr;'u'afr;'b'À'u'À'b'à'u'à'b'Agrave;'u'Agrave;'b'agrave;'u'agrave;'u'ℵ'b'alefsym;'u'alefsym;'b'aleph;'u'aleph;'u'Α'b'Alpha;'u'Alpha;'u'α'b'alpha;'u'alpha;'u'Ā'b'Amacr;'u'Amacr;'u'ā'b'amacr;'u'amacr;'u'⨿'b'amalg;'u'amalg;'b'AMP'u'AMP'b'AMP;'u'AMP;'b'amp;'u'amp;'u'⩓'b'And;'u'And;'u'∧'b'and;'u'and;'u'⩕'b'andand;'u'andand;'u'⩜'b'andd;'u'andd;'u'⩘'b'andslope;'u'andslope;'u'⩚'b'andv;'u'andv;'u'∠'b'ang;'u'ang;'u'⦤'b'ange;'u'ange;'b'angle;'u'angle;'u'∡'b'angmsd;'u'angmsd;'u'⦨'b'angmsdaa;'u'angmsdaa;'u'⦩'b'angmsdab;'u'angmsdab;'u'⦪'b'angmsdac;'u'angmsdac;'u'⦫'b'angmsdad;'u'angmsdad;'u'⦬'b'angmsdae;'u'angmsdae;'u'⦭'b'angmsdaf;'u'angmsdaf;'u'⦮'b'angmsdag;'u'angmsdag;'u'⦯'b'angmsdah;'u'angmsdah;'u'∟'b'angrt;'u'angrt;'u'⊾'b'angrtvb;'u'angrtvb;'u'⦝'b'angrtvbd;'u'angrtvbd;'u'∢'b'angsph;'u'angsph;'b'Å'u'Å'b'angst;'u'angst;'u'⍼'b'angzarr;'u'angzarr;'u'Ą'b'Aogon;'u'Aogon;'u'ą'b'aogon;'u'aogon;'b'Aopf;'u'Aopf;'b'aopf;'u'aopf;'u'≈'b'ap;'u'ap;'u'⩯'b'apacir;'u'apacir;'u'⩰'b'apE;'u'apE;'u'≊'b'ape;'u'ape;'u'≋'b'apid;'u'apid;'b'apos;'u'apos;'b'ApplyFunction;'u'ApplyFunction;'b'approx;'u'approx;'b'approxeq;'u'approxeq;'b'å'u'å'b'Aring;'u'Aring;'b'aring;'u'aring;'b'Ascr;'u'Ascr;'b'ascr;'u'ascr;'u'≔'b'Assign;'u'Assign;'b'ast;'u'ast;'b'asymp;'u'asymp;'u'≍'b'asympeq;'u'asympeq;'b'Ã'u'Ã'b'ã'u'ã'b'Atilde;'u'Atilde;'b'atilde;'u'atilde;'b'Ä'u'Ä'b'ä'u'ä'b'Auml;'u'Auml;'b'auml;'u'auml;'u'∳'b'awconint;'u'awconint;'u'⨑'b'awint;'u'awint;'u'≌'b'backcong;'u'backcong;'u'϶'b'backepsilon;'u'backepsilon;'u'‵'b'backprime;'u'backprime;'u'∽'b'backsim;'u'backsim;'u'⋍'b'backsimeq;'u'backsimeq;'u'∖'b'Backslash;'u'Backslash;'u'⫧'b'Barv;'u'Barv;'u'⊽'b'barvee;'u'barvee;'u'⌆'b'Barwed;'u'Barwed;'u'⌅'b'barwed;'u'barwed;'b'barwedge;'u'barwedge;'u'⎵'b'bbrk;'u'bbrk;'u'⎶'b'bbrktbrk;'u'bbrktbrk;'b'bcong;'u'bcong;'u'Б'b'Bcy;'u'Bcy;'u'б'b'bcy;'u'bcy;'b'bdquo;'u'bdquo;'u'∵'b'becaus;'u'becaus;'b'Because;'u'Because;'b'because;'u'because;'u'⦰'b'bemptyv;'u'bemptyv;'b'bepsi;'u'bepsi;'u'ℬ'b'bernou;'u'bernou;'b'Bernoullis;'u'Bernoullis;'u'Β'b'Beta;'u'Beta;'u'β'b'beta;'u'beta;'u'ℶ'b'beth;'u'beth;'u'≬'b'between;'u'between;'b'Bfr;'u'Bfr;'b'bfr;'u'bfr;'u'⋂'b'bigcap;'u'bigcap;'u'◯'b'bigcirc;'u'bigcirc;'u'⋃'b'bigcup;'u'bigcup;'u'⨀'b'bigodot;'u'bigodot;'u'⨁'b'bigoplus;'u'bigoplus;'u'⨂'b'bigotimes;'u'bigotimes;'u'⨆'b'bigsqcup;'u'bigsqcup;'u'★'b'bigstar;'u'bigstar;'u'▽'b'bigtriangledown;'u'bigtriangledown;'u'△'b'bigtriangleup;'u'bigtriangleup;'u'⨄'b'biguplus;'u'biguplus;'u'⋁'b'bigvee;'u'bigvee;'u'⋀'b'bigwedge;'u'bigwedg
e;'u'⤍'b'bkarow;'u'bkarow;'u'⧫'b'blacklozenge;'u'blacklozenge;'u'▪'b'blacksquare;'u'blacksquare;'u'▴'b'blacktriangle;'u'blacktriangle;'u'▾'b'blacktriangledown;'u'blacktriangledown;'u'◂'b'blacktriangleleft;'u'blacktriangleleft;'u'▸'b'blacktriangleright;'u'blacktriangleright;'u'␣'b'blank;'u'blank;'u'▒'b'blk12;'u'blk12;'u'░'b'blk14;'u'blk14;'u'▓'b'blk34;'u'blk34;'u'█'b'block;'u'block;'u'=⃥'b'bne;'u'bne;'u'≡⃥'b'bnequiv;'u'bnequiv;'u'⫭'b'bNot;'u'bNot;'u'⌐'b'bnot;'u'bnot;'b'Bopf;'u'Bopf;'b'bopf;'u'bopf;'u'⊥'b'bot;'u'bot;'b'bottom;'u'bottom;'u'⋈'b'bowtie;'u'bowtie;'u'⧉'b'boxbox;'u'boxbox;'u'╗'b'boxDL;'u'boxDL;'u'╖'b'boxDl;'u'boxDl;'u'╕'b'boxdL;'u'boxdL;'u'┐'b'boxdl;'u'boxdl;'u'╔'b'boxDR;'u'boxDR;'u'╓'b'boxDr;'u'boxDr;'u'╒'b'boxdR;'u'boxdR;'u'┌'b'boxdr;'u'boxdr;'u'═'b'boxH;'u'boxH;'u'─'b'boxh;'u'boxh;'u'╦'b'boxHD;'u'boxHD;'u'╤'b'boxHd;'u'boxHd;'u'╥'b'boxhD;'u'boxhD;'u'┬'b'boxhd;'u'boxhd;'u'╩'b'boxHU;'u'boxHU;'u'╧'b'boxHu;'u'boxHu;'u'╨'b'boxhU;'u'boxhU;'u'┴'b'boxhu;'u'boxhu;'u'⊟'b'boxminus;'u'boxminus;'u'⊞'b'boxplus;'u'boxplus;'u'⊠'b'boxtimes;'u'boxtimes;'u'╝'b'boxUL;'u'boxUL;'u'╜'b'boxUl;'u'boxUl;'u'╛'b'boxuL;'u'boxuL;'u'┘'b'boxul;'u'boxul;'u'╚'b'boxUR;'u'boxUR;'u'╙'b'boxUr;'u'boxUr;'u'╘'b'boxuR;'u'boxuR;'u'└'b'boxur;'u'boxur;'u'║'b'boxV;'u'boxV;'u'│'b'boxv;'u'boxv;'u'╬'b'boxVH;'u'boxVH;'u'╫'b'boxVh;'u'boxVh;'u'╪'b'boxvH;'u'boxvH;'u'┼'b'boxvh;'u'boxvh;'u'╣'b'boxVL;'u'boxVL;'u'╢'b'boxVl;'u'boxVl;'u'╡'b'boxvL;'u'boxvL;'u'┤'b'boxvl;'u'boxvl;'u'╠'b'boxVR;'u'boxVR;'u'╟'b'boxVr;'u'boxVr;'u'╞'b'boxvR;'u'boxvR;'u'├'b'boxvr;'u'boxvr;'b'bprime;'u'bprime;'u'˘'b'Breve;'u'Breve;'b'breve;'u'breve;'b'¦'u'¦'b'brvbar;'u'brvbar;'b'Bscr;'u'Bscr;'b'bscr;'u'bscr;'u'⁏'b'bsemi;'u'bsemi;'b'bsim;'u'bsim;'b'bsime;'u'bsime;'b'bsol;'u'bsol;'u'⧅'b'bsolb;'u'bsolb;'u'⟈'b'bsolhsub;'u'bsolhsub;'b'bull;'u'bull;'b'bullet;'u'bullet;'u'≎'b'bump;'u'bump;'u'⪮'b'bumpE;'u'bumpE;'u'≏'b'bumpe;'u'bumpe;'b'Bumpeq;'u'Bumpeq;'b'bumpeq;'u'bumpeq;'u'Ć'b'Cacute;'u'Cacute;'u'ć'b'cacute;'u'cacute;'u'⋒'b'Cap;'u'Cap;'u'∩'b'cap;'u'cap;'u'⩄'b'capand;'u'capand;'u'⩉'b'capbrcup;'u'capbrcup;'u'⩋'b'capcap;'u'capcap;'u'⩇'b'capcup;'u'capcup;'u'⩀'b'capdot;'u'capdot;'u'ⅅ'b'CapitalDifferentialD;'u'CapitalDifferentialD;'u'∩︀'b'caps;'u'caps;'u'⁁'b'caret;'u'caret;'u'ˇ'b'caron;'u'caron;'u'ℭ'b'Cayleys;'u'Cayleys;'u'⩍'b'ccaps;'u'ccaps;'u'Č'b'Ccaron;'u'Ccaron;'u'č'b'ccaron;'u'ccaron;'b'Ç'u'Ç'b'ç'u'ç'b'Ccedil;'u'Ccedil;'b'ccedil;'u'ccedil;'u'Ĉ'b'Ccirc;'u'Ccirc;'u'ĉ'b'ccirc;'u'ccirc;'u'∰'b'Cconint;'u'Cconint;'u'⩌'b'ccups;'u'ccups;'u'⩐'b'ccupssm;'u'ccupssm;'u'Ċ'b'Cdot;'u'Cdot;'u'ċ'b'cdot;'u'cdot;'b'¸'u'¸'b'cedil;'u'cedil;'b'Cedilla;'u'Cedilla;'u'⦲'b'cemptyv;'u'cemptyv;'b'¢'u'¢'b'cent;'u'cent;'b'·'u'·'b'CenterDot;'u'CenterDot;'b'centerdot;'u'centerdot;'b'Cfr;'u'Cfr;'b'cfr;'u'cfr;'u'Ч'b'CHcy;'u'CHcy;'u'ч'b'chcy;'u'chcy;'u'✓'b'check;'u'check;'b'checkmark;'u'checkmark;'u'Χ'b'Chi;'u'Chi;'u'χ'b'chi;'u'chi;'u'○'b'cir;'u'cir;'b'circ;'u'circ;'u'≗'b'circeq;'u'circeq;'u'↺'b'circlearrowleft;'u'circlearrowleft;'u'↻'b'circlearrowright;'u'circlearrowright;'u'⊛'b'circledast;'u'circledast;'u'⊚'b'circledcirc;'u'circledcirc;'u'⊝'b'circleddash;'u'circleddash;'u'⊙'b'CircleDot;'u'CircleDot;'b'®'u'®'b'circledR;'u'circledR;'u'Ⓢ'b'circledS;'u'circledS;'u'⊖'b'CircleMinus;'u'CircleMinus;'u'⊕'b'CirclePlus;'u'CirclePlus;'u'⊗'b'CircleTimes;'u'CircleTimes;'u'⧃'b'cirE;'u'cirE;'b'cire;'u'cire;'u'⨐'b'cirfnint;'u'cirfnint;'u'⫯'b'cirmid;'u'cirmid;'u'⧂'b'cirscir;'u'cirscir;'u'∲'b'ClockwiseContourIntegral;'u'ClockwiseContourIntegral;'b'CloseCurlyDoubleQuote;'u'CloseCurlyDoubleQuote;'b'CloseCurlyQuote;'u'CloseC
urlyQuote;'u'♣'b'clubs;'u'clubs;'b'clubsuit;'u'clubsuit;'u'∷'b'Colon;'u'Colon;'b'colon;'u'colon;'u'⩴'b'Colone;'u'Colone;'b'colone;'u'colone;'b'coloneq;'u'coloneq;'b'comma;'u'comma;'b'commat;'u'commat;'u'∁'b'comp;'u'comp;'u'∘'b'compfn;'u'compfn;'b'complement;'u'complement;'u'ℂ'b'complexes;'u'complexes;'u'≅'b'cong;'u'cong;'u'⩭'b'congdot;'u'congdot;'u'≡'b'Congruent;'u'Congruent;'u'∯'b'Conint;'u'Conint;'u'∮'b'conint;'u'conint;'b'ContourIntegral;'u'ContourIntegral;'b'Copf;'u'Copf;'b'copf;'u'copf;'u'∐'b'coprod;'u'coprod;'b'Coproduct;'u'Coproduct;'b'©'u'©'b'COPY'u'COPY'b'COPY;'u'COPY;'b'copy;'u'copy;'u'℗'b'copysr;'u'copysr;'b'CounterClockwiseContourIntegral;'u'CounterClockwiseContourIntegral;'u'↵'b'crarr;'u'crarr;'u'⨯'b'Cross;'u'Cross;'u'✗'b'cross;'u'cross;'b'Cscr;'u'Cscr;'b'cscr;'u'cscr;'u'⫏'b'csub;'u'csub;'u'⫑'b'csube;'u'csube;'u'⫐'b'csup;'u'csup;'u'⫒'b'csupe;'u'csupe;'u'⋯'b'ctdot;'u'ctdot;'u'⤸'b'cudarrl;'u'cudarrl;'u'⤵'b'cudarrr;'u'cudarrr;'u'⋞'b'cuepr;'u'cuepr;'u'⋟'b'cuesc;'u'cuesc;'u'↶'b'cularr;'u'cularr;'u'⤽'b'cularrp;'u'cularrp;'u'⋓'b'Cup;'u'Cup;'u'∪'b'cup;'u'cup;'u'⩈'b'cupbrcap;'u'cupbrcap;'b'CupCap;'u'CupCap;'u'⩆'b'cupcap;'u'cupcap;'u'⩊'b'cupcup;'u'cupcup;'u'⊍'b'cupdot;'u'cupdot;'u'⩅'b'cupor;'u'cupor;'u'∪︀'b'cups;'u'cups;'u'↷'b'curarr;'u'curarr;'u'⤼'b'curarrm;'u'curarrm;'b'curlyeqprec;'u'curlyeqprec;'b'curlyeqsucc;'u'curlyeqsucc;'u'⋎'b'curlyvee;'u'curlyvee;'u'⋏'b'curlywedge;'u'curlywedge;'b'¤'u'¤'b'curren;'u'curren;'b'curvearrowleft;'u'curvearrowleft;'b'curvearrowright;'u'curvearrowright;'b'cuvee;'u'cuvee;'b'cuwed;'u'cuwed;'b'cwconint;'u'cwconint;'u'∱'b'cwint;'u'cwint;'u'⌭'b'cylcty;'u'cylcty;'b'Dagger;'u'Dagger;'b'dagger;'u'dagger;'u'ℸ'b'daleth;'u'daleth;'u'↡'b'Darr;'u'Darr;'u'⇓'b'dArr;'u'dArr;'u'↓'b'darr;'u'darr;'u'‐'b'dash;'u'dash;'u'⫤'b'Dashv;'u'Dashv;'u'⊣'b'dashv;'u'dashv;'u'⤏'b'dbkarow;'u'dbkarow;'u'˝'b'dblac;'u'dblac;'u'Ď'b'Dcaron;'u'Dcaron;'u'ď'b'dcaron;'u'dcaron;'u'Д'b'Dcy;'u'Dcy;'u'д'b'dcy;'u'dcy;'b'DD;'u'DD;'u'ⅆ'b'dd;'u'dd;'b'ddagger;'u'ddagger;'u'⇊'b'ddarr;'u'ddarr;'u'⤑'b'DDotrahd;'u'DDotrahd;'u'⩷'b'ddotseq;'u'ddotseq;'b'°'u'°'b'deg;'u'deg;'u'∇'b'Del;'u'Del;'u'Δ'b'Delta;'u'Delta;'u'δ'b'delta;'u'delta;'u'⦱'b'demptyv;'u'demptyv;'u'⥿'b'dfisht;'u'dfisht;'b'Dfr;'u'Dfr;'b'dfr;'u'dfr;'u'⥥'b'dHar;'u'dHar;'u'⇃'b'dharl;'u'dharl;'u'⇂'b'dharr;'u'dharr;'b'DiacriticalAcute;'u'DiacriticalAcute;'u'˙'b'DiacriticalDot;'u'DiacriticalDot;'b'DiacriticalDoubleAcute;'u'DiacriticalDoubleAcute;'b'`'u'`'b'DiacriticalGrave;'u'DiacriticalGrave;'b'DiacriticalTilde;'u'DiacriticalTilde;'u'⋄'b'diam;'u'diam;'b'Diamond;'u'Diamond;'b'diamond;'u'diamond;'u'♦'b'diamondsuit;'u'diamondsuit;'b'diams;'u'diams;'b'¨'u'¨'b'die;'u'die;'b'DifferentialD;'u'DifferentialD;'u'ϝ'b'digamma;'u'digamma;'u'⋲'b'disin;'u'disin;'b'÷'u'÷'b'div;'u'div;'b'divide;'u'divide;'u'⋇'b'divideontimes;'u'divideontimes;'b'divonx;'u'divonx;'u'Ђ'b'DJcy;'u'DJcy;'u'ђ'b'djcy;'u'djcy;'u'⌞'b'dlcorn;'u'dlcorn;'u'⌍'b'dlcrop;'u'dlcrop;'b'dollar;'u'dollar;'b'Dopf;'u'Dopf;'b'dopf;'u'dopf;'b'Dot;'u'Dot;'b'dot;'u'dot;'u'⃜'b'DotDot;'u'DotDot;'u'≐'b'doteq;'u'doteq;'u'≑'b'doteqdot;'u'doteqdot;'b'DotEqual;'u'DotEqual;'u'∸'b'dotminus;'u'dotminus;'u'∔'b'dotplus;'u'dotplus;'u'⊡'b'dotsquare;'u'dotsquare;'b'doublebarwedge;'u'doublebarwedge;'b'DoubleContourIntegral;'u'DoubleContourIntegral;'b'DoubleDot;'u'DoubleDot;'b'DoubleDownArrow;'u'DoubleDownArrow;'u'⇐'b'DoubleLeftArrow;'u'DoubleLeftArrow;'u'⇔'b'DoubleLeftRightArrow;'u'DoubleLeftRightArrow;'b'DoubleLeftTee;'u'DoubleLeftTee;'u'⟸'b'DoubleLongLeftArrow;'u'DoubleLongLeftArrow;'u'⟺'b'DoubleLongLeftRightArrow;'u'DoubleLon
gLeftRightArrow;'u'⟹'b'DoubleLongRightArrow;'u'DoubleLongRightArrow;'u'⇒'b'DoubleRightArrow;'u'DoubleRightArrow;'u'⊨'b'DoubleRightTee;'u'DoubleRightTee;'u'⇑'b'DoubleUpArrow;'u'DoubleUpArrow;'u'⇕'b'DoubleUpDownArrow;'u'DoubleUpDownArrow;'u'∥'b'DoubleVerticalBar;'u'DoubleVerticalBar;'b'DownArrow;'u'DownArrow;'b'Downarrow;'u'Downarrow;'b'downarrow;'u'downarrow;'u'⤓'b'DownArrowBar;'u'DownArrowBar;'u'⇵'b'DownArrowUpArrow;'u'DownArrowUpArrow;'u'̑'b'DownBreve;'u'DownBreve;'b'downdownarrows;'u'downdownarrows;'b'downharpoonleft;'u'downharpoonleft;'b'downharpoonright;'u'downharpoonright;'u'⥐'b'DownLeftRightVector;'u'DownLeftRightVector;'u'⥞'b'DownLeftTeeVector;'u'DownLeftTeeVector;'u'↽'b'DownLeftVector;'u'DownLeftVector;'u'⥖'b'DownLeftVectorBar;'u'DownLeftVectorBar;'u'⥟'b'DownRightTeeVector;'u'DownRightTeeVector;'u'⇁'b'DownRightVector;'u'DownRightVector;'u'⥗'b'DownRightVectorBar;'u'DownRightVectorBar;'u'⊤'b'DownTee;'u'DownTee;'u'↧'b'DownTeeArrow;'u'DownTeeArrow;'u'⤐'b'drbkarow;'u'drbkarow;'u'⌟'b'drcorn;'u'drcorn;'u'⌌'b'drcrop;'u'drcrop;'b'Dscr;'u'Dscr;'b'dscr;'u'dscr;'u'Ѕ'b'DScy;'u'DScy;'u'ѕ'b'dscy;'u'dscy;'u'⧶'b'dsol;'u'dsol;'u'Đ'b'Dstrok;'u'Dstrok;'u'đ'b'dstrok;'u'dstrok;'u'⋱'b'dtdot;'u'dtdot;'u'▿'b'dtri;'u'dtri;'b'dtrif;'u'dtrif;'b'duarr;'u'duarr;'u'⥯'b'duhar;'u'duhar;'u'⦦'b'dwangle;'u'dwangle;'u'Џ'b'DZcy;'u'DZcy;'u'џ'b'dzcy;'u'dzcy;'u'⟿'b'dzigrarr;'u'dzigrarr;'b'É'u'É'b'é'u'é'b'Eacute;'u'Eacute;'b'eacute;'u'eacute;'u'⩮'b'easter;'u'easter;'u'Ě'b'Ecaron;'u'Ecaron;'u'ě'b'ecaron;'u'ecaron;'u'≖'b'ecir;'u'ecir;'b'Ê'u'Ê'b'ê'u'ê'b'Ecirc;'u'Ecirc;'b'ecirc;'u'ecirc;'u'≕'b'ecolon;'u'ecolon;'u'Э'b'Ecy;'u'Ecy;'u'э'b'ecy;'u'ecy;'b'eDDot;'u'eDDot;'u'Ė'b'Edot;'u'Edot;'b'eDot;'u'eDot;'u'ė'b'edot;'u'edot;'u'ⅇ'b'ee;'u'ee;'u'≒'b'efDot;'u'efDot;'b'Efr;'u'Efr;'b'efr;'u'efr;'u'⪚'b'eg;'u'eg;'b'È'u'È'b'è'u'è'b'Egrave;'u'Egrave;'b'egrave;'u'egrave;'u'⪖'b'egs;'u'egs;'u'⪘'b'egsdot;'u'egsdot;'u'⪙'b'el;'u'el;'u'∈'b'Element;'u'Element;'u'⏧'b'elinters;'u'elinters;'u'ℓ'b'ell;'u'ell;'u'⪕'b'els;'u'els;'u'⪗'b'elsdot;'u'elsdot;'u'Ē'b'Emacr;'u'Emacr;'u'ē'b'emacr;'u'emacr;'u'∅'b'empty;'u'empty;'b'emptyset;'u'emptyset;'u'◻'b'EmptySmallSquare;'u'EmptySmallSquare;'b'emptyv;'u'emptyv;'u'▫'b'EmptyVerySmallSquare;'u'EmptyVerySmallSquare;'u' 'b'emsp13;'u'emsp13;'u' 'b'emsp14;'u'emsp14;'u' 'b'emsp;'u'emsp;'u'Ŋ'b'ENG;'u'ENG;'u'ŋ'b'eng;'u'eng;'u' 
'b'ensp;'u'ensp;'u'Ę'b'Eogon;'u'Eogon;'u'ę'b'eogon;'u'eogon;'b'Eopf;'u'Eopf;'b'eopf;'u'eopf;'u'⋕'b'epar;'u'epar;'u'⧣'b'eparsl;'u'eparsl;'u'⩱'b'eplus;'u'eplus;'u'ε'b'epsi;'u'epsi;'u'Ε'b'Epsilon;'u'Epsilon;'b'epsilon;'u'epsilon;'u'ϵ'b'epsiv;'u'epsiv;'b'eqcirc;'u'eqcirc;'b'eqcolon;'u'eqcolon;'u'≂'b'eqsim;'u'eqsim;'b'eqslantgtr;'u'eqslantgtr;'b'eqslantless;'u'eqslantless;'u'⩵'b'Equal;'u'Equal;'b'equals;'u'equals;'b'EqualTilde;'u'EqualTilde;'u'≟'b'equest;'u'equest;'u'⇌'b'Equilibrium;'u'Equilibrium;'b'equiv;'u'equiv;'u'⩸'b'equivDD;'u'equivDD;'u'⧥'b'eqvparsl;'u'eqvparsl;'u'⥱'b'erarr;'u'erarr;'u'≓'b'erDot;'u'erDot;'u'ℰ'b'Escr;'u'Escr;'u'ℯ'b'escr;'u'escr;'b'esdot;'u'esdot;'u'⩳'b'Esim;'u'Esim;'b'esim;'u'esim;'u'Η'b'Eta;'u'Eta;'u'η'b'eta;'u'eta;'b'Ð'u'Ð'b'ð'u'ð'b'ETH;'u'ETH;'b'eth;'u'eth;'b'Ë'u'Ë'b'ë'u'ë'b'Euml;'u'Euml;'b'euml;'u'euml;'b'euro;'u'euro;'b'excl;'u'excl;'u'∃'b'exist;'u'exist;'b'Exists;'u'Exists;'b'expectation;'u'expectation;'b'ExponentialE;'u'ExponentialE;'b'exponentiale;'u'exponentiale;'b'fallingdotseq;'u'fallingdotseq;'u'Ф'b'Fcy;'u'Fcy;'u'ф'b'fcy;'u'fcy;'u'♀'b'female;'u'female;'u'ffi'b'ffilig;'u'ffilig;'u'ff'b'fflig;'u'fflig;'u'ffl'b'ffllig;'u'ffllig;'b'Ffr;'u'Ffr;'b'ffr;'u'ffr;'u'fi'b'filig;'u'filig;'u'◼'b'FilledSmallSquare;'u'FilledSmallSquare;'b'FilledVerySmallSquare;'u'FilledVerySmallSquare;'b'fj'u'fj'b'fjlig;'u'fjlig;'u'♭'b'flat;'u'flat;'u'fl'b'fllig;'u'fllig;'u'▱'b'fltns;'u'fltns;'b'fnof;'u'fnof;'b'Fopf;'u'Fopf;'b'fopf;'u'fopf;'u'∀'b'ForAll;'u'ForAll;'b'forall;'u'forall;'u'⋔'b'fork;'u'fork;'u'⫙'b'forkv;'u'forkv;'u'ℱ'b'Fouriertrf;'u'Fouriertrf;'u'⨍'b'fpartint;'u'fpartint;'b'½'u'½'b'frac12;'u'frac12;'u'⅓'b'frac13;'u'frac13;'b'¼'u'¼'b'frac14;'u'frac14;'u'⅕'b'frac15;'u'frac15;'u'⅙'b'frac16;'u'frac16;'u'⅛'b'frac18;'u'frac18;'u'⅔'b'frac23;'u'frac23;'u'⅖'b'frac25;'u'frac25;'b'¾'u'¾'b'frac34;'u'frac34;'u'⅗'b'frac35;'u'frac35;'u'⅜'b'frac38;'u'frac38;'u'⅘'b'frac45;'u'frac45;'u'⅚'b'frac56;'u'frac56;'u'⅝'b'frac58;'u'frac58;'u'⅞'b'frac78;'u'frac78;'u'⁄'b'frasl;'u'frasl;'u'⌢'b'frown;'u'frown;'b'Fscr;'u'Fscr;'b'fscr;'u'fscr;'u'ǵ'b'gacute;'u'gacute;'u'Γ'b'Gamma;'u'Gamma;'u'γ'b'gamma;'u'gamma;'u'Ϝ'b'Gammad;'u'Gammad;'b'gammad;'u'gammad;'u'⪆'b'gap;'u'gap;'u'Ğ'b'Gbreve;'u'Gbreve;'u'ğ'b'gbreve;'u'gbreve;'u'Ģ'b'Gcedil;'u'Gcedil;'u'Ĝ'b'Gcirc;'u'Gcirc;'u'ĝ'b'gcirc;'u'gcirc;'u'Г'b'Gcy;'u'Gcy;'u'г'b'gcy;'u'gcy;'u'Ġ'b'Gdot;'u'Gdot;'u'ġ'b'gdot;'u'gdot;'u'≧'b'gE;'u'gE;'u'≥'b'ge;'u'ge;'u'⪌'b'gEl;'u'gEl;'u'⋛'b'gel;'u'gel;'b'geq;'u'geq;'b'geqq;'u'geqq;'u'⩾'b'geqslant;'u'geqslant;'b'ges;'u'ges;'u'⪩'b'gescc;'u'gescc;'u'⪀'b'gesdot;'u'gesdot;'u'⪂'b'gesdoto;'u'gesdoto;'u'⪄'b'gesdotol;'u'gesdotol;'u'⋛︀'b'gesl;'u'gesl;'u'⪔'b'gesles;'u'gesles;'b'Gfr;'u'Gfr;'b'gfr;'u'gfr;'u'⋙'b'Gg;'u'Gg;'u'≫'b'gg;'u'gg;'b'ggg;'u'ggg;'u'ℷ'b'gimel;'u'gimel;'u'Ѓ'b'GJcy;'u'GJcy;'u'ѓ'b'gjcy;'u'gjcy;'u'≷'b'gl;'u'gl;'u'⪥'b'gla;'u'gla;'u'⪒'b'glE;'u'glE;'u'⪤'b'glj;'u'glj;'u'⪊'b'gnap;'u'gnap;'b'gnapprox;'u'gnapprox;'u'≩'b'gnE;'u'gnE;'u'⪈'b'gne;'u'gne;'b'gneq;'u'gneq;'b'gneqq;'u'gneqq;'u'⋧'b'gnsim;'u'gnsim;'b'Gopf;'u'Gopf;'b'gopf;'u'gopf;'b'grave;'u'grave;'b'GreaterEqual;'u'GreaterEqual;'b'GreaterEqualLess;'u'GreaterEqualLess;'b'GreaterFullEqual;'u'GreaterFullEqual;'u'⪢'b'GreaterGreater;'u'GreaterGreater;'b'GreaterLess;'u'GreaterLess;'b'GreaterSlantEqual;'u'GreaterSlantEqual;'u'≳'b'GreaterTilde;'u'GreaterTilde;'b'Gscr;'u'Gscr;'u'ℊ'b'gscr;'u'gscr;'b'gsim;'u'gsim;'u'⪎'b'gsime;'u'gsime;'u'⪐'b'gsiml;'u'gsiml;'b'GT'u'GT'b'GT;'u'GT;'b'Gt;'u'Gt;'b'gt;'u'gt;'u'⪧'b'gtcc;'u'gtcc;'u'⩺'b'gtcir;'u'gtcir;'u'⋗'b'gtdot;'u'gtdot;'u'⦕'b'gtlPar;'u'gtlPar;'u'⩼'b'gtqu
est;'u'gtquest;'b'gtrapprox;'u'gtrapprox;'u'⥸'b'gtrarr;'u'gtrarr;'b'gtrdot;'u'gtrdot;'b'gtreqless;'u'gtreqless;'b'gtreqqless;'u'gtreqqless;'b'gtrless;'u'gtrless;'b'gtrsim;'u'gtrsim;'u'≩︀'b'gvertneqq;'u'gvertneqq;'b'gvnE;'u'gvnE;'b'Hacek;'u'Hacek;'u' 'b'hairsp;'u'hairsp;'b'half;'u'half;'u'ℋ'b'hamilt;'u'hamilt;'u'Ъ'b'HARDcy;'u'HARDcy;'u'ъ'b'hardcy;'u'hardcy;'b'hArr;'u'hArr;'u'↔'b'harr;'u'harr;'u'⥈'b'harrcir;'u'harrcir;'u'↭'b'harrw;'u'harrw;'b'Hat;'u'Hat;'u'ℏ'b'hbar;'u'hbar;'u'Ĥ'b'Hcirc;'u'Hcirc;'u'ĥ'b'hcirc;'u'hcirc;'u'♥'b'hearts;'u'hearts;'b'heartsuit;'u'heartsuit;'b'hellip;'u'hellip;'u'⊹'b'hercon;'u'hercon;'u'ℌ'b'Hfr;'u'Hfr;'b'hfr;'u'hfr;'b'HilbertSpace;'u'HilbertSpace;'u'⤥'b'hksearow;'u'hksearow;'u'⤦'b'hkswarow;'u'hkswarow;'u'⇿'b'hoarr;'u'hoarr;'u'∻'b'homtht;'u'homtht;'u'↩'b'hookleftarrow;'u'hookleftarrow;'u'↪'b'hookrightarrow;'u'hookrightarrow;'u'ℍ'b'Hopf;'u'Hopf;'b'hopf;'u'hopf;'u'―'b'horbar;'u'horbar;'b'HorizontalLine;'u'HorizontalLine;'b'Hscr;'u'Hscr;'b'hscr;'u'hscr;'b'hslash;'u'hslash;'u'Ħ'b'Hstrok;'u'Hstrok;'u'ħ'b'hstrok;'u'hstrok;'b'HumpDownHump;'u'HumpDownHump;'b'HumpEqual;'u'HumpEqual;'u'⁃'b'hybull;'u'hybull;'b'hyphen;'u'hyphen;'b'Í'u'Í'b'í'u'í'b'Iacute;'u'Iacute;'b'iacute;'u'iacute;'u'⁣'b'ic;'u'ic;'b'Î'u'Î'b'î'u'î'b'Icirc;'u'Icirc;'b'icirc;'u'icirc;'u'И'b'Icy;'u'Icy;'u'и'b'icy;'u'icy;'b'Idot;'u'Idot;'u'Е'b'IEcy;'u'IEcy;'u'е'b'iecy;'u'iecy;'b'¡'u'¡'b'iexcl;'u'iexcl;'b'iff;'u'iff;'u'ℑ'b'Ifr;'u'Ifr;'b'ifr;'u'ifr;'b'Ì'u'Ì'b'ì'u'ì'b'Igrave;'u'Igrave;'b'igrave;'u'igrave;'u'ⅈ'b'ii;'u'ii;'u'⨌'b'iiiint;'u'iiiint;'u'∭'b'iiint;'u'iiint;'u'⧜'b'iinfin;'u'iinfin;'u'℩'b'iiota;'u'iiota;'u'IJ'b'IJlig;'u'IJlig;'u'ij'b'ijlig;'u'ijlig;'b'Im;'u'Im;'u'Ī'b'Imacr;'u'Imacr;'u'ī'b'imacr;'u'imacr;'b'image;'u'image;'b'ImaginaryI;'u'ImaginaryI;'u'ℐ'b'imagline;'u'imagline;'b'imagpart;'u'imagpart;'u'ı'b'imath;'u'imath;'u'⊷'b'imof;'u'imof;'u'Ƶ'b'imped;'u'imped;'b'Implies;'u'Implies;'b'in;'u'in;'u'℅'b'incare;'u'incare;'u'∞'b'infin;'u'infin;'u'⧝'b'infintie;'u'infintie;'b'inodot;'u'inodot;'u'∬'b'Int;'u'Int;'u'∫'b'int;'u'int;'u'⊺'b'intcal;'u'intcal;'u'ℤ'b'integers;'u'integers;'b'Integral;'u'Integral;'b'intercal;'u'intercal;'b'Intersection;'u'Intersection;'u'⨗'b'intlarhk;'u'intlarhk;'u'⨼'b'intprod;'u'intprod;'b'InvisibleComma;'u'InvisibleComma;'u'⁢'b'InvisibleTimes;'u'InvisibleTimes;'u'Ё'b'IOcy;'u'IOcy;'u'ё'b'iocy;'u'iocy;'u'Į'b'Iogon;'u'Iogon;'u'į'b'iogon;'u'iogon;'b'Iopf;'u'Iopf;'b'iopf;'u'iopf;'u'Ι'b'Iota;'u'Iota;'u'ι'b'iota;'u'iota;'b'iprod;'u'iprod;'b'¿'u'¿'b'iquest;'u'iquest;'b'Iscr;'u'Iscr;'b'iscr;'u'iscr;'b'isin;'u'isin;'u'⋵'b'isindot;'u'isindot;'u'⋹'b'isinE;'u'isinE;'u'⋴'b'isins;'u'isins;'u'⋳'b'isinsv;'u'isinsv;'b'isinv;'u'isinv;'b'it;'u'it;'u'Ĩ'b'Itilde;'u'Itilde;'u'ĩ'b'itilde;'u'itilde;'u'І'b'Iukcy;'u'Iukcy;'u'і'b'iukcy;'u'iukcy;'b'Ï'u'Ï'b'ï'u'ï'b'Iuml;'u'Iuml;'b'iuml;'u'iuml;'u'Ĵ'b'Jcirc;'u'Jcirc;'u'ĵ'b'jcirc;'u'jcirc;'u'Й'b'Jcy;'u'Jcy;'u'й'b'jcy;'u'jcy;'b'Jfr;'u'Jfr;'b'jfr;'u'jfr;'u'ȷ'b'jmath;'u'jmath;'b'Jopf;'u'Jopf;'b'jopf;'u'jopf;'b'Jscr;'u'Jscr;'b'jscr;'u'jscr;'u'Ј'b'Jsercy;'u'Jsercy;'u'ј'b'jsercy;'u'jsercy;'u'Є'b'Jukcy;'u'Jukcy;'u'є'b'jukcy;'u'jukcy;'u'Κ'b'Kappa;'u'Kappa;'u'κ'b'kappa;'u'kappa;'u'ϰ'b'kappav;'u'kappav;'u'Ķ'b'Kcedil;'u'Kcedil;'u'ķ'b'kcedil;'u'kcedil;'b'Kcy;'u'Kcy;'u'к'b'kcy;'u'kcy;'b'Kfr;'u'Kfr;'b'kfr;'u'kfr;'u'ĸ'b'kgreen;'u'kgreen;'u'Х'b'KHcy;'u'KHcy;'u'х'b'khcy;'u'khcy;'u'Ќ'b'KJcy;'u'KJcy;'u'ќ'b'kjcy;'u'kjcy;'b'Kopf;'u'Kopf;'b'kopf;'u'kopf;'b'Kscr;'u'Kscr;'b'kscr;'u'kscr;'u'⇚'b'lAarr;'u'lAarr;'u'Ĺ'b'Lacute;'u'Lacute;'u'ĺ'b'lacute;'u'lacute;'u'⦴'b'laemptyv;'u'laemptyv;'u'ℒ'b'lagran;'
u'lagran;'u'Λ'b'Lambda;'u'Lambda;'u'λ'b'lambda;'u'lambda;'u'⟪'b'Lang;'u'Lang;'u'⟨'b'lang;'u'lang;'u'⦑'b'langd;'u'langd;'b'langle;'u'langle;'u'⪅'b'lap;'u'lap;'b'Laplacetrf;'u'Laplacetrf;'b'«'u'«'b'laquo;'u'laquo;'u'↞'b'Larr;'u'Larr;'b'lArr;'u'lArr;'u'←'b'larr;'u'larr;'u'⇤'b'larrb;'u'larrb;'u'⤟'b'larrbfs;'u'larrbfs;'u'⤝'b'larrfs;'u'larrfs;'b'larrhk;'u'larrhk;'u'↫'b'larrlp;'u'larrlp;'u'⤹'b'larrpl;'u'larrpl;'u'⥳'b'larrsim;'u'larrsim;'u'↢'b'larrtl;'u'larrtl;'u'⪫'b'lat;'u'lat;'u'⤛'b'lAtail;'u'lAtail;'u'⤙'b'latail;'u'latail;'u'⪭'b'late;'u'late;'u'⪭︀'b'lates;'u'lates;'u'⤎'b'lBarr;'u'lBarr;'u'⤌'b'lbarr;'u'lbarr;'u'❲'b'lbbrk;'u'lbbrk;'b'lbrace;'u'lbrace;'b'lbrack;'u'lbrack;'u'⦋'b'lbrke;'u'lbrke;'u'⦏'b'lbrksld;'u'lbrksld;'u'⦍'b'lbrkslu;'u'lbrkslu;'u'Ľ'b'Lcaron;'u'Lcaron;'u'ľ'b'lcaron;'u'lcaron;'u'Ļ'b'Lcedil;'u'Lcedil;'u'ļ'b'lcedil;'u'lcedil;'u'⌈'b'lceil;'u'lceil;'b'lcub;'u'lcub;'u'Л'b'Lcy;'u'Lcy;'u'л'b'lcy;'u'lcy;'u'⤶'b'ldca;'u'ldca;'b'ldquo;'u'ldquo;'b'ldquor;'u'ldquor;'u'⥧'b'ldrdhar;'u'ldrdhar;'u'⥋'b'ldrushar;'u'ldrushar;'u'↲'b'ldsh;'u'ldsh;'u'≦'b'lE;'u'lE;'u'≤'b'le;'u'le;'b'LeftAngleBracket;'u'LeftAngleBracket;'b'LeftArrow;'u'LeftArrow;'b'Leftarrow;'u'Leftarrow;'b'leftarrow;'u'leftarrow;'b'LeftArrowBar;'u'LeftArrowBar;'u'⇆'b'LeftArrowRightArrow;'u'LeftArrowRightArrow;'b'leftarrowtail;'u'leftarrowtail;'b'LeftCeiling;'u'LeftCeiling;'u'⟦'b'LeftDoubleBracket;'u'LeftDoubleBracket;'u'⥡'b'LeftDownTeeVector;'u'LeftDownTeeVector;'b'LeftDownVector;'u'LeftDownVector;'u'⥙'b'LeftDownVectorBar;'u'LeftDownVectorBar;'u'⌊'b'LeftFloor;'u'LeftFloor;'b'leftharpoondown;'u'leftharpoondown;'u'↼'b'leftharpoonup;'u'leftharpoonup;'u'⇇'b'leftleftarrows;'u'leftleftarrows;'b'LeftRightArrow;'u'LeftRightArrow;'b'Leftrightarrow;'u'Leftrightarrow;'b'leftrightarrow;'u'leftrightarrow;'b'leftrightarrows;'u'leftrightarrows;'u'⇋'b'leftrightharpoons;'u'leftrightharpoons;'b'leftrightsquigarrow;'u'leftrightsquigarrow;'u'⥎'b'LeftRightVector;'u'LeftRightVector;'b'LeftTee;'u'LeftTee;'u'↤'b'LeftTeeArrow;'u'LeftTeeArrow;'u'⥚'b'LeftTeeVector;'u'LeftTeeVector;'u'⋋'b'leftthreetimes;'u'leftthreetimes;'u'⊲'b'LeftTriangle;'u'LeftTriangle;'u'⧏'b'LeftTriangleBar;'u'LeftTriangleBar;'u'⊴'b'LeftTriangleEqual;'u'LeftTriangleEqual;'u'⥑'b'LeftUpDownVector;'u'LeftUpDownVector;'u'⥠'b'LeftUpTeeVector;'u'LeftUpTeeVector;'u'↿'b'LeftUpVector;'u'LeftUpVector;'u'⥘'b'LeftUpVectorBar;'u'LeftUpVectorBar;'b'LeftVector;'u'LeftVector;'u'⥒'b'LeftVectorBar;'u'LeftVectorBar;'u'⪋'b'lEg;'u'lEg;'u'⋚'b'leg;'u'leg;'b'leq;'u'leq;'b'leqq;'u'leqq;'u'⩽'b'leqslant;'u'leqslant;'b'les;'u'les;'u'⪨'b'lescc;'u'lescc;'u'⩿'b'lesdot;'u'lesdot;'u'⪁'b'lesdoto;'u'lesdoto;'u'⪃'b'lesdotor;'u'lesdotor;'u'⋚︀'b'lesg;'u'lesg;'u'⪓'b'lesges;'u'lesges;'b'lessapprox;'u'lessapprox;'u'⋖'b'lessdot;'u'lessdot;'b'lesseqgtr;'u'lesseqgtr;'b'lesseqqgtr;'u'lesseqqgtr;'b'LessEqualGreater;'u'LessEqualGreater;'b'LessFullEqual;'u'LessFullEqual;'u'≶'b'LessGreater;'u'LessGreater;'b'lessgtr;'u'lessgtr;'u'⪡'b'LessLess;'u'LessLess;'u'≲'b'lesssim;'u'lesssim;'b'LessSlantEqual;'u'LessSlantEqual;'b'LessTilde;'u'LessTilde;'u'⥼'b'lfisht;'u'lfisht;'b'lfloor;'u'lfloor;'b'Lfr;'u'Lfr;'b'lfr;'u'lfr;'b'lg;'u'lg;'u'⪑'b'lgE;'u'lgE;'u'⥢'b'lHar;'u'lHar;'b'lhard;'u'lhard;'b'lharu;'u'lharu;'u'⥪'b'lharul;'u'lharul;'u'▄'b'lhblk;'u'lhblk;'u'Љ'b'LJcy;'u'LJcy;'u'љ'b'ljcy;'u'ljcy;'u'⋘'b'Ll;'u'Ll;'u'≪'b'll;'u'll;'b'llarr;'u'llarr;'b'llcorner;'u'llcorner;'b'Lleftarrow;'u'Lleftarrow;'u'⥫'b'llhard;'u'llhard;'u'◺'b'lltri;'u'lltri;'u'Ŀ'b'Lmidot;'u'Lmidot;'u'ŀ'b'lmidot;'u'lmidot;'u'⎰'b'lmoust;'u'lmoust;'b'lmoustache;'u'lmoustache;'u'⪉'b'lnap;'u'lnap;'b
'lnapprox;'u'lnapprox;'u'≨'b'lnE;'u'lnE;'u'⪇'b'lne;'u'lne;'b'lneq;'u'lneq;'b'lneqq;'u'lneqq;'u'⋦'b'lnsim;'u'lnsim;'u'⟬'b'loang;'u'loang;'u'⇽'b'loarr;'u'loarr;'b'lobrk;'u'lobrk;'u'⟵'b'LongLeftArrow;'u'LongLeftArrow;'b'Longleftarrow;'u'Longleftarrow;'b'longleftarrow;'u'longleftarrow;'u'⟷'b'LongLeftRightArrow;'u'LongLeftRightArrow;'b'Longleftrightarrow;'u'Longleftrightarrow;'b'longleftrightarrow;'u'longleftrightarrow;'u'⟼'b'longmapsto;'u'longmapsto;'u'⟶'b'LongRightArrow;'u'LongRightArrow;'b'Longrightarrow;'u'Longrightarrow;'b'longrightarrow;'u'longrightarrow;'b'looparrowleft;'u'looparrowleft;'u'↬'b'looparrowright;'u'looparrowright;'u'⦅'b'lopar;'u'lopar;'b'Lopf;'u'Lopf;'b'lopf;'u'lopf;'u'⨭'b'loplus;'u'loplus;'u'⨴'b'lotimes;'u'lotimes;'u'∗'b'lowast;'u'lowast;'b'lowbar;'u'lowbar;'u'↙'b'LowerLeftArrow;'u'LowerLeftArrow;'u'↘'b'LowerRightArrow;'u'LowerRightArrow;'u'◊'b'loz;'u'loz;'b'lozenge;'u'lozenge;'b'lozf;'u'lozf;'b'lpar;'u'lpar;'u'⦓'b'lparlt;'u'lparlt;'b'lrarr;'u'lrarr;'b'lrcorner;'u'lrcorner;'b'lrhar;'u'lrhar;'u'⥭'b'lrhard;'u'lrhard;'u'‎'b'lrm;'u'lrm;'u'⊿'b'lrtri;'u'lrtri;'b'lsaquo;'u'lsaquo;'b'Lscr;'u'Lscr;'b'lscr;'u'lscr;'u'↰'b'Lsh;'u'Lsh;'b'lsh;'u'lsh;'b'lsim;'u'lsim;'u'⪍'b'lsime;'u'lsime;'u'⪏'b'lsimg;'u'lsimg;'b'lsqb;'u'lsqb;'b'lsquo;'u'lsquo;'b'lsquor;'u'lsquor;'b'Lstrok;'u'Lstrok;'u'ł'b'lstrok;'u'lstrok;'b'LT'u'LT'b'LT;'u'LT;'b'Lt;'u'Lt;'b'lt;'u'lt;'u'⪦'b'ltcc;'u'ltcc;'u'⩹'b'ltcir;'u'ltcir;'b'ltdot;'u'ltdot;'b'lthree;'u'lthree;'u'⋉'b'ltimes;'u'ltimes;'u'⥶'b'ltlarr;'u'ltlarr;'u'⩻'b'ltquest;'u'ltquest;'u'◃'b'ltri;'u'ltri;'b'ltrie;'u'ltrie;'b'ltrif;'u'ltrif;'u'⦖'b'ltrPar;'u'ltrPar;'u'⥊'b'lurdshar;'u'lurdshar;'u'⥦'b'luruhar;'u'luruhar;'u'≨︀'b'lvertneqq;'u'lvertneqq;'b'lvnE;'u'lvnE;'b'¯'u'¯'b'macr;'u'macr;'u'♂'b'male;'u'male;'u'✠'b'malt;'u'malt;'b'maltese;'u'maltese;'u'⤅'b'Map;'u'Map;'u'↦'b'map;'u'map;'b'mapsto;'u'mapsto;'b'mapstodown;'u'mapstodown;'b'mapstoleft;'u'mapstoleft;'u'↥'b'mapstoup;'u'mapstoup;'u'▮'b'marker;'u'marker;'u'⨩'b'mcomma;'u'mcomma;'u'М'b'Mcy;'u'Mcy;'u'м'b'mcy;'u'mcy;'b'mdash;'u'mdash;'u'∺'b'mDDot;'u'mDDot;'b'measuredangle;'u'measuredangle;'u' 
'b'MediumSpace;'u'MediumSpace;'u'ℳ'b'Mellintrf;'u'Mellintrf;'b'Mfr;'u'Mfr;'b'mfr;'u'mfr;'u'℧'b'mho;'u'mho;'b'µ'u'µ'b'micro;'u'micro;'u'∣'b'mid;'u'mid;'b'midast;'u'midast;'u'⫰'b'midcir;'u'midcir;'b'middot;'u'middot;'u'−'b'minus;'u'minus;'b'minusb;'u'minusb;'b'minusd;'u'minusd;'u'⨪'b'minusdu;'u'minusdu;'u'∓'b'MinusPlus;'u'MinusPlus;'u'⫛'b'mlcp;'u'mlcp;'b'mldr;'u'mldr;'b'mnplus;'u'mnplus;'u'⊧'b'models;'u'models;'b'Mopf;'u'Mopf;'b'mopf;'u'mopf;'b'mp;'u'mp;'b'Mscr;'u'Mscr;'b'mscr;'u'mscr;'b'mstpos;'u'mstpos;'u'Μ'b'Mu;'u'Mu;'u'μ'b'mu;'u'mu;'u'⊸'b'multimap;'u'multimap;'b'mumap;'u'mumap;'b'nabla;'u'nabla;'u'Ń'b'Nacute;'u'Nacute;'u'ń'b'nacute;'u'nacute;'u'∠⃒'b'nang;'u'nang;'u'≉'b'nap;'u'nap;'u'⩰̸'b'napE;'u'napE;'u'≋̸'b'napid;'u'napid;'u'ʼn'b'napos;'u'napos;'b'napprox;'u'napprox;'u'♮'b'natur;'u'natur;'b'natural;'u'natural;'u'ℕ'b'naturals;'u'naturals;'b'nbsp;'u'nbsp;'u'≎̸'b'nbump;'u'nbump;'u'≏̸'b'nbumpe;'u'nbumpe;'u'⩃'b'ncap;'u'ncap;'u'Ň'b'Ncaron;'u'Ncaron;'u'ň'b'ncaron;'u'ncaron;'u'Ņ'b'Ncedil;'u'Ncedil;'u'ņ'b'ncedil;'u'ncedil;'u'≇'b'ncong;'u'ncong;'u'⩭̸'b'ncongdot;'u'ncongdot;'u'⩂'b'ncup;'u'ncup;'u'Н'b'Ncy;'u'Ncy;'u'н'b'ncy;'u'ncy;'b'ndash;'u'ndash;'u'≠'b'ne;'u'ne;'u'⤤'b'nearhk;'u'nearhk;'u'⇗'b'neArr;'u'neArr;'u'↗'b'nearr;'u'nearr;'b'nearrow;'u'nearrow;'u'≐̸'b'nedot;'u'nedot;'u'​'b'NegativeMediumSpace;'u'NegativeMediumSpace;'b'NegativeThickSpace;'u'NegativeThickSpace;'b'NegativeThinSpace;'u'NegativeThinSpace;'b'NegativeVeryThinSpace;'u'NegativeVeryThinSpace;'u'≢'b'nequiv;'u'nequiv;'u'⤨'b'nesear;'u'nesear;'u'≂̸'b'nesim;'u'nesim;'b'NestedGreaterGreater;'u'NestedGreaterGreater;'b'NestedLessLess;'u'NestedLessLess;'b'NewLine;'u'NewLine;'u'∄'b'nexist;'u'nexist;'b'nexists;'u'nexists;'b'Nfr;'u'Nfr;'b'nfr;'u'nfr;'u'≧̸'b'ngE;'u'ngE;'u'≱'b'nge;'u'nge;'b'ngeq;'u'ngeq;'b'ngeqq;'u'ngeqq;'u'⩾̸'b'ngeqslant;'u'ngeqslant;'b'nges;'u'nges;'u'⋙̸'b'nGg;'u'nGg;'u'≵'b'ngsim;'u'ngsim;'u'≫⃒'b'nGt;'u'nGt;'u'≯'b'ngt;'u'ngt;'b'ngtr;'u'ngtr;'u'≫̸'b'nGtv;'u'nGtv;'u'⇎'b'nhArr;'u'nhArr;'u'↮'b'nharr;'u'nharr;'u'⫲'b'nhpar;'u'nhpar;'u'∋'b'ni;'u'ni;'u'⋼'b'nis;'u'nis;'u'⋺'b'nisd;'u'nisd;'b'niv;'u'niv;'u'Њ'b'NJcy;'u'NJcy;'u'њ'b'njcy;'u'njcy;'u'⇍'b'nlArr;'u'nlArr;'u'↚'b'nlarr;'u'nlarr;'u'‥'b'nldr;'u'nldr;'u'≦̸'b'nlE;'u'nlE;'u'≰'b'nle;'u'nle;'b'nLeftarrow;'u'nLeftarrow;'b'nleftarrow;'u'nleftarrow;'b'nLeftrightarrow;'u'nLeftrightarrow;'b'nleftrightarrow;'u'nleftrightarrow;'b'nleq;'u'nleq;'b'nleqq;'u'nleqq;'u'⩽̸'b'nleqslant;'u'nleqslant;'b'nles;'u'nles;'u'≮'b'nless;'u'nless;'u'⋘̸'b'nLl;'u'nLl;'u'≴'b'nlsim;'u'nlsim;'u'≪⃒'b'nLt;'u'nLt;'b'nlt;'u'nlt;'u'⋪'b'nltri;'u'nltri;'u'⋬'b'nltrie;'u'nltrie;'u'≪̸'b'nLtv;'u'nLtv;'u'∤'b'nmid;'u'nmid;'u'⁠'b'NoBreak;'u'NoBreak;'b'NonBreakingSpace;'u'NonBreakingSpace;'b'Nopf;'u'Nopf;'b'nopf;'u'nopf;'b'¬'u'¬'u'⫬'b'Not;'u'Not;'b'not;'u'not;'b'NotCongruent;'u'NotCongruent;'u'≭'b'NotCupCap;'u'NotCupCap;'u'∦'b'NotDoubleVerticalBar;'u'NotDoubleVerticalBar;'u'∉'b'NotElement;'u'NotElement;'b'NotEqual;'u'NotEqual;'b'NotEqualTilde;'u'NotEqualTilde;'b'NotExists;'u'NotExists;'b'NotGreater;'u'NotGreater;'b'NotGreaterEqual;'u'NotGreaterEqual;'b'NotGreaterFullEqual;'u'NotGreaterFullEqual;'b'NotGreaterGreater;'u'NotGreaterGreater;'u'≹'b'NotGreaterLess;'u'NotGreaterLess;'b'NotGreaterSlantEqual;'u'NotGreaterSlantEqual;'b'NotGreaterTilde;'u'NotGreaterTilde;'b'NotHumpDownHump;'u'NotHumpDownHump;'b'NotHumpEqual;'u'NotHumpEqual;'b'notin;'u'notin;'u'⋵̸'b'notindot;'u'notindot;'u'⋹̸'b'notinE;'u'notinE;'b'notinva;'u'notinva;'u'⋷'b'notinvb;'u'notinvb;'u'⋶'b'notinvc;'u'notinvc;'b'NotLeftTriangle;'u'NotLeftTriangle;'u'⧏̸'b'NotLeftTriangleBar
;'u'NotLeftTriangleBar;'b'NotLeftTriangleEqual;'u'NotLeftTriangleEqual;'b'NotLess;'u'NotLess;'b'NotLessEqual;'u'NotLessEqual;'u'≸'b'NotLessGreater;'u'NotLessGreater;'b'NotLessLess;'u'NotLessLess;'b'NotLessSlantEqual;'u'NotLessSlantEqual;'b'NotLessTilde;'u'NotLessTilde;'u'⪢̸'b'NotNestedGreaterGreater;'u'NotNestedGreaterGreater;'u'⪡̸'b'NotNestedLessLess;'u'NotNestedLessLess;'u'∌'b'notni;'u'notni;'b'notniva;'u'notniva;'u'⋾'b'notnivb;'u'notnivb;'u'⋽'b'notnivc;'u'notnivc;'u'⊀'b'NotPrecedes;'u'NotPrecedes;'u'⪯̸'b'NotPrecedesEqual;'u'NotPrecedesEqual;'u'⋠'b'NotPrecedesSlantEqual;'u'NotPrecedesSlantEqual;'b'NotReverseElement;'u'NotReverseElement;'u'⋫'b'NotRightTriangle;'u'NotRightTriangle;'u'⧐̸'b'NotRightTriangleBar;'u'NotRightTriangleBar;'u'⋭'b'NotRightTriangleEqual;'u'NotRightTriangleEqual;'u'⊏̸'b'NotSquareSubset;'u'NotSquareSubset;'u'⋢'b'NotSquareSubsetEqual;'u'NotSquareSubsetEqual;'u'⊐̸'b'NotSquareSuperset;'u'NotSquareSuperset;'u'⋣'b'NotSquareSupersetEqual;'u'NotSquareSupersetEqual;'u'⊂⃒'b'NotSubset;'u'NotSubset;'u'⊈'b'NotSubsetEqual;'u'NotSubsetEqual;'u'⊁'b'NotSucceeds;'u'NotSucceeds;'u'⪰̸'b'NotSucceedsEqual;'u'NotSucceedsEqual;'u'⋡'b'NotSucceedsSlantEqual;'u'NotSucceedsSlantEqual;'u'≿̸'b'NotSucceedsTilde;'u'NotSucceedsTilde;'u'⊃⃒'b'NotSuperset;'u'NotSuperset;'u'⊉'b'NotSupersetEqual;'u'NotSupersetEqual;'u'≁'b'NotTilde;'u'NotTilde;'u'≄'b'NotTildeEqual;'u'NotTildeEqual;'b'NotTildeFullEqual;'u'NotTildeFullEqual;'b'NotTildeTilde;'u'NotTildeTilde;'b'NotVerticalBar;'u'NotVerticalBar;'b'npar;'u'npar;'b'nparallel;'u'nparallel;'u'⫽⃥'b'nparsl;'u'nparsl;'u'∂̸'b'npart;'u'npart;'u'⨔'b'npolint;'u'npolint;'b'npr;'u'npr;'b'nprcue;'u'nprcue;'b'npre;'u'npre;'b'nprec;'u'nprec;'b'npreceq;'u'npreceq;'u'⇏'b'nrArr;'u'nrArr;'u'↛'b'nrarr;'u'nrarr;'u'⤳̸'b'nrarrc;'u'nrarrc;'u'↝̸'b'nrarrw;'u'nrarrw;'b'nRightarrow;'u'nRightarrow;'b'nrightarrow;'u'nrightarrow;'b'nrtri;'u'nrtri;'b'nrtrie;'u'nrtrie;'b'nsc;'u'nsc;'b'nsccue;'u'nsccue;'b'nsce;'u'nsce;'b'Nscr;'u'Nscr;'b'nscr;'u'nscr;'b'nshortmid;'u'nshortmid;'b'nshortparallel;'u'nshortparallel;'b'nsim;'u'nsim;'b'nsime;'u'nsime;'b'nsimeq;'u'nsimeq;'b'nsmid;'u'nsmid;'b'nspar;'u'nspar;'b'nsqsube;'u'nsqsube;'b'nsqsupe;'u'nsqsupe;'u'⊄'b'nsub;'u'nsub;'u'⫅̸'b'nsubE;'u'nsubE;'b'nsube;'u'nsube;'b'nsubset;'u'nsubset;'b'nsubseteq;'u'nsubseteq;'b'nsubseteqq;'u'nsubseteqq;'b'nsucc;'u'nsucc;'b'nsucceq;'u'nsucceq;'u'⊅'b'nsup;'u'nsup;'u'⫆̸'b'nsupE;'u'nsupE;'b'nsupe;'u'nsupe;'b'nsupset;'u'nsupset;'b'nsupseteq;'u'nsupseteq;'b'nsupseteqq;'u'nsupseteqq;'b'ntgl;'u'ntgl;'b'Ñ'u'Ñ'b'ñ'u'ñ'b'Ntilde;'u'Ntilde;'b'ntilde;'u'ntilde;'b'ntlg;'u'ntlg;'b'ntriangleleft;'u'ntriangleleft;'b'ntrianglelefteq;'u'ntrianglelefteq;'b'ntriangleright;'u'ntriangleright;'b'ntrianglerighteq;'u'ntrianglerighteq;'u'Ν'b'Nu;'u'Nu;'u'ν'b'nu;'u'nu;'b'num;'u'num;'u'№'b'numero;'u'numero;'u' 
'b'numsp;'u'numsp;'u'≍⃒'b'nvap;'u'nvap;'u'⊯'b'nVDash;'u'nVDash;'u'⊮'b'nVdash;'u'nVdash;'u'⊭'b'nvDash;'u'nvDash;'u'⊬'b'nvdash;'u'nvdash;'u'≥⃒'b'nvge;'u'nvge;'u'>⃒'b'nvgt;'u'nvgt;'u'⤄'b'nvHarr;'u'nvHarr;'u'⧞'b'nvinfin;'u'nvinfin;'u'⤂'b'nvlArr;'u'nvlArr;'u'≤⃒'b'nvle;'u'nvle;'u'<⃒'b'nvlt;'u'nvlt;'u'⊴⃒'b'nvltrie;'u'nvltrie;'u'⤃'b'nvrArr;'u'nvrArr;'u'⊵⃒'b'nvrtrie;'u'nvrtrie;'u'∼⃒'b'nvsim;'u'nvsim;'u'⤣'b'nwarhk;'u'nwarhk;'u'⇖'b'nwArr;'u'nwArr;'u'↖'b'nwarr;'u'nwarr;'b'nwarrow;'u'nwarrow;'u'⤧'b'nwnear;'u'nwnear;'b'Ó'u'Ó'b'ó'u'ó'b'Oacute;'u'Oacute;'b'oacute;'u'oacute;'b'oast;'u'oast;'b'ocir;'u'ocir;'b'Ô'u'Ô'b'ô'u'ô'b'Ocirc;'u'Ocirc;'b'ocirc;'u'ocirc;'u'О'b'Ocy;'u'Ocy;'u'о'b'ocy;'u'ocy;'b'odash;'u'odash;'u'Ő'b'Odblac;'u'Odblac;'u'ő'b'odblac;'u'odblac;'u'⨸'b'odiv;'u'odiv;'b'odot;'u'odot;'u'⦼'b'odsold;'u'odsold;'b'OElig;'u'OElig;'b'oelig;'u'oelig;'u'⦿'b'ofcir;'u'ofcir;'b'Ofr;'u'Ofr;'b'ofr;'u'ofr;'u'˛'b'ogon;'u'ogon;'b'Ò'u'Ò'b'ò'u'ò'b'Ograve;'u'Ograve;'b'ograve;'u'ograve;'u'⧁'b'ogt;'u'ogt;'u'⦵'b'ohbar;'u'ohbar;'u'Ω'b'ohm;'u'ohm;'b'oint;'u'oint;'b'olarr;'u'olarr;'u'⦾'b'olcir;'u'olcir;'u'⦻'b'olcross;'u'olcross;'u'‾'b'oline;'u'oline;'u'⧀'b'olt;'u'olt;'u'Ō'b'Omacr;'u'Omacr;'u'ō'b'omacr;'u'omacr;'b'Omega;'u'Omega;'u'ω'b'omega;'u'omega;'u'Ο'b'Omicron;'u'Omicron;'u'ο'b'omicron;'u'omicron;'u'⦶'b'omid;'u'omid;'b'ominus;'u'ominus;'b'Oopf;'u'Oopf;'b'oopf;'u'oopf;'u'⦷'b'opar;'u'opar;'b'OpenCurlyDoubleQuote;'u'OpenCurlyDoubleQuote;'b'OpenCurlyQuote;'u'OpenCurlyQuote;'u'⦹'b'operp;'u'operp;'b'oplus;'u'oplus;'u'⩔'b'Or;'u'Or;'u'∨'b'or;'u'or;'b'orarr;'u'orarr;'u'⩝'b'ord;'u'ord;'u'ℴ'b'order;'u'order;'b'orderof;'u'orderof;'b'ª'u'ª'b'ordf;'u'ordf;'b'º'u'º'b'ordm;'u'ordm;'u'⊶'b'origof;'u'origof;'u'⩖'b'oror;'u'oror;'u'⩗'b'orslope;'u'orslope;'u'⩛'b'orv;'u'orv;'b'oS;'u'oS;'b'Oscr;'u'Oscr;'b'oscr;'u'oscr;'b'Ø'u'Ø'b'ø'u'ø'b'Oslash;'u'Oslash;'b'oslash;'u'oslash;'u'⊘'b'osol;'u'osol;'b'Õ'u'Õ'b'õ'u'õ'b'Otilde;'u'Otilde;'b'otilde;'u'otilde;'u'⨷'b'Otimes;'u'Otimes;'b'otimes;'u'otimes;'u'⨶'b'otimesas;'u'otimesas;'b'Ö'u'Ö'b'ö'u'ö'b'Ouml;'u'Ouml;'b'ouml;'u'ouml;'u'⌽'b'ovbar;'u'ovbar;'b'OverBar;'u'OverBar;'u'⏞'b'OverBrace;'u'OverBrace;'u'⎴'b'OverBracket;'u'OverBracket;'u'⏜'b'OverParenthesis;'u'OverParenthesis;'b'par;'u'par;'b'¶'u'¶'b'para;'u'para;'b'parallel;'u'parallel;'u'⫳'b'parsim;'u'parsim;'u'⫽'b'parsl;'u'parsl;'u'∂'b'part;'u'part;'b'PartialD;'u'PartialD;'u'П'b'Pcy;'u'Pcy;'u'п'b'pcy;'u'pcy;'b'percnt;'u'percnt;'b'period;'u'period;'b'permil;'u'permil;'b'perp;'u'perp;'u'‱'b'pertenk;'u'pertenk;'b'Pfr;'u'Pfr;'b'pfr;'u'pfr;'u'Φ'b'Phi;'u'Phi;'b'phi;'u'phi;'u'ϕ'b'phiv;'u'phiv;'b'phmmat;'u'phmmat;'u'☎'b'phone;'u'phone;'u'Π'b'Pi;'u'Pi;'u'π'b'pi;'u'pi;'b'pitchfork;'u'pitchfork;'u'ϖ'b'piv;'u'piv;'b'planck;'u'planck;'u'ℎ'b'planckh;'u'planckh;'b'plankv;'u'plankv;'b'plus;'u'plus;'u'⨣'b'plusacir;'u'plusacir;'b'plusb;'u'plusb;'u'⨢'b'pluscir;'u'pluscir;'b'plusdo;'u'plusdo;'u'⨥'b'plusdu;'u'plusdu;'u'⩲'b'pluse;'u'pluse;'b'±'u'±'b'PlusMinus;'u'PlusMinus;'b'plusmn;'u'plusmn;'u'⨦'b'plussim;'u'plussim;'u'⨧'b'plustwo;'u'plustwo;'b'pm;'u'pm;'b'Poincareplane;'u'Poincareplane;'u'⨕'b'pointint;'u'pointint;'u'ℙ'b'Popf;'u'Popf;'b'popf;'u'popf;'b'£'u'£'b'pound;'u'pound;'u'⪻'b'Pr;'u'Pr;'u'≺'b'pr;'u'pr;'u'⪷'b'prap;'u'prap;'u'≼'b'prcue;'u'prcue;'u'⪳'b'prE;'u'prE;'u'⪯'b'pre;'u'pre;'b'prec;'u'prec;'b'precapprox;'u'precapprox;'b'preccurlyeq;'u'preccurlyeq;'b'Precedes;'u'Precedes;'b'PrecedesEqual;'u'PrecedesEqual;'b'PrecedesSlantEqual;'u'PrecedesSlantEqual;'u'≾'b'PrecedesTilde;'u'PrecedesTilde;'b'preceq;'u'preceq;'u'⪹'b'precnapprox;'u'precnapprox;'u'⪵'b'precneqq;'u'precneqq;'
u'⋨'b'precnsim;'u'precnsim;'b'precsim;'u'precsim;'u'″'b'Prime;'u'Prime;'u'′'b'prime;'u'prime;'b'primes;'u'primes;'b'prnap;'u'prnap;'b'prnE;'u'prnE;'b'prnsim;'u'prnsim;'u'∏'b'prod;'u'prod;'b'Product;'u'Product;'u'⌮'b'profalar;'u'profalar;'u'⌒'b'profline;'u'profline;'u'⌓'b'profsurf;'u'profsurf;'u'∝'b'prop;'u'prop;'b'Proportion;'u'Proportion;'b'Proportional;'u'Proportional;'b'propto;'u'propto;'b'prsim;'u'prsim;'u'⊰'b'prurel;'u'prurel;'b'Pscr;'u'Pscr;'b'pscr;'u'pscr;'u'Ψ'b'Psi;'u'Psi;'u'ψ'b'psi;'u'psi;'u' 'b'puncsp;'u'puncsp;'b'Qfr;'u'Qfr;'b'qfr;'u'qfr;'b'qint;'u'qint;'u'ℚ'b'Qopf;'u'Qopf;'b'qopf;'u'qopf;'u'⁗'b'qprime;'u'qprime;'b'Qscr;'u'Qscr;'b'qscr;'u'qscr;'b'quaternions;'u'quaternions;'u'⨖'b'quatint;'u'quatint;'b'quest;'u'quest;'b'questeq;'u'questeq;'b'QUOT'u'QUOT'b'QUOT;'u'QUOT;'b'quot;'u'quot;'u'⇛'b'rAarr;'u'rAarr;'u'∽̱'b'race;'u'race;'u'Ŕ'b'Racute;'u'Racute;'u'ŕ'b'racute;'u'racute;'u'√'b'radic;'u'radic;'u'⦳'b'raemptyv;'u'raemptyv;'u'⟫'b'Rang;'u'Rang;'u'⟩'b'rang;'u'rang;'u'⦒'b'rangd;'u'rangd;'u'⦥'b'range;'u'range;'b'rangle;'u'rangle;'b'»'u'»'b'raquo;'u'raquo;'u'↠'b'Rarr;'u'Rarr;'b'rArr;'u'rArr;'u'→'b'rarr;'u'rarr;'u'⥵'b'rarrap;'u'rarrap;'u'⇥'b'rarrb;'u'rarrb;'u'⤠'b'rarrbfs;'u'rarrbfs;'u'⤳'b'rarrc;'u'rarrc;'u'⤞'b'rarrfs;'u'rarrfs;'b'rarrhk;'u'rarrhk;'b'rarrlp;'u'rarrlp;'u'⥅'b'rarrpl;'u'rarrpl;'u'⥴'b'rarrsim;'u'rarrsim;'u'⤖'b'Rarrtl;'u'Rarrtl;'u'↣'b'rarrtl;'u'rarrtl;'u'↝'b'rarrw;'u'rarrw;'u'⤜'b'rAtail;'u'rAtail;'u'⤚'b'ratail;'u'ratail;'u'∶'b'ratio;'u'ratio;'b'rationals;'u'rationals;'b'RBarr;'u'RBarr;'b'rBarr;'u'rBarr;'b'rbarr;'u'rbarr;'u'❳'b'rbbrk;'u'rbbrk;'b'rbrace;'u'rbrace;'b'rbrack;'u'rbrack;'u'⦌'b'rbrke;'u'rbrke;'u'⦎'b'rbrksld;'u'rbrksld;'u'⦐'b'rbrkslu;'u'rbrkslu;'u'Ř'b'Rcaron;'u'Rcaron;'u'ř'b'rcaron;'u'rcaron;'u'Ŗ'b'Rcedil;'u'Rcedil;'u'ŗ'b'rcedil;'u'rcedil;'u'⌉'b'rceil;'u'rceil;'b'rcub;'u'rcub;'u'Р'b'Rcy;'u'Rcy;'u'р'b'rcy;'u'rcy;'u'⤷'b'rdca;'u'rdca;'u'⥩'b'rdldhar;'u'rdldhar;'b'rdquo;'u'rdquo;'b'rdquor;'u'rdquor;'u'↳'b'rdsh;'u'rdsh;'u'ℜ'b'Re;'u'Re;'b'real;'u'real;'u'ℛ'b'realine;'u'realine;'b'realpart;'u'realpart;'u'ℝ'b'reals;'u'reals;'u'▭'b'rect;'u'rect;'b'REG'u'REG'b'REG;'u'REG;'b'reg;'u'reg;'b'ReverseElement;'u'ReverseElement;'b'ReverseEquilibrium;'u'ReverseEquilibrium;'b'ReverseUpEquilibrium;'u'ReverseUpEquilibrium;'u'⥽'b'rfisht;'u'rfisht;'u'⌋'b'rfloor;'u'rfloor;'b'Rfr;'u'Rfr;'b'rfr;'u'rfr;'u'⥤'b'rHar;'u'rHar;'b'rhard;'u'rhard;'u'⇀'b'rharu;'u'rharu;'u'⥬'b'rharul;'u'rharul;'u'Ρ'b'Rho;'u'Rho;'u'ρ'b'rho;'u'rho;'u'ϱ'b'rhov;'u'rhov;'b'RightAngleBracket;'u'RightAngleBracket;'b'RightArrow;'u'RightArrow;'b'Rightarrow;'u'Rightarrow;'b'rightarrow;'u'rightarrow;'b'RightArrowBar;'u'RightArrowBar;'u'⇄'b'RightArrowLeftArrow;'u'RightArrowLeftArrow;'b'rightarrowtail;'u'rightarrowtail;'b'RightCeiling;'u'RightCeiling;'u'⟧'b'RightDoubleBracket;'u'RightDoubleBracket;'u'⥝'b'RightDownTeeVector;'u'RightDownTeeVector;'b'RightDownVector;'u'RightDownVector;'u'⥕'b'RightDownVectorBar;'u'RightDownVectorBar;'b'RightFloor;'u'RightFloor;'b'rightharpoondown;'u'rightharpoondown;'b'rightharpoonup;'u'rightharpoonup;'b'rightleftarrows;'u'rightleftarrows;'b'rightleftharpoons;'u'rightleftharpoons;'u'⇉'b'rightrightarrows;'u'rightrightarrows;'b'rightsquigarrow;'u'rightsquigarrow;'u'⊢'b'RightTee;'u'RightTee;'b'RightTeeArrow;'u'RightTeeArrow;'u'⥛'b'RightTeeVector;'u'RightTeeVector;'u'⋌'b'rightthreetimes;'u'rightthreetimes;'u'⊳'b'RightTriangle;'u'RightTriangle;'u'⧐'b'RightTriangleBar;'u'RightTriangleBar;'u'⊵'b'RightTriangleEqual;'u'RightTriangleEqual;'u'⥏'b'RightUpDownVector;'u'RightUpDownVector;'u'⥜'b'RightUpTeeVector;'u'Righ
tUpTeeVector;'u'↾'b'RightUpVector;'u'RightUpVector;'u'⥔'b'RightUpVectorBar;'u'RightUpVectorBar;'b'RightVector;'u'RightVector;'u'⥓'b'RightVectorBar;'u'RightVectorBar;'u'˚'b'ring;'u'ring;'b'risingdotseq;'u'risingdotseq;'b'rlarr;'u'rlarr;'b'rlhar;'u'rlhar;'u'‏'b'rlm;'u'rlm;'u'⎱'b'rmoust;'u'rmoust;'b'rmoustache;'u'rmoustache;'u'⫮'b'rnmid;'u'rnmid;'u'⟭'b'roang;'u'roang;'u'⇾'b'roarr;'u'roarr;'b'robrk;'u'robrk;'u'⦆'b'ropar;'u'ropar;'b'Ropf;'u'Ropf;'b'ropf;'u'ropf;'u'⨮'b'roplus;'u'roplus;'u'⨵'b'rotimes;'u'rotimes;'u'⥰'b'RoundImplies;'u'RoundImplies;'b'rpar;'u'rpar;'u'⦔'b'rpargt;'u'rpargt;'u'⨒'b'rppolint;'u'rppolint;'b'rrarr;'u'rrarr;'b'Rrightarrow;'u'Rrightarrow;'b'rsaquo;'u'rsaquo;'b'Rscr;'u'Rscr;'b'rscr;'u'rscr;'u'↱'b'Rsh;'u'Rsh;'b'rsh;'u'rsh;'b'rsqb;'u'rsqb;'b'rsquo;'u'rsquo;'b'rsquor;'u'rsquor;'b'rthree;'u'rthree;'u'⋊'b'rtimes;'u'rtimes;'u'▹'b'rtri;'u'rtri;'b'rtrie;'u'rtrie;'b'rtrif;'u'rtrif;'u'⧎'b'rtriltri;'u'rtriltri;'u'⧴'b'RuleDelayed;'u'RuleDelayed;'u'⥨'b'ruluhar;'u'ruluhar;'u'℞'b'rx;'u'rx;'u'Ś'b'Sacute;'u'Sacute;'u'ś'b'sacute;'u'sacute;'b'sbquo;'u'sbquo;'u'⪼'b'Sc;'u'Sc;'u'≻'b'sc;'u'sc;'u'⪸'b'scap;'u'scap;'b'Scaron;'u'Scaron;'b'scaron;'u'scaron;'u'≽'b'sccue;'u'sccue;'u'⪴'b'scE;'u'scE;'u'⪰'b'sce;'u'sce;'u'Ş'b'Scedil;'u'Scedil;'u'ş'b'scedil;'u'scedil;'u'Ŝ'b'Scirc;'u'Scirc;'u'ŝ'b'scirc;'u'scirc;'u'⪺'b'scnap;'u'scnap;'u'⪶'b'scnE;'u'scnE;'u'⋩'b'scnsim;'u'scnsim;'u'⨓'b'scpolint;'u'scpolint;'u'≿'b'scsim;'u'scsim;'u'С'b'Scy;'u'Scy;'u'с'b'scy;'u'scy;'u'⋅'b'sdot;'u'sdot;'b'sdotb;'u'sdotb;'u'⩦'b'sdote;'u'sdote;'b'searhk;'u'searhk;'u'⇘'b'seArr;'u'seArr;'b'searr;'u'searr;'b'searrow;'u'searrow;'b'§'u'§'b'sect;'u'sect;'b'semi;'u'semi;'u'⤩'b'seswar;'u'seswar;'b'setminus;'u'setminus;'b'setmn;'u'setmn;'u'✶'b'sext;'u'sext;'b'Sfr;'u'Sfr;'b'sfr;'u'sfr;'b'sfrown;'u'sfrown;'u'♯'b'sharp;'u'sharp;'u'Щ'b'SHCHcy;'u'SHCHcy;'u'щ'b'shchcy;'u'shchcy;'u'Ш'b'SHcy;'u'SHcy;'u'ш'b'shcy;'u'shcy;'b'ShortDownArrow;'u'ShortDownArrow;'b'ShortLeftArrow;'u'ShortLeftArrow;'b'shortmid;'u'shortmid;'b'shortparallel;'u'shortparallel;'b'ShortRightArrow;'u'ShortRightArrow;'u'↑'b'ShortUpArrow;'u'ShortUpArrow;'b'­'u'­'b'shy;'u'shy;'u'Σ'b'Sigma;'u'Sigma;'u'σ'b'sigma;'u'sigma;'u'ς'b'sigmaf;'u'sigmaf;'b'sigmav;'u'sigmav;'u'∼'b'sim;'u'sim;'u'⩪'b'simdot;'u'simdot;'u'≃'b'sime;'u'sime;'b'simeq;'u'simeq;'u'⪞'b'simg;'u'simg;'u'⪠'b'simgE;'u'simgE;'u'⪝'b'siml;'u'siml;'u'⪟'b'simlE;'u'simlE;'u'≆'b'simne;'u'simne;'u'⨤'b'simplus;'u'simplus;'u'⥲'b'simrarr;'u'simrarr;'b'slarr;'u'slarr;'b'SmallCircle;'u'SmallCircle;'b'smallsetminus;'u'smallsetminus;'u'⨳'b'smashp;'u'smashp;'u'⧤'b'smeparsl;'u'smeparsl;'b'smid;'u'smid;'u'⌣'b'smile;'u'smile;'u'⪪'b'smt;'u'smt;'u'⪬'b'smte;'u'smte;'u'⪬︀'b'smtes;'u'smtes;'u'Ь'b'SOFTcy;'u'SOFTcy;'u'ь'b'softcy;'u'softcy;'b'sol;'u'sol;'u'⧄'b'solb;'u'solb;'u'⌿'b'solbar;'u'solbar;'b'Sopf;'u'Sopf;'b'sopf;'u'sopf;'u'♠'b'spades;'u'spades;'b'spadesuit;'u'spadesuit;'b'spar;'u'spar;'u'⊓'b'sqcap;'u'sqcap;'u'⊓︀'b'sqcaps;'u'sqcaps;'u'⊔'b'sqcup;'u'sqcup;'u'⊔︀'b'sqcups;'u'sqcups;'b'Sqrt;'u'Sqrt;'u'⊏'b'sqsub;'u'sqsub;'u'⊑'b'sqsube;'u'sqsube;'b'sqsubset;'u'sqsubset;'b'sqsubseteq;'u'sqsubseteq;'u'⊐'b'sqsup;'u'sqsup;'u'⊒'b'sqsupe;'u'sqsupe;'b'sqsupset;'u'sqsupset;'b'sqsupseteq;'u'sqsupseteq;'u'□'b'squ;'u'squ;'b'Square;'u'Square;'b'square;'u'square;'b'SquareIntersection;'u'SquareIntersection;'b'SquareSubset;'u'SquareSubset;'b'SquareSubsetEqual;'u'SquareSubsetEqual;'b'SquareSuperset;'u'SquareSuperset;'b'SquareSupersetEqual;'u'SquareSupersetEqual;'b'SquareUnion;'u'SquareUnion;'b'squarf;'u'squarf;'b'squf;'u'squf;'b'srarr;'u'srarr;'b'Sscr;'u'Sscr;'b'sscr;'u'
sscr;'b'ssetmn;'u'ssetmn;'b'ssmile;'u'ssmile;'u'⋆'b'sstarf;'u'sstarf;'b'Star;'u'Star;'u'☆'b'star;'u'star;'b'starf;'u'starf;'b'straightepsilon;'u'straightepsilon;'b'straightphi;'u'straightphi;'b'strns;'u'strns;'u'⋐'b'Sub;'u'Sub;'u'⊂'b'sub;'u'sub;'u'⪽'b'subdot;'u'subdot;'u'⫅'b'subE;'u'subE;'u'⊆'b'sube;'u'sube;'u'⫃'b'subedot;'u'subedot;'u'⫁'b'submult;'u'submult;'u'⫋'b'subnE;'u'subnE;'u'⊊'b'subne;'u'subne;'u'⪿'b'subplus;'u'subplus;'u'⥹'b'subrarr;'u'subrarr;'b'Subset;'u'Subset;'b'subset;'u'subset;'b'subseteq;'u'subseteq;'b'subseteqq;'u'subseteqq;'b'SubsetEqual;'u'SubsetEqual;'b'subsetneq;'u'subsetneq;'b'subsetneqq;'u'subsetneqq;'u'⫇'b'subsim;'u'subsim;'u'⫕'b'subsub;'u'subsub;'u'⫓'b'subsup;'u'subsup;'b'succ;'u'succ;'b'succapprox;'u'succapprox;'b'succcurlyeq;'u'succcurlyeq;'b'Succeeds;'u'Succeeds;'b'SucceedsEqual;'u'SucceedsEqual;'b'SucceedsSlantEqual;'u'SucceedsSlantEqual;'b'SucceedsTilde;'u'SucceedsTilde;'b'succeq;'u'succeq;'b'succnapprox;'u'succnapprox;'b'succneqq;'u'succneqq;'b'succnsim;'u'succnsim;'b'succsim;'u'succsim;'b'SuchThat;'u'SuchThat;'u'∑'b'Sum;'u'Sum;'b'sum;'u'sum;'u'♪'b'sung;'u'sung;'b'¹'u'¹'b'sup1;'u'sup1;'b'²'u'²'b'sup2;'u'sup2;'b'³'u'³'b'sup3;'u'sup3;'u'⋑'b'Sup;'u'Sup;'u'⊃'b'sup;'u'sup;'u'⪾'b'supdot;'u'supdot;'u'⫘'b'supdsub;'u'supdsub;'u'⫆'b'supE;'u'supE;'u'⊇'b'supe;'u'supe;'u'⫄'b'supedot;'u'supedot;'b'Superset;'u'Superset;'b'SupersetEqual;'u'SupersetEqual;'u'⟉'b'suphsol;'u'suphsol;'u'⫗'b'suphsub;'u'suphsub;'u'⥻'b'suplarr;'u'suplarr;'u'⫂'b'supmult;'u'supmult;'u'⫌'b'supnE;'u'supnE;'u'⊋'b'supne;'u'supne;'u'⫀'b'supplus;'u'supplus;'b'Supset;'u'Supset;'b'supset;'u'supset;'b'supseteq;'u'supseteq;'b'supseteqq;'u'supseteqq;'b'supsetneq;'u'supsetneq;'b'supsetneqq;'u'supsetneqq;'u'⫈'b'supsim;'u'supsim;'u'⫔'b'supsub;'u'supsub;'u'⫖'b'supsup;'u'supsup;'b'swarhk;'u'swarhk;'u'⇙'b'swArr;'u'swArr;'b'swarr;'u'swarr;'b'swarrow;'u'swarrow;'u'⤪'b'swnwar;'u'swnwar;'b'ß'u'ß'b'szlig;'u'szlig;'b'Tab;'u'Tab;'u'⌖'b'target;'u'target;'u'Τ'b'Tau;'u'Tau;'u'τ'b'tau;'u'tau;'b'tbrk;'u'tbrk;'u'Ť'b'Tcaron;'u'Tcaron;'u'ť'b'tcaron;'u'tcaron;'u'Ţ'b'Tcedil;'u'Tcedil;'u'ţ'b'tcedil;'u'tcedil;'u'Т'b'Tcy;'u'Tcy;'u'т'b'tcy;'u'tcy;'u'⃛'b'tdot;'u'tdot;'u'⌕'b'telrec;'u'telrec;'b'Tfr;'u'Tfr;'b'tfr;'u'tfr;'u'∴'b'there4;'u'there4;'b'Therefore;'u'Therefore;'b'therefore;'u'therefore;'u'Θ'b'Theta;'u'Theta;'u'θ'b'theta;'u'theta;'u'ϑ'b'thetasym;'u'thetasym;'b'thetav;'u'thetav;'b'thickapprox;'u'thickapprox;'b'thicksim;'u'thicksim;'u'  'b'ThickSpace;'u'ThickSpace;'u' 
'b'thinsp;'u'thinsp;'b'ThinSpace;'u'ThinSpace;'b'thkap;'u'thkap;'b'thksim;'u'thksim;'b'Þ'u'Þ'b'þ'u'þ'b'THORN;'u'THORN;'b'thorn;'u'thorn;'b'Tilde;'u'Tilde;'b'tilde;'u'tilde;'b'TildeEqual;'u'TildeEqual;'b'TildeFullEqual;'u'TildeFullEqual;'b'TildeTilde;'u'TildeTilde;'b'×'u'×'b'times;'u'times;'b'timesb;'u'timesb;'u'⨱'b'timesbar;'u'timesbar;'u'⨰'b'timesd;'u'timesd;'b'tint;'u'tint;'b'toea;'u'toea;'b'top;'u'top;'u'⌶'b'topbot;'u'topbot;'u'⫱'b'topcir;'u'topcir;'b'Topf;'u'Topf;'b'topf;'u'topf;'u'⫚'b'topfork;'u'topfork;'b'tosa;'u'tosa;'u'‴'b'tprime;'u'tprime;'b'TRADE;'u'TRADE;'b'trade;'u'trade;'u'▵'b'triangle;'u'triangle;'b'triangledown;'u'triangledown;'b'triangleleft;'u'triangleleft;'b'trianglelefteq;'u'trianglelefteq;'u'≜'b'triangleq;'u'triangleq;'b'triangleright;'u'triangleright;'b'trianglerighteq;'u'trianglerighteq;'u'◬'b'tridot;'u'tridot;'b'trie;'u'trie;'u'⨺'b'triminus;'u'triminus;'b'TripleDot;'u'TripleDot;'u'⨹'b'triplus;'u'triplus;'u'⧍'b'trisb;'u'trisb;'u'⨻'b'tritime;'u'tritime;'u'⏢'b'trpezium;'u'trpezium;'b'Tscr;'u'Tscr;'b'tscr;'u'tscr;'u'Ц'b'TScy;'u'TScy;'u'ц'b'tscy;'u'tscy;'u'Ћ'b'TSHcy;'u'TSHcy;'u'ћ'b'tshcy;'u'tshcy;'u'Ŧ'b'Tstrok;'u'Tstrok;'u'ŧ'b'tstrok;'u'tstrok;'b'twixt;'u'twixt;'b'twoheadleftarrow;'u'twoheadleftarrow;'b'twoheadrightarrow;'u'twoheadrightarrow;'b'Ú'u'Ú'b'ú'u'ú'b'Uacute;'u'Uacute;'b'uacute;'u'uacute;'u'↟'b'Uarr;'u'Uarr;'b'uArr;'u'uArr;'b'uarr;'u'uarr;'u'⥉'b'Uarrocir;'u'Uarrocir;'u'Ў'b'Ubrcy;'u'Ubrcy;'u'ў'b'ubrcy;'u'ubrcy;'u'Ŭ'b'Ubreve;'u'Ubreve;'u'ŭ'b'ubreve;'u'ubreve;'b'Û'u'Û'b'û'u'û'b'Ucirc;'u'Ucirc;'b'ucirc;'u'ucirc;'u'У'b'Ucy;'u'Ucy;'u'у'b'ucy;'u'ucy;'u'⇅'b'udarr;'u'udarr;'u'Ű'b'Udblac;'u'Udblac;'u'ű'b'udblac;'u'udblac;'u'⥮'b'udhar;'u'udhar;'u'⥾'b'ufisht;'u'ufisht;'b'Ufr;'u'Ufr;'b'ufr;'u'ufr;'b'Ù'u'Ù'b'ù'u'ù'b'Ugrave;'u'Ugrave;'b'ugrave;'u'ugrave;'u'⥣'b'uHar;'u'uHar;'b'uharl;'u'uharl;'b'uharr;'u'uharr;'u'▀'b'uhblk;'u'uhblk;'u'⌜'b'ulcorn;'u'ulcorn;'b'ulcorner;'u'ulcorner;'u'⌏'b'ulcrop;'u'ulcrop;'u'◸'b'ultri;'u'ultri;'u'Ū'b'Umacr;'u'Umacr;'u'ū'b'umacr;'u'umacr;'b'uml;'u'uml;'b'UnderBar;'u'UnderBar;'u'⏟'b'UnderBrace;'u'UnderBrace;'b'UnderBracket;'u'UnderBracket;'u'⏝'b'UnderParenthesis;'u'UnderParenthesis;'b'Union;'u'Union;'u'⊎'b'UnionPlus;'u'UnionPlus;'u'Ų'b'Uogon;'u'Uogon;'u'ų'b'uogon;'u'uogon;'b'Uopf;'u'Uopf;'b'uopf;'u'uopf;'b'UpArrow;'u'UpArrow;'b'Uparrow;'u'Uparrow;'b'uparrow;'u'uparrow;'u'⤒'b'UpArrowBar;'u'UpArrowBar;'b'UpArrowDownArrow;'u'UpArrowDownArrow;'u'↕'b'UpDownArrow;'u'UpDownArrow;'b'Updownarrow;'u'Updownarrow;'b'updownarrow;'u'updownarrow;'b'UpEquilibrium;'u'UpEquilibrium;'b'upharpoonleft;'u'upharpoonleft;'b'upharpoonright;'u'upharpoonright;'b'uplus;'u'uplus;'b'UpperLeftArrow;'u'UpperLeftArrow;'b'UpperRightArrow;'u'UpperRightArrow;'u'ϒ'b'Upsi;'u'Upsi;'u'υ'b'upsi;'u'upsi;'b'upsih;'u'upsih;'u'Υ'b'Upsilon;'u'Upsilon;'b'upsilon;'u'upsilon;'b'UpTee;'u'UpTee;'b'UpTeeArrow;'u'UpTeeArrow;'u'⇈'b'upuparrows;'u'upuparrows;'u'⌝'b'urcorn;'u'urcorn;'b'urcorner;'u'urcorner;'u'⌎'b'urcrop;'u'urcrop;'u'Ů'b'Uring;'u'Uring;'u'ů'b'uring;'u'uring;'u'◹'b'urtri;'u'urtri;'b'Uscr;'u'Uscr;'b'uscr;'u'uscr;'u'⋰'b'utdot;'u'utdot;'u'Ũ'b'Utilde;'u'Utilde;'u'ũ'b'utilde;'u'utilde;'b'utri;'u'utri;'b'utrif;'u'utrif;'b'uuarr;'u'uuarr;'b'Ü'u'Ü'b'ü'u'ü'b'Uuml;'u'Uuml;'b'uuml;'u'uuml;'u'⦧'b'uwangle;'u'uwangle;'u'⦜'b'vangrt;'u'vangrt;'b'varepsilon;'u'varepsilon;'b'varkappa;'u'varkappa;'b'varnothing;'u'varnothing;'b'varphi;'u'varphi;'b'varpi;'u'varpi;'b'varpropto;'u'varpropto;'b'vArr;'u'vArr;'b'varr;'u'varr;'b'varrho;'u'varrho;'b'varsigma;'u'varsigma;'u'⊊︀'b'varsubsetneq;'u'varsubsetneq;'u'⫋︀'b'varsub
setneqq;'u'varsubsetneqq;'u'⊋︀'b'varsupsetneq;'u'varsupsetneq;'u'⫌︀'b'varsupsetneqq;'u'varsupsetneqq;'b'vartheta;'u'vartheta;'b'vartriangleleft;'u'vartriangleleft;'b'vartriangleright;'u'vartriangleright;'u'⫫'b'Vbar;'u'Vbar;'u'⫨'b'vBar;'u'vBar;'u'⫩'b'vBarv;'u'vBarv;'u'В'b'Vcy;'u'Vcy;'u'в'b'vcy;'u'vcy;'u'⊫'b'VDash;'u'VDash;'u'⊩'b'Vdash;'u'Vdash;'b'vDash;'u'vDash;'b'vdash;'u'vdash;'u'⫦'b'Vdashl;'u'Vdashl;'b'Vee;'u'Vee;'b'vee;'u'vee;'u'⊻'b'veebar;'u'veebar;'u'≚'b'veeeq;'u'veeeq;'u'⋮'b'vellip;'u'vellip;'u'‖'b'Verbar;'u'Verbar;'b'verbar;'u'verbar;'b'Vert;'u'Vert;'b'vert;'u'vert;'b'VerticalBar;'u'VerticalBar;'b'VerticalLine;'u'VerticalLine;'u'❘'b'VerticalSeparator;'u'VerticalSeparator;'u'≀'b'VerticalTilde;'u'VerticalTilde;'b'VeryThinSpace;'u'VeryThinSpace;'b'Vfr;'u'Vfr;'b'vfr;'u'vfr;'b'vltri;'u'vltri;'b'vnsub;'u'vnsub;'b'vnsup;'u'vnsup;'b'Vopf;'u'Vopf;'b'vopf;'u'vopf;'b'vprop;'u'vprop;'b'vrtri;'u'vrtri;'b'Vscr;'u'Vscr;'b'vscr;'u'vscr;'b'vsubnE;'u'vsubnE;'b'vsubne;'u'vsubne;'b'vsupnE;'u'vsupnE;'b'vsupne;'u'vsupne;'u'⊪'b'Vvdash;'u'Vvdash;'u'⦚'b'vzigzag;'u'vzigzag;'u'Ŵ'b'Wcirc;'u'Wcirc;'u'ŵ'b'wcirc;'u'wcirc;'u'⩟'b'wedbar;'u'wedbar;'b'Wedge;'u'Wedge;'b'wedge;'u'wedge;'u'≙'b'wedgeq;'u'wedgeq;'u'℘'b'weierp;'u'weierp;'b'Wfr;'u'Wfr;'b'wfr;'u'wfr;'b'Wopf;'u'Wopf;'b'wopf;'u'wopf;'b'wp;'u'wp;'b'wr;'u'wr;'b'wreath;'u'wreath;'b'Wscr;'u'Wscr;'b'wscr;'u'wscr;'b'xcap;'u'xcap;'b'xcirc;'u'xcirc;'b'xcup;'u'xcup;'b'xdtri;'u'xdtri;'b'Xfr;'u'Xfr;'b'xfr;'u'xfr;'b'xhArr;'u'xhArr;'b'xharr;'u'xharr;'u'Ξ'b'Xi;'u'Xi;'u'ξ'b'xi;'u'xi;'b'xlArr;'u'xlArr;'b'xlarr;'u'xlarr;'b'xmap;'u'xmap;'u'⋻'b'xnis;'u'xnis;'b'xodot;'u'xodot;'b'Xopf;'u'Xopf;'b'xopf;'u'xopf;'b'xoplus;'u'xoplus;'b'xotime;'u'xotime;'b'xrArr;'u'xrArr;'b'xrarr;'u'xrarr;'b'Xscr;'u'Xscr;'b'xscr;'u'xscr;'b'xsqcup;'u'xsqcup;'b'xuplus;'u'xuplus;'b'xutri;'u'xutri;'b'xvee;'u'xvee;'b'xwedge;'u'xwedge;'b'Ý'u'Ý'b'ý'u'ý'b'Yacute;'u'Yacute;'b'yacute;'u'yacute;'u'Я'b'YAcy;'u'YAcy;'u'я'b'yacy;'u'yacy;'u'Ŷ'b'Ycirc;'u'Ycirc;'u'ŷ'b'ycirc;'u'ycirc;'u'Ы'b'Ycy;'u'Ycy;'u'ы'b'ycy;'u'ycy;'b'¥'u'¥'b'yen;'u'yen;'b'Yfr;'u'Yfr;'b'yfr;'u'yfr;'u'Ї'b'YIcy;'u'YIcy;'u'ї'b'yicy;'u'yicy;'b'Yopf;'u'Yopf;'b'yopf;'u'yopf;'b'Yscr;'u'Yscr;'b'yscr;'u'yscr;'u'Ю'b'YUcy;'u'YUcy;'u'ю'b'yucy;'u'yucy;'u'ÿ'b'Yuml;'u'Yuml;'b'yuml;'u'yuml;'u'Ź'b'Zacute;'u'Zacute;'u'ź'b'zacute;'u'zacute;'b'Zcaron;'u'Zcaron;'b'zcaron;'u'zcaron;'u'З'b'Zcy;'u'Zcy;'u'з'b'zcy;'u'zcy;'u'Ż'b'Zdot;'u'Zdot;'u'ż'b'zdot;'u'zdot;'u'ℨ'b'zeetrf;'u'zeetrf;'b'ZeroWidthSpace;'u'ZeroWidthSpace;'u'Ζ'b'Zeta;'u'Zeta;'u'ζ'b'zeta;'u'zeta;'b'Zfr;'u'Zfr;'b'zfr;'u'zfr;'u'Ж'b'ZHcy;'u'ZHcy;'u'ж'b'zhcy;'u'zhcy;'u'⇝'b'zigrarr;'u'zigrarr;'b'Zopf;'u'Zopf;'b'zopf;'u'zopf;'b'Zscr;'u'Zscr;'b'zscr;'u'zscr;'u'‍'b'zwj;'u'zwj;'u'‌'b'zwnj;'u'zwnj;'u'entities'MappingProxyTypeDynamicClassAttributeEnumMetaFlagIntFlagunique_is_descriptor + Returns True if obj is a descriptor, False otherwise. + _is_dunder + Returns True if a __dunder__ name, False otherwise. + _is_sunder + Returns True if a _sunder_ name, False otherwise. + _make_class_unpicklable + Make the given class un-picklable. + _break_on_call_reduce%r cannot be pickled_auto_null + Instances are replaced with an appropriate value in Enum class suites. + _EnumDict + Track enum member order and ensure member names are not reused. + + EnumMeta will use the names found in self._member_names as the + enumeration member names. + _member_names_last_values_ignore_auto_called + Changes anything not dundered or not a descriptor. + + If an enum member name is used twice, an error is raised; duplicate + values are not checked for. 
+ + Single underscore (sunder) names are reserved. + _order__create_pseudo_member__generate_next_value__missing__ignore__names_ are reserved for future Enum use_generate_next_value_ must be defined before members_generate_next_valuealready_ignore_ cannot specify already set names: %r__order__Attempted to reuse key: %r%r already defined as: %r + Metaclass for Enum + metacls_check_for_existing_membersenum_dict_get_mixins_member_typefirst_enumclassdict_find_new_save_newuse_argsenum_membersinvalid_namesInvalid enum member name: {0}An enumeration.enum_class_member_names__member_map__member_type_dynamic_attributes_value2member_map___getnewargs_ex__member_nameenum_member_name_canonical_memberclass_methodobj_methodenum_method__new_member__member order does not match _order_ + classes/types should always be True. + qualname + Either returns an existing member, or creates a new enum class. + + This method is used both when an enum class is given a value to match + to an enumeration member (i.e. Color(3)) and for the functional API + (i.e. Color = Enum('Color', names='RED GREEN BLUE')). + + When used for the functional API: + + `value` will be the name of the new class. + + `names` should be either a string of white-space/comma delimited names + (values will start at `start`), or an iterator/mapping of name, value pairs. + + `module` should be set to the module this class is being created in; + if it is not set, an attempt to find that module will be made, but if + it fails the class will not be picklable. + + `qualname` should be set to the actual location this class can be found + at in its module; by default it is set to the global scope. If this is + not correct, unpickling will fail in some circumstances. + + `type`, if set, will be mixed in as the first base class. + _create_unsupported operand type(s) for 'in': '%s' and '%s'%s: cannot delete Enum member. + Return the enum member matching `name` + + We use __getattr__ instead of descriptors or inserting into the enum + class' __dict__ in order to support `name` and `value` being both + properties for enum members (which live in the class' __dict__) and + enum members themselves. + + Returns members in definition order. + + Returns a mapping of member name->value. + + This mapping lists all enum members, including aliases. Note that this + is a read-only view of the internal mapping. + + Returns members in reverse definition order. + + Block attempts to reassign Enum members. + + A simple assignment to the class namespace only changes one of the + several possible ways to get an Enum member from the Enum class, + resulting in an inconsistent Enumeration. + member_mapCannot reassign members. + Convenience method to create a new Enum class. + + `names` can be: + + * A string containing member names, separated either with spaces or + commas. Values are incremented by 1 from `start`. + * An iterable of member names. Values are incremented by 1 from `start`. + * An iterable of (member name, value) pairs. + * A mapping of member name -> value pairs. + original_nameslast_valuesmember_value_convert_ + Create a new Enum subclass that replaces a collection of global constants + _reduce_ex_by_name_convert is deprecated and will be removed in 3.9, use _convert_ instead."_convert is deprecated and will be removed in 3.9, use ""_convert_ instead."%s: cannot extend enumeration %r + Returns the type for creating enum members, and the first inherited + enum class. 
+ + bases: the tuple of bases that was given to __new__ + _find_data_typedata_typescandidate%r: too many data types: %rnew enumerations should be created as `EnumName([mixin_type, ...] [data_type,] enum_type)`"new enumerations should be created as ""`EnumName([mixin_type, ...] [data_type,] enum_type)`"Cannot extend enumerations + Returns the __new__ to be used for creating the enum members. + + classdict: the class dictionary given to __new__ + member_type: the data type whose __new__ will be used by default + first_enum: enumeration to check for an overriding __new__ + possible + Generic enumeration. + + Derive from this class to define new enumerations. + %r is not a valid %sve_excerror in %s._missing_: returned %r instead of None or a valid member + Generate the next value when not given. + + name: the name of the member + start: the initial start value or None + count: the number of existing members + last_value: the last value assigned or None + <%s.%s: %r> + Returns all members and all public methods + added_behavior + Returns format using actual value type unless __str__ has been overridden. + str_overriddenThe name of the Enum member.The value of the Enum member.Enum where members are also (and must be) ints + Support for flags + _high_bithigh_bitInvalid Flag value: %r + Returns member (possibly creating it) if one can be found for value. + possible_member + Create a composite member iff value contains only members. + pseudo_member_decomposeextra_flags + Returns True if self has at least the same flags set as other. + uncovered%s.%rinverted + Support for integer-based Flags + new_memberneed_to_createbitflag_value + returns index of highest bit, or -1 if value is zero or negative + enumeration + Class decorator for enumerations ensuring unique member values. + duplicates%s -> %salias_detailsduplicate values found in %r: %s + Extract all members from the value. + not_coverednegativeflags_to_check_power_of_two# check if members already defined as auto()# descriptor overwriting an enum?# enum overwriting a descriptor?# Dummy value for Enum as EnumMeta explicitly checks for it, but of course# until EnumMeta finishes running the first time the Enum class doesn't exist.# This is also why there are checks in EnumMeta like `if Enum is not None`# check that previous enum members do not exist# create the namespace dict# inherit previous flags and _generate_next_value_ function# an Enum class is final once enumeration items have been defined; it# cannot be mixed with other types (int, float, etc.) if it has an# inherited __new__ unless a new __new__ is defined (or the resulting# class will fail).# remove any keys listed in _ignore_# save enum items into separate mapping so they don't get baked into# the new class# adjust the sunders# check for illegal enum names (any others?)# create a default docstring if one has not been provided# create our new Enum type# names in definition order# name->value map# save DynamicClassAttribute attributes from super classes so we know# if we can take the shortcut of storing members in the class dict# Reverse value->name map for hashable values.# If a custom type is mixed into the Enum, and it does not know how# to pickle itself, pickle.dumps will succeed but pickle.loads will# fail. Rather than have the error show up later and possibly far# from the source, sabotage the pickle protocol for this class so# that pickle.dumps also fails.# However, if the new class implements its own __reduce_ex__, do not# sabotage -- it's on them to make sure it works correctly. 
We use# __reduce_ex__ instead of any of the others as it is preferred by# pickle over __reduce__, and it handles all pickle protocols.# instantiate them, checking for duplicates as we go# we instantiate first instead of checking for duplicates first in case# a custom __new__ is doing something funky with the values -- such as# auto-numbering ;)# special case for tuple enums# wrap it one more time# If another member with the same value was already defined, the# new member becomes an alias to the existing one.# Aliases don't appear in member names (only in __members__).# performance boost for any member that would not shadow# a DynamicClassAttribute# now add to _member_map_# This may fail if value is not hashable. We can't add the value# to the map, and by-value lookups for this value will be# linear.# double check that repr and friends are not the mixin's or various# things break (such as pickle)# however, if the method is defined in the Enum itself, don't replace# it# replace any other __new__ with our own (as long as Enum is not None,# anyway) -- again, this is to support pickle# if the user defined their own __new__, save it before it gets# clobbered in case they subclass later# py3 support for definition order (helps keep py2/py3 code in sync)# simple value lookup# otherwise, functional API: we're creating a new Enum type# nicer error message when someone tries to delete an attribute# (see issue19025).# special processing needed for names?# Here, names is either an iterable of (name, value) or a mapping.# TODO: replace the frame hack if a blessed way to know the calling# module is ever developed# convert all constants from source (or module) that pass filter() to# a new Enum called name, and export the enum and its members back to# module;# also, replace the __reduce_ex__ method so unpickling works in# previous Python versions# _value2member_map_ is populated in the same order every time# for a consistent reverse mapping of number to name when there# are multiple names for the same number.# sort by value# unless some values aren't comparable, in which case sort by name# ensure final parent class is an Enum derivative, find any concrete# data type, and check that Enum has no members# now find the correct __new__, checking to see of one was defined# by the user; also check earlier enum classes in case a __new__ was# saved as __new_member__# should __new__ be saved as __new_member__ later?# check all possibles for __new_member__ before falling back to# __new__# if a non-object.__new__ is used then whatever value/tuple was# assigned to the enum member name will be passed to __new__ and to the# new enum member's __init__# all enum instances are actually created during class construction# without calling this method; this method is called by the metaclass'# __call__ (i.e. 
Color(3) ), and by pickle# For lookups like Color(Color.RED)# by-value search for a matching enum member# see if it's in the reverse mapping (for hashable values)# Not found, no need to do long O(n) search# not there, now do long search -- O(n) behavior# still not found -- try _missing_ hook# ensure all variables that could hold an exception are destroyed# mixed-in Enums should use the mixed-in type's __format__, otherwise# we can get strange results with the Enum name showing up instead of# the value# pure Enum branch, or branch with __str__ explicitly overridden# mix-in branch# DynamicClassAttribute is used to provide access to the `name` and# `value` properties of enum members while keeping some measure of# protection from modification, while still allowing for an enumeration# to have members named `name` and `value`. This works because enumeration# members are not set directly on the enum class -- __getattr__ is# used to look them up.# verify all bits are accounted for# construct a singleton enum pseudo-member# use setdefault in case another thread already created a composite# with this value# get unaccounted for bits# timer = 10# timer -= 1# construct singleton pseudo-members# _decompose is only called if the value is not named# issue29167: wrap accesses to _value2member_map_ in a list to avoid race# conditions between iterating over it and having more pseudo-# members added to it# only check for named flags# check for named flags and powers-of-two flags# we have the breakdown, don't need the value member itselfb'EnumMeta'u'EnumMeta'b'Enum'u'Enum'b'IntEnum'u'IntEnum'b'Flag'u'Flag'b'IntFlag'u'IntFlag'b'unique'u'unique'b' + Returns True if obj is a descriptor, False otherwise. + 'u' + Returns True if obj is a descriptor, False otherwise. + 'b'__get__'u'__get__'b'__set__'u'__set__'b'__delete__'u'__delete__'b' + Returns True if a __dunder__ name, False otherwise. + 'u' + Returns True if a __dunder__ name, False otherwise. + 'b' + Returns True if a _sunder_ name, False otherwise. + 'u' + Returns True if a _sunder_ name, False otherwise. + 'b' + Make the given class un-picklable. + 'u' + Make the given class un-picklable. + 'b'%r cannot be pickled'u'%r cannot be pickled'b' + Instances are replaced with an appropriate value in Enum class suites. + 'u' + Instances are replaced with an appropriate value in Enum class suites. + 'b' + Track enum member order and ensure member names are not reused. + + EnumMeta will use the names found in self._member_names as the + enumeration member names. + 'u' + Track enum member order and ensure member names are not reused. + + EnumMeta will use the names found in self._member_names as the + enumeration member names. + 'b' + Changes anything not dundered or not a descriptor. + + If an enum member name is used twice, an error is raised; duplicate + values are not checked for. + + Single underscore (sunder) names are reserved. + 'u' + Changes anything not dundered or not a descriptor. + + If an enum member name is used twice, an error is raised; duplicate + values are not checked for. + + Single underscore (sunder) names are reserved. 
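The comments above cover by-value lookup and the on-demand creation of composite Flag pseudo-members. A hedged sketch using only documented Flag behaviour (the Perm class is illustrative):

from enum import Flag, auto

class Perm(Flag):
    R = auto()
    W = auto()
    X = auto()

rw = Perm.R | Perm.W           # composite pseudo-member created on first use
assert Perm(rw.value) is rw    # by-value lookup returns the cached composite
assert Perm.R in rw            # containment checks that R's bit is set in rw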
+ 'b'_order_'u'_order_'b'_create_pseudo_member_'u'_create_pseudo_member_'b'_generate_next_value_'u'_generate_next_value_'b'_missing_'u'_missing_'b'_ignore_'u'_ignore_'b'_names_ are reserved for future Enum use'u'_names_ are reserved for future Enum use'b'_generate_next_value_ must be defined before members'u'_generate_next_value_ must be defined before members'b'_generate_next_value'u'_generate_next_value'b'_ignore_ cannot specify already set names: %r'u'_ignore_ cannot specify already set names: %r'b'__order__'u'__order__'b'Attempted to reuse key: %r'u'Attempted to reuse key: %r'b'%r already defined as: %r'u'%r already defined as: %r'b' + Metaclass for Enum + 'u' + Metaclass for Enum + 'b'mro'u'mro'b'Invalid enum member name: {0}'u'Invalid enum member name: {0}'b'An enumeration.'u'An enumeration.'b'__getnewargs_ex__'u'__getnewargs_ex__'b'_value_'u'_value_'b'__str__'u'__str__'b'__format__'u'__format__'b'member order does not match _order_'u'member order does not match _order_'b' + classes/types should always be True. + 'u' + classes/types should always be True. + 'b' + Either returns an existing member, or creates a new enum class. + + This method is used both when an enum class is given a value to match + to an enumeration member (i.e. Color(3)) and for the functional API + (i.e. Color = Enum('Color', names='RED GREEN BLUE')). + + When used for the functional API: + + `value` will be the name of the new class. + + `names` should be either a string of white-space/comma delimited names + (values will start at `start`), or an iterator/mapping of name, value pairs. + + `module` should be set to the module this class is being created in; + if it is not set, an attempt to find that module will be made, but if + it fails the class will not be picklable. + + `qualname` should be set to the actual location this class can be found + at in its module; by default it is set to the global scope. If this is + not correct, unpickling will fail in some circumstances. + + `type`, if set, will be mixed in as the first base class. + 'u' + Either returns an existing member, or creates a new enum class. + + This method is used both when an enum class is given a value to match + to an enumeration member (i.e. Color(3)) and for the functional API + (i.e. Color = Enum('Color', names='RED GREEN BLUE')). + + When used for the functional API: + + `value` will be the name of the new class. + + `names` should be either a string of white-space/comma delimited names + (values will start at `start`), or an iterator/mapping of name, value pairs. + + `module` should be set to the module this class is being created in; + if it is not set, an attempt to find that module will be made, but if + it fails the class will not be picklable. + + `qualname` should be set to the actual location this class can be found + at in its module; by default it is set to the global scope. If this is + not correct, unpickling will fail in some circumstances. + + `type`, if set, will be mixed in as the first base class. + 'b'unsupported operand type(s) for 'in': '%s' and '%s''u'unsupported operand type(s) for 'in': '%s' and '%s''b'%s: cannot delete Enum member.'u'%s: cannot delete Enum member.'b'__members__'u'__members__'b' + Return the enum member matching `name` + + We use __getattr__ instead of descriptors or inserting into the enum + class' __dict__ in order to support `name` and `value` being both + properties for enum members (which live in the class' __dict__) and + enum members themselves. 
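The EnumMeta.__call__ docstring above covers both member lookup and the functional API. A short illustration under those documented semantics (the Color enum is hypothetical):

from enum import Enum

Color = Enum('Color', 'RED GREEN BLUE')    # functional API; values start at 1
assert Color(2) is Color.GREEN             # value lookup via EnumMeta.__call__
assert Color['BLUE'] is Color.BLUE         # name lookup via __getitem__
assert [m.name for m in Color] == ['RED', 'GREEN', 'BLUE']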
+ 'u' + Return the enum member matching `name` + + We use __getattr__ instead of descriptors or inserting into the enum + class' __dict__ in order to support `name` and `value` being both + properties for enum members (which live in the class' __dict__) and + enum members themselves. + 'b' + Returns members in definition order. + 'u' + Returns members in definition order. + 'b' + Returns a mapping of member name->value. + + This mapping lists all enum members, including aliases. Note that this + is a read-only view of the internal mapping. + 'u' + Returns a mapping of member name->value. + + This mapping lists all enum members, including aliases. Note that this + is a read-only view of the internal mapping. + 'b''u''b' + Returns members in reverse definition order. + 'u' + Returns members in reverse definition order. + 'b' + Block attempts to reassign Enum members. + + A simple assignment to the class namespace only changes one of the + several possible ways to get an Enum member from the Enum class, + resulting in an inconsistent Enumeration. + 'u' + Block attempts to reassign Enum members. + + A simple assignment to the class namespace only changes one of the + several possible ways to get an Enum member from the Enum class, + resulting in an inconsistent Enumeration. + 'b'_member_map_'u'_member_map_'b'Cannot reassign members.'u'Cannot reassign members.'b' + Convenience method to create a new Enum class. + + `names` can be: + + * A string containing member names, separated either with spaces or + commas. Values are incremented by 1 from `start`. + * An iterable of member names. Values are incremented by 1 from `start`. + * An iterable of (member name, value) pairs. + * A mapping of member name -> value pairs. + 'u' + Convenience method to create a new Enum class. + + `names` can be: + + * A string containing member names, separated either with spaces or + commas. Values are incremented by 1 from `start`. + * An iterable of member names. Values are incremented by 1 from `start`. + * An iterable of (member name, value) pairs. + * A mapping of member name -> value pairs. + 'b' + Create a new Enum subclass that replaces a collection of global constants + 'u' + Create a new Enum subclass that replaces a collection of global constants + 'b'_convert is deprecated and will be removed in 3.9, use _convert_ instead.'u'_convert is deprecated and will be removed in 3.9, use _convert_ instead.'b'%s: cannot extend enumeration %r'u'%s: cannot extend enumeration %r'b' + Returns the type for creating enum members, and the first inherited + enum class. + + bases: the tuple of bases that was given to __new__ + 'u' + Returns the type for creating enum members, and the first inherited + enum class. + + bases: the tuple of bases that was given to __new__ + 'b'%r: too many data types: %r'u'%r: too many data types: %r'b'new enumerations should be created as `EnumName([mixin_type, ...] [data_type,] enum_type)`'u'new enumerations should be created as `EnumName([mixin_type, ...] [data_type,] enum_type)`'b'Cannot extend enumerations'u'Cannot extend enumerations'b' + Returns the __new__ to be used for creating the enum members. + + classdict: the class dictionary given to __new__ + member_type: the data type whose __new__ will be used by default + first_enum: enumeration to check for an overriding __new__ + 'u' + Returns the __new__ to be used for creating the enum members. 
+ + classdict: the class dictionary given to __new__ + member_type: the data type whose __new__ will be used by default + first_enum: enumeration to check for an overriding __new__ + 'b'__new_member__'u'__new_member__'b' + Generic enumeration. + + Derive from this class to define new enumerations. + 'u' + Generic enumeration. + + Derive from this class to define new enumerations. + 'b'%r is not a valid %s'u'%r is not a valid %s'b'error in %s._missing_: returned %r instead of None or a valid member'u'error in %s._missing_: returned %r instead of None or a valid member'b' + Generate the next value when not given. + + name: the name of the member + start: the initial start value or None + count: the number of existing members + last_value: the last value assigned or None + 'u' + Generate the next value when not given. + + name: the name of the member + start: the initial start value or None + count: the number of existing members + last_value: the last value assigned or None + 'b'<%s.%s: %r>'u'<%s.%s: %r>'b' + Returns all members and all public methods + 'u' + Returns all members and all public methods + 'b' + Returns format using actual value type unless __str__ has been overridden. + 'u' + Returns format using actual value type unless __str__ has been overridden. + 'b'The name of the Enum member.'u'The name of the Enum member.'b'The value of the Enum member.'u'The value of the Enum member.'b'Enum where members are also (and must be) ints'u'Enum where members are also (and must be) ints'b' + Support for flags + 'u' + Support for flags + 'b'Invalid Flag value: %r'u'Invalid Flag value: %r'b' + Returns member (possibly creating it) if one can be found for value. + 'u' + Returns member (possibly creating it) if one can be found for value. + 'b' + Create a composite member iff value contains only members. + 'u' + Create a composite member iff value contains only members. + 'b' + Returns True if self has at least the same flags set as other. + 'u' + Returns True if self has at least the same flags set as other. + 'b'%s.%r'u'%s.%r'b' + Support for integer-based Flags + 'u' + Support for integer-based Flags + 'b' + returns index of highest bit, or -1 if value is zero or negative + 'u' + returns index of highest bit, or -1 if value is zero or negative + 'b' + Class decorator for enumerations ensuring unique member values. + 'u' + Class decorator for enumerations ensuring unique member values. + 'b'%s -> %s'u'%s -> %s'b'duplicate values found in %r: %s'u'duplicate values found in %r: %s'b' + Extract all members from the value. + 'u' + Extract all members from the value. + 'u'enum'E2BIGEACCESEADDRINUSEEAGAINEALREADYEAUTH86EBADARCHEBADEXEC88EBADMACHO94EBADMSGEBADRPCEBUSY89ECANCELEDECHILD6154EDEADLKEDESTADDRREQ83EDEVERREDOM69EDQUOTEFAULTEFBIG79EFTYPEEHOSTDOWNEIDRM92EILSEQEINPROGRESSEINTREINVALEIOEISCONNEISDIRELOOPEMFILEEMLINKEMSGSIZE95EMULTIHOPENAMETOOLONG81ENEEDAUTHENETDOWNENETRESETENFILEENOATTR55ENOBUFS96ENODATAENODEVENOENTENOEXECENOLCK97ENOLINKENOMEM91ENOMSGENOPOLICYENOPROTOOPTENOSPC98ENOSRENOSTRENOSYSENOTBLK57ENOTCONNENOTDIR66ENOTEMPTYENOTRECOVERABLEENOTSOCKENOTSUPENOTTYENXIOEOPNOTSUPP84EOVERFLOW105EOWNERDEADEPERMEPFNOSUPPORT67EPROCLIMEPROCUNAVAIL75EPROGMISMATCH74EPROGUNAVAILEPROTOEPROTONOSUPPORTEPROTOTYPE82EPWROFFERANGEEREMOTEEROFS73ERPCMISMATCH87ESHLIBVERSESHUTDOWNESOCKTNOSUPPORTESPIPEESRCHESTALEETIMEETOOMANYREFSETXTBSYEUSERSEWOULDBLOCKEXDEVu'This module makes available standard errno system symbols. 
+ +The value of each symbol is the corresponding integer value, +e.g., on most systems, errno.ENOENT equals the integer 2. + +The dictionary errno.errorcode maps numeric codes to symbol names, +e.g., errno.errorcode[2] could be the string 'ENOENT'. + +Symbols that are not relevant to the underlying system are not defined. + +To map error codes to error messages, use the function os.strerror(), +e.g. os.strerror(2) could return 'No such file or directory'.'errorcodeException classes raised by urllib. + +The base exception class is URLError, which inherits from OSError. It +doesn't define any behavior of its own, but is the base class for all +exceptions defined in this package. + +HTTPError is an exception class that is also a valid HTTP response +instance. It behaves this way because HTTP protocol errors are valid +responses, with a status code, headers, and a body. In some contexts, +an application may want to handle an exception like a regular +response. +urllib.responseaddinfourlRaised when HTTP error occurs, but also acts like non-error return__super_inithdrsHTTP Error %s: %sException raised when downloaded size does not match content-length.# URLError is a sub-type of OSError, but it doesn't share any of# the implementation. need to override __init__ and __str__.# It sets self.args for compatibility with other OSError# subclasses, but args doesn't have the typical format with errno in# slot 0 and strerror in slot 1. This may be better than nothing.# The addinfourl classes depend on fp being a valid file# object. In some cases, the HTTPError may not have a valid# file object. If this happens, the simplest workaround is to# not initialize the base classes.# since URLError specifies a .reason attribute, HTTPError should also# provide this attribute. See issue13211 for discussion.b'Exception classes raised by urllib. + +The base exception class is URLError, which inherits from OSError. It +doesn't define any behavior of its own, but is the base class for all +exceptions defined in this package. + +HTTPError is an exception class that is also a valid HTTP response +instance. It behaves this way because HTTP protocol errors are valid +responses, with a status code, headers, and a body. In some contexts, +an application may want to handle an exception like a regular +response. +'u'Exception classes raised by urllib. + +The base exception class is URLError, which inherits from OSError. It +doesn't define any behavior of its own, but is the base class for all +exceptions defined in this package. + +HTTPError is an exception class that is also a valid HTTP response +instance. It behaves this way because HTTP protocol errors are valid +responses, with a status code, headers, and a body. In some contexts, +an application may want to handle an exception like a regular +response. 
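The urllib.error docstring above explains that HTTPError is both an exception and a valid HTTP response. A minimal sketch of handling it that way, using only the documented urllib API (the URL is illustrative):

import urllib.error
import urllib.request

try:
    urllib.request.urlopen('http://example.invalid/missing')   # illustrative URL
except urllib.error.HTTPError as err:
    # HTTPError doubles as a (non-2xx) response: status code, headers, readable body.
    print(err.code, err.headers.get('Content-Type'))
    body = err.read()
except urllib.error.URLError as err:
    # URLError covers lower-level failures (DNS errors, refused connections, ...).
    print('failed to reach server:', err.reason)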
+'b''u''b'Raised when HTTP error occurs, but also acts like non-error return'u'Raised when HTTP error occurs, but also acts like non-error return'b'HTTP Error %s: %s'u'HTTP Error %s: %s'b''u''b'Exception raised when downloaded size does not match content-length.'u'Exception raised when downloaded size does not match content-length.'email package exception classes.MessageErrorBase class for errors in the email package.MessageParseErrorBase class for message parsing errors.HeaderParseErrorError while parsing headers.BoundaryErrorCouldn't find terminating boundary.MultipartConversionErrorConversion to a multipart is prohibited.An illegal charset was given.MessageDefectBase class for a message defect.NoBoundaryInMultipartDefectA message claimed to be a multipart but had no boundary parameter.StartBoundaryNotFoundDefectThe claimed start boundary was never found.CloseBoundaryNotFoundDefectA start boundary was found, but not the corresponding close boundary.FirstHeaderLineIsContinuationDefectA message had a continuation line as its first header line.MisplacedEnvelopeHeaderDefectA 'Unix-from' header was found in the middle of a header block.MissingHeaderBodySeparatorDefectFound line with no leading whitespace and no colon before blank line.MalformedHeaderDefectMultipartInvariantViolationDefectA message claimed to be a multipart but no subparts were found.InvalidMultipartContentTransferEncodingDefectAn invalid content transfer encoding was set on the multipart itself.Header contained bytes that could not be decodedbase64 encoded sequence had an incorrect lengthbase64 encoded sequence had characters not in base64 alphabetbase64 encoded sequence had invalid length (1 mod 4)HeaderDefectBase class for a header defect.InvalidHeaderDefectHeader is not valid, message gives details.HeaderMissingRequiredValueA header that must have a value had noneNonPrintableDefectASCII characters outside the ascii-printable range foundnon_printablesthe following ASCII non-printables found in header: {}"the following ASCII non-printables found in header: ""{}"ObsoleteHeaderDefectHeader uses syntax declared obsolete by RFC 5322NonASCIILocalPartDefectlocal_part contains non-ASCII characters# These are parsing defects which the parser was able to work around.# XXX: backward compatibility, just in case (it was never emitted).# These errors are specific to header parsing.# This defect only occurs during unicode parsing, not when# parsing messages decoded from binary.b'email package exception classes.'u'email package exception classes.'b'Base class for errors in the email package.'u'Base class for errors in the email package.'b'Base class for message parsing errors.'u'Base class for message parsing errors.'b'Error while parsing headers.'u'Error while parsing headers.'b'Couldn't find terminating boundary.'u'Couldn't find terminating boundary.'b'Conversion to a multipart is prohibited.'u'Conversion to a multipart is prohibited.'b'An illegal charset was given.'u'An illegal charset was given.'b'Base class for a message defect.'u'Base class for a message defect.'b'A message claimed to be a multipart but had no boundary parameter.'u'A message claimed to be a multipart but had no boundary parameter.'b'The claimed start boundary was never found.'u'The claimed start boundary was never found.'b'A start boundary was found, but not the corresponding close boundary.'u'A start boundary was found, but not the corresponding close boundary.'b'A message had a continuation line as its first header line.'u'A message had a continuation line as its 
first header line.'b'A 'Unix-from' header was found in the middle of a header block.'u'A 'Unix-from' header was found in the middle of a header block.'b'Found line with no leading whitespace and no colon before blank line.'u'Found line with no leading whitespace and no colon before blank line.'b'A message claimed to be a multipart but no subparts were found.'u'A message claimed to be a multipart but no subparts were found.'b'An invalid content transfer encoding was set on the multipart itself.'u'An invalid content transfer encoding was set on the multipart itself.'b'Header contained bytes that could not be decoded'u'Header contained bytes that could not be decoded'b'base64 encoded sequence had an incorrect length'u'base64 encoded sequence had an incorrect length'b'base64 encoded sequence had characters not in base64 alphabet'u'base64 encoded sequence had characters not in base64 alphabet'b'base64 encoded sequence had invalid length (1 mod 4)'u'base64 encoded sequence had invalid length (1 mod 4)'b'Base class for a header defect.'u'Base class for a header defect.'b'Header is not valid, message gives details.'u'Header is not valid, message gives details.'b'A header that must have a value had none'u'A header that must have a value had none'b'ASCII characters outside the ascii-printable range found'u'ASCII characters outside the ascii-printable range found'b'the following ASCII non-printables found in header: {}'u'the following ASCII non-printables found in header: {}'b'Header uses syntax declared obsolete by RFC 5322'u'Header uses syntax declared obsolete by RFC 5322'b'local_part contains non-ASCII characters'u'local_part contains non-ASCII characters'u'email.errors'distutils.errors + +Provides exceptions used by the Distutils modules. Note that Distutils +modules may raise standard exceptions; in particular, SystemExit is +usually raised for errors that are obviously the end-user's fault +(eg. bad command-line arguments). + +This module is safe to use in "from ... import *" mode; it only exports +symbols whose names start with "Distutils" and end with "Error".DistutilsErrorThe root of all Distutils evil.Unable to load an expected module, or to find an expected class + within some module (in particular, command modules and classes).DistutilsClassErrorSome command class (or possibly distribution class, if anyone + feels a need to subclass Distribution) is found not to be holding + up its end of the bargain, ie. implementing some part of the + "command "interface.DistutilsGetoptErrorThe option table provided to 'fancy_getopt()' is bogus.DistutilsArgErrorRaised by fancy_getopt in response to getopt.error -- ie. an + error in the command line usage.Any problems in the filesystem: expected file not found, etc. + Typically this is for problems that we detect before OSError + could be raised.DistutilsOptionErrorSyntactic/semantic errors in command options, such as use of + mutually conflicting options, or inconsistent options, + badly-spelled values, etc. No distinction is made between option + values originating in the setup script, the command line, config + files, or what-have-you -- but if we *know* something originated in + the setup script, we'll raise DistutilsSetupError instead.DistutilsSetupErrorFor errors that can be definitely blamed on the setup script, + such as invalid keyword arguments to 'setup()'.We don't know how to do something on the current platform (but + we do know how to do it on some platform) -- eg. 
trying to compile + C files on a platform not supported by a CCompiler subclass.DistutilsExecErrorAny problems executing an external program (such as the C + compiler, when compiling C files).Internal inconsistencies or impossibilities (obviously, this + should never be seen if the code is working!).DistutilsTemplateErrorSyntax error in a file list template.DistutilsByteCompileErrorByte compile error.CCompilerErrorSome compile/link operation failed.PreprocessErrorFailure to preprocess one or more C/C++ files.Failure to compile one or more C/C++ source files.LibErrorFailure to create a static library from one or more C/C++ object + files.Failure to link one or more C/C++ object files into an executable + or shared library file.Attempt to process an unknown file type.# Exception classes used by the CCompiler implementation classesb'distutils.errors + +Provides exceptions used by the Distutils modules. Note that Distutils +modules may raise standard exceptions; in particular, SystemExit is +usually raised for errors that are obviously the end-user's fault +(eg. bad command-line arguments). + +This module is safe to use in "from ... import *" mode; it only exports +symbols whose names start with "Distutils" and end with "Error".'u'distutils.errors + +Provides exceptions used by the Distutils modules. Note that Distutils +modules may raise standard exceptions; in particular, SystemExit is +usually raised for errors that are obviously the end-user's fault +(eg. bad command-line arguments). + +This module is safe to use in "from ... import *" mode; it only exports +symbols whose names start with "Distutils" and end with "Error".'b'The root of all Distutils evil.'u'The root of all Distutils evil.'b'Unable to load an expected module, or to find an expected class + within some module (in particular, command modules and classes).'u'Unable to load an expected module, or to find an expected class + within some module (in particular, command modules and classes).'b'Some command class (or possibly distribution class, if anyone + feels a need to subclass Distribution) is found not to be holding + up its end of the bargain, ie. implementing some part of the + "command "interface.'u'Some command class (or possibly distribution class, if anyone + feels a need to subclass Distribution) is found not to be holding + up its end of the bargain, ie. implementing some part of the + "command "interface.'b'The option table provided to 'fancy_getopt()' is bogus.'u'The option table provided to 'fancy_getopt()' is bogus.'b'Raised by fancy_getopt in response to getopt.error -- ie. an + error in the command line usage.'u'Raised by fancy_getopt in response to getopt.error -- ie. an + error in the command line usage.'b'Any problems in the filesystem: expected file not found, etc. + Typically this is for problems that we detect before OSError + could be raised.'u'Any problems in the filesystem: expected file not found, etc. + Typically this is for problems that we detect before OSError + could be raised.'b'Syntactic/semantic errors in command options, such as use of + mutually conflicting options, or inconsistent options, + badly-spelled values, etc. 
No distinction is made between option + values originating in the setup script, the command line, config + files, or what-have-you -- but if we *know* something originated in + the setup script, we'll raise DistutilsSetupError instead.'u'Syntactic/semantic errors in command options, such as use of + mutually conflicting options, or inconsistent options, + badly-spelled values, etc. No distinction is made between option + values originating in the setup script, the command line, config + files, or what-have-you -- but if we *know* something originated in + the setup script, we'll raise DistutilsSetupError instead.'b'For errors that can be definitely blamed on the setup script, + such as invalid keyword arguments to 'setup()'.'u'For errors that can be definitely blamed on the setup script, + such as invalid keyword arguments to 'setup()'.'b'We don't know how to do something on the current platform (but + we do know how to do it on some platform) -- eg. trying to compile + C files on a platform not supported by a CCompiler subclass.'u'We don't know how to do something on the current platform (but + we do know how to do it on some platform) -- eg. trying to compile + C files on a platform not supported by a CCompiler subclass.'b'Any problems executing an external program (such as the C + compiler, when compiling C files).'u'Any problems executing an external program (such as the C + compiler, when compiling C files).'b'Internal inconsistencies or impossibilities (obviously, this + should never be seen if the code is working!).'u'Internal inconsistencies or impossibilities (obviously, this + should never be seen if the code is working!).'b'Syntax error in a file list template.'u'Syntax error in a file list template.'b'Byte compile error.'u'Byte compile error.'b'Some compile/link operation failed.'u'Some compile/link operation failed.'b'Failure to preprocess one or more C/C++ files.'u'Failure to preprocess one or more C/C++ files.'b'Failure to compile one or more C/C++ source files.'u'Failure to compile one or more C/C++ source files.'b'Failure to create a static library from one or more C/C++ object + files.'u'Failure to create a static library from one or more C/C++ object + files.'b'Failure to link one or more C/C++ object files into an executable + or shared library file.'u'Failure to link one or more C/C++ object files into an executable + or shared library file.'b'Attempt to process an unknown file type.'u'Attempt to process an unknown file type.'u'distutils.errors'Event loop and event loop policy.AbstractEventLoopPolicyget_event_loop_policyset_event_loop_policyget_child_watcherset_child_watcherObject returned by callback registration methods._args_reprException in callback Object returned by timed callback registration methods.when=Return a scheduled callback time. + + The time is an absolute timestamp, using the same time + reference as loop.time(). + Abstract server returned by create_server().Stop serving. This leaves existing connections open.Get the event loop the Server object is attached to.Return True if the server is accepting connections.Start accepting connections. + + This method is idempotent, so it can be called when + the server is already being serving. + Start accepting connections until the coroutine is cancelled. + + The server is closed when the coroutine is cancelled. + Coroutine to wait until service is closed.Abstract event loop.Run the event loop until stop() is called.Run the event loop until a Future is done. 
+ + Return the Future's result, or raise its exception. + Stop the event loop as soon as reasonable. + + Exactly how soon that is may depend on the implementation, but + no more I/O callbacks should be scheduled. + Return whether the event loop is currently running.Close the loop. + + The loop should not be running. + + This is idempotent and irreversible. + + No other methods should be called after this one. + A coroutine which creates a TCP server bound to host and port. + + The return value is a Server object which can be used to stop + the service. + + If host is an empty string or None all interfaces are assumed + and a list of multiple sockets will be returned (most likely + one for IPv4 and another one for IPv6). The host parameter can also be + a sequence (e.g. list) of hosts to bind to. + + family can be set to either AF_INET or AF_INET6 to force the + socket to use IPv4 or IPv6. If not set it will be determined + from host (defaults to AF_UNSPEC). + + flags is a bitmask for getaddrinfo(). + + sock can optionally be specified in order to use a preexisting + socket object. + + backlog is the maximum number of queued connections passed to + listen() (defaults to 100). + + ssl can be set to an SSLContext to enable SSL over the + accepted connections. + + reuse_address tells the kernel to reuse a local socket in + TIME_WAIT state, without waiting for its natural timeout to + expire. If not specified will automatically be set to True on + UNIX. + + reuse_port tells the kernel to allow this endpoint to be bound to + the same port as other existing endpoints are bound to, so long as + they all set this flag when being created. This option is not + supported on Windows. + + ssl_handshake_timeout is the time in seconds that an SSL server + will wait for completion of the SSL handshake before aborting the + connection. Default is 60s. + + start_serving set to True (default) causes the created server + to start accepting connections immediately. When set to False, + the user should await Server.start_serving() or Server.serve_forever() + to make the server to start accepting connections. + Send a file through a transport. + + Return an amount of sent bytes. + Upgrade a transport to TLS. + + Return a new transport that *protocol* should start using + immediately. + create_unix_connectioncreate_unix_serverA coroutine which creates a UNIX Domain Socket server. + + The return value is a Server object, which can be used to stop + the service. + + path is a str, representing a file systsem path to bind the + server socket to. + + sock can optionally be specified in order to use a preexisting + socket object. + + backlog is the maximum number of queued connections passed to + listen() (defaults to 100). + + ssl can be set to an SSLContext to enable SSL over the + accepted connections. + + ssl_handshake_timeout is the time in seconds that an SSL server + will wait for the SSL handshake to complete (defaults to 60s). + + start_serving set to True (default) causes the created server + to start accepting connections immediately. When set to False, + the user should await Server.start_serving() or Server.serve_forever() + to make the server to start accepting connections. + A coroutine which creates a datagram endpoint. + + This method will try to establish the endpoint in the background. + When successful, the coroutine returns a (transport, protocol) pair. + + protocol_factory must be a callable returning a protocol instance. 
+ + socket family AF_INET, socket.AF_INET6 or socket.AF_UNIX depending on + host (or family if specified), socket type SOCK_DGRAM. + + reuse_address tells the kernel to reuse a local socket in + TIME_WAIT state, without waiting for its natural timeout to + expire. If not specified it will automatically be set to True on + UNIX. + + reuse_port tells the kernel to allow this endpoint to be bound to + the same port as other existing endpoints are bound to, so long as + they all set this flag when being created. This option is not + supported on Windows and some UNIX's. If the + :py:data:`~socket.SO_REUSEPORT` constant is not defined then this + capability is unsupported. + + allow_broadcast tells the kernel to allow this endpoint to send + messages to the broadcast address. + + sock can optionally be specified in order to use a preexisting + socket object. + Register read pipe in event loop. Set the pipe to non-blocking mode. + + protocol_factory should instantiate object with Protocol interface. + pipe is a file-like object. + Return pair (transport, protocol), where transport supports the + ReadTransport interface.Register write pipe in event loop. + + protocol_factory should instantiate object with BaseProtocol interface. + Pipe is file-like object already switched to nonblocking. + Return pair (transport, protocol), where transport support + WriteTransport interface.add_readerremove_readeradd_writerremove_writersock_recvsock_recv_intosock_acceptadd_signal_handlerremove_signal_handlerAbstract policy for accessing the event loop.Get the event loop for the current context. + + Returns an event loop object implementing the BaseEventLoop interface, + or raises an exception in case no event loop has been set for the + current context and the current policy does not specify to create one. + + It should never return None.Set the event loop for the current context to loop.Create and return a new event loop object according to this + policy's rules. If there's need to set this loop as the event loop for + the current context, set_event_loop must be called explicitly.Get the watcher for child processes.Set the watcher for child processes.BaseDefaultEventLoopPolicyDefault policy implementation for accessing the event loop. + + In this policy, each thread has its own event loop. However, we + only automatically create an event loop by default for the main + thread; other threads by default have no event loop. + + Other policies may have different rules (e.g. a single global + event loop, or automatically creating an event loop per thread, or + using some other notion of context to which an event loop is + associated). + _loop_factory_Local_set_calledGet the event loop for the current context. + + Returns an instance of EventLoop or raises an exception. + _MainThreadThere is no current event loop in thread %r.Set the event loop.Create a new event loop. + + You must call set_event_loop() to make this the current event + loop. + _RunningLooploop_pid_running_loopReturn the running event loop. Raise a RuntimeError if there is none. + + This function is thread-specific. + no running event loopReturn the running event loop or None. + + This is a low-level function intended to be used by event loops. + This function is thread-specific. + running_loopSet the running event loop. + + This is a low-level function intended to be used by event loops. + This function is thread-specific. + _init_event_loop_policyDefaultEventLoopPolicyGet the current event loop policy.policySet the current event loop policy. 
+ + If policy is None, the default policy is restored.Return an asyncio event loop. + + When called from a coroutine or a callback (e.g. scheduled with call_soon + or similar API), this function will always return the running event loop. + + If there is no running event loop set, the function will return + the result of `get_event_loop_policy().get_event_loop()` call. + current_loopEquivalent to calling get_event_loop_policy().set_event_loop(loop).Equivalent to calling get_event_loop_policy().new_event_loop().Equivalent to calling get_event_loop_policy().get_child_watcher().Equivalent to calling + get_event_loop_policy().set_child_watcher(watcher)._py__get_running_loop_py__set_running_loop_py_get_running_loop_py_get_event_loop_c__get_running_loop_c__set_running_loop_c_get_running_loop_c_get_event_loop# Keep a representation in debug mode to keep callback and# parameters. For example, to log the warning# "Executing took 2.5 second"# Running and stopping the event loop.# Methods scheduling callbacks. All these return Handles.# Method scheduling a coroutine object: create a task.# Methods for interacting with threads.# Network I/O methods returning Futures.# Pipes and subprocesses.# The reason to accept file-like object instead of just file descriptor# is: we need to own pipe and close it at transport finishing# Can got complicated errors if pass f.fileno(),# close fd in pipe transport then close f and vise versa.# Ready-based callback registration methods.# The add_*() methods return None.# The remove_*() methods return True if something was removed,# False if there was nothing to delete.# Completion based I/O methods returning Futures.# Signal handling.# Task factory.# Error handlers.# Debug flag management.# Child processes handling (Unix only).# Event loop policy. The policy itself is always global, even if the# policy's rules say that there is an event loop per thread (or other# notion of context). The default policy is installed by the first# call to get_event_loop_policy().# Lock for protecting the on-the-fly creation of the event loop policy.# A TLS for the running event loop, used by _get_running_loop.# NOTE: this function is implemented in C (see _asynciomodule.c)# pragma: no branch# Alias pure-Python implementations for testing purposes.# get_event_loop() is one of the most frequently called# functions in asyncio. 
Pure Python implementation is# about 4 times slower than C-accelerated.# Alias C implementations for testing purposes.b'Event loop and event loop policy.'u'Event loop and event loop policy.'b'AbstractEventLoopPolicy'u'AbstractEventLoopPolicy'b'AbstractEventLoop'u'AbstractEventLoop'b'AbstractServer'u'AbstractServer'b'Handle'u'Handle'b'TimerHandle'u'TimerHandle'b'get_event_loop_policy'u'get_event_loop_policy'b'set_event_loop_policy'u'set_event_loop_policy'b'get_event_loop'u'get_event_loop'b'set_event_loop'u'set_event_loop'b'new_event_loop'u'new_event_loop'b'get_child_watcher'u'get_child_watcher'b'set_child_watcher'u'set_child_watcher'b'_set_running_loop'u'_set_running_loop'b'get_running_loop'u'get_running_loop'b'_get_running_loop'u'_get_running_loop'b'Object returned by callback registration methods.'u'Object returned by callback registration methods.'b'_callback'u'_callback'b'_args'u'_args'b'_cancelled'u'_cancelled'b'_loop'u'_loop'b'_repr'u'_repr'b'_context'u'_context'b'Exception in callback 'u'Exception in callback 'b'Object returned by timed callback registration methods.'u'Object returned by timed callback registration methods.'b'_scheduled'u'_scheduled'b'_when'u'_when'b'when='u'when='b'Return a scheduled callback time. + + The time is an absolute timestamp, using the same time + reference as loop.time(). + 'u'Return a scheduled callback time. + + The time is an absolute timestamp, using the same time + reference as loop.time(). + 'b'Abstract server returned by create_server().'u'Abstract server returned by create_server().'b'Stop serving. This leaves existing connections open.'u'Stop serving. This leaves existing connections open.'b'Get the event loop the Server object is attached to.'u'Get the event loop the Server object is attached to.'b'Return True if the server is accepting connections.'u'Return True if the server is accepting connections.'b'Start accepting connections. + + This method is idempotent, so it can be called when + the server is already being serving. + 'u'Start accepting connections. + + This method is idempotent, so it can be called when + the server is already being serving. + 'b'Start accepting connections until the coroutine is cancelled. + + The server is closed when the coroutine is cancelled. + 'u'Start accepting connections until the coroutine is cancelled. + + The server is closed when the coroutine is cancelled. + 'b'Coroutine to wait until service is closed.'u'Coroutine to wait until service is closed.'b'Abstract event loop.'u'Abstract event loop.'b'Run the event loop until stop() is called.'u'Run the event loop until stop() is called.'b'Run the event loop until a Future is done. + + Return the Future's result, or raise its exception. + 'u'Run the event loop until a Future is done. + + Return the Future's result, or raise its exception. + 'b'Stop the event loop as soon as reasonable. + + Exactly how soon that is may depend on the implementation, but + no more I/O callbacks should be scheduled. + 'u'Stop the event loop as soon as reasonable. + + Exactly how soon that is may depend on the implementation, but + no more I/O callbacks should be scheduled. + 'b'Return whether the event loop is currently running.'u'Return whether the event loop is currently running.'b'Close the loop. + + The loop should not be running. + + This is idempotent and irreversible. + + No other methods should be called after this one. + 'u'Close the loop. + + The loop should not be running. + + This is idempotent and irreversible. + + No other methods should be called after this one. 
+ 'b'A coroutine which creates a TCP server bound to host and port. + + The return value is a Server object which can be used to stop + the service. + + If host is an empty string or None all interfaces are assumed + and a list of multiple sockets will be returned (most likely + one for IPv4 and another one for IPv6). The host parameter can also be + a sequence (e.g. list) of hosts to bind to. + + family can be set to either AF_INET or AF_INET6 to force the + socket to use IPv4 or IPv6. If not set it will be determined + from host (defaults to AF_UNSPEC). + + flags is a bitmask for getaddrinfo(). + + sock can optionally be specified in order to use a preexisting + socket object. + + backlog is the maximum number of queued connections passed to + listen() (defaults to 100). + + ssl can be set to an SSLContext to enable SSL over the + accepted connections. + + reuse_address tells the kernel to reuse a local socket in + TIME_WAIT state, without waiting for its natural timeout to + expire. If not specified will automatically be set to True on + UNIX. + + reuse_port tells the kernel to allow this endpoint to be bound to + the same port as other existing endpoints are bound to, so long as + they all set this flag when being created. This option is not + supported on Windows. + + ssl_handshake_timeout is the time in seconds that an SSL server + will wait for completion of the SSL handshake before aborting the + connection. Default is 60s. + + start_serving set to True (default) causes the created server + to start accepting connections immediately. When set to False, + the user should await Server.start_serving() or Server.serve_forever() + to make the server to start accepting connections. + 'u'A coroutine which creates a TCP server bound to host and port. + + The return value is a Server object which can be used to stop + the service. + + If host is an empty string or None all interfaces are assumed + and a list of multiple sockets will be returned (most likely + one for IPv4 and another one for IPv6). The host parameter can also be + a sequence (e.g. list) of hosts to bind to. + + family can be set to either AF_INET or AF_INET6 to force the + socket to use IPv4 or IPv6. If not set it will be determined + from host (defaults to AF_UNSPEC). + + flags is a bitmask for getaddrinfo(). + + sock can optionally be specified in order to use a preexisting + socket object. + + backlog is the maximum number of queued connections passed to + listen() (defaults to 100). + + ssl can be set to an SSLContext to enable SSL over the + accepted connections. + + reuse_address tells the kernel to reuse a local socket in + TIME_WAIT state, without waiting for its natural timeout to + expire. If not specified will automatically be set to True on + UNIX. + + reuse_port tells the kernel to allow this endpoint to be bound to + the same port as other existing endpoints are bound to, so long as + they all set this flag when being created. This option is not + supported on Windows. + + ssl_handshake_timeout is the time in seconds that an SSL server + will wait for completion of the SSL handshake before aborting the + connection. Default is 60s. + + start_serving set to True (default) causes the created server + to start accepting connections immediately. When set to False, + the user should await Server.start_serving() or Server.serve_forever() + to make the server to start accepting connections. + 'b'Send a file through a transport. + + Return an amount of sent bytes. + 'u'Send a file through a transport. 
+ + Return an amount of sent bytes. + 'b'Upgrade a transport to TLS. + + Return a new transport that *protocol* should start using + immediately. + 'u'Upgrade a transport to TLS. + + Return a new transport that *protocol* should start using + immediately. + 'b'A coroutine which creates a UNIX Domain Socket server. + + The return value is a Server object, which can be used to stop + the service. + + path is a str, representing a file systsem path to bind the + server socket to. + + sock can optionally be specified in order to use a preexisting + socket object. + + backlog is the maximum number of queued connections passed to + listen() (defaults to 100). + + ssl can be set to an SSLContext to enable SSL over the + accepted connections. + + ssl_handshake_timeout is the time in seconds that an SSL server + will wait for the SSL handshake to complete (defaults to 60s). + + start_serving set to True (default) causes the created server + to start accepting connections immediately. When set to False, + the user should await Server.start_serving() or Server.serve_forever() + to make the server to start accepting connections. + 'u'A coroutine which creates a UNIX Domain Socket server. + + The return value is a Server object, which can be used to stop + the service. + + path is a str, representing a file systsem path to bind the + server socket to. + + sock can optionally be specified in order to use a preexisting + socket object. + + backlog is the maximum number of queued connections passed to + listen() (defaults to 100). + + ssl can be set to an SSLContext to enable SSL over the + accepted connections. + + ssl_handshake_timeout is the time in seconds that an SSL server + will wait for the SSL handshake to complete (defaults to 60s). + + start_serving set to True (default) causes the created server + to start accepting connections immediately. When set to False, + the user should await Server.start_serving() or Server.serve_forever() + to make the server to start accepting connections. + 'b'A coroutine which creates a datagram endpoint. + + This method will try to establish the endpoint in the background. + When successful, the coroutine returns a (transport, protocol) pair. + + protocol_factory must be a callable returning a protocol instance. + + socket family AF_INET, socket.AF_INET6 or socket.AF_UNIX depending on + host (or family if specified), socket type SOCK_DGRAM. + + reuse_address tells the kernel to reuse a local socket in + TIME_WAIT state, without waiting for its natural timeout to + expire. If not specified it will automatically be set to True on + UNIX. + + reuse_port tells the kernel to allow this endpoint to be bound to + the same port as other existing endpoints are bound to, so long as + they all set this flag when being created. This option is not + supported on Windows and some UNIX's. If the + :py:data:`~socket.SO_REUSEPORT` constant is not defined then this + capability is unsupported. + + allow_broadcast tells the kernel to allow this endpoint to send + messages to the broadcast address. + + sock can optionally be specified in order to use a preexisting + socket object. + 'u'A coroutine which creates a datagram endpoint. + + This method will try to establish the endpoint in the background. + When successful, the coroutine returns a (transport, protocol) pair. + + protocol_factory must be a callable returning a protocol instance. + + socket family AF_INET, socket.AF_INET6 or socket.AF_UNIX depending on + host (or family if specified), socket type SOCK_DGRAM. 
+ + reuse_address tells the kernel to reuse a local socket in + TIME_WAIT state, without waiting for its natural timeout to + expire. If not specified it will automatically be set to True on + UNIX. + + reuse_port tells the kernel to allow this endpoint to be bound to + the same port as other existing endpoints are bound to, so long as + they all set this flag when being created. This option is not + supported on Windows and some UNIX's. If the + :py:data:`~socket.SO_REUSEPORT` constant is not defined then this + capability is unsupported. + + allow_broadcast tells the kernel to allow this endpoint to send + messages to the broadcast address. + + sock can optionally be specified in order to use a preexisting + socket object. + 'b'Register read pipe in event loop. Set the pipe to non-blocking mode. + + protocol_factory should instantiate object with Protocol interface. + pipe is a file-like object. + Return pair (transport, protocol), where transport supports the + ReadTransport interface.'u'Register read pipe in event loop. Set the pipe to non-blocking mode. + + protocol_factory should instantiate object with Protocol interface. + pipe is a file-like object. + Return pair (transport, protocol), where transport supports the + ReadTransport interface.'b'Register write pipe in event loop. + + protocol_factory should instantiate object with BaseProtocol interface. + Pipe is file-like object already switched to nonblocking. + Return pair (transport, protocol), where transport support + WriteTransport interface.'u'Register write pipe in event loop. + + protocol_factory should instantiate object with BaseProtocol interface. + Pipe is file-like object already switched to nonblocking. + Return pair (transport, protocol), where transport support + WriteTransport interface.'b'Abstract policy for accessing the event loop.'u'Abstract policy for accessing the event loop.'b'Get the event loop for the current context. + + Returns an event loop object implementing the BaseEventLoop interface, + or raises an exception in case no event loop has been set for the + current context and the current policy does not specify to create one. + + It should never return None.'u'Get the event loop for the current context. + + Returns an event loop object implementing the BaseEventLoop interface, + or raises an exception in case no event loop has been set for the + current context and the current policy does not specify to create one. + + It should never return None.'b'Set the event loop for the current context to loop.'u'Set the event loop for the current context to loop.'b'Create and return a new event loop object according to this + policy's rules. If there's need to set this loop as the event loop for + the current context, set_event_loop must be called explicitly.'u'Create and return a new event loop object according to this + policy's rules. If there's need to set this loop as the event loop for + the current context, set_event_loop must be called explicitly.'b'Get the watcher for child processes.'u'Get the watcher for child processes.'b'Set the watcher for child processes.'u'Set the watcher for child processes.'b'Default policy implementation for accessing the event loop. + + In this policy, each thread has its own event loop. However, we + only automatically create an event loop by default for the main + thread; other threads by default have no event loop. + + Other policies may have different rules (e.g. 
a single global + event loop, or automatically creating an event loop per thread, or + using some other notion of context to which an event loop is + associated). + 'u'Default policy implementation for accessing the event loop. + + In this policy, each thread has its own event loop. However, we + only automatically create an event loop by default for the main + thread; other threads by default have no event loop. + + Other policies may have different rules (e.g. a single global + event loop, or automatically creating an event loop per thread, or + using some other notion of context to which an event loop is + associated). + 'b'Get the event loop for the current context. + + Returns an instance of EventLoop or raises an exception. + 'u'Get the event loop for the current context. + + Returns an instance of EventLoop or raises an exception. + 'b'There is no current event loop in thread %r.'u'There is no current event loop in thread %r.'b'Set the event loop.'u'Set the event loop.'b'Create a new event loop. + + You must call set_event_loop() to make this the current event + loop. + 'u'Create a new event loop. + + You must call set_event_loop() to make this the current event + loop. + 'b'Return the running event loop. Raise a RuntimeError if there is none. + + This function is thread-specific. + 'u'Return the running event loop. Raise a RuntimeError if there is none. + + This function is thread-specific. + 'b'no running event loop'u'no running event loop'b'Return the running event loop or None. + + This is a low-level function intended to be used by event loops. + This function is thread-specific. + 'u'Return the running event loop or None. + + This is a low-level function intended to be used by event loops. + This function is thread-specific. + 'b'Set the running event loop. + + This is a low-level function intended to be used by event loops. + This function is thread-specific. + 'u'Set the running event loop. + + This is a low-level function intended to be used by event loops. + This function is thread-specific. + 'b'Get the current event loop policy.'u'Get the current event loop policy.'b'Set the current event loop policy. + + If policy is None, the default policy is restored.'u'Set the current event loop policy. + + If policy is None, the default policy is restored.'b'Return an asyncio event loop. + + When called from a coroutine or a callback (e.g. scheduled with call_soon + or similar API), this function will always return the running event loop. + + If there is no running event loop set, the function will return + the result of `get_event_loop_policy().get_event_loop()` call. + 'u'Return an asyncio event loop. + + When called from a coroutine or a callback (e.g. scheduled with call_soon + or similar API), this function will always return the running event loop. + + If there is no running event loop set, the function will return + the result of `get_event_loop_policy().get_event_loop()` call. 
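The policy docstrings above describe the default one-loop-per-thread policy. A hedged sketch of creating and installing a loop through that policy, relying only on the documented default-policy behaviour:

import asyncio

policy = asyncio.get_event_loop_policy()   # default: one event loop per thread
loop = policy.new_event_loop()
asyncio.set_event_loop(loop)               # make it the current loop for this thread
try:
    loop.run_until_complete(asyncio.sleep(0))
finally:
    loop.close()                           # irreversible, as the docstring notes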
+ 'b'Equivalent to calling get_event_loop_policy().set_event_loop(loop).'u'Equivalent to calling get_event_loop_policy().set_event_loop(loop).'b'Equivalent to calling get_event_loop_policy().new_event_loop().'u'Equivalent to calling get_event_loop_policy().new_event_loop().'b'Equivalent to calling get_event_loop_policy().get_child_watcher().'u'Equivalent to calling get_event_loop_policy().get_child_watcher().'b'Equivalent to calling + get_event_loop_policy().set_child_watcher(watcher).'u'Equivalent to calling + get_event_loop_policy().set_child_watcher(watcher).'u'asyncio.events'u'events'asyncio exceptions.IncompleteReadErrorLimitOverrunErrorThe Future or Task was cancelled.Sendfile syscall is not available. + + Raised if OS does not support sendfile syscall for given socket or + file type. + + Incomplete read error. Attributes: + + - partial: read bytes string before the end of stream was reached + - expected: total number of expected bytes (or None if unknown) + undefinedr_expected bytes read on a total of ' bytes read on a total of ' expected bytesReached the buffer limit while looking for a separator. + + Attributes: + - consumed: total number of to be consumed bytes. + b'asyncio exceptions.'u'asyncio exceptions.'b'InvalidStateError'u'InvalidStateError'b'IncompleteReadError'u'IncompleteReadError'b'LimitOverrunError'u'LimitOverrunError'b'SendfileNotAvailableError'u'SendfileNotAvailableError'b'The Future or Task was cancelled.'u'The Future or Task was cancelled.'b'Sendfile syscall is not available. + + Raised if OS does not support sendfile syscall for given socket or + file type. + 'u'Sendfile syscall is not available. + + Raised if OS does not support sendfile syscall for given socket or + file type. + 'b' + Incomplete read error. Attributes: + + - partial: read bytes string before the end of stream was reached + - expected: total number of expected bytes (or None if unknown) + 'u' + Incomplete read error. Attributes: + + - partial: read bytes string before the end of stream was reached + - expected: total number of expected bytes (or None if unknown) + 'b'undefined'u'undefined'b' bytes read on a total of 'u' bytes read on a total of 'b' expected bytes'u' expected bytes'b'Reached the buffer limit while looking for a separator. + + Attributes: + - consumed: total number of to be consumed bytes. + 'u'Reached the buffer limit while looking for a separator. + + Attributes: + - consumed: total number of to be consumed bytes. 
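The asyncio exception docstrings above define IncompleteReadError's partial/expected attributes. A self-contained sketch that triggers it without any network I/O, assuming the documented StreamReader API:

import asyncio

async def demo():
    reader = asyncio.StreamReader()
    reader.feed_data(b'abc')
    reader.feed_eof()
    try:
        await reader.readexactly(10)          # only 3 of the requested 10 bytes exist
    except asyncio.IncompleteReadError as err:
        print(err.partial, err.expected)      # b'abc' 10

asyncio.run(demo())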
+ 'u'asyncio.exceptions'Interface to the Expat non-validating XML parser.xml.parsers.expat.modelxml.parsers.expat.errors# provide pyexpat submodules as xml.parsers.expat submodulesb'Interface to the Expat non-validating XML parser.'u'Interface to the Expat non-validating XML parser.'b'xml.parsers.expat.model'u'xml.parsers.expat.model'b'xml.parsers.expat.errors'u'xml.parsers.expat.errors'u'xml.parsers.expat'u'parsers.expat'u'expat'distutils.fancy_getopt + +Wrapper around the standard getopt module that provides the following +additional features: + * short and long options are tied together + * options have help strings, so fancy_getopt could potentially + create a complete usage summary + * options set attributes of a passed-in object +[a-zA-Z](?:[a-zA-Z0-9-]*)longopt_pat^%s$longopt_re^(%s)=!(%s)$neg_alias_relongopt_xlateWrapper around the standard 'getopt()' module that provides some + handy extra functionality: + * short and long options are tied together + * options have help strings, and help text can be assembled + from them + * options set attributes of a passed-in object + * boolean options can have "negative aliases" -- eg. if + --quiet is the "negative alias" of --verbose, then "--quiet" + on the command line sets 'verbose' to false + option_tableoption_index_build_indexnegative_aliasshort_optslong_optsshort2longtakes_argoption_orderset_option_tableadd_optionlong_optionshort_optionhelp_stringoption conflict: already an option '%s'has_optionReturn true if the option table for this parser has an + option with long name 'long_option'.get_attr_nameTranslate long option name 'long_option' to the form it + has as an attribute of some object: ie., translate hyphens + to underscores._check_alias_dictinvalid %s '%s': option '%s' not defined"invalid %s '%s': ""option '%s' not defined"invalid %s '%s': aliased option '%s' not defined"aliased option '%s' not defined"set_aliasesSet the aliases for this option parser.set_negative_aliasesSet the negative aliases for this option parser. + 'negative_alias' should be a dictionary mapping option names to + option names, both the key and value must already be defined + in the option table.negative alias_grok_option_tablePopulate the various data structures that keep tabs on the + option table. Called by 'getopt()' before it can do anything + worthwhile. + shortinvalid option tuple: %rinvalid long option '%s': must be a string of length >= 2"invalid long option '%s': ""must be a string of length >= 2"invalid short option '%s': must a single character or None"invalid short option '%s': ""must a single character or None"alias_toinvalid negative alias '%s': aliased option '%s' takes a value"invalid negative alias '%s': ""aliased option '%s' takes a value"invalid alias '%s': inconsistent with aliased option '%s' (one of them takes a value, the other doesn't"invalid alias '%s': inconsistent with ""aliased option '%s' (one of them takes a value, ""the other doesn't"invalid long option name '%s' (must be letters, numbers, hyphens only"invalid long option name '%s' ""(must be letters, numbers, hyphens only"Parse command-line options in args. Store as attributes on object. + + If 'args' is None or not supplied, uses 'sys.argv[1:]'. If + 'object' is None or not supplied, creates a new OptionDummy + object, stores option values there, and returns a tuple (args, + object). 
If 'object' is supplied, it is modified in place and + 'getopt()' just returns 'args'; in both cases, the returned + 'args' is a modified copy of the passed-in 'args' list, which + is left untouched. + OptionDummycreated_objectboolean option can't have valueget_option_orderReturns the list of (option, value) tuples processed by the + previous run of 'getopt()'. Raises RuntimeError if + 'getopt()' hasn't been called yet. + 'getopt()' hasn't been called yetgenerate_helpGenerate help text (a list of strings, one per suggested line of + output) from the option table for this FancyGetopt object. + max_optopt_widthline_widthbig_indentOption summary:wrap_text --%-*s %s --%-*s %s (-%s)opt_names --%-*sfancy_getoptnegative_opt_wscharwhitespaceWS_TRANSwrap_text(text : string, width : int) -> [string] + + Split 'text' into multiple lines of no more than 'width' characters + each, and return the list of strings that results. + ( +|-+)cur_linecur_lentranslate_longoptConvert a long option name to a valid Python identifier by + changing "-" to "_". + Dummy class just used as a place to hold command-line option + values as instance attributes.Create a new OptionDummy instance. The attributes listed in + 'options' will be initialized to None.Tra-la-la, supercalifragilisticexpialidocious. +How *do* you spell that odd word, anyways? +(Someone ask Mary -- she'll know [or she'll +say, "How should I know?"].)width: %d# Much like command_re in distutils.core, this is close to but not quite# the same as a Python NAME -- except, in the spirit of most GNU# utilities, we use '-' in place of '_'. (The spirit of LISP lives on!)# The similarities to NAME are again not a coincidence...# For recognizing "negative alias" options, eg. "quiet=!verbose"# This is used to translate long options to legitimate Python identifiers# (for use as attributes of some object).# The option table is (currently) a list of tuples. The# tuples may have 3 or four values:# (long_option, short_option, help_string [, repeatable])# if an option takes an argument, its long_option should have '='# appended; short_option should just be a single character, no ':'# in any case. If a long_option doesn't have a corresponding# short_option, short_option should be None. All option tuples# must have long options.# 'option_index' maps long option names to entries in the option# table (ie. those 3-tuples).# 'alias' records (duh) alias options; {'foo': 'bar'} means# --foo is an alias for --bar# 'negative_alias' keeps track of options that are the boolean# opposite of some other option# These keep track of the information in the option table. We# don't actually populate these structures until we're ready to# parse the command-line, since the 'option_table' passed in here# isn't necessarily the final word.# And 'option_order' is filled up in 'getopt()'; it records the# original order of options (and their values) on the command-line,# but expands short options, converts aliases, etc.# the option table is part of the code, so simply# assert that it is correct# Type- and value-check the option names# option takes an argument?# Is option is a "negative alias" for some other option (eg.# "quiet" == "!verbose")?# XXX redundant?!# If this is an alias option, make sure its "takes arg" flag is# the same as the option it's aliased to.# Now enforce some bondage on the long option name, so we can# later translate it to an attribute name on some object. 
Have# to do this a bit late to make sure we've removed any trailing# '='.# it's a short option# boolean option?# The only repeating option at the moment is 'verbose'.# It has a negative option -q quiet, which should set verbose = 0.# for opts# Blithely assume the option table is good: probably wouldn't call# 'generate_help()' unless you've already called 'getopt()'.# First pass: determine maximum length of long option names# " (-x)" where short == 'x'# room for indent + dashes + gutter# Typical help block looks like this:# --foo controls foonabulation# Help block for longest option looks like this:# --flimflam set the flim-flam level# and with wrapped text:# --flimflam set the flim-flam level (must be between# 0 and 100, except on Tuesdays)# Options with short names will have the short name shown (but# it doesn't contribute to max_opt):# --foo (-f) controls foonabulation# If adding the short option would make the left column too wide,# we push the explanation off to the next line# --flimflam (-l)# set the flim-flam level# Important parameters:# - 2 spaces before option block start lines# - 2 dashes for each long option name# - min. 2 spaces between option and explanation (gutter)# - 5 characters (incl. space) for short option name# Now generate lines of help text. (If 80 columns were good enough# for Jesus, then 78 columns are good enough for me!)# Case 1: no short option at all (makes life easy)# Case 2: we have a short option, so we have to include it# just after the long option# ' - ' results in empty strings# list of chunks (to-be-joined)# length of current line# can squeeze (at least) this chunk in# this line is full# drop last chunk if all space# any chunks left to process?# if the current line is still empty, then we had a single# chunk that's too big too fit on a line -- so we break# down and break it up at the line width# all-whitespace chunks at the end of a line can be discarded# (and we know from the re.split above that if a chunk has# *any* whitespace, it is *all* whitespace)# and store this line in the list-of-all-lines -- as a single# string, of course!b'distutils.fancy_getopt + +Wrapper around the standard getopt module that provides the following +additional features: + * short and long options are tied together + * options have help strings, so fancy_getopt could potentially + create a complete usage summary + * options set attributes of a passed-in object +'u'distutils.fancy_getopt + +Wrapper around the standard getopt module that provides the following +additional features: + * short and long options are tied together + * options have help strings, so fancy_getopt could potentially + create a complete usage summary + * options set attributes of a passed-in object +'b'[a-zA-Z](?:[a-zA-Z0-9-]*)'u'[a-zA-Z](?:[a-zA-Z0-9-]*)'b'^%s$'u'^%s$'b'^(%s)=!(%s)$'u'^(%s)=!(%s)$'b'Wrapper around the standard 'getopt()' module that provides some + handy extra functionality: + * short and long options are tied together + * options have help strings, and help text can be assembled + from them + * options set attributes of a passed-in object + * boolean options can have "negative aliases" -- eg. 
if + --quiet is the "negative alias" of --verbose, then "--quiet" + on the command line sets 'verbose' to false + 'u'Wrapper around the standard 'getopt()' module that provides some + handy extra functionality: + * short and long options are tied together + * options have help strings, and help text can be assembled + from them + * options set attributes of a passed-in object + * boolean options can have "negative aliases" -- eg. if + --quiet is the "negative alias" of --verbose, then "--quiet" + on the command line sets 'verbose' to false + 'b'option conflict: already an option '%s''u'option conflict: already an option '%s''b'Return true if the option table for this parser has an + option with long name 'long_option'.'u'Return true if the option table for this parser has an + option with long name 'long_option'.'b'Translate long option name 'long_option' to the form it + has as an attribute of some object: ie., translate hyphens + to underscores.'u'Translate long option name 'long_option' to the form it + has as an attribute of some object: ie., translate hyphens + to underscores.'b'invalid %s '%s': option '%s' not defined'u'invalid %s '%s': option '%s' not defined'b'invalid %s '%s': aliased option '%s' not defined'u'invalid %s '%s': aliased option '%s' not defined'b'Set the aliases for this option parser.'u'Set the aliases for this option parser.'b'alias'u'alias'b'Set the negative aliases for this option parser. + 'negative_alias' should be a dictionary mapping option names to + option names, both the key and value must already be defined + in the option table.'u'Set the negative aliases for this option parser. + 'negative_alias' should be a dictionary mapping option names to + option names, both the key and value must already be defined + in the option table.'b'negative alias'u'negative alias'b'Populate the various data structures that keep tabs on the + option table. Called by 'getopt()' before it can do anything + worthwhile. + 'u'Populate the various data structures that keep tabs on the + option table. Called by 'getopt()' before it can do anything + worthwhile. + 'b'invalid option tuple: %r'u'invalid option tuple: %r'b'invalid long option '%s': must be a string of length >= 2'u'invalid long option '%s': must be a string of length >= 2'b'invalid short option '%s': must a single character or None'u'invalid short option '%s': must a single character or None'b'invalid negative alias '%s': aliased option '%s' takes a value'u'invalid negative alias '%s': aliased option '%s' takes a value'b'invalid alias '%s': inconsistent with aliased option '%s' (one of them takes a value, the other doesn't'u'invalid alias '%s': inconsistent with aliased option '%s' (one of them takes a value, the other doesn't'b'invalid long option name '%s' (must be letters, numbers, hyphens only'u'invalid long option name '%s' (must be letters, numbers, hyphens only'b'Parse command-line options in args. Store as attributes on object. + + If 'args' is None or not supplied, uses 'sys.argv[1:]'. If + 'object' is None or not supplied, creates a new OptionDummy + object, stores option values there, and returns a tuple (args, + object). If 'object' is supplied, it is modified in place and + 'getopt()' just returns 'args'; in both cases, the returned + 'args' is a modified copy of the passed-in 'args' list, which + is left untouched. + 'u'Parse command-line options in args. Store as attributes on object. + + If 'args' is None or not supplied, uses 'sys.argv[1:]'. 
If + 'object' is None or not supplied, creates a new OptionDummy + object, stores option values there, and returns a tuple (args, + object). If 'object' is supplied, it is modified in place and + 'getopt()' just returns 'args'; in both cases, the returned + 'args' is a modified copy of the passed-in 'args' list, which + is left untouched. + 'b'boolean option can't have value'u'boolean option can't have value'b'Returns the list of (option, value) tuples processed by the + previous run of 'getopt()'. Raises RuntimeError if + 'getopt()' hasn't been called yet. + 'u'Returns the list of (option, value) tuples processed by the + previous run of 'getopt()'. Raises RuntimeError if + 'getopt()' hasn't been called yet. + 'b''getopt()' hasn't been called yet'u''getopt()' hasn't been called yet'b'Generate help text (a list of strings, one per suggested line of + output) from the option table for this FancyGetopt object. + 'u'Generate help text (a list of strings, one per suggested line of + output) from the option table for this FancyGetopt object. + 'b'Option summary:'u'Option summary:'b' --%-*s %s'u' --%-*s %s'b' --%-*s 'u' --%-*s 'b'%s (-%s)'u'%s (-%s)'b' --%-*s'u' --%-*s'b'wrap_text(text : string, width : int) -> [string] + + Split 'text' into multiple lines of no more than 'width' characters + each, and return the list of strings that results. + 'u'wrap_text(text : string, width : int) -> [string] + + Split 'text' into multiple lines of no more than 'width' characters + each, and return the list of strings that results. + 'b'( +|-+)'u'( +|-+)'b'Convert a long option name to a valid Python identifier by + changing "-" to "_". + 'u'Convert a long option name to a valid Python identifier by + changing "-" to "_". + 'b'Dummy class just used as a place to hold command-line option + values as instance attributes.'u'Dummy class just used as a place to hold command-line option + values as instance attributes.'b'Create a new OptionDummy instance. The attributes listed in + 'options' will be initialized to None.'u'Create a new OptionDummy instance. The attributes listed in + 'options' will be initialized to None.'b'Tra-la-la, supercalifragilisticexpialidocious. +How *do* you spell that odd word, anyways? +(Someone ask Mary -- she'll know [or she'll +say, "How should I know?"].)'u'Tra-la-la, supercalifragilisticexpialidocious. +How *do* you spell that odd word, anyways? +(Someone ask Mary -- she'll know [or she'll +say, "How should I know?"].)'b'width: %d'u'width: %d'u'distutils.fancy_getopt'u'fancy_getopt'u'faulthandler module.'_fatal_error_fatal_error_c_thread_read_null_sigabrt_sigfpe_sigsegv_stack_overflowcancel_dump_traceback_laterdump_traceback_laterFeedParser - An email feed parser. + +The feed parser implements an interface for incrementally parsing an email +message, line by line. This has advantages for certain applications, such as +those reading email messages off a socket. + +FeedParser.feed() is the primary interface for pushing new data into the +parser. It returns when there's nothing more it can do with the available +data. When you have no more data to push into the parser, call .close(). +This completes the parsing and returns the root message object. + +The other advantage of this parser is that it will never raise a parsing +exception. Instead, when it finds something unexpected, it adds a 'defect' to +the current message. Defects are just instances that live on the message +object's .defects attribute. 
+FeedParserBytesFeedParseremail._policybase\r\n|\r|\nNLCRE(\r\n|\r|\n)NLCRE_bol(\r\n|\r|\n)\ZNLCRE_eolNLCRE_crack^(From |[\041-\071\073-\176]*:|[\t ])headerRENeedMoreDataBufferedSubFileA file-ish object that can have new data loaded into it. + + You can also push and pop line-matching predicates onto a stack. When the + current predicate matches the current line, a false EOF response + (i.e. empty string) is returned instead. This lets the parser adhere to a + simple abstraction -- it parses until EOF closes the current message. + _partial_lines_eofstackpush_eof_matcherpop_eof_matcherpushlinesateofunreadlinePush some new data into this object.A feed-style parser of email._factory is called with no arguments to create a new message obj + + The policy keyword specifies a policy object that controls a number of + aspects of the parser's operation. The default policy maintains + backward compatibility. + + _old_style_factory_input_msgstack_parsegen_parse_cur_headersonly_set_headersonlyPush more data into the parser._call_parseParse all remaining data and return the root message object._pop_messageget_content_maintypemultipartis_multipart_new_messageget_content_typemultipart/digestset_default_typemessage/rfc822attach_parse_headersmessage/delivery-statusget_boundaryboundarycontent-transfer-encodingbinary(?P)(?P--)?(?P[ \t]*)(?P\r\n|\r|\n)?$boundaryrecapturing_preamblepreambleclose_boundary_seenmolastlineeolmoepilogue_payloadbolmolastheaderlastvalueset_rawFrom set_unixfromMissing header name._parse_headers fed line with no : and no leading WSLike FeedParser, but feed accepts bytes.# Copyright (C) 2004-2006 Python Software Foundation# Authors: Baxter, Wouters and Warsaw# RFC 2822 $3.6.8 Optional fields. ftext is %d33-57 / %d59-126, Any character# except controls, SP, and ":".# Text stream of the last partial line pushed into this object.# See issue 22233 for why this is a text stream and not a list.# A deque of full, pushed lines# The stack of false-EOF checking predicates.# A flag indicating whether the file has been closed or not.# Don't forget any trailing partial line.# Pop the line off the stack and see if it matches the current# false-EOF predicate.# RFC 2046, section 5.1.2 requires us to recognize outer level# boundaries at any level of inner nesting. Do this, but be sure it's# in the order of most to least nested.# We're at the false EOF. But push the last line back first.# Let the consumer push a line back into the buffer.# No new complete lines, wait for more.# Crack into lines, preserving the linesep characters.# If the last element of the list does not end in a newline, then treat# it as a partial line. We only check for '\n' here because a line# ending with '\r' might be a line that was split in the middle of a# '\r\n' sequence (see bugs 1555570 and 1721862).# Assume this is an old-style factory# Non-public interface for supporting Parser's headersonly flag# Look for final set of defects# Create a new message and start by parsing headers.# Collect the headers, searching for a line that doesn't match the RFC# 2822 header or continuation pattern (including an empty line).# If we saw the RFC defined header/body separator# (i.e. newline), just throw it away. Otherwise the line is# part of the body so push it back.# Done with the headers, so parse them and figure out what we're# supposed to see in the body of the message.# Headers-only parsing is a backwards compatibility hack, which was# necessary in the older parser, which could raise errors. 
All# remaining lines in the input are thrown into the message body.# message/delivery-status contains blocks of headers separated by# a blank line. We'll represent each header block as a separate# nested message object, but the processing is a bit different# than standard message/* types because there is no body for the# nested messages. A blank line separates the subparts.# We need to pop the EOF matcher in order to tell if we're at# the end of the current file, not the end of the last block# of message headers.# The input stream must be sitting at the newline or at the# EOF. We want to see if we're at the end of this subpart, so# first consume the blank line, then test the next line to see# if we're at this subpart's EOF.# Not at EOF so this is a line we're going to need.# The message claims to be a message/* type, then what follows is# another RFC 2822 message.# The message /claims/ to be a multipart but it has not# defined a boundary. That's a problem which we'll handle by# reading everything until the EOF and marking the message as# defective.# Make sure a valid content type was specified per RFC 2045:6.4.# Create a line match predicate which matches the inter-part# boundary as well as the end-of-multipart boundary. Don't push# this onto the input stream until we've scanned past the# preamble.# If we're looking at the end boundary, we're done with# this multipart. If there was a newline at the end of# the closing boundary, then we need to initialize the# epilogue with the empty string (see below).# We saw an inter-part boundary. Were we in the preamble?# According to RFC 2046, the last newline belongs# to the boundary.# We saw a boundary separating two parts. Consume any# multiple boundary lines that may be following. Our# interpretation of RFC 2046 BNF grammar does not produce# body parts within such double boundaries.# Recurse to parse this subpart; the input stream points# at the subpart's first line.# Because of RFC 2046, the newline preceding the boundary# separator actually belongs to the boundary, not the# previous subpart's payload (or epilogue if the previous# part is a multipart).# Set the multipart up for newline cleansing, which will# happen if we're in a nested multipart.# I think we must be in the preamble# We've seen either the EOF or the end boundary. If we're still# capturing the preamble, we never saw the start boundary. Note# that as a defect and store the captured text as the payload.# If we're not processing the preamble, then we might have seen# EOF without seeing that end boundary...that is also a defect.# Everything from here to the EOF is epilogue. If the end boundary# ended in a newline, we'll need to make sure the epilogue isn't# Any CRLF at the front of the epilogue is not technically part of# the epilogue. Also, watch out for an empty string epilogue,# which means a single newline.# Otherwise, it's some non-multipart type, so the entire rest of the# file contents becomes the payload.# Passed a list of lines that make up the headers for the current msg# Check for continuation# The first line of the headers was a continuation. This# is illegal, so let's note the defect, store the illegal# line, and ignore it for purposes of headers.# Check for envelope header, i.e. unix-from# Strip off the trailing newline# Something looking like a unix-from at the end - it's# probably the first line of the body, so push back the# line and stop.# Weirdly placed unix-from line. 
Note this as a defect# and ignore it.# Split the line on the colon separating field name from value.# There will always be a colon, because if there wasn't the part of# the parser that calls us would have started parsing the body.# If the colon is on the start of the line the header is clearly# malformed, but we might be able to salvage the rest of the# message. Track the error but keep going.# Done with all the lines, so handle the last header.b'FeedParser - An email feed parser. + +The feed parser implements an interface for incrementally parsing an email +message, line by line. This has advantages for certain applications, such as +those reading email messages off a socket. + +FeedParser.feed() is the primary interface for pushing new data into the +parser. It returns when there's nothing more it can do with the available +data. When you have no more data to push into the parser, call .close(). +This completes the parsing and returns the root message object. + +The other advantage of this parser is that it will never raise a parsing +exception. Instead, when it finds something unexpected, it adds a 'defect' to +the current message. Defects are just instances that live on the message +object's .defects attribute. +'u'FeedParser - An email feed parser. + +The feed parser implements an interface for incrementally parsing an email +message, line by line. This has advantages for certain applications, such as +those reading email messages off a socket. + +FeedParser.feed() is the primary interface for pushing new data into the +parser. It returns when there's nothing more it can do with the available +data. When you have no more data to push into the parser, call .close(). +This completes the parsing and returns the root message object. + +The other advantage of this parser is that it will never raise a parsing +exception. Instead, when it finds something unexpected, it adds a 'defect' to +the current message. Defects are just instances that live on the message +object's .defects attribute. +'b'FeedParser'u'FeedParser'b'BytesFeedParser'u'BytesFeedParser'b'\r\n|\r|\n'u'\r\n|\r|\n'b'(\r\n|\r|\n)'u'(\r\n|\r|\n)'b'(\r\n|\r|\n)\Z'u'(\r\n|\r|\n)\Z'b'^(From |[\041-\071\073-\176]*:|[\t ])'u'^(From |[\041-\071\073-\176]*:|[\t ])'b'A file-ish object that can have new data loaded into it. + + You can also push and pop line-matching predicates onto a stack. When the + current predicate matches the current line, a false EOF response + (i.e. empty string) is returned instead. This lets the parser adhere to a + simple abstraction -- it parses until EOF closes the current message. + 'u'A file-ish object that can have new data loaded into it. + + You can also push and pop line-matching predicates onto a stack. When the + current predicate matches the current line, a false EOF response + (i.e. empty string) is returned instead. This lets the parser adhere to a + simple abstraction -- it parses until EOF closes the current message. + 'b'Push some new data into this object.'u'Push some new data into this object.'b'A feed-style parser of email.'u'A feed-style parser of email.'b'_factory is called with no arguments to create a new message obj + + The policy keyword specifies a policy object that controls a number of + aspects of the parser's operation. The default policy maintains + backward compatibility. + + 'u'_factory is called with no arguments to create a new message obj + + The policy keyword specifies a policy object that controls a number of + aspects of the parser's operation. 
The default policy maintains + backward compatibility. + + 'b'Push more data into the parser.'u'Push more data into the parser.'b'Parse all remaining data and return the root message object.'u'Parse all remaining data and return the root message object.'b'multipart'u'multipart'b'multipart/digest'u'multipart/digest'b'message/rfc822'u'message/rfc822'b'message/delivery-status'u'message/delivery-status'b'content-transfer-encoding'u'content-transfer-encoding'b'binary'u'binary'b'(?P'u'(?P'b')(?P--)?(?P[ \t]*)(?P\r\n|\r|\n)?$'u')(?P--)?(?P[ \t]*)(?P\r\n|\r|\n)?$'b'linesep'u'linesep'b'From 'u'From 'b'Missing header name.'u'Missing header name.'b'_parse_headers fed line with no : and no leading WS'u'_parse_headers fed line with no : and no leading WS'b'Like FeedParser, but feed accepts bytes.'u'Like FeedParser, but feed accepts bytes.'u'email.feedparser'distutils.file_util + +Utility functions for operating on single files. +copyinghard linkinghardsymbolically linkingsym_copy_action_copy_file_contentsbuffer_sizeCopy the file 'src' to 'dst'; both must be filenames. Any error + opening either file, reading from 'src', or writing to 'dst', raises + DistutilsFileError. Data is read/written in chunks of 'buffer_size' + bytes (default 16k). No attempt is made to handle anything apart from + regular files. + fsrcfdstcould not open '%s': %scould not delete '%s': %scould not read from '%s': %scould not write to '%s': %sCopy a file 'src' to 'dst'. If 'dst' is a directory, then 'src' is + copied there with the same name; otherwise, it must be a filename. (If + the file exists, it will be ruthlessly clobbered.) If 'preserve_mode' + is true (the default), the file's mode (type and permission bits, or + whatever is analogous on the current platform) is copied. If + 'preserve_times' is true (the default), the last-modified and + last-access times are copied as well. If 'update' is true, 'src' will + only be copied if 'dst' does not exist, or if 'dst' does exist but is + older than 'src'. + + 'link' allows you to make hard links (os.link) or symbolic links + (os.symlink) instead of copying: set it to "hard" or "sym"; if it is + None (the default), files are copied. Don't set 'link' on systems that + don't support it: 'copy_file()' doesn't check if hard or symbolic + linking is available. If hardlink fails, falls back to + _copy_file_contents(). + + Under Mac OS, uses the native file copy function in macostools; on + other systems, uses '_copy_file_contents()' to copy file contents. + + Return a tuple (dest_name, copied): 'dest_name' is the actual name of + the output file, and 'copied' is true if the file was copied (or would + have been copied, if 'dry_run' true). + can't copy '%s': doesn't exist or not a regular filenot copying %s (output up-to-date)invalid value '%s' for 'link' argument%s %s -> %sutimeMove a file 'src' to 'dst'. If 'dst' is a directory, the file will + be moved into it with the same name; otherwise, 'src' is just renamed + to 'dst'. Return the new full name of the file. + + Handles cross-device moves on Unix using 'copy_file()'. What about + other systems??? 
+ moving %s -> %scan't move '%s': not a regular filecan't move '%s': destination '%s' already existscan't move '%s': destination '%s' not a valid pathcopy_itcouldn't move '%s' to '%s': %scouldn't move '%s' to '%s' by copy/delete: delete '%s' failed: %s"couldn't move '%s' to '%s' by copy/delete: ""delete '%s' failed: %s"write_fileCreate a file with the specified name and write 'contents' (a + sequence of strings without line terminators) to it. + # for generating verbose output in 'copy_file()'# Stolen from shutil module in the standard library, but with# custom error-handling added.# XXX if the destination file already exists, we clobber it if# copying, but blow up if linking. Hmmm. And I don't know what# macostools.copyfile() does. Should definitely be consistent, and# should probably blow up if destination exists and we would be# changing it (ie. it's not already a hard/soft link to src OR# (not update) and (src newer than dst).# If linking (hard or symbolic), use the appropriate system call# (Unix only, of course, but that's the caller's responsibility)# If hard linking fails, fall back on copying file# (some special filesystems don't support hard linking# even under Unix, see issue #8876).# Otherwise (non-Mac, not linking), copy the file contents and# (optionally) copy the times and mode.# According to David Ascher , utime() should be done# before chmod() (at least under NT).# XXX I suspect this is Unix-specific -- need porting help!b'distutils.file_util + +Utility functions for operating on single files. +'u'distutils.file_util + +Utility functions for operating on single files. +'b'copying'u'copying'b'hard linking'u'hard linking'b'hard'u'hard'b'symbolically linking'u'symbolically linking'b'sym'u'sym'b'Copy the file 'src' to 'dst'; both must be filenames. Any error + opening either file, reading from 'src', or writing to 'dst', raises + DistutilsFileError. Data is read/written in chunks of 'buffer_size' + bytes (default 16k). No attempt is made to handle anything apart from + regular files. + 'u'Copy the file 'src' to 'dst'; both must be filenames. Any error + opening either file, reading from 'src', or writing to 'dst', raises + DistutilsFileError. Data is read/written in chunks of 'buffer_size' + bytes (default 16k). No attempt is made to handle anything apart from + regular files. + 'b'could not open '%s': %s'u'could not open '%s': %s'b'could not delete '%s': %s'u'could not delete '%s': %s'b'could not read from '%s': %s'u'could not read from '%s': %s'b'could not write to '%s': %s'u'could not write to '%s': %s'b'Copy a file 'src' to 'dst'. If 'dst' is a directory, then 'src' is + copied there with the same name; otherwise, it must be a filename. (If + the file exists, it will be ruthlessly clobbered.) If 'preserve_mode' + is true (the default), the file's mode (type and permission bits, or + whatever is analogous on the current platform) is copied. If + 'preserve_times' is true (the default), the last-modified and + last-access times are copied as well. If 'update' is true, 'src' will + only be copied if 'dst' does not exist, or if 'dst' does exist but is + older than 'src'. + + 'link' allows you to make hard links (os.link) or symbolic links + (os.symlink) instead of copying: set it to "hard" or "sym"; if it is + None (the default), files are copied. Don't set 'link' on systems that + don't support it: 'copy_file()' doesn't check if hard or symbolic + linking is available. If hardlink fails, falls back to + _copy_file_contents(). 
+ + Under Mac OS, uses the native file copy function in macostools; on + other systems, uses '_copy_file_contents()' to copy file contents. + + Return a tuple (dest_name, copied): 'dest_name' is the actual name of + the output file, and 'copied' is true if the file was copied (or would + have been copied, if 'dry_run' true). + 'u'Copy a file 'src' to 'dst'. If 'dst' is a directory, then 'src' is + copied there with the same name; otherwise, it must be a filename. (If + the file exists, it will be ruthlessly clobbered.) If 'preserve_mode' + is true (the default), the file's mode (type and permission bits, or + whatever is analogous on the current platform) is copied. If + 'preserve_times' is true (the default), the last-modified and + last-access times are copied as well. If 'update' is true, 'src' will + only be copied if 'dst' does not exist, or if 'dst' does exist but is + older than 'src'. + + 'link' allows you to make hard links (os.link) or symbolic links + (os.symlink) instead of copying: set it to "hard" or "sym"; if it is + None (the default), files are copied. Don't set 'link' on systems that + don't support it: 'copy_file()' doesn't check if hard or symbolic + linking is available. If hardlink fails, falls back to + _copy_file_contents(). + + Under Mac OS, uses the native file copy function in macostools; on + other systems, uses '_copy_file_contents()' to copy file contents. + + Return a tuple (dest_name, copied): 'dest_name' is the actual name of + the output file, and 'copied' is true if the file was copied (or would + have been copied, if 'dry_run' true). + 'b'can't copy '%s': doesn't exist or not a regular file'u'can't copy '%s': doesn't exist or not a regular file'b'not copying %s (output up-to-date)'u'not copying %s (output up-to-date)'b'invalid value '%s' for 'link' argument'u'invalid value '%s' for 'link' argument'b'%s %s -> %s'u'%s %s -> %s'b'Move a file 'src' to 'dst'. If 'dst' is a directory, the file will + be moved into it with the same name; otherwise, 'src' is just renamed + to 'dst'. Return the new full name of the file. + + Handles cross-device moves on Unix using 'copy_file()'. What about + other systems??? + 'u'Move a file 'src' to 'dst'. If 'dst' is a directory, the file will + be moved into it with the same name; otherwise, 'src' is just renamed + to 'dst'. Return the new full name of the file. + + Handles cross-device moves on Unix using 'copy_file()'. What about + other systems??? + 'b'moving %s -> %s'u'moving %s -> %s'b'can't move '%s': not a regular file'u'can't move '%s': not a regular file'b'can't move '%s': destination '%s' already exists'u'can't move '%s': destination '%s' already exists'b'can't move '%s': destination '%s' not a valid path'u'can't move '%s': destination '%s' not a valid path'b'couldn't move '%s' to '%s': %s'u'couldn't move '%s' to '%s': %s'b'couldn't move '%s' to '%s' by copy/delete: delete '%s' failed: %s'u'couldn't move '%s' to '%s' by copy/delete: delete '%s' failed: %s'b'Create a file with the specified name and write 'contents' (a + sequence of strings without line terminators) to it. + 'u'Create a file with the specified name and write 'contents' (a + sequence of strings without line terminators) to it. + 'u'distutils.file_util'u'file_util'distutils.filelist + +Provides the FileList class, used for poking about the filesystem +and building lists of files. +convert_pathFileListA list of files built by on exploring the filesystem and filtered by + applying various patterns to what we find there. 
+ + Instance attributes: + dir + directory from which files will be taken -- only used if + 'allfiles' not supplied to constructor + files + list of filenames currently being built/filtered/manipulated + allfiles + complete list of files under consideration (ie. without any + filtering applied) + allfilesset_allfilesPrint 'msg' to stdout if the global DEBUG (taken from the + DISTUTILS_DEBUG environment variable) flag is true. + sortable_filessort_tupleremove_duplicates_parse_template_linedir_patternincludeexcludeglobal-includeglobal-exclude'%s' expects ...recursive-includerecursive-exclude'%s' expects

...graftprune'%s' expects a single unknown action '%s'process_template_lineinclude include_patternwarning: no files found matching '%s'exclude exclude_patternwarning: no previously-included files found matching '%s'"warning: no previously-included files ""found matching '%s'"global-include warning: no files found matching '%s' anywhere in distribution"warning: no files found matching '%s' ""anywhere in distribution"global-exclude warning: no previously-included files matching '%s' found anywhere in distribution"warning: no previously-included files matching ""'%s' found anywhere in distribution"recursive-include %s %swarning: no files found matching '%s' under directory '%s'"under directory '%s'"recursive-exclude %s %swarning: no previously-included files matching '%s' found under directory '%s'"'%s' found under directory '%s'"graft warning: no directories found matching '%s'prune no previously-included directories found matching '%s'"no previously-included directories found ""matching '%s'"this cannot happen: invalid action '%s'is_regexSelect strings (presumably filenames) from 'self.files' that + match 'pattern', a Unix-style wildcard (glob) pattern. Patterns + are not quite the same as implemented by the 'fnmatch' module: '*' + and '?' match non-special characters, where "special" is platform- + dependent: slash on Unix; colon, slash, and backslash on + DOS/Windows; and colon on Mac OS. + + If 'anchor' is true (the default), then the pattern match is more + stringent: "*.py" will match "foo.py" but not "foo/bar.py". If + 'anchor' is false, both of these will match. + + If 'prefix' is supplied, then only filenames starting with 'prefix' + (itself a pattern) and ending with 'pattern', with anything in between + them, will match. 'anchor' is ignored in this case. + + If 'is_regex' is true, 'anchor' and 'prefix' are ignored, and + 'pattern' is assumed to be either a string containing a regex or a + regex object -- no translation is done, the regex is just compiled + and used as-is. + + Selected strings will be added to self.files. + + Return True if files are found, False otherwise. + files_foundtranslate_patternpattern_reinclude_pattern: applying regex r'%s' adding Remove strings (presumably filenames) from 'files' that match + 'pattern'. Other parameters are the same as for + 'include_pattern()', above. + The list 'self.files' is modified in place. + Return True if files are found, False otherwise. + exclude_pattern: applying regex r'%s' removing _find_all_simple + Find all files under 'path' + followlinks + Find all files under 'dir' and return the list of full filenames. + Unless dir is '.', return full filenames with dir prepended. + relpathmake_relglob_to_reTranslate a shell-like glob pattern to a regular expression; return + a string containing the regex. Differs from 'fnmatch.translate()' in + that '*' does not match "special characters" (which are + platform-specific). + \\\\\1[^%s]escaped((? 
...'u''%s' expects ...'b'recursive-include'u'recursive-include'b'recursive-exclude'u'recursive-exclude'b''%s' expects ...'u''%s' expects ...'b'graft'u'graft'b'prune'u'prune'b''%s' expects a single 'u''%s' expects a single 'b'unknown action '%s''u'unknown action '%s''b'include 'u'include 'b'warning: no files found matching '%s''u'warning: no files found matching '%s''b'exclude 'u'exclude 'b'warning: no previously-included files found matching '%s''u'warning: no previously-included files found matching '%s''b'global-include 'u'global-include 'b'warning: no files found matching '%s' anywhere in distribution'u'warning: no files found matching '%s' anywhere in distribution'b'global-exclude 'u'global-exclude 'b'warning: no previously-included files matching '%s' found anywhere in distribution'u'warning: no previously-included files matching '%s' found anywhere in distribution'b'recursive-include %s %s'u'recursive-include %s %s'b'warning: no files found matching '%s' under directory '%s''u'warning: no files found matching '%s' under directory '%s''b'recursive-exclude %s %s'u'recursive-exclude %s %s'b'warning: no previously-included files matching '%s' found under directory '%s''u'warning: no previously-included files matching '%s' found under directory '%s''b'graft 'u'graft 'b'warning: no directories found matching '%s''u'warning: no directories found matching '%s''b'prune 'u'prune 'b'no previously-included directories found matching '%s''u'no previously-included directories found matching '%s''b'this cannot happen: invalid action '%s''u'this cannot happen: invalid action '%s''b'Select strings (presumably filenames) from 'self.files' that + match 'pattern', a Unix-style wildcard (glob) pattern. Patterns + are not quite the same as implemented by the 'fnmatch' module: '*' + and '?' match non-special characters, where "special" is platform- + dependent: slash on Unix; colon, slash, and backslash on + DOS/Windows; and colon on Mac OS. + + If 'anchor' is true (the default), then the pattern match is more + stringent: "*.py" will match "foo.py" but not "foo/bar.py". If + 'anchor' is false, both of these will match. + + If 'prefix' is supplied, then only filenames starting with 'prefix' + (itself a pattern) and ending with 'pattern', with anything in between + them, will match. 'anchor' is ignored in this case. + + If 'is_regex' is true, 'anchor' and 'prefix' are ignored, and + 'pattern' is assumed to be either a string containing a regex or a + regex object -- no translation is done, the regex is just compiled + and used as-is. + + Selected strings will be added to self.files. + + Return True if files are found, False otherwise. + 'u'Select strings (presumably filenames) from 'self.files' that + match 'pattern', a Unix-style wildcard (glob) pattern. Patterns + are not quite the same as implemented by the 'fnmatch' module: '*' + and '?' match non-special characters, where "special" is platform- + dependent: slash on Unix; colon, slash, and backslash on + DOS/Windows; and colon on Mac OS. + + If 'anchor' is true (the default), then the pattern match is more + stringent: "*.py" will match "foo.py" but not "foo/bar.py". If + 'anchor' is false, both of these will match. + + If 'prefix' is supplied, then only filenames starting with 'prefix' + (itself a pattern) and ending with 'pattern', with anything in between + them, will match. 'anchor' is ignored in this case. 
+ + If 'is_regex' is true, 'anchor' and 'prefix' are ignored, and + 'pattern' is assumed to be either a string containing a regex or a + regex object -- no translation is done, the regex is just compiled + and used as-is. + + Selected strings will be added to self.files. + + Return True if files are found, False otherwise. + 'b'include_pattern: applying regex r'%s''u'include_pattern: applying regex r'%s''b' adding 'u' adding 'b'Remove strings (presumably filenames) from 'files' that match + 'pattern'. Other parameters are the same as for + 'include_pattern()', above. + The list 'self.files' is modified in place. + Return True if files are found, False otherwise. + 'u'Remove strings (presumably filenames) from 'files' that match + 'pattern'. Other parameters are the same as for + 'include_pattern()', above. + The list 'self.files' is modified in place. + Return True if files are found, False otherwise. + 'b'exclude_pattern: applying regex r'%s''u'exclude_pattern: applying regex r'%s''b' removing 'u' removing 'b' + Find all files under 'path' + 'u' + Find all files under 'path' + 'b' + Find all files under 'dir' and return the list of full filenames. + Unless dir is '.', return full filenames with dir prepended. + 'u' + Find all files under 'dir' and return the list of full filenames. + Unless dir is '.', return full filenames with dir prepended. + 'b'Translate a shell-like glob pattern to a regular expression; return + a string containing the regex. Differs from 'fnmatch.translate()' in + that '*' does not match "special characters" (which are + platform-specific). + 'u'Translate a shell-like glob pattern to a regular expression; return + a string containing the regex. Differs from 'fnmatch.translate()' in + that '*' does not match "special characters" (which are + platform-specific). + 'b'\\\\'u'\\\\'b'\1[^%s]'u'\1[^%s]'b'((? b, b.foo-> c, etc, + use this to iterate over all objects in the chain. Iteration is + terminated by getattr(x, attr) is None. + + Args: + obj: the starting object + attr: the name of the chaining attribute + + Yields: + Each successive object in the chain. + for_stmt< 'for' any 'in' node=any ':' any* > + | comp_for< 'for' any 'in' node=any any* > + p0 +power< + ( 'iter' | 'list' | 'tuple' | 'sorted' | 'set' | 'sum' | + 'any' | 'all' | 'enumerate' | (any* trailer< '.' 'join' >) ) + trailer< '(' node=any ')' > + any* +> +p1 +power< + ( 'sorted' | 'enumerate' ) + trailer< '(' arglist ')' > + any* +> +p2pats_builtin_special_context Returns true if node is in an environment where all that is required + of it is being iterable (ie, it doesn't matter if it returns a list + or an iterator). + See test_map_nochange in test_fixers.py for some examples and tests. + compile_patternis_probably_builtin + Check that something isn't an attribute or function name etc. + prev_siblingfuncdefclassdefexpr_stmtparameterstypedargslistfind_indentationFind the indentation of *node*.INDENTmake_suitefind_rootFind the top level namespace.file_inputroot found before file_input node was found.does_tree_import Returns true if name is imported from package at the + top level of the tree which node belongs to. + To cover the case of an import like 'import foo', use + None for the package and 'foo' for the name. find_bindingbindingis_importReturns true if the node is an import statement.import_nametouch_import Works like `does_tree_import` but adds an import statement + if it was not imported. 
is_import_stmtsimple_stmtinsert_posnode2_def_syms Returns the node which binds variable name, otherwise None. + If optional argument package is supplied, only imports will + be returned. + See test cases for examples.for_stmtif_stmtwhile_stmttry_stmtkidCOLON_is_import_binding_block_syms Will reuturn node if node will import name, or node + will import * from package. None is returned otherwise. + See test cases for examples. dotted_as_namesdotted_as_nameasimport_as_nameSTAR# Author: Collin Winter# Local imports############################################################## Common node-construction "macros"# XXX: May not handle dotted imports properly (eg, package_name='foo.bar')#assert package_name == '.' or '.' not in package_name, "FromImport has "\# "not been tested with dotted package names -- use at your own "\# "peril!"# Pull the leaves out of their old tree### Determine whether a node represents a given literal### Misc# Attribute lookup.# Assignment.# The name of an argument.### The following functions are to find bindings in a suite# Scamper up to the top level namespace# figure out where to insert the new import. First try to find# the first import and then skip to the last one.# if there are no imports where we can insert, find the docstring.# if that also fails, we stick to the beginning of the file# i+3 is the colon, i+4 is the suite# str(...) is used to make life easier here, because# from a.b import parses to ['import', ['a', '.', 'b'], ...]# See test_from_import_as for explanationb'Utility functions, node construction macros, etc.'u'Utility functions, node construction macros, etc.'b'Build an assignment statement'u'Build an assignment statement'b'Return a NAME leaf'u'Return a NAME leaf'b'A node tuple for obj.attr'u'A node tuple for obj.attr'b'A comma leaf'u'A comma leaf'b'A period (.) leaf'u'A period (.) leaf'b'A parenthesised argument list, used by Call()'u'A parenthesised argument list, used by Call()'b'A function call'u'A function call'b'A newline literal'u'A newline literal'b'A blank line'u'A blank line'b'A numeric or string subscript'u'A numeric or string subscript'b'A string leaf'u'A string leaf'b'A list comprehension of the form [xp for fp in it if test]. + + If test is None, the "if test" part is omitted. + 'u'A list comprehension of the form [xp for fp in it if test]. + + If test is None, the "if test" part is omitted. + 'b' Return an import statement in the form: + from package import name_leafs'u' Return an import statement in the form: + from package import name_leafs'b'import'u'import'b'Returns an import statement and calls a method + of the module: + + import module + module.name()'u'Returns an import statement and calls a method + of the module: + + import module + module.name()'b'obj'u'obj'b'lpar'u'lpar'b'rpar'u'rpar'b'Does the node represent a tuple literal?'u'Does the node represent a tuple literal?'b'Does the node represent a list literal?'u'Does the node represent a list literal?'b'sorted'u'sorted'b'list'u'list'b'tuple'u'tuple'b'min'u'min'b'max'u'max'b'enumerate'u'enumerate'b'Follow an attribute chain. + + If you have a chain of objects where a.foo -> b, b.foo-> c, etc, + use this to iterate over all objects in the chain. Iteration is + terminated by getattr(x, attr) is None. + + Args: + obj: the starting object + attr: the name of the chaining attribute + + Yields: + Each successive object in the chain. + 'u'Follow an attribute chain. + + If you have a chain of objects where a.foo -> b, b.foo-> c, etc, + use this to iterate over all objects in the chain. 
Iteration is + terminated by getattr(x, attr) is None. + + Args: + obj: the starting object + attr: the name of the chaining attribute + + Yields: + Each successive object in the chain. + 'b'for_stmt< 'for' any 'in' node=any ':' any* > + | comp_for< 'for' any 'in' node=any any* > + 'u'for_stmt< 'for' any 'in' node=any ':' any* > + | comp_for< 'for' any 'in' node=any any* > + 'b' +power< + ( 'iter' | 'list' | 'tuple' | 'sorted' | 'set' | 'sum' | + 'any' | 'all' | 'enumerate' | (any* trailer< '.' 'join' >) ) + trailer< '(' node=any ')' > + any* +> +'u' +power< + ( 'iter' | 'list' | 'tuple' | 'sorted' | 'set' | 'sum' | + 'any' | 'all' | 'enumerate' | (any* trailer< '.' 'join' >) ) + trailer< '(' node=any ')' > + any* +> +'b' +power< + ( 'sorted' | 'enumerate' ) + trailer< '(' arglist ')' > + any* +> +'u' +power< + ( 'sorted' | 'enumerate' ) + trailer< '(' arglist ')' > + any* +> +'b' Returns true if node is in an environment where all that is required + of it is being iterable (ie, it doesn't matter if it returns a list + or an iterator). + See test_map_nochange in test_fixers.py for some examples and tests. + 'u' Returns true if node is in an environment where all that is required + of it is being iterable (ie, it doesn't matter if it returns a list + or an iterator). + See test_map_nochange in test_fixers.py for some examples and tests. + 'b'node'u'node'b' + Check that something isn't an attribute or function name etc. + 'u' + Check that something isn't an attribute or function name etc. + 'b'Find the indentation of *node*.'u'Find the indentation of *node*.'b'Find the top level namespace.'u'Find the top level namespace.'b'root found before file_input node was found.'u'root found before file_input node was found.'b' Returns true if name is imported from package at the + top level of the tree which node belongs to. + To cover the case of an import like 'import foo', use + None for the package and 'foo' for the name. 'u' Returns true if name is imported from package at the + top level of the tree which node belongs to. + To cover the case of an import like 'import foo', use + None for the package and 'foo' for the name. 'b'Returns true if the node is an import statement.'u'Returns true if the node is an import statement.'b' Works like `does_tree_import` but adds an import statement + if it was not imported. 'u' Works like `does_tree_import` but adds an import statement + if it was not imported. 'b' Returns the node which binds variable name, otherwise None. + If optional argument package is supplied, only imports will + be returned. + See test cases for examples.'u' Returns the node which binds variable name, otherwise None. + If optional argument package is supplied, only imports will + be returned. + See test cases for examples.'b' Will reuturn node if node will import name, or node + will import * from package. None is returned otherwise. + See test cases for examples. 'u' Will reuturn node if node will import name, or node + will import * from package. None is returned otherwise. + See test cases for examples. 'b'as'u'as'u'lib2to3.fixer_util'u'fixer_util'Filename matching with shell patterns. + +fnmatch(FILENAME, PATTERN) matches according to the local convention. +fnmatchcase(FILENAME, PATTERN) always takes case in account. + +The functions operate by translating the pattern into a regular +expression. They cache the compiled regular expressions for speed. + +The function translate(PATTERN) returns a regular expression +corresponding to PATTERN. (It does not compile it.) 
+posixpathfnmatchcaseTest whether FILENAME matches PATTERN. + + Patterns are Unix shell style: + + * matches everything + ? matches any single character + [seq] matches any character in seq + [!seq] matches any char not in seq + + An initial period in FILENAME is not special. + Both FILENAME and PATTERN are first case-normalized + if the operating system requires it. + If you don't want this, use fnmatchcase(FILENAME, PATTERN). + lru_cachetyped_compile_patternISO-8859-1pat_strres_strConstruct a list from those elements of the iterable NAMES that match PAT.Test whether FILENAME matches PATTERN, including case. + + This is a version of fnmatch() which doesn't case-normalize + its arguments. + Translate a shell PATTERN to a regular expression. + + There is no way to quote meta-characters. + .*\[\-([&~|])%s[%s](?s:%s)\Z# normcase on posix is NOP. Optimize it away from the loop.# Escape backslashes and hyphens for set difference (--).# Hyphens that create ranges shouldn't be escaped.# Escape set operations (&&, ~~ and ||).b'Filename matching with shell patterns. + +fnmatch(FILENAME, PATTERN) matches according to the local convention. +fnmatchcase(FILENAME, PATTERN) always takes case in account. + +The functions operate by translating the pattern into a regular +expression. They cache the compiled regular expressions for speed. + +The function translate(PATTERN) returns a regular expression +corresponding to PATTERN. (It does not compile it.) +'u'Filename matching with shell patterns. + +fnmatch(FILENAME, PATTERN) matches according to the local convention. +fnmatchcase(FILENAME, PATTERN) always takes case in account. + +The functions operate by translating the pattern into a regular +expression. They cache the compiled regular expressions for speed. + +The function translate(PATTERN) returns a regular expression +corresponding to PATTERN. (It does not compile it.) +'b'fnmatch'u'fnmatch'b'fnmatchcase'u'fnmatchcase'b'translate'u'translate'b'Test whether FILENAME matches PATTERN. + + Patterns are Unix shell style: + + * matches everything + ? matches any single character + [seq] matches any character in seq + [!seq] matches any char not in seq + + An initial period in FILENAME is not special. + Both FILENAME and PATTERN are first case-normalized + if the operating system requires it. + If you don't want this, use fnmatchcase(FILENAME, PATTERN). + 'u'Test whether FILENAME matches PATTERN. + + Patterns are Unix shell style: + + * matches everything + ? matches any single character + [seq] matches any character in seq + [!seq] matches any char not in seq + + An initial period in FILENAME is not special. + Both FILENAME and PATTERN are first case-normalized + if the operating system requires it. + If you don't want this, use fnmatchcase(FILENAME, PATTERN). + 'b'ISO-8859-1'u'ISO-8859-1'b'Construct a list from those elements of the iterable NAMES that match PAT.'u'Construct a list from those elements of the iterable NAMES that match PAT.'b'Test whether FILENAME matches PATTERN, including case. + + This is a version of fnmatch() which doesn't case-normalize + its arguments. + 'u'Test whether FILENAME matches PATTERN, including case. + + This is a version of fnmatch() which doesn't case-normalize + its arguments. + 'b'Translate a shell PATTERN to a regular expression. + + There is no way to quote meta-characters. + 'u'Translate a shell PATTERN to a regular expression. + + There is no way to quote meta-characters. 
+ 'b'.*'u'.*'b'\['u'\['b'\-'u'\-'b'([&~|])'u'([&~|])'b'%s[%s]'u'%s[%s]'b'(?s:%s)\Z'u'(?s:%s)\Z'resource_trackerensure_runningget_inherited_fdsconnect_to_new_processMAXFDS_TO_SENDSIGNED_STRUCTForkServer_forkserver_address_forkserver_alive_fd_forkserver_pid_inherited_fds_preload_modules_stop_stop_unlockedmodules_namesSet list of module names to try to load in forkserver process.module_names must be a list of stringsReturn list of fds inherited from parent process. + + This returns None if the current process was not started by fork + server. + fdsRequest forkserver to create a child process. + + Returns a pair of fds (status_r, data_w). The calling process can read + the child process's pid and (eventually) its returncode from status_r. + The calling process should write to data_w the pickled preparation and + process data. + too many fdsparent_rchild_wchild_rparent_wgetfdallfdssendfdsMake sure that a fork server is running. + + This can be called from any process. Note that usually a child + process will just reuse the forkserver started by its parent, so + ensure_running() will do nothing. + from multiprocessing.forkserver import main; main(%d, %d, %r, **%r)main_pathsys_pathdesired_keysget_preparation_data3840o600alive_ralive_wfds_to_passget_executableexespawnv_passfdslistener_fdpreloadRun forkserver._inheritingimport_main_path_close_stdinsig_rsig_wset_blockingsigchld_handler_unusedpid_to_fdDefaultSelector_forkserverrfdsNot at EOF?stsWIFSIGNALEDWTERMSIGWIFEXITEDChild {0:n} status is {1:n}WEXITSTATUSwrite_signedforkserver: waitpid returned unexpected pid %d'forkserver: waitpid returned ''unexpected pid %d'recvfdsToo many ({0:n}) fds to sendunused_fds_serve_one_resource_tracker_fdparent_sentinel_mainread_signedunexpected EOFshould not get here# large enough for pid_t# Forkserver class# Method used by unit tests to stop the server# close the "alive" file descriptor asks the server to stop# forkserver was launched before, is it still running?# still alive# dead, launch it again# all client processes own the write end of the "alive" pipe;# when they all terminate the read end becomes ready.# Dummy signal handler, doesn't do anything# unblocking SIGCHLD allows the wakeup fd to notify our event loop# protect the process from ^C# calling os.write() in the Python signal handler is racy# map child pids to client fds# EOF because no more client processes left# Got SIGCHLD# exhaust# Scan for child processes# Send exit code to client process# client vanished# This shouldn't happen really# Incoming fork request# Receive fds from client# Child# Send pid to client process# close unnecessary stuff and reset signal handlers# Run process object received over pipe# Read and write signed numbersb'ensure_running'u'ensure_running'b'get_inherited_fds'u'get_inherited_fds'b'connect_to_new_process'u'connect_to_new_process'b'set_forkserver_preload'u'set_forkserver_preload'b'Set list of module names to try to load in forkserver process.'u'Set list of module names to try to load in forkserver process.'b'module_names must be a list of strings'u'module_names must be a list of strings'b'Return list of fds inherited from parent process. + + This returns None if the current process was not started by fork + server. + 'u'Return list of fds inherited from parent process. + + This returns None if the current process was not started by fork + server. + 'b'Request forkserver to create a child process. + + Returns a pair of fds (status_r, data_w). 
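The forkserver strings above belong to multiprocessing's forkserver start method: a single server process is launched once (ensure_running), later children are requested from it over a pipe (connect_to_new_process hands back a status fd and a data fd), and set_forkserver_preload names modules the server should import up front. From user code this is normally driven through the public multiprocessing API; a minimal sketch, with a made-up worker function (POSIX only):

    import multiprocessing as mp

    def square(x):
        return x * x

    if __name__ == "__main__":
        # Use the fork server instead of plain fork or spawn.
        mp.set_start_method("forkserver")
        # Ask the fork server to import these modules once, so every
        # child starts with them already loaded.
        mp.set_forkserver_preload(["json", "decimal"])

        with mp.Pool(processes=2) as pool:
            print(pool.map(square, range(5)))   # [0, 1, 4, 9, 16]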
The calling process can read + the child process's pid and (eventually) its returncode from status_r. + The calling process should write to data_w the pickled preparation and + process data. + 'u'Request forkserver to create a child process. + + Returns a pair of fds (status_r, data_w). The calling process can read + the child process's pid and (eventually) its returncode from status_r. + The calling process should write to data_w the pickled preparation and + process data. + 'b'too many fds'u'too many fds'b'Make sure that a fork server is running. + + This can be called from any process. Note that usually a child + process will just reuse the forkserver started by its parent, so + ensure_running() will do nothing. + 'u'Make sure that a fork server is running. + + This can be called from any process. Note that usually a child + process will just reuse the forkserver started by its parent, so + ensure_running() will do nothing. + 'b'from multiprocessing.forkserver import main; 'u'from multiprocessing.forkserver import main; 'b'main(%d, %d, %r, **%r)'u'main(%d, %d, %r, **%r)'b'main_path'u'main_path'b'sys_path'u'sys_path'b'Run forkserver.'u'Run forkserver.'b'Not at EOF?'u'Not at EOF?'b'Child {0:n} status is {1:n}'u'Child {0:n} status is {1:n}'b'forkserver: waitpid returned unexpected pid %d'u'forkserver: waitpid returned unexpected pid %d'b'Too many ({0:n}) fds to send'u'Too many ({0:n}) fds to send'b'unexpected EOF'u'unexpected EOF'b'should not get here'u'should not get here'u'multiprocessing.forkserver'partialmethodfunc_repr at _format_args_and_kwargsFormat function arguments and keyword arguments. + + Special case for a single parameter: ('hello',) is formatted as ('hello'). + Replacement for traceback.extract_stack() that only does the + necessary work for asyncio debug mode. + StackSummaryextractwalk_stacklookup_lines# use reprlib to limit the length of the output# Limit the amount of work to a reasonable amount, as extract_stack()# can be called for each coroutine and future in debug mode.b' at 'u' at 'b'Format function arguments and keyword arguments. + + Special case for a single parameter: ('hello',) is formatted as ('hello'). + 'u'Format function arguments and keyword arguments. + + Special case for a single parameter: ('hello',) is formatted as ('hello'). + 'b'Replacement for traceback.extract_stack() that only does the + necessary work for asyncio debug mode. + 'u'Replacement for traceback.extract_stack() that only does the + necessary work for asyncio debug mode. + 'u'asyncio.format_helpers'u'format_helpers' +Generic framework path manipulation +(?x) +(?P^.*)(?:^|/) +(?P + (?P\w+).framework/ + (?:Versions/(?P[^/]+)/)? + (?P=shortname) + (?:_(?P[^_]+))? 
+)$ +STRICT_FRAMEWORK_RE + A framework name can take one of the following four forms: + Location/Name.framework/Versions/SomeVersion/Name_Suffix + Location/Name.framework/Versions/SomeVersion/Name + Location/Name.framework/Name_Suffix + Location/Name.framework/Name + + returns None if not found, or a mapping equivalent to: + dict( + location='Location', + name='Name.framework/Versions/SomeVersion/Name_Suffix', + shortname='Name', + version='SomeVersion', + suffix='Suffix', + ) + + Note that SomeVersion and Suffix are optional and may be None + if not present + is_frameworktest_framework_infocompletely/invalid/_debugP/F.frameworkP/F.framework/_debugP/F.framework/FF.framework/FP/F.framework/F_debugF.framework/F_debugP/F.framework/VersionsP/F.framework/Versions/AP/F.framework/Versions/A/FF.framework/Versions/A/FP/F.framework/Versions/A/F_debugF.framework/Versions/A/F_debugb' +Generic framework path manipulation +'u' +Generic framework path manipulation +'b'(?x) +(?P^.*)(?:^|/) +(?P + (?P\w+).framework/ + (?:Versions/(?P[^/]+)/)? + (?P=shortname) + (?:_(?P[^_]+))? +)$ +'u'(?x) +(?P^.*)(?:^|/) +(?P + (?P\w+).framework/ + (?:Versions/(?P[^/]+)/)? + (?P=shortname) + (?:_(?P[^_]+))? +)$ +'b' + A framework name can take one of the following four forms: + Location/Name.framework/Versions/SomeVersion/Name_Suffix + Location/Name.framework/Versions/SomeVersion/Name + Location/Name.framework/Name_Suffix + Location/Name.framework/Name + + returns None if not found, or a mapping equivalent to: + dict( + location='Location', + name='Name.framework/Versions/SomeVersion/Name_Suffix', + shortname='Name', + version='SomeVersion', + suffix='Suffix', + ) + + Note that SomeVersion and Suffix are optional and may be None + if not present + 'u' + A framework name can take one of the following four forms: + Location/Name.framework/Versions/SomeVersion/Name_Suffix + Location/Name.framework/Versions/SomeVersion/Name + Location/Name.framework/Name_Suffix + Location/Name.framework/Name + + returns None if not found, or a mapping equivalent to: + dict( + location='Location', + name='Name.framework/Versions/SomeVersion/Name_Suffix', + shortname='Name', + version='SomeVersion', + suffix='Suffix', + ) + + Note that SomeVersion and Suffix are optional and may be None + if not present + 'b'completely/invalid/_debug'u'completely/invalid/_debug'b'P/F.framework'u'P/F.framework'b'P/F.framework/_debug'u'P/F.framework/_debug'b'P/F.framework/F'u'P/F.framework/F'b'F.framework/F'u'F.framework/F'b'P/F.framework/F_debug'u'P/F.framework/F_debug'b'F.framework/F_debug'u'F.framework/F_debug'b'P/F.framework/Versions'u'P/F.framework/Versions'b'P/F.framework/Versions/A'u'P/F.framework/Versions/A'b'P/F.framework/Versions/A/F'u'P/F.framework/Versions/A/F'b'F.framework/Versions/A/F'u'F.framework/Versions/A/F'b'P/F.framework/Versions/A/F_debug'u'P/F.framework/Versions/A/F_debug'b'F.framework/Versions/A/F_debug'u'F.framework/Versions/A/F_debug'u'ctypes.macholib.framework'u'macholib.framework'u'framework'An FTP client class and some helper functions. + +Based on RFC 959: File Transfer Protocol (FTP), by J. Postel and J. Reynolds + +Example: + +>>> from ftplib import FTP +>>> ftp = FTP('ftp.python.org') # connect to host, default port +>>> ftp.login() # default, i.e.: user anonymous, passwd anonymous@ +'230 Guest login ok, access restrictions apply.' +>>> ftp.retrlines('LIST') # list directory contents +total 9 +drwxr-xr-x 8 root wheel 1024 Jan 3 1994 . +drwxr-xr-x 8 root wheel 1024 Jan 3 1994 .. 
+drwxr-xr-x 2 root wheel 1024 Jan 3 1994 bin +drwxr-xr-x 2 root wheel 1024 Jan 3 1994 etc +d-wxrwxr-x 2 ftp wheel 1024 Sep 5 13:43 incoming +drwxr-xr-x 2 root wheel 1024 Nov 17 1993 lib +drwxr-xr-x 6 1094 wheel 1024 Sep 13 19:07 pub +drwxr-xr-x 3 root wheel 1024 Jan 3 1994 usr +-rw-r--r-- 1 root root 312 Aug 1 1994 welcome.msg +'226 Transfer complete.' +>>> ftp.quit() +'221 Goodbye.' +>>> + +A nice test that reveals some of the network dialogue would be: +python ftplib.py -d localhost -l -p -l +FTPerror_replyerror_temperror_permerror_protoall_errorsFTP_PORTMAXLINEB_CRLFAn FTP client class. + + To create a connection, call the class using these arguments: + host, user, passwd, acct, timeout + + The first four arguments are all strings, and have default value ''. + timeout must be numeric and defaults to None if not passed, + meaning that no timeout will be set on any ftp socket(s) + If a timeout is passed, then this is now the default timeout for all ftp + socket operations for this instance. + + Then use self.connect() with optional host and port argument. + + To download a file, use ftp.retrlines('RETR ' + filename), + or ftp.retrbinary() with slightly different arguments. + To upload a file, use ftp.storlines() or ftp.storbinary(), + which have an open file as argument (see their definitions + below for details). + The download/upload functions first issue appropriate TYPE + and PORT or PASV commands. + debuggingmaxlinewelcomepassiveservertrust_server_pasv_ipv4_addressuserpasswdacctloginConnect to host. Arguments are: + - host: hostname to connect to (string, default previous host) + - port: port to connect to (integer, default previous port) + - timeout: the timeout to set against the ftp socket(s) + - source_address: a 2-tuple (host, port) for the socket to bind + to as its source address before connecting. + ftplib.connectgetrespgetwelcomeGet the welcome message from the server. + (this is read and squirreled away by connect())*welcome*Set the debugging level. + The required argument level means: + 0: no debugging output (default) + 1: print commands and responses but not body text etc. + 2: also print raw lines read and sent before stripping CR/LFset_pasvUse passive or active mode for data transfers. + With a false argument, use the normal PORT mode, + With a true argument, use the PASV command.pass PASS putlinean illegal newline character should not be containedftplib.sendcmd*put*putcmd*cmd*got more than %d bytes*get*getmultilinenextline*resp*lastrespvoidrespExpect a response beginning with '2'.abortAbort a file transfer. Uses out-of-band data. + This does not follow the procedure from the RFC to send Telnet + IP and Synch; that doesn't seem to work with the servers I've + tried. Instead, just send the ABOR command as OOB data.ABOR*put urgent*sendcmdSend a command and return the response.voidcmdSend a command and expect a response beginning with '2'.sendportSend a PORT command with the current host and the given + port number. + hbytespbytesPORT sendeprtSend an EPRT command with the current host and the given port number.unsupported address familyEPRT makeportCreate a new socket and send a PORT command for it.makepasvInternal: Does the PASV or EPSV handshake -> (address, port)parse227PASVuntrusted_hostparse229EPSVntransfercmdInitiate a transfer over the data connection. + + If the transfer is active, send a port command and the + transfer command, and accept the connection. If the server is + passive, send a pasv command, connect to it, and start the + transfer command. 
Either way, return the socket for the + connection and the expected size of the transfer. The + expected size may be None if it could not be determined. + + Optional `rest' argument can be a string that is sent as the + argument to a REST command. This is essentially a server + marker used to tell the server to skip over any data up to the + given marker. + REST %sparse150transfercmdLike ntransfercmd() but returns only the socket.Login, default anonymous.anonymousanonymous@USER ACCT retrbinaryRetrieve data in binary mode. A new port is created for you. + + Args: + cmd: A RETR command. + callback: A single parameter callable to be called on each + block of data read. + blocksize: The maximum number of bytes to read from the + socket at one time. [default: 8192] + rest: Passed to transfercmd(). [default: None] + + Returns: + The response code. + TYPE IretrlinesRetrieve data in line mode. A new port is created for you. + + Args: + cmd: A RETR, LIST, or NLST command. + callback: An optional single parameter callable that is called + for each line with the trailing CRLF stripped. + [default: print_line()] + + Returns: + The response code. + print_lineTYPE A*retr*storbinaryStore a file in binary mode. A new port is created for you. + + Args: + cmd: A STOR command. + fp: A file-like object with a read(num_bytes) method. + blocksize: The maximum data size to read from fp and send over + the connection at once. [default: 8192] + callback: An optional single parameter callable that is called on + each block of data after it is sent. [default: None] + rest: Passed to transfercmd(). [default: None] + + Returns: + The response code. + storlinesStore a file in line mode. A new port is created for you. + + Args: + cmd: A STOR command. + fp: A file-like object with a readline() method. + callback: An optional single parameter callable that is called on + each line after it is sent. [default: None] + + Returns: + The response code. + passwordSend new account name.nlstReturn a list of files in a given directory (default the current).NLSTList a directory in long form. + By default list current directory to stdout. + Optional last argument is callback function; all + non-empty arguments before it are concatenated to the + LIST command. (This *should* only be used for a pathname.)LISTmlsdfactsList a directory in a standardized format by using MLSD + command (RFC-3659). If path is omitted the current directory + is assumed. "facts" is a list of strings representing the type + of information desired (e.g. ["type", "size", "perm"]). + + Return a generator object yielding a tuple of two elements + for every file found in path. + First element is the file name, the second one is a dictionary + including a variable number of "facts" depending on the server + and whether "facts" argument has been provided. + OPTS MLST MLSD %sMLSDfacts_foundfactfromnametonameRename a file.RNFR RNTO Delete a file.DELE cwdChange to a directory.CDUPCWD Retrieve the size of a file.SIZE mkdMake a directory, return its full pathname.MKD parse257rmdRemove a directory.RMD pwdReturn current working directory.PWDQuit, and close the connection.Close the connection without assuming anything about it.SSLSocketFTP_TLSA FTP subclass which adds TLS support to FTP as described + in RFC-4217. + + Connect as usual to port 21 implicitly securing the FTP control + connection before authenticating. + + Securing the data connection requires user to explicitly ask + for it by calling prot_p() method. 
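The FTP method docstrings in this stretch (connect, login, retrlines, retrbinary, storbinary, ntransfercmd, ...) describe the standard ftplib client. A hedged sketch of a typical anonymous download using only that documented API (the host and file names are placeholders):

    from ftplib import FTP

    # Placeholder host; any anonymous FTP server would do.
    with FTP("ftp.example.org", timeout=30) as ftp:
        ftp.login()                       # anonymous login by default
        ftp.cwd("pub")                    # change working directory
        ftp.retrlines("LIST")             # print a long listing to stdout

        # Binary download: retrbinary feeds each received block to the callback.
        with open("welcome.msg", "wb") as fp:
            ftp.retrbinary("RETR welcome.msg", fp.write, blocksize=8192)

For the FTP_TLS subclass whose docstring begins just above, the same calls apply once login() and prot_p() have secured the control and data channels.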
+ + Usage example: + >>> from ftplib import FTP_TLS + >>> ftps = FTP_TLS('ftp.python.org') + >>> ftps.login() # login anonymously previously securing control channel + '230 Guest login ok, access restrictions apply.' + >>> ftps.prot_p() # switch to secure data connection + '200 Protection level set to P' + >>> ftps.retrlines('LIST') # list directory content securely + total 9 + drwxr-xr-x 8 root wheel 1024 Jan 3 1994 . + drwxr-xr-x 8 root wheel 1024 Jan 3 1994 .. + drwxr-xr-x 2 root wheel 1024 Jan 3 1994 bin + drwxr-xr-x 2 root wheel 1024 Jan 3 1994 etc + d-wxrwxr-x 2 ftp wheel 1024 Sep 5 13:43 incoming + drwxr-xr-x 2 root wheel 1024 Nov 17 1993 lib + drwxr-xr-x 6 1094 wheel 1024 Sep 13 19:07 pub + drwxr-xr-x 3 root wheel 1024 Jan 3 1994 usr + -rw-r--r-- 1 root root 312 Aug 1 1994 welcome.msg + '226 Transfer complete.' + >>> ftps.quit() + '221 Goodbye.' + >>> + ssl_versionkeyfilecertfilecontext and keyfile arguments are mutually exclusive"context and keyfile arguments are mutually ""exclusive"context and certfile arguments are mutually exclusive"context and certfile arguments are mutually "keyfile and certfile are deprecated, use a custom context instead"keyfile and certfile are deprecated, use a ""custom context instead"_create_stdlib_context_prot_psecureSet up secure control connection by using TLS/SSL.Already using TLSAUTH TLSAUTH SSLcccSwitch back to a clear-text control connection.not using TLSCCCprot_pSet up secure data connection.PBSZ 0PROT Pprot_cSet up clear text data connection.PROT C_150_reParse the '150' response for a RETR request. + Returns the expected transfer size or None; size is not guaranteed to + be present in the 150 message. + 150 .* \((\d+) bytes\)_227_reParse the '227' response for a PASV request. + Raises error_proto if it does not contain '(h1,h2,h3,h4,p1,p2)' + Return ('host.addr.as.numbers', port#) tuple.(\d+),(\d+),(\d+),(\d+),(\d+),(\d+)Parse the '229' response for an EPSV request. + Raises error_proto if it does not contain '(|||port|)' + Return ('host.addr.as.numbers', port#) tuple.Parse the '257' response for a MKD or PWD request. + This is a response to a MKD or PWD request: a directory name. + Returns the directoryname in the 257 reply. "Default retrlines callback to print a line.ftpcpsourcenametargetnameCopy file from one FTP-instance to another.TYPE sourcehostsourceportSTOR treply125RETR sreplyTest program. + Usage: ftp [-d] [-r[file]] host [-l[dir]] [-d[dir]] [-p] [file] ... 
+ + -d dir + -l list + -p password + netrcrcfile-rftpuseridnetrcobjauthenticatorsNo account -- using anonymous login.Could not open account file -- using anonymous login."Could not open account file"" -- using anonymous login."CWD-p# Changes and improvements suggested by Steve Majewski.# Modified by Jack to work on the mac.# Modified by Siebren to support docstrings and PASV.# Modified by Phil Schwartz to add storbinary and storlines callbacks.# Modified by Giampaolo Rodola' to add TLS support.# Magic number from # Process data out of band# The standard FTP server control port# The sizehint parameter passed to readline() calls# Exception raised when an error or invalid response is received# unexpected [123]xx reply# 4xx errors# 5xx errors# response does not begin with [1-5]# All exceptions (hopefully) that may be raised here and that aren't# (always) programming errors on our side# Line terminators (we always output CRLF, but accept any of CRLF, CR, LF)# The class itself# Disables https://bugs.python.org/issue43285 security if set to True.# Initialization method (called by class instantiation).# Initialize host to localhost, port to standard ftp port# Optional arguments are host (for connect()),# and user, passwd, acct (for login())# Context management protocol: try to quit() if active# Internal: "sanitize" a string for printing# Internal: send one line to the server, appending CRLF# Internal: send one command to the server (through putline())# Internal: return one line from the server, stripping CRLF.# Raise EOFError if the connection is closed# Internal: get a response from the server, which may possibly# consist of multiple lines. Return a single string with no# trailing CRLF. If the response consists of multiple lines,# these are separated by '\n' characters in the string# Internal: get a response from the server.# Raise various errors if the response indicates an error# Get proper port# Get proper host# Some servers apparently send a 200 reply to# a LIST or STOR command, before the 150 reply# (and way before the 226 reply). This seems to# be in violation of the protocol (which only allows# 1xx or error messages for LIST), so we just discard# this response.# See above.# this is conditional in case we received a 125# If there is no anonymous ftp password specified# then we'll just use anonymous@# We don't send any other thing because:# - We want to remain anonymous# - We want to stop SPAM# - We don't want to let ftp sites to discriminate by the user,# host or country.# shutdown ssl layer# does nothing, but could return error# The SIZE command is defined in RFC-3659# fix around non-compliant implementations such as IIS shipped# with Windows server 2003# PROT defines whether or not the data channel is to be protected.# Though RFC-2228 defines four possible protection levels,# RFC-4217 only recommends two, Clear and Private.# Clear (PROT C) means that no security is to be used on the# data-channel, Private (PROT P) means that the data-channel# should be protected by TLS.# PBSZ command MUST still be issued, but must have a parameter of# '0' to indicate that no buffering is taking place and the data# connection should not be encapsulated.# --- Overridden FTP methods# overridden as we can't pass MSG_OOB flag to sendall()# should contain '(|||port|)'# Not compliant to RFC 959, but UNIX ftpd does this# RFC 959: the user must "listen" [...] 
BEFORE sending the# transfer request.# So: STOR before RETR, because here the target is a "user".# RFC 959# get name of alternate ~/.netrc file:# no account for hostb'An FTP client class and some helper functions. + +Based on RFC 959: File Transfer Protocol (FTP), by J. Postel and J. Reynolds + +Example: + +>>> from ftplib import FTP +>>> ftp = FTP('ftp.python.org') # connect to host, default port +>>> ftp.login() # default, i.e.: user anonymous, passwd anonymous@ +'230 Guest login ok, access restrictions apply.' +>>> ftp.retrlines('LIST') # list directory contents +total 9 +drwxr-xr-x 8 root wheel 1024 Jan 3 1994 . +drwxr-xr-x 8 root wheel 1024 Jan 3 1994 .. +drwxr-xr-x 2 root wheel 1024 Jan 3 1994 bin +drwxr-xr-x 2 root wheel 1024 Jan 3 1994 etc +d-wxrwxr-x 2 ftp wheel 1024 Sep 5 13:43 incoming +drwxr-xr-x 2 root wheel 1024 Nov 17 1993 lib +drwxr-xr-x 6 1094 wheel 1024 Sep 13 19:07 pub +drwxr-xr-x 3 root wheel 1024 Jan 3 1994 usr +-rw-r--r-- 1 root root 312 Aug 1 1994 welcome.msg +'226 Transfer complete.' +>>> ftp.quit() +'221 Goodbye.' +>>> + +A nice test that reveals some of the network dialogue would be: +python ftplib.py -d localhost -l -p -l +'u'An FTP client class and some helper functions. + +Based on RFC 959: File Transfer Protocol (FTP), by J. Postel and J. Reynolds + +Example: + +>>> from ftplib import FTP +>>> ftp = FTP('ftp.python.org') # connect to host, default port +>>> ftp.login() # default, i.e.: user anonymous, passwd anonymous@ +'230 Guest login ok, access restrictions apply.' +>>> ftp.retrlines('LIST') # list directory contents +total 9 +drwxr-xr-x 8 root wheel 1024 Jan 3 1994 . +drwxr-xr-x 8 root wheel 1024 Jan 3 1994 .. +drwxr-xr-x 2 root wheel 1024 Jan 3 1994 bin +drwxr-xr-x 2 root wheel 1024 Jan 3 1994 etc +d-wxrwxr-x 2 ftp wheel 1024 Sep 5 13:43 incoming +drwxr-xr-x 2 root wheel 1024 Nov 17 1993 lib +drwxr-xr-x 6 1094 wheel 1024 Sep 13 19:07 pub +drwxr-xr-x 3 root wheel 1024 Jan 3 1994 usr +-rw-r--r-- 1 root root 312 Aug 1 1994 welcome.msg +'226 Transfer complete.' +>>> ftp.quit() +'221 Goodbye.' +>>> + +A nice test that reveals some of the network dialogue would be: +python ftplib.py -d localhost -l -p -l +'b'FTP'u'FTP'b'error_reply'u'error_reply'b'error_temp'u'error_temp'b'error_perm'u'error_perm'b'error_proto'u'error_proto'b'all_errors'u'all_errors'b'An FTP client class. + + To create a connection, call the class using these arguments: + host, user, passwd, acct, timeout + + The first four arguments are all strings, and have default value ''. + timeout must be numeric and defaults to None if not passed, + meaning that no timeout will be set on any ftp socket(s) + If a timeout is passed, then this is now the default timeout for all ftp + socket operations for this instance. + + Then use self.connect() with optional host and port argument. + + To download a file, use ftp.retrlines('RETR ' + filename), + or ftp.retrbinary() with slightly different arguments. + To upload a file, use ftp.storlines() or ftp.storbinary(), + which have an open file as argument (see their definitions + below for details). + The download/upload functions first issue appropriate TYPE + and PORT or PASV commands. + 'u'An FTP client class. + + To create a connection, call the class using these arguments: + host, user, passwd, acct, timeout + + The first four arguments are all strings, and have default value ''. 
+ timeout must be numeric and defaults to None if not passed, + meaning that no timeout will be set on any ftp socket(s) + If a timeout is passed, then this is now the default timeout for all ftp + socket operations for this instance. + + Then use self.connect() with optional host and port argument. + + To download a file, use ftp.retrlines('RETR ' + filename), + or ftp.retrbinary() with slightly different arguments. + To upload a file, use ftp.storlines() or ftp.storbinary(), + which have an open file as argument (see their definitions + below for details). + The download/upload functions first issue appropriate TYPE + and PORT or PASV commands. + 'b'Connect to host. Arguments are: + - host: hostname to connect to (string, default previous host) + - port: port to connect to (integer, default previous port) + - timeout: the timeout to set against the ftp socket(s) + - source_address: a 2-tuple (host, port) for the socket to bind + to as its source address before connecting. + 'u'Connect to host. Arguments are: + - host: hostname to connect to (string, default previous host) + - port: port to connect to (integer, default previous port) + - timeout: the timeout to set against the ftp socket(s) + - source_address: a 2-tuple (host, port) for the socket to bind + to as its source address before connecting. + 'b'ftplib.connect'u'ftplib.connect'b'Get the welcome message from the server. + (this is read and squirreled away by connect())'u'Get the welcome message from the server. + (this is read and squirreled away by connect())'b'*welcome*'u'*welcome*'b'Set the debugging level. + The required argument level means: + 0: no debugging output (default) + 1: print commands and responses but not body text etc. + 2: also print raw lines read and sent before stripping CR/LF'u'Set the debugging level. + The required argument level means: + 0: no debugging output (default) + 1: print commands and responses but not body text etc. + 2: also print raw lines read and sent before stripping CR/LF'b'Use passive or active mode for data transfers. + With a false argument, use the normal PORT mode, + With a true argument, use the PASV command.'u'Use passive or active mode for data transfers. + With a false argument, use the normal PORT mode, + With a true argument, use the PASV command.'b'pass 'u'pass 'b'PASS 'u'PASS 'b'an illegal newline character should not be contained'u'an illegal newline character should not be contained'b'ftplib.sendcmd'u'ftplib.sendcmd'b'*put*'u'*put*'b'*cmd*'u'*cmd*'b'got more than %d bytes'u'got more than %d bytes'b'*get*'u'*get*'b'*resp*'u'*resp*'b'Expect a response beginning with '2'.'u'Expect a response beginning with '2'.'b'Abort a file transfer. Uses out-of-band data. + This does not follow the procedure from the RFC to send Telnet + IP and Synch; that doesn't seem to work with the servers I've + tried. Instead, just send the ABOR command as OOB data.'u'Abort a file transfer. Uses out-of-band data. + This does not follow the procedure from the RFC to send Telnet + IP and Synch; that doesn't seem to work with the servers I've + tried. Instead, just send the ABOR command as OOB data.'b'ABOR'b'*put urgent*'u'*put urgent*'b'426'u'426'b'225'u'225'b'226'u'226'b'Send a command and return the response.'u'Send a command and return the response.'b'Send a command and expect a response beginning with '2'.'u'Send a command and expect a response beginning with '2'.'b'Send a PORT command with the current host and the given + port number. 
+ 'u'Send a PORT command with the current host and the given + port number. + 'b'PORT 'u'PORT 'b'Send an EPRT command with the current host and the given port number.'u'Send an EPRT command with the current host and the given port number.'b'unsupported address family'u'unsupported address family'b'EPRT 'u'EPRT 'b'Create a new socket and send a PORT command for it.'u'Create a new socket and send a PORT command for it.'b'Internal: Does the PASV or EPSV handshake -> (address, port)'u'Internal: Does the PASV or EPSV handshake -> (address, port)'b'PASV'u'PASV'b'EPSV'u'EPSV'b'Initiate a transfer over the data connection. + + If the transfer is active, send a port command and the + transfer command, and accept the connection. If the server is + passive, send a pasv command, connect to it, and start the + transfer command. Either way, return the socket for the + connection and the expected size of the transfer. The + expected size may be None if it could not be determined. + + Optional `rest' argument can be a string that is sent as the + argument to a REST command. This is essentially a server + marker used to tell the server to skip over any data up to the + given marker. + 'u'Initiate a transfer over the data connection. + + If the transfer is active, send a port command and the + transfer command, and accept the connection. If the server is + passive, send a pasv command, connect to it, and start the + transfer command. Either way, return the socket for the + connection and the expected size of the transfer. The + expected size may be None if it could not be determined. + + Optional `rest' argument can be a string that is sent as the + argument to a REST command. This is essentially a server + marker used to tell the server to skip over any data up to the + given marker. + 'b'REST %s'u'REST %s'b'150'u'150'b'Like ntransfercmd() but returns only the socket.'u'Like ntransfercmd() but returns only the socket.'b'Login, default anonymous.'u'Login, default anonymous.'b'anonymous'u'anonymous'b'anonymous@'u'anonymous@'b'USER 'u'USER 'b'ACCT 'u'ACCT 'b'Retrieve data in binary mode. A new port is created for you. + + Args: + cmd: A RETR command. + callback: A single parameter callable to be called on each + block of data read. + blocksize: The maximum number of bytes to read from the + socket at one time. [default: 8192] + rest: Passed to transfercmd(). [default: None] + + Returns: + The response code. + 'u'Retrieve data in binary mode. A new port is created for you. + + Args: + cmd: A RETR command. + callback: A single parameter callable to be called on each + block of data read. + blocksize: The maximum number of bytes to read from the + socket at one time. [default: 8192] + rest: Passed to transfercmd(). [default: None] + + Returns: + The response code. + 'b'TYPE I'u'TYPE I'b'Retrieve data in line mode. A new port is created for you. + + Args: + cmd: A RETR, LIST, or NLST command. + callback: An optional single parameter callable that is called + for each line with the trailing CRLF stripped. + [default: print_line()] + + Returns: + The response code. + 'u'Retrieve data in line mode. A new port is created for you. + + Args: + cmd: A RETR, LIST, or NLST command. + callback: An optional single parameter callable that is called + for each line with the trailing CRLF stripped. + [default: print_line()] + + Returns: + The response code. + 'b'TYPE A'u'TYPE A'b'*retr*'u'*retr*'b'Store a file in binary mode. A new port is created for you. + + Args: + cmd: A STOR command. 
+ fp: A file-like object with a read(num_bytes) method. + blocksize: The maximum data size to read from fp and send over + the connection at once. [default: 8192] + callback: An optional single parameter callable that is called on + each block of data after it is sent. [default: None] + rest: Passed to transfercmd(). [default: None] + + Returns: + The response code. + 'u'Store a file in binary mode. A new port is created for you. + + Args: + cmd: A STOR command. + fp: A file-like object with a read(num_bytes) method. + blocksize: The maximum data size to read from fp and send over + the connection at once. [default: 8192] + callback: An optional single parameter callable that is called on + each block of data after it is sent. [default: None] + rest: Passed to transfercmd(). [default: None] + + Returns: + The response code. + 'b'Store a file in line mode. A new port is created for you. + + Args: + cmd: A STOR command. + fp: A file-like object with a readline() method. + callback: An optional single parameter callable that is called on + each line after it is sent. [default: None] + + Returns: + The response code. + 'u'Store a file in line mode. A new port is created for you. + + Args: + cmd: A STOR command. + fp: A file-like object with a readline() method. + callback: An optional single parameter callable that is called on + each line after it is sent. [default: None] + + Returns: + The response code. + 'b'Send new account name.'u'Send new account name.'b'Return a list of files in a given directory (default the current).'u'Return a list of files in a given directory (default the current).'b'NLST'u'NLST'b'List a directory in long form. + By default list current directory to stdout. + Optional last argument is callback function; all + non-empty arguments before it are concatenated to the + LIST command. (This *should* only be used for a pathname.)'u'List a directory in long form. + By default list current directory to stdout. + Optional last argument is callback function; all + non-empty arguments before it are concatenated to the + LIST command. (This *should* only be used for a pathname.)'b'LIST'u'LIST'b'List a directory in a standardized format by using MLSD + command (RFC-3659). If path is omitted the current directory + is assumed. "facts" is a list of strings representing the type + of information desired (e.g. ["type", "size", "perm"]). + + Return a generator object yielding a tuple of two elements + for every file found in path. + First element is the file name, the second one is a dictionary + including a variable number of "facts" depending on the server + and whether "facts" argument has been provided. + 'u'List a directory in a standardized format by using MLSD + command (RFC-3659). If path is omitted the current directory + is assumed. "facts" is a list of strings representing the type + of information desired (e.g. ["type", "size", "perm"]). + + Return a generator object yielding a tuple of two elements + for every file found in path. + First element is the file name, the second one is a dictionary + including a variable number of "facts" depending on the server + and whether "facts" argument has been provided. 
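mlsd(), whose docstring closes just above, yields (name, facts) pairs in the machine-readable MLSD listing format of RFC 3659. A small sketch of iterating over it (server and path are placeholders; a server without MLSD support answers with a 5xx reply, which ftplib raises as error_perm):

    from ftplib import FTP, error_perm

    with FTP("ftp.example.org") as ftp:
        ftp.login()
        try:
            for name, facts in ftp.mlsd(path="pub", facts=["type", "size", "modify"]):
                # 'facts' is a dict; which keys appear depends on the server.
                if facts.get("type") == "file":
                    print(name, facts.get("size"), facts.get("modify"))
        except error_perm:
            # No MLSD support; fall back to the plain LIST format.
            ftp.retrlines("LIST pub")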
+ 'b'OPTS MLST 'u'OPTS MLST 'b'MLSD %s'u'MLSD %s'b'MLSD'u'MLSD'b'Rename a file.'u'Rename a file.'b'RNFR 'u'RNFR 'b'RNTO 'u'RNTO 'b'Delete a file.'u'Delete a file.'b'DELE 'u'DELE 'b'250'u'250'b'200'u'200'b'Change to a directory.'u'Change to a directory.'b'CDUP'u'CDUP'b'CWD 'u'CWD 'b'Retrieve the size of a file.'u'Retrieve the size of a file.'b'SIZE 'u'SIZE 'b'213'u'213'b'Make a directory, return its full pathname.'u'Make a directory, return its full pathname.'b'MKD 'u'MKD 'b'257'u'257'b'Remove a directory.'u'Remove a directory.'b'RMD 'u'RMD 'b'Return current working directory.'u'Return current working directory.'b'PWD'u'PWD'b'Quit, and close the connection.'u'Quit, and close the connection.'b'Close the connection without assuming anything about it.'u'Close the connection without assuming anything about it.'b'A FTP subclass which adds TLS support to FTP as described + in RFC-4217. + + Connect as usual to port 21 implicitly securing the FTP control + connection before authenticating. + + Securing the data connection requires user to explicitly ask + for it by calling prot_p() method. + + Usage example: + >>> from ftplib import FTP_TLS + >>> ftps = FTP_TLS('ftp.python.org') + >>> ftps.login() # login anonymously previously securing control channel + '230 Guest login ok, access restrictions apply.' + >>> ftps.prot_p() # switch to secure data connection + '200 Protection level set to P' + >>> ftps.retrlines('LIST') # list directory content securely + total 9 + drwxr-xr-x 8 root wheel 1024 Jan 3 1994 . + drwxr-xr-x 8 root wheel 1024 Jan 3 1994 .. + drwxr-xr-x 2 root wheel 1024 Jan 3 1994 bin + drwxr-xr-x 2 root wheel 1024 Jan 3 1994 etc + d-wxrwxr-x 2 ftp wheel 1024 Sep 5 13:43 incoming + drwxr-xr-x 2 root wheel 1024 Nov 17 1993 lib + drwxr-xr-x 6 1094 wheel 1024 Sep 13 19:07 pub + drwxr-xr-x 3 root wheel 1024 Jan 3 1994 usr + -rw-r--r-- 1 root root 312 Aug 1 1994 welcome.msg + '226 Transfer complete.' + >>> ftps.quit() + '221 Goodbye.' + >>> + 'u'A FTP subclass which adds TLS support to FTP as described + in RFC-4217. + + Connect as usual to port 21 implicitly securing the FTP control + connection before authenticating. + + Securing the data connection requires user to explicitly ask + for it by calling prot_p() method. + + Usage example: + >>> from ftplib import FTP_TLS + >>> ftps = FTP_TLS('ftp.python.org') + >>> ftps.login() # login anonymously previously securing control channel + '230 Guest login ok, access restrictions apply.' + >>> ftps.prot_p() # switch to secure data connection + '200 Protection level set to P' + >>> ftps.retrlines('LIST') # list directory content securely + total 9 + drwxr-xr-x 8 root wheel 1024 Jan 3 1994 . + drwxr-xr-x 8 root wheel 1024 Jan 3 1994 .. + drwxr-xr-x 2 root wheel 1024 Jan 3 1994 bin + drwxr-xr-x 2 root wheel 1024 Jan 3 1994 etc + d-wxrwxr-x 2 ftp wheel 1024 Sep 5 13:43 incoming + drwxr-xr-x 2 root wheel 1024 Nov 17 1993 lib + drwxr-xr-x 6 1094 wheel 1024 Sep 13 19:07 pub + drwxr-xr-x 3 root wheel 1024 Jan 3 1994 usr + -rw-r--r-- 1 root root 312 Aug 1 1994 welcome.msg + '226 Transfer complete.' + >>> ftps.quit() + '221 Goodbye.' 
+ >>> + 'b'context and keyfile arguments are mutually exclusive'u'context and keyfile arguments are mutually exclusive'b'context and certfile arguments are mutually exclusive'u'context and certfile arguments are mutually exclusive'b'keyfile and certfile are deprecated, use a custom context instead'u'keyfile and certfile are deprecated, use a custom context instead'b'Set up secure control connection by using TLS/SSL.'u'Set up secure control connection by using TLS/SSL.'b'Already using TLS'u'Already using TLS'b'AUTH TLS'u'AUTH TLS'b'AUTH SSL'u'AUTH SSL'b'Switch back to a clear-text control connection.'u'Switch back to a clear-text control connection.'b'not using TLS'u'not using TLS'b'CCC'u'CCC'b'Set up secure data connection.'u'Set up secure data connection.'b'PBSZ 0'u'PBSZ 0'b'PROT P'u'PROT P'b'Set up clear text data connection.'u'Set up clear text data connection.'b'PROT C'u'PROT C'b'FTP_TLS'u'FTP_TLS'b'Parse the '150' response for a RETR request. + Returns the expected transfer size or None; size is not guaranteed to + be present in the 150 message. + 'u'Parse the '150' response for a RETR request. + Returns the expected transfer size or None; size is not guaranteed to + be present in the 150 message. + 'b'150 .* \((\d+) bytes\)'u'150 .* \((\d+) bytes\)'b'Parse the '227' response for a PASV request. + Raises error_proto if it does not contain '(h1,h2,h3,h4,p1,p2)' + Return ('host.addr.as.numbers', port#) tuple.'u'Parse the '227' response for a PASV request. + Raises error_proto if it does not contain '(h1,h2,h3,h4,p1,p2)' + Return ('host.addr.as.numbers', port#) tuple.'b'227'u'227'b'(\d+),(\d+),(\d+),(\d+),(\d+),(\d+)'u'(\d+),(\d+),(\d+),(\d+),(\d+),(\d+)'b'Parse the '229' response for an EPSV request. + Raises error_proto if it does not contain '(|||port|)' + Return ('host.addr.as.numbers', port#) tuple.'u'Parse the '229' response for an EPSV request. + Raises error_proto if it does not contain '(|||port|)' + Return ('host.addr.as.numbers', port#) tuple.'b'229'u'229'b'Parse the '257' response for a MKD or PWD request. + This is a response to a MKD or PWD request: a directory name. + Returns the directoryname in the 257 reply.'u'Parse the '257' response for a MKD or PWD request. + This is a response to a MKD or PWD request: a directory name. + Returns the directoryname in the 257 reply.'b' "'u' "'b'Default retrlines callback to print a line.'u'Default retrlines callback to print a line.'b'Copy file from one FTP-instance to another.'u'Copy file from one FTP-instance to another.'b'TYPE 'u'TYPE 'b'STOR 'u'STOR 'b'125'u'125'b'RETR 'u'RETR 'b'Test program. + Usage: ftp [-d] [-r[file]] host [-l[dir]] [-d[dir]] [-p] [file] ... + + -d dir + -l list + -p password + 'u'Test program. + Usage: ftp [-d] [-r[file]] host [-l[dir]] [-d[dir]] [-p] [file] ... 
+ + -d dir + -l list + -p password + 'b'-r'u'-r'b'No account -- using anonymous login.'u'No account -- using anonymous login.'b'Could not open account file -- using anonymous login.'u'Could not open account file -- using anonymous login.'b'CWD'u'CWD'b'-p'u'ftplib'functools.py - Tools for working with functions and callable objects +update_wrapperWRAPPER_ASSIGNMENTSWRAPPER_UPDATESsingledispatchsingledispatchmethodcached_propertywrappedassignedupdatedUpdate a wrapper function to look like the wrapped function + + wrapper is the function to be updated + wrapped is the original function + assigned is a tuple naming the attributes assigned directly + from the wrapped function to the wrapper function (defaults to + functools.WRAPPER_ASSIGNMENTS) + updated is a tuple naming the attributes of the wrapper that + are updated with the corresponding attribute from the wrapped + function (defaults to functools.WRAPPER_UPDATES) + Decorator factory to apply update_wrapper() to a wrapper function + + Returns a decorator that invokes update_wrapper() with the decorated + function as the wrapper argument and the arguments to wraps() as the + remaining arguments. Default arguments are as for update_wrapper(). + This is a convenience function to simplify applying partial() to + update_wrapper(). + _gt_from_ltReturn a > b. Computed by @total_ordering from (not a < b) and (a != b).op_result_le_from_ltReturn a <= b. Computed by @total_ordering from (a < b) or (a == b)._ge_from_ltReturn a >= b. Computed by @total_ordering from (not a < b)._ge_from_leReturn a >= b. Computed by @total_ordering from (not a <= b) or (a == b)._lt_from_leReturn a < b. Computed by @total_ordering from (a <= b) and (a != b)._gt_from_leReturn a > b. Computed by @total_ordering from (not a <= b)._lt_from_gtReturn a < b. Computed by @total_ordering from (not a > b) and (a != b)._ge_from_gtReturn a >= b. Computed by @total_ordering from (a > b) or (a == b)._le_from_gtReturn a <= b. Computed by @total_ordering from (not a > b)._le_from_geReturn a <= b. Computed by @total_ordering from (not a >= b) or (a == b)._gt_from_geReturn a > b. Computed by @total_ordering from (a >= b) and (a != b)._lt_from_geReturn a < b. Computed by @total_ordering from (not a >= b).Class decorator that fills in missing ordering methodsrootsmust define at least one ordering operation: < > <= >=opfuncmycmpConvert a cmp= function into a key= function_initial_missinginitial + reduce(function, sequence[, initial]) -> value + + Apply a function of two arguments cumulatively to the items of a sequence, + from left to right, so as to reduce the sequence to a single value. + For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates + ((((1+2)+3)+4)+5). If initial is present, it is placed before the items + of the sequence in the calculation, and serves as a default when the + sequence is empty. + reduce() of empty sequence with no initial valueNew function with partial application of the given arguments + and keywords. + the first argument must be callablefunctools.argument to __setstate__ must be a tupleexpected 4 items in state, got invalid partial stateMethod descriptor with partial application of the given arguments + and keywords. + + Supports wrapping existing descriptors and handles non-descriptor + callables as instance methods. 
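The functools strings begin in this chunk with update_wrapper() and wraps(), which copy the WRAPPER_ASSIGNMENTS attributes (__name__, __qualname__, __doc__, __module__, ...) from the wrapped function onto its wrapper and record the original under __wrapped__. A short illustration with a made-up logging decorator:

    import functools

    def log_calls(func):
        @functools.wraps(func)        # copy metadata and set __wrapped__
        def wrapper(*args, **kwargs):
            print(f"calling {func.__name__}{args}")
            return func(*args, **kwargs)
        return wrapper

    @log_calls
    def add(a, b):
        "Return a + b."
        return a + b

    print(add(2, 3))              # prints "calling add(2, 3)", then 5
    print(add.__name__)           # 'add', not 'wrapper'
    print(add.__doc__)            # 'Return a + b.'
    print(add.__wrapped__(2, 3))  # bypasses the wrapper: 5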
+ descriptor '__init__' of partialmethod needs an argument"descriptor '__init__' of partialmethod "type 'partialmethod' takes at least one argument, got %d"type 'partialmethod' takes at least one argument, ""got %d"{!r} is not callable or a descriptor($self, func, /, *args, **keywords){module}.{cls}({func}, {args}, {keywords})format_string_make_unbound_methodcls_or_self_partialmethodnew_func_unwrap_partialCacheInfomissescurrsize_CacheInfo_HashedSeq This class guarantees that hash() will be called no more than once + per element. This is important because the lru_cache() will hash + the key multiple times on a cache miss. + + hashvalue_make_keykwd_markfasttypesMake a cache key from optionally typed positional and keyword arguments + + The key is constructed in a way that is flat as possible rather than + as a nested structure that would take more memory. + + If there is only a single argument and its data type is known to cache + its hash value, then that argument is returned without a wrapper. This + saves space and improves lookup speed. + + Least-recently-used cache decorator. + + If *maxsize* is set to None, the LRU features are disabled and the cache + can grow without bound. + + If *typed* is True, arguments of different types will be cached separately. + For example, f(3.0) and f(3) will be treated as distinct calls with + distinct results. + + Arguments to the cached function must be hashable. + + View the cache statistics named tuple (hits, misses, maxsize, currsize) + with f.cache_info(). Clear the cache and statistics with f.cache_clear(). + Access the underlying function with f.__wrapped__. + + See: http://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_(LRU) + + user_functionExpected first argument to be an integer, a callable, or Nonedecorating_functionsentinelmake_keyPREVNEXTKEYRESULTfullcache_getcache_len_keyoldrootoldkeyoldresultReport cache statisticsClear the cache and cache statistics_c3_mergeMerges MROs in *sequences* to a single MRO using the C3 algorithm. + + Adapted from http://www.python.org/download/releases/2.3/mro/. + + Inconsistent hierarchy_c3_mroabcsComputes the method resolution order using extended C3 linearization. + + If no *abcs* are given, the algorithm works exactly like the built-in C3 + linearization used for method resolution. + + If given, *abcs* is a list of abstract base classes that should be inserted + into the resulting MRO. Unrelated ABCs are ignored and don't end up in the + result. The algorithm inserts ABCs where their functionality is introduced, + i.e. issubclass(cls, abc) returns True for the class itself but returns + False for all its direct base classes. Implicit ABCs for a given class + (either registered or inferred from the presence of a special method like + __len__) are inserted directly after the last ABC explicitly listed in the + MRO of said class. If two implicit ABCs end up next to each other in the + resulting MRO, their ordering depends on the order of types in *abcs*. + + explicit_basesabstract_basesother_basesexplicit_c3_mrosabstract_c3_mrosother_c3_mros_compose_mroCalculates the method resolution order for a given class *cls*. + + Includes relevant abstract base classes (with their respective bases) from + the *types* iterable. Uses a modified C3 linearization algorithm. + + is_relatedis_strict_basetype_setsubcls_find_implReturns the best matching implementation from *registry* for type *cls*. 
+ + Where there is no registered implementation for a specific type, its method + resolution order is used to find a more generic implementation. + + Note: if *registry* does not contain an implementation for the base + *object* type, this function may return None. + + Ambiguous dispatch: {} or {}Single-dispatch generic function decorator. + + Transforms a function into a generic function, which can have different + behaviours depending upon the type of its first argument. The decorated + function acts as the default implementation, and additional + implementations can be registered using the register() attribute of the + generic function. + WeakKeyDictionarydispatch_cachecache_tokengeneric_func.dispatch(cls) -> + + Runs the dispatch algorithm to return the best available implementation + for the given *cls* registered on *generic_func*. + + current_tokengeneric_func.register(cls, func) -> func + + Registers a new implementation for the given *cls* on a *generic_func*. + + annInvalid first argument to `register()`: . Use either `@register(some_class)` or plain `@register` on an annotated function.". ""Use either `@register(some_class)` or plain `@register` ""on an annotated function."typingget_type_hintsargnameInvalid annotation for . is not a class. requires at least 1 positional argument' requires at least ''1 positional argument'singledispatch functionSingle-dispatch generic method descriptor. + + Supports wrapping existing descriptors and handles non-descriptor + callables as instance methods. + is not callable or a descriptordispatchergeneric_method.register(cls, func) -> func + + Registers a new implementation for the given *cls* on a *generic_method*. + _NOT_FOUND__set_name__Cannot assign the same cached_property to two different names ("Cannot assign the same cached_property to two different names " and ).Cannot use cached_property instance without calling __set_name__ on it.No '__dict__' attribute on instance to cache "instance to cache " property.The '__dict__' attribute on instance does not support item assignment for caching " instance ""does not support item assignment for caching "# Python module wrapper for _functools C module# to allow utilities written in Python to be added# to the functools module.# Written by Nick Coghlan ,# Raymond Hettinger ,# and Łukasz Langa .# Copyright (C) 2006-2013 Python Software Foundation.# See C source code for _functools credits/copyright# import types, weakref # Deferred to single_dispatch()### update_wrapper() and wraps() decorator# update_wrapper() and wraps() are tools to help write# wrapper functions that can handle naive introspection# Issue #17482: set __wrapped__ last so we don't inadvertently copy it# from the wrapped function when updating __dict__# Return the wrapper so this can be used as a decorator via partial()### total_ordering class decorator# The total ordering functions all invoke the root magic method directly# rather than using the corresponding operator. 
This avoids possible# infinite recursion that could occur when the operator dispatch logic# detects a NotImplemented result and then calls a reflected method.# Find user-defined comparisons (not those inherited from object).# prefer __lt__ to __le__ to __gt__ to __ge__### cmp_to_key() function converter### reduce() sequence to a single item### partial() argument application# Purely functional, no descriptor behaviour# just in case it's a subclass# XXX does it need to be *exactly* dict?# Descriptor version# func could be a descriptor like classmethod which isn't callable,# so we can't inherit from partial (it verifies func is callable)# flattening is mandatory in order to place cls/self before all# other arguments# it's also more efficient since only one function will be called# Assume __get__ returning something new indicates the# creation of an appropriate callable# If the underlying descriptor didn't do anything, treat this# like an instance method# Helper functions### LRU Cache function decorator# All of code below relies on kwds preserving the order input by the user.# Formerly, we sorted() the kwds before looping. The new way is *much*# faster; however, it means that f(x=1, y=2) will now be treated as a# distinct call from f(y=2, x=1) which will be cached separately.# Users should only access the lru_cache through its public API:# cache_info, cache_clear, and f.__wrapped__# The internals of the lru_cache are encapsulated for thread safety and# to allow the implementation to change (including a possible C version).# Negative maxsize is treated as 0# The user_function was passed in directly via the maxsize argument# Constants shared by all lru cache instances:# unique object used to signal cache misses# build a key from the function arguments# names for the link fields# bound method to lookup a key or return None# get cache size without calling len()# because linkedlist updates aren't threadsafe# root of the circular doubly linked list# initialize by pointing to self# No caching -- just a statistics update# Simple caching without ordering or size limit# Size limited caching that tracks accesses by recency# Move the link to the front of the circular queue# Getting here means that this same key was added to the# cache while the lock was released. Since the link# update is already done, we need only return the# computed result and update the count of misses.# Use the old root to store the new key and result.# Empty the oldest link and make it the new root.# Keep a reference to the old key and old result to# prevent their ref counts from going to zero during the# update. That will prevent potentially arbitrary object# clean-up code (i.e. 
__del__) from running while we're# still adjusting the links.# Now update the cache dictionary.# Save the potentially reentrant cache[key] assignment# for last, after the root and links have been put in# a consistent state.# Put result in a new link at the front of the queue.# Use the cache_len bound method instead of the len() function# which could potentially be wrapped in an lru_cache itself.### singledispatch() - single-dispatch generic function decorator# purge empty sequences# find merge candidates among seq heads# reject the current head, it appears later# remove the chosen candidate# Bases up to the last explicit ABC are considered first.# If *cls* is the class that introduces behaviour described by# an ABC *base*, insert said ABC to its MRO.# Remove entries which are already present in the __mro__ or unrelated.# Remove entries which are strict bases of other entries (they will end up# in the MRO anyway.# Subclasses of the ABCs in *types* which are also implemented by# *cls* can be used to stabilize ABC ordering.# Favor subclasses with the biggest number of useful bases# If *match* is an implicit ABC but there is another unrelated,# equally matching implicit ABC, refuse the temptation to guess.# There are many programs that use functools without singledispatch, so we# trade-off making singledispatch marginally slower for the benefit of# making start-up of such applications slightly faster.# only import typing if annotation parsing is necessary### cached_property() - computed once per instance, cached as attribute# not all objects have __dict__ (e.g. class defines slots)# check if another thread filled cache while we awaited lockb'functools.py - Tools for working with functions and callable objects +'u'functools.py - Tools for working with functions and callable objects +'b'update_wrapper'u'update_wrapper'b'wraps'u'wraps'b'WRAPPER_ASSIGNMENTS'u'WRAPPER_ASSIGNMENTS'b'WRAPPER_UPDATES'u'WRAPPER_UPDATES'b'total_ordering'u'total_ordering'b'cmp_to_key'u'cmp_to_key'b'lru_cache'u'lru_cache'b'partial'u'partial'b'partialmethod'u'partialmethod'b'singledispatch'u'singledispatch'b'singledispatchmethod'u'singledispatchmethod'b'cached_property'u'cached_property'b'__annotations__'u'__annotations__'b'Update a wrapper function to look like the wrapped function + + wrapper is the function to be updated + wrapped is the original function + assigned is a tuple naming the attributes assigned directly + from the wrapped function to the wrapper function (defaults to + functools.WRAPPER_ASSIGNMENTS) + updated is a tuple naming the attributes of the wrapper that + are updated with the corresponding attribute from the wrapped + function (defaults to functools.WRAPPER_UPDATES) + 'u'Update a wrapper function to look like the wrapped function + + wrapper is the function to be updated + wrapped is the original function + assigned is a tuple naming the attributes assigned directly + from the wrapped function to the wrapper function (defaults to + functools.WRAPPER_ASSIGNMENTS) + updated is a tuple naming the attributes of the wrapper that + are updated with the corresponding attribute from the wrapped + function (defaults to functools.WRAPPER_UPDATES) + 'b'Decorator factory to apply update_wrapper() to a wrapper function + + Returns a decorator that invokes update_wrapper() with the decorated + function as the wrapper argument and the arguments to wraps() as the + remaining arguments. Default arguments are as for update_wrapper(). 
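cached_property, whose implementation notes appear in the comments above (computed once per instance, stored in the instance __dict__, with a check for another thread having filled the cache first), is used like a read-only property whose getter runs at most once per instance:

    import functools
    import statistics

    class Dataset:
        def __init__(self, values):
            self._values = list(values)

        @functools.cached_property
        def stdev(self):
            print("computing...")              # runs only once per instance
            return statistics.stdev(self._values)

    d = Dataset([2, 4, 4, 4, 5, 5, 7, 9])
    print(d.stdev)             # prints "computing...", then ~2.138
    print(d.stdev)             # served from d.__dict__, no recomputation
    del d.__dict__["stdev"]    # dropping the cached value forces a recompute
    print(d.stdev)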
+ This is a convenience function to simplify applying partial() to + update_wrapper(). + 'u'Decorator factory to apply update_wrapper() to a wrapper function + + Returns a decorator that invokes update_wrapper() with the decorated + function as the wrapper argument and the arguments to wraps() as the + remaining arguments. Default arguments are as for update_wrapper(). + This is a convenience function to simplify applying partial() to + update_wrapper(). + 'b'Return a > b. Computed by @total_ordering from (not a < b) and (a != b).'u'Return a > b. Computed by @total_ordering from (not a < b) and (a != b).'b'Return a <= b. Computed by @total_ordering from (a < b) or (a == b).'u'Return a <= b. Computed by @total_ordering from (a < b) or (a == b).'b'Return a >= b. Computed by @total_ordering from (not a < b).'u'Return a >= b. Computed by @total_ordering from (not a < b).'b'Return a >= b. Computed by @total_ordering from (not a <= b) or (a == b).'u'Return a >= b. Computed by @total_ordering from (not a <= b) or (a == b).'b'Return a < b. Computed by @total_ordering from (a <= b) and (a != b).'u'Return a < b. Computed by @total_ordering from (a <= b) and (a != b).'b'Return a > b. Computed by @total_ordering from (not a <= b).'u'Return a > b. Computed by @total_ordering from (not a <= b).'b'Return a < b. Computed by @total_ordering from (not a > b) and (a != b).'u'Return a < b. Computed by @total_ordering from (not a > b) and (a != b).'b'Return a >= b. Computed by @total_ordering from (a > b) or (a == b).'u'Return a >= b. Computed by @total_ordering from (a > b) or (a == b).'b'Return a <= b. Computed by @total_ordering from (not a > b).'u'Return a <= b. Computed by @total_ordering from (not a > b).'b'Return a <= b. Computed by @total_ordering from (not a >= b) or (a == b).'u'Return a <= b. Computed by @total_ordering from (not a >= b) or (a == b).'b'Return a > b. Computed by @total_ordering from (a >= b) and (a != b).'u'Return a > b. Computed by @total_ordering from (a >= b) and (a != b).'b'Return a < b. Computed by @total_ordering from (not a >= b).'u'Return a < b. Computed by @total_ordering from (not a >= b).'b'__gt__'u'__gt__'b'__le__'u'__le__'b'__ge__'u'__ge__'b'__lt__'u'__lt__'b'Class decorator that fills in missing ordering methods'u'Class decorator that fills in missing ordering methods'b'must define at least one ordering operation: < > <= >='u'must define at least one ordering operation: < > <= >='b'Convert a cmp= function into a key= function'u'Convert a cmp= function into a key= function'b' + reduce(function, sequence[, initial]) -> value + + Apply a function of two arguments cumulatively to the items of a sequence, + from left to right, so as to reduce the sequence to a single value. + For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates + ((((1+2)+3)+4)+5). If initial is present, it is placed before the items + of the sequence in the calculation, and serves as a default when the + sequence is empty. + 'u' + reduce(function, sequence[, initial]) -> value + + Apply a function of two arguments cumulatively to the items of a sequence, + from left to right, so as to reduce the sequence to a single value. + For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates + ((((1+2)+3)+4)+5). If initial is present, it is placed before the items + of the sequence in the calculation, and serves as a default when the + sequence is empty. 
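The docstrings above cover total_ordering, cmp_to_key, and reduce. A short usage sketch; the Version class and the comparison lambda are made up for illustration.

from functools import total_ordering, cmp_to_key, reduce

@total_ordering
class Version:
    """Only __eq__ and __lt__ are written; @total_ordering derives the rest."""
    def __init__(self, parts):
        self.parts = tuple(parts)
    def __eq__(self, other):
        return self.parts == other.parts
    def __lt__(self, other):
        return self.parts < other.parts

assert Version([1, 2]) <= Version([1, 3])    # __le__ was filled in by the decorator

# cmp_to_key converts an old-style cmp(a, b) -> -1/0/1 function into a key= callable
by_length_then_text = cmp_to_key(
    lambda a, b: (len(a) - len(b)) or (a > b) - (a < b))
print(sorted(["pear", "fig", "apple"], key=by_length_then_text))

# reduce folds a sequence down to a single value, left to right
print(reduce(lambda x, y: x + y, [1, 2, 3, 4, 5]))   # ((((1+2)+3)+4)+5) == 15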
+ 'b'reduce() of empty sequence with no initial value'u'reduce() of empty sequence with no initial value'b'New function with partial application of the given arguments + and keywords. + 'u'New function with partial application of the given arguments + and keywords. + 'b'args'b'keywords'b'the first argument must be callable'u'the first argument must be callable'b'functools.'u'functools.'b'argument to __setstate__ must be a tuple'u'argument to __setstate__ must be a tuple'b'expected 4 items in state, got 'u'expected 4 items in state, got 'b'invalid partial state'u'invalid partial state'b'Method descriptor with partial application of the given arguments + and keywords. + + Supports wrapping existing descriptors and handles non-descriptor + callables as instance methods. + 'u'Method descriptor with partial application of the given arguments + and keywords. + + Supports wrapping existing descriptors and handles non-descriptor + callables as instance methods. + 'b'descriptor '__init__' of partialmethod needs an argument'u'descriptor '__init__' of partialmethod needs an argument'b'type 'partialmethod' takes at least one argument, got %d'u'type 'partialmethod' takes at least one argument, got %d'b'{!r} is not callable or a descriptor'u'{!r} is not callable or a descriptor'b'($self, func, /, *args, **keywords)'u'($self, func, /, *args, **keywords)'b'{module}.{cls}({func}, {args}, {keywords})'u'{module}.{cls}({func}, {args}, {keywords})'b'CacheInfo'u'CacheInfo'b'hits'u'hits'b'misses'u'misses'b'currsize'u'currsize'b' This class guarantees that hash() will be called no more than once + per element. This is important because the lru_cache() will hash + the key multiple times on a cache miss. + + 'u' This class guarantees that hash() will be called no more than once + per element. This is important because the lru_cache() will hash + the key multiple times on a cache miss. + + 'b'hashvalue'u'hashvalue'b'Make a cache key from optionally typed positional and keyword arguments + + The key is constructed in a way that is flat as possible rather than + as a nested structure that would take more memory. + + If there is only a single argument and its data type is known to cache + its hash value, then that argument is returned without a wrapper. This + saves space and improves lookup speed. + + 'u'Make a cache key from optionally typed positional and keyword arguments + + The key is constructed in a way that is flat as possible rather than + as a nested structure that would take more memory. + + If there is only a single argument and its data type is known to cache + its hash value, then that argument is returned without a wrapper. This + saves space and improves lookup speed. + + 'b'Least-recently-used cache decorator. + + If *maxsize* is set to None, the LRU features are disabled and the cache + can grow without bound. + + If *typed* is True, arguments of different types will be cached separately. + For example, f(3.0) and f(3) will be treated as distinct calls with + distinct results. + + Arguments to the cached function must be hashable. + + View the cache statistics named tuple (hits, misses, maxsize, currsize) + with f.cache_info(). Clear the cache and statistics with f.cache_clear(). + Access the underlying function with f.__wrapped__. + + See: http://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_(LRU) + + 'u'Least-recently-used cache decorator. + + If *maxsize* is set to None, the LRU features are disabled and the cache + can grow without bound. 
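The partial and partialmethod descriptions quoted above amount to "freeze some arguments now, supply the rest later". An illustrative sketch; int_from_hex and Cell are hypothetical names.

from functools import partial, partialmethod

# partial() pre-binds positional/keyword arguments of any callable
int_from_hex = partial(int, base=16)
print(int_from_hex("ff"))                            # 255
print(int_from_hex.func, int_from_hex.args, int_from_hex.keywords)

class Cell:
    def __init__(self):
        self._alive = False
    def set_state(self, state):
        self._alive = bool(state)
    # partialmethod builds instance methods with pre-bound arguments
    set_alive = partialmethod(set_state, True)
    set_dead = partialmethod(set_state, False)

c = Cell()
c.set_alive()
print(c._alive)                                      # True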
+ + If *typed* is True, arguments of different types will be cached separately. + For example, f(3.0) and f(3) will be treated as distinct calls with + distinct results. + + Arguments to the cached function must be hashable. + + View the cache statistics named tuple (hits, misses, maxsize, currsize) + with f.cache_info(). Clear the cache and statistics with f.cache_clear(). + Access the underlying function with f.__wrapped__. + + See: http://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_(LRU) + + 'b'Expected first argument to be an integer, a callable, or None'u'Expected first argument to be an integer, a callable, or None'b'Report cache statistics'u'Report cache statistics'b'Clear the cache and cache statistics'u'Clear the cache and cache statistics'b'Merges MROs in *sequences* to a single MRO using the C3 algorithm. + + Adapted from http://www.python.org/download/releases/2.3/mro/. + + 'u'Merges MROs in *sequences* to a single MRO using the C3 algorithm. + + Adapted from http://www.python.org/download/releases/2.3/mro/. + + 'b'Inconsistent hierarchy'u'Inconsistent hierarchy'b'Computes the method resolution order using extended C3 linearization. + + If no *abcs* are given, the algorithm works exactly like the built-in C3 + linearization used for method resolution. + + If given, *abcs* is a list of abstract base classes that should be inserted + into the resulting MRO. Unrelated ABCs are ignored and don't end up in the + result. The algorithm inserts ABCs where their functionality is introduced, + i.e. issubclass(cls, abc) returns True for the class itself but returns + False for all its direct base classes. Implicit ABCs for a given class + (either registered or inferred from the presence of a special method like + __len__) are inserted directly after the last ABC explicitly listed in the + MRO of said class. If two implicit ABCs end up next to each other in the + resulting MRO, their ordering depends on the order of types in *abcs*. + + 'u'Computes the method resolution order using extended C3 linearization. + + If no *abcs* are given, the algorithm works exactly like the built-in C3 + linearization used for method resolution. + + If given, *abcs* is a list of abstract base classes that should be inserted + into the resulting MRO. Unrelated ABCs are ignored and don't end up in the + result. The algorithm inserts ABCs where their functionality is introduced, + i.e. issubclass(cls, abc) returns True for the class itself but returns + False for all its direct base classes. Implicit ABCs for a given class + (either registered or inferred from the presence of a special method like + __len__) are inserted directly after the last ABC explicitly listed in the + MRO of said class. If two implicit ABCs end up next to each other in the + resulting MRO, their ordering depends on the order of types in *abcs*. + + 'b'Calculates the method resolution order for a given class *cls*. + + Includes relevant abstract base classes (with their respective bases) from + the *types* iterable. Uses a modified C3 linearization algorithm. + + 'u'Calculates the method resolution order for a given class *cls*. + + Includes relevant abstract base classes (with their respective bases) from + the *types* iterable. Uses a modified C3 linearization algorithm. + + 'b'Returns the best matching implementation from *registry* for type *cls*. + + Where there is no registered implementation for a specific type, its method + resolution order is used to find a more generic implementation. 
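A usage sketch of lru_cache as described in the docstring above, including the cache_info()/cache_clear() accessors; fib is an illustrative function.

from functools import lru_cache

@lru_cache(maxsize=128)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))
print(fib.cache_info())      # CacheInfo(hits=..., misses=..., maxsize=128, currsize=...)
fib.cache_clear()            # resets both the cache and its statistics
# fib(3) and fib(3.0) share an entry unless typed=True is passed to lru_cache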
+ + Note: if *registry* does not contain an implementation for the base + *object* type, this function may return None. + + 'u'Returns the best matching implementation from *registry* for type *cls*. + + Where there is no registered implementation for a specific type, its method + resolution order is used to find a more generic implementation. + + Note: if *registry* does not contain an implementation for the base + *object* type, this function may return None. + + 'b'Ambiguous dispatch: {} or {}'u'Ambiguous dispatch: {} or {}'b'Single-dispatch generic function decorator. + + Transforms a function into a generic function, which can have different + behaviours depending upon the type of its first argument. The decorated + function acts as the default implementation, and additional + implementations can be registered using the register() attribute of the + generic function. + 'u'Single-dispatch generic function decorator. + + Transforms a function into a generic function, which can have different + behaviours depending upon the type of its first argument. The decorated + function acts as the default implementation, and additional + implementations can be registered using the register() attribute of the + generic function. + 'b'generic_func.dispatch(cls) -> + + Runs the dispatch algorithm to return the best available implementation + for the given *cls* registered on *generic_func*. + + 'u'generic_func.dispatch(cls) -> + + Runs the dispatch algorithm to return the best available implementation + for the given *cls* registered on *generic_func*. + + 'b'generic_func.register(cls, func) -> func + + Registers a new implementation for the given *cls* on a *generic_func*. + + 'u'generic_func.register(cls, func) -> func + + Registers a new implementation for the given *cls* on a *generic_func*. + + 'b'Invalid first argument to `register()`: 'u'Invalid first argument to `register()`: 'b'. Use either `@register(some_class)` or plain `@register` on an annotated function.'u'. Use either `@register(some_class)` or plain `@register` on an annotated function.'b'Invalid annotation for 'u'Invalid annotation for 'b'. 'u'. 'b' is not a class.'u' is not a class.'b' requires at least 1 positional argument'u' requires at least 1 positional argument'b'singledispatch function'u'singledispatch function'b'Single-dispatch generic method descriptor. + + Supports wrapping existing descriptors and handles non-descriptor + callables as instance methods. + 'u'Single-dispatch generic method descriptor. + + Supports wrapping existing descriptors and handles non-descriptor + callables as instance methods. + 'b' is not callable or a descriptor'u' is not callable or a descriptor'b'generic_method.register(cls, func) -> func + + Registers a new implementation for the given *cls* on a *generic_method*. + 'u'generic_method.register(cls, func) -> func + + Registers a new implementation for the given *cls* on a *generic_method*. 
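A sketch of the singledispatch decorator and its register() attribute described above; fmt is a hypothetical generic function.

from functools import singledispatch

@singledispatch
def fmt(obj):
    # default implementation, used when no registered type matches
    return repr(obj)

@fmt.register
def _(obj: int):
    return f"int:{obj:#x}"

@fmt.register(list)
def _(obj):
    return "[" + ", ".join(fmt(x) for x in obj) + "]"

print(fmt(255))            # int:0xff
print(fmt([1, "a"]))       # [int:0x1, 'a']
print(fmt.dispatch(bool))  # bool resolves via its MRO to the int implementation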
+ 'b'Cannot assign the same cached_property to two different names ('u'Cannot assign the same cached_property to two different names ('b' and 'u' and 'b').'u').'b'Cannot use cached_property instance without calling __set_name__ on it.'u'Cannot use cached_property instance without calling __set_name__ on it.'b'No '__dict__' attribute on 'u'No '__dict__' attribute on 'b' instance to cache 'u' instance to cache 'b' property.'u' property.'b'The '__dict__' attribute on 'u'The '__dict__' attribute on 'b' instance does not support item assignment for caching 'u' instance does not support item assignment for caching 'A Future class similar to the one in PEP 3148.STACK_DEBUGThis class is *almost* compatible with concurrent.futures.Future. + + Differences: + + - This class is not thread-safe. + + - result() and exception() do not take a timeout argument and + raise an exception when the future isn't done yet. + + - Callbacks registered with add_done_callback() are always called + via the event loop's call_soon(). + + - This class is not compatible with the wait() and as_completed() + methods in the concurrent.futures package. + + (In Python 3.4 or later we may be able to unify the implementations.) + __log_tracebackInitialize the future. + + The optional event_loop argument allows explicitly setting the event + loop object used by the future. If it's not provided, the future uses + the default event loop. + <{} {}> exception was never retrieved_log_traceback can only be set to FalseReturn the event loop the Future is bound to.Future object is not initialized.Cancel the future and schedule callbacks. + + If the future is already done or cancelled, return False. Otherwise, + change the future's state to cancelled, schedule the callbacks and + return True. + __schedule_callbacksInternal: Ask the event loop to call all callbacks. + + The callbacks are scheduled to be called as soon as possible. Also + clears the callback list. + callbacksReturn True if the future is done. + + Done means either that a result / exception are available, or that the + future was cancelled. + Return the result this future represents. + + If the future has been cancelled, raises CancelledError. If the + future's result isn't yet available, raises InvalidStateError. If + the future is done and has an exception set, this exception is raised. + Result is not ready.Return the exception that was set on this future. + + The exception (or None if no exception was set) is returned only if + the future is done. If the future has been cancelled, raises + CancelledError. If the future isn't done yet, raises + InvalidStateError. + Exception is not set.Add a callback to be run when the future becomes done. + + The callback is called with a single argument - the future object. If + the future is already done when this is called, the callback is + scheduled with call_soon. + Remove all instances of a callback from the "call when done" list. + + Returns the number of callbacks removed. + filtered_callbacksremoved_countMark the future done and set its result. + + If the future is already done when this method is called, raises + InvalidStateError. + Mark the future done and set an exception. + + If the future is already done when this method is called, raises + InvalidStateError. 
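Before the asyncio futures material quoted next, a sketch of functools.cached_property, whose diagnostics appear just above: the value is computed once per instance and then stored in the instance __dict__. Dataset and its attribute are made up.

from functools import cached_property
import statistics

class Dataset:
    def __init__(self, samples):
        self._samples = list(samples)

    @cached_property
    def stdev(self):
        # runs once; later reads come straight from the instance's __dict__
        print("computing...")
        return statistics.stdev(self._samples)

d = Dataset([2, 4, 4, 4, 5, 5, 7, 9])
print(d.stdev)   # prints "computing..." then the value
print(d.stdev)   # cached: no recomputation
del d.stdev      # deleting the cached attribute forces a recompute next time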
+ StopIteration interacts badly with generators and cannot be raised into a Future"StopIteration interacts badly with generators ""and cannot be raised into a Future"await wasn't used with future_PyFuture_set_result_unless_cancelledHelper setting the result only if the future was not cancelled._convert_future_excexc_class_set_concurrent_future_stateCopy state from a future to a concurrent.futures.Future._copy_future_stateInternal helper to copy state from another Future. + + The other Future may be a concurrent.futures.Future. + _chain_futuredestinationChain two futures so that when one completes, so does the other. + + The result (or exception) of source will be copied to destination. + If destination is cancelled, source gets cancelled too. + Compatible with both asyncio.Future and concurrent.futures.Future. + A future is required for source argumentA future is required for destination argumentsource_loopdest_loop_set_state_call_check_cancel_call_set_stateWrap concurrent.futures.Future object.concurrent.futures.Future is expected, got new_future_CFuture# heavy-duty debugging# Class variables serving as defaults for instance variables.# This field is used for a dual purpose:# - Its presence is a marker to declare that a class implements# the Future protocol (i.e. is intended to be duck-type compatible).# The value must also be not-None, to enable a subclass to declare# that it is not compatible by setting this to None.# - It is set by __iter__() below so that Task._step() can tell# the difference between# `await Future()` or`yield from Future()` (correct) vs.# `yield Future()` (incorrect).# set_exception() was not called, or result() or exception()# has consumed the exception# Don't implement running(); see http://bugs.python.org/issue18699# New method not in PEP 3148.# So-called internal methods (note: no set_running_or_notify_cancel()).# This tells Task to wait for completion.# May raise too.# make compatible with 'yield from'.# Needed for testing purposes.# Tries to call Future.get_loop() if it's available.# Otherwise fallbacks to using the old '_loop' property.# _CFuture is needed for tests.b'A Future class similar to the one in PEP 3148.'u'A Future class similar to the one in PEP 3148.'b'wrap_future'u'wrap_future'b'isfuture'u'isfuture'b'This class is *almost* compatible with concurrent.futures.Future. + + Differences: + + - This class is not thread-safe. + + - result() and exception() do not take a timeout argument and + raise an exception when the future isn't done yet. + + - Callbacks registered with add_done_callback() are always called + via the event loop's call_soon(). + + - This class is not compatible with the wait() and as_completed() + methods in the concurrent.futures package. + + (In Python 3.4 or later we may be able to unify the implementations.) + 'u'This class is *almost* compatible with concurrent.futures.Future. + + Differences: + + - This class is not thread-safe. + + - result() and exception() do not take a timeout argument and + raise an exception when the future isn't done yet. + + - Callbacks registered with add_done_callback() are always called + via the event loop's call_soon(). + + - This class is not compatible with the wait() and as_completed() + methods in the concurrent.futures package. + + (In Python 3.4 or later we may be able to unify the implementations.) + 'b'Initialize the future. + + The optional event_loop argument allows explicitly setting the event + loop object used by the future. If it's not provided, the future uses + the default event loop. 
+ 'u'Initialize the future. + + The optional event_loop argument allows explicitly setting the event + loop object used by the future. If it's not provided, the future uses + the default event loop. + 'b'<{} {}>'u'<{} {}>'b' exception was never retrieved'u' exception was never retrieved'b'future'u'future'b'_log_traceback can only be set to False'u'_log_traceback can only be set to False'b'Return the event loop the Future is bound to.'u'Return the event loop the Future is bound to.'b'Future object is not initialized.'u'Future object is not initialized.'b'Cancel the future and schedule callbacks. + + If the future is already done or cancelled, return False. Otherwise, + change the future's state to cancelled, schedule the callbacks and + return True. + 'u'Cancel the future and schedule callbacks. + + If the future is already done or cancelled, return False. Otherwise, + change the future's state to cancelled, schedule the callbacks and + return True. + 'b'Internal: Ask the event loop to call all callbacks. + + The callbacks are scheduled to be called as soon as possible. Also + clears the callback list. + 'u'Internal: Ask the event loop to call all callbacks. + + The callbacks are scheduled to be called as soon as possible. Also + clears the callback list. + 'b'Return True if the future is done. + + Done means either that a result / exception are available, or that the + future was cancelled. + 'u'Return True if the future is done. + + Done means either that a result / exception are available, or that the + future was cancelled. + 'b'Return the result this future represents. + + If the future has been cancelled, raises CancelledError. If the + future's result isn't yet available, raises InvalidStateError. If + the future is done and has an exception set, this exception is raised. + 'u'Return the result this future represents. + + If the future has been cancelled, raises CancelledError. If the + future's result isn't yet available, raises InvalidStateError. If + the future is done and has an exception set, this exception is raised. + 'b'Result is not ready.'u'Result is not ready.'b'Return the exception that was set on this future. + + The exception (or None if no exception was set) is returned only if + the future is done. If the future has been cancelled, raises + CancelledError. If the future isn't done yet, raises + InvalidStateError. + 'u'Return the exception that was set on this future. + + The exception (or None if no exception was set) is returned only if + the future is done. If the future has been cancelled, raises + CancelledError. If the future isn't done yet, raises + InvalidStateError. + 'b'Exception is not set.'u'Exception is not set.'b'Add a callback to be run when the future becomes done. + + The callback is called with a single argument - the future object. If + the future is already done when this is called, the callback is + scheduled with call_soon. + 'u'Add a callback to be run when the future becomes done. + + The callback is called with a single argument - the future object. If + the future is already done when this is called, the callback is + scheduled with call_soon. + 'b'Remove all instances of a callback from the "call when done" list. + + Returns the number of callbacks removed. + 'u'Remove all instances of a callback from the "call when done" list. + + Returns the number of callbacks removed. + 'b'Mark the future done and set its result. + + If the future is already done when this method is called, raises + InvalidStateError. 
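A sketch of the asyncio Future API documented above: set_result()/set_exception() mark it done, add_done_callback() schedules callbacks through the loop, and awaiting it yields the result. The timing values are arbitrary.

import asyncio

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()              # a Future bound to the running loop

    fut.add_done_callback(
        lambda f: print("done, cancelled:", f.cancelled()))

    # hand the future to some producer; here we simply set it a bit later
    loop.call_later(0.05, fut.set_result, 42)

    result = await fut                      # raises instead if set_exception() was used
    print(result)                           # 42

asyncio.run(main())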
+ 'u'Mark the future done and set its result. + + If the future is already done when this method is called, raises + InvalidStateError. + 'b'Mark the future done and set an exception. + + If the future is already done when this method is called, raises + InvalidStateError. + 'u'Mark the future done and set an exception. + + If the future is already done when this method is called, raises + InvalidStateError. + 'b'StopIteration interacts badly with generators and cannot be raised into a Future'u'StopIteration interacts badly with generators and cannot be raised into a Future'b'await wasn't used with future'u'await wasn't used with future'b'Helper setting the result only if the future was not cancelled.'u'Helper setting the result only if the future was not cancelled.'b'Copy state from a future to a concurrent.futures.Future.'u'Copy state from a future to a concurrent.futures.Future.'b'Internal helper to copy state from another Future. + + The other Future may be a concurrent.futures.Future. + 'u'Internal helper to copy state from another Future. + + The other Future may be a concurrent.futures.Future. + 'b'Chain two futures so that when one completes, so does the other. + + The result (or exception) of source will be copied to destination. + If destination is cancelled, source gets cancelled too. + Compatible with both asyncio.Future and concurrent.futures.Future. + 'u'Chain two futures so that when one completes, so does the other. + + The result (or exception) of source will be copied to destination. + If destination is cancelled, source gets cancelled too. + Compatible with both asyncio.Future and concurrent.futures.Future. + 'b'A future is required for source argument'u'A future is required for source argument'b'A future is required for destination argument'u'A future is required for destination argument'b'Wrap concurrent.futures.Future object.'u'Wrap concurrent.futures.Future object.'b'concurrent.futures.Future is expected, got 'u'concurrent.futures.Future is expected, got 'u'asyncio.futures'DEBUG_COLLECTABLEDEBUG_LEAKDEBUG_SAVEALLDEBUG_STATSDEBUG_UNCOLLECTABLEu'This module provides access to the garbage collector for reference cycles. + +enable() -- Enable automatic garbage collection. +disable() -- Disable automatic garbage collection. +isenabled() -- Returns true if automatic collection is enabled. +collect() -- Do a full collection right now. +get_count() -- Return the current collection counts. +get_stats() -- Return list of dictionaries containing per-generation stats. +set_debug() -- Set debugging flags. +get_debug() -- Get debugging flags. +set_threshold() -- Set the collection thresholds. +get_threshold() -- Return the current the collection thresholds. +get_objects() -- Return a list of all objects tracked by the collector. +is_tracked() -- Returns true if a given object is tracked. +get_referrers() -- Return the list of objects that refer to an object. +get_referents() -- Return the list of objects that an object refers to. +freeze() -- Freeze all tracked objects and ignore them for future collections. +unfreeze() -- Unfreeze all objects in the permanent generation. +get_freeze_count() -- Return the number of objects in the permanent generation. +'freezegarbageget_countget_freeze_countget_objectsget_referentsget_referrersget_statsget_thresholdis_trackedset_thresholdunfreeze +Path operations common to more than one OS +Do not use directly. The OS specific modules import the appropriate +functions from this module themselves. 
+commonprefixgetatimegetctimegetsizesameopenfilesamestatTest whether a path exists. Returns False for broken symbolic linksTest whether a path is a regular fileReturn true if the pathname refers to an existing directory.Return the size of a file, reported by os.stat().Return the last modification time of a file, reported by os.stat().Return the last access time of a file, reported by os.stat().st_atimeReturn the metadata change time of a file, reported by os.stat().st_ctimeGiven a list of pathnames, returns the longest common leading componentTest whether two stat buffers reference the same filest_inost_devf1Test whether two pathnames reference the same actual file or directory + + This is determined by the device number and i-node number and + raises an exception if an os.stat() call on either pathname fails. + fp1fp2Test whether two open file objects reference the same filefstat_splitextaltsepextsepSplit the extension from a pathname. + + Extension is everything from the last dot to the end, ignoring + leading dots. Returns "(root, ext)"; ext may be empty.sepIndexaltsepIndexdotIndexfilenameIndex_check_arg_typeshasstrhasbytes() argument must be str, bytes, or os.PathLike object, not '() argument must be str, bytes, or ''os.PathLike object, not 'Can't mix strings and bytes in path components# Does a path exist?# This is false for dangling symbolic links on systems that support them.# This follows symbolic links, so both islink() and isdir() can be true# for the same path on systems that support symlinks# Is a path a directory?# This follows symbolic links, so both islink() and isdir()# can be true for the same path on systems that support symlinks# Return the longest prefix of all list elements.# Some people pass in a list of pathname parts to operate in an OS-agnostic# fashion; don't try to translate in that case as that's an abuse of the# API and they are already doing what they need to be OS-agnostic and so# they most likely won't be using an os.PathLike object in the sublists.# Are two stat buffers (obtained from stat, fstat or lstat)# describing the same file?# Are two filenames really pointing to the same file?# Are two open files really referencing the same file?# (Not necessarily the same file descriptor!)# Split a path in root and extension.# The extension is everything starting at the last dot in the last# pathname component; the root is everything before that.# It is always true that root + ext == p.# Generic implementation of splitext, to be parametrized with# the separators# NOTE: This code must work for text and bytes strings.# skip all leading dotsb' +Path operations common to more than one OS +Do not use directly. The OS specific modules import the appropriate +functions from this module themselves. +'u' +Path operations common to more than one OS +Do not use directly. The OS specific modules import the appropriate +functions from this module themselves. +'b'commonprefix'u'commonprefix'b'getatime'u'getatime'b'getctime'u'getctime'b'getmtime'u'getmtime'b'getsize'u'getsize'b'isdir'u'isdir'b'isfile'u'isfile'b'samefile'u'samefile'b'sameopenfile'u'sameopenfile'b'samestat'u'samestat'b'Test whether a path exists. Returns False for broken symbolic links'u'Test whether a path exists. 
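The genericpath helpers listed above are exposed through os.path; a short illustration (the file names are hypothetical, so the existence checks are guarded):

import os.path

print(os.path.splitext("archive.tar.gz"))    # ('archive.tar', '.gz')
print(os.path.splitext(".bashrc"))           # leading dots are ignored: ('.bashrc', '')
print(os.path.commonprefix(["/usr/lib", "/usr/local/lib"]))  # '/usr/l' (string prefix, not a path)

# existence and metadata checks; exists() is False for broken symlinks
if os.path.exists("setup.py"):
    print(os.path.getsize("setup.py"), os.path.getmtime("setup.py"))
    # samefile() compares device and inode numbers of two paths
    print(os.path.samefile("setup.py", "./setup.py"))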
Returns False for broken symbolic links'b'Test whether a path is a regular file'u'Test whether a path is a regular file'b'Return true if the pathname refers to an existing directory.'u'Return true if the pathname refers to an existing directory.'b'Return the size of a file, reported by os.stat().'u'Return the size of a file, reported by os.stat().'b'Return the last modification time of a file, reported by os.stat().'u'Return the last modification time of a file, reported by os.stat().'b'Return the last access time of a file, reported by os.stat().'u'Return the last access time of a file, reported by os.stat().'b'Return the metadata change time of a file, reported by os.stat().'u'Return the metadata change time of a file, reported by os.stat().'b'Given a list of pathnames, returns the longest common leading component'u'Given a list of pathnames, returns the longest common leading component'b'Test whether two stat buffers reference the same file'u'Test whether two stat buffers reference the same file'b'Test whether two pathnames reference the same actual file or directory + + This is determined by the device number and i-node number and + raises an exception if an os.stat() call on either pathname fails. + 'u'Test whether two pathnames reference the same actual file or directory + + This is determined by the device number and i-node number and + raises an exception if an os.stat() call on either pathname fails. + 'b'Test whether two open file objects reference the same file'u'Test whether two open file objects reference the same file'b'Split the extension from a pathname. + + Extension is everything from the last dot to the end, ignoring + leading dots. Returns "(root, ext)"; ext may be empty.'u'Split the extension from a pathname. + + Extension is everything from the last dot to the end, ignoring + leading dots. Returns "(root, ext)"; ext may be empty.'b'() argument must be str, bytes, or os.PathLike object, not 'u'() argument must be str, bytes, or os.PathLike object, not 'b'Can't mix strings and bytes in path components'u'Can't mix strings and bytes in path components'u'genericpath'Parser for command line options. + +This module helps scripts to parse the command line arguments in +sys.argv. It supports the same conventions as the Unix getopt() +function (including the special meanings of arguments of the form `-' +and `--'). Long options similar to those supported by GNU software +may be used as well via an optional third argument. This module +provides two functions and an exception: + +getopt() -- Parse command line options +gnu_getopt() -- Like getopt(), but allow option and non-option arguments +to be intermixed. +GetoptError -- exception (class) raised with 'opt' attribute, which is the +option involved with the exception. +GetoptErrorgnu_getoptshortoptslongoptsgetopt(args, options[, long_options]) -> opts, args + + Parses command line options and parameter list. args is the + argument list to be parsed, without the leading reference to the + running program. Typically, this means "sys.argv[1:]". shortopts + is the string of option letters that the script wants to + recognize, with options that require an argument followed by a + colon (i.e., the same format that Unix getopt() uses). If + specified, longopts is a list of strings with the names of the + long options which should be supported. The leading '--' + characters should not be included in the option name. Options + which require an argument should be followed by an equal sign + ('='). 
+ + The return value consists of two elements: the first is a list of + (option, value) pairs; the second is the list of program arguments + left after the option list was stripped (this is a trailing slice + of the first argument). Each option-and-value pair returned has + the option as its first element, prefixed with a hyphen (e.g., + '-x'), and the option argument as its second element, or an empty + string if the option has no argument. The options occur in the + list in the same order in which they were found, thus allowing + multiple occurrences. Long and short options may be mixed. + + do_longsdo_shortsgetopt(args, options[, long_options]) -> opts, args + + This function works like getopt(), except that GNU style scanning + mode is used by default. This means that option and non-option + arguments may be intermixed. The getopt() function stops + processing options as soon as a non-option argument is + encountered. + + If the first character of the option string is `+', or if the + environment variable POSIXLY_CORRECT is set, then option + processing stops as soon as a non-option argument is encountered. + + prog_argsall_options_firstPOSIXLY_CORRECToptarglong_has_argshas_argoption --%s requires argumentoption --%s must not have an argumentoption --%s not recognizedoption --%s not a unique prefixunique_matchoptstringshort_has_argoption -%s requires argumentoption -%s not recognizeda:balpha=# Long option support added by Lars Wirzenius .# Gerrit Holl moved the string-based exceptions# to class-based exceptions.# Peter Åstrand added gnu_getopt().# TODO for gnu_getopt():# - GNU getopt_long_only mechanism# - allow the caller to specify ordering# - RETURN_IN_ORDER option# - GNU extension with '-' as first character of option string# - optional arguments, specified by double colons# - an option string with a W followed by semicolon should# treat "-W foo" as "--foo"# Bootstrapping Python: gettext's dependencies not built yet# Allow options after non-option arguments?# Return:# has_arg?# full option name# Is there an exact match?# No exact match, so better be unique.# XXX since possibilities contains all valid continuations, might be# nice to work them into the error msgb'Parser for command line options. + +This module helps scripts to parse the command line arguments in +sys.argv. It supports the same conventions as the Unix getopt() +function (including the special meanings of arguments of the form `-' +and `--'). Long options similar to those supported by GNU software +may be used as well via an optional third argument. This module +provides two functions and an exception: + +getopt() -- Parse command line options +gnu_getopt() -- Like getopt(), but allow option and non-option arguments +to be intermixed. +GetoptError -- exception (class) raised with 'opt' attribute, which is the +option involved with the exception. +'u'Parser for command line options. + +This module helps scripts to parse the command line arguments in +sys.argv. It supports the same conventions as the Unix getopt() +function (including the special meanings of arguments of the form `-' +and `--'). Long options similar to those supported by GNU software +may be used as well via an optional third argument. This module +provides two functions and an exception: + +getopt() -- Parse command line options +gnu_getopt() -- Like getopt(), but allow option and non-option arguments +to be intermixed. +GetoptError -- exception (class) raised with 'opt' attribute, which is the +option involved with the exception. 
+'b'GetoptError'u'GetoptError'b'getopt'u'getopt'b'gnu_getopt'u'gnu_getopt'b'getopt(args, options[, long_options]) -> opts, args + + Parses command line options and parameter list. args is the + argument list to be parsed, without the leading reference to the + running program. Typically, this means "sys.argv[1:]". shortopts + is the string of option letters that the script wants to + recognize, with options that require an argument followed by a + colon (i.e., the same format that Unix getopt() uses). If + specified, longopts is a list of strings with the names of the + long options which should be supported. The leading '--' + characters should not be included in the option name. Options + which require an argument should be followed by an equal sign + ('='). + + The return value consists of two elements: the first is a list of + (option, value) pairs; the second is the list of program arguments + left after the option list was stripped (this is a trailing slice + of the first argument). Each option-and-value pair returned has + the option as its first element, prefixed with a hyphen (e.g., + '-x'), and the option argument as its second element, or an empty + string if the option has no argument. The options occur in the + list in the same order in which they were found, thus allowing + multiple occurrences. Long and short options may be mixed. + + 'u'getopt(args, options[, long_options]) -> opts, args + + Parses command line options and parameter list. args is the + argument list to be parsed, without the leading reference to the + running program. Typically, this means "sys.argv[1:]". shortopts + is the string of option letters that the script wants to + recognize, with options that require an argument followed by a + colon (i.e., the same format that Unix getopt() uses). If + specified, longopts is a list of strings with the names of the + long options which should be supported. The leading '--' + characters should not be included in the option name. Options + which require an argument should be followed by an equal sign + ('='). + + The return value consists of two elements: the first is a list of + (option, value) pairs; the second is the list of program arguments + left after the option list was stripped (this is a trailing slice + of the first argument). Each option-and-value pair returned has + the option as its first element, prefixed with a hyphen (e.g., + '-x'), and the option argument as its second element, or an empty + string if the option has no argument. The options occur in the + list in the same order in which they were found, thus allowing + multiple occurrences. Long and short options may be mixed. + + 'b'getopt(args, options[, long_options]) -> opts, args + + This function works like getopt(), except that GNU style scanning + mode is used by default. This means that option and non-option + arguments may be intermixed. The getopt() function stops + processing options as soon as a non-option argument is + encountered. + + If the first character of the option string is `+', or if the + environment variable POSIXLY_CORRECT is set, then option + processing stops as soon as a non-option argument is encountered. + + 'u'getopt(args, options[, long_options]) -> opts, args + + This function works like getopt(), except that GNU style scanning + mode is used by default. This means that option and non-option + arguments may be intermixed. The getopt() function stops + processing options as soon as a non-option argument is + encountered. 
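A usage sketch of getopt() and gnu_getopt() matching the docstrings above; the option set ("ab:", "alpha=", "verbose") is made up.

import getopt
import sys

def parse(argv):
    try:
        # "-a" takes no argument, "-b" requires one; "--alpha" requires one
        opts, args = getopt.getopt(argv, "ab:", ["alpha=", "verbose"])
    except getopt.GetoptError as err:
        print(err, file=sys.stderr)          # e.g. "option -x not recognized"
        sys.exit(2)
    return opts, args

print(parse(["-a", "-b", "1", "--alpha", "2", "rest"]))
# ([('-a', ''), ('-b', '1'), ('--alpha', '2')], ['rest'])

# gnu_getopt() allows options to follow positional arguments
print(getopt.gnu_getopt(["x", "-a"], "a"))
# ([('-a', '')], ['x'])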
+ + If the first character of the option string is `+', or if the + environment variable POSIXLY_CORRECT is set, then option + processing stops as soon as a non-option argument is encountered. + + 'b'POSIXLY_CORRECT'u'POSIXLY_CORRECT'b'option --%s requires argument'u'option --%s requires argument'b'option --%s must not have an argument'u'option --%s must not have an argument'b'option --%s not recognized'u'option --%s not recognized'b'option --%s not a unique prefix'u'option --%s not a unique prefix'b'option -%s requires argument'u'option -%s requires argument'b'option -%s not recognized'u'option -%s not recognized'b'a:b'u'a:b'b'alpha='u'alpha='Internationalization and localization support. + +This module provides internationalization (I18N) and localization (L10N) +support for your Python programs by providing an interface to the GNU gettext +message catalog library. + +I18N refers to the operation by which a program is made aware of multiple +languages. L10N refers to the adaptation of your program, once +internationalized, to the local language and cultural habits. + +NullTranslationsGNUTranslationsCatalogtranslationinstalldngettextlgettextldgettextldngettextlngettextpgettextdpgettextnpgettextdnpgettextshare_default_localedir + (?P[ \t]+) | # spaces and horizontal tabs + (?P[0-9]+\b) | # decimal integer + (?Pn\b) | # only n is allowed + (?P[()]) | + (?P[-*/%+?:]|[>, + # <=, >=, ==, !=, &&, ||, + # ? : + # unary and bitwise ops + # not allowed + (?P\w+|.) # invalid token + _token_pattern_tokenizelastgroupWHITESPACESINVALIDinvalid token in plural form: %sunexpected token in plural form: %sunexpected end of plural form||&&!=<=>=_binary_ops_c2py_opsnexttoknot unbalanced parenthesis in plural form%s%s%s%d(%s)if_trueif_false%s if %s else %s_as_intPlural value must be an integer, got %sc2pyGets a C expression as used in PO files for plural forms and returns a + Python function that implements an equivalent expression. 
+ plural form expression is too longplural form expression is too complexif True: + def func(n): + if not isinstance(n, int): + n = _as_int(n) + return int(%s) + _expand_langCOMPONENT_CODESETCOMPONENT_TERRITORYCOMPONENT_MODIFIERmaskmodifiercodesetterritorylanguage_info_output_charset_fallbackadd_fallbacklgettext() is deprecated, use gettext() instead.*\blgettext\b.*msgid1msgid2lngettext() is deprecated, use ngettext() instead.*\blngettext\b.*tmsgoutput_charset() is deprecatedset_output_charsetset_output_charset() is deprecatedallowed25000721580x950412deLE_MAGIC37257227730xde120495BE_MAGIC%s%sCONTEXTVERSIONS_get_versionsReturns a tuple of major version, minor versionOverride this method to support alternative .mo formats._catalogcatalogbuflen4I>IIBad magic numbermajor_versionminor_versionBad version number mlenmoffmendtlentofftendFile is corruptlastkb_item#-#-#-#-#content-typecharset=plural-formsplural=ctxt_msg_idlocaledirlanguagesenvarLANGUAGELANGnelangsnelang%s.momofile_translationsunspecified_unspecifiedmofilesNo translation file found for domainparameter codeset is deprecated.*\bset_output_charset\b.*_localedirs_localecodesetsmessages_current_domainbind_textdomain_codeset() is deprecatedldgettext() is deprecated, use dgettext() instead.*\bparameter codeset\b.*ldngettext() is deprecated, use dngettext() instead.*\bldgettext\b.*.*\bldngettext\b.*# This module represents the integration of work, contributions, feedback, and# suggestions from the following people:# Martin von Loewis, who wrote the initial implementation of the underlying# C-based libintlmodule (later renamed _gettext), along with a skeletal# gettext.py implementation.# Peter Funk, who wrote fintl.py, a fairly complete wrapper around intlmodule,# which also included a pure-Python implementation to read .mo files if# intlmodule wasn't available.# James Henstridge, who also wrote a gettext.py module, which has some# interesting, but currently unsupported experimental features: the notion of# a Catalog class and instances, and the ability to add to a catalog file via# a Python API.# Barry Warsaw integrated these modules, wrote the .install() API and code,# and conformed all C and Python code to Python's coding standards.# Francois Pinard and Marc-Andre Lemburg also contributed valuably to this# J. David Ibanez implemented plural forms. Bruno Haible fixed some bugs.# TODO:# - Lazy loading of .mo files. Currently the entire catalog is loaded into# memory, but that's probably bad for large translated programs. Instead,# the lexical sort of original strings in GNU .mo files should be exploited# to do binary searches and lazy initializations. Or you might want to use# the undocumented double-hash algorithm for .mo files with hash tables, but# you'll need to study the GNU gettext code to do this.# - Support Solaris .mo file formats. Unfortunately, we've been unable to# find this format documented anywhere.# Expression parsing for plural form selection.# The gettext library supports a small subset of C syntax. 
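The tokenizer and c2py() machinery above turn a PO-file Plural-Forms expression into a Python callable. gettext.c2py is an internal helper (not listed in gettext.__all__), so treat this as an illustrative sketch; the expressions are the usual English-style and Polish-style examples.

import gettext

# English-style: one form for n == 1, another otherwise
plural = gettext.c2py("n != 1")
print(plural(1), plural(3))          # 0 1

# Polish-style expression with &&, || and the C ternary operator
polish = gettext.c2py(
    "n==1 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2")
print([polish(n) for n in (1, 2, 5, 22, 25)])   # [0, 1, 2, 1, 2]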
The only# incompatible difference is that integer literals starting with zero are# decimal.# https://www.gnu.org/software/gettext/manual/gettext.html#Plural-forms# http://git.savannah.gnu.org/cgit/gettext.git/tree/gettext-runtime/intl/plural.y# Break chained comparisons# '==', '!=', '<', '>', '<=', '>='# Replace some C operators by their Python equivalents# '<', '>', '<=', '>='# Python compiler limit is about 90.# The most complex example has 2.# Recursion error can be raised in _parse() or exec().# split up the locale into its base components# if all components for this combo exist ...# Magic number of .mo files# The encoding of a msgctxt and a msgid in a .mo file is# msgctxt + "\x04" + msgid (gettext version >= 0.15)# Acceptable .mo versions# Delay struct import for speeding up gettext import when .mo files# are not used.# Parse the .mo file header, which consists of 5 little endian 32# bit words.# germanic plural by default# Are we big endian or little endian?# Now put all messages from the .mo file buffer into the catalog# dictionary.# See if we're looking at GNU .mo conventions for metadata# Catalog description# Skip over comment lines:# Note: we unconditionally convert both msgids and msgstrs to# Unicode using the character encoding specified in the charset# parameter of the Content-Type header. The gettext documentation# strongly encourages msgids to be us-ascii, but some applications# require alternative encodings (e.g. Zope's ZCML and ZPT). For# traditional gettext applications, the msgid conversion will# cause no problems since us-ascii should always be a subset of# the charset encoding. We may want to fall back to 8-bit msgids# if the Unicode conversion fails.# Plural forms# advance to next entry in the seek tables# Locate a .mo file using the gettext strategy# Get some reasonable defaults for arguments that were not supplied# now normalize and expand the languages# select a language# a mapping between absolute .mo file path and Translation object# Avoid opening, reading, and parsing the .mo file after it's been done# once.# Copy the translation object to allow setting fallbacks and# output charset. All other instance data is shared with the# cached object.# Delay copy import for speeding up gettext import when .mo files# a mapping b/w domains and locale directories# a mapping b/w domains and codesets# current global domain, `messages' used for compatibility w/ GNU gettext# dcgettext() has been deemed unnecessary and is not implemented.# James Henstridge's Catalog constructor from GNOME gettext. Documented usage# was:# import gettext# cat = gettext.Catalog(PACKAGE, localedir=LOCALEDIR)# _ = cat.gettext# print _('Hello World')# The resulting catalog object currently don't support access through a# dictionary API, which was supported (but apparently unused) in GNOME# gettext.b'Internationalization and localization support. + +This module provides internationalization (I18N) and localization (L10N) +support for your Python programs by providing an interface to the GNU gettext +message catalog library. + +I18N refers to the operation by which a program is made aware of multiple +languages. L10N refers to the adaptation of your program, once +internationalized, to the local language and cultural habits. + +'u'Internationalization and localization support. + +This module provides internationalization (I18N) and localization (L10N) +support for your Python programs by providing an interface to the GNU gettext +message catalog library. 
+ +I18N refers to the operation by which a program is made aware of multiple +languages. L10N refers to the adaptation of your program, once +internationalized, to the local language and cultural habits. + +'b'NullTranslations'u'NullTranslations'b'GNUTranslations'u'GNUTranslations'b'Catalog'u'Catalog'b'translation'u'translation'b'install'u'install'b'textdomain'u'textdomain'b'bindtextdomain'u'bindtextdomain'b'bind_textdomain_codeset'u'bind_textdomain_codeset'b'dgettext'u'dgettext'b'dngettext'u'dngettext'b'gettext'u'gettext'b'lgettext'u'lgettext'b'ldgettext'u'ldgettext'b'ldngettext'u'ldngettext'b'lngettext'u'lngettext'b'ngettext'u'ngettext'b'pgettext'u'pgettext'b'dpgettext'u'dpgettext'b'npgettext'u'npgettext'b'dnpgettext'u'dnpgettext'b'share'u'share'b'locale'b' + (?P[ \t]+) | # spaces and horizontal tabs + (?P[0-9]+\b) | # decimal integer + (?Pn\b) | # only n is allowed + (?P[()]) | + (?P[-*/%+?:]|[>, + # <=, >=, ==, !=, &&, ||, + # ? : + # unary and bitwise ops + # not allowed + (?P\w+|.) # invalid token + 'u' + (?P[ \t]+) | # spaces and horizontal tabs + (?P[0-9]+\b) | # decimal integer + (?Pn\b) | # only n is allowed + (?P[()]) | + (?P[-*/%+?:]|[>, + # <=, >=, ==, !=, &&, ||, + # ? : + # unary and bitwise ops + # not allowed + (?P\w+|.) # invalid token + 'b'WHITESPACES'u'WHITESPACES'b'INVALID'u'INVALID'b'invalid token in plural form: %s'u'invalid token in plural form: %s'b'unexpected token in plural form: %s'u'unexpected token in plural form: %s'b'unexpected end of plural form'u'unexpected end of plural form'b'||'u'||'b'&&'u'&&'u'=='b'!='u'!='b'<='u'<='b'>='u'>='b'not 'u'not 'b'unbalanced parenthesis in plural form'u'unbalanced parenthesis in plural form'b'%s%s'u'%s%s'b'%s%d'u'%s%d'b'(%s)'u'(%s)'b'%s if %s else %s'u'%s if %s else %s'b'Plural value must be an integer, got %s'u'Plural value must be an integer, got %s'b'Gets a C expression as used in PO files for plural forms and returns a + Python function that implements an equivalent expression. + 'u'Gets a C expression as used in PO files for plural forms and returns a + Python function that implements an equivalent expression. 
+ 'b'plural form expression is too long'u'plural form expression is too long'b'plural form expression is too complex'u'plural form expression is too complex'b'_as_int'u'_as_int'b'if True: + def func(n): + if not isinstance(n, int): + n = _as_int(n) + return int(%s) + 'u'if True: + def func(n): + if not isinstance(n, int): + n = _as_int(n) + return int(%s) + 'b'lgettext() is deprecated, use gettext() instead'u'lgettext() is deprecated, use gettext() instead'b'.*\blgettext\b.*'u'.*\blgettext\b.*'b'lngettext() is deprecated, use ngettext() instead'u'lngettext() is deprecated, use ngettext() instead'b'.*\blngettext\b.*'u'.*\blngettext\b.*'b'output_charset() is deprecated'u'output_charset() is deprecated'b'set_output_charset() is deprecated'u'set_output_charset() is deprecated'b'%s%s'u'%s%s'b'Returns a tuple of major version, minor version'u'Returns a tuple of major version, minor version'b'Override this method to support alternative .mo formats.'u'Override this method to support alternative .mo formats.'b'4I'u'>4I'b'>II'u'>II'b'Bad magic number'u'Bad magic number'b'Bad version number 'u'Bad version number 'b'File is corrupt'u'File is corrupt'b'#-#-#-#-#'u'#-#-#-#-#'b'content-type'u'content-type'b'charset='u'charset='b'plural-forms'u'plural-forms'b'plural='u'plural='b'LANGUAGE'u'LANGUAGE'b'LC_ALL'u'LC_ALL'b'LC_MESSAGES'u'LC_MESSAGES'b'LANG'u'LANG'b'C'u'C'b'%s.mo'u'%s.mo'b'unspecified'u'unspecified'b'No translation file found for domain'u'No translation file found for domain'b'parameter codeset is deprecated'u'parameter codeset is deprecated'b'.*\bset_output_charset\b.*'u'.*\bset_output_charset\b.*'b'messages'u'messages'b'bind_textdomain_codeset() is deprecated'u'bind_textdomain_codeset() is deprecated'b'ldgettext() is deprecated, use dgettext() instead'u'ldgettext() is deprecated, use dgettext() instead'b'.*\bparameter codeset\b.*'u'.*\bparameter codeset\b.*'b'ldngettext() is deprecated, use dngettext() instead'u'ldngettext() is deprecated, use dngettext() instead'b'.*\bldgettext\b.*'u'.*\bldgettext\b.*'b'.*\bldngettext\b.*'u'.*\bldngettext\b.*'Filename globbing utility.iglobReturn a list of paths matching a pathname pattern. + + The pattern may contain simple shell-style wildcards a la + fnmatch. However, unlike fnmatch, filenames starting with a + dot are special cases that are not matched by '*' and '?' + patterns. + + If recursive is true, the pattern '**' will match any files and + zero or more directories and subdirectories. + Return an iterator which yields the paths matching a pathname pattern. + + The pattern may contain simple shell-style wildcards a la + fnmatch. However, unlike fnmatch, filenames starting with a + dot are special cases that are not matched by '*' and '?' + patterns. + + If recursive is true, the pattern '**' will match any files and + zero or more directories and subdirectories. + glob.glob_iglob_isrecursivedironlyhas_magic_glob2_glob1glob_in_dir_glob0_iterdir_ishiddenglob0glob1_rlistdirscandiris_dir([*?[])magic_checkmagic_check_bytes**Escape all special characters. + [\1]# skip empty string# Patterns ending with a slash should match only directories# `os.path.split()` returns the argument itself as a dirname if it is a# drive or UNC path. Prevent an infinite recursion if a drive or UNC path# contains magic characters (i.e. r'\\?\C:').# These 2 helper functions non-recursively glob inside a literal directory.# They return a list of basenames. 
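A usage sketch of glob(), iglob(), and escape() as documented above; the patterns and directory names are illustrative.

import glob

# non-recursive: '*' and '?' do not match a leading dot
print(glob.glob("*.py"))

# recursive: '**' matches files and any number of subdirectories
for path in glob.iglob("src/**/*.py", recursive=True):
    print(path)

# a trailing separator restricts matches to directories
print(glob.glob("*/"))

# escape() neutralises *, ? and [ in a literal name before globbing
pattern = glob.escape("report[final].txt") + "*"
print(pattern)        # report[[]final].txt*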
_glob1 accepts a pattern while _glob0# takes a literal basename (so it only has to check for its existence).# `os.path.split()` returns an empty basename for paths ending with a# directory separator. 'q*x/' should match only directories.# Following functions are not public but can be used by third-party code.# This helper function recursively yields relative pathnames inside a literal# directory.# If dironly is false, yields all file names inside a directory.# If dironly is true, yields only directory names.# Recursively yields relative pathnames inside a literal directory.# Escaping is done by wrapping any of "*?[" between square brackets.# Metacharacters do not work in the drive part and shouldn't be escaped.b'Filename globbing utility.'u'Filename globbing utility.'b'glob'u'glob'b'iglob'u'iglob'b'Return a list of paths matching a pathname pattern. + + The pattern may contain simple shell-style wildcards a la + fnmatch. However, unlike fnmatch, filenames starting with a + dot are special cases that are not matched by '*' and '?' + patterns. + + If recursive is true, the pattern '**' will match any files and + zero or more directories and subdirectories. + 'u'Return a list of paths matching a pathname pattern. + + The pattern may contain simple shell-style wildcards a la + fnmatch. However, unlike fnmatch, filenames starting with a + dot are special cases that are not matched by '*' and '?' + patterns. + + If recursive is true, the pattern '**' will match any files and + zero or more directories and subdirectories. + 'b'Return an iterator which yields the paths matching a pathname pattern. + + The pattern may contain simple shell-style wildcards a la + fnmatch. However, unlike fnmatch, filenames starting with a + dot are special cases that are not matched by '*' and '?' + patterns. + + If recursive is true, the pattern '**' will match any files and + zero or more directories and subdirectories. + 'u'Return an iterator which yields the paths matching a pathname pattern. + + The pattern may contain simple shell-style wildcards a la + fnmatch. However, unlike fnmatch, filenames starting with a + dot are special cases that are not matched by '*' and '?' + patterns. + + If recursive is true, the pattern '**' will match any files and + zero or more directories and subdirectories. + 'b'glob.glob'u'glob.glob'b'([*?[])'u'([*?[])'b'**'u'**'b'Escape all special characters. + 'u'Escape all special characters. + 'b'[\1]'u'[\1]'This module defines the data structures used to represent a grammar. + +These are a bit arcane because they are derived from the data +structures used by Python's 'pgen' parser generator. + +There's also a table here mapping operators to their names in the +token module; the Python tokenize module reports all operators as the +fallback token code OP, but the parser needs the actual token code. + +Pgen parsing tables conversion class. + + Once initialized, this class supplies the grammar tables for the + parsing engine implemented by parse.py. The parsing engine + accesses the instance variables directly. The class here does not + provide initialization of the tables; several subclasses exist to + do this (see the conv and pgen modules). + + The load() method reads the tables from a pickle file, which is + much faster than the other ways offered by subclasses. The pickle + file is written by calling dump() (after loading the grammar + tables using a subclass). The report() method prints a readable + representation of the tables to stdout, for debugging. 
+ + The instance variables are as follows: + + symbol2number -- a dict mapping symbol names to numbers. Symbol + numbers are always 256 or higher, to distinguish + them from token numbers, which are between 0 and + 255 (inclusive). + + number2symbol -- a dict mapping numbers to symbol names; + these two are each other's inverse. + + states -- a list of DFAs, where each DFA is a list of + states, each state is a list of arcs, and each + arc is a (i, j) pair where i is a label and j is + a state number. The DFA number is the index into + this list. (This name is slightly confusing.) + Final states are represented by a special arc of + the form (0, j) where j is its own state number. + + dfas -- a dict mapping symbol numbers to (DFA, first) + pairs, where DFA is an item from the states list + above, and first is a set of tokens that can + begin this grammar rule (represented by a dict + whose values are always 1). + + labels -- a list of (x, y) pairs where x is either a token + number or a symbol number, and y is either None + or a string; the strings are keywords. The label + number is the index in this list; label numbers + are used to mark state transitions (arcs) in the + DFAs. + + start -- the number of the grammar's start symbol. + + keywords -- a dict mapping keyword strings to arc labels. + + tokens -- a dict mapping token numbers to arc labels. + + symbol2numbernumber2symbolstatesdfasEMPTYsymbol2labelDump the grammar tables to a pickle file.HIGHEST_PROTOCOLLoad the grammar tables from a pickle file.pklLoad the grammar tables from a pickle bytes object. + Copy the grammar. + dict_attrDump the grammar tables to standard output, for debugging.s2nn2s +( LPAR +) RPAR +[ LSQB +] RSQB +: COLON +, COMMA +; SEMI ++ PLUS +- MINUS +* STAR +/ SLASH +| VBAR +& AMPER +< LESS +> GREATER += EQUAL +. DOT +% PERCENT +` BACKQUOTE +{ LBRACE +} RBRACE +@ AT +@= ATEQUAL +== EQEQUAL +!= NOTEQUAL +<> NOTEQUAL +<= LESSEQUAL +>= GREATEREQUAL +~ TILDE +^ CIRCUMFLEX +<< LEFTSHIFT +>> RIGHTSHIFT +** DOUBLESTAR ++= PLUSEQUAL +-= MINEQUAL +*= STAREQUAL +/= SLASHEQUAL +%= PERCENTEQUAL +&= AMPEREQUAL +|= VBAREQUAL +^= CIRCUMFLEXEQUAL +<<= LEFTSHIFTEQUAL +>>= RIGHTSHIFTEQUAL +**= DOUBLESTAREQUAL +// DOUBLESLASH +//= DOUBLESLASHEQUAL +-> RARROW +:= COLONEQUAL +opmap_raw# Map from operator to number (since tokenize doesn't do this)b'This module defines the data structures used to represent a grammar. + +These are a bit arcane because they are derived from the data +structures used by Python's 'pgen' parser generator. + +There's also a table here mapping operators to their names in the +token module; the Python tokenize module reports all operators as the +fallback token code OP, but the parser needs the actual token code. + +'u'This module defines the data structures used to represent a grammar. + +These are a bit arcane because they are derived from the data +structures used by Python's 'pgen' parser generator. + +There's also a table here mapping operators to their names in the +token module; the Python tokenize module reports all operators as the +fallback token code OP, but the parser needs the actual token code. + +'b'Pgen parsing tables conversion class. + + Once initialized, this class supplies the grammar tables for the + parsing engine implemented by parse.py. The parsing engine + accesses the instance variables directly. The class here does not + provide initialization of the tables; several subclasses exist to + do this (see the conv and pgen modules). 
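The Grammar docstring above explains that dump() pickles the parsing tables and load() reads them back much faster than regenerating them with pgen. A sketch of that round trip, with the caveat that lib2to3 is deprecated and pgen2's Grammar is an internal detail, and the pickle filename is made up:

from lib2to3 import pygram
from lib2to3.pgen2 import grammar

g = pygram.python_grammar            # a populated Grammar instance
g.dump('python_grammar.pickle')      # pickle the tables to a file

g2 = grammar.Grammar()               # starts with empty tables
g2.load('python_grammar.pickle')     # reload them from the pickle
print(len(g2.symbol2number), 'symbols,', len(g2.labels), 'labels')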
+ + The load() method reads the tables from a pickle file, which is + much faster than the other ways offered by subclasses. The pickle + file is written by calling dump() (after loading the grammar + tables using a subclass). The report() method prints a readable + representation of the tables to stdout, for debugging. + + The instance variables are as follows: + + symbol2number -- a dict mapping symbol names to numbers. Symbol + numbers are always 256 or higher, to distinguish + them from token numbers, which are between 0 and + 255 (inclusive). + + number2symbol -- a dict mapping numbers to symbol names; + these two are each other's inverse. + + states -- a list of DFAs, where each DFA is a list of + states, each state is a list of arcs, and each + arc is a (i, j) pair where i is a label and j is + a state number. The DFA number is the index into + this list. (This name is slightly confusing.) + Final states are represented by a special arc of + the form (0, j) where j is its own state number. + + dfas -- a dict mapping symbol numbers to (DFA, first) + pairs, where DFA is an item from the states list + above, and first is a set of tokens that can + begin this grammar rule (represented by a dict + whose values are always 1). + + labels -- a list of (x, y) pairs where x is either a token + number or a symbol number, and y is either None + or a string; the strings are keywords. The label + number is the index in this list; label numbers + are used to mark state transitions (arcs) in the + DFAs. + + start -- the number of the grammar's start symbol. + + keywords -- a dict mapping keyword strings to arc labels. + + tokens -- a dict mapping token numbers to arc labels. + + 'u'Pgen parsing tables conversion class. + + Once initialized, this class supplies the grammar tables for the + parsing engine implemented by parse.py. The parsing engine + accesses the instance variables directly. The class here does not + provide initialization of the tables; several subclasses exist to + do this (see the conv and pgen modules). + + The load() method reads the tables from a pickle file, which is + much faster than the other ways offered by subclasses. The pickle + file is written by calling dump() (after loading the grammar + tables using a subclass). The report() method prints a readable + representation of the tables to stdout, for debugging. + + The instance variables are as follows: + + symbol2number -- a dict mapping symbol names to numbers. Symbol + numbers are always 256 or higher, to distinguish + them from token numbers, which are between 0 and + 255 (inclusive). + + number2symbol -- a dict mapping numbers to symbol names; + these two are each other's inverse. + + states -- a list of DFAs, where each DFA is a list of + states, each state is a list of arcs, and each + arc is a (i, j) pair where i is a label and j is + a state number. The DFA number is the index into + this list. (This name is slightly confusing.) + Final states are represented by a special arc of + the form (0, j) where j is its own state number. + + dfas -- a dict mapping symbol numbers to (DFA, first) + pairs, where DFA is an item from the states list + above, and first is a set of tokens that can + begin this grammar rule (represented by a dict + whose values are always 1). + + labels -- a list of (x, y) pairs where x is either a token + number or a symbol number, and y is either None + or a string; the strings are keywords. 
The label + number is the index in this list; label numbers + are used to mark state transitions (arcs) in the + DFAs. + + start -- the number of the grammar's start symbol. + + keywords -- a dict mapping keyword strings to arc labels. + + tokens -- a dict mapping token numbers to arc labels. + + 'b'EMPTY'u'EMPTY'b'Dump the grammar tables to a pickle file.'u'Dump the grammar tables to a pickle file.'b'Load the grammar tables from a pickle file.'u'Load the grammar tables from a pickle file.'b'Load the grammar tables from a pickle bytes object.'u'Load the grammar tables from a pickle bytes object.'b' + Copy the grammar. + 'u' + Copy the grammar. + 'b'symbol2number'u'symbol2number'b'number2symbol'u'number2symbol'b'dfas'u'dfas'b'tokens'u'tokens'b'symbol2label'u'symbol2label'b'Dump the grammar tables to standard output, for debugging.'u'Dump the grammar tables to standard output, for debugging.'b's2n'u's2n'b'n2s'u'n2s'b'states'u'states'b'labels'u'labels'b' +( LPAR +) RPAR +[ LSQB +] RSQB +: COLON +, COMMA +; SEMI ++ PLUS +- MINUS +* STAR +/ SLASH +| VBAR +& AMPER +< LESS +> GREATER += EQUAL +. DOT +% PERCENT +` BACKQUOTE +{ LBRACE +} RBRACE +@ AT +@= ATEQUAL +== EQEQUAL +!= NOTEQUAL +<> NOTEQUAL +<= LESSEQUAL +>= GREATEREQUAL +~ TILDE +^ CIRCUMFLEX +<< LEFTSHIFT +>> RIGHTSHIFT +** DOUBLESTAR ++= PLUSEQUAL +-= MINEQUAL +*= STAREQUAL +/= SLASHEQUAL +%= PERCENTEQUAL +&= AMPEREQUAL +|= VBAREQUAL +^= CIRCUMFLEXEQUAL +<<= LEFTSHIFTEQUAL +>>= RIGHTSHIFTEQUAL +**= DOUBLESTAREQUAL +// DOUBLESLASH +//= DOUBLESLASHEQUAL +-> RARROW +:= COLONEQUAL +'u' +( LPAR +) RPAR +[ LSQB +] RSQB +: COLON +, COMMA +; SEMI ++ PLUS +- MINUS +* STAR +/ SLASH +| VBAR +& AMPER +< LESS +> GREATER += EQUAL +. DOT +% PERCENT +` BACKQUOTE +{ LBRACE +} RBRACE +@ AT +@= ATEQUAL +== EQEQUAL +!= NOTEQUAL +<> NOTEQUAL +<= LESSEQUAL +>= GREATEREQUAL +~ TILDE +^ CIRCUMFLEX +<< LEFTSHIFT +>> RIGHTSHIFT +** DOUBLESTAR ++= PLUSEQUAL +-= MINEQUAL +*= STAREQUAL +/= SLASHEQUAL +%= PERCENTEQUAL +&= AMPEREQUAL +|= VBAREQUAL +^= CIRCUMFLEXEQUAL +<<= LEFTSHIFTEQUAL +>>= RIGHTSHIFTEQUAL +**= DOUBLESTAREQUAL +// DOUBLESLASH +//= DOUBLESLASHEQUAL +-> RARROW +:= COLONEQUAL +'u'lib2to3.pgen2.grammar'u'pgen2.grammar'u'grammar'u'Access to the Unix group database. + +Group entries are reported as 4-tuples containing the following fields +from the group database, in order: + + gr_name - name of the group + gr_passwd - group password (encrypted); often empty + gr_gid - numeric ID of the group + gr_mem - list of members + +The gid is an integer, name and password are strings. (Note that most +users are not explicitly listed as members of the groups they are in +according to the password database. Check both databases to get +complete membership information.)'u'/Users/pwntester/.pyenv/versions/3.8.13/lib/python3.8/lib-dynload/grp.cpython-38-darwin.so'u'grp'getgrallgetgrgidgetgrnamu'grp.struct_group: Results from getgr*() routines. + +This object may be accessed either as a tuple of + (gr_name,gr_passwd,gr_gid,gr_mem) +or via the object attributes as named in the above tuple. +'gr_gidgr_memgr_namegr_passwdgrp.struct_groupstruct_groupgrpFunctions that read and write gzipped files. + +The user of the file doesn't have to worry about the compression, +but random access is not allowed.BadGzipFileFTEXTFHCRCFEXTRAFNAMEFCOMMENTREADWRITE_COMPRESS_LEVEL_FAST_COMPRESS_LEVEL_TRADEOFF_COMPRESS_LEVEL_BESTOpen a gzip-compressed file in binary or text mode. 
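The grp module docstring above describes group entries as 4-tuples and warns that gr_mem only lists explicit members. A short sketch; the group name 'wheel' is illustrative and may not exist on every system:

import grp

wheel = grp.getgrnam('wheel')            # raises KeyError if missing
print(wheel.gr_name, wheel.gr_gid, wheel.gr_mem)

same = grp.getgrgid(wheel.gr_gid)        # same entry, looked up by gid

# Primary-group membership lives in the password database, so combine
# getgrall() with pwd if complete membership information is needed.
admins = [g.gr_name for g in grp.getgrall() if 'root' in g.gr_mem]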
+ + The filename argument can be an actual filename (a str or bytes object), or + an existing file object to read from or write to. + + The mode argument can be "r", "rb", "w", "wb", "x", "xb", "a" or "ab" for + binary mode, or "rt", "wt", "xt" or "at" for text mode. The default mode is + "rb", and the default compresslevel is 9. + + For binary mode, this function is equivalent to the GzipFile constructor: + GzipFile(filename, mode, compresslevel). In this case, the encoding, errors + and newline arguments must not be provided. + + For text mode, a GzipFile object is created, and wrapped in an + io.TextIOWrapper instance with the specified encoding, error handling + behavior, and line ending(s). + + gz_modefilename must be a str or bytes object, or a filewrite32u= 1, the system will successively create + new files with the same pathname as the base file, but with extensions + ".1", ".2" etc. appended to it. For example, with a backupCount of 5 + and a base file name of "app.log", you would get "app.log", + "app.log.1", "app.log.2", ... through to "app.log.5". The file being + written to is always "app.log" - when it gets filled up, it is closed + and renamed to "app.log.1", and if files "app.log.1", "app.log.2" etc. + exist, then they are renamed to "app.log.2", "app.log.3" etc. + respectively. + + If maxBytes is zero, rollover never occurs. + + Do a rollover, as described in __init__(). + %s.%dsfndfn.1 + Determine if rollover should occur. + + Basically, see if the supplied record would cause the file to exceed + the size limit we have. + TimedRotatingFileHandler + Handler for logging to a file, rotating the log file at certain timed + intervals. + + If backupCount is > 0, when rollover is done, no more than backupCount + files are kept - the oldest ones are deleted. + atTime%Y-%m-%d_%H-%M-%S^\d{4}-\d{2}-\d{2}_\d{2}-\d{2}-\d{2}(\.\w+)?$extMatch%Y-%m-%d_%H-%M^\d{4}-\d{2}-\d{2}_\d{2}-\d{2}(\.\w+)?$%Y-%m-%d_%H^\d{4}-\d{2}-\d{2}_\d{2}(\.\w+)?$MIDNIGHT%Y-%m-%d^\d{4}-\d{2}-\d{2}(\.\w+)?$You must specify a day for weekly rollover from 0 to 6 (0 is Monday): %sInvalid day specified for weekly rollover: %sdayOfWeekInvalid rollover interval specified: %scomputeRolloverrolloverAt + Work out the rollover time based on the specified time. + currentHourcurrentMinutecurrentSecondcurrentDayrotate_tsdaysToWaitnewRolloverAtdstNowdstAtRolloveraddend + Determine if rollover should occur. + + record is not used, as we are just comparing times, but it is needed so + the method signatures are the same + getFilesToDelete + Determine the files to delete when rolling over. + + More specific than the earlier method, which just used glob.glob(). + dirNamefileNamesplen + do a rollover; in this case, a date/time stamp is appended to the filename + when the rollover happens. However, you want the file to be named for the + start of the interval, not the current time. If there is a backup count, + then we have to get a list of matching filenames, sort them and remove + the one with the oldest suffix. + timeTupledstThenWatchedFileHandler + A handler for logging to a file, which watches the file + to see if it has changed while in use. This can happen because of + usage of programs such as newsyslog and logrotate which perform + log file rotation. This handler, intended for use under Unix, + watches the file to see if it has changed since the last emit. + (A file has changed if its device or inode have changed.) + If it has changed, the old file stream is closed, and the file + opened to get a new stream. 
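The gzip.open() description at the top of this span distinguishes binary modes, where the call is equivalent to the GzipFile constructor, from text modes, where the GzipFile is wrapped in an io.TextIOWrapper. A minimal sketch with an assumed filename:

import gzip

# Binary mode (the default is 'rb'); compresslevel defaults to 9.
with gzip.open('data.txt.gz', 'wb', compresslevel=9) as f:
    f.write(b'hello world\n')

# Text mode adds the TextIOWrapper, so encoding/errors/newline apply here.
with gzip.open('data.txt.gz', 'rt', encoding='utf-8') as f:
    print(f.read(), end='')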
+ + This handler is not appropriate for use under Windows, because + under Windows open files cannot be moved or renamed - logging + opens the files with exclusive locks - and so there is no need + for such a handler. Furthermore, ST_INO is not supported under + Windows; stat always returns zero for this value. + + This handler is based on a suggestion and patch by Chad J. + Schroeder. + devino_statstreamsresreopenIfNeeded + Reopen log file if needed. + + Checks if the underlying file has changed, and if it + has, close the old stream and reopen the file to get the + current stream. + + Emit a record. + + If underlying file has changed, reopen the file before emitting the + record to it. + SocketHandler + A handler class which writes logging records, in pickle format, to + a streaming socket. The socket is kept open across logging calls. + If the peer resets it, an attempt is made to reconnect on the next call. + The pickle which is sent is that of the LogRecord's attribute dictionary + (__dict__), so that the receiver does not need to have the logging module + installed in order to process the logging event. + + To unpickle the record at the receiving end into a LogRecord, use the + makeLogRecord function. + + Initializes the handler with a specific host address and port. + + When the attribute *closeOnError* is set to True - if a socket error + occurs, the socket is silently closed and then reopened on the next + logging call. + closeOnErrorretryTimeretryStartretryMaxretryFactormakeSocket + A factory method which allows subclasses to define the precise + type of socket they want. + createSocket + Try to create a socket, using an exponential backoff with + a max retry time. Thanks to Robert Olson for the original patch + (SF #815911) which has been slightly refactored. + attemptretryPeriod + Send a pickled string to the socket. + + This function allows for partial sends which can happen when the + network is busy. + makePickle + Pickles the record in binary format with a length prefix, and + returns it ready for transmission across the socket. + dummy>Lslen + Handle an error during logging. + + An error has occurred during logging. Most likely cause - + connection lost. Close the socket so that we can retry on the + next event. + + Emit a record. + + Pickles the record and writes it to the socket in binary format. + If there is an error with the socket, silently drop the packet. + If there was a problem with the socket, re-establishes the + socket. + + Closes the socket. + DatagramHandler + A handler class which writes logging records, in pickle format, to + a datagram socket. The pickle which is sent is that of the LogRecord's + attribute dictionary (__dict__), so that the receiver does not need to + have the logging module installed in order to process the logging event. + + To unpickle the record at the receiving end into a LogRecord, use the + makeLogRecord function. + + + Initializes the handler with a specific host address and port. + + The factory method of SocketHandler is here overridden to create + a UDP socket (SOCK_DGRAM). + + Send a pickled string to a socket. + + This function no longer allows for partial sends which can happen + when the network is busy - UDP does not guarantee delivery and + can deliver packets out of sequence. + SysLogHandler + A handler class which sends formatted logging records to a syslog + server. 
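The SocketHandler docstrings above spell out the wire format: a pickled LogRecord __dict__ prefixed with a 4-byte big-endian '>L' length, with makeLogRecord() used at the receiving end. A receiving-side sketch under those assumptions; the host and port are illustrative, and unpickling should only be done for trusted peers:

import logging, logging.handlers, pickle, socket, struct

def recvall(sock, n):
    # Read exactly n bytes, since recv() may return short chunks.
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('peer closed')
        buf += chunk
    return buf

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT))
srv.listen(1)
conn, _ = srv.accept()

while True:
    (length,) = struct.unpack('>L', recvall(conn, 4))    # length prefix
    attrs = pickle.loads(recvall(conn, length))          # record __dict__
    record = logging.makeLogRecord(attrs)
    logging.getLogger(record.name).handle(record)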
Based on Sam Rushing's syslog module: + http://www.nightmare.com/squirl/python-ext/misc/syslog.py + Contributed by Nicolas Untz (after which minor refactoring changes + have been made). + LOG_EMERGLOG_ALERTLOG_CRITLOG_ERRLOG_WARNINGLOG_NOTICELOG_INFOLOG_DEBUGLOG_KERNLOG_USERLOG_MAILLOG_DAEMONLOG_AUTHLOG_SYSLOGLOG_LPRLOG_NEWSLOG_UUCPLOG_CRONLOG_AUTHPRIVLOG_FTPLOG_LOCAL0LOG_LOCAL1LOG_LOCAL2LOG_LOCAL3LOG_LOCAL4LOG_LOCAL5LOG_LOCAL6LOG_LOCAL7alertcritemergnoticepanicpriority_namesauthprivcrondaemonkernlprmailnewssecuritysysloguucplocal0local1local2local3local4local5local6local7facility_namespriority_mapfacility + Initialize a handler. + + If address is specified as a string, a UNIX socket is used. To log to a + local syslogd, "SysLogHandler(address="/dev/log")" can be used. + If facility is not specified, LOG_USER is used. If socktype is + specified as socket.SOCK_DGRAM or socket.SOCK_STREAM, that specific + socket type will be used. For Unix sockets, you can also specify a + socktype of None, in which case socket.SOCK_DGRAM will be used, falling + back to socket.SOCK_STREAM. + unixsocket_connect_unixsocketressgetaddrinfo returns an empty listuse_socktypeencodePriority + Encode the facility and priority. You can pass in strings or + integers - if strings are passed, the facility_names and + priority_names mapping dictionaries are used to convert them to + integers. + mapPriority + Map a logging level name to a key in the priority_names map. + This is useful in two scenarios: when custom levels are being + used, and in the case where you can't do a straightforward + mapping by lowercasing the logging level name because of locale- + specific issues (see SF #1524081). + identappend_nul + Emit a record. + + The record is formatted, and then sent to the syslog server. If + exception information is present, it is NOT sent to the server. + <%d>prioSMTPHandler + A handler class which sends an SMTP email for each logging event. + 5.0mailhostfromaddrtoaddrssubjectcredentials + Initialize the handler. + + Initialize the instance with the from and to addresses and subject + line of the email. To specify a non-standard SMTP port, use the + (host, port) tuple format for the mailhost argument. To specify + authentication credentials, supply a (username, password) tuple + for the credentials argument. To specify the use of a secure + protocol (TLS), pass in a tuple for the secure argument. This will + only be used when authentication credentials are supplied. The tuple + will be either an empty tuple, or a single-value tuple with the name + of a keyfile, or a 2-value tuple with the names of the keyfile and + certificate file. (This tuple is passed to the `starttls` method). + A timeout in seconds can be specified for the SMTP connection (the + default is one second). + mailportusernamegetSubject + Determine the subject for the email. + + If you want to specify a subject line which is record-dependent, + override this method. + + Emit a record. + + Format the record and send it to the specified addressees. + smtplibEmailMessageSMTP_PORTSMTPsmtpFromToSubjectDateset_contentehlostarttlssend_messageNTEventLogHandler + A handler class which sends events to the NT Event Log. Adds a + registry entry for the specified application name. If no dllname is + provided, win32service.pyd (which contains some basic message + placeholders) is used. Note that use of these placeholders will make + your event logs big, as the entire message source is held in the log. 
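The SysLogHandler description above covers Unix-socket versus UDP addresses and the facility/priority encoding. A sketch; '/dev/log' is the usual Linux daemon socket (macOS typically uses '/var/run/syslog'), and the logger name and message are made up:

import logging
from logging.handlers import SysLogHandler

handler = SysLogHandler(address='/dev/log',
                        facility=SysLogHandler.LOG_LOCAL0)
handler.setFormatter(logging.Formatter('myapp: %(levelname)s %(message)s'))

log = logging.getLogger('myapp')
log.addHandler(handler)
log.warning('disk almost full')   # encoded as <132>, i.e. local0.warning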
+ If you want slimmer logs, you have to pass in the name of your own DLL + which contains the message definitions you want to use in the event log. + Applicationappnamedllnamelogtypewin32evtlogutilwin32evtlog_weluwin32service.pydAddSourceToRegistryEVENTLOG_ERROR_TYPEdeftypeEVENTLOG_INFORMATION_TYPEEVENTLOG_WARNING_TYPEtypemapThe Python Win32 extensions for NT (service, event logging) appear not to be available."The Python Win32 extensions for NT (service, event ""logging) appear not to be available."getMessageID + Return the message ID for the event record. If you are using your + own messages, you could do this by having the msg passed to the + logger being an ID rather than a formatting string. Then, in here, + you could use a dictionary lookup to get the message ID. This + version returns 1, which is the base message ID in win32service.pyd. + getEventCategory + Return the event category for the record. + + Override this if you want to specify your own categories. This version + returns 0. + getEventType + Return the event type for the record. + + Override this if you want to specify your own types. This version does + a mapping using the handler's typemap attribute, which is set up in + __init__() to a dictionary which contains mappings for DEBUG, INFO, + WARNING, ERROR and CRITICAL. If you are using your own levels you will + either need to override this method or place a suitable dictionary in + the handler's typemap attribute. + + Emit a record. + + Determine the message ID, event category and event type. Then + log the message in the NT event log. + ReportEvent + Clean up this handler. + + You can remove the application name from the registry as a + source of event log entries. However, if you do this, you will + not be able to see the events as you intended in the Event Log + Viewer - it needs to be able to access the registry to get the + DLL name. + HTTPHandler + A class which sends records to a Web server, using either GET or + POST semantics. + GET + Initialize the instance with the host, the request URL, and the method + ("GET" or "POST") + method must be GET or POSTcontext parameter only makes sense with secure=True"context parameter only makes sense ""with secure=True"mapLogRecord + Default implementation of mapping the log record into a dict + that is sent as the CGI data. Overwrite in your class. + Contributed by Franz Glasner. + + Emit a record. + + Send the record to the Web server as a percent-encoded dictionary + %c%sContent-typeapplication/x-www-form-urlencodedContent-length + A handler class which buffers logging records in memory. Whenever each + record is added to the buffer, a check is made to see if the buffer should + be flushed. If it should, then flush() is expected to do what's needed. + capacity + Initialize the handler with the buffer size. + + Should the handler flush its buffer? + + Returns true if the buffer is up to capacity. This method can be + overridden to implement custom flushing strategies. + + Emit a record. + + Append the record. If shouldFlush() tells us to, call flush() to process + the buffer. + + Override to implement custom flushing behaviour. + + This version just zaps the buffer to empty. + + Close the handler. + + This version just flushes and chains to the parent class' close(). + MemoryHandler + A handler class which buffers logging records in memory, periodically + flushing them to a target handler. Flushing occurs whenever the buffer + is full, or when an event of a certain severity or greater is seen. 
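The buffering handlers described above hold records in memory and flush either when capacity is reached or, for MemoryHandler, when a record at flushLevel or above arrives. A sketch using an assumed file target and logger name:

import logging
from logging.handlers import MemoryHandler

target = logging.FileHandler('app.log')        # where flushed records go
buffered = MemoryHandler(capacity=100, flushLevel=logging.ERROR,
                         target=target)

log = logging.getLogger('myapp')
log.setLevel(logging.DEBUG)
log.addHandler(buffered)

log.info('cheap: stays in the in-memory buffer')
log.error('ERROR meets flushLevel, so the whole buffer is written out')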
+ flushLevelflushOnClose + Initialize the handler with the buffer size, the level at which + flushing should occur and an optional target. + + Note that without a target being set either here or via setTarget(), + a MemoryHandler is no use to anyone! + + The ``flushOnClose`` argument is ``True`` for backward compatibility + reasons - the old behaviour is that when the handler is closed, the + buffer is flushed, even if the flush level hasn't been exceeded nor the + capacity exceeded. To prevent this, set ``flushOnClose`` to ``False``. + + Check for buffer full or a record at the flushLevel or higher. + setTarget + Set the target handler for this handler. + + For a MemoryHandler, flushing means just sending the buffered + records to the target, if there is one. Override if you want + different behaviour. + + The record buffer is also cleared by this operation. + + Flush, if appropriately configured, set the target to None and lose the + buffer. + QueueHandler + This handler sends events to a queue. Typically, it would be used together + with a multiprocessing Queue to centralise logging to file in one process + (in a multi-process application), so as to avoid file write contention + between processes. + + This code is new in Python 3.2, but this class can be copy pasted into + user code for use with earlier Python versions. + + Initialise an instance, using the passed queue. + enqueue + Enqueue a record. + + The base implementation uses put_nowait. You may want to override + this method if you want to use blocking, timeouts or custom queue + implementations. + prepare + Prepares a record for queuing. The object returned by this method is + enqueued. + + The base implementation formats the record to merge the message + and arguments, and removes unpickleable items from the record + in-place. + + You might want to override this method if you want to convert + the record to a dict or JSON string, or send a modified copy + of the record while leaving the original intact. + + Emit a record. + + Writes the LogRecord to the queue, preparing it for pickling first. + QueueListener + This class implements an internal threaded listener which watches for + LogRecords being added to a queue, removes them and passes them to a + list of handlers for processing. + respect_handler_level + Initialise an instance with the specified queue and + handlers. + dequeueblock + Dequeue a record and return it, optionally blocking. + + The base implementation uses get. You may want to override this method + if you want to use timeouts or work with custom queue implementations. + + Start the listener. + + This starts up a background thread to monitor the queue for + LogRecords to process. + Thread_monitor + Prepare a record for handling. + + This method just returns the passed-in record. You may want to + override this method if you need to do any custom marshalling or + manipulation of the record before passing it to the handlers. + + Handle a record. + + This just loops through the handlers offering them the record + to handle. + + Monitor the queue for records, and ask the handler + to deal with them. + + This method runs on a separate, internal thread. + The thread will terminate if it sees a sentinel object in the queue. + has_task_doneenqueue_sentinel + This is used to enqueue the sentinel record. + + The base implementation uses put_nowait. You may want to override this + method if you want to use timeouts or work with custom queue + implementations. + + Stop the listener. 
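The QueueHandler and QueueListener descriptions above split logging into a cheap enqueue on the caller's side and a background thread that passes records to the real handlers. A sketch with an in-process queue.Queue; a multiprocessing.Queue works the same way across processes:

import logging, queue
from logging.handlers import QueueHandler, QueueListener

q = queue.Queue()

log = logging.getLogger('worker')
log.setLevel(logging.INFO)
log.addHandler(QueueHandler(q))                 # producers only enqueue

listener = QueueListener(q, logging.StreamHandler(),
                         respect_handler_level=True)
listener.start()                                # background drain thread

log.info('handled on the listener thread, not in the caller')
listener.stop()                                 # drain remaining records, join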
+ + This asks the thread to terminate, and then waits for it to do so. + Note that if you don't call this before your application exits, there + may be some records still left on the queue, which won't be processed. + # Copyright 2001-2016 by Vinay Sajip. All Rights Reserved.# Some constants...# number of seconds in a day# Issue 18940: A file may not have been created if delay is True.# If rotation/rollover is wanted, it doesn't make sense to use another# mode. If for example 'w' were specified, then if there were multiple# runs of the calling application, the logs from previous runs would be# lost if the 'w' is respected, because the log file would be truncated# on each run.# delay was set...# are we rolling over?#due to non-posix-compliant Windows feature# Calculate the real rollover interval, which is just the number of# seconds between rollovers. Also set the filename suffix used when# a rollover occurs. Current 'when' events supported:# S - Seconds# M - Minutes# H - Hours# D - Days# midnight - roll over at midnight# W{0-6} - roll over on a certain day; 0 - Monday# Case of the 'when' specifier is not important; lower or upper case# will work.# one second# one minute# one hour# one day# one week# multiply by units requested# The following line added because the filename passed in could be a# path object (see Issue #27493), but self.baseFilename will be a string# If we are rolling over at midnight or weekly, then the interval is already known.# What we need to figure out is WHEN the next interval is. In other words,# if you are rolling over at midnight, then your base interval is 1 day,# but you want to start that one day clock at midnight, not now. So, we# have to fudge the rolloverAt value in order to trigger the first rollover# at the right time. After that, the regular interval will take care of# the rest. Note that this code doesn't care about leap seconds. :)# This could be done with less code, but I wanted it to be clear# r is the number of seconds left between now and the next rotation# Rotate time is before the current time (for example when# self.rotateAt is 13:45 and it now 14:15), rotation is# tomorrow.# If we are rolling over on a certain day, add in the number of days until# the next rollover, but offset by 1 since we just calculated the time# until the next day starts. There are three cases:# Case 1) The day to rollover is today; in this case, do nothing# Case 2) The day to rollover is further in the interval (i.e., today is# day 2 (Wednesday) and rollover is on day 6 (Sunday). Days to# next rollover is simply 6 - 2 - 1, or 3.# Case 3) The day to rollover is behind us in the interval (i.e., today# is day 5 (Saturday) and rollover is on day 3 (Thursday).# Days to rollover is 6 - 5 + 3, or 4. In this case, it's the# number of days left in the current week (1) plus the number# of days in the next week until the rollover day (3).# The calculations described in 2) and 3) above need to have a day added.# This is because the above time calculation takes us to midnight on this# day, i.e. 
the start of the next day.# 0 is Monday# DST kicks in before next rollover, so we need to deduct an hour# DST bows out before next rollover, so we need to add an hour# get the time that this sequence started at and make it a TimeTuple#If DST changes and midnight or weekly rollover, adjust for this.# Reduce the chance of race conditions by stat'ing by path only# once and then fstat'ing our new fd if we opened a new log stream.# See issue #14632: Thanks to John Mulligan for the problem report# and patch.# stat the file by path, checking for existence# compare file system stat with that of our stream file handle# we have an open file handle, clean it up# See Issue #21742: _open () might fail.# open a new file handle and get new stat info from that fd# Exponential backoff parameters.# Issue 19182# Either retryTime is None, in which case this# is the first time back after a disconnect, or# we've waited long enough.# next time, no delay before trying#Creation failed, so set the retry time and return.#self.sock can be None either because we haven't reached the retry#time yet, or because we have reached the retry time and retried,#but are still unable to connect.# so we can call createSocket next time# just to get traceback text into record.exc_text ...# See issue #14436: If msg or args are objects, they may not be# available on the receiving end. So we convert the msg % args# to a string, save it as msg and zap the args.# Issue #25685: delete 'message' if present: redundant with 'msg'#try to reconnect next time# from :# ======================================================================# priorities/facilities are encoded into a single 32-bit quantity, where# the bottom 3 bits are the priority (0-7) and the top 28 bits are the# facility (0-big number). Both the priorities and the facilities map# roughly one-to-one to strings in the syslogd(8) source code. This# mapping is included in this file.# priorities (these are ordered)# system is unusable# action must be taken immediately# critical conditions# error conditions# warning conditions# normal but significant condition# informational# debug-level messages# facility codes# kernel messages# random user-level messages# mail system# system daemons# security/authorization messages# messages generated internally by syslogd# line printer subsystem# network news subsystem# UUCP subsystem# clock daemon# security/authorization messages (private)# FTP daemon# other codes through 15 reserved for system use# reserved for local use# DEPRECATED#The map below appears to be trivially lowercasing the key. However,#there's more to it than meets the eye - in some locales, lowercasing#gives unexpected results. See SF #1524081: in the Turkish locale,#"INFO".lower() != "info"# Syslog server may be unavailable during handler initialisation.# C's openlog() function also ignores connection errors.# Moreover, we ignore these errors while logging, so it not worse# to ignore it also here.# it worked, so set self.socktype to the used type# user didn't specify falling back, so fail# prepended to all messages# some old syslog daemons expect a NUL terminator# We need to convert record level to lowercase, maybe this will# change in the future.# Message is a string. 
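The comments above enumerate the rollover triggers: a byte budget for RotatingFileHandler, and the 'when' codes (S, M, H, D, 'midnight', W0-W6) plus backupCount pruning for TimedRotatingFileHandler. A sketch; the filenames and logger name are illustrative:

import logging
from logging.handlers import RotatingFileHandler, TimedRotatingFileHandler

# Size-based: roll over near 1 MB, keeping app.log.1 .. app.log.5.
size_based = RotatingFileHandler('app.log', maxBytes=1_000_000, backupCount=5)

# Time-based: roll over at midnight, keeping seven dated backups.
time_based = TimedRotatingFileHandler('timed.log', when='midnight',
                                      backupCount=7)

log = logging.getLogger('myapp')
log.addHandler(size_based)
log.addHandler(time_based)
log.warning('rollover happens transparently inside emit()')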
Convert to bytes as required by RFC 5424#self._welu.RemoveSourceFromRegistry(self.appname, self.logtype)# support multiple hosts on one IP address...# need to strip optional :port from host, if present# See issue #30904: putrequest call above already adds this header# on Python 3.x.# h.putheader("Host", host)#can't do anything with the result# See Issue #26559 for why this has been added# The format operation gets traceback text into record.exc_text# (if there's exception data), and also returns the formatted# message. We can then use this to replace the original# msg + args, as these might be unpickleable. We also zap the# exc_info and exc_text attributes, as they are no longer# needed and, if not None, will typically not be pickleable.# bpo-35726: make copy of record to avoid affecting other handlers in the chain.b' +Additional handlers for the logging package for Python. The core package is +based on PEP 282 and comments thereto in comp.lang.python. + +Copyright (C) 2001-2016 Vinay Sajip. All Rights Reserved. + +To use, simply 'import logging.handlers' and log away! +'u' +Additional handlers for the logging package for Python. The core package is +based on PEP 282 and comments thereto in comp.lang.python. + +Copyright (C) 2001-2016 Vinay Sajip. All Rights Reserved. + +To use, simply 'import logging.handlers' and log away! +'b' + Base class for handlers that rotate log files at a certain point. + Not meant to be instantiated directly. Instead, use RotatingFileHandler + or TimedRotatingFileHandler. + 'u' + Base class for handlers that rotate log files at a certain point. + Not meant to be instantiated directly. Instead, use RotatingFileHandler + or TimedRotatingFileHandler. + 'b' + Use the specified filename for streamed logging + 'u' + Use the specified filename for streamed logging + 'b' + Emit a record. + + Output the record to the file, catering for rollover as described + in doRollover(). + 'u' + Emit a record. + + Output the record to the file, catering for rollover as described + in doRollover(). + 'b' + Modify the filename of a log file when rotating. + + This is provided so that a custom filename can be provided. + + The default implementation calls the 'namer' attribute of the + handler, if it's callable, passing the default name to + it. If the attribute isn't callable (the default is None), the name + is returned unchanged. + + :param default_name: The default name for the log file. + 'u' + Modify the filename of a log file when rotating. + + This is provided so that a custom filename can be provided. + + The default implementation calls the 'namer' attribute of the + handler, if it's callable, passing the default name to + it. If the attribute isn't callable (the default is None), the name + is returned unchanged. + + :param default_name: The default name for the log file. + 'b' + When rotating, rotate the current log. + + The default implementation calls the 'rotator' attribute of the + handler, if it's callable, passing the source and dest arguments to + it. If the attribute isn't callable (the default is None), the source + is simply renamed to the destination. + + :param source: The source filename. This is normally the base + filename, e.g. 'test.log' + :param dest: The destination filename. This is normally + what the source is rotated to, e.g. 'test.log.1'. + 'u' + When rotating, rotate the current log. + + The default implementation calls the 'rotator' attribute of the + handler, if it's callable, passing the source and dest arguments to + it. 
If the attribute isn't callable (the default is None), the source + is simply renamed to the destination. + + :param source: The source filename. This is normally the base + filename, e.g. 'test.log' + :param dest: The destination filename. This is normally + what the source is rotated to, e.g. 'test.log.1'. + 'b' + Handler for logging to a set of files, which switches from one file + to the next when the current file reaches a certain size. + 'u' + Handler for logging to a set of files, which switches from one file + to the next when the current file reaches a certain size. + 'b' + Open the specified file and use it as the stream for logging. + + By default, the file grows indefinitely. You can specify particular + values of maxBytes and backupCount to allow the file to rollover at + a predetermined size. + + Rollover occurs whenever the current log file is nearly maxBytes in + length. If backupCount is >= 1, the system will successively create + new files with the same pathname as the base file, but with extensions + ".1", ".2" etc. appended to it. For example, with a backupCount of 5 + and a base file name of "app.log", you would get "app.log", + "app.log.1", "app.log.2", ... through to "app.log.5". The file being + written to is always "app.log" - when it gets filled up, it is closed + and renamed to "app.log.1", and if files "app.log.1", "app.log.2" etc. + exist, then they are renamed to "app.log.2", "app.log.3" etc. + respectively. + + If maxBytes is zero, rollover never occurs. + 'u' + Open the specified file and use it as the stream for logging. + + By default, the file grows indefinitely. You can specify particular + values of maxBytes and backupCount to allow the file to rollover at + a predetermined size. + + Rollover occurs whenever the current log file is nearly maxBytes in + length. If backupCount is >= 1, the system will successively create + new files with the same pathname as the base file, but with extensions + ".1", ".2" etc. appended to it. For example, with a backupCount of 5 + and a base file name of "app.log", you would get "app.log", + "app.log.1", "app.log.2", ... through to "app.log.5". The file being + written to is always "app.log" - when it gets filled up, it is closed + and renamed to "app.log.1", and if files "app.log.1", "app.log.2" etc. + exist, then they are renamed to "app.log.2", "app.log.3" etc. + respectively. + + If maxBytes is zero, rollover never occurs. + 'b' + Do a rollover, as described in __init__(). + 'u' + Do a rollover, as described in __init__(). + 'b'%s.%d'u'%s.%d'b'.1'u'.1'b' + Determine if rollover should occur. + + Basically, see if the supplied record would cause the file to exceed + the size limit we have. + 'u' + Determine if rollover should occur. + + Basically, see if the supplied record would cause the file to exceed + the size limit we have. + 'b' + Handler for logging to a file, rotating the log file at certain timed + intervals. + + If backupCount is > 0, when rollover is done, no more than backupCount + files are kept - the oldest ones are deleted. + 'u' + Handler for logging to a file, rotating the log file at certain timed + intervals. + + If backupCount is > 0, when rollover is done, no more than backupCount + files are kept - the oldest ones are deleted. 
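The rotation_filename() and rotate() descriptions repeated above note that a callable 'namer' can rewrite the backup filename and a callable 'rotator' can replace the plain rename. A sketch built on that hook pair which compresses each rotated file; the base filename is illustrative:

import gzip, os, shutil
from logging.handlers import RotatingFileHandler

def namer(default_name):
    # rotation_filename() hands over the default name, e.g. 'app.log.1'.
    return default_name + '.gz'

def rotator(source, dest):
    # rotate() hands over the live file and the name produced by namer().
    with open(source, 'rb') as sf, gzip.open(dest, 'wb') as df:
        shutil.copyfileobj(sf, df)
    os.remove(source)

handler = RotatingFileHandler('app.log', maxBytes=1_000_000, backupCount=3)
handler.namer = namer
handler.rotator = rotator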
+ 'b'S'u'S'b'%Y-%m-%d_%H-%M-%S'u'%Y-%m-%d_%H-%M-%S'b'^\d{4}-\d{2}-\d{2}_\d{2}-\d{2}-\d{2}(\.\w+)?$'u'^\d{4}-\d{2}-\d{2}_\d{2}-\d{2}-\d{2}(\.\w+)?$'b'M'u'M'b'%Y-%m-%d_%H-%M'u'%Y-%m-%d_%H-%M'b'^\d{4}-\d{2}-\d{2}_\d{2}-\d{2}(\.\w+)?$'u'^\d{4}-\d{2}-\d{2}_\d{2}-\d{2}(\.\w+)?$'b'%Y-%m-%d_%H'u'%Y-%m-%d_%H'b'^\d{4}-\d{2}-\d{2}_\d{2}(\.\w+)?$'u'^\d{4}-\d{2}-\d{2}_\d{2}(\.\w+)?$'b'D'u'D'b'MIDNIGHT'u'MIDNIGHT'b'%Y-%m-%d'u'%Y-%m-%d'b'^\d{4}-\d{2}-\d{2}(\.\w+)?$'u'^\d{4}-\d{2}-\d{2}(\.\w+)?$'b'W'u'W'b'You must specify a day for weekly rollover from 0 to 6 (0 is Monday): %s'u'You must specify a day for weekly rollover from 0 to 6 (0 is Monday): %s'b'Invalid day specified for weekly rollover: %s'u'Invalid day specified for weekly rollover: %s'b'Invalid rollover interval specified: %s'u'Invalid rollover interval specified: %s'b' + Work out the rollover time based on the specified time. + 'u' + Work out the rollover time based on the specified time. + 'b' + Determine if rollover should occur. + + record is not used, as we are just comparing times, but it is needed so + the method signatures are the same + 'u' + Determine if rollover should occur. + + record is not used, as we are just comparing times, but it is needed so + the method signatures are the same + 'b' + Determine the files to delete when rolling over. + + More specific than the earlier method, which just used glob.glob(). + 'u' + Determine the files to delete when rolling over. + + More specific than the earlier method, which just used glob.glob(). + 'b' + do a rollover; in this case, a date/time stamp is appended to the filename + when the rollover happens. However, you want the file to be named for the + start of the interval, not the current time. If there is a backup count, + then we have to get a list of matching filenames, sort them and remove + the one with the oldest suffix. + 'u' + do a rollover; in this case, a date/time stamp is appended to the filename + when the rollover happens. However, you want the file to be named for the + start of the interval, not the current time. If there is a backup count, + then we have to get a list of matching filenames, sort them and remove + the one with the oldest suffix. + 'b' + A handler for logging to a file, which watches the file + to see if it has changed while in use. This can happen because of + usage of programs such as newsyslog and logrotate which perform + log file rotation. This handler, intended for use under Unix, + watches the file to see if it has changed since the last emit. + (A file has changed if its device or inode have changed.) + If it has changed, the old file stream is closed, and the file + opened to get a new stream. + + This handler is not appropriate for use under Windows, because + under Windows open files cannot be moved or renamed - logging + opens the files with exclusive locks - and so there is no need + for such a handler. Furthermore, ST_INO is not supported under + Windows; stat always returns zero for this value. + + This handler is based on a suggestion and patch by Chad J. + Schroeder. + 'u' + A handler for logging to a file, which watches the file + to see if it has changed while in use. This can happen because of + usage of programs such as newsyslog and logrotate which perform + log file rotation. This handler, intended for use under Unix, + watches the file to see if it has changed since the last emit. + (A file has changed if its device or inode have changed.) + If it has changed, the old file stream is closed, and the file + opened to get a new stream. 
+ + This handler is not appropriate for use under Windows, because + under Windows open files cannot be moved or renamed - logging + opens the files with exclusive locks - and so there is no need + for such a handler. Furthermore, ST_INO is not supported under + Windows; stat always returns zero for this value. + + This handler is based on a suggestion and patch by Chad J. + Schroeder. + 'b' + Reopen log file if needed. + + Checks if the underlying file has changed, and if it + has, close the old stream and reopen the file to get the + current stream. + 'u' + Reopen log file if needed. + + Checks if the underlying file has changed, and if it + has, close the old stream and reopen the file to get the + current stream. + 'b' + Emit a record. + + If underlying file has changed, reopen the file before emitting the + record to it. + 'u' + Emit a record. + + If underlying file has changed, reopen the file before emitting the + record to it. + 'b' + A handler class which writes logging records, in pickle format, to + a streaming socket. The socket is kept open across logging calls. + If the peer resets it, an attempt is made to reconnect on the next call. + The pickle which is sent is that of the LogRecord's attribute dictionary + (__dict__), so that the receiver does not need to have the logging module + installed in order to process the logging event. + + To unpickle the record at the receiving end into a LogRecord, use the + makeLogRecord function. + 'u' + A handler class which writes logging records, in pickle format, to + a streaming socket. The socket is kept open across logging calls. + If the peer resets it, an attempt is made to reconnect on the next call. + The pickle which is sent is that of the LogRecord's attribute dictionary + (__dict__), so that the receiver does not need to have the logging module + installed in order to process the logging event. + + To unpickle the record at the receiving end into a LogRecord, use the + makeLogRecord function. + 'b' + Initializes the handler with a specific host address and port. + + When the attribute *closeOnError* is set to True - if a socket error + occurs, the socket is silently closed and then reopened on the next + logging call. + 'u' + Initializes the handler with a specific host address and port. + + When the attribute *closeOnError* is set to True - if a socket error + occurs, the socket is silently closed and then reopened on the next + logging call. + 'b' + A factory method which allows subclasses to define the precise + type of socket they want. + 'u' + A factory method which allows subclasses to define the precise + type of socket they want. + 'b' + Try to create a socket, using an exponential backoff with + a max retry time. Thanks to Robert Olson for the original patch + (SF #815911) which has been slightly refactored. + 'u' + Try to create a socket, using an exponential backoff with + a max retry time. Thanks to Robert Olson for the original patch + (SF #815911) which has been slightly refactored. + 'b' + Send a pickled string to the socket. + + This function allows for partial sends which can happen when the + network is busy. + 'u' + Send a pickled string to the socket. + + This function allows for partial sends which can happen when the + network is busy. + 'b' + Pickles the record in binary format with a length prefix, and + returns it ready for transmission across the socket. + 'u' + Pickles the record in binary format with a length prefix, and + returns it ready for transmission across the socket. 
+ 'b'exc_info'u'exc_info'b'>L'u'>L'b' + Handle an error during logging. + + An error has occurred during logging. Most likely cause - + connection lost. Close the socket so that we can retry on the + next event. + 'u' + Handle an error during logging. + + An error has occurred during logging. Most likely cause - + connection lost. Close the socket so that we can retry on the + next event. + 'b' + Emit a record. + + Pickles the record and writes it to the socket in binary format. + If there is an error with the socket, silently drop the packet. + If there was a problem with the socket, re-establishes the + socket. + 'u' + Emit a record. + + Pickles the record and writes it to the socket in binary format. + If there is an error with the socket, silently drop the packet. + If there was a problem with the socket, re-establishes the + socket. + 'b' + Closes the socket. + 'u' + Closes the socket. + 'b' + A handler class which writes logging records, in pickle format, to + a datagram socket. The pickle which is sent is that of the LogRecord's + attribute dictionary (__dict__), so that the receiver does not need to + have the logging module installed in order to process the logging event. + + To unpickle the record at the receiving end into a LogRecord, use the + makeLogRecord function. + + 'u' + A handler class which writes logging records, in pickle format, to + a datagram socket. The pickle which is sent is that of the LogRecord's + attribute dictionary (__dict__), so that the receiver does not need to + have the logging module installed in order to process the logging event. + + To unpickle the record at the receiving end into a LogRecord, use the + makeLogRecord function. + + 'b' + Initializes the handler with a specific host address and port. + 'u' + Initializes the handler with a specific host address and port. + 'b' + The factory method of SocketHandler is here overridden to create + a UDP socket (SOCK_DGRAM). + 'u' + The factory method of SocketHandler is here overridden to create + a UDP socket (SOCK_DGRAM). + 'b' + Send a pickled string to a socket. + + This function no longer allows for partial sends which can happen + when the network is busy - UDP does not guarantee delivery and + can deliver packets out of sequence. + 'u' + Send a pickled string to a socket. + + This function no longer allows for partial sends which can happen + when the network is busy - UDP does not guarantee delivery and + can deliver packets out of sequence. + 'b' + A handler class which sends formatted logging records to a syslog + server. Based on Sam Rushing's syslog module: + http://www.nightmare.com/squirl/python-ext/misc/syslog.py + Contributed by Nicolas Untz (after which minor refactoring changes + have been made). + 'u' + A handler class which sends formatted logging records to a syslog + server. Based on Sam Rushing's syslog module: + http://www.nightmare.com/squirl/python-ext/misc/syslog.py + Contributed by Nicolas Untz (after which minor refactoring changes + have been made). + 'b'alert'u'alert'b'crit'u'crit'b'emerg'u'emerg'b'err'u'err'b'notice'u'notice'b'panic'u'panic'b'auth'u'auth'b'authpriv'u'authpriv'b'cron'u'cron'b'daemon'u'daemon'b'ftp'u'ftp'b'kern'u'kern'b'lpr'u'lpr'b'mail'u'mail'b'news'u'news'b'security'u'security'b'syslog'u'syslog'b'user'u'user'b'uucp'u'uucp'b'local0'u'local0'b'local1'u'local1'b'local2'u'local2'b'local3'u'local3'b'local4'u'local4'b'local5'u'local5'b'local6'u'local6'b'local7'u'local7'b' + Initialize a handler. + + If address is specified as a string, a UNIX socket is used. 
To log to a + local syslogd, "SysLogHandler(address="/dev/log")" can be used. + If facility is not specified, LOG_USER is used. If socktype is + specified as socket.SOCK_DGRAM or socket.SOCK_STREAM, that specific + socket type will be used. For Unix sockets, you can also specify a + socktype of None, in which case socket.SOCK_DGRAM will be used, falling + back to socket.SOCK_STREAM. + 'u' + Initialize a handler. + + If address is specified as a string, a UNIX socket is used. To log to a + local syslogd, "SysLogHandler(address="/dev/log")" can be used. + If facility is not specified, LOG_USER is used. If socktype is + specified as socket.SOCK_DGRAM or socket.SOCK_STREAM, that specific + socket type will be used. For Unix sockets, you can also specify a + socktype of None, in which case socket.SOCK_DGRAM will be used, falling + back to socket.SOCK_STREAM. + 'b'getaddrinfo returns an empty list'u'getaddrinfo returns an empty list'b' + Encode the facility and priority. You can pass in strings or + integers - if strings are passed, the facility_names and + priority_names mapping dictionaries are used to convert them to + integers. + 'u' + Encode the facility and priority. You can pass in strings or + integers - if strings are passed, the facility_names and + priority_names mapping dictionaries are used to convert them to + integers. + 'b' + Map a logging level name to a key in the priority_names map. + This is useful in two scenarios: when custom levels are being + used, and in the case where you can't do a straightforward + mapping by lowercasing the logging level name because of locale- + specific issues (see SF #1524081). + 'u' + Map a logging level name to a key in the priority_names map. + This is useful in two scenarios: when custom levels are being + used, and in the case where you can't do a straightforward + mapping by lowercasing the logging level name because of locale- + specific issues (see SF #1524081). + 'b' + Emit a record. + + The record is formatted, and then sent to the syslog server. If + exception information is present, it is NOT sent to the server. + 'u' + Emit a record. + + The record is formatted, and then sent to the syslog server. If + exception information is present, it is NOT sent to the server. + 'b'<%d>'u'<%d>'b' + A handler class which sends an SMTP email for each logging event. + 'u' + A handler class which sends an SMTP email for each logging event. + 'b' + Initialize the handler. + + Initialize the instance with the from and to addresses and subject + line of the email. To specify a non-standard SMTP port, use the + (host, port) tuple format for the mailhost argument. To specify + authentication credentials, supply a (username, password) tuple + for the credentials argument. To specify the use of a secure + protocol (TLS), pass in a tuple for the secure argument. This will + only be used when authentication credentials are supplied. The tuple + will be either an empty tuple, or a single-value tuple with the name + of a keyfile, or a 2-value tuple with the names of the keyfile and + certificate file. (This tuple is passed to the `starttls` method). + A timeout in seconds can be specified for the SMTP connection (the + default is one second). + 'u' + Initialize the handler. + + Initialize the instance with the from and to addresses and subject + line of the email. To specify a non-standard SMTP port, use the + (host, port) tuple format for the mailhost argument. 
To specify + authentication credentials, supply a (username, password) tuple + for the credentials argument. To specify the use of a secure + protocol (TLS), pass in a tuple for the secure argument. This will + only be used when authentication credentials are supplied. The tuple + will be either an empty tuple, or a single-value tuple with the name + of a keyfile, or a 2-value tuple with the names of the keyfile and + certificate file. (This tuple is passed to the `starttls` method). + A timeout in seconds can be specified for the SMTP connection (the + default is one second). + 'b' + Determine the subject for the email. + + If you want to specify a subject line which is record-dependent, + override this method. + 'u' + Determine the subject for the email. + + If you want to specify a subject line which is record-dependent, + override this method. + 'b' + Emit a record. + + Format the record and send it to the specified addressees. + 'u' + Emit a record. + + Format the record and send it to the specified addressees. + 'b'From'u'From'b'To'u'To'b'Subject'u'Subject'b'Date'u'Date'b' + A handler class which sends events to the NT Event Log. Adds a + registry entry for the specified application name. If no dllname is + provided, win32service.pyd (which contains some basic message + placeholders) is used. Note that use of these placeholders will make + your event logs big, as the entire message source is held in the log. + If you want slimmer logs, you have to pass in the name of your own DLL + which contains the message definitions you want to use in the event log. + 'u' + A handler class which sends events to the NT Event Log. Adds a + registry entry for the specified application name. If no dllname is + provided, win32service.pyd (which contains some basic message + placeholders) is used. Note that use of these placeholders will make + your event logs big, as the entire message source is held in the log. + If you want slimmer logs, you have to pass in the name of your own DLL + which contains the message definitions you want to use in the event log. + 'b'Application'u'Application'b'win32service.pyd'u'win32service.pyd'b'The Python Win32 extensions for NT (service, event logging) appear not to be available.'u'The Python Win32 extensions for NT (service, event logging) appear not to be available.'b' + Return the message ID for the event record. If you are using your + own messages, you could do this by having the msg passed to the + logger being an ID rather than a formatting string. Then, in here, + you could use a dictionary lookup to get the message ID. This + version returns 1, which is the base message ID in win32service.pyd. + 'u' + Return the message ID for the event record. If you are using your + own messages, you could do this by having the msg passed to the + logger being an ID rather than a formatting string. Then, in here, + you could use a dictionary lookup to get the message ID. This + version returns 1, which is the base message ID in win32service.pyd. + 'b' + Return the event category for the record. + + Override this if you want to specify your own categories. This version + returns 0. + 'u' + Return the event category for the record. + + Override this if you want to specify your own categories. This version + returns 0. + 'b' + Return the event type for the record. + + Override this if you want to specify your own types. 
This version does + a mapping using the handler's typemap attribute, which is set up in + __init__() to a dictionary which contains mappings for DEBUG, INFO, + WARNING, ERROR and CRITICAL. If you are using your own levels you will + either need to override this method or place a suitable dictionary in + the handler's typemap attribute. + 'u' + Return the event type for the record. + + Override this if you want to specify your own types. This version does + a mapping using the handler's typemap attribute, which is set up in + __init__() to a dictionary which contains mappings for DEBUG, INFO, + WARNING, ERROR and CRITICAL. If you are using your own levels you will + either need to override this method or place a suitable dictionary in + the handler's typemap attribute. + 'b' + Emit a record. + + Determine the message ID, event category and event type. Then + log the message in the NT event log. + 'u' + Emit a record. + + Determine the message ID, event category and event type. Then + log the message in the NT event log. + 'b' + Clean up this handler. + + You can remove the application name from the registry as a + source of event log entries. However, if you do this, you will + not be able to see the events as you intended in the Event Log + Viewer - it needs to be able to access the registry to get the + DLL name. + 'u' + Clean up this handler. + + You can remove the application name from the registry as a + source of event log entries. However, if you do this, you will + not be able to see the events as you intended in the Event Log + Viewer - it needs to be able to access the registry to get the + DLL name. + 'b' + A class which sends records to a Web server, using either GET or + POST semantics. + 'u' + A class which sends records to a Web server, using either GET or + POST semantics. + 'b'GET'u'GET'b' + Initialize the instance with the host, the request URL, and the method + ("GET" or "POST") + 'u' + Initialize the instance with the host, the request URL, and the method + ("GET" or "POST") + 'b'method must be GET or POST'u'method must be GET or POST'b'context parameter only makes sense with secure=True'u'context parameter only makes sense with secure=True'b' + Default implementation of mapping the log record into a dict + that is sent as the CGI data. Overwrite in your class. + Contributed by Franz Glasner. + 'u' + Default implementation of mapping the log record into a dict + that is sent as the CGI data. Overwrite in your class. + Contributed by Franz Glasner. + 'b' + Emit a record. + + Send the record to the Web server as a percent-encoded dictionary + 'u' + Emit a record. + + Send the record to the Web server as a percent-encoded dictionary + 'b'%c%s'u'%c%s'b'Content-type'u'Content-type'b'application/x-www-form-urlencoded'u'application/x-www-form-urlencoded'b'Content-length'u'Content-length'b' + A handler class which buffers logging records in memory. Whenever each + record is added to the buffer, a check is made to see if the buffer should + be flushed. If it should, then flush() is expected to do what's needed. + 'u' + A handler class which buffers logging records in memory. Whenever each + record is added to the buffer, a check is made to see if the buffer should + be flushed. If it should, then flush() is expected to do what's needed. + 'b' + Initialize the handler with the buffer size. + 'u' + Initialize the handler with the buffer size. + 'b' + Should the handler flush its buffer? + + Returns true if the buffer is up to capacity. 
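The HTTPHandler described above can be wired up as in the following sketch; the host, URL and logger name are placeholders.

    import logging
    from logging.handlers import HTTPHandler

    http_handler = HTTPHandler(
        host="logs.example.com:8080",   # host may include a port
        url="/log",
        method="POST",                  # must be "GET" or "POST"
    )

    logger = logging.getLogger("http-demo")
    logger.addHandler(http_handler)
    logger.warning("sent to the web server as a percent-encoded dictionary")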
This method can be + overridden to implement custom flushing strategies. + 'u' + Should the handler flush its buffer? + + Returns true if the buffer is up to capacity. This method can be + overridden to implement custom flushing strategies. + 'b' + Emit a record. + + Append the record. If shouldFlush() tells us to, call flush() to process + the buffer. + 'u' + Emit a record. + + Append the record. If shouldFlush() tells us to, call flush() to process + the buffer. + 'b' + Override to implement custom flushing behaviour. + + This version just zaps the buffer to empty. + 'u' + Override to implement custom flushing behaviour. + + This version just zaps the buffer to empty. + 'b' + Close the handler. + + This version just flushes and chains to the parent class' close(). + 'u' + Close the handler. + + This version just flushes and chains to the parent class' close(). + 'b' + A handler class which buffers logging records in memory, periodically + flushing them to a target handler. Flushing occurs whenever the buffer + is full, or when an event of a certain severity or greater is seen. + 'u' + A handler class which buffers logging records in memory, periodically + flushing them to a target handler. Flushing occurs whenever the buffer + is full, or when an event of a certain severity or greater is seen. + 'b' + Initialize the handler with the buffer size, the level at which + flushing should occur and an optional target. + + Note that without a target being set either here or via setTarget(), + a MemoryHandler is no use to anyone! + + The ``flushOnClose`` argument is ``True`` for backward compatibility + reasons - the old behaviour is that when the handler is closed, the + buffer is flushed, even if the flush level hasn't been exceeded nor the + capacity exceeded. To prevent this, set ``flushOnClose`` to ``False``. + 'u' + Initialize the handler with the buffer size, the level at which + flushing should occur and an optional target. + + Note that without a target being set either here or via setTarget(), + a MemoryHandler is no use to anyone! + + The ``flushOnClose`` argument is ``True`` for backward compatibility + reasons - the old behaviour is that when the handler is closed, the + buffer is flushed, even if the flush level hasn't been exceeded nor the + capacity exceeded. To prevent this, set ``flushOnClose`` to ``False``. + 'b' + Check for buffer full or a record at the flushLevel or higher. + 'u' + Check for buffer full or a record at the flushLevel or higher. + 'b' + Set the target handler for this handler. + 'u' + Set the target handler for this handler. + 'b' + For a MemoryHandler, flushing means just sending the buffered + records to the target, if there is one. Override if you want + different behaviour. + + The record buffer is also cleared by this operation. + 'u' + For a MemoryHandler, flushing means just sending the buffered + records to the target, if there is one. Override if you want + different behaviour. + + The record buffer is also cleared by this operation. + 'b' + Flush, if appropriately configured, set the target to None and lose the + buffer. + 'u' + Flush, if appropriately configured, set the target to None and lose the + buffer. + 'b' + This handler sends events to a queue. Typically, it would be used together + with a multiprocessing Queue to centralise logging to file in one process + (in a multi-process application), so as to avoid file write contention + between processes. 
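A small sketch of the MemoryHandler behaviour documented above: records buffer until capacity is reached or a record at flushLevel or higher arrives, then everything is forwarded to the target handler. The capacity and levels below are illustrative.

    import logging
    from logging.handlers import MemoryHandler

    target = logging.StreamHandler()
    memory_handler = MemoryHandler(
        capacity=100,                # flush after 100 buffered records...
        flushLevel=logging.ERROR,    # ...or immediately on ERROR/CRITICAL
        target=target,
        flushOnClose=True,           # keep the backward-compatible close() behaviour
    )

    logger = logging.getLogger("memory-demo")
    logger.addHandler(memory_handler)
    logger.info("buffered until the next flush trigger")
    logger.error("this record forces a flush to the target")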
+ + This code is new in Python 3.2, but this class can be copy pasted into + user code for use with earlier Python versions. + 'u' + This handler sends events to a queue. Typically, it would be used together + with a multiprocessing Queue to centralise logging to file in one process + (in a multi-process application), so as to avoid file write contention + between processes. + + This code is new in Python 3.2, but this class can be copy pasted into + user code for use with earlier Python versions. + 'b' + Initialise an instance, using the passed queue. + 'u' + Initialise an instance, using the passed queue. + 'b' + Enqueue a record. + + The base implementation uses put_nowait. You may want to override + this method if you want to use blocking, timeouts or custom queue + implementations. + 'u' + Enqueue a record. + + The base implementation uses put_nowait. You may want to override + this method if you want to use blocking, timeouts or custom queue + implementations. + 'b' + Prepares a record for queuing. The object returned by this method is + enqueued. + + The base implementation formats the record to merge the message + and arguments, and removes unpickleable items from the record + in-place. + + You might want to override this method if you want to convert + the record to a dict or JSON string, or send a modified copy + of the record while leaving the original intact. + 'u' + Prepares a record for queuing. The object returned by this method is + enqueued. + + The base implementation formats the record to merge the message + and arguments, and removes unpickleable items from the record + in-place. + + You might want to override this method if you want to convert + the record to a dict or JSON string, or send a modified copy + of the record while leaving the original intact. + 'b' + Emit a record. + + Writes the LogRecord to the queue, preparing it for pickling first. + 'u' + Emit a record. + + Writes the LogRecord to the queue, preparing it for pickling first. + 'b' + This class implements an internal threaded listener which watches for + LogRecords being added to a queue, removes them and passes them to a + list of handlers for processing. + 'u' + This class implements an internal threaded listener which watches for + LogRecords being added to a queue, removes them and passes them to a + list of handlers for processing. + 'b' + Initialise an instance with the specified queue and + handlers. + 'u' + Initialise an instance with the specified queue and + handlers. + 'b' + Dequeue a record and return it, optionally blocking. + + The base implementation uses get. You may want to override this method + if you want to use timeouts or work with custom queue implementations. + 'u' + Dequeue a record and return it, optionally blocking. + + The base implementation uses get. You may want to override this method + if you want to use timeouts or work with custom queue implementations. + 'b' + Start the listener. + + This starts up a background thread to monitor the queue for + LogRecords to process. + 'u' + Start the listener. + + This starts up a background thread to monitor the queue for + LogRecords to process. + 'b' + Prepare a record for handling. + + This method just returns the passed-in record. You may want to + override this method if you need to do any custom marshalling or + manipulation of the record before passing it to the handlers. + 'u' + Prepare a record for handling. + + This method just returns the passed-in record. 
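The QueueHandler pattern described above, sketched with a plain queue.Queue; a multiprocessing.Queue works the same way. The logger name is illustrative.

    import logging
    import queue
    from logging.handlers import QueueHandler

    log_queue = queue.Queue(-1)     # unbounded; multiprocessing.Queue also works

    worker_logger = logging.getLogger("worker")
    worker_logger.setLevel(logging.INFO)
    worker_logger.addHandler(QueueHandler(log_queue))
    worker_logger.info("this record is enqueued rather than written directly")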
You may want to + override this method if you need to do any custom marshalling or + manipulation of the record before passing it to the handlers. + 'b' + Handle a record. + + This just loops through the handlers offering them the record + to handle. + 'u' + Handle a record. + + This just loops through the handlers offering them the record + to handle. + 'b' + Monitor the queue for records, and ask the handler + to deal with them. + + This method runs on a separate, internal thread. + The thread will terminate if it sees a sentinel object in the queue. + 'u' + Monitor the queue for records, and ask the handler + to deal with them. + + This method runs on a separate, internal thread. + The thread will terminate if it sees a sentinel object in the queue. + 'b'task_done'u'task_done'b' + This is used to enqueue the sentinel record. + + The base implementation uses put_nowait. You may want to override this + method if you want to use timeouts or work with custom queue + implementations. + 'u' + This is used to enqueue the sentinel record. + + The base implementation uses put_nowait. You may want to override this + method if you want to use timeouts or work with custom queue + implementations. + 'b' + Stop the listener. + + This asks the thread to terminate, and then waits for it to do so. + Note that if you don't call this before your application exits, there + may be some records still left on the queue, which won't be processed. + 'u' + Stop the listener. + + This asks the thread to terminate, and then waits for it to do so. + Note that if you don't call this before your application exits, there + may be some records still left on the queue, which won't be processed. + 'u'logging.handlers'hashlib module - A common interface to many hash functions. + +new(name, data=b'', **kwargs) - returns a new hash object implementing the + given hash function; initializing the hash + using the given binary data. + +Named constructor functions are also available, these are faster +than using new(name): + +md5(), sha1(), sha224(), sha256(), sha384(), sha512(), blake2b(), blake2s(), +sha3_224, sha3_256, sha3_384, sha3_512, shake_128, and shake_256. + +More algorithms may be available on your platform but the above are guaranteed +to exist. See the algorithms_guaranteed and algorithms_available attributes +to find out what algorithm names can be passed to new(). + +NOTE: If you want the adler32 or crc32 hash functions they are available in +the zlib module. + +Choose your hash function wisely. Some have known collision weaknesses. +sha384 and sha512 will be slow on 32 bit platforms. + +Hash objects have these methods: + - update(data): Update the hash object with the bytes in data. Repeated calls + are equivalent to a single call with the concatenation of all + the arguments. + - digest(): Return the digest of the bytes passed to the update() method + so far as a bytes object. + - hexdigest(): Like digest() except the digest is returned as a string + of double length, containing only hexadecimal digits. + - copy(): Return a copy (clone) of the hash object. This can be used to + efficiently compute the digests of datas that share a common + initial substring. 
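A companion sketch for the QueueListener just described: a background thread drains the queue and dispatches each record to the real handlers. The file name is a placeholder.

    import logging
    import queue
    from logging.handlers import QueueHandler, QueueListener

    log_queue = queue.Queue(-1)
    file_handler = logging.FileHandler("app.log")     # placeholder destination
    listener = QueueListener(log_queue, file_handler)
    listener.start()                                  # starts the monitor thread

    logging.getLogger().addHandler(QueueHandler(log_queue))
    logging.getLogger().warning("routed through the listener")

    listener.stop()                                   # flushes and joins the thread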
+ +For example, to obtain the digest of the byte string 'Nobody inspects the +spammish repetition': + + >>> import hashlib + >>> m = hashlib.md5() + >>> m.update(b"Nobody inspects") + >>> m.update(b" the spammish repetition") + >>> m.digest() + b'\xbbd\x9c\x83\xdd\x1e\xa5\xc9\xd9\xde\xc9\xa1\x8d\xf0\xff\xe9' + +More condensed: + + >>> hashlib.sha224(b"Nobody inspects the spammish repetition").hexdigest() + 'a4337bc45a8fc544c03f52dc550cd6e1e87021bc896588bd79e901e2' + +__always_supportedalgorithms_guaranteedalgorithms_available__builtin_constructor_cache__block_openssl_constructor__get_builtin_constructorSHA1MD5SHA256SHA224SHA512SHA384unsupported hash type __get_openssl_constructoropenssl___py_newnew(name, data=b'', **kwargs) - Return a new hashing object using the + named algorithm; optionally initialized with data (which must be + a bytes-like object). + __hash_newnew(name, data=b'') - Return a new hashing object using the named algorithm; + optionally initialized with data (which must be a bytes-like object). + __get_hash0x5C_trans_5C0x36_trans_36hash_namesaltiterationsdklenPassword based key derivation function 2 (PKCS #5 v2.0) + + This Python implementations based on the hmac module about as fast + as OpenSSL's PKCS5_PBKDF2_HMAC for short passwords and much faster + for long passwords. + outerprficpyocpydkeyrkey__func_namecode for hash %s was not found.#. Copyright (C) 2005-2010 Gregory P. Smith (greg@krypto.org)# Licensed to PSF under a Contributor Agreement.# This tuple and __get_builtin_constructor() must be modified if a new# always available algorithm is added.# no extension module, this hash is unsupported.# Prefer our blake2 and sha3 implementation.# Allow the C module to raise ValueError. The function will be# defined but the hash not actually available thanks to OpenSSL.# Use the C function directly (very fast)# Prefer our blake2 and sha3 implementation# OpenSSL 1.1.0 comes with a limited implementation of blake2b/s.# It does neither support keyed blake2 nor advanced features like# salt, personal, tree hashing or SSE.# If the _hashlib module (OpenSSL) doesn't support the named# hash, try using our builtin implementations.# This allows for SHA224/256 and SHA384/512 support even though# the OpenSSL library prior to 0.9.8 doesn't provide them.# OpenSSL's PKCS5_PBKDF2_HMAC requires OpenSSL 1.0+ with HMAC and SHA# Fast inline HMAC implementation# PBKDF2_HMAC uses the password as key. We can re-use the same# digest objects and just update copies to skip initialization.# endianness doesn't matter here as long to / from use the same# rkey = rkey ^ prev# OpenSSL's scrypt requires OpenSSL 1.1+# try them all, some may not work due to the OpenSSL# version not supporting that algorithm.# Cleanup locals()b'hashlib module - A common interface to many hash functions. + +new(name, data=b'', **kwargs) - returns a new hash object implementing the + given hash function; initializing the hash + using the given binary data. + +Named constructor functions are also available, these are faster +than using new(name): + +md5(), sha1(), sha224(), sha256(), sha384(), sha512(), blake2b(), blake2s(), +sha3_224, sha3_256, sha3_384, sha3_512, shake_128, and shake_256. + +More algorithms may be available on your platform but the above are guaranteed +to exist. See the algorithms_guaranteed and algorithms_available attributes +to find out what algorithm names can be passed to new(). + +NOTE: If you want the adler32 or crc32 hash functions they are available in +the zlib module. 
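The pbkdf2_hmac() function documented above can be exercised as in this sketch; the password, salt source, iteration count and key length are illustrative choices, not recommendations from the module.

    import hashlib
    import os

    password = b"correct horse battery staple"   # illustrative secret
    salt = os.urandom(16)                         # store the salt with the derived key
    dk = hashlib.pbkdf2_hmac(
        "sha256",     # hash_name
        password,
        salt,
        100_000,      # iterations; tune for your hardware
        dklen=32,     # optional derived-key length in bytes
    )
    print(dk.hex())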
+ +Choose your hash function wisely. Some have known collision weaknesses. +sha384 and sha512 will be slow on 32 bit platforms. + +Hash objects have these methods: + - update(data): Update the hash object with the bytes in data. Repeated calls + are equivalent to a single call with the concatenation of all + the arguments. + - digest(): Return the digest of the bytes passed to the update() method + so far as a bytes object. + - hexdigest(): Like digest() except the digest is returned as a string + of double length, containing only hexadecimal digits. + - copy(): Return a copy (clone) of the hash object. This can be used to + efficiently compute the digests of datas that share a common + initial substring. + +For example, to obtain the digest of the byte string 'Nobody inspects the +spammish repetition': + + >>> import hashlib + >>> m = hashlib.md5() + >>> m.update(b"Nobody inspects") + >>> m.update(b" the spammish repetition") + >>> m.digest() + b'\xbbd\x9c\x83\xdd\x1e\xa5\xc9\xd9\xde\xc9\xa1\x8d\xf0\xff\xe9' + +More condensed: + + >>> hashlib.sha224(b"Nobody inspects the spammish repetition").hexdigest() + 'a4337bc45a8fc544c03f52dc550cd6e1e87021bc896588bd79e901e2' + +'u'hashlib module - A common interface to many hash functions. + +new(name, data=b'', **kwargs) - returns a new hash object implementing the + given hash function; initializing the hash + using the given binary data. + +Named constructor functions are also available, these are faster +than using new(name): + +md5(), sha1(), sha224(), sha256(), sha384(), sha512(), blake2b(), blake2s(), +sha3_224, sha3_256, sha3_384, sha3_512, shake_128, and shake_256. + +More algorithms may be available on your platform but the above are guaranteed +to exist. See the algorithms_guaranteed and algorithms_available attributes +to find out what algorithm names can be passed to new(). + +NOTE: If you want the adler32 or crc32 hash functions they are available in +the zlib module. + +Choose your hash function wisely. Some have known collision weaknesses. +sha384 and sha512 will be slow on 32 bit platforms. + +Hash objects have these methods: + - update(data): Update the hash object with the bytes in data. Repeated calls + are equivalent to a single call with the concatenation of all + the arguments. + - digest(): Return the digest of the bytes passed to the update() method + so far as a bytes object. + - hexdigest(): Like digest() except the digest is returned as a string + of double length, containing only hexadecimal digits. + - copy(): Return a copy (clone) of the hash object. This can be used to + efficiently compute the digests of datas that share a common + initial substring. 
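A quick sketch contrasting the two constructor styles the hashlib docs above mention: the generic new(name) factory and the faster named constructors.

    import hashlib

    data = b"Nobody inspects the spammish repetition"

    h1 = hashlib.new("sha256", data)   # generic factory, name chosen at runtime
    h2 = hashlib.sha256(data)          # named constructor, faster
    assert h1.hexdigest() == h2.hexdigest()

    print(sorted(hashlib.algorithms_guaranteed))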
+ +For example, to obtain the digest of the byte string 'Nobody inspects the +spammish repetition': + + >>> import hashlib + >>> m = hashlib.md5() + >>> m.update(b"Nobody inspects") + >>> m.update(b" the spammish repetition") + >>> m.digest() + b'\xbbd\x9c\x83\xdd\x1e\xa5\xc9\xd9\xde\xc9\xa1\x8d\xf0\xff\xe9' + +More condensed: + + >>> hashlib.sha224(b"Nobody inspects the spammish repetition").hexdigest() + 'a4337bc45a8fc544c03f52dc550cd6e1e87021bc896588bd79e901e2' + +'b'sha1'u'sha1'b'sha224'u'sha224'b'sha256'u'sha256'b'sha384'u'sha384'b'sha512'u'sha512'b'blake2b'u'blake2b'b'blake2s'u'blake2s'b'sha3_224'u'sha3_224'b'sha3_256'u'sha3_256'b'sha3_384'u'sha3_384'b'sha3_512'u'sha3_512'b'shake_128'u'shake_128'b'shake_256'u'shake_256'b'new'u'new'b'algorithms_guaranteed'u'algorithms_guaranteed'b'algorithms_available'u'algorithms_available'b'pbkdf2_hmac'u'pbkdf2_hmac'b'SHA1'u'SHA1'b'MD5'u'MD5'b'SHA256'u'SHA256'b'SHA224'u'SHA224'b'SHA512'u'SHA512'b'SHA384'u'SHA384'b'unsupported hash type 'u'unsupported hash type 'b'openssl_'u'openssl_'b'new(name, data=b'', **kwargs) - Return a new hashing object using the + named algorithm; optionally initialized with data (which must be + a bytes-like object). + 'u'new(name, data=b'', **kwargs) - Return a new hashing object using the + named algorithm; optionally initialized with data (which must be + a bytes-like object). + 'b'new(name, data=b'') - Return a new hashing object using the named algorithm; + optionally initialized with data (which must be a bytes-like object). + 'u'new(name, data=b'') - Return a new hashing object using the named algorithm; + optionally initialized with data (which must be a bytes-like object). + 'b'Password based key derivation function 2 (PKCS #5 v2.0) + + This Python implementations based on the hmac module about as fast + as OpenSSL's PKCS5_PBKDF2_HMAC for short passwords and much faster + for long passwords. + 'u'Password based key derivation function 2 (PKCS #5 v2.0) + + This Python implementations based on the hmac module about as fast + as OpenSSL's PKCS5_PBKDF2_HMAC for short passwords and much faster + for long passwords. + 'b'block_size'u'block_size'b'code for hash %s was not found.'u'code for hash %s was not found.'u'hashlib'Header encoding and decoding functionality.decode_headermake_headeremail.errorsBSPACESPACE8MAXLINELENUSASCIIUTF8 + =\? # literal =? + (?P[^?]*?) # non-greedy up to the next ? is the charset + \? # literal ? + (?P[qQbB]) # either a "q" or a "b", case insensitive + \? # literal ? + (?P.*?) # non-greedy up to the next ?= is the encoded string + \?= # literal ?= + ecre[\041-\176]+:$fcre\n[^ \t]+:_embedded_header_max_appendDecode a message header value without converting charset. + + Returns a list of (string, charset) pairs containing each of the decoded + parts of the header. Charset is None for non-encoded parts of the header, + otherwise a lower-case string containing the name of the character set + specified in the encoded string. + + header may be a string that may or may not contain RFC2047 encoded words, + or it may be a Header object. + + An email.errors.HeaderParseError may be raised when certain decoding error + occurs (e.g. a base64 decoding exception). 
+ _chunksunencodeddroplistdecoded_wordsencoded_stringheader_decodepaderrBase64 decoding errorUnexpected encoding: collapsedlast_wordlast_charsetdecoded_seqcontinuation_wsCreate a Header from a sequence of pairs as returned by decode_header() + + decode_header() takes a header value string and returns a sequence of + pairs of the format (decoded_string, charset) where charset is the string + name of the character set. + + This function takes one of those sequence of pairs and returns a Header + instance. Optional maxlinelen, header_name, and continuation_ws are as in + the Header constructor. + Create a MIME-compliant header that can contain many character sets. + + Optional s is the initial header value. If None, the initial header + value is not set. You can later append to the header with .append() + method calls. s may be a byte string or a Unicode string, but see the + .append() documentation for semantics. + + Optional charset serves two purposes: it has the same meaning as the + charset argument to the .append() method. It also sets the default + character set for all subsequent .append() calls that omit the charset + argument. If charset is not provided in the constructor, the us-ascii + charset is used both as s's initial charset and as the default for + subsequent .append() calls. + + The maximum line length can be specified explicitly via maxlinelen. For + splitting the first line to a shorter value (to account for the field + header which isn't included in s, e.g. `Subject') pass in the name of + the field in header_name. The default maxlinelen is 78 as recommended + by RFC 2822. + + continuation_ws must be RFC 2822 compliant folding whitespace (usually + either a space or a hard tab) which will be prepended to continuation + lines. + + errors is passed through to the .append() call. + _continuation_ws_maxlinelen_headerlenReturn the string value of the header.uchunkslastcslastspacenextcsoriginal_bytes_nonctexthasspaceAppend a string to the MIME header. + + Optional charset, if given, should be a Charset instance or the name + of a character set (which will be converted to a Charset instance). A + value of None (the default) means that the charset given in the + constructor is used. + + s may be a byte string or a Unicode string. If it is a byte string + (i.e. isinstance(s, str) is false), then charset is the encoding of + that byte string, and a UnicodeError will be raised if the string + cannot be decoded with that charset. If s is a Unicode string, then + charset is a hint specifying the character set of the characters in + the string. In either case, when producing an RFC 2822 compliant + header using RFC 2047 rules, the string will be encoded using the + output codec of the charset. If the string cannot be encoded to the + output codec, a UnicodeError will be raised. + + Optional `errors' is passed as the errors argument to the decode + call if s is a byte string. + True if string s is not a ctext character of RFC822. + ;, splitcharsEncode a message header into an RFC-compliant format. + + There are many issues involved in converting a given string for use in + an email header. Only certain character sets are readable in most + email clients, and as header strings can only contain a subset of + 7-bit ASCII, care must be taken to properly convert and encode (with + Base64 or quoted-printable) header strings. In addition, there is a + 75-character length limit on any given encoded header field, so + line-wrapping must be performed, even with double-byte character sets. 
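A hedged sketch of decode_header() and make_header() as documented above, using a made-up RFC 2047 encoded header value.

    from email.header import decode_header, make_header

    raw = "=?utf-8?q?Caf=C3=A9?= menu"   # illustrative header value
    parts = decode_header(raw)           # list of (decoded, charset) pairs
    print(parts)        # e.g. [(b'Caf\xc3\xa9', 'utf-8'), (b' menu', None)]

    # Rebuild a Header from the decoded pairs and render it back to text.
    print(str(make_header(parts)))       # e.g. "Café menu"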
+ + Optional maxlinelen specifies the maximum length of each generated + line, exclusive of the linesep string. Individual lines may be longer + than maxlinelen if a folding point cannot be found. The first line + will be shorter by the length of the header name plus ": " if a header + name was specified at Header construction time. The default value for + maxlinelen is determined at header construction time. + + Optional splitchars is a string containing characters which should be + given extra weight by the splitting algorithm during normal header + wrapping. This is in very rough support of RFC 2822's `higher level + syntactic breaks': split points preceded by a splitchar are preferred + during line splitting, with the characters preferred in the order in + which they appear in the string. Space and tab may be included in the + string to indicate whether preference should be given to one over the + other as a split point when other split chars do not appear in the line + being split. Splitchars does not affect RFC 2047 encoded lines. + + Optional linesep is a string to be used to separate the lines of + the value. The default value is the most useful for typical + Python applications, but it can be set to \r\n to produce RFC-compliant + line separators when needed. + _ValueFormatteradd_transitionslinefwsheader value appears to contain an embedded header: {!r}"header value appears to contain ""an embedded header: {!r}"last_chunkheaderlen_maxlen_continuation_ws_len_splitchars_Accumulator_current_lineend_of_lineis_onlyws_ascii_split_maxlengthsencoded_lines_append_chunklast_line([]+)part_countprevpart_initial_sizepop_frominitial_sizepoppedstartval# Match encoded-word strings in the form =?charset?q?Hello_World?=# Field name regexp, including trailing colon, but not separating whitespace,# according to RFC 2822. Character range is from tilde to exclamation mark.# For use with .match()# Find a header embedded in a putative header value. Used to check for# header injection attack.# If it is a Header object, we can just return the encoded chunks.# If no encoding, just return the header with no charset.# First step is to parse all the encoded parts into triplets of the form# (encoded_string, encoding, charset). For unencoded strings, the last# two parts will be None.# Now loop over words and remove words that consist of whitespace# between two encoded strings.# The next step is to decode each encoded word by applying the reverse# base64 or quopri transformation. decoded_words is now a list of the# form (decoded_word, charset).# This is an unencoded word.# Postel's law: add missing padding# Now convert all words to bytes and collapse consecutive runs of# similarly encoded words.# None means us-ascii but we can simply pass it on to h.append()# Take the separating colon and space into account.# We must preserve spaces between encoded and non-encoded word# boundaries, which means for us we need to add a space when we go# from a charset to None/us-ascii, or from None/us-ascii to a# charset. Only do this for the second and subsequent chunks.# Don't add a space if the None/us-ascii string already has# a space (trailing or leading depending on transition)# Rich comparison operators for equality only. BAW: does it make sense to# have or explicitly disable <, <=, >, >= operators?# other may be a Header or a string. 
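The folding options described in the encode() docstring above can be tried with a sketch like this; the subject text is invented.

    from email.header import Header

    subject = ("Quarterly report; includes revenue, churn, and forecast numbers "
               "for every region we currently operate in")
    h = Header(subject, header_name="Subject", maxlinelen=78)

    # Prefer breaking at ';' then ',' then whitespace, with RFC-compliant CRLF.
    print(h.encode(splitchars=';, ', linesep='\r\n'))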
Both are fine so coerce# ourselves to a unicode (of the unencoded header value), swap the# args and do another comparison.# Ensure that the bytes we're storing can be decoded to the output# character set, otherwise an early error is raised.# A maxlinelen of 0 means don't wrap. For all practical purposes,# choosing a huge number here accomplishes that and makes the# _ValueFormatter algorithm much simpler.# Step 1: Normalize the chunks so that all runs of identical charsets# get collapsed into a single unicode string.# If the charset has no header encoding (i.e. it is an ASCII encoding)# then we must split the header at the "highest level syntactic break"# possible. Note that we don't have a lot of smarts about field# syntax; we just try to break on semi-colons, then commas, then# whitespace. Eventually, this should be pluggable.# Otherwise, we're doing either a Base64 or a quoted-printable# encoding which means we don't need to split the line on syntactic# breaks. We can basically just find enough characters to fit on the# current line, minus the RFC 2047 chrome. What makes this trickier# though is that we have to split at octet boundaries, not character# boundaries but it's only safe to split at character boundaries so at# best we can only get close.# The first element extends the current line, but if it's None then# nothing more fit on the current line so start a new line.# There are no encoded lines, so we're done.# There was only one line.# Everything else are full lines in themselves.# The first line's length.# The RFC 2822 header folding algorithm is simple in principle but# complex in practice. Lines may be folded any place where "folding# white space" appears by inserting a linesep character in front of the# FWS. The complication is that not all spaces or tabs qualify as FWS,# and we are also supposed to prefer to break at "higher level# syntactic breaks". We can't do either of these without intimate# knowledge of the structure of structured headers, which we don't have# here. So the best we can do here is prefer to break at the specified# splitchars, and hope that we don't choose any spaces or tabs that# aren't legal FWS. (This is at least better than the old algorithm,# where we would sometimes *introduce* FWS after a splitchar, or the# algorithm before that, where we would turn all white space runs into# single spaces or tabs.)# Find the best split point, working backward from the end.# There might be none, on a long first line.# There will be a header, so leave it on a line by itself.# We don't use continuation_ws here because the whitespace# after a header should always be a space.b'Header encoding and decoding functionality.'u'Header encoding and decoding functionality.'b'Header'u'Header'b'decode_header'u'decode_header'b'make_header'u'make_header'b' + =\? # literal =? + (?P[^?]*?) # non-greedy up to the next ? is the charset + \? # literal ? + (?P[qQbB]) # either a "q" or a "b", case insensitive + \? # literal ? + (?P.*?) # non-greedy up to the next ?= is the encoded string + \?= # literal ?= + 'u' + =\? # literal =? + (?P[^?]*?) # non-greedy up to the next ? is the charset + \? # literal ? + (?P[qQbB]) # either a "q" or a "b", case insensitive + \? # literal ? + (?P.*?) # non-greedy up to the next ?= is the encoded string + \?= # literal ?= + 'b'[\041-\176]+:$'u'[\041-\176]+:$'b'\n[^ \t]+:'u'\n[^ \t]+:'b'Decode a message header value without converting charset. + + Returns a list of (string, charset) pairs containing each of the decoded + parts of the header. 
Charset is None for non-encoded parts of the header, + otherwise a lower-case string containing the name of the character set + specified in the encoded string. + + header may be a string that may or may not contain RFC2047 encoded words, + or it may be a Header object. + + An email.errors.HeaderParseError may be raised when certain decoding error + occurs (e.g. a base64 decoding exception). + 'u'Decode a message header value without converting charset. + + Returns a list of (string, charset) pairs containing each of the decoded + parts of the header. Charset is None for non-encoded parts of the header, + otherwise a lower-case string containing the name of the character set + specified in the encoded string. + + header may be a string that may or may not contain RFC2047 encoded words, + or it may be a Header object. + + An email.errors.HeaderParseError may be raised when certain decoding error + occurs (e.g. a base64 decoding exception). + 'b'_chunks'u'_chunks'u'==='b'Base64 decoding error'u'Base64 decoding error'b'Unexpected encoding: 'u'Unexpected encoding: 'b'Create a Header from a sequence of pairs as returned by decode_header() + + decode_header() takes a header value string and returns a sequence of + pairs of the format (decoded_string, charset) where charset is the string + name of the character set. + + This function takes one of those sequence of pairs and returns a Header + instance. Optional maxlinelen, header_name, and continuation_ws are as in + the Header constructor. + 'u'Create a Header from a sequence of pairs as returned by decode_header() + + decode_header() takes a header value string and returns a sequence of + pairs of the format (decoded_string, charset) where charset is the string + name of the character set. + + This function takes one of those sequence of pairs and returns a Header + instance. Optional maxlinelen, header_name, and continuation_ws are as in + the Header constructor. + 'b'Create a MIME-compliant header that can contain many character sets. + + Optional s is the initial header value. If None, the initial header + value is not set. You can later append to the header with .append() + method calls. s may be a byte string or a Unicode string, but see the + .append() documentation for semantics. + + Optional charset serves two purposes: it has the same meaning as the + charset argument to the .append() method. It also sets the default + character set for all subsequent .append() calls that omit the charset + argument. If charset is not provided in the constructor, the us-ascii + charset is used both as s's initial charset and as the default for + subsequent .append() calls. + + The maximum line length can be specified explicitly via maxlinelen. For + splitting the first line to a shorter value (to account for the field + header which isn't included in s, e.g. `Subject') pass in the name of + the field in header_name. The default maxlinelen is 78 as recommended + by RFC 2822. + + continuation_ws must be RFC 2822 compliant folding whitespace (usually + either a space or a hard tab) which will be prepended to continuation + lines. + + errors is passed through to the .append() call. + 'u'Create a MIME-compliant header that can contain many character sets. + + Optional s is the initial header value. If None, the initial header + value is not set. You can later append to the header with .append() + method calls. s may be a byte string or a Unicode string, but see the + .append() documentation for semantics. 
+ + Optional charset serves two purposes: it has the same meaning as the + charset argument to the .append() method. It also sets the default + character set for all subsequent .append() calls that omit the charset + argument. If charset is not provided in the constructor, the us-ascii + charset is used both as s's initial charset and as the default for + subsequent .append() calls. + + The maximum line length can be specified explicitly via maxlinelen. For + splitting the first line to a shorter value (to account for the field + header which isn't included in s, e.g. `Subject') pass in the name of + the field in header_name. The default maxlinelen is 78 as recommended + by RFC 2822. + + continuation_ws must be RFC 2822 compliant folding whitespace (usually + either a space or a hard tab) which will be prepended to continuation + lines. + + errors is passed through to the .append() call. + 'b'Return the string value of the header.'u'Return the string value of the header.'b'Append a string to the MIME header. + + Optional charset, if given, should be a Charset instance or the name + of a character set (which will be converted to a Charset instance). A + value of None (the default) means that the charset given in the + constructor is used. + + s may be a byte string or a Unicode string. If it is a byte string + (i.e. isinstance(s, str) is false), then charset is the encoding of + that byte string, and a UnicodeError will be raised if the string + cannot be decoded with that charset. If s is a Unicode string, then + charset is a hint specifying the character set of the characters in + the string. In either case, when producing an RFC 2822 compliant + header using RFC 2047 rules, the string will be encoded using the + output codec of the charset. If the string cannot be encoded to the + output codec, a UnicodeError will be raised. + + Optional `errors' is passed as the errors argument to the decode + call if s is a byte string. + 'u'Append a string to the MIME header. + + Optional charset, if given, should be a Charset instance or the name + of a character set (which will be converted to a Charset instance). A + value of None (the default) means that the charset given in the + constructor is used. + + s may be a byte string or a Unicode string. If it is a byte string + (i.e. isinstance(s, str) is false), then charset is the encoding of + that byte string, and a UnicodeError will be raised if the string + cannot be decoded with that charset. If s is a Unicode string, then + charset is a hint specifying the character set of the characters in + the string. In either case, when producing an RFC 2822 compliant + header using RFC 2047 rules, the string will be encoded using the + output codec of the charset. If the string cannot be encoded to the + output codec, a UnicodeError will be raised. + + Optional `errors' is passed as the errors argument to the decode + call if s is a byte string. + 'b'True if string s is not a ctext character of RFC822. + 'u'True if string s is not a ctext character of RFC822. + 'b';, 'u';, 'b'Encode a message header into an RFC-compliant format. + + There are many issues involved in converting a given string for use in + an email header. Only certain character sets are readable in most + email clients, and as header strings can only contain a subset of + 7-bit ASCII, care must be taken to properly convert and encode (with + Base64 or quoted-printable) header strings. 
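A sketch of building a Header incrementally with append(), per the constructor and append() docstrings above; the pieces and charsets are illustrative.

    from email.header import Header

    h = Header(header_name="Subject")          # default charset is us-ascii
    h.append("Stats for ")                     # ASCII text, no encoding needed
    h.append("café", charset="utf-8")          # non-ASCII text, RFC 2047 encoded
    h.append(b"\xe9t\xe9", charset="latin-1")  # bytes decoded with the given charset
    print(h.encode())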
In addition, there is a + 75-character length limit on any given encoded header field, so + line-wrapping must be performed, even with double-byte character sets. + + Optional maxlinelen specifies the maximum length of each generated + line, exclusive of the linesep string. Individual lines may be longer + than maxlinelen if a folding point cannot be found. The first line + will be shorter by the length of the header name plus ": " if a header + name was specified at Header construction time. The default value for + maxlinelen is determined at header construction time. + + Optional splitchars is a string containing characters which should be + given extra weight by the splitting algorithm during normal header + wrapping. This is in very rough support of RFC 2822's `higher level + syntactic breaks': split points preceded by a splitchar are preferred + during line splitting, with the characters preferred in the order in + which they appear in the string. Space and tab may be included in the + string to indicate whether preference should be given to one over the + other as a split point when other split chars do not appear in the line + being split. Splitchars does not affect RFC 2047 encoded lines. + + Optional linesep is a string to be used to separate the lines of + the value. The default value is the most useful for typical + Python applications, but it can be set to \r\n to produce RFC-compliant + line separators when needed. + 'u'Encode a message header into an RFC-compliant format. + + There are many issues involved in converting a given string for use in + an email header. Only certain character sets are readable in most + email clients, and as header strings can only contain a subset of + 7-bit ASCII, care must be taken to properly convert and encode (with + Base64 or quoted-printable) header strings. In addition, there is a + 75-character length limit on any given encoded header field, so + line-wrapping must be performed, even with double-byte character sets. + + Optional maxlinelen specifies the maximum length of each generated + line, exclusive of the linesep string. Individual lines may be longer + than maxlinelen if a folding point cannot be found. The first line + will be shorter by the length of the header name plus ": " if a header + name was specified at Header construction time. The default value for + maxlinelen is determined at header construction time. + + Optional splitchars is a string containing characters which should be + given extra weight by the splitting algorithm during normal header + wrapping. This is in very rough support of RFC 2822's `higher level + syntactic breaks': split points preceded by a splitchar are preferred + during line splitting, with the characters preferred in the order in + which they appear in the string. Space and tab may be included in the + string to indicate whether preference should be given to one over the + other as a split point when other split chars do not appear in the line + being split. Splitchars does not affect RFC 2047 encoded lines. + + Optional linesep is a string to be used to separate the lines of + the value. The default value is the most useful for typical + Python applications, but it can be set to \r\n to produce RFC-compliant + line separators when needed. + 'b'header value appears to contain an embedded header: {!r}'u'header value appears to contain an embedded header: {!r}'b'(['u'(['b']+)'u']+)'u'email.header'Heap queue algorithm (a.k.a. priority queue). 
+ +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +Usage: + +heap = [] # creates an empty heap +heappush(heap, item) # pushes a new item on the heap +item = heappop(heap) # pops the smallest item from the heap +item = heap[0] # smallest item on the heap without popping it +heapify(x) # transforms list into a heap, in-place, in linear time +item = heapreplace(heap, item) # pops and returns smallest item, and adds + # new item; the heap size is unchanged + +Our API differs from textbook heap algorithms as follows: + +- We use 0-based indexing. This makes the relationship between the + index for a node and the indexes for its children slightly less + obvious, but is more suitable since Python uses 0-based indexing. + +- Our heappop() method returns the smallest item, not the largest. + +These two make it possible to view the heap as a regular Python list +without surprises: heap[0] is the smallest item, and heap.sort() +maintains the heap invariant! +Heap queues + +[explanation by François Pinard] + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +The strange invariant above is meant to be an efficient memory +representation for a tournament. The numbers below are `k', not a[k]: + + 0 + + 1 2 + + 3 4 5 6 + + 7 8 9 10 11 12 13 14 + + 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 + + +In the tree above, each cell `k' is topping `2*k+1' and `2*k+2'. In +a usual binary tournament we see in sports, each cell is the winner +over the two cells it tops, and we can trace the winner down the tree +to see all opponents s/he had. However, in many computer applications +of such tournaments, we do not need to trace the history of a winner. +To be more memory efficient, when a winner is promoted, we try to +replace it by something else at a lower level, and the rule becomes +that a cell and the two cells it tops contain three different items, +but the top cell "wins" over the two topped cells. + +If this heap invariant is protected at all time, index 0 is clearly +the overall winner. The simplest algorithmic way to remove it and +find the "next" winner is to move some loser (let's say cell 30 in the +diagram above) into the 0 position, and then percolate this new 0 down +the tree, exchanging values, until the invariant is re-established. +This is clearly logarithmic on the total number of items in the tree. +By iterating over all items, you get an O(n ln n) sort. + +A nice feature of this sort is that you can efficiently insert new +items while the sort is going on, provided that the inserted items are +not "better" than the last 0'th element you extracted. This is +especially useful in simulation contexts, where the tree holds all +incoming events, and the "win" condition means the smallest scheduled +time. When an event schedule other events for execution, they are +scheduled into the future, so they can easily go into the heap. So, a +heap is a good structure for implementing schedulers (this is what I +used for my MIDI sequencer :-). 
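A runnable version of the Usage section quoted above.

    import heapq

    heap = []                      # creates an empty heap
    for value in [5, 1, 3, 7, 2]:
        heapq.heappush(heap, value)

    print(heap[0])                 # smallest item without popping -> 1
    print(heapq.heappop(heap))     # pops the smallest item        -> 1

    data = [9, 4, 6, 8]
    heapq.heapify(data)            # transforms the list in-place, linear time
    print(data[0])                 # -> 4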
+ +Various structures for implementing schedulers have been extensively +studied, and heaps are good for this, as they are reasonably speedy, +the speed is almost constant, and the worst case is not much different +than the average case. However, there are other representations which +are more efficient overall, yet the worst cases might be terrible. + +Heaps are also very useful in big disk sorts. You most probably all +know that a big sort implies producing "runs" (which are pre-sorted +sequences, which size is usually related to the amount of CPU memory), +followed by a merging passes for these runs, which merging is often +very cleverly organised[1]. It is very important that the initial +sort produces the longest runs possible. Tournaments are a good way +to that. If, using all the memory available to hold a tournament, you +replace and percolate items that happen to fit the current run, you'll +produce runs which are twice the size of the memory for random input, +and much better for input fuzzily ordered. + +Moreover, if you output the 0'th item on disk and get an input which +may not fit in the current tournament (because the value "wins" over +the last output value), it cannot fit in the heap, so the size of the +heap decreases. The freed memory could be cleverly reused immediately +for progressively building a second heap, which grows at exactly the +same rate the first heap is melting. When the first heap completely +vanishes, you switch heaps and start a new run. Clever and quite +effective! + +In a word, heaps are useful memory structures to know. I use them in +a few applications, and I think it is good to keep a `heap' module +around. :-) + +-------------------- +[1] The disk balancing algorithms which are current, nowadays, are +more annoying than clever, and this is a consequence of the seeking +capabilities of the disks. On devices which cannot seek, like big +tape drives, the story was quite different, and one had to be very +clever to ensure (far in advance) that each tape movement will be the +most effective possible (that is, will best participate at +"progressing" the merge). Some tapes were even able to read +backwards, and this was also used to avoid the rewinding time. +Believe me, real good tape sorts were quite spectacular to watch! +From all times, sorting has always been a Great Art! :-) +nsmallestheapPush item onto heap, maintaining the heap invariant._siftdownPop the smallest item off the heap, maintaining the heap invariant.lasteltreturnitem_siftupPop and return the current smallest value, and add the new item. + + This is more efficient than heappop() followed by heappush(), and can be + more appropriate when using a fixed-size heap. Note that the value + returned may be larger than item! That constrains reasonable uses of + this routine unless written as part of a conditional replacement: + + if item > heap[0]: + item = heapreplace(heap, item) + Fast version of a heappush followed by a heappop.Transform list into a heap, in-place, in O(len(x)) time.Maxheap version of a heappop._siftup_maxMaxheap version of a heappop followed by a heappush.Transform list into a maxheap, in-place, in O(len(x)) time.newitemparentposchildposrightpos_siftdown_maxMaxheap variant of _siftdownMaxheap variant of _siftupMerge multiple sorted inputs into a single sorted output. 
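The conditional-replacement idiom recommended in the heapreplace() docstring above, sketched as a fixed-size heap that keeps the k largest values seen so far; the sample data is arbitrary.

    import heapq

    def k_largest(iterable, k):
        heap = []
        for item in iterable:
            if len(heap) < k:
                heapq.heappush(heap, item)
            elif item > heap[0]:
                # conditional replacement, as the docstring recommends
                heapq.heapreplace(heap, item)
        return sorted(heap, reverse=True)

    print(k_largest([4, 1, 7, 3, 8, 5], 3))   # -> [8, 7, 5]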
+ + Similar to sorted(itertools.chain(*iterables)) but returns a generator, + does not pull the data into memory all at once, and assumes that each of + the input streams is already sorted (smallest to largest). + + >>> list(merge([1,3,5,7], [0,2,4,8], [5,10,15,20], [], [25])) + [0, 1, 2, 3, 4, 5, 5, 7, 8, 10, 15, 20, 25] + + If *key* is not None, applies a key function to each element to determine + its sort order. + + >>> list(merge(['dog', 'horse'], ['cat', 'fish', 'kangaroo'], key=len)) + ['dog', 'cat', 'fish', 'horse', 'kangaroo'] + + h_append_heapify_heappop_heapreplaceorderkey_valueFind the n smallest elements in a dataset. + + Equivalent to: sorted(iterable, key=key)[:n] + _orderFind the n largest elements in a dataset. + + Equivalent to: sorted(iterable, key=key, reverse=True)[:n] + # Original code by Kevin O'Connor, augmented by Tim Peters and Raymond Hettinger# raises appropriate IndexError if heap is empty# Transform bottom-up. The largest index there's any point to looking at# is the largest with a child index in-range, so must have 2*i + 1 < n,# or i < (n-1)/2. If n is even = 2*j, this is (2*j-1)/2 = j-1/2 so# j-1 is the largest, which is n//2 - 1. If n is odd = 2*j+1, this is# (2*j+1-1)/2 = j so j-1 is the largest, and that's again n//2-1.# 'heap' is a heap at all indices >= startpos, except possibly for pos. pos# is the index of a leaf with a possibly out-of-order value. Restore the# heap invariant.# Follow the path to the root, moving parents down until finding a place# newitem fits.# The child indices of heap index pos are already heaps, and we want to make# a heap at index pos too. We do this by bubbling the smaller child of# pos up (and so on with that child's children, etc) until hitting a leaf,# then using _siftdown to move the oddball originally at index pos into place.# We *could* break out of the loop as soon as we find a pos where newitem <=# both its children, but turns out that's not a good idea, and despite that# many books write the algorithm that way. During a heap pop, the last array# element is sifted in, and that tends to be large, so that comparing it# against values starting from the root usually doesn't pay (= usually doesn't# get us out of the loop early). See Knuth, Volume 3, where this is# explained and quantified in an exercise.# Cutting the # of comparisons is important, since these routines have no# way to extract "the priority" from an array element, so that intelligence# is likely to be hiding in custom comparison methods, or in array elements# storing (priority, record) tuples. Comparisons are thus potentially# expensive.# On random arrays of length 1000, making this change cut the number of# comparisons made by heapify() a little, and those made by exhaustive# heappop() a lot, in accord with theory. Here are typical results from 3# runs (3 just to demonstrate how small the variance is):# Compares needed by heapify Compares needed by 1000 heappops# -------------------------- --------------------------------# 1837 cut to 1663 14996 cut to 8680# 1855 cut to 1659 14966 cut to 8678# 1847 cut to 1660 15024 cut to 8703# Building the heap by using heappush() 1000 times instead required# 2198, 2148, and 2219 compares: heapify() is more efficient, when# you can use it.# The total compares needed by list.sort() on the same lists were 8627,# 8627, and 8632 (this should be compared to the sum of heapify() and# heappop() compares): list.sort() is (unsurprisingly!) 
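Runnable versions of the merge() examples above, plus nlargest()/nsmallest() with a key function; the portfolio data is invented.

    import heapq

    print(list(heapq.merge([1, 3, 5, 7], [0, 2, 4, 8], [5, 10, 15, 20], [], [25])))
    # -> [0, 1, 2, 3, 4, 5, 5, 7, 8, 10, 15, 20, 25]

    print(list(heapq.merge(['dog', 'horse'], ['cat', 'fish', 'kangaroo'], key=len)))
    # -> ['dog', 'cat', 'fish', 'horse', 'kangaroo']

    portfolio = [('AAPL', 172), ('GOOG', 138), ('MSFT', 410), ('IBM', 190)]
    print(heapq.nlargest(2, portfolio, key=lambda s: s[1]))   # two highest prices
    print(heapq.nsmallest(2, portfolio, key=lambda s: s[1]))  # two lowest prices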
more efficient# for sorting.# Bubble up the smaller child until hitting a leaf.# leftmost child position# Set childpos to index of smaller child.# Move the smaller child up.# The leaf at pos is empty now. Put newitem there, and bubble it up# to its final resting place (by sifting its parents down).# Bubble up the larger child until hitting a leaf.# Set childpos to index of larger child.# Move the larger child up.# raises StopIteration when exhausted# restore heap condition# remove empty iterator# fast case when only a single iterator remains# Algorithm notes for nlargest() and nsmallest()# ==============================================# Make a single pass over the data while keeping the k most extreme values# in a heap. Memory consumption is limited to keeping k values in a list.# Measured performance for random inputs:# number of comparisons# n inputs k-extreme values (average of 5 trials) % more than min()# ------------- ---------------- --------------------- -----------------# 1,000 100 3,317 231.7%# 10,000 100 14,046 40.5%# 100,000 100 105,749 5.7%# 1,000,000 100 1,007,751 0.8%# 10,000,000 100 10,009,401 0.1%# Theoretical number of comparisons for k smallest of n random inputs:# Step Comparisons Action# ---- -------------------------- ---------------------------# 1 1.66 * k heapify the first k-inputs# 2 n - k compare remaining elements to top of heap# 3 k * (1 + lg2(k)) * ln(n/k) replace the topmost value on the heap# 4 k * lg2(k) - (k/2) final sort of the k most extreme values# Combining and simplifying for a rough estimate gives:# comparisons = n + k * (log(k, 2) * log(n/k) + log(k, 2) + log(n/k))# Computing the number of comparisons for step 3:# -----------------------------------------------# * For the i-th new value from the iterable, the probability of being in the# k most extreme values is k/i. For example, the probability of the 101st# value seen being in the 100 most extreme values is 100/101.# * If the value is a new extreme value, the cost of inserting it into the# heap is 1 + log(k, 2).# * The probability times the cost gives:# (k/i) * (1 + log(k, 2))# * Summing across the remaining n-k elements gives:# sum((k/i) * (1 + log(k, 2)) for i in range(k+1, n+1))# * This reduces to:# (H(n) - H(k)) * k * (1 + log(k, 2))# * Where H(n) is the n-th harmonic number estimated by:# gamma = 0.5772156649# H(n) = log(n, e) + gamma + 1 / (2 * n)# http://en.wikipedia.org/wiki/Harmonic_series_(mathematics)#Rate_of_divergence# * Substituting the H(n) formula:# comparisons = k * (1 + log(k, 2)) * (log(n/k, e) + (1/n - 1/k) / 2)# Worst-case for step 3:# ----------------------# In the worst case, the input data is reversed sorted so that every new element# must be inserted in the heap:# comparisons = 1.66 * k + log(k, 2) * (n - k)# Alternative Algorithms# Other algorithms were not used because they:# 1) Took much more auxiliary memory,# 2) Made multiple passes over the data.# 3) Made more comparisons in common cases (small k, large n, semi-random input).# See the more detailed comparison of approach at:# http://code.activestate.com/recipes/577573-compare-algorithms-for-heapqsmallest# Short-cut for n==1 is to use min()# When n>=size, it's faster to use sorted()# When key is none, use simpler decoration# put the range(n) first so that zip() doesn't# consume one too many elements from the iterator# General case, slowest method# Short-cut for n==1 is to use max()# If available, use C implementationb'Heap queue algorithm (a.k.a. priority queue). 
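The implementation notes above mention storing (priority, record) tuples so that comparisons stay cheap; a minimal sketch of that pattern, using a counter as a tie-breaker so payloads are never compared.

    import heapq
    import itertools

    pq = []
    counter = itertools.count()            # tie-breaker; payloads never compared
    for priority, task in [(3, "write report"), (1, "fix bug"), (2, "review PR")]:
        heapq.heappush(pq, (priority, next(counter), task))

    while pq:
        priority, _, task = heapq.heappop(pq)
        print(priority, task)              # 1 fix bug / 2 review PR / 3 write report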
+ +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +Usage: + +heap = [] # creates an empty heap +heappush(heap, item) # pushes a new item on the heap +item = heappop(heap) # pops the smallest item from the heap +item = heap[0] # smallest item on the heap without popping it +heapify(x) # transforms list into a heap, in-place, in linear time +item = heapreplace(heap, item) # pops and returns smallest item, and adds + # new item; the heap size is unchanged + +Our API differs from textbook heap algorithms as follows: + +- We use 0-based indexing. This makes the relationship between the + index for a node and the indexes for its children slightly less + obvious, but is more suitable since Python uses 0-based indexing. + +- Our heappop() method returns the smallest item, not the largest. + +These two make it possible to view the heap as a regular Python list +without surprises: heap[0] is the smallest item, and heap.sort() +maintains the heap invariant! +'b'Heap queues + +[explanation by François Pinard] + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +The strange invariant above is meant to be an efficient memory +representation for a tournament. The numbers below are `k', not a[k]: + + 0 + + 1 2 + + 3 4 5 6 + + 7 8 9 10 11 12 13 14 + + 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 + + +In the tree above, each cell `k' is topping `2*k+1' and `2*k+2'. In +a usual binary tournament we see in sports, each cell is the winner +over the two cells it tops, and we can trace the winner down the tree +to see all opponents s/he had. However, in many computer applications +of such tournaments, we do not need to trace the history of a winner. +To be more memory efficient, when a winner is promoted, we try to +replace it by something else at a lower level, and the rule becomes +that a cell and the two cells it tops contain three different items, +but the top cell "wins" over the two topped cells. + +If this heap invariant is protected at all time, index 0 is clearly +the overall winner. The simplest algorithmic way to remove it and +find the "next" winner is to move some loser (let's say cell 30 in the +diagram above) into the 0 position, and then percolate this new 0 down +the tree, exchanging values, until the invariant is re-established. +This is clearly logarithmic on the total number of items in the tree. +By iterating over all items, you get an O(n ln n) sort. + +A nice feature of this sort is that you can efficiently insert new +items while the sort is going on, provided that the inserted items are +not "better" than the last 0'th element you extracted. This is +especially useful in simulation contexts, where the tree holds all +incoming events, and the "win" condition means the smallest scheduled +time. When an event schedule other events for execution, they are +scheduled into the future, so they can easily go into the heap. So, a +heap is a good structure for implementing schedulers (this is what I +used for my MIDI sequencer :-). 
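A small sketch of the scheduler use case described above: events are pushed as (time, description) pairs, and the heap invariant keeps the earliest scheduled event at index 0. The event names are made up for illustration.

import heapq

events = []                                   # an empty heap
heapq.heappush(events, (5.0, "send keep-alive"))
heapq.heappush(events, (1.5, "flush buffer"))
heapq.heappush(events, (3.2, "rotate log"))

while events:
    when, what = heapq.heappop(events)        # always the smallest scheduled time
    print(f"t={when}: {what}")
# t=1.5: flush buffer
# t=3.2: rotate log
# t=5.0: send keep-alive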
+ +Various structures for implementing schedulers have been extensively +studied, and heaps are good for this, as they are reasonably speedy, +the speed is almost constant, and the worst case is not much different +than the average case. However, there are other representations which +are more efficient overall, yet the worst cases might be terrible. + +Heaps are also very useful in big disk sorts. You most probably all +know that a big sort implies producing "runs" (which are pre-sorted +sequences, which size is usually related to the amount of CPU memory), +followed by a merging passes for these runs, which merging is often +very cleverly organised[1]. It is very important that the initial +sort produces the longest runs possible. Tournaments are a good way +to that. If, using all the memory available to hold a tournament, you +replace and percolate items that happen to fit the current run, you'll +produce runs which are twice the size of the memory for random input, +and much better for input fuzzily ordered. + +Moreover, if you output the 0'th item on disk and get an input which +may not fit in the current tournament (because the value "wins" over +the last output value), it cannot fit in the heap, so the size of the +heap decreases. The freed memory could be cleverly reused immediately +for progressively building a second heap, which grows at exactly the +same rate the first heap is melting. When the first heap completely +vanishes, you switch heaps and start a new run. Clever and quite +effective! + +In a word, heaps are useful memory structures to know. I use them in +a few applications, and I think it is good to keep a `heap' module +around. :-) + +-------------------- +[1] The disk balancing algorithms which are current, nowadays, are +more annoying than clever, and this is a consequence of the seeking +capabilities of the disks. On devices which cannot seek, like big +tape drives, the story was quite different, and one had to be very +clever to ensure (far in advance) that each tape movement will be the +most effective possible (that is, will best participate at +"progressing" the merge). Some tapes were even able to read +backwards, and this was also used to avoid the rewinding time. +Believe me, real good tape sorts were quite spectacular to watch! +From all times, sorting has always been a Great Art! :-) +'b'heappush'u'heappush'b'heappop'u'heappop'b'heapify'u'heapify'b'heapreplace'u'heapreplace'b'merge'u'merge'b'nlargest'u'nlargest'b'nsmallest'u'nsmallest'b'heappushpop'u'heappushpop'b'Push item onto heap, maintaining the heap invariant.'u'Push item onto heap, maintaining the heap invariant.'b'Pop the smallest item off the heap, maintaining the heap invariant.'u'Pop the smallest item off the heap, maintaining the heap invariant.'b'Pop and return the current smallest value, and add the new item. + + This is more efficient than heappop() followed by heappush(), and can be + more appropriate when using a fixed-size heap. Note that the value + returned may be larger than item! That constrains reasonable uses of + this routine unless written as part of a conditional replacement: + + if item > heap[0]: + item = heapreplace(heap, item) + 'u'Pop and return the current smallest value, and add the new item. + + This is more efficient than heappop() followed by heappush(), and can be + more appropriate when using a fixed-size heap. Note that the value + returned may be larger than item! 
That constrains reasonable uses of + this routine unless written as part of a conditional replacement: + + if item > heap[0]: + item = heapreplace(heap, item) + 'b'Fast version of a heappush followed by a heappop.'u'Fast version of a heappush followed by a heappop.'b'Transform list into a heap, in-place, in O(len(x)) time.'u'Transform list into a heap, in-place, in O(len(x)) time.'b'Maxheap version of a heappop.'u'Maxheap version of a heappop.'b'Maxheap version of a heappop followed by a heappush.'u'Maxheap version of a heappop followed by a heappush.'b'Transform list into a maxheap, in-place, in O(len(x)) time.'u'Transform list into a maxheap, in-place, in O(len(x)) time.'b'Maxheap variant of _siftdown'u'Maxheap variant of _siftdown'b'Maxheap variant of _siftup'u'Maxheap variant of _siftup'b'Merge multiple sorted inputs into a single sorted output. + + Similar to sorted(itertools.chain(*iterables)) but returns a generator, + does not pull the data into memory all at once, and assumes that each of + the input streams is already sorted (smallest to largest). + + >>> list(merge([1,3,5,7], [0,2,4,8], [5,10,15,20], [], [25])) + [0, 1, 2, 3, 4, 5, 5, 7, 8, 10, 15, 20, 25] + + If *key* is not None, applies a key function to each element to determine + its sort order. + + >>> list(merge(['dog', 'horse'], ['cat', 'fish', 'kangaroo'], key=len)) + ['dog', 'cat', 'fish', 'horse', 'kangaroo'] + + 'u'Merge multiple sorted inputs into a single sorted output. + + Similar to sorted(itertools.chain(*iterables)) but returns a generator, + does not pull the data into memory all at once, and assumes that each of + the input streams is already sorted (smallest to largest). + + >>> list(merge([1,3,5,7], [0,2,4,8], [5,10,15,20], [], [25])) + [0, 1, 2, 3, 4, 5, 5, 7, 8, 10, 15, 20, 25] + + If *key* is not None, applies a key function to each element to determine + its sort order. + + >>> list(merge(['dog', 'horse'], ['cat', 'fish', 'kangaroo'], key=len)) + ['dog', 'cat', 'fish', 'horse', 'kangaroo'] + + 'b'Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable, key=key)[:n] + 'u'Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable, key=key)[:n] + 'b'Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, key=key, reverse=True)[:n] + 'u'Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, key=key, reverse=True)[:n] + 'u'heapq'HMAC (Keyed-Hashing for Message Authentication) module. + +Implements the HMAC algorithm as described by RFC 2104. +compare_digest_hashopenssl_openssl_md_methstrans_5Ctrans_36HMACRFC 2104 HMAC class. Also complies with RFC 4231. + + This supports the API for Cryptographic Hash Functions (PEP 247). + digestmodCreate a new HMAC object. + + key: bytes or buffer, key for the keyed hash object. + msg: bytes or buffer, Initial input for the hash or None. + digestmod: A hash name suitable for hashlib.new(). *OR* + A hashlib constructor returning a new hash object. *OR* + A module supporting PEP 247. + + Required as of 3.8, despite its position after the optional + msg argument. Passing it as a keyword argument is + recommended, though not required for legacy API reasons. 
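A minimal usage sketch for the HMAC API documented above; the key and message values are placeholders, not real secrets.

import hashlib
import hmac

key = b"illustrative-key"          # placeholder key
msg = b"message to authenticate"

# digestmod is required; passing it as a keyword argument is the recommended style.
mac = hmac.new(key, msg, digestmod=hashlib.sha256)
tag = mac.hexdigest()

# Verify with a constant-time comparison rather than '=='.
expected = hmac.new(key, msg, digestmod="sha256").hexdigest()
assert hmac.compare_digest(tag, expected)

# One-shot helper; fastest when the hash is named as a string.
assert hmac.digest(key, msg, "sha256") == mac.digest()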
+ key: expected bytes or bytearray, but got %rMissing required parameter 'digestmod'.digest_consblock_size of %d seems too small; using our default of %d.'block_size of %d seems too small; using our ''default of %d.'No block_size attribute on given digest object; Assuming %d.'No block_size attribute on given digest object; ''Assuming %d.'hmac-Feed data from msg into this hashing object.Return a separate copy of this hashing object. + + An update to this copy won't affect the original object. + _currentReturn a hash object for the current state. + + To be used only internally with digest() and hexdigest(). + Return the hash value of this hashing object. + + This returns the hmac value as bytes. The object is + not altered in any way by this function; you can continue + updating the object after calling this function. + Like digest(), but returns a string of hexadecimal digits instead. + Create a new hashing object and return it. + + key: bytes or buffer, The starting key for the hash. + msg: bytes or buffer, Initial input for the hash, or None. + digestmod: A hash name suitable for hashlib.new(). *OR* + A hashlib constructor returning a new hash object. *OR* + A module supporting PEP 247. + + Required as of 3.8, despite its position after the optional + msg argument. Passing it as a keyword argument is + recommended, though not required for legacy API reasons. + + You can now feed arbitrary bytes into the object using its update() + method, and can ask for the hash value at any time by calling its digest() + or hexdigest() methods. + Fast inline implementation of HMAC. + + key: bytes or buffer, The key for the keyed hash object. + msg: bytes or buffer, Input message. + digest: A hash name suitable for hashlib.new() for best performance. *OR* + A hashlib constructor returning a new hash object. *OR* + A module supporting PEP 247. + # The size of the digests returned by HMAC depends on the underlying# hashing module used. Use digest_size from the instance of HMAC instead.# 512-bit HMAC; can be changed in subclasses.# self.blocksize is the default blocksize. self.block_size is# effective block size as well as the public API attribute.# Call __new__ directly to avoid the expensive __init__.b'HMAC (Keyed-Hashing for Message Authentication) module. + +Implements the HMAC algorithm as described by RFC 2104. +'u'HMAC (Keyed-Hashing for Message Authentication) module. + +Implements the HMAC algorithm as described by RFC 2104. +'b'RFC 2104 HMAC class. Also complies with RFC 4231. + + This supports the API for Cryptographic Hash Functions (PEP 247). + 'u'RFC 2104 HMAC class. Also complies with RFC 4231. + + This supports the API for Cryptographic Hash Functions (PEP 247). + 'b'Create a new HMAC object. + + key: bytes or buffer, key for the keyed hash object. + msg: bytes or buffer, Initial input for the hash or None. + digestmod: A hash name suitable for hashlib.new(). *OR* + A hashlib constructor returning a new hash object. *OR* + A module supporting PEP 247. + + Required as of 3.8, despite its position after the optional + msg argument. Passing it as a keyword argument is + recommended, though not required for legacy API reasons. + 'u'Create a new HMAC object. + + key: bytes or buffer, key for the keyed hash object. + msg: bytes or buffer, Initial input for the hash or None. + digestmod: A hash name suitable for hashlib.new(). *OR* + A hashlib constructor returning a new hash object. *OR* + A module supporting PEP 247. + + Required as of 3.8, despite its position after the optional + msg argument. 
Passing it as a keyword argument is + recommended, though not required for legacy API reasons. + 'b'key: expected bytes or bytearray, but got %r'u'key: expected bytes or bytearray, but got %r'b'Missing required parameter 'digestmod'.'u'Missing required parameter 'digestmod'.'b'block_size of %d seems too small; using our default of %d.'u'block_size of %d seems too small; using our default of %d.'b'No block_size attribute on given digest object; Assuming %d.'u'No block_size attribute on given digest object; Assuming %d.'b'hmac-'u'hmac-'b'Feed data from msg into this hashing object.'u'Feed data from msg into this hashing object.'b'Return a separate copy of this hashing object. + + An update to this copy won't affect the original object. + 'u'Return a separate copy of this hashing object. + + An update to this copy won't affect the original object. + 'b'Return a hash object for the current state. + + To be used only internally with digest() and hexdigest(). + 'u'Return a hash object for the current state. + + To be used only internally with digest() and hexdigest(). + 'b'Return the hash value of this hashing object. + + This returns the hmac value as bytes. The object is + not altered in any way by this function; you can continue + updating the object after calling this function. + 'u'Return the hash value of this hashing object. + + This returns the hmac value as bytes. The object is + not altered in any way by this function; you can continue + updating the object after calling this function. + 'b'Like digest(), but returns a string of hexadecimal digits instead. + 'u'Like digest(), but returns a string of hexadecimal digits instead. + 'b'Create a new hashing object and return it. + + key: bytes or buffer, The starting key for the hash. + msg: bytes or buffer, Initial input for the hash, or None. + digestmod: A hash name suitable for hashlib.new(). *OR* + A hashlib constructor returning a new hash object. *OR* + A module supporting PEP 247. + + Required as of 3.8, despite its position after the optional + msg argument. Passing it as a keyword argument is + recommended, though not required for legacy API reasons. + + You can now feed arbitrary bytes into the object using its update() + method, and can ask for the hash value at any time by calling its digest() + or hexdigest() methods. + 'u'Create a new hashing object and return it. + + key: bytes or buffer, The starting key for the hash. + msg: bytes or buffer, Initial input for the hash, or None. + digestmod: A hash name suitable for hashlib.new(). *OR* + A hashlib constructor returning a new hash object. *OR* + A module supporting PEP 247. + + Required as of 3.8, despite its position after the optional + msg argument. Passing it as a keyword argument is + recommended, though not required for legacy API reasons. + + You can now feed arbitrary bytes into the object using its update() + method, and can ask for the hash value at any time by calling its digest() + or hexdigest() methods. + 'b'Fast inline implementation of HMAC. + + key: bytes or buffer, The key for the keyed hash object. + msg: bytes or buffer, Input message. + digest: A hash name suitable for hashlib.new() for best performance. *OR* + A hashlib constructor returning a new hash object. *OR* + A module supporting PEP 247. + 'u'Fast inline implementation of HMAC. + + key: bytes or buffer, The key for the keyed hash object. + msg: bytes or buffer, Input message. + digest: A hash name suitable for hashlib.new() for best performance. 
*OR* + A hashlib constructor returning a new hash object. *OR* + A module supporting PEP 247. + 'u'hmac'Get useful information from live Python objects. + +This module encapsulates the interface provided by the internal special +attributes (co_*, im_*, tb_*, etc.) in a friendlier fashion. +It also provides some help for examining source code and class layout. + +Here are some of the useful functions provided by this module: + + ismodule(), isclass(), ismethod(), isfunction(), isgeneratorfunction(), + isgenerator(), istraceback(), isframe(), iscode(), isbuiltin(), + isroutine() - check object types + getmembers() - get members of an object that satisfy a given condition + + getfile(), getsourcefile(), getsource() - find an object's source code + getdoc(), getcomments() - get documentation on an object + getmodule() - determine the module that an object came from + getclasstree() - arrange classes so as to represent their hierarchy + + getargvalues(), getcallargs() - get info about function arguments + getfullargspec() - same, with support for Python 3 features + formatargvalues() - format an argument spec + getouterframes(), getinnerframes() - get info about frames + currentframe() - get the current stack frame + stack(), trace() - get info about frames on the stack or in a traceback + + signature() - get a Signature object for the callable +Ka-Ping Yee Yury Selivanov importlib.machinerymod_dictCO_TPFLAGS_IS_ABSTRACTReturn true if the object is a module. + + Module objects provide these attributes: + __cached__ pathname to byte compiled file + __doc__ documentation string + __file__ filename (missing for built-in modules)Return true if the object is a class. + + Class objects provide these attributes: + __doc__ documentation string + __module__ name of module in which this class was definedReturn true if the object is an instance method. + + Instance method objects provide these attributes: + __doc__ documentation string + __name__ name with which this method was defined + __func__ function object containing implementation of method + __self__ instance to which this method is boundReturn true if the object is a method descriptor. + + But not if ismethod() or isclass() or isfunction() are true. + + This is new in Python 2.2, and, for example, is true of int.__add__. + An object passing this test has a __get__ attribute but not a __set__ + attribute, but beyond that the set of attributes varies. __name__ is + usually sensible, and __doc__ often is. + + Methods implemented via descriptors that also pass one of the other + tests return false from the ismethoddescriptor() test, simply because + the other tests promise more -- you can, e.g., count on having the + __func__ attribute (etc) when an object passes ismethod().tpisdatadescriptorReturn true if the object is a data descriptor. + + Data descriptors have a __set__ or a __delete__ attribute. Examples are + properties (defined in Python) and getsets and members (defined in C). + Typically, data descriptors will also have __name__ and __doc__ attributes + (properties, getsets, and members have both of these attributes), but this + is not guaranteed.MemberDescriptorTypeismemberdescriptorReturn true if the object is a member descriptor. + + Member descriptors are specialized descriptors defined in extension + modules.GetSetDescriptorTypeisgetsetdescriptorReturn true if the object is a getset descriptor. + + getset descriptors are specialized descriptors defined in extension + modules.Return true if the object is a user-defined function. 
+ + Function objects provide these attributes: + __doc__ documentation string + __name__ name with which this function was defined + __code__ code object containing compiled function bytecode + __defaults__ tuple of any default values for arguments + __globals__ global namespace in which this function was defined + __annotations__ dict of parameter annotations + __kwdefaults__ dict of keyword only parameters with defaults_has_code_flagReturn true if ``f`` is a function (or a method or functools.partial + wrapper wrapping a function) whose code object has the given ``flag`` + set in its flags.Return true if the object is a user-defined generator function. + + Generator function objects provide the same attributes as functions. + See help(isfunction) for a list of attributes.Return true if the object is a coroutine function. + + Coroutine functions are defined with "async def" syntax. + isasyncgenfunctionReturn true if the object is an asynchronous generator function. + + Asynchronous generator functions are defined with "async def" + syntax and have "yield" expressions in their body. + isasyncgenReturn true if the object is an asynchronous generator.AsyncGeneratorTypeReturn true if the object is a generator. + + Generator objects provide these attributes: + __iter__ defined to support iteration over container + close raises a new GeneratorExit exception inside the + generator to terminate the iteration + gi_code code object + gi_frame frame object or possibly None once the generator has + been exhausted + gi_running set to 1 when generator is executing, 0 otherwise + next return the next item from the container + send resumes the generator and "sends" a value that becomes + the result of the current yield-expression + throw used to raise an exception inside the generatorReturn true if the object is a coroutine.Return true if object can be passed to an ``await`` expression.CO_ITERABLE_COROUTINEReturn true if the object is a traceback. + + Traceback objects provide these attributes: + tb_frame frame object at this level + tb_lasti index of last attempted instruction in bytecode + tb_lineno current line number in Python source code + tb_next next inner traceback object (called by this level)TracebackTypeReturn true if the object is a frame object. + + Frame objects provide these attributes: + f_back next outer frame object (this frame's caller) + f_builtins built-in namespace seen by this frame + f_code code object being executed in this frame + f_globals global namespace seen by this frame + f_lasti index of last attempted instruction in bytecode + f_lineno current line number in Python source code + f_locals local namespace seen by this frame + f_trace tracing function for this frame, or NoneFrameTypeReturn true if the object is a code object. 
+ + Code objects provide these attributes: + co_argcount number of arguments (not including *, ** args + or keyword only arguments) + co_code string of raw compiled bytecode + co_cellvars tuple of names of cell variables + co_consts tuple of constants used in the bytecode + co_filename name of file in which this code object was created + co_firstlineno number of first line in Python source code + co_flags bitmap: 1=optimized | 2=newlocals | 4=*arg | 8=**arg + | 16=nested | 32=generator | 64=nofree | 128=coroutine + | 256=iterable_coroutine | 512=async_generator + co_freevars tuple of names of free variables + co_posonlyargcount number of positional only arguments + co_kwonlyargcount number of keyword only arguments (not including ** arg) + co_lnotab encoded mapping of line numbers to bytecode indices + co_name name with which this code object was defined + co_names tuple of names of local variables + co_nlocals number of local variables + co_stacksize virtual machine stack space required + co_varnames tuple of names of arguments and local variablesisbuiltinReturn true if the object is a built-in function or method. + + Built-in functions and methods provide these attributes: + __doc__ documentation string + __name__ original name of this function or method + __self__ instance to which a method is bound, or NoneReturn true if the object is any kind of function or method.isabstractReturn true if the object is an abstract base class (ABC).getmembersReturn all members of an object as (name, value) pairs sorted by name. + Optionally, only return members that satisfy a given predicate.getmroprocessedname kind defining_class objectclassify_class_attrsReturn list of attribute-descriptor tuples. + + For each name in dir(cls), the return list contains a 4-tuple + with these elements: + + 0. The name (a string). + + 1. The kind of attribute this is, one of these strings: + 'class method' created via classmethod() + 'static method' created via staticmethod() + 'property' created via property() + 'method' any other flavor of method or descriptor + 'data' not a method + + 2. The class which defined this attribute (a class). + + 3. The object as obtained by calling getattr; if this fails, or if the + resulting object does not live anywhere in the class' mro (including + metaclasses) then the object is looked up in the defining class's + dict (found by walking the mro). + + If one of the items in dir(cls) is stored in the metaclass it will now + be discovered and not have None be listed as the class in which it was + defined. Any items whose home class cannot be discovered are skipped. + metamroclass_basesall_baseshomeclsget_objdict_obj__dict__ is special, don't want the proxylast_clssrch_clssrch_objBuiltinMethodTypestatic methodClassMethodDescriptorTypeclass methodReturn tuple of base classes (including cls) in method resolution order.Get the object wrapped by *func*. + + Follows the chain of :attr:`__wrapped__` attributes returning the last + object in the chain. + + *stop* is an optional callback accepting an object in the wrapper chain + as its sole argument that allows the unwrapping to be terminated early if + the callback returns a true value. If the callback never returns a true + value, the last object in the chain is returned as usual. For example, + :func:`signature` uses this to stop unwrapping if any object in the + chain has a ``__signature__`` attribute defined. + + :exc:`ValueError` is raised if a cycle is encountered. 
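A short sketch of the introspection helpers documented above: getmembers() with a predicate, and unwrap() following the chain of __wrapped__ attributes. The decorator and classes here are invented for illustration.

import functools
import inspect

def traced(func):
    @functools.wraps(func)                 # records func as wrapper.__wrapped__
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@traced
def greet(name):
    return f"hello {name}"

# unwrap() follows the __wrapped__ chain back to the original function.
assert inspect.unwrap(greet) is greet.__wrapped__

class Greeter:
    def hello(self): ...
    @staticmethod
    def version(): ...

# getmembers() with a predicate: every routine reachable on the class.
routines = inspect.getmembers(Greeter, inspect.isroutine)
print([name for name, _ in routines if not name.startswith("__")])
# ['hello', 'version']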
+ + _is_wrapperrecursion_limitid_funcwrapper loop when unwrapping {!r}indentsizeReturn the indent size, in spaces, at the start of a line of text.expline_findclass_finddocgetdocGet the documentation string for an object. + + All tabs are expanded to spaces. To clean up docstrings that are + indented to line up with blocks of code, any whitespace than can be + uniformly removed from the second line onwards is removed.Clean up indentation from docstrings. + + Any whitespace that can be uniformly removed from the second line + onwards is removed.marginWork out which source or compiled file an object was defined in.{!r} is a built-in module{!r} is a built-in classmodule, class, method, function, traceback, frame, or code object was expected, got {}'module, class, method, function, traceback, frame, or ''code object was expected, got {}'getmodulenameReturn the module name for a given file, or None.all_suffixesneglenReturn the filename that can be used to locate an object's source. + Return None if no way can be identified to get the source. + all_bytecode_suffixesgetabsfile_filenameReturn an absolute path to the source or compiled file for an object. + + The idea is for each object to have a unique origin, so this routine + normalizes the result as much as possible.modulesbyfile_filesbymodnameReturn the module an object was defined in, or None if not found.mainobjectbuiltinbuiltinobjectfindsourceReturn the entire source file and starting line number for an object. + + The argument may be a module, class, method, function, traceback, frame, + or code object. The source code is returned as a list of all the lines + in the file and the line number indexes a line in that list. An OSError + is raised if the source code cannot be retrieved.source code not availablecould not get source code^(\s*)class\s*\bcandidatescould not find class definitioncould not find function definitionlnum^(\s*def\s)|(\s*async\s+def\s)|(.*(?getsourceReturn the text of the source code for an object. + + The argument may be a module, class, method, function, traceback, frame, + or code object. The source code is returned as a single string. An + OSError is raised if the source code cannot be retrieved.walktreeRecursive helper function for getclasstree().getclasstreeArrange the given list of classes into a hierarchy of nested lists. + + Where a nested list appears, it contains classes derived from the class + whose entry immediately precedes the list. Each entry is a 2-tuple + containing a class and a tuple of its base classes. If the 'unique' + argument is true, exactly one entry appears in the returned structure + for each class in the given list. Otherwise, classes using multiple + inheritance and their descendants will appear multiple times.Argumentsargs, varargs, varkwgetargsGet information about the arguments accepted by a code object. + + Three things are returned: (args, varargs, varkw), where + 'args' is the list of argument names. Keyword-only arguments are + appended. 'varargs' and 'varkw' are the names of the * and ** + arguments or None.{!r} is not a code objectnkwargskwonlyargsvarargsCO_VARARGSvarkwCO_VARKEYWORDSArgSpecargs varargs keywords defaultsgetargspecGet the names and default values of a function's parameters. + + A tuple of four things is returned: (args, varargs, keywords, defaults). + 'args' is a list of the argument names, including keyword-only argument names. + 'varargs' and 'keywords' are the names of the * and ** parameters or None. 
+ 'defaults' is an n-tuple of the default values of the last n parameters. + + This function is deprecated, as it does not support annotations or + keyword-only parameters and will raise ValueError if either is present + on the supplied callable. + + For a more structured introspection API, use inspect.signature() instead. + + Alternatively, use getfullargspec() for an API with a similar namedtuple + based interface, but full support for annotations and keyword-only + parameters. + + Deprecated since Python 3.5, use `inspect.getfullargspec()`. + inspect.getargspec() is deprecated since Python 3.0, use inspect.signature() or inspect.getfullargspec()"inspect.getargspec() is deprecated since Python 3.0, ""use inspect.signature() or inspect.getfullargspec()"getfullargspeckwonlydefaultsFunction has keyword-only parameters or annotations, use inspect.signature() API which can support them"Function has keyword-only parameters or annotations"", use inspect.signature() API which can support them"FullArgSpecargs, varargs, varkw, defaults, kwonlyargs, kwonlydefaults, annotationsGet the names and default values of a callable object's parameters. + + A tuple of seven things is returned: + (args, varargs, varkw, defaults, kwonlyargs, kwonlydefaults, annotations). + 'args' is a list of the parameter names. + 'varargs' and 'varkw' are the names of the * and ** parameters or None. + 'defaults' is an n-tuple of the default values of the last n parameters. + 'kwonlyargs' is a list of keyword-only parameter names. + 'kwonlydefaults' is a dictionary mapping names from kwonlyargs to defaults. + 'annotations' is a dictionary mapping parameter names to annotations. + + Notable differences from inspect.signature(): + - the "self" parameter is always reported, even for bound methods + - wrapper chains defined by __wrapped__ *not* unwrapped automatically + _signature_from_callablefollow_wrapper_chainsskip_bound_argSignaturesigclsunsupported callableposonlyargsreturn_annotation_POSITIONAL_ONLY_POSITIONAL_OR_KEYWORD_VAR_POSITIONAL_KEYWORD_ONLY_VAR_KEYWORDannotationArgInfoargs varargs keywords localsgetargvaluesGet information about arguments passed into a particular frame. + + A tuple of four things is returned: (args, varargs, varkw, locals). + 'args' is a list of the argument names. + 'varargs' and 'varkw' are the names of the * and ** arguments or None. + 'locals' is the locals dictionary of the given frame.formatannotationbase_moduletyping.formatannotationrelativeto_formatannotationformatargspec -> formatargformatvarargsformatvarkwformatvalueformatreturnsFormat an argument spec from the values returned by getfullargspec. + + The first seven arguments are (args, varargs, varkw, defaults, + kwonlyargs, kwonlydefaults, annotations). The other five arguments + are the corresponding optional formatting functions that are called to + turn names and values into strings. The last argument is an optional + function to format the sequence of arguments. + + Deprecated since Python 3.5: use the `signature` function and `Signature` + objects. + `formatargspec` is deprecated since Python 3.5. Use `signature` and the `Signature` object directly"`formatargspec` is deprecated since Python 3.5. Use `signature` and ""the `Signature` object directly"formatargandannotationfirstdefaultkwonlyargformatargvaluesFormat an argument spec from the 4 values returned by getargvalues. + + The first four arguments are (args, varargs, varkw, locals). 
The + next four arguments are the corresponding optional formatting functions + that are called to turn names and values into strings. The ninth + argument is an optional function to format the sequence of arguments._missing_argumentsf_nameargnames{} and {}, {} and {}%s() missing %i required %s argument%s: %spositionalkeyword-only_too_manykwonlydefcountgivenatleastkwonly_givenat least %dfrom %d to %dkwonly_sig positional argument%s (and %d keyword-only argument%s)%s() takes %s positional argument%s but %d%s %s givenwasweregetcallargsGet the mapping of arguments to values. + + A dict is returned, with keys the function argument names (including the + names of the * and ** arguments, if any), and values the respective bound + values from 'positional' and 'named'.arg2valuenum_posnum_argsnum_defaultspossible_kwargs%s() got an unexpected keyword argument %r%s() got multiple values for argument %rreqkwargClosureVarsnonlocals globals builtins unboundgetclosurevars + Get the mapping of free variables to their current values. + + Returns a named tuple of dicts mapping the current nonlocal, global + and builtin references as seen by the body of the function. A final + set of unbound names that could not be resolved is also provided. + {!r} is not a Python functionnonlocal_varscellcell_contentsglobal_ns__builtins__builtin_nsglobal_varsbuiltin_varsunbound_namesTracebackfilename lineno function code_context indexgetframeinfoGet information about a frame or traceback object. + + A tuple of five things is returned: the filename, the line number of + the current line, the function name, a list of lines of context from + the source code, and the index of the current line within that list. + The optional second argument specifies the number of lines of context + to return, which are centered around the current line.{!r} is not a frame or traceback objectgetlinenoGet the line number from a frame object, allowing for optimization.FrameInfogetouterframesGet a list of records for a frame and all higher (calling) frames. + + Each record contains a frame object, filename, line number, function + name, a list of lines of context, and index within the context.framelistframeinfogetinnerframesGet a list of records for a traceback's frame and all lower frames. + + Each record contains a frame object, filename, line number, function + name, a list of lines of context, and index within the context.Return the frame of the caller or None if this is not possible.Return a list of records for the stack above the caller's frame.Return a list of records for the stack below the current exception._static_getmro_check_instanceinstance_dict_check_class_shadowed_dict_is_typeclass_dictgetattr_staticRetrieve attributes without triggering dynamic lookup via the + descriptor protocol, __getattr__ or __getattribute__. + + Note: this function may not be able to retrieve all attributes + that getattr can fetch (like dynamically created attributes) + and may find attributes that getattr can't (like descriptors + that raise AttributeError). It can also return descriptor objects + instead of instance members in some cases. See the + documentation for details. + instance_resultklass_resultGEN_CREATEDGEN_RUNNINGGEN_SUSPENDEDGEN_CLOSEDgetgeneratorstateGet current state of a generator-iterator. + + Possible states are: + GEN_CREATED: Waiting to start execution. + GEN_RUNNING: Currently being executed by the interpreter. + GEN_SUSPENDED: Currently suspended at a yield expression. + GEN_CLOSED: Execution has completed. 
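A small sketch of the generator-state API documented above; `counter` is an illustrative generator.

import inspect

def counter(n):
    for i in range(n):
        yield i

gen = counter(3)
print(inspect.getgeneratorstate(gen))    # GEN_CREATED: waiting to start
next(gen)
print(inspect.getgeneratorstate(gen))    # GEN_SUSPENDED: paused at a yield
print(inspect.getgeneratorlocals(gen))   # {'n': 3, 'i': 0}
gen.close()
print(inspect.getgeneratorstate(gen))    # GEN_CLOSED: execution has completed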
+ getgeneratorlocals + Get the mapping of generator local variables to their current values. + + A dict is returned, with the keys the local variable names and values the + bound values.{!r} is not a Python generatorCORO_CREATEDCORO_RUNNINGCORO_SUSPENDEDCORO_CLOSEDgetcoroutinestateGet current state of a coroutine object. + + Possible states are: + CORO_CREATED: Waiting to start execution. + CORO_RUNNING: Currently being executed by the interpreter. + CORO_SUSPENDED: Currently suspended at an await expression. + CORO_CLOSED: Execution has completed. + getcoroutinelocals + Get the mapping of coroutine local variables to their current values. + + A dict is returned, with the keys the local variable names and values the + bound values._WrapperDescriptor_MethodWrapper_ClassMethodWrapper_NonUserDefinedCallables_signature_get_user_defined_methodmethod_namePrivate helper. Checks if ``cls`` has an attribute + named ``method_name`` and returns it only if it is a + pure python function. + meth_signature_get_partialwrapped_sigextra_argsPrivate helper to calculate how 'wrapped_sig' signature will + look like after applying a 'functools.partial' object (or alike) + on it. + old_paramsnew_paramspartial_argspartial_keywordsbind_partialbapartial object {!r} has incorrect argumentstransform_to_kwonlyparam_namearg_valuenew_param_signature_bound_methodPrivate helper to transform signatures for unbound + functions to bound methods. + invalid method signatureinvalid argument type_signature_is_builtinPrivate helper to test if `obj` is a callable that might + support Argument Clinic's __text_signature__ protocol. + _signature_is_functionlikePrivate helper to test if `obj` is a duck type of FunctionType. + A good example of such objects are functions compiled with + Cython, which have all attributes that a pure Python function + would have, but have their code statically compiled. + _void_signature_get_bound_param Private helper to get first parameter name from a + __text_signature__ of a builtin method, which should + be in the following format: '($param1, ...)'. + Assumptions are that the first argument won't have + a default value or an annotation. + ($cpos_signature_strip_non_python_syntax + Private helper function. Takes a signature in Argument Clinic's + extended signature format. + + Returns a tuple of three things: + * that signature re-rendered in standard Python syntax, + * the index of the "self" parameter (generally 0), or None if + the function does not have a "self" parameter, and + * the index of the last "positional only" parameter, + or None if the signature has no positional-only parameters. + self_parameterlast_positional_onlytoken_streamdelayed_commaskip_next_commacurrent_parameterERRORTOKENENCODINGclean_signature_signature_fromstrPrivate helper to parse content of '__text_signature__' + and return a Signature based on it. + ast_parameter_clsParameterdef foo: pass{!r} builtin has invalid signatureinvalidmodule_dictsys_module_dictparse_nameAnnotations are not currently supportedwrap_valueRewriteSymbolicsvisit_Attributevisit_Namename_nodedefault_node_emptyfillvaluePOSITIONAL_ONLYPOSITIONAL_OR_KEYWORDvarargVAR_POSITIONALKEYWORD_ONLYkw_defaultsVAR_KEYWORD_selfself_isboundself_ismodule_signature_from_builtinPrivate helper function to get signature for + builtin callables. 
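A minimal sketch of the public Signature API that the private helpers above feed into; `connect` and its parameters are invented for illustration.

import inspect

def connect(host, port=80, *args, timeout=None, **options):
    "Illustrative callable; not part of any real API."

sig = inspect.signature(connect)
print(sig)                             # (host, port=80, *args, timeout=None, **options)
for param in sig.parameters.values():
    print(param.name, param.kind)      # e.g. host POSITIONAL_OR_KEYWORD

# bind() maps call arguments onto parameters the same way a real call would.
bound = sig.bind("example.org", timeout=5)
bound.apply_defaults()                 # fills port=80, args=(), options={}
print(dict(bound.arguments))
# {'host': 'example.org', 'port': 80, 'args': (), 'timeout': 5, 'options': {}}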
+ {!r} is not a Python builtin function"{!r} is not a Python builtin ""function"no signature found for builtin {!r}_signature_from_functionPrivate helper: constructs Signature for the given python function.is_duck_functionfunc_codepos_countarg_namesposonly_countkeyword_only_countkeyword_onlypos_default_countnon_default_countposonly_left__validate_parameters__Private helper function to get signature for arbitrary + callable objects. + {!r} is not a callable object__signature__unexpected object {!r} in __signature__ attribute'unexpected object {!r} in __signature__ ''attribute'first_wrapped_paramsig_paramstext_sigfrom_callableno signature found for builtin type {!r}no signature found for {!r}no signature found for builtin function {!r}callable {!r} is not supported by signatureA private marker - used in Parameter & Signature.Marker object for Signature.empty and Parameter.empty._ParameterKind_PARAM_NAME_MAPPINGpositional-onlypositional or keywordvariadic positionalvariadic keywordRepresents a parameter in a function signature. + + Has the following public attributes: + + * name : str + The name of the parameter as a string. + * default : object + The default value for the parameter if specified. If the + parameter has no default value, this attribute is set to + `Parameter.empty`. + * annotation + The annotation for the parameter if specified. If the + parameter has no annotation, this attribute is set to + `Parameter.empty`. + * kind : str + Describes how argument values are bound to the parameter. + Possible values: `Parameter.POSITIONAL_ONLY`, + `Parameter.POSITIONAL_OR_KEYWORD`, `Parameter.VAR_POSITIONAL`, + `Parameter.KEYWORD_ONLY`, `Parameter.VAR_KEYWORD`. + _kind_annotationvalue is not a valid Parameter.kind{} parameters cannot have default valuesname is a required attribute for Parametername must be a str, not a {}implicit arguments must be passed as positional or keyword arguments, not {}'implicit arguments must be passed as ''positional or keyword arguments, not {}'implicit{}{!r} is not a valid parameter nameCreates a customized copy of the Parameter.formatted{}: {}{} = {}{}={}<{} "{}">BoundArgumentsResult of `Signature.bind` call. Holds the mapping of arguments + to the function's parameters. + + Has the following public attributes: + + * arguments : OrderedDict + An ordered mutable mapping of parameters' names to arguments' values. + Does not contain arguments' default values. + * signature : Signature + The Signature object that created this instance. + * args : tuple + Tuple of positional arguments values. + * kwargs : dict + Dict of keyword arguments values. + _signaturekwargs_startedapply_defaultsSet default values for missing arguments. + + For variable-positional arguments (*args) the default is an + empty tuple. + + For variable-keyword arguments (**kwargs) the default is an + empty dict. + new_arguments<{} ({})>A Signature object represents the overall signature of a function. + It stores a Parameter object for each parameter accepted by the + function, as well as information specific to the function itself. + + A Signature object has the following public attributes and methods: + + * parameters : OrderedDict + An ordered mapping of parameters' names to the corresponding + Parameter objects (keyword-only arguments are in the same order + as listed in `code.co_varnames`). + * return_annotation : object + The annotation for the return type of the function if specified. + If the function has no annotation for its return type, this + attribute is set to `Signature.empty`. 
+ * bind(*args, **kwargs) -> BoundArguments + Creates a mapping from positional and keyword arguments to + parameters. + * bind_partial(*args, **kwargs) -> BoundArguments + Creates a partial mapping from positional and keyword arguments + to parameters (simulating 'functools.partial' behavior.) + _return_annotation_parameters_bound_arguments_clsConstructs Signature from the given list of Parameter + objects and 'return_annotation'. All arguments are optional. + top_kindkind_defaultswrong parameter order: {} parameter before {} parameter'wrong parameter order: {} parameter before {} ''parameter'non-default argument follows default argument'non-default argument follows default ''argument'duplicate parameter name: {!r}from_functionConstructs Signature for the given python function. + + Deprecated since Python 3.5, use `Signature.from_callable()`. + inspect.Signature.from_function() is deprecated since Python 3.5, use Signature.from_callable()"inspect.Signature.from_function() is deprecated since ""Python 3.5, use Signature.from_callable()"from_builtinConstructs Signature for the given builtin function. + + Deprecated since Python 3.5, use `Signature.from_callable()`. + inspect.Signature.from_builtin() is deprecated since Python 3.5, use Signature.from_callable()"inspect.Signature.from_builtin() is deprecated since "follow_wrappedConstructs Signature for the given callable object.Creates a customized copy of the Signature. + Pass 'parameters' and/or 'return_annotation' arguments + to override them in the new copy. + _hash_basiskwo_paramsPrivate method. Don't use directly.parameters_exarg_valsarg_valtoo many positional argumentsmultiple values for argument {arg!r}{arg!r} parameter is positional only, but was passed as a keyword'{arg!r} parameter is positional only, ''but was passed as a keyword'missing a required argument: {arg!r}kwargs_paramgot an unexpected keyword argument {arg!r}Get a BoundArguments object, that maps the passed `args` + and `kwargs` to the function's signature. Raises `TypeError` + if the passed arguments can not be bound. + Get a BoundArguments object, that partially maps the + passed `args` and `kwargs` to the function's signature. + Raises `TypeError` if the passed arguments can not be bound. + render_pos_only_separatorrender_kw_only_separatorrenderedanno -> {}Get a signature object for the passed callable. Logic for inspecting an object given at command line The object to be analysed. It supports the 'module:qualname' syntax"The object to be analysed. ""It supports the 'module:qualname' syntax"--detailsDisplay info about the module rather than its source codemod_namehas_attrsFailed to import {} ({}: {})Can't get info for builtin modules.detailsTarget: {}Origin: {}Cached: {}Loader: {}Submodule search path: {}Line: {}# This module is in the public domain. No warranties.# Create constants for the compiler flags in Include/code.h# We try to get them from dis to avoid duplication# See Include/object.h# ----------------------------------------------------------- type-checking# mutual exclusion# CPython and equivalent# Other implementations# It looks like ABCMeta.__new__ has finished running;# TPFLAGS_IS_ABSTRACT should have been accurate.# It looks like ABCMeta.__new__ has not finished running yet; we're# probably in __init_subclass__. 
We'll look for abstractmethods manually.# :dd any DynamicClassAttributes to the list of names if object is a class;# this may result in duplicate entries if, for example, a virtual# attribute with the same name as a DynamicClassAttribute exists# First try to get the value via getattr. Some descriptors don't# like calling their __get__ (see bug #1785), so fall back to# looking in the __dict__.# handle the duplicate key# could be a (currently) missing slot member, or a buggy# __dir__; discard and move on# for attributes stored in the metaclass# :dd any DynamicClassAttributes to the list of names;# attribute with the same name as a DynamicClassAttribute exists.# Get the object associated with the name, and where it was defined.# Normal objects will be looked up with both getattr and directly in# its class' dict (in case getattr fails [bug #1785], and also to look# for a docstring).# For DynamicClassAttributes on the second pass we only look in the# class's dict.# Getting an obj from the __dict__ sometimes reveals more than# using getattr. Static and class methods are dramatic examples.# if the resulting object does not live somewhere in the# mro, drop it and search the mro manually# first look in the classes# then check the metaclasses# unable to locate the attribute anywhere, most likely due to# buggy custom __dir__; discard and move on# Classify the object or its descriptor.# ----------------------------------------------------------- class helpers# -------------------------------------------------------- function helpers# remember the original func for error reporting# Memoise by id to tolerate non-hashable objects, but store objects to# ensure they aren't destroyed, which would allow their IDs to be reused.# -------------------------------------------------- source code extraction# classmethod# Should be tested before isdatadescriptor().# Find minimum indentation of any non-blank lines after first line.# Remove indentation.# Remove any trailing or leading blank lines.# Check for paths that look like an actual module file# try longest suffixes first, in case they overlap# only return a non-existent filename if the module has a PEP 302 loader# or it is in the linecache# Try the filename to modulename cache# Try the cache again with the absolute file name# Update the filename to module name cache and check yet again# Copy sys.modules in order to cope with changes while iterating# Have already mapped this module, so skip it# Always map to the name the module knows itself by# Check the main module# Check builtins# Invalidate cache if needed.# Allow filenames in form of "" to pass through.# `doctest` monkeypatches `linecache` module to enable# inspection, so let `linecache.getlines` to be called.# make some effort to find the best matching class definition:# use the one with the least indentation, which is the one# that's most probably not inside a function definition.# if it's at toplevel, it's already the best one# else add whitespace to candidate list# this will sort by whitespace, and by line number,# less whitespace first# Look for a comment block at the top of the file.# Look for a preceding block of comments at the same indentation.# skip any decorators# look for the first "def", "class" or "lambda"# skip to the end of the line# stop skipping when a NEWLINE is seen# lambdas always end at the first NEWLINE# hitting a NEWLINE when in a decorator without args# ends the decorator# the end of matching indent/dedent pairs end a block# (note that this only works for "def"/"class" blocks,# not 
e.g. for "if: else:" or "try: finally:" blocks)# Include comments if indented at least as much as the block# any other token on the same indentation level end the previous# block as well, except the pseudo-tokens COMMENT and NL.# for module or frame that corresponds to module, return all source lines# --------------------------------------------------- class tree extraction# ------------------------------------------------ argument list extraction# Re: `skip_bound_arg=False`# There is a notable difference in behaviour between getfullargspec# and Signature: the former always returns 'self' parameter for bound# methods, whereas the Signature always shows the actual calling# signature of the passed object.# To simulate this behaviour, we "unbind" bound methods, to trick# inspect.signature to always return their first parameter ("self",# usually)# Re: `follow_wrapper_chains=False`# getfullargspec() historically ignored __wrapped__ attributes,# so we ensure that remains the case in 3.3+# Most of the times 'signature' will raise ValueError.# But, it can also raise AttributeError, and, maybe something# else. So to be fully backwards compatible, we catch all# possible exceptions here, and reraise a TypeError.# compatibility with 'func.__kwdefaults__'# compatibility with 'func.__defaults__'# implicit 'self' (or 'cls' for classmethods) argument# Nonlocal references are named in co_freevars and resolved# by looking them up in __closure__ by positional index# Global and builtin references are named in co_names and resolved# by looking them up in __globals__ or __builtins__# Because these used to be builtins instead of keywords, they# may still show up as name references. We ignore them.# -------------------------------------------------- stack frame extraction# FrameType.f_lineno is now a descriptor that grovels co_lnotab# ------------------------------------------------ static version of getattr# for types we check the metaclass too# ------------------------------------------------ generator introspection# ------------------------------------------------ coroutine introspection################################################################################## Function Signature Object (PEP 362)# Once '__signature__' will be added to 'C'-level# callables, this check won't be necessary# If positional-only parameter is bound by partial,# it effectively disappears from the signature# This means that this parameter, and all parameters# after it should be keyword-only (and var-positional# should be removed). Here's why. Consider the following# function:# foo(a, b, *args, c):# pass# "partial(foo, a='spam')" will have the following# signature: "(*, a='spam', b, c)". Because attempting# to call that partial with "(10, 20)" arguments will# raise a TypeError, saying that "a" argument received# multiple values.# Set the new default value# was passed as a positional argument# Drop first parameter:# '(p1, p2[, ...])' -> '(p2[, ...])'# Unless we add a new parameter type we never# get here# It's a var-positional parameter.# Do nothing. '(*args[, ...])' -> '(*args[, ...])'# Can't test 'isinstance(type)' here, as it would# also be True for regular python classes# All function-like objects are obviously callables,# and not classes.# Important to use _void ...# ... 
and not None here# token stream always starts with ENCODING token, skip it# Lazy import ast because it's relatively heavy and# it's not used for other than this function.# non-keyword-only parameters# *args# keyword-only arguments# **kwargs# Possibly strip the bound argument:# - We *always* strip first bound argument if# it is a module.# - We don't strip first bound argument if# skip_bound_arg is False.# for builtins, self parameter is always positional-only!# If it's not a pure Python function, and not a duck type# of pure function:# Parameter information.# Non-keyword-only parameters w/o defaults.# ... w/ defaults.# Keyword-only parameters.# Is 'func' is a pure Python function - don't validate the# parameters list (for correct order and defaults), it should be OK.# In this case we skip the first parameter of the underlying# function (usually `self` or `cls`).# Was this function wrapped by a decorator?# If the unwrapped object is a *method*, we might want to# skip its first parameter (self).# See test_signature_wrapped_bound_method for details.# Unbound partialmethod (see functools.partialmethod)# This means, that we need to calculate the signature# as if it's a regular partial object, but taking into# account that the first positional argument# (usually `self`, or `cls`) will not be passed# automatically (as for boundmethods)# First argument of the wrapped callable is `*args`, as in# `partialmethod(lambda *args)`.# If it's a pure Python function, or an object that is duck type# of a Python function (Cython functions, for instance), then:# obj is a class or a metaclass# First, let's see if it has an overloaded __call__ defined# in its metaclass# Now we check if the 'obj' class has a '__new__' method# Finally, we should have at least __init__ implemented# At this point we know, that `obj` is a class, with no user-# defined '__init__', '__new__', or class-level '__call__'# Since '__text_signature__' is implemented as a# descriptor that extracts text signature from the# class docstring, if 'obj' is derived from a builtin# class, its own '__text_signature__' may be 'None'.# Therefore, we go through the MRO (except the last# class in there, which is 'object') to find the first# class with non-empty text signature.# If 'obj' class has a __text_signature__ attribute:# return a signature based on it# No '__text_signature__' was found for the 'obj' class.# Last option is to check if its '__init__' is# object.__init__ or type.__init__.# We have a class (not metaclass), but no user-defined# __init__ or __new__ for it# Return a signature of 'object' builtin.# An object with __call__# We also check that the 'obj' is not an instance of# _WrapperDescriptor or _MethodWrapper to avoid# infinite recursion (and even potential segfault)# For classes and objects we skip the first parameter of their# __call__, __new__, or __init__ methods# Raise a nicer error message for builtins# These are implicit arguments generated by comprehensions. In# order to provide a friendlier interface to users, we recast# their name as "implicitN" and treat them as positional-only.# See issue 19611.# Add annotation and default value# We're done here. 
Other arguments# will be mapped in 'BoundArguments.kwargs'# plain argument# plain keyword argument# This BoundArguments was likely produced by# Signature.bind_partial().# No default for this parameter, but the# previous parameter of the same kind had# a default# There is a default for this parameter.# Let's iterate through the positional arguments and corresponding# parameters# No more positional arguments# No more parameters. That's it. Just need to check that# we have no `kwargs` after this while loop# That's OK, just empty *args. Let's start parsing# kwargs# That's fine too - we have a default value for this# parameter. So, lets start parsing `kwargs`, starting# with the current parameter# No default, not VAR_KEYWORD, not VAR_POSITIONAL,# not in `kwargs`# We have a positional argument to process# Looks like we have no parameter for this positional# We have an '*args'-like argument, let's fill it with# all positional arguments we have left and move on to# the next phase# Now, we iterate through the remaining parameters to process# keyword arguments# Memorize that we have a '**kwargs'-like parameter# Named arguments don't refer to '*args'-like parameters.# We only arrive here if the positional arguments ended# before reaching the last parameter before *args.# We have no value for this parameter. It's fine though,# if it has a default value, or it is an '*args'-like# parameter, left alone by the processing of positional# arguments.# This should never happen in case of a properly built# Signature object (but let's have this check here# to ensure correct behaviour just in case)# Process our '**kwargs'-like parameter# It's not a positional-only parameter, and the flag# is set to 'True' (there were pos-only params before.)# OK, we have an '*args'-like parameter, so we won't need# a '*' to separate keyword-only arguments# We have a keyword-only parameter to render and we haven't# rendered an '*args'-like parameter before, so add a '*'# separator to the parameters list ("foo(arg1, *, arg2)" case)# This condition should be only triggered once, so# reset the flag# There were only positional-only parameters, hence the# flag was not reset to 'False'b'Get useful information from live Python objects. + +This module encapsulates the interface provided by the internal special +attributes (co_*, im_*, tb_*, etc.) in a friendlier fashion. +It also provides some help for examining source code and class layout. + +Here are some of the useful functions provided by this module: + + ismodule(), isclass(), ismethod(), isfunction(), isgeneratorfunction(), + isgenerator(), istraceback(), isframe(), iscode(), isbuiltin(), + isroutine() - check object types + getmembers() - get members of an object that satisfy a given condition + + getfile(), getsourcefile(), getsource() - find an object's source code + getdoc(), getcomments() - get documentation on an object + getmodule() - determine the module that an object came from + getclasstree() - arrange classes so as to represent their hierarchy + + getargvalues(), getcallargs() - get info about function arguments + getfullargspec() - same, with support for Python 3 features + formatargvalues() - format an argument spec + getouterframes(), getinnerframes() - get info about frames + currentframe() - get the current stack frame + stack(), trace() - get info about frames on the stack or in a traceback + + signature() - get a Signature object for the callable +'u'Get useful information from live Python objects. 
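The binding logic described above can be exercised through the public API; the function `connect` below is a made-up example:

import inspect

def connect(host, port=5432, *args, timeout=10, **opts):
    pass

sig = inspect.signature(connect)
ba = sig.bind("db.local", 5433, retries=3)   # maps call arguments to parameter names
ba.apply_defaults()                          # fills in timeout=10 and the empty *args
print(ba.arguments)
# {'host': 'db.local', 'port': 5433, 'args': (), 'timeout': 10, 'opts': {'retries': 3}}
print(ba.args, ba.kwargs)                    # ('db.local', 5433) {'timeout': 10, 'retries': 3}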
+ +This module encapsulates the interface provided by the internal special +attributes (co_*, im_*, tb_*, etc.) in a friendlier fashion. +It also provides some help for examining source code and class layout. + +Here are some of the useful functions provided by this module: + + ismodule(), isclass(), ismethod(), isfunction(), isgeneratorfunction(), + isgenerator(), istraceback(), isframe(), iscode(), isbuiltin(), + isroutine() - check object types + getmembers() - get members of an object that satisfy a given condition + + getfile(), getsourcefile(), getsource() - find an object's source code + getdoc(), getcomments() - get documentation on an object + getmodule() - determine the module that an object came from + getclasstree() - arrange classes so as to represent their hierarchy + + getargvalues(), getcallargs() - get info about function arguments + getfullargspec() - same, with support for Python 3 features + formatargvalues() - format an argument spec + getouterframes(), getinnerframes() - get info about frames + currentframe() - get the current stack frame + stack(), trace() - get info about frames on the stack or in a traceback + + signature() - get a Signature object for the callable +'b'Ka-Ping Yee 'u'Ka-Ping Yee 'b'Yury Selivanov 'u'Yury Selivanov 'b'CO_'u'CO_'b'Return true if the object is a module. + + Module objects provide these attributes: + __cached__ pathname to byte compiled file + __doc__ documentation string + __file__ filename (missing for built-in modules)'u'Return true if the object is a module. + + Module objects provide these attributes: + __cached__ pathname to byte compiled file + __doc__ documentation string + __file__ filename (missing for built-in modules)'b'Return true if the object is a class. + + Class objects provide these attributes: + __doc__ documentation string + __module__ name of module in which this class was defined'u'Return true if the object is a class. + + Class objects provide these attributes: + __doc__ documentation string + __module__ name of module in which this class was defined'b'Return true if the object is an instance method. + + Instance method objects provide these attributes: + __doc__ documentation string + __name__ name with which this method was defined + __func__ function object containing implementation of method + __self__ instance to which this method is bound'u'Return true if the object is an instance method. + + Instance method objects provide these attributes: + __doc__ documentation string + __name__ name with which this method was defined + __func__ function object containing implementation of method + __self__ instance to which this method is bound'b'Return true if the object is a method descriptor. + + But not if ismethod() or isclass() or isfunction() are true. + + This is new in Python 2.2, and, for example, is true of int.__add__. + An object passing this test has a __get__ attribute but not a __set__ + attribute, but beyond that the set of attributes varies. __name__ is + usually sensible, and __doc__ often is. + + Methods implemented via descriptors that also pass one of the other + tests return false from the ismethoddescriptor() test, simply because + the other tests promise more -- you can, e.g., count on having the + __func__ attribute (etc) when an object passes ismethod().'u'Return true if the object is a method descriptor. + + But not if ismethod() or isclass() or isfunction() are true. + + This is new in Python 2.2, and, for example, is true of int.__add__. 
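A quick, hypothetical demonstration of the object-type predicates documented above:

import inspect, math

class Box:
    def open(self):
        pass

print(inspect.ismodule(math))          # True
print(inspect.isclass(Box))            # True
print(inspect.ismethod(Box().open))    # True: bound instance method
print(inspect.ismethod(Box.open))      # False: plain function accessed on the class
print(inspect.isfunction(Box.open))    # True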
+ An object passing this test has a __get__ attribute but not a __set__ + attribute, but beyond that the set of attributes varies. __name__ is + usually sensible, and __doc__ often is. + + Methods implemented via descriptors that also pass one of the other + tests return false from the ismethoddescriptor() test, simply because + the other tests promise more -- you can, e.g., count on having the + __func__ attribute (etc) when an object passes ismethod().'b'Return true if the object is a data descriptor. + + Data descriptors have a __set__ or a __delete__ attribute. Examples are + properties (defined in Python) and getsets and members (defined in C). + Typically, data descriptors will also have __name__ and __doc__ attributes + (properties, getsets, and members have both of these attributes), but this + is not guaranteed.'u'Return true if the object is a data descriptor. + + Data descriptors have a __set__ or a __delete__ attribute. Examples are + properties (defined in Python) and getsets and members (defined in C). + Typically, data descriptors will also have __name__ and __doc__ attributes + (properties, getsets, and members have both of these attributes), but this + is not guaranteed.'b'MemberDescriptorType'u'MemberDescriptorType'b'Return true if the object is a member descriptor. + + Member descriptors are specialized descriptors defined in extension + modules.'u'Return true if the object is a member descriptor. + + Member descriptors are specialized descriptors defined in extension + modules.'b'GetSetDescriptorType'u'GetSetDescriptorType'b'Return true if the object is a getset descriptor. + + getset descriptors are specialized descriptors defined in extension + modules.'u'Return true if the object is a getset descriptor. + + getset descriptors are specialized descriptors defined in extension + modules.'b'Return true if the object is a user-defined function. + + Function objects provide these attributes: + __doc__ documentation string + __name__ name with which this function was defined + __code__ code object containing compiled function bytecode + __defaults__ tuple of any default values for arguments + __globals__ global namespace in which this function was defined + __annotations__ dict of parameter annotations + __kwdefaults__ dict of keyword only parameters with defaults'u'Return true if the object is a user-defined function. + + Function objects provide these attributes: + __doc__ documentation string + __name__ name with which this function was defined + __code__ code object containing compiled function bytecode + __defaults__ tuple of any default values for arguments + __globals__ global namespace in which this function was defined + __annotations__ dict of parameter annotations + __kwdefaults__ dict of keyword only parameters with defaults'b'Return true if ``f`` is a function (or a method or functools.partial + wrapper wrapping a function) whose code object has the given ``flag`` + set in its flags.'u'Return true if ``f`` is a function (or a method or functools.partial + wrapper wrapping a function) whose code object has the given ``flag`` + set in its flags.'b'Return true if the object is a user-defined generator function. + + Generator function objects provide the same attributes as functions. + See help(isfunction) for a list of attributes.'u'Return true if the object is a user-defined generator function. + + Generator function objects provide the same attributes as functions. 
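The descriptor-related predicates can be checked against a property and a slot-style member; this sketch uses made-up names:

import inspect

class Sensor:
    __slots__ = ("raw",)          # creates a member descriptor for 'raw'

    @property
    def value(self):
        return 42

prop = vars(Sensor)["value"]
print(inspect.isdatadescriptor(prop))                    # True: property defines __set__/__delete__
print(inspect.ismethoddescriptor(prop))                  # False: the data-descriptor test wins
print(inspect.ismemberdescriptor(vars(Sensor)["raw"]))   # True: defined via __slots__

def counter():
    yield 1

print(inspect.isfunction(counter), inspect.isgeneratorfunction(counter))   # True True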
+ See help(isfunction) for a list of attributes.'b'Return true if the object is a coroutine function. + + Coroutine functions are defined with "async def" syntax. + 'u'Return true if the object is a coroutine function. + + Coroutine functions are defined with "async def" syntax. + 'b'Return true if the object is an asynchronous generator function. + + Asynchronous generator functions are defined with "async def" + syntax and have "yield" expressions in their body. + 'u'Return true if the object is an asynchronous generator function. + + Asynchronous generator functions are defined with "async def" + syntax and have "yield" expressions in their body. + 'b'Return true if the object is an asynchronous generator.'u'Return true if the object is an asynchronous generator.'b'Return true if the object is a generator. + + Generator objects provide these attributes: + __iter__ defined to support iteration over container + close raises a new GeneratorExit exception inside the + generator to terminate the iteration + gi_code code object + gi_frame frame object or possibly None once the generator has + been exhausted + gi_running set to 1 when generator is executing, 0 otherwise + next return the next item from the container + send resumes the generator and "sends" a value that becomes + the result of the current yield-expression + throw used to raise an exception inside the generator'u'Return true if the object is a generator. + + Generator objects provide these attributes: + __iter__ defined to support iteration over container + close raises a new GeneratorExit exception inside the + generator to terminate the iteration + gi_code code object + gi_frame frame object or possibly None once the generator has + been exhausted + gi_running set to 1 when generator is executing, 0 otherwise + next return the next item from the container + send resumes the generator and "sends" a value that becomes + the result of the current yield-expression + throw used to raise an exception inside the generator'b'Return true if the object is a coroutine.'u'Return true if the object is a coroutine.'b'Return true if object can be passed to an ``await`` expression.'u'Return true if object can be passed to an ``await`` expression.'b'Return true if the object is a traceback. + + Traceback objects provide these attributes: + tb_frame frame object at this level + tb_lasti index of last attempted instruction in bytecode + tb_lineno current line number in Python source code + tb_next next inner traceback object (called by this level)'u'Return true if the object is a traceback. + + Traceback objects provide these attributes: + tb_frame frame object at this level + tb_lasti index of last attempted instruction in bytecode + tb_lineno current line number in Python source code + tb_next next inner traceback object (called by this level)'b'Return true if the object is a frame object. + + Frame objects provide these attributes: + f_back next outer frame object (this frame's caller) + f_builtins built-in namespace seen by this frame + f_code code object being executed in this frame + f_globals global namespace seen by this frame + f_lasti index of last attempted instruction in bytecode + f_lineno current line number in Python source code + f_locals local namespace seen by this frame + f_trace tracing function for this frame, or None'u'Return true if the object is a frame object. 
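The coroutine and async-generator predicates described above, on small made-up examples:

import inspect

async def fetch():
    return 42

async def stream():
    yield 1

print(inspect.iscoroutinefunction(fetch))    # True
print(inspect.isasyncgenfunction(stream))    # True

coro = fetch()
print(inspect.iscoroutine(coro))             # True
print(inspect.isawaitable(coro))             # True
coro.close()                                 # silence the "never awaited" warning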
+ + Frame objects provide these attributes: + f_back next outer frame object (this frame's caller) + f_builtins built-in namespace seen by this frame + f_code code object being executed in this frame + f_globals global namespace seen by this frame + f_lasti index of last attempted instruction in bytecode + f_lineno current line number in Python source code + f_locals local namespace seen by this frame + f_trace tracing function for this frame, or None'b'Return true if the object is a code object. + + Code objects provide these attributes: + co_argcount number of arguments (not including *, ** args + or keyword only arguments) + co_code string of raw compiled bytecode + co_cellvars tuple of names of cell variables + co_consts tuple of constants used in the bytecode + co_filename name of file in which this code object was created + co_firstlineno number of first line in Python source code + co_flags bitmap: 1=optimized | 2=newlocals | 4=*arg | 8=**arg + | 16=nested | 32=generator | 64=nofree | 128=coroutine + | 256=iterable_coroutine | 512=async_generator + co_freevars tuple of names of free variables + co_posonlyargcount number of positional only arguments + co_kwonlyargcount number of keyword only arguments (not including ** arg) + co_lnotab encoded mapping of line numbers to bytecode indices + co_name name with which this code object was defined + co_names tuple of names of local variables + co_nlocals number of local variables + co_stacksize virtual machine stack space required + co_varnames tuple of names of arguments and local variables'u'Return true if the object is a code object. + + Code objects provide these attributes: + co_argcount number of arguments (not including *, ** args + or keyword only arguments) + co_code string of raw compiled bytecode + co_cellvars tuple of names of cell variables + co_consts tuple of constants used in the bytecode + co_filename name of file in which this code object was created + co_firstlineno number of first line in Python source code + co_flags bitmap: 1=optimized | 2=newlocals | 4=*arg | 8=**arg + | 16=nested | 32=generator | 64=nofree | 128=coroutine + | 256=iterable_coroutine | 512=async_generator + co_freevars tuple of names of free variables + co_posonlyargcount number of positional only arguments + co_kwonlyargcount number of keyword only arguments (not including ** arg) + co_lnotab encoded mapping of line numbers to bytecode indices + co_name name with which this code object was defined + co_names tuple of names of local variables + co_nlocals number of local variables + co_stacksize virtual machine stack space required + co_varnames tuple of names of arguments and local variables'b'Return true if the object is a built-in function or method. + + Built-in functions and methods provide these attributes: + __doc__ documentation string + __name__ original name of this function or method + __self__ instance to which a method is bound, or None'u'Return true if the object is a built-in function or method. + + Built-in functions and methods provide these attributes: + __doc__ documentation string + __name__ original name of this function or method + __self__ instance to which a method is bound, or None'b'Return true if the object is any kind of function or method.'u'Return true if the object is any kind of function or method.'b'Return true if the object is an abstract base class (ABC).'u'Return true if the object is an abstract base class (ABC).'b'Return all members of an object as (name, value) pairs sorted by name. 
+ Optionally, only return members that satisfy a given predicate.'u'Return all members of an object as (name, value) pairs sorted by name. + Optionally, only return members that satisfy a given predicate.'b'Attribute'u'Attribute'b'name kind defining_class object'u'name kind defining_class object'b'Return list of attribute-descriptor tuples. + + For each name in dir(cls), the return list contains a 4-tuple + with these elements: + + 0. The name (a string). + + 1. The kind of attribute this is, one of these strings: + 'class method' created via classmethod() + 'static method' created via staticmethod() + 'property' created via property() + 'method' any other flavor of method or descriptor + 'data' not a method + + 2. The class which defined this attribute (a class). + + 3. The object as obtained by calling getattr; if this fails, or if the + resulting object does not live anywhere in the class' mro (including + metaclasses) then the object is looked up in the defining class's + dict (found by walking the mro). + + If one of the items in dir(cls) is stored in the metaclass it will now + be discovered and not have None be listed as the class in which it was + defined. Any items whose home class cannot be discovered are skipped. + 'u'Return list of attribute-descriptor tuples. + + For each name in dir(cls), the return list contains a 4-tuple + with these elements: + + 0. The name (a string). + + 1. The kind of attribute this is, one of these strings: + 'class method' created via classmethod() + 'static method' created via staticmethod() + 'property' created via property() + 'method' any other flavor of method or descriptor + 'data' not a method + + 2. The class which defined this attribute (a class). + + 3. The object as obtained by calling getattr; if this fails, or if the + resulting object does not live anywhere in the class' mro (including + metaclasses) then the object is looked up in the defining class's + dict (found by walking the mro). + + If one of the items in dir(cls) is stored in the metaclass it will now + be discovered and not have None be listed as the class in which it was + defined. Any items whose home class cannot be discovered are skipped. + 'b'__dict__ is special, don't want the proxy'u'__dict__ is special, don't want the proxy'b'static method'u'static method'b'class method'u'class method'b'property'u'property'b'method'u'method'b'Return tuple of base classes (including cls) in method resolution order.'u'Return tuple of base classes (including cls) in method resolution order.'b'Get the object wrapped by *func*. + + Follows the chain of :attr:`__wrapped__` attributes returning the last + object in the chain. + + *stop* is an optional callback accepting an object in the wrapper chain + as its sole argument that allows the unwrapping to be terminated early if + the callback returns a true value. If the callback never returns a true + value, the last object in the chain is returned as usual. For example, + :func:`signature` uses this to stop unwrapping if any object in the + chain has a ``__signature__`` attribute defined. + + :exc:`ValueError` is raised if a cycle is encountered. + + 'u'Get the object wrapped by *func*. + + Follows the chain of :attr:`__wrapped__` attributes returning the last + object in the chain. + + *stop* is an optional callback accepting an object in the wrapper chain + as its sole argument that allows the unwrapping to be terminated early if + the callback returns a true value. 
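getmembers() and getmro() as documented above, on a hypothetical two-class hierarchy:

import inspect

class Base:
    def ping(self):
        pass

class Child(Base):
    value = 1

    def pong(self):
        pass

# (name, value) pairs sorted by name, filtered by a predicate.
print([name for name, _ in inspect.getmembers(Child, inspect.isfunction)])
# ['ping', 'pong']

# Method resolution order, most derived class first.
print([cls.__name__ for cls in inspect.getmro(Child)])
# ['Child', 'Base', 'object']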
If the callback never returns a true + value, the last object in the chain is returned as usual. For example, + :func:`signature` uses this to stop unwrapping if any object in the + chain has a ``__signature__`` attribute defined. + + :exc:`ValueError` is raised if a cycle is encountered. + + 'b'__wrapped__'u'__wrapped__'b'wrapper loop when unwrapping {!r}'u'wrapper loop when unwrapping {!r}'b'Return the indent size, in spaces, at the start of a line of text.'u'Return the indent size, in spaces, at the start of a line of text.'b'Get the documentation string for an object. + + All tabs are expanded to spaces. To clean up docstrings that are + indented to line up with blocks of code, any whitespace than can be + uniformly removed from the second line onwards is removed.'u'Get the documentation string for an object. + + All tabs are expanded to spaces. To clean up docstrings that are + indented to line up with blocks of code, any whitespace than can be + uniformly removed from the second line onwards is removed.'b'Clean up indentation from docstrings. + + Any whitespace that can be uniformly removed from the second line + onwards is removed.'u'Clean up indentation from docstrings. + + Any whitespace that can be uniformly removed from the second line + onwards is removed.'b'Work out which source or compiled file an object was defined in.'u'Work out which source or compiled file an object was defined in.'b'{!r} is a built-in module'u'{!r} is a built-in module'b'{!r} is a built-in class'u'{!r} is a built-in class'b'module, class, method, function, traceback, frame, or code object was expected, got {}'u'module, class, method, function, traceback, frame, or code object was expected, got {}'b'Return the module name for a given file, or None.'u'Return the module name for a given file, or None.'b'Return the filename that can be used to locate an object's source. + Return None if no way can be identified to get the source. + 'u'Return the filename that can be used to locate an object's source. + Return None if no way can be identified to get the source. + 'b'Return an absolute path to the source or compiled file for an object. + + The idea is for each object to have a unique origin, so this routine + normalizes the result as much as possible.'u'Return an absolute path to the source or compiled file for an object. + + The idea is for each object to have a unique origin, so this routine + normalizes the result as much as possible.'b'Return the module an object was defined in, or None if not found.'u'Return the module an object was defined in, or None if not found.'b'Return the entire source file and starting line number for an object. + + The argument may be a module, class, method, function, traceback, frame, + or code object. The source code is returned as a list of all the lines + in the file and the line number indexes a line in that list. An OSError + is raised if the source code cannot be retrieved.'u'Return the entire source file and starting line number for an object. + + The argument may be a module, class, method, function, traceback, frame, + or code object. The source code is returned as a list of all the lines + in the file and the line number indexes a line in that list. 
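unwrap() and getdoc() behave as described; the decorator below is a made-up illustration:

import functools, inspect

def traced(func):
    @functools.wraps(func)            # records the original as wrapper.__wrapped__
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@traced
def add(a, b):
    """Add two numbers.

        The uniform indentation of continuation lines is stripped by getdoc().
    """
    return a + b

# unwrap() follows the __wrapped__ chain back to the original function.
print(inspect.unwrap(add).__name__, inspect.unwrap(add) is add.__wrapped__)   # add True
# getdoc() expands tabs and removes the common leading whitespace.
print(inspect.getdoc(add).splitlines()[-1])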
An OSError + is raised if the source code cannot be retrieved.'b'source code not available'u'source code not available'b'could not get source code'u'could not get source code'b'^(\s*)class\s*'u'^(\s*)class\s*'b'\b'u'\b'b'could not find class definition'u'could not find class definition'b'could not find function definition'u'could not find function definition'b'^(\s*def\s)|(\s*async\s+def\s)|(.*(?'u''b'Return the text of the source code for an object. + + The argument may be a module, class, method, function, traceback, frame, + or code object. The source code is returned as a single string. An + OSError is raised if the source code cannot be retrieved.'u'Return the text of the source code for an object. + + The argument may be a module, class, method, function, traceback, frame, + or code object. The source code is returned as a single string. An + OSError is raised if the source code cannot be retrieved.'b'Recursive helper function for getclasstree().'u'Recursive helper function for getclasstree().'b'Arrange the given list of classes into a hierarchy of nested lists. + + Where a nested list appears, it contains classes derived from the class + whose entry immediately precedes the list. Each entry is a 2-tuple + containing a class and a tuple of its base classes. If the 'unique' + argument is true, exactly one entry appears in the returned structure + for each class in the given list. Otherwise, classes using multiple + inheritance and their descendants will appear multiple times.'u'Arrange the given list of classes into a hierarchy of nested lists. + + Where a nested list appears, it contains classes derived from the class + whose entry immediately precedes the list. Each entry is a 2-tuple + containing a class and a tuple of its base classes. If the 'unique' + argument is true, exactly one entry appears in the returned structure + for each class in the given list. Otherwise, classes using multiple + inheritance and their descendants will appear multiple times.'b'Arguments'u'Arguments'b'args, varargs, varkw'u'args, varargs, varkw'b'Get information about the arguments accepted by a code object. + + Three things are returned: (args, varargs, varkw), where + 'args' is the list of argument names. Keyword-only arguments are + appended. 'varargs' and 'varkw' are the names of the * and ** + arguments or None.'u'Get information about the arguments accepted by a code object. + + Three things are returned: (args, varargs, varkw), where + 'args' is the list of argument names. Keyword-only arguments are + appended. 'varargs' and 'varkw' are the names of the * and ** + arguments or None.'b'{!r} is not a code object'u'{!r} is not a code object'b'ArgSpec'u'ArgSpec'b'args varargs keywords defaults'u'args varargs keywords defaults'b'Get the names and default values of a function's parameters. + + A tuple of four things is returned: (args, varargs, keywords, defaults). + 'args' is a list of the argument names, including keyword-only argument names. + 'varargs' and 'keywords' are the names of the * and ** parameters or None. + 'defaults' is an n-tuple of the default values of the last n parameters. + + This function is deprecated, as it does not support annotations or + keyword-only parameters and will raise ValueError if either is present + on the supplied callable. + + For a more structured introspection API, use inspect.signature() instead. + + Alternatively, use getfullargspec() for an API with a similar namedtuple + based interface, but full support for annotations and keyword-only + parameters. 
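A small, hypothetical hierarchy run through getclasstree(), which nests derived classes under the entry for their base:

import inspect

class A: pass
class B(A): pass
class C(A): pass

tree = inspect.getclasstree([A, B, C])
# Each entry is a (class, bases) tuple; a nested list follows the class it
# derives from, so B and C appear in a sub-list after the entry for A.
print(tree)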
+ + Deprecated since Python 3.5, use `inspect.getfullargspec()`. + 'u'Get the names and default values of a function's parameters. + + A tuple of four things is returned: (args, varargs, keywords, defaults). + 'args' is a list of the argument names, including keyword-only argument names. + 'varargs' and 'keywords' are the names of the * and ** parameters or None. + 'defaults' is an n-tuple of the default values of the last n parameters. + + This function is deprecated, as it does not support annotations or + keyword-only parameters and will raise ValueError if either is present + on the supplied callable. + + For a more structured introspection API, use inspect.signature() instead. + + Alternatively, use getfullargspec() for an API with a similar namedtuple + based interface, but full support for annotations and keyword-only + parameters. + + Deprecated since Python 3.5, use `inspect.getfullargspec()`. + 'b'inspect.getargspec() is deprecated since Python 3.0, use inspect.signature() or inspect.getfullargspec()'u'inspect.getargspec() is deprecated since Python 3.0, use inspect.signature() or inspect.getfullargspec()'b'Function has keyword-only parameters or annotations, use inspect.signature() API which can support them'u'Function has keyword-only parameters or annotations, use inspect.signature() API which can support them'b'FullArgSpec'u'FullArgSpec'b'args, varargs, varkw, defaults, kwonlyargs, kwonlydefaults, annotations'u'args, varargs, varkw, defaults, kwonlyargs, kwonlydefaults, annotations'b'Get the names and default values of a callable object's parameters. + + A tuple of seven things is returned: + (args, varargs, varkw, defaults, kwonlyargs, kwonlydefaults, annotations). + 'args' is a list of the parameter names. + 'varargs' and 'varkw' are the names of the * and ** parameters or None. + 'defaults' is an n-tuple of the default values of the last n parameters. + 'kwonlyargs' is a list of keyword-only parameter names. + 'kwonlydefaults' is a dictionary mapping names from kwonlyargs to defaults. + 'annotations' is a dictionary mapping parameter names to annotations. + + Notable differences from inspect.signature(): + - the "self" parameter is always reported, even for bound methods + - wrapper chains defined by __wrapped__ *not* unwrapped automatically + 'u'Get the names and default values of a callable object's parameters. + + A tuple of seven things is returned: + (args, varargs, varkw, defaults, kwonlyargs, kwonlydefaults, annotations). + 'args' is a list of the parameter names. + 'varargs' and 'varkw' are the names of the * and ** parameters or None. + 'defaults' is an n-tuple of the default values of the last n parameters. + 'kwonlyargs' is a list of keyword-only parameter names. + 'kwonlydefaults' is a dictionary mapping names from kwonlyargs to defaults. + 'annotations' is a dictionary mapping parameter names to annotations. + + Notable differences from inspect.signature(): + - the "self" parameter is always reported, even for bound methods + - wrapper chains defined by __wrapped__ *not* unwrapped automatically + 'b'unsupported callable'u'unsupported callable'b'ArgInfo'u'ArgInfo'b'args varargs keywords locals'u'args varargs keywords locals'b'Get information about arguments passed into a particular frame. + + A tuple of four things is returned: (args, varargs, varkw, locals). + 'args' is a list of the argument names. + 'varargs' and 'varkw' are the names of the * and ** arguments or None. 
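The FullArgSpec named tuple described above, for a made-up function using every parameter flavour:

import inspect

def resize(image, width, height=None, *extra, keep_aspect=True, **render_opts):
    pass

spec = inspect.getfullargspec(resize)
print(spec.args)             # ['image', 'width', 'height']
print(spec.varargs)          # 'extra'
print(spec.defaults)         # (None,)
print(spec.kwonlyargs)       # ['keep_aspect']
print(spec.kwonlydefaults)   # {'keep_aspect': True}
print(spec.varkw)            # 'render_opts'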
+ 'locals' is the locals dictionary of the given frame.'u'Get information about arguments passed into a particular frame. + + A tuple of four things is returned: (args, varargs, varkw, locals). + 'args' is a list of the argument names. + 'varargs' and 'varkw' are the names of the * and ** arguments or None. + 'locals' is the locals dictionary of the given frame.'b'typing'u'typing'b'typing.'u'typing.'b' -> 'u' -> 'b'Format an argument spec from the values returned by getfullargspec. + + The first seven arguments are (args, varargs, varkw, defaults, + kwonlyargs, kwonlydefaults, annotations). The other five arguments + are the corresponding optional formatting functions that are called to + turn names and values into strings. The last argument is an optional + function to format the sequence of arguments. + + Deprecated since Python 3.5: use the `signature` function and `Signature` + objects. + 'u'Format an argument spec from the values returned by getfullargspec. + + The first seven arguments are (args, varargs, varkw, defaults, + kwonlyargs, kwonlydefaults, annotations). The other five arguments + are the corresponding optional formatting functions that are called to + turn names and values into strings. The last argument is an optional + function to format the sequence of arguments. + + Deprecated since Python 3.5: use the `signature` function and `Signature` + objects. + 'b'`formatargspec` is deprecated since Python 3.5. Use `signature` and the `Signature` object directly'u'`formatargspec` is deprecated since Python 3.5. Use `signature` and the `Signature` object directly'b'Format an argument spec from the 4 values returned by getargvalues. + + The first four arguments are (args, varargs, varkw, locals). The + next four arguments are the corresponding optional formatting functions + that are called to turn names and values into strings. The ninth + argument is an optional function to format the sequence of arguments.'u'Format an argument spec from the 4 values returned by getargvalues. + + The first four arguments are (args, varargs, varkw, locals). The + next four arguments are the corresponding optional formatting functions + that are called to turn names and values into strings. The ninth + argument is an optional function to format the sequence of arguments.'b'{} and {}'u'{} and {}'b', {} and {}'u', {} and {}'b'%s() missing %i required %s argument%s: %s'u'%s() missing %i required %s argument%s: %s'b'positional'u'positional'b'keyword-only'u'keyword-only'b'at least %d'u'at least %d'b'from %d to %d'u'from %d to %d'b' positional argument%s (and %d keyword-only argument%s)'u' positional argument%s (and %d keyword-only argument%s)'b'%s() takes %s positional argument%s but %d%s %s given'u'%s() takes %s positional argument%s but %d%s %s given'b'was'u'was'b'were'u'were'b'Get the mapping of arguments to values. + + A dict is returned, with keys the function argument names (including the + names of the * and ** arguments, if any), and values the respective bound + values from 'positional' and 'named'.'u'Get the mapping of arguments to values. 
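getcallargs() maps a concrete call onto parameter names without invoking the function; `render` here is hypothetical:

import inspect

def render(template, *values, sep=", "):
    pass

print(inspect.getcallargs(render, "row: {}", 1, 2, 3))
# {'template': 'row: {}', 'values': (1, 2, 3), 'sep': ', '}

# Mistakes are reported with the error messages listed above.
try:
    inspect.getcallargs(render, "row: {}", template="again")
except TypeError as exc:
    print(exc)               # render() got multiple values for argument 'template'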
+ + A dict is returned, with keys the function argument names (including the + names of the * and ** arguments, if any), and values the respective bound + values from 'positional' and 'named'.'b'%s() got an unexpected keyword argument %r'u'%s() got an unexpected keyword argument %r'b'%s() got multiple values for argument %r'u'%s() got multiple values for argument %r'b'ClosureVars'u'ClosureVars'b'nonlocals globals builtins unbound'u'nonlocals globals builtins unbound'b' + Get the mapping of free variables to their current values. + + Returns a named tuple of dicts mapping the current nonlocal, global + and builtin references as seen by the body of the function. A final + set of unbound names that could not be resolved is also provided. + 'u' + Get the mapping of free variables to their current values. + + Returns a named tuple of dicts mapping the current nonlocal, global + and builtin references as seen by the body of the function. A final + set of unbound names that could not be resolved is also provided. + 'b'{!r} is not a Python function'u'{!r} is not a Python function'b'__builtins__'u'__builtins__'b'True'u'True'b'False'u'False'b'Traceback'u'Traceback'b'filename lineno function code_context index'u'filename lineno function code_context index'b'Get information about a frame or traceback object. + + A tuple of five things is returned: the filename, the line number of + the current line, the function name, a list of lines of context from + the source code, and the index of the current line within that list. + The optional second argument specifies the number of lines of context + to return, which are centered around the current line.'u'Get information about a frame or traceback object. + + A tuple of five things is returned: the filename, the line number of + the current line, the function name, a list of lines of context from + the source code, and the index of the current line within that list. + The optional second argument specifies the number of lines of context + to return, which are centered around the current line.'b'{!r} is not a frame or traceback object'u'{!r} is not a frame or traceback object'b'Get the line number from a frame object, allowing for optimization.'u'Get the line number from a frame object, allowing for optimization.'b'FrameInfo'u'FrameInfo'b'Get a list of records for a frame and all higher (calling) frames. + + Each record contains a frame object, filename, line number, function + name, a list of lines of context, and index within the context.'u'Get a list of records for a frame and all higher (calling) frames. + + Each record contains a frame object, filename, line number, function + name, a list of lines of context, and index within the context.'b'Get a list of records for a traceback's frame and all lower frames. + + Each record contains a frame object, filename, line number, function + name, a list of lines of context, and index within the context.'u'Get a list of records for a traceback's frame and all lower frames. 
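getclosurevars() splits the names a function body refers to into nonlocal, global and builtin groups, as described above; the counter factory is made up:

import inspect

LIMIT = 10                       # global reference seen from the closure body

def make_counter(start):
    count = start                # captured as a nonlocal in __closure__
    def bump():
        return min(count + 1, LIMIT)   # 'min' resolves to a builtin
    return bump

cv = inspect.getclosurevars(make_counter(3))
print(cv.nonlocals)              # {'count': 3}
print(cv.globals)                # {'LIMIT': 10}
print('min' in cv.builtins)      # True
print(cv.unbound)                # set() -- every name was resolved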
+ + Each record contains a frame object, filename, line number, function + name, a list of lines of context, and index within the context.'b'Return the frame of the caller or None if this is not possible.'u'Return the frame of the caller or None if this is not possible.'b'Return a list of records for the stack above the caller's frame.'u'Return a list of records for the stack above the caller's frame.'b'Return a list of records for the stack below the current exception.'u'Return a list of records for the stack below the current exception.'b'Retrieve attributes without triggering dynamic lookup via the + descriptor protocol, __getattr__ or __getattribute__. + + Note: this function may not be able to retrieve all attributes + that getattr can fetch (like dynamically created attributes) + and may find attributes that getattr can't (like descriptors + that raise AttributeError). It can also return descriptor objects + instead of instance members in some cases. See the + documentation for details. + 'u'Retrieve attributes without triggering dynamic lookup via the + descriptor protocol, __getattr__ or __getattribute__. + + Note: this function may not be able to retrieve all attributes + that getattr can fetch (like dynamically created attributes) + and may find attributes that getattr can't (like descriptors + that raise AttributeError). It can also return descriptor objects + instead of instance members in some cases. See the + documentation for details. + 'b'GEN_CREATED'u'GEN_CREATED'b'GEN_RUNNING'u'GEN_RUNNING'b'GEN_SUSPENDED'u'GEN_SUSPENDED'b'GEN_CLOSED'u'GEN_CLOSED'b'Get current state of a generator-iterator. + + Possible states are: + GEN_CREATED: Waiting to start execution. + GEN_RUNNING: Currently being executed by the interpreter. + GEN_SUSPENDED: Currently suspended at a yield expression. + GEN_CLOSED: Execution has completed. + 'u'Get current state of a generator-iterator. + + Possible states are: + GEN_CREATED: Waiting to start execution. + GEN_RUNNING: Currently being executed by the interpreter. + GEN_SUSPENDED: Currently suspended at a yield expression. + GEN_CLOSED: Execution has completed. + 'b' + Get the mapping of generator local variables to their current values. + + A dict is returned, with the keys the local variable names and values the + bound values.'u' + Get the mapping of generator local variables to their current values. + + A dict is returned, with the keys the local variable names and values the + bound values.'b'{!r} is not a Python generator'u'{!r} is not a Python generator'b'CORO_CREATED'u'CORO_CREATED'b'CORO_RUNNING'u'CORO_RUNNING'b'CORO_SUSPENDED'u'CORO_SUSPENDED'b'CORO_CLOSED'u'CORO_CLOSED'b'Get current state of a coroutine object. + + Possible states are: + CORO_CREATED: Waiting to start execution. + CORO_RUNNING: Currently being executed by the interpreter. + CORO_SUSPENDED: Currently suspended at an await expression. + CORO_CLOSED: Execution has completed. + 'u'Get current state of a coroutine object. + + Possible states are: + CORO_CREATED: Waiting to start execution. + CORO_RUNNING: Currently being executed by the interpreter. + CORO_SUSPENDED: Currently suspended at an await expression. + CORO_CLOSED: Execution has completed. + 'b' + Get the mapping of coroutine local variables to their current values. + + A dict is returned, with the keys the local variable names and values the + bound values.'u' + Get the mapping of coroutine local variables to their current values. 
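Generator introspection as documented above; `ticker` is a made-up generator:

import inspect

def ticker(limit):
    n = 0
    while n < limit:
        yield n
        n += 1

gen = ticker(3)
print(inspect.getgeneratorstate(gen))    # GEN_CREATED
next(gen)
print(inspect.getgeneratorstate(gen))    # GEN_SUSPENDED
print(inspect.getgeneratorlocals(gen))   # {'limit': 3, 'n': 0}
gen.close()
print(inspect.getgeneratorstate(gen))    # GEN_CLOSED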
+ + A dict is returned, with the keys the local variable names and values the + bound values.'b'from_bytes'u'from_bytes'b'Private helper. Checks if ``cls`` has an attribute + named ``method_name`` and returns it only if it is a + pure python function. + 'u'Private helper. Checks if ``cls`` has an attribute + named ``method_name`` and returns it only if it is a + pure python function. + 'b'Private helper to calculate how 'wrapped_sig' signature will + look like after applying a 'functools.partial' object (or alike) + on it. + 'u'Private helper to calculate how 'wrapped_sig' signature will + look like after applying a 'functools.partial' object (or alike) + on it. + 'b'partial object {!r} has incorrect arguments'u'partial object {!r} has incorrect arguments'b'Private helper to transform signatures for unbound + functions to bound methods. + 'u'Private helper to transform signatures for unbound + functions to bound methods. + 'b'invalid method signature'u'invalid method signature'b'invalid argument type'u'invalid argument type'b'Private helper to test if `obj` is a callable that might + support Argument Clinic's __text_signature__ protocol. + 'u'Private helper to test if `obj` is a callable that might + support Argument Clinic's __text_signature__ protocol. + 'b'Private helper to test if `obj` is a duck type of FunctionType. + A good example of such objects are functions compiled with + Cython, which have all attributes that a pure Python function + would have, but have their code statically compiled. + 'u'Private helper to test if `obj` is a duck type of FunctionType. + A good example of such objects are functions compiled with + Cython, which have all attributes that a pure Python function + would have, but have their code statically compiled. + 'b'__defaults__'u'__defaults__'b'__kwdefaults__'u'__kwdefaults__'b' Private helper to get first parameter name from a + __text_signature__ of a builtin method, which should + be in the following format: '($param1, ...)'. + Assumptions are that the first argument won't have + a default value or an annotation. + 'u' Private helper to get first parameter name from a + __text_signature__ of a builtin method, which should + be in the following format: '($param1, ...)'. + Assumptions are that the first argument won't have + a default value or an annotation. + 'b'($'u'($'b' + Private helper function. Takes a signature in Argument Clinic's + extended signature format. + + Returns a tuple of three things: + * that signature re-rendered in standard Python syntax, + * the index of the "self" parameter (generally 0), or None if + the function does not have a "self" parameter, and + * the index of the last "positional only" parameter, + or None if the signature has no positional-only parameters. + 'u' + Private helper function. Takes a signature in Argument Clinic's + extended signature format. + + Returns a tuple of three things: + * that signature re-rendered in standard Python syntax, + * the index of the "self" parameter (generally 0), or None if + the function does not have a "self" parameter, and + * the index of the last "positional only" parameter, + or None if the signature has no positional-only parameters. + 'b'Private helper to parse content of '__text_signature__' + and return a Signature based on it. + 'u'Private helper to parse content of '__text_signature__' + and return a Signature based on it. 
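The partial-object handling performed by the private helpers above is visible through signature(); `send` is hypothetical:

import inspect
from functools import partial

def send(channel, payload, *, retries=2):
    pass

# Binding a positional argument simply removes that parameter.
print(inspect.signature(partial(send, "alerts")))     # (payload, *, retries=2)

# Binding a keyword argument becomes the new default for that parameter.
print(inspect.signature(partial(send, retries=5)))    # (channel, payload, *, retries=5)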
+ 'b'def foo'u'def foo'b': pass'u': pass'b'{!r} builtin has invalid signature'u'{!r} builtin has invalid signature'b'Annotations are not currently supported'u'Annotations are not currently supported'b'Private helper function to get signature for + builtin callables. + 'u'Private helper function to get signature for + builtin callables. + 'b'{!r} is not a Python builtin function'u'{!r} is not a Python builtin function'b'__text_signature__'u'__text_signature__'b'no signature found for builtin {!r}'u'no signature found for builtin {!r}'b'Private helper: constructs Signature for the given python function.'u'Private helper: constructs Signature for the given python function.'b'Private helper function to get signature for arbitrary + callable objects. + 'u'Private helper function to get signature for arbitrary + callable objects. + 'b'{!r} is not a callable object'u'{!r} is not a callable object'b'__signature__'u'__signature__'b'unexpected object {!r} in __signature__ attribute'u'unexpected object {!r} in __signature__ attribute'b'no signature found for builtin type {!r}'u'no signature found for builtin type {!r}'b'no signature found for {!r}'u'no signature found for {!r}'b'no signature found for builtin function {!r}'u'no signature found for builtin function {!r}'b'callable {!r} is not supported by signature'u'callable {!r} is not supported by signature'b'A private marker - used in Parameter & Signature.'u'A private marker - used in Parameter & Signature.'b'Marker object for Signature.empty and Parameter.empty.'u'Marker object for Signature.empty and Parameter.empty.'b'positional-only'u'positional-only'b'positional or keyword'u'positional or keyword'b'variadic positional'u'variadic positional'b'variadic keyword'u'variadic keyword'b'Represents a parameter in a function signature. + + Has the following public attributes: + + * name : str + The name of the parameter as a string. + * default : object + The default value for the parameter if specified. If the + parameter has no default value, this attribute is set to + `Parameter.empty`. + * annotation + The annotation for the parameter if specified. If the + parameter has no annotation, this attribute is set to + `Parameter.empty`. + * kind : str + Describes how argument values are bound to the parameter. + Possible values: `Parameter.POSITIONAL_ONLY`, + `Parameter.POSITIONAL_OR_KEYWORD`, `Parameter.VAR_POSITIONAL`, + `Parameter.KEYWORD_ONLY`, `Parameter.VAR_KEYWORD`. + 'u'Represents a parameter in a function signature. + + Has the following public attributes: + + * name : str + The name of the parameter as a string. + * default : object + The default value for the parameter if specified. If the + parameter has no default value, this attribute is set to + `Parameter.empty`. + * annotation + The annotation for the parameter if specified. If the + parameter has no annotation, this attribute is set to + `Parameter.empty`. + * kind : str + Describes how argument values are bound to the parameter. + Possible values: `Parameter.POSITIONAL_ONLY`, + `Parameter.POSITIONAL_OR_KEYWORD`, `Parameter.VAR_POSITIONAL`, + `Parameter.KEYWORD_ONLY`, `Parameter.VAR_KEYWORD`. 
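As noted above, signature() consults a __signature__ attribute before falling back to other strategies, so a wrapper object can advertise the interface it forwards to; this sketch is hypothetical:

import inspect

def handler(event, context):
    pass

class Dispatcher:
    __signature__ = inspect.signature(handler)

    def __call__(self, *args, **kwargs):
        return handler(*args, **kwargs)

# Without __signature__ this would report (*args, **kwargs) from __call__.
print(inspect.signature(Dispatcher()))    # (event, context)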
+ 'b'_kind'u'_kind'b'_default'u'_default'b'_annotation'u'_annotation'b'value 'u'value 'b' is not a valid Parameter.kind'u' is not a valid Parameter.kind'b'{} parameters cannot have default values'u'{} parameters cannot have default values'b'name is a required attribute for Parameter'u'name is a required attribute for Parameter'b'name must be a str, not a {}'u'name must be a str, not a {}'b'implicit arguments must be passed as positional or keyword arguments, not {}'u'implicit arguments must be passed as positional or keyword arguments, not {}'b'implicit{}'u'implicit{}'b'{!r} is not a valid parameter name'u'{!r} is not a valid parameter name'b'Creates a customized copy of the Parameter.'u'Creates a customized copy of the Parameter.'b'{}: {}'u'{}: {}'b'{} = {}'u'{} = {}'b'{}={}'u'{}={}'b'<{} "{}">'u'<{} "{}">'b'Result of `Signature.bind` call. Holds the mapping of arguments + to the function's parameters. + + Has the following public attributes: + + * arguments : OrderedDict + An ordered mutable mapping of parameters' names to arguments' values. + Does not contain arguments' default values. + * signature : Signature + The Signature object that created this instance. + * args : tuple + Tuple of positional arguments values. + * kwargs : dict + Dict of keyword arguments values. + 'u'Result of `Signature.bind` call. Holds the mapping of arguments + to the function's parameters. + + Has the following public attributes: + + * arguments : OrderedDict + An ordered mutable mapping of parameters' names to arguments' values. + Does not contain arguments' default values. + * signature : Signature + The Signature object that created this instance. + * args : tuple + Tuple of positional arguments values. + * kwargs : dict + Dict of keyword arguments values. + 'b'arguments'u'arguments'b'_signature'u'_signature'b'Set default values for missing arguments. + + For variable-positional arguments (*args) the default is an + empty tuple. + + For variable-keyword arguments (**kwargs) the default is an + empty dict. + 'u'Set default values for missing arguments. + + For variable-positional arguments (*args) the default is an + empty tuple. + + For variable-keyword arguments (**kwargs) the default is an + empty dict. + 'b'<{} ({})>'u'<{} ({})>'b'A Signature object represents the overall signature of a function. + It stores a Parameter object for each parameter accepted by the + function, as well as information specific to the function itself. + + A Signature object has the following public attributes and methods: + + * parameters : OrderedDict + An ordered mapping of parameters' names to the corresponding + Parameter objects (keyword-only arguments are in the same order + as listed in `code.co_varnames`). + * return_annotation : object + The annotation for the return type of the function if specified. + If the function has no annotation for its return type, this + attribute is set to `Signature.empty`. + * bind(*args, **kwargs) -> BoundArguments + Creates a mapping from positional and keyword arguments to + parameters. + * bind_partial(*args, **kwargs) -> BoundArguments + Creates a partial mapping from positional and keyword arguments + to parameters (simulating 'functools.partial' behavior.) + 'u'A Signature object represents the overall signature of a function. + It stores a Parameter object for each parameter accepted by the + function, as well as information specific to the function itself. 
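Parameter and Signature objects can also be built directly, subject to the constraints spelled out above (kind ordering, defaults, unique names); a hypothetical example:

import inspect
from inspect import Parameter, Signature

params = [
    Parameter("path", Parameter.POSITIONAL_ONLY),
    Parameter("mode", Parameter.POSITIONAL_OR_KEYWORD, default="r"),
    Parameter("opts", Parameter.VAR_KEYWORD),
]
sig = Signature(params, return_annotation=str)
print(sig)                                               # (path, /, mode='r', **opts) -> str

# replace() produces a modified copy, e.g. dropping the return annotation.
print(sig.replace(return_annotation=Signature.empty))    # (path, /, mode='r', **opts)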
+ + A Signature object has the following public attributes and methods: + + * parameters : OrderedDict + An ordered mapping of parameters' names to the corresponding + Parameter objects (keyword-only arguments are in the same order + as listed in `code.co_varnames`). + * return_annotation : object + The annotation for the return type of the function if specified. + If the function has no annotation for its return type, this + attribute is set to `Signature.empty`. + * bind(*args, **kwargs) -> BoundArguments + Creates a mapping from positional and keyword arguments to + parameters. + * bind_partial(*args, **kwargs) -> BoundArguments + Creates a partial mapping from positional and keyword arguments + to parameters (simulating 'functools.partial' behavior.) + 'b'_return_annotation'u'_return_annotation'b'_parameters'u'_parameters'b'Constructs Signature from the given list of Parameter + objects and 'return_annotation'. All arguments are optional. + 'u'Constructs Signature from the given list of Parameter + objects and 'return_annotation'. All arguments are optional. + 'b'wrong parameter order: {} parameter before {} parameter'u'wrong parameter order: {} parameter before {} parameter'b'non-default argument follows default argument'u'non-default argument follows default argument'b'duplicate parameter name: {!r}'u'duplicate parameter name: {!r}'b'Constructs Signature for the given python function. + + Deprecated since Python 3.5, use `Signature.from_callable()`. + 'u'Constructs Signature for the given python function. + + Deprecated since Python 3.5, use `Signature.from_callable()`. + 'b'inspect.Signature.from_function() is deprecated since Python 3.5, use Signature.from_callable()'u'inspect.Signature.from_function() is deprecated since Python 3.5, use Signature.from_callable()'b'Constructs Signature for the given builtin function. + + Deprecated since Python 3.5, use `Signature.from_callable()`. + 'u'Constructs Signature for the given builtin function. + + Deprecated since Python 3.5, use `Signature.from_callable()`. + 'b'inspect.Signature.from_builtin() is deprecated since Python 3.5, use Signature.from_callable()'u'inspect.Signature.from_builtin() is deprecated since Python 3.5, use Signature.from_callable()'b'Constructs Signature for the given callable object.'u'Constructs Signature for the given callable object.'b'Creates a customized copy of the Signature. + Pass 'parameters' and/or 'return_annotation' arguments + to override them in the new copy. + 'u'Creates a customized copy of the Signature. + Pass 'parameters' and/or 'return_annotation' arguments + to override them in the new copy. + 'b'Private method. Don't use directly.'u'Private method. Don't use directly.'b'too many positional arguments'u'too many positional arguments'b'multiple values for argument {arg!r}'u'multiple values for argument {arg!r}'b'{arg!r} parameter is positional only, but was passed as a keyword'u'{arg!r} parameter is positional only, but was passed as a keyword'b'missing a required argument: {arg!r}'u'missing a required argument: {arg!r}'b'got an unexpected keyword argument {arg!r}'u'got an unexpected keyword argument {arg!r}'b'Get a BoundArguments object, that maps the passed `args` + and `kwargs` to the function's signature. Raises `TypeError` + if the passed arguments can not be bound. + 'u'Get a BoundArguments object, that maps the passed `args` + and `kwargs` to the function's signature. Raises `TypeError` + if the passed arguments can not be bound. 
+ 'b'Get a BoundArguments object, that partially maps the + passed `args` and `kwargs` to the function's signature. + Raises `TypeError` if the passed arguments can not be bound. + 'u'Get a BoundArguments object, that partially maps the + passed `args` and `kwargs` to the function's signature. + Raises `TypeError` if the passed arguments can not be bound. + 'b' -> {}'u' -> {}'b'Get a signature object for the passed callable.'u'Get a signature object for the passed callable.'b' Logic for inspecting an object given at command line 'u' Logic for inspecting an object given at command line 'b'object'u'object'b'The object to be analysed. It supports the 'module:qualname' syntax'u'The object to be analysed. It supports the 'module:qualname' syntax'b'--details'u'--details'b'Display info about the module rather than its source code'u'Display info about the module rather than its source code'b'Failed to import {} ({}: {})'u'Failed to import {} ({}: {})'b'Can't get info for builtin modules.'u'Can't get info for builtin modules.'b'Target: {}'u'Target: {}'b'Origin: {}'u'Origin: {}'b'Cached: {}'u'Cached: {}'b'Loader: {}'u'Loader: {}'b'Submodule search path: {}'u'Submodule search path: {}'b'Line: {}'u'Line: {}'u'inspect'The io module provides the Python interfaces to stream handling. The +builtin open function is defined in this module. + +At the top of the I/O hierarchy is the abstract base class IOBase. It +defines the basic interface to a stream. Note, however, that there is no +separation between reading and writing to streams; implementations are +allowed to raise an OSError if they do not support a given operation. + +Extending IOBase is RawIOBase which deals simply with the reading and +writing of raw bytes to a stream. FileIO subclasses RawIOBase to provide +an interface to OS files. + +BufferedIOBase deals with buffering on a raw byte stream (RawIOBase). Its +subclasses, BufferedWriter, BufferedReader, and BufferedRWPair buffer +streams that are readable, writable, and both respectively. +BufferedRandom provides a buffered interface to random access +streams. BytesIO is a simple stream of in-memory bytes. + +Another IOBase subclass, TextIOBase, deals with the encoding and decoding +of streams into text. TextIOWrapper, which extends it, is a buffered text +interface to a buffered raw stream (`BufferedIOBase`). Finally, StringIO +is an in-memory stream for text. + +Argument names are not part of the specification, and only the arguments +of open() are intended to be used as keyword arguments. + +data: + +DEFAULT_BUFFER_SIZE + + An int containing the default buffer size used by the module's buffered + I/O classes. open() uses the file's blksize (as obtained by os.stat) if + possible. +Guido van Rossum , Mike Verdone , Mark Russell , Antoine Pitrou , Amaury Forgeot d'Arc , Benjamin Peterson "Guido van Rossum , ""Mike Verdone , ""Mark Russell , ""Antoine Pitrou , ""Amaury Forgeot d'Arc , ""Benjamin Peterson "IOBaseOpenWrapper_WindowsConsoleIO# New I/O library conforming to PEP 3116.# for compatibility with _pyio# Pretend this exception was created here.# for seek()# Declaring ABCs in C is tricky so we do it here.# Method descriptions and default implementations are inherited from the C# version however.b'The io module provides the Python interfaces to stream handling. The +builtin open function is defined in this module. + +At the top of the I/O hierarchy is the abstract base class IOBase. It +defines the basic interface to a stream. 
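bind_partial() versus bind(), matching the error messages listed above; `deploy` is made up:

import inspect

def deploy(service, region, *, dry_run=False):
    pass

sig = inspect.signature(deploy)

# bind_partial() tolerates missing required arguments (functools.partial semantics).
print(sig.bind_partial(region="eu-west-1").arguments)   # {'region': 'eu-west-1'}

# bind() insists that every required parameter receives a value.
try:
    sig.bind(region="eu-west-1")
except TypeError as exc:
    print(exc)                                           # missing a required argument: 'service'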
Note, however, that there is no +separation between reading and writing to streams; implementations are +allowed to raise an OSError if they do not support a given operation. + +Extending IOBase is RawIOBase which deals simply with the reading and +writing of raw bytes to a stream. FileIO subclasses RawIOBase to provide +an interface to OS files. + +BufferedIOBase deals with buffering on a raw byte stream (RawIOBase). Its +subclasses, BufferedWriter, BufferedReader, and BufferedRWPair buffer +streams that are readable, writable, and both respectively. +BufferedRandom provides a buffered interface to random access +streams. BytesIO is a simple stream of in-memory bytes. + +Another IOBase subclass, TextIOBase, deals with the encoding and decoding +of streams into text. TextIOWrapper, which extends it, is a buffered text +interface to a buffered raw stream (`BufferedIOBase`). Finally, StringIO +is an in-memory stream for text. + +Argument names are not part of the specification, and only the arguments +of open() are intended to be used as keyword arguments. + +data: + +DEFAULT_BUFFER_SIZE + + An int containing the default buffer size used by the module's buffered + I/O classes. open() uses the file's blksize (as obtained by os.stat) if + possible. +'b'Guido van Rossum , Mike Verdone , Mark Russell , Antoine Pitrou , Amaury Forgeot d'Arc , Benjamin Peterson 'u'Guido van Rossum , Mike Verdone , Mark Russell , Antoine Pitrou , Amaury Forgeot d'Arc , Benjamin Peterson 'b'BlockingIOError'u'BlockingIOError'b'open_code'u'open_code'b'IOBase'u'IOBase'b'RawIOBase'u'RawIOBase'b'FileIO'u'FileIO'b'BytesIO'u'BytesIO'b'BufferedIOBase'u'BufferedIOBase'b'BufferedReader'u'BufferedReader'b'BufferedWriter'u'BufferedWriter'b'BufferedRWPair'u'BufferedRWPair'b'BufferedRandom'u'BufferedRandom'b'TextIOBase'u'TextIOBase'b'TextIOWrapper'u'TextIOWrapper'b'UnsupportedOperation'u'UnsupportedOperation'b'SEEK_SET'u'SEEK_SET'b'SEEK_CUR'u'SEEK_CUR'b'SEEK_END'u'SEEK_END'u'Functional tools for creating and using iterators. + +Infinite iterators: +count(start=0, step=1) --> start, start+step, start+2*step, ... +cycle(p) --> p0, p1, ... plast, p0, p1, ... +repeat(elem [,n]) --> elem, elem, elem, ... endlessly or up to n times + +Iterators terminating on the shortest input sequence: +accumulate(p[, func]) --> p0, p0+p1, p0+p1+p2 +chain(p, q, ...) --> p0, p1, ... plast, q0, q1, ... +chain.from_iterable([p, q, ...]) --> p0, p1, ... plast, q0, q1, ... +compress(data, selectors) --> (d[0] if s[0]), (d[1] if s[1]), ... +dropwhile(pred, seq) --> seq[n], seq[n+1], starting when pred fails +groupby(iterable[, keyfunc]) --> sub-iterators grouped by value of keyfunc(v) +filterfalse(pred, seq) --> elements of seq where pred(elem) is False +islice(seq, [start,] stop [, step]) --> elements from + seq[start:stop:step] +starmap(fun, seq) --> fun(*seq[0]), fun(*seq[1]), ... +tee(it, n=2) --> (it1, it2 , ... itn) splits one iterator into n +takewhile(pred, seq) --> seq[0], seq[1], until pred fails +zip_longest(p, q, ...) --> (p[0], q[0]), (p[1], q[1]), ... + +Combinatoric generators: +product(p, q, ... 
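The itertools docstring above splits the toolbox into infinite iterators (count, cycle, repeat) and iterators that terminate on the shortest input (accumulate, chain, islice, takewhile, and so on). A quick runnable sketch of the first two groups:

from itertools import accumulate, chain, count, cycle, islice, takewhile

# Infinite iterators must be truncated explicitly, e.g. with islice().
print(list(islice(count(0, 2), 5)))                 # [0, 2, 4, 6, 8]
print(list(islice(cycle("ab"), 5)))                 # ['a', 'b', 'a', 'b', 'a']
print(list(takewhile(lambda n: n < 4, count())))    # [0, 1, 2, 3]

# accumulate() yields running totals; chain() concatenates iterables lazily.
print(list(accumulate([1, 2, 3, 4])))               # [1, 3, 6, 10]
print(list(chain("ab", [1, 2])))                    # ['a', 'b', 1, 2]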
[repeat=1]) --> cartesian product +permutations(p[, r]) +combinations(p, r) +combinations_with_replacement(p, r) +'itertools._grouper_grouperu'Iterator wrapped to make it copyable.'itertools._tee_teeu'teedataobject(iterable, values, next, /) +-- + +Data container common to multiple tee objects.'itertools._tee_dataobject_tee_dataobjectu'Return series of accumulated sums (or other binary function results).'itertools.accumulateaccumulateu'chain(*iterables) --> chain object + +Return a chain object whose .__next__() method returns elements from the +first iterable until it is exhausted, then elements from the next +iterable, until all of the iterables are exhausted.'itertools.chainu'Return successive r-length combinations of elements in the iterable. + +combinations(range(4), 3) --> (0,1,2), (0,1,3), (0,2,3), (1,2,3)'itertools.combinationscombinationsu'Return successive r-length combinations of elements in the iterable allowing individual elements to have successive repeats. + +combinations_with_replacement('ABC', 2) --> AA AB AC BB BC CC"'itertools.combinations_with_replacementcombinations_with_replacementu'Return data elements corresponding to true selector elements. + +Forms a shorter iterator from selected data elements using the selectors to +choose the data elements.'itertools.compressu'Return a count object whose .__next__() method returns consecutive values. + +Equivalent to: + def count(firstval=0, step=1): + x = firstval + while 1: + yield x + x += step'itertools.countu'Return elements from the iterable until it is exhausted. Then repeat the sequence indefinitely.'itertools.cyclecycleu'Drop items from the iterable while predicate(item) is true. + +Afterwards, return every element until the iterable is exhausted.'itertools.dropwhiledropwhileu'Return those items of iterable for which function(item) is false. + +If function is None, return the items that are false.'itertools.filterfalseu'make an iterator that returns consecutive keys and groups from the iterable + + iterable + Elements to divide into groups according to the key function. + key + A function for computing the group category for each element. + If the key function is not specified or is None, the element itself + is used for grouping.'itertools.groupbygroupbyu'islice(iterable, stop) --> islice object +islice(iterable, start, stop[, step]) --> islice object + +Return an iterator whose next() method returns selected values from an +iterable. If start is specified, will skip all preceding elements; +otherwise, start defaults to zero. Step defaults to one. If +specified as another value, step determines how many values are +skipped between successive calls. Works like a slice() on a list +but returns an iterator.'itertools.isliceisliceu'Return successive r-length permutations of elements in the iterable. + +permutations(range(3), 2) --> (0,1), (0,2), (1,0), (1,2), (2,0), (2,1)'itertools.permutationspermutationsu'product(*iterables, repeat=1) --> product object + +Cartesian product of input iterables. Equivalent to nested for-loops. + +For example, product(A, B) returns the same as: ((x,y) for x in A for y in B). +The leftmost iterators are in the outermost for-loop, so the output tuples +cycle in a manner similar to an odometer (with the rightmost element changing +on every iteration). + +To compute the product of an iterable with itself, specify the number +of repetitions with the optional repeat keyword argument. For example, +product(A, repeat=4) means the same as product(A, A, A, A). 
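The per-function docstrings above already contain worked examples for the combinatoric generators; the same calls on a tiny input, runnable as-is:

from itertools import (combinations, combinations_with_replacement,
                       permutations, product)

print(list(product("ab", range(2))))
# [('a', 0), ('a', 1), ('b', 0), ('b', 1)]
print(list(permutations(range(3), 2)))
# [(0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1)]
print(list(combinations(range(4), 3)))
# [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(list(combinations_with_replacement("ABC", 2)))
# [('A', 'A'), ('A', 'B'), ('A', 'C'), ('B', 'B'), ('B', 'C'), ('C', 'C')]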
+ +product('ab', range(3)) --> ('a',0) ('a',1) ('a',2) ('b',0) ('b',1) ('b',2) +product((0,1), (0,1), (0,1)) --> (0,0,0) (0,0,1) (0,1,0) (0,1,1) (1,0,0) ...'itertools.productu'repeat(object [,times]) -> create an iterator which returns the object +for the specified number of times. If not specified, returns the object +endlessly.'itertools.repeatu'Return an iterator whose values are returned from the function evaluated with an argument tuple taken from the given sequence.'itertools.starmapu'Return successive entries from an iterable as long as the predicate evaluates to true for each entry.'itertools.takewhiletakewhileteeu'zip_longest(iter1 [,iter2 [...]], [fillvalue=None]) --> zip_longest object + +Return a zip_longest object whose .__next__() method returns a tuple where +the i-th element comes from the i-th iterable argument. The .__next__() +method continues until the longest iterable in the argument sequence +is exhausted and then it raises StopIteration. When the shorter iterables +are exhausted, the fillvalue is substituted in their place. The fillvalue +defaults to None or can be specified by a keyword argument. +'itertools.zip_longestKeywords (from "Grammar/Grammar") + +This file is automatically generated; please don't muck it up! + +To update the symbols in this file, 'cd' to the top directory of +the python source tree and run: + + python3 -m Parser.pgen.keywordgen Grammar/Grammar Grammar/Tokens Lib/keyword.py + +Alternatively, you can run 'make regen-keyword'. +kwlistassertasyncawaitbreakcontinuedelelifelseexceptfinallyglobalisnonlocaltrywhilewithyieldb'Keywords (from "Grammar/Grammar") + +This file is automatically generated; please don't muck it up! + +To update the symbols in this file, 'cd' to the top directory of +the python source tree and run: + + python3 -m Parser.pgen.keywordgen Grammar/Grammar Grammar/Tokens Lib/keyword.py + +Alternatively, you can run 'make regen-keyword'. +'u'Keywords (from "Grammar/Grammar") + +This file is automatically generated; please don't muck it up! + +To update the symbols in this file, 'cd' to the top directory of +the python source tree and run: + + python3 -m Parser.pgen.keywordgen Grammar/Grammar Grammar/Tokens Lib/keyword.py + +Alternatively, you can run 'make regen-keyword'. +'b'iskeyword'u'iskeyword'b'kwlist'u'kwlist'b'assert'u'assert'b'async'u'async'b'await'u'await'b'break'u'break'b'continue'u'continue'b'del'u'del'b'elif'u'elif'b'else'u'else'b'except'u'except'b'finally'u'finally'b'global'u'global'b'is'u'is'b'nonlocal'u'nonlocal'b'try'u'try'b'while'u'while'b'with'u'with'b'yield'u'yield'u'keyword'Cache lines from Python source files. + +This is intended to read lines from modules imported -- hence if a filename +is not found, it will look down the module search path for a file by +that name. +clearcacheClear the cache entirely.Get the lines for a Python source file from the cache. + Update the cache if it doesn't contain an entry for this file already.updatecacheDiscard cache entries that are out of date. + (This is not checked upon each call!)filenamesUpdate a cache entry and return its list of lines. + If something's wrong, print a message, discard the cache entry, + and return an empty list.lazycacheSeed the cache for filename with module_globals. + + The module loader will be asked for the source only when getlines is + called, not immediately. + + If there is an entry in the cache already, it is not altered. + + :return: True if a lazy load is registered in the cache, + otherwise False. 
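The remaining itertools entries above (starmap, tee, zip_longest) and the generated keyword table behave as their docstrings state; a short sketch covering both:

import keyword
from itertools import starmap, tee, zip_longest

# zip_longest() pads the shorter input with fillvalue once it is exhausted.
print(list(zip_longest("abc", [1, 2], fillvalue=None)))
# [('a', 1), ('b', 2), ('c', None)]

# starmap() unpacks each argument tuple; tee() splits one iterator in two.
print(list(starmap(pow, [(2, 3), (3, 2)])))        # [8, 9]
left, right = tee(iter([1, 2, 3]))
print(list(left), list(right))                     # [1, 2, 3] [1, 2, 3]

# keyword.kwlist is the generated list shown above; iskeyword() consults it.
print(keyword.iskeyword("yield"), keyword.iskeyword("print"))   # True False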
To register such a load a module loader with a + get_source method must be found, the filename must be a cachable + filename, and the filename must not be already cached. + # The cache# The cache. Maps filenames to either a thunk which will provide source code,# or a tuple (size, mtime, lines, fullname) once loaded.# lazy cache entry, leave it lazy.# no-op for files loaded via a __loader__# Realise a lazy loader based lookup if there is one# otherwise try to lookup right now.# No luck, the PEP302 loader cannot find the source# for this module.# Try looking through the module search path, which is only useful# when handling a relative filename.# Not sufficiently string-like to do anything useful with.# Try for a __loader__, if availableb'Cache lines from Python source files. + +This is intended to read lines from modules imported -- hence if a filename +is not found, it will look down the module search path for a file by +that name. +'u'Cache lines from Python source files. + +This is intended to read lines from modules imported -- hence if a filename +is not found, it will look down the module search path for a file by +that name. +'b'getline'u'getline'b'clearcache'u'clearcache'b'checkcache'u'checkcache'b'Clear the cache entirely.'u'Clear the cache entirely.'b'Get the lines for a Python source file from the cache. + Update the cache if it doesn't contain an entry for this file already.'u'Get the lines for a Python source file from the cache. + Update the cache if it doesn't contain an entry for this file already.'b'Discard cache entries that are out of date. + (This is not checked upon each call!)'u'Discard cache entries that are out of date. + (This is not checked upon each call!)'b'Update a cache entry and return its list of lines. + If something's wrong, print a message, discard the cache entry, + and return an empty list.'u'Update a cache entry and return its list of lines. + If something's wrong, print a message, discard the cache entry, + and return an empty list.'b'Seed the cache for filename with module_globals. + + The module loader will be asked for the source only when getlines is + called, not immediately. + + If there is an entry in the cache already, it is not altered. + + :return: True if a lazy load is registered in the cache, + otherwise False. To register such a load a module loader with a + get_source method must be found, the filename must be a cachable + filename, and the filename must not be already cached. + 'u'Seed the cache for filename with module_globals. + + The module loader will be asked for the source only when getlines is + called, not immediately. + + If there is an entry in the cache already, it is not altered. + + :return: True if a lazy load is registered in the cache, + otherwise False. To register such a load a module loader with a + get_source method must be found, the filename must be a cachable + filename, and the filename must not be already cached. + 'b'get_source'u'get_source'u'linecache'Safely evaluate Python string literals without using eval(). 
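linecache keeps whole source files in memory so tracebacks and inspect can fetch individual lines cheaply; lazycache() defers the actual read to the module loader, as described above. A minimal usage sketch:

import linecache

# getline() reads (and caches) one line of any file reachable via sys.path.
first = linecache.getline(linecache.__file__, 1)
print(repr(first))

linecache.checkcache()   # drop cached entries whose files changed on disk
linecache.clearcache()   # or empty the cache entirely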
simple_escapeseschexesinvalid hex string escape ('\%s')invalid octal string escape ('\%s')evalString\\(\'|\"|\\|[abfnrtv]|x.{0,2}|[0-7]{1,3})b'Safely evaluate Python string literals without using eval().'u'Safely evaluate Python string literals without using eval().'b''u''u''b' 'u' 'b' 'u' 'b'v'u'v'b'invalid hex string escape ('\%s')'u'invalid hex string escape ('\%s')'b'invalid octal string escape ('\%s')'u'invalid octal string escape ('\%s')'b'\\(\'|\"|\\|[abfnrtv]|x.{0,2}|[0-7]{1,3})'u'\\(\'|\"|\\|[abfnrtv]|x.{0,2}|[0-7]{1,3})'u'lib2to3.pgen2.literals'u'pgen2.literals'u'literals'Loading unittests.[_a-z]\w*\.py$VALID_MODULE_NAME_FailedTesttestFailure_make_failed_import_testsuiteClassFailed to import test module: %s +%sformat_exc_make_failed_test_make_failed_load_testsFailed to call load_tests: +%s_make_skipped_testtestSkippedModuleSkippedTestClass_jython_aware_splitext$py.class + This class is responsible for loading tests according to various criteria + and returning them wrapped in a TestSuite + testMethodPrefixthree_way_cmpsortTestMethodsUsingtestNamePatterns_top_level_dir_loading_packagesloadTestsFromTestCasetestCaseClassReturn a suite of all test cases contained in testCaseClassTest cases should not be derived from TestSuite. Maybe you meant to derive from TestCase?"Test cases should not be derived from ""TestSuite. Maybe you meant to derive from ""TestCase?"testCaseNamesloaded_suiteloadTestsFromModuleReturn a suite of all test cases contained in the given moduleuse_load_testsuse_load_tests is deprecated and ignoredcomplaintloadTestsFromModule() takes 1 positional argument but {} were givenloadTestsFromModule() got an unexpected keyword argument '{}'error_caseerror_messageloadTestsFromNameReturn a suite of all test cases given a string specifier. + + The name may resolve either to a module, a test case class, a + test method within a test case class, or a callable object which + returns a TestCase or TestSuite instance. + + The method optionally resolves the names relative to a given module. + parts_copynext_attributeFailed to access attribute: +%scalling %s returned %s, not a testdon't know how to make test from: %sloadTestsFromNamesReturn a suite of all test cases found using the given sequence + of string specifiers. See 'loadTestsFromName()'. + suitesReturn a sorted sequence of method names found within testCaseClass + shouldIncludeMethod%s.%s.%sfullNametestFnNamestest*.pyFind and return all test modules from the specified start + directory, recursing into subdirectories to find them and return all + tests found within them. Only test files that match the pattern will + be loaded. (Using shell style pattern matching.) + + All test modules must be importable from the top level of the project. + If the start directory is not the top level directory then the top + level directory must be specified separately. + + If a test package name (directory with '__init__.py') matches the + pattern then the package will be checked for a 'load_tests' function. If + this exists then it will be called with (loader, tests, pattern) unless + the package has already had load_tests called from the same discovery + invocation, in which case the package module object is not scanned for + tests - this ensures that when a package uses discover to further + discover child tests that infinite recursion does not happen. + + If load_tests exists then discovery does *not* recurse into the package, + load_tests is responsible for loading all tests in the package. 
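The TestLoader methods quoted above resolve tests from classes, modules and dotted names. A small sketch using a made-up MathCase test class:

import sys
import unittest

class MathCase(unittest.TestCase):
    # Hypothetical test case used only for the demonstration.
    def test_add(self):
        self.assertEqual(1 + 1, 2)

loader = unittest.TestLoader()

# loadTestsFromTestCase() wraps every test_* method of the class in a suite.
suite = loader.loadTestsFromTestCase(MathCase)
print(suite.countTestCases())                    # 1

# loadTestsFromName() resolves a dotted name, optionally relative to a module.
suite = loader.loadTestsFromName("MathCase.test_add", module=sys.modules[__name__])
unittest.TextTestRunner(verbosity=0).run(suite)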
+ + The pattern is deliberately not stored as a loader attribute so that + packages can continue discovery themselves. top_level_dir is stored so + load_tests does not need to pass this argument in to loader.discover(). + + Paths are sorted before being imported to ensure reproducible execution + order even on filesystems with non-alphabetical ordering like ext3/4. + set_implicit_topis_not_importablethe_moduletop_part_find_testsCan not use builtin modules as dotted module names'Can not use builtin modules ''as dotted module names'don't know how to discover from {!r}_get_directory_containing_moduleStart directory is not importable: %r_get_name_from_path_relpathPath must be within the project_get_module_from_name_match_pathUsed by discovery. Yields test suites it loads._find_test_pathshould_recurseUsed by discovery. + + Loads tests from a single file, or a directories' __init__.py when + passed the directory. + + Returns a tuple (None_or_tests_from_file, should_recurse). + mod_filefullpath_noextmodule_direxpected_dir%r module incorrectly imported from %r. Expected %r. Is this module globally installed?"%r module incorrectly imported from %r. Expected ""%r. Is this module globally installed?"_makeLoadersortUsing# what about .pyc (etc)# we would need to avoid loading the same tests multiple times# from '.py', *and* '.pyc'# Tracks packages which we have called into via load_tests, to# avoid infinite re-entrancy.# XXX After Python 3.5, remove backward compatibility hacks for# use_load_tests deprecation via *args and **kws. See issue 16662.# This method used to take an undocumented and unofficial# use_load_tests argument. For backward compatibility, we still# accept the argument (which can also be the first position) but we# ignore it and issue a deprecation warning if it's present.# Complain about the number of arguments, but don't forget the# required `module` argument.# Since the keyword arguments are unsorted (see PEP 468), just# pick the alphabetically sorted first argument to complain about,# if multiple were given. At least the error message will be# predictable.# Last error so we can give it to the user if needed.# Even the top level import failed: report that error.# We can't traverse some part of the name.# This is a package (no __path__ per importlib docs), and we# encountered an error importing something. We cannot tell# the difference between package.WrongNameTestClass and# package.wrong_module_name so we just report the# ImportError - it is more informative.# Otherwise, we signal that an AttributeError has occurred.# static methods follow a different path# make top_level_dir optional if called from load_tests in a package# all test modules must be importable from the top level directory# should we *unconditionally* put the start directory in first# in sys.path to minimise likelihood of conflicts between installed# modules and development versions?# support for discovery from dotted module names# look for namespace packages# builtin module# here we have been given a module rather than a package - so# all we can do is search the *same* directory the module is in# should an exception be raised instead# override this method to use alternative matching strategy# Handle the __init__ in this package# name is '.' 
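discover() walks a start directory for files matching the pattern and imports them relative to the top-level directory, as the docstring above explains. A sketch under the assumption that a tests/ directory containing test*.py files exists; both paths below are placeholders:

import unittest

loader = unittest.TestLoader()
# start_dir and top_level_dir are placeholder paths for the example.
suite = loader.discover(start_dir="tests", pattern="test*.py", top_level_dir=".")
unittest.TextTestRunner(verbosity=1).run(suite)

The command-line form python -m unittest discover -s tests -p "test*.py" drives the same code path.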
when start_dir == top_level_dir (and top_level_dir is by# definition not a package).# name is in self._loading_packages while we have called into# loadTestsFromModule with name.# Either an error occurred, or load_tests was used by the# package.# Handle the contents.# we found a package that didn't use load_tests.# valid Python identifiers only# if the test file matches, load it# Mark this package as being in load_tests (possibly ;))# loadTestsFromModule(package) has loaded tests for us.b'Loading unittests.'u'Loading unittests.'b'[_a-z]\w*\.py$'u'[_a-z]\w*\.py$'b'Failed to import test module: %s +%s'u'Failed to import test module: %s +%s'b'Failed to call load_tests: +%s'u'Failed to call load_tests: +%s'b'ModuleSkipped'u'ModuleSkipped'b'$py.class'u'$py.class'b' + This class is responsible for loading tests according to various criteria + and returning them wrapped in a TestSuite + 'u' + This class is responsible for loading tests according to various criteria + and returning them wrapped in a TestSuite + 'b'test'b'Return a suite of all test cases contained in testCaseClass'u'Return a suite of all test cases contained in testCaseClass'b'Test cases should not be derived from TestSuite. Maybe you meant to derive from TestCase?'u'Test cases should not be derived from TestSuite. Maybe you meant to derive from TestCase?'b'Return a suite of all test cases contained in the given module'u'Return a suite of all test cases contained in the given module'b'use_load_tests'u'use_load_tests'b'use_load_tests is deprecated and ignored'u'use_load_tests is deprecated and ignored'b'loadTestsFromModule() takes 1 positional argument but {} were given'u'loadTestsFromModule() takes 1 positional argument but {} were given'b'loadTestsFromModule() got an unexpected keyword argument '{}''u'loadTestsFromModule() got an unexpected keyword argument '{}''b'load_tests'u'load_tests'b'Return a suite of all test cases given a string specifier. + + The name may resolve either to a module, a test case class, a + test method within a test case class, or a callable object which + returns a TestCase or TestSuite instance. + + The method optionally resolves the names relative to a given module. + 'u'Return a suite of all test cases given a string specifier. + + The name may resolve either to a module, a test case class, a + test method within a test case class, or a callable object which + returns a TestCase or TestSuite instance. + + The method optionally resolves the names relative to a given module. + 'b'Failed to access attribute: +%s'u'Failed to access attribute: +%s'b'calling %s returned %s, not a test'u'calling %s returned %s, not a test'b'don't know how to make test from: %s'u'don't know how to make test from: %s'b'Return a suite of all test cases found using the given sequence + of string specifiers. See 'loadTestsFromName()'. + 'u'Return a suite of all test cases found using the given sequence + of string specifiers. See 'loadTestsFromName()'. + 'b'Return a sorted sequence of method names found within testCaseClass + 'u'Return a sorted sequence of method names found within testCaseClass + 'b'%s.%s.%s'u'%s.%s.%s'b'test*.py'u'test*.py'b'Find and return all test modules from the specified start + directory, recursing into subdirectories to find them and return all + tests found within them. Only test files that match the pattern will + be loaded. (Using shell style pattern matching.) + + All test modules must be importable from the top level of the project. 
+ If the start directory is not the top level directory then the top + level directory must be specified separately. + + If a test package name (directory with '__init__.py') matches the + pattern then the package will be checked for a 'load_tests' function. If + this exists then it will be called with (loader, tests, pattern) unless + the package has already had load_tests called from the same discovery + invocation, in which case the package module object is not scanned for + tests - this ensures that when a package uses discover to further + discover child tests that infinite recursion does not happen. + + If load_tests exists then discovery does *not* recurse into the package, + load_tests is responsible for loading all tests in the package. + + The pattern is deliberately not stored as a loader attribute so that + packages can continue discovery themselves. top_level_dir is stored so + load_tests does not need to pass this argument in to loader.discover(). + + Paths are sorted before being imported to ensure reproducible execution + order even on filesystems with non-alphabetical ordering like ext3/4. + 'u'Find and return all test modules from the specified start + directory, recursing into subdirectories to find them and return all + tests found within them. Only test files that match the pattern will + be loaded. (Using shell style pattern matching.) + + All test modules must be importable from the top level of the project. + If the start directory is not the top level directory then the top + level directory must be specified separately. + + If a test package name (directory with '__init__.py') matches the + pattern then the package will be checked for a 'load_tests' function. If + this exists then it will be called with (loader, tests, pattern) unless + the package has already had load_tests called from the same discovery + invocation, in which case the package module object is not scanned for + tests - this ensures that when a package uses discover to further + discover child tests that infinite recursion does not happen. + + If load_tests exists then discovery does *not* recurse into the package, + load_tests is responsible for loading all tests in the package. + + The pattern is deliberately not stored as a loader attribute so that + packages can continue discovery themselves. top_level_dir is stored so + load_tests does not need to pass this argument in to loader.discover(). + + Paths are sorted before being imported to ensure reproducible execution + order even on filesystems with non-alphabetical ordering like ext3/4. + 'b'Can not use builtin modules as dotted module names'u'Can not use builtin modules as dotted module names'b'don't know how to discover from {!r}'u'don't know how to discover from {!r}'b'Start directory is not importable: %r'u'Start directory is not importable: %r'b'Path must be within the project'u'Path must be within the project'b'Used by discovery. Yields test suites it loads.'u'Used by discovery. Yields test suites it loads.'b'Used by discovery. + + Loads tests from a single file, or a directories' __init__.py when + passed the directory. + + Returns a tuple (None_or_tests_from_file, should_recurse). + 'u'Used by discovery. + + Loads tests from a single file, or a directories' __init__.py when + passed the directory. + + Returns a tuple (None_or_tests_from_file, should_recurse). + 'b'%r module incorrectly imported from %r. Expected %r. Is this module globally installed?'u'%r module incorrectly imported from %r. Expected %r. 
Is this module globally installed?'u'unittest.loader'u'loader'Locale support module. + +The module provides low-level access to the C lib's locale APIs and adds high +level number formatting APIs as well as a locale aliasing engine to complement +these. + +The aliasing engine includes support for many commonly used locale names and +maps them to values suitable for passing to the C lib's setlocale() function. It +also includes default encodings for all supported locale names. + +encodings.aliases_builtin_strresetlocaleatofatoicurrency_strcoll strcoll(string,string) -> int. + Compares two strings according to the locale. + _strxfrm strxfrm(string) -> string. + Returns a string that behaves for cmp locale-aware. + localeconv() -> dict. + Returns numeric and monetary locale-specific parameters. + currency_symboln_sign_posnp_cs_precedesn_cs_precedesmon_groupingn_sep_by_spacenegative_signpositive_signp_sep_by_spaceint_curr_symbolp_sign_posnmon_thousands_sepfrac_digitsmon_decimal_pointint_frac_digits setlocale(integer,string=None) -> string. + Activates/queries locale processing. + _locale emulation only supports "C" locale_override_localeconv_grouping_intervalslast_intervalinvalid grouping_groupmonetaryright_spacesleft_spaces0123456789_strip_paddingamountlposrpos%(?:\((?P.*?)\))?(?P[-#0-9 +*.hlL]*?)[eEfFgGdiouxXcrs%]r'%(?:\((?P.*?)\))?'r'(?P[-#0-9 +*.hlL]*?)[eEfFgGdiouxXcrs%]'_percent_repercentadditionaleEfFgGsepsdiuFormats a string in the same way that the % formatting would use, + but takes the current locale into account. + + Grouping is applied if the third parameter is true. + Conversion uses monetary thousands separator and grouping strings if + forth parameter monetary is true.percentsnew_fpercmodifiersstarcountDeprecated, use format_string instead.This method will be removed in a future version of Python. Use 'locale.format_string()' instead."This method will be removed in a future version of Python. ""Use 'locale.format_string()' instead."format() must be given exactly one %%char format specifier, %s not valid"format() must be given exactly one %%char ""format specifier, %s not valid"internationalFormats val according to the currency settings + in the current locale.Currency formatting is not possible using the 'C' locale."Currency formatting is not possible using ""the 'C' locale."%%.%ifsmbprecedesseparatedsign_posConvert float to string, taking the locale into account.%.12gdelocalizeParses a string as a normalized number according to the locale settings.Parses a string as a float according to the locale settings.Converts a string to an integer according to the locale settings.1234567893.14_setlocale_replace_encodinglangnamelocale_encoding_alias_append_modifier.ISO8859-15ISO8859-15ISO8859-1localename Returns a normalized locale code for the given locale + name. + + The returned locale code is formatted for use with + setlocale(). + + If normalization fails, the original name is returned + unchanged. + + If the given encoding is not known, the function defaults to + the default encoding for the locale code just like setlocale() + does. + + lang_enclookup_namelocale_aliasdefmod_parse_localename Parses the locale code for localename and returns the + result as tuple (language code, encoding). + + The localename is normalized and passed through the locale + alias engine. A ValueError is raised in case the locale name + cannot be parsed. + + The language code corresponds to RFC 1766. 
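The locale formatting helpers above (localeconv(), format_string(), currency(), atof()) all read the parameters of the currently active locale. A sketch pinned to the portable "C" locale so it runs everywhere; note that, as stated above, currency() refuses to work there:

import locale

locale.setlocale(locale.LC_ALL, "C")         # portable default locale
conv = locale.localeconv()
print(conv["decimal_point"], repr(conv["thousands_sep"]))

# format_string() applies grouping and the decimal point; in "C" grouping is a no-op.
print(locale.format_string("%.2f", 1234567.891, grouping=True))

# atof()/delocalize() parse locale-formatted numbers back into Python values.
print(locale.atof("1234.50"))

try:
    locale.currency(1234.5)
except ValueError as exc:
    print(exc)   # currency formatting is not possible in the 'C' locale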
code and encoding + can be None in case the values cannot be determined or are + unknown to this implementation. + + unknown locale: %s_build_localenamelocaletuple Builds a locale code from the given tuple (language code, + encoding). + + No aliasing or normalizing takes place. + + Locale must be None, a string, or an iterable of two strings -- language code, encoding.'Locale must be None, a string, or an iterable of ''two strings -- language code, encoding.'envvars Tries to determine the default locale settings and returns + them as tuple (language code, encoding). + + According to POSIX, a program which has not called + setlocale(LC_ALL, "") runs using the portable 'C' locale. + Calling setlocale(LC_ALL, "") lets it use the default locale as + defined by the LANG variable. Since we don't want to interfere + with the current locale setting we thus emulate the behavior + in the way described above. + + To maintain compatibility with other platforms, not only the + LANG variable is tested, but a list of variables given as + envvars parameter. The first found to be defined will be + used. envvars defaults to the search path used in GNU gettext; + it must always contain the variable name 'LANG'. + + Except for the code 'C', the language code corresponds to RFC + 1766. code and encoding can be None in case the values cannot + be determined. + + 0xwindows_locale Returns the current setting for the given locale category as + tuple (language code, encoding). + + category may be one of the LC_* value except LC_ALL. It + defaults to LC_CTYPE. + + Except for the code 'C', the language code corresponds to RFC + 1766. code and encoding can be None in case the values cannot + be determined. + + category LC_ALL is not supported Set the locale for the given category. The locale can be + a string, an iterable of two strings (language code and encoding), + or None. + + Iterables are converted to strings using the locale aliasing + engine. Locale strings are passed directly to the C lib. + + category may be given as one of the LC_* values. + + Sets the locale for category to the default setting. + + The default setting is determined by calling + getdefaultlocale(). category defaults to LC_ALL. 
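normalize() runs a locale name through the alias engine and fills in the default encoding, while getdefaultlocale() emulates setlocale(LC_ALL, "") by inspecting the environment, as described above. The printed values depend on the alias table and on the environment, so the comments below are indicative only:

import locale

print(locale.normalize("de"))            # e.g. 'de_DE.ISO8859-1'
print(locale.normalize("en_US.utf8"))    # 'en_US.UTF-8'
print(locale.getdefaultlocale())         # e.g. ('en_US', 'UTF-8'); either may be None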
+ + Return the charset that the user is likely using._bootlocaleReturn the charset that the user is likely using, + according to the system configuration.oldlocReturn the charset that the user is likely using, + by looking at environment variables.enJIS7jisjis7eucJPajecKOI8-Ckoi8cCP1251microsoftcp1251CP1255microsoftcp1255CP1256microsoftcp125688591ISO8859-288592ISO8859-588595885915ISO8859-10ISO8859-11ISO8859-13ISO8859-14ISO8859-16ISO8859-3ISO8859-4ISO8859-6ISO8859-7ISO8859-8ISO8859-9SJISTACTISeucKRKOI8-RKOI8-Tkoi8_tKOI8-Ukoi8_uRK1048az_AZ.KOI8-Ca3a3_aza3_az.koicaa_DJ.ISO8859-1aa_djaa_ER.UTF-8aa_eraa_ET.UTF-8aa_etaf_ZA.ISO8859-1af_zaagr_PE.UTF-8agr_peak_GH.UTF-8ak_gham_ET.UTF-8amam_eten_US.ISO8859-1americanan_ES.ISO8859-15an_esanp_IN.UTF-8anp_inar_AA.ISO8859-6arar_aaar_AE.ISO8859-6ar_aear_BH.ISO8859-6ar_bhar_DZ.ISO8859-6ar_dzar_EG.ISO8859-6ar_egar_IN.UTF-8ar_inar_IQ.ISO8859-6ar_iqar_JO.ISO8859-6ar_joar_KW.ISO8859-6ar_kwar_LB.ISO8859-6ar_lbar_LY.ISO8859-6ar_lyar_MA.ISO8859-6ar_maar_OM.ISO8859-6ar_omar_QA.ISO8859-6ar_qaar_SA.ISO8859-6ar_saar_SD.ISO8859-6ar_sdar_SS.UTF-8ar_ssar_SY.ISO8859-6ar_syar_TN.ISO8859-6ar_tnar_YE.ISO8859-6ar_yeas_IN.UTF-8as_inast_ES.ISO8859-15ast_esayc_PE.UTF-8ayc_peaz_AZ.ISO8859-9Eazaz_azaz_az.iso88599eaz_IR.UTF-8az_irbe_BY.CP1251bebe_BY.UTF-8@latinbe@latinbg_BG.UTF-8be_bg.utf8be_bybe_by@latinbem_ZM.UTF-8bem_zmber_DZ.UTF-8ber_dzber_MA.UTF-8ber_mabg_BG.CP1251bgbg_bgbhb_IN.UTF-8bhb_in.utf8bho_IN.UTF-8bho_inbho_NP.UTF-8bho_npbi_VU.UTF-8bi_vubn_BD.UTF-8bn_bdbn_IN.UTF-8bn_inbo_CN.UTF-8bo_cnbo_IN.UTF-8bo_innb_NO.ISO8859-1bokmalbokmålbr_FR.ISO8859-1br_frbrx_IN.UTF-8brx_inbs_BA.ISO8859-2bsbs_babulgarianbyn_ER.UTF-8byn_erfr_CA.ISO8859-1c-frenchc.asciic.enc.iso88591en_US.UTF-8c.utf8c_cc_c.cca_ES.ISO8859-1ca_AD.ISO8859-1ca_adca_esca_ES.UTF-8@valenciaca_es@valenciaca_FR.ISO8859-1ca_frca_IT.ISO8859-1ca_itcatalance_RU.UTF-8ce_rucextendzh_CN.eucCNchinese-szh_TW.eucTWchinese-tchr_US.UTF-8chr_usckb_IQ.UTF-8ckb_iqcmn_TW.UTF-8cmn_twcrh_UA.UTF-8crh_uahr_HR.ISO8859-2croatiancs_CZ.ISO8859-2cscs_cscs_czcsb_PL.UTF-8csb_plcv_RU.UTF-8cv_rucy_GB.ISO8859-1cycy_gbczcz_czczechda_DK.ISO8859-1dada_dkdanishdanskde_DE.ISO8859-1dede_AT.ISO8859-1de_atde_BE.ISO8859-1de_bede_CH.ISO8859-1de_chde_dede_IT.ISO8859-1de_itde_LI.UTF-8de_li.utf8de_LU.ISO8859-1de_ludeutschdoi_IN.UTF-8doi_innl_NL.ISO8859-1dutchnl_BE.ISO8859-1dutch.iso88591dv_MV.UTF-8dv_mvdz_BT.UTF-8dz_btee_EE.ISO8859-4eeee_eeet_EE.ISO8859-1eestiel_GR.ISO8859-7elel_CY.ISO8859-7el_cyel_grel_GR.ISO8859-15el_gr@euroen_AG.UTF-8en_agen_AU.ISO8859-1en_auen_BE.ISO8859-1en_been_BW.ISO8859-1en_bwen_CA.ISO8859-1en_caen_DK.ISO8859-1en_dken_DL.UTF-8en_dl.utf8en_GB.ISO8859-1en_gben_HK.ISO8859-1en_hken_IE.ISO8859-1en_ieen_IL.UTF-8en_ilen_IN.ISO8859-1en_inen_NG.UTF-8en_ngen_NZ.ISO8859-1en_nzen_PH.ISO8859-1en_phen_SC.UTF-8en_sc.utf8en_SG.ISO8859-1en_sgen_uken_usen_US.ISO8859-15en_us@euro@euroen_ZA.ISO8859-1en_zaen_ZM.UTF-8en_zmen_ZW.ISO8859-1en_zwen_ZS.UTF-8en_zw.utf8eng_gben_EN.ISO8859-1englishenglish.iso88591english_ukenglish_united-statesenglish_united-states.437english_useo_XX.ISO8859-3eoeo.UTF-8eo.utf8eo_EO.ISO8859-3eo_eoeo_US.UTF-8eo_us.utf8eo_xxes_ES.ISO8859-1eses_AR.ISO8859-1es_ares_BO.ISO8859-1es_boes_CL.ISO8859-1es_cles_CO.ISO8859-1es_coes_CR.ISO8859-1es_cres_CU.UTF-8es_cues_DO.ISO8859-1es_does_EC.ISO8859-1es_eces_eses_GT.ISO8859-1es_gtes_HN.ISO8859-1es_hnes_MX.ISO8859-1es_mxes_NI.ISO8859-1es_nies_PA.ISO8859-1es_paes_PE.ISO8859-1es_pees_PR.ISO8859-1es_pres_PY.ISO8859-1es_pyes_SV.ISO8859-1es_sves_US.ISO8859-1es_uses_UY.ISO8859-1es_uyes_VE.ISO8859-1es_veestoni
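setlocale() both queries and activates a locale per category, and getlocale()/getpreferredencoding() report the current state, per the docstrings above. A minimal sketch:

import locale

print(locale.setlocale(locale.LC_ALL))       # query only (no second argument)
locale.setlocale(locale.LC_ALL, "C")         # activate the portable locale

print(locale.getlocale(locale.LC_CTYPE))     # (language code, encoding), may be (None, None)
print(locale.getpreferredencoding(False))    # charset the user is likely using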
anet_EE.ISO8859-15etet_eeeu_ES.ISO8859-1eueu_eseu_FR.ISO8859-1eu_frfa_IR.UTF-8fafa_irfa_IR.ISIRI-3342fa_ir.isiri3342ff_SN.UTF-8ff_snfi_FI.ISO8859-15fifi_fifil_PH.UTF-8fil_phfi_FI.ISO8859-1finnishfo_FO.ISO8859-1fofo_fofr_FR.ISO8859-1frfr_BE.ISO8859-1fr_befr_cafr_CH.ISO8859-1fr_chfr_frfr_LU.ISO8859-1fr_lufrançaisfre_frfrenchfrench.iso88591french_francefur_IT.UTF-8fur_itfy_DE.UTF-8fy_defy_NL.UTF-8fy_nlga_IE.ISO8859-1gaga_iegl_ES.ISO8859-1galegogaliciangd_GB.ISO8859-1gdgd_gbger_degermangerman.iso88591german_germanygez_ER.UTF-8gez_ergez_ET.UTF-8gez_etglgl_esgu_IN.UTF-8gu_ingv_GB.ISO8859-1gvgv_gbha_NG.UTF-8ha_nghak_TW.UTF-8hak_twhe_IL.ISO8859-8hehe_ilhi_IN.ISCII-DEVhi_inhi_in.isciidevhif_FJ.UTF-8hif_fjhne_IN.UTF-8hnehne_inhr_hrhrvatskihsb_DE.ISO8859-2hsb_deht_HT.UTF-8ht_hthu_HU.ISO8859-2huhu_huhungarianhy_AM.UTF-8hy_amhy_AM.ARMSCII_8hy_am.armscii8ia.UTF-8iaia_FR.UTF-8ia_fris_IS.ISO8859-1icelandicid_ID.ISO8859-1id_idig_NG.UTF-8ig_ngik_CA.UTF-8ik_cain_idis_isiso8859-1iso8859-15it_IT.ISO8859-1it_CH.ISO8859-1it_chit_ititalianiu_CA.NUNACOM-8iuiu_caiu_ca.nunacom8iwiw_iliw_IL.UTF-8iw_il.utf8ja_JP.eucJPjaja_jpja_jp.eucja_JP.SJISja_jp.mscodeja_jp.pckjapanjapanesejapanese-eucjapanese.eucjp_jpka_GE.GEORGIAN-ACADEMYkaka_geka_ge.georgianacademyka_GE.GEORGIAN-PSka_ge.georgianpska_ge.georgianrskab_DZ.UTF-8kab_dzkk_KZ.ptcp154kk_kzkl_GL.ISO8859-1klkl_glkm_KH.UTF-8km_khkn_IN.UTF-8knkn_inko_KR.eucKRkoko_krko_kr.euckok_IN.UTF-8kok_inkorean.eucks_IN.UTF-8ksks_inks_IN.UTF-8@devanagariks_in@devanagari.utf8ku_TR.ISO8859-9ku_trkw_GB.ISO8859-1kw_gbky_KG.UTF-8kyky_kglb_LU.UTF-8lb_lulg_UG.ISO8859-10lg_ugli_BE.UTF-8li_beli_NL.UTF-8li_nllij_IT.UTF-8lij_itlt_LT.ISO8859-13lithuanianln_CD.UTF-8ln_cdlo_LA.MULELAO-1lo_lalo_LA.IBM-CP1133lo_la.cp1133lo_la.ibmcp1133lo_la.mulelao1lt_ltlv_LV.ISO8859-13lvlv_lvlzh_TW.UTF-8lzh_twmag_IN.UTF-8mag_inmai_IN.UTF-8maimai_inmai_NP.UTF-8mai_npmfe_MU.UTF-8mfe_mumg_MG.ISO8859-15mg_mgmhr_RU.UTF-8mhr_rumi_NZ.ISO8859-1mimi_nzmiq_NI.UTF-8miq_nimjw_IN.UTF-8mjw_inmk_MK.ISO8859-5mkmk_mkml_IN.UTF-8mlml_inmn_MN.UTF-8mn_mnmni_IN.UTF-8mni_inmr_IN.UTF-8mrmr_inms_MY.ISO8859-1ms_mymt_MT.ISO8859-3mtmt_mtmy_MM.UTF-8my_mmnan_TW.UTF-8nan_twnbnb_nonds_DE.UTF-8nds_dends_NL.UTF-8nds_nlne_NP.UTF-8ne_npnhn_MX.UTF-8nhn_mxniu_NU.UTF-8niu_nuniu_NZ.UTF-8niu_nznlnl_AW.UTF-8nl_awnl_benl_nlnn_NO.ISO8859-1nn_nono_NO.ISO8859-1nony_NO.ISO8859-1no@nynorskno_nono_no.iso88591@bokmalno_no.iso88591@nynorsknorwegiannr_ZA.ISO8859-1nrnr_zanso_ZA.ISO8859-15nsonso_zanyny_nonynorskoc_FR.ISO8859-1ococ_from_ET.UTF-8om_etom_KE.ISO8859-1om_keor_IN.UTF-8or_inos_RU.UTF-8os_rupa_IN.UTF-8papa_inpa_PK.UTF-8pa_pkpap_AN.UTF-8pap_anpap_AW.UTF-8pap_awpap_CW.UTF-8pap_cwpd_US.ISO8859-1pdpd_DE.ISO8859-1pd_depd_usph_PH.ISO8859-1ph_phpl_PL.ISO8859-2plpl_plpolishpt_PT.ISO8859-1portuguesept_BR.ISO8859-1portuguese_brazilposix-utf2pp_AN.ISO8859-1pppp_anps_AF.UTF-8ps_afptpt_brpt_ptquz_PE.UTF-8quz_peraj_IN.UTF-8raj_inro_RO.ISO8859-2roro_roromanianru_RU.UTF-8ruru_ruru_UA.KOI8-Uru_uarumanianru_RU.KOI8-Rrussianrw_RW.ISO8859-1rwrw_rwsa_IN.UTF-8sa_insat_IN.UTF-8sat_insc_IT.UTF-8sc_itsd_IN.UTF-8sdsd_insd_IN.UTF-8@devanagarisd_in@devanagari.utf8sd_PK.UTF-8sd_pkse_NO.UTF-8se_nosr_RS.UTF-8@latinserbocroatiansgs_LT.UTF-8sgs_ltshsr_CS.ISO8859-2sh_ba.iso88592@bosniash_HR.ISO8859-2sh_hrsh_hr.iso88592sh_spsh_yushn_MM.UTF-8shn_mmshs_CA.UTF-8shs_casi_LK.UTF-8sisi_lksid_ET.UTF-8sid_etsinhalask_SK.ISO8859-2sksk_sksl_SI.ISO8859-2sl_CS.ISO8859-2sl_cssl_sislovakslovenesloveniansm_WS.UTF-8sm_wsso_DJ.ISO8859-1so_djso_ET.UTF-8so_etso_KE.ISO8859-1so_keso_SO.ISO8859-1so_sosr_CS.ISO8859-5sps
p_yuspanishspanish_spainsq_AL.ISO8859-2sqsq_alsq_MK.UTF-8sq_mksr_RS.UTF-8sr@cyrillicsr_CS.UTF-8@latinsr@latnsr_CS.UTF-8sr_cssr_cs.iso88592@latnsr_cs@latnsr_ME.UTF-8sr_mesr_rssr_rs@latnsr_spsr_yusr_CS.CP1251sr_yu.cp1251@cyrillicsr_yu.iso88592sr_yu.iso88595sr_yu.iso88595@cyrillicsr_yu.microsoftcp1251@cyrillicsr_yu.utf8sr_yu.utf8@cyrillicsr_yu@cyrillicss_ZA.ISO8859-1ss_zast_ZA.ISO8859-1st_zasv_SE.ISO8859-1svsv_FI.ISO8859-1sv_fisv_sesw_KE.UTF-8sw_kesw_TZ.UTF-8sw_tzswedishszl_PL.UTF-8szl_plta_IN.TSCII-0tata_inta_in.tsciita_in.tscii0ta_LK.UTF-8ta_lktcy_IN.UTF-8tcy_in.utf8te_IN.UTF-8te_intg_TJ.KOI8-Ctgtg_tjth_TH.ISO8859-11thth_thth_TH.TIS620th_th.tactisth_th.tis620the_NP.UTF-8the_npti_ER.UTF-8ti_erti_ET.UTF-8ti_ettig_ER.UTF-8tig_ertk_TM.UTF-8tk_tmtl_PH.ISO8859-1tltl_phtn_ZA.ISO8859-15tntn_zato_TO.UTF-8to_totpi_PG.UTF-8tpi_pgtr_TR.ISO8859-9trtr_CY.ISO8859-9tr_cytr_trts_ZA.ISO8859-1ts_zatt_RU.TATAR-CYRtttt_rutt_ru.tatarcyrtt_RU.UTF-8@iqteliftt_ru@iqtelifturkishug_CN.UTF-8ug_cnuk_UA.KOI8-Uukuk_uaen_US.utfunivuniversal.utf8@ucs4unm_US.UTF-8unm_usur_PK.CP1256urur_IN.UTF-8ur_inur_pkuz_UZ.UTF-8uzuz_uzuz_uz@cyrillicve_ZA.UTF-8veve_zavi_VN.TCVNvivi_vnvi_vn.tcvnvi_vn.tcvn5712vi_VN.VISCIIvi_vn.visciivi_vn.viscii111wa_BE.ISO8859-1wawa_bewae_CH.UTF-8wae_chwal_ET.UTF-8wal_etwo_SN.UTF-8wo_snxh_ZA.ISO8859-1xhxh_zayi_US.CP1255yiyi_usyo_NG.UTF-8yo_ngyue_HK.UTF-8yue_hkyuw_PG.UTF-8yuw_pgzhzh_CN.gb2312zh_cnzh_TW.big5zh_cn.big5zh_cn.euczh_HK.big5hkscszh_hkzh_hk.big5hkzh_SG.GB2312zh_sgzh_SG.GBKzh_sg.gbkzh_twzh_tw.euczh_tw.euctwzu_ZA.ISO8859-1zuzu_zaaf_ZA10780x0436sq_AL10520x041cgsw_FR11560x0484am_ET11180x045ear_SA0x0401ar_IQ20490x0801ar_EG30730x0c01ar_LY0x1001ar_DZ51210x1401ar_MA61450x1801ar_TN71690x1c01ar_OM81930x2001ar_YE92170x2401ar_SY102410x2801ar_JO112650x2c01ar_LB122890x3001ar_KW133130x3401ar_AE143370x3801ar_BH153610x3c01ar_QA163850x4001hy_AM10670x042bas_IN11010x044daz_AZ10680x042c20920x082cba_RU11330x046deu_ES10690x042dbe_BY10590x0423bn_IN10930x0445bs_BA51460x141abr_FR11500x047ebg_BG0x0402ca_ES10270x0403zh_CHS0x0004zh_TW10280x0404zh_CN20520x0804zh_HK30760x0c04zh_SG0x1004zh_MO51240x1404zh_CHT317480x7c04co_FR11550x0483hr_HR10500x041ahr_BA41220x101acs_CZ10290x0405da_DK10300x0406gbz_AF11640x048cdiv_MV0x0465nl_NL10430x0413nl_BE20670x0813en_US10330x0409en_GB20570x0809en_AU30810x0c09en_CA41050x1009en_NZ51290x1409en_IE61530x1809en_ZA71770x1c09en_JAen_CB92250x2409en_BZ102490x2809en_TT112730x2c09en_ZW122970x3009en_PH133210x3409en_IN163930x4009en_MY174170x4409184410x4809et_EE10610x0425fo_FO10800x0438fil_PH11240x0464fi_FI10350x040bfr_FR10360x040cfr_BE20600x080cfr_CA30840x0c0cfr_CH41080x100cfr_LU51320x140cfr_MC61560x180cfy_NL11220x0462gl_ES11100x0456ka_GE10790x0437de_DE10310x0407de_CH20550x0807de_AT30790x0c07de_LU0x1007de_LI0x1407el_GR10320x0408kl_GL11350x046fgu_IN10950x0447ha_NG11280x0468he_IL10370x040dhi_IN10810x0439hu_HU10380x040eis_IS10390x040fid_ID10570x0421iu_CA11170x045d21410x085dga_IE21080x083cit_IT10400x0410it_CH20640x0810ja_JP10410x0411kn_IN10990x044bkk_KZ10870x043fkh_KH11070x0453qut_GT11580x0486rw_RW11590x0487kok_IN11110x0457ko_KR10420x0412ky_KG10880x0440lo_LA11080x0454lv_LV10620x0426lt_LT10630x0427dsb_DE20940x082elb_LU11340x046emk_MK10710x042fms_MY10860x043ems_BN21100x083eml_IN11000x044cmt_MT10820x043ami_NZ11530x0481arn_CL11460x047amr_IN11020x044emoh_CA11480x047cmn_MN11040x0450mn_CN21280x0850ne_NP11210x0461nb_NO10440x0414nn_NO20680x0814oc_FR11540x0482or_IN10960x0448ps_AF11230x0463fa_IR10650x0429pl_PL10450x0415pt_BR10460x0416pt_PT20700x0816pa_IN10940x0446quz_BO11310x046bquz_EC21550x086bquz_PE31790x0c6bro_RO10480x04
18rm_CH10470x0417ru_RU10490x0419smn_FI92750x243bsmj_NO41550x103bsmj_SE51790x143bse_NO10830x043bse_SE21070x083bse_FI31310x0c3bsms_FI82510x203bsma_NO62030x183bsma_SE72270x1c3bsa_IN11030x044fsr_SP30980x0c1asr_BA71940x1c1a20740x081a61700x181asi_LK11150x045bns_ZA11320x046ctn_ZA10740x0432sk_SK10510x041bsl_SI10600x0424es_ES10340x040aes_MX20580x080a30820x0c0aes_GT41060x100aes_CR51300x140aes_PA61540x180aes_DO71780x1c0aes_VE82020x200aes_CO92260x240aes_PE102500x280aes_AR112740x2c0aes_EC122980x300aes_CL133220x340aes_UR143460x380aes_PY153700x3c0aes_BO163940x400aes_SV174180x440aes_HN184420x480aes_NI194660x4c0aes_PR204900x500aes_US215140x540asw_KE10890x0441sv_SE10530x041dsv_FI20770x081dsyr_SY11140x045atg_TJ10640x0428tmz_DZ21430x085fta_IN10970x0449tt_RU10920x0444te_IN10980x044ath_TH10540x041ebo_BT21290x0851bo_CN11050x0451tr_TR10550x041ftk_TM10900x0442ug_CN11520x0480uk_UA10580x0422wen_DE10700x042eur_PK10560x0420ur_IN20800x0820uz_UZ10910x044321150x0843vi_VN10660x042acy_GB11060x0452wo_SN11600x0488xh_ZA10760x0434sah_RU11570x0485ii_CN11440x0478yo_NG11300x046azu_ZA10770x0435_print_locale Test function. + categories_init_categoriesLC_Locale defaults as determined by getdefaultlocale():Language: (undefined)Encoding: Locale settings on startup: Language: Encoding: Locale settings after calling resetlocale():Locale settings after calling setlocale(LC_ALL, ""):NOTE:setlocale(LC_ALL, "") does not support the default localegiven in the OS environment variables.Locale aliasing:Number formatting:# Try importing the _locale module.# If this fails, fall back on a basic 'C' locale emulation.# Yuck: LC_MESSAGES is non-standard: can't tell whether it exists before# trying the import. So __all__ is also fiddled at the end of the file.# Locale emulation# 'C' locale default values# These may or may not exist in _locale, so be sure to set them.# With this dict, you can override some items of localeconv's return value.# This is useful for testing purposes.### Number formatting APIs# Author: Martin von Loewis# improved by Georg Brandl# Iterate over grouping intervals# if grouping is -1, we are done# 0: re-use last group ad infinitum#perform the grouping from right to left# only non-digit characters remain (sign, spaces)# Strip a given amount of excess padding from the given string# floats and decimal ints need special action!# check for illegal values# '<' and '>' are markers if the sign must be inserted between symbol and value# the default if nothing specified;# this should be the most fitting sign position#First, get rid of the grouping#next, replace the decimal point with a dot#do grouping#standard formatting### Locale name aliasing engine# Author: Marc-Andre Lemburg, mal@lemburg.com# Various tweaks by Fredrik Lundh # store away the low-level version of setlocale (it's# overridden below)# Convert the encoding to a C lib compatible encoding string#print('norm encoding: %r' % norm_encoding)#print('aliased encoding: %r' % norm_encoding)#print('found encoding %r' % encoding)# Normalize the locale name and extract the encoding and modifier# ':' is sometimes used as encoding delimiter.# First lookup: fullname (possibly with encoding and modifier)#print('first lookup failed')# Second try: fullname without modifier (possibly with encoding)#print('lookup without modifier succeeded')#print('second lookup failed')# Third try: langname (without encoding, possibly with modifier)#print('lookup without encoding succeeded')# Fourth try: langname (without encoding and modifier)#print('lookup without modifier and encoding succeeded')# Deal with 
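The table ending above is the windows_locale mapping from Windows language identifiers (LCIDs) to locale names, followed by the module's _print_locale() self-test. The mapping is an ordinary module-level dict and can be consulted directly; a tiny sketch:

import locale

# LCID 0x0409 is English (United States), 0x0407 is German (Germany).
print(locale.windows_locale[0x0409])   # 'en_US'
print(locale.windows_locale[0x0407])   # 'de_DE'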
locale modifiers# Assume Latin-9 for @euro locales. This is bogus,# since some systems may use other encodings for these# locales. Also, we ignore other modifiers.# On macOS "LC_CTYPE=UTF-8" is a valid locale setting# for getting UTF-8 handling for text.# check if it's supported by the _locale module# make sure the code/encoding values are valid# map windows language identifier to language name# ...add other platform-specific processing here, if# necessary...# fall back on POSIX behaviour# convert to string# On Win32, this will return the ANSI code page# On Unix, if CODESET is available, use that.# Fall back to parsing environment variables :-(# LANG not set, default conservatively to ASCII### Database# The following data was extracted from the locale.alias file which# comes with X11 and then hand edited removing the explicit encoding# definitions and adding some more aliases. The file is usually# available as /usr/lib/X11/locale/locale.alias.# The local_encoding_alias table maps lowercase encoding alias names# to C locale encoding names (case-sensitive). Note that normalize()# first looks up the encoding in the encodings.aliases dictionary and# then applies this mapping to find the correct C lib name for the# encoding.# Mappings for non-standard encoding names used in locale names# Mappings from Python codec names to C lib encoding names# XXX This list is still incomplete. If you know more# mappings, please file a bug report. Thanks.# The locale_alias table maps lowercase alias names to C locale names# (case-sensitive). Encodings are always separated from the locale# name using a dot ('.'); they should only be given in case the# language name is needed to interpret the given encoding alias# correctly (CJK codes often have this need).# Note that the normalize() function which uses this tables# removes '_' and '-' characters from the encoding part of the# locale name before doing the lookup. 
This saves a lot of# space in the table.# MAL 2004-12-10:# Updated alias mapping to most recent locale.alias file# from X.org distribution using makelocalealias.py.# These are the differences compared to the old mapping (Python 2.4# and older):# updated 'bg' -> 'bg_BG.ISO8859-5' to 'bg_BG.CP1251'# updated 'bg_bg' -> 'bg_BG.ISO8859-5' to 'bg_BG.CP1251'# updated 'bulgarian' -> 'bg_BG.ISO8859-5' to 'bg_BG.CP1251'# updated 'cz' -> 'cz_CZ.ISO8859-2' to 'cs_CZ.ISO8859-2'# updated 'cz_cz' -> 'cz_CZ.ISO8859-2' to 'cs_CZ.ISO8859-2'# updated 'czech' -> 'cs_CS.ISO8859-2' to 'cs_CZ.ISO8859-2'# updated 'dutch' -> 'nl_BE.ISO8859-1' to 'nl_NL.ISO8859-1'# updated 'et' -> 'et_EE.ISO8859-4' to 'et_EE.ISO8859-15'# updated 'et_ee' -> 'et_EE.ISO8859-4' to 'et_EE.ISO8859-15'# updated 'fi' -> 'fi_FI.ISO8859-1' to 'fi_FI.ISO8859-15'# updated 'fi_fi' -> 'fi_FI.ISO8859-1' to 'fi_FI.ISO8859-15'# updated 'iw' -> 'iw_IL.ISO8859-8' to 'he_IL.ISO8859-8'# updated 'iw_il' -> 'iw_IL.ISO8859-8' to 'he_IL.ISO8859-8'# updated 'japanese' -> 'ja_JP.SJIS' to 'ja_JP.eucJP'# updated 'lt' -> 'lt_LT.ISO8859-4' to 'lt_LT.ISO8859-13'# updated 'lv' -> 'lv_LV.ISO8859-4' to 'lv_LV.ISO8859-13'# updated 'sl' -> 'sl_CS.ISO8859-2' to 'sl_SI.ISO8859-2'# updated 'slovene' -> 'sl_CS.ISO8859-2' to 'sl_SI.ISO8859-2'# updated 'th_th' -> 'th_TH.TACTIS' to 'th_TH.ISO8859-11'# updated 'zh_cn' -> 'zh_CN.eucCN' to 'zh_CN.gb2312'# updated 'zh_cn.big5' -> 'zh_TW.eucTW' to 'zh_TW.big5'# updated 'zh_tw' -> 'zh_TW.eucTW' to 'zh_TW.big5'# MAL 2008-05-30:# These are the differences compared to the old mapping (Python 2.5# updated 'cs_cs.iso88592' -> 'cs_CZ.ISO8859-2' to 'cs_CS.ISO8859-2'# updated 'serbocroatian' -> 'sh_YU.ISO8859-2' to 'sr_CS.ISO8859-2'# updated 'sh' -> 'sh_YU.ISO8859-2' to 'sr_CS.ISO8859-2'# updated 'sh_hr.iso88592' -> 'sh_HR.ISO8859-2' to 'hr_HR.ISO8859-2'# updated 'sh_sp' -> 'sh_YU.ISO8859-2' to 'sr_CS.ISO8859-2'# updated 'sh_yu' -> 'sh_YU.ISO8859-2' to 'sr_CS.ISO8859-2'# updated 'sp' -> 'sp_YU.ISO8859-5' to 'sr_CS.ISO8859-5'# updated 'sp_yu' -> 'sp_YU.ISO8859-5' to 'sr_CS.ISO8859-5'# updated 'sr' -> 'sr_YU.ISO8859-5' to 'sr_CS.ISO8859-5'# updated 'sr@cyrillic' -> 'sr_YU.ISO8859-5' to 'sr_CS.ISO8859-5'# updated 'sr_sp' -> 'sr_SP.ISO8859-2' to 'sr_CS.ISO8859-2'# updated 'sr_yu' -> 'sr_YU.ISO8859-5' to 'sr_CS.ISO8859-5'# updated 'sr_yu.cp1251@cyrillic' -> 'sr_YU.CP1251' to 'sr_CS.CP1251'# updated 'sr_yu.iso88592' -> 'sr_YU.ISO8859-2' to 'sr_CS.ISO8859-2'# updated 'sr_yu.iso88595' -> 'sr_YU.ISO8859-5' to 'sr_CS.ISO8859-5'# updated 'sr_yu.iso88595@cyrillic' -> 'sr_YU.ISO8859-5' to 'sr_CS.ISO8859-5'# updated 'sr_yu.microsoftcp1251@cyrillic' -> 'sr_YU.CP1251' to 'sr_CS.CP1251'# updated 'sr_yu.utf8@cyrillic' -> 'sr_YU.UTF-8' to 'sr_CS.UTF-8'# updated 'sr_yu@cyrillic' -> 'sr_YU.ISO8859-5' to 'sr_CS.ISO8859-5'# AP 2010-04-12:# These are the differences compared to the old mapping (Python 2.6.5# updated 'ru' -> 'ru_RU.ISO8859-5' to 'ru_RU.UTF-8'# updated 'ru_ru' -> 'ru_RU.ISO8859-5' to 'ru_RU.UTF-8'# updated 'serbocroatian' -> 'sr_CS.ISO8859-2' to 'sr_RS.UTF-8@latin'# updated 'sh' -> 'sr_CS.ISO8859-2' to 'sr_RS.UTF-8@latin'# updated 'sh_yu' -> 'sr_CS.ISO8859-2' to 'sr_RS.UTF-8@latin'# updated 'sr' -> 'sr_CS.ISO8859-5' to 'sr_RS.UTF-8'# updated 'sr@cyrillic' -> 'sr_CS.ISO8859-5' to 'sr_RS.UTF-8'# updated 'sr@latn' -> 'sr_CS.ISO8859-2' to 'sr_RS.UTF-8@latin'# updated 'sr_cs.utf8@latn' -> 'sr_CS.UTF-8' to 'sr_RS.UTF-8@latin'# updated 'sr_cs@latn' -> 'sr_CS.ISO8859-2' to 'sr_RS.UTF-8@latin'# updated 'sr_yu' -> 'sr_CS.ISO8859-5' to 'sr_RS.UTF-8@latin'# 
updated 'sr_yu.utf8@cyrillic' -> 'sr_CS.UTF-8' to 'sr_RS.UTF-8'# updated 'sr_yu@cyrillic' -> 'sr_CS.ISO8859-5' to 'sr_RS.UTF-8'# SS 2013-12-20:# These are the differences compared to the old mapping (Python 3.3.3# updated 'a3' -> 'a3_AZ.KOI8-C' to 'az_AZ.KOI8-C'# updated 'a3_az' -> 'a3_AZ.KOI8-C' to 'az_AZ.KOI8-C'# updated 'a3_az.koi8c' -> 'a3_AZ.KOI8-C' to 'az_AZ.KOI8-C'# updated 'cs_cs.iso88592' -> 'cs_CS.ISO8859-2' to 'cs_CZ.ISO8859-2'# updated 'hebrew' -> 'iw_IL.ISO8859-8' to 'he_IL.ISO8859-8'# updated 'hebrew.iso88598' -> 'iw_IL.ISO8859-8' to 'he_IL.ISO8859-8'# updated 'sd' -> 'sd_IN@devanagari.UTF-8' to 'sd_IN.UTF-8'# updated 'sr@latn' -> 'sr_RS.UTF-8@latin' to 'sr_CS.UTF-8@latin'# updated 'sr_cs' -> 'sr_RS.UTF-8' to 'sr_CS.UTF-8'# updated 'sr_cs.utf8@latn' -> 'sr_RS.UTF-8@latin' to 'sr_CS.UTF-8@latin'# updated 'sr_cs@latn' -> 'sr_RS.UTF-8@latin' to 'sr_CS.UTF-8@latin'# SS 2014-10-01:# Updated alias mapping with glibc 2.19 supported locales.# SS 2018-05-05:# Updated alias mapping with glibc 2.27 supported locales.# These are the differences compared to the old mapping (Python 3.6.5# updated 'ca_es@valencia' -> 'ca_ES.ISO8859-15@valencia' to 'ca_ES.UTF-8@valencia'# updated 'kk_kz' -> 'kk_KZ.RK1048' to 'kk_KZ.ptcp154'# updated 'russian' -> 'ru_RU.ISO8859-5' to 'ru_RU.KOI8-R'# This maps Windows language identifiers to locale strings.# This list has been updated from# http://msdn.microsoft.com/library/default.asp?url=/library/en-us/intl/nls_238z.asp# to include every locale up to Windows Vista.# NOTE: this mapping is incomplete. If your language is missing, please# submit a bug report to the Python bug tracker at http://bugs.python.org/# Make sure you include the missing language identifier and the suggested# locale code.# Afrikaans# Albanian# Alsatian - France# Amharic - Ethiopia# Arabic - Saudi Arabia# Arabic - Iraq# Arabic - Egypt# Arabic - Libya# Arabic - Algeria# Arabic - Morocco# Arabic - Tunisia# Arabic - Oman# Arabic - Yemen# Arabic - Syria# Arabic - Jordan# Arabic - Lebanon# Arabic - Kuwait# Arabic - United Arab Emirates# Arabic - Bahrain# Arabic - Qatar# Armenian# Assamese - India# Azeri - Latin# Azeri - Cyrillic# Bashkir# Basque - Russia# Belarusian# Begali# Bosnian - Cyrillic# Bosnian - Latin# Breton - France# Bulgarian# 0x0455: "my_MM", # Burmese - Not supported# Catalan# Chinese - Simplified# Chinese - Taiwan# Chinese - PRC# Chinese - Hong Kong S.A.R.# Chinese - Singapore# Chinese - Macao S.A.R.# Chinese - Traditional# Corsican - France# Croatian# Croatian - Bosnia# Czech# Danish# Dari - Afghanistan# Divehi - Maldives# Dutch - The Netherlands# Dutch - Belgium# English - United States# English - United Kingdom# English - Australia# English - Canada# English - New Zealand# English - Ireland# English - South Africa# English - Jamaica# English - Caribbean# English - Belize# English - Trinidad# English - Zimbabwe# English - Philippines# English - India# English - Malaysia# English - Singapore# Estonian# Faroese# Filipino# Finnish# French - France# French - Belgium# French - Canada# French - Switzerland# French - Luxembourg# French - Monaco# Frisian - Netherlands# Galician# Georgian# German - Germany# German - Switzerland# German - Austria# German - Luxembourg# German - Liechtenstein# Greek# Greenlandic - Greenland# Gujarati# Hausa - Latin# Hebrew# Hindi# Hungarian# Icelandic# Indonesian# Inuktitut - Syllabics# Inuktitut - Latin# Irish - Ireland# Italian - Italy# Italian - Switzerland# Japanese# Kannada - India# Kazakh# Khmer - Cambodia# K'iche - Guatemala# Kinyarwanda - Rwanda# 
Konkani# Korean# Kyrgyz# Lao - Lao PDR# Latvian# Lithuanian# Lower Sorbian - Germany# Luxembourgish# FYROM Macedonian# Malay - Malaysia# Malay - Brunei Darussalam# Malayalam - India# Maltese# Maori# Mapudungun# Marathi# Mohawk - Canada# Mongolian - Cyrillic# Mongolian - PRC# Nepali# Norwegian - Bokmal# Norwegian - Nynorsk# Occitan - France# Oriya - India# Pashto - Afghanistan# Persian# Polish# Portuguese - Brazil# Portuguese - Portugal# Punjabi# Quechua (Bolivia)# Quechua (Ecuador)# Quechua (Peru)# Romanian - Romania# Romansh# Russian# Sami Finland# Sami Norway# Sami Sweden# Sami Northern Norway# Sami Northern Sweden# Sami Northern Finland# Sami Skolt# Sami Southern Norway# Sami Southern Sweden# Sanskrit# Serbian - Cyrillic# Serbian - Bosnia Cyrillic# Serbian - Latin# Serbian - Bosnia Latin# Sinhala - Sri Lanka# Northern Sotho# Setswana - Southern Africa# Slovak# Slovenian# Spanish - Spain# Spanish - Mexico# Spanish - Spain (Modern)# Spanish - Guatemala# Spanish - Costa Rica# Spanish - Panama# Spanish - Dominican Republic# Spanish - Venezuela# Spanish - Colombia# Spanish - Peru# Spanish - Argentina# Spanish - Ecuador# Spanish - Chile# Spanish - Uruguay# Spanish - Paraguay# Spanish - Bolivia# Spanish - El Salvador# Spanish - Honduras# Spanish - Nicaragua# Spanish - Puerto Rico# Spanish - United States# 0x0430: "", # Sutu - Not supported# Swahili# Swedish - Sweden# Swedish - Finland# Syriac# Tajik - Cyrillic# Tamazight - Latin# Tamil# Tatar# Telugu# Thai# Tibetan - Bhutan# Tibetan - PRC# Turkish# Turkmen - Cyrillic# Uighur - Arabic# Ukrainian# Upper Sorbian - Germany# Urdu# Urdu - India# Uzbek - Latin# Uzbek - Cyrillic# Vietnamese# Welsh# Wolof - Senegal# Xhosa - South Africa# Yakut - Cyrillic# Yi - PRC# Yoruba - Nigeria# Zulub'Locale support module. + +The module provides low-level access to the C lib's locale APIs and adds high +level number formatting APIs as well as a locale aliasing engine to complement +these. + +The aliasing engine includes support for many commonly used locale names and +maps them to values suitable for passing to the C lib's setlocale() function. It +also includes default encodings for all supported locale names. + +'u'Locale support module. + +The module provides low-level access to the C lib's locale APIs and adds high +level number formatting APIs as well as a locale aliasing engine to complement +these. + +The aliasing engine includes support for many commonly used locale names and +maps them to values suitable for passing to the C lib's setlocale() function. It +also includes default encodings for all supported locale names. + +'b'getlocale'u'getlocale'b'getdefaultlocale'u'getdefaultlocale'b'getpreferredencoding'u'getpreferredencoding'b'setlocale'u'setlocale'b'resetlocale'u'resetlocale'b'localeconv'u'localeconv'b'strcoll'u'strcoll'b'strxfrm'u'strxfrm'b'atof'u'atof'b'atoi'u'atoi'b'format_string'u'format_string'b'currency'u'currency'b'normalize'u'normalize'b'LC_CTYPE'u'LC_CTYPE'b'LC_COLLATE'u'LC_COLLATE'b'LC_TIME'u'LC_TIME'b'LC_MONETARY'u'LC_MONETARY'b'LC_NUMERIC'u'LC_NUMERIC'b'CHAR_MAX'u'CHAR_MAX'b' strcoll(string,string) -> int. + Compares two strings according to the locale. + 'u' strcoll(string,string) -> int. + Compares two strings according to the locale. + 'b' strxfrm(string) -> string. + Returns a string that behaves for cmp locale-aware. + 'u' strxfrm(string) -> string. + Returns a string that behaves for cmp locale-aware. + 'b' localeconv() -> dict. + Returns numeric and monetary locale-specific parameters. + 'u' localeconv() -> dict. 
+ Returns numeric and monetary locale-specific parameters. + 'b'currency_symbol'u'currency_symbol'b'n_sign_posn'u'n_sign_posn'b'p_cs_precedes'u'p_cs_precedes'b'n_cs_precedes'u'n_cs_precedes'b'mon_grouping'u'mon_grouping'b'n_sep_by_space'u'n_sep_by_space'b'negative_sign'u'negative_sign'b'positive_sign'u'positive_sign'b'p_sep_by_space'u'p_sep_by_space'b'int_curr_symbol'u'int_curr_symbol'b'p_sign_posn'u'p_sign_posn'b'mon_thousands_sep'u'mon_thousands_sep'b'frac_digits'u'frac_digits'b'mon_decimal_point'u'mon_decimal_point'b'int_frac_digits'u'int_frac_digits'b' setlocale(integer,string=None) -> string. + Activates/queries locale processing. + 'u' setlocale(integer,string=None) -> string. + Activates/queries locale processing. + 'b'_locale emulation only supports "C" locale'u'_locale emulation only supports "C" locale'b'invalid grouping'u'invalid grouping'b'0123456789'u'0123456789'b'%(?:\((?P.*?)\))?(?P[-#0-9 +*.hlL]*?)[eEfFgGdiouxXcrs%]'u'%(?:\((?P.*?)\))?(?P[-#0-9 +*.hlL]*?)[eEfFgGdiouxXcrs%]'b'eEfFgG'u'eEfFgG'b'diu'u'diu'b'Formats a string in the same way that the % formatting would use, + but takes the current locale into account. + + Grouping is applied if the third parameter is true. + Conversion uses monetary thousands separator and grouping strings if + forth parameter monetary is true.'u'Formats a string in the same way that the % formatting would use, + but takes the current locale into account. + + Grouping is applied if the third parameter is true. + Conversion uses monetary thousands separator and grouping strings if + forth parameter monetary is true.'b'modifiers'u'modifiers'b'Deprecated, use format_string instead.'u'Deprecated, use format_string instead.'b'This method will be removed in a future version of Python. Use 'locale.format_string()' instead.'u'This method will be removed in a future version of Python. Use 'locale.format_string()' instead.'b'format() must be given exactly one %%char format specifier, %s not valid'u'format() must be given exactly one %%char format specifier, %s not valid'b'Formats val according to the currency settings + in the current locale.'u'Formats val according to the currency settings + in the current locale.'b'Currency formatting is not possible using the 'C' locale.'u'Currency formatting is not possible using the 'C' locale.'b'%%.%if'u'%%.%if'b'Convert float to string, taking the locale into account.'u'Convert float to string, taking the locale into account.'b'%.12g'u'%.12g'b'Parses a string as a normalized number according to the locale settings.'u'Parses a string as a normalized number according to the locale settings.'b'Parses a string as a float according to the locale settings.'u'Parses a string as a float according to the locale settings.'b'Converts a string to an integer according to the locale settings.'u'Converts a string to an integer according to the locale settings.'b'.ISO8859-15'u'.ISO8859-15'b'ISO8859-15'u'ISO8859-15'b'ISO8859-1'u'ISO8859-1'b' Returns a normalized locale code for the given locale + name. + + The returned locale code is formatted for use with + setlocale(). + + If normalization fails, the original name is returned + unchanged. + + If the given encoding is not known, the function defaults to + the default encoding for the locale code just like setlocale() + does. + + 'u' Returns a normalized locale code for the given locale + name. + + The returned locale code is formatted for use with + setlocale(). + + If normalization fails, the original name is returned + unchanged. 
+ + If the given encoding is not known, the function defaults to + the default encoding for the locale code just like setlocale() + does. + + 'b' Parses the locale code for localename and returns the + result as tuple (language code, encoding). + + The localename is normalized and passed through the locale + alias engine. A ValueError is raised in case the locale name + cannot be parsed. + + The language code corresponds to RFC 1766. code and encoding + can be None in case the values cannot be determined or are + unknown to this implementation. + + 'u' Parses the locale code for localename and returns the + result as tuple (language code, encoding). + + The localename is normalized and passed through the locale + alias engine. A ValueError is raised in case the locale name + cannot be parsed. + + The language code corresponds to RFC 1766. code and encoding + can be None in case the values cannot be determined or are + unknown to this implementation. + + 'b'unknown locale: %s'u'unknown locale: %s'b' Builds a locale code from the given tuple (language code, + encoding). + + No aliasing or normalizing takes place. + + 'u' Builds a locale code from the given tuple (language code, + encoding). + + No aliasing or normalizing takes place. + + 'b'Locale must be None, a string, or an iterable of two strings -- language code, encoding.'u'Locale must be None, a string, or an iterable of two strings -- language code, encoding.'b' Tries to determine the default locale settings and returns + them as tuple (language code, encoding). + + According to POSIX, a program which has not called + setlocale(LC_ALL, "") runs using the portable 'C' locale. + Calling setlocale(LC_ALL, "") lets it use the default locale as + defined by the LANG variable. Since we don't want to interfere + with the current locale setting we thus emulate the behavior + in the way described above. + + To maintain compatibility with other platforms, not only the + LANG variable is tested, but a list of variables given as + envvars parameter. The first found to be defined will be + used. envvars defaults to the search path used in GNU gettext; + it must always contain the variable name 'LANG'. + + Except for the code 'C', the language code corresponds to RFC + 1766. code and encoding can be None in case the values cannot + be determined. + + 'u' Tries to determine the default locale settings and returns + them as tuple (language code, encoding). + + According to POSIX, a program which has not called + setlocale(LC_ALL, "") runs using the portable 'C' locale. + Calling setlocale(LC_ALL, "") lets it use the default locale as + defined by the LANG variable. Since we don't want to interfere + with the current locale setting we thus emulate the behavior + in the way described above. + + To maintain compatibility with other platforms, not only the + LANG variable is tested, but a list of variables given as + envvars parameter. The first found to be defined will be + used. envvars defaults to the search path used in GNU gettext; + it must always contain the variable name 'LANG'. + + Except for the code 'C', the language code corresponds to RFC + 1766. code and encoding can be None in case the values cannot + be determined. + + 'b'0x'u'0x'b' Returns the current setting for the given locale category as + tuple (language code, encoding). + + category may be one of the LC_* value except LC_ALL. It + defaults to LC_CTYPE. + + Except for the code 'C', the language code corresponds to RFC + 1766. 
code and encoding can be None in case the values cannot + be determined. + + 'u' Returns the current setting for the given locale category as + tuple (language code, encoding). + + category may be one of the LC_* value except LC_ALL. It + defaults to LC_CTYPE. + + Except for the code 'C', the language code corresponds to RFC + 1766. code and encoding can be None in case the values cannot + be determined. + + 'b'category LC_ALL is not supported'u'category LC_ALL is not supported'b' Set the locale for the given category. The locale can be + a string, an iterable of two strings (language code and encoding), + or None. + + Iterables are converted to strings using the locale aliasing + engine. Locale strings are passed directly to the C lib. + + category may be given as one of the LC_* values. + + 'u' Set the locale for the given category. The locale can be + a string, an iterable of two strings (language code and encoding), + or None. + + Iterables are converted to strings using the locale aliasing + engine. Locale strings are passed directly to the C lib. + + category may be given as one of the LC_* values. + + 'b' Sets the locale for category to the default setting. + + The default setting is determined by calling + getdefaultlocale(). category defaults to LC_ALL. + + 'u' Sets the locale for category to the default setting. + + The default setting is determined by calling + getdefaultlocale(). category defaults to LC_ALL. + + 'b'Return the charset that the user is likely using.'u'Return the charset that the user is likely using.'b'Return the charset that the user is likely using, + according to the system configuration.'u'Return the charset that the user is likely using, + according to the system configuration.'b'Return the charset that the user is likely using, + by looking at environment variables.'u'Return the charset that the user is likely using, + by looking at environment 
variables.'b'en'u'en'b'JIS7'u'JIS7'b'jis'u'jis'b'jis7'u'jis7'b'eucJP'u'eucJP'b'ajec'u'ajec'b'KOI8-C'u'KOI8-C'b'koi8c'u'koi8c'b'CP1251'u'CP1251'b'microsoftcp1251'u'microsoftcp1251'b'CP1255'u'CP1255'b'microsoftcp1255'u'microsoftcp1255'b'CP1256'u'CP1256'b'microsoftcp1256'u'microsoftcp1256'b'88591'u'88591'b'ISO8859-2'u'ISO8859-2'b'88592'u'88592'b'ISO8859-5'u'ISO8859-5'b'88595'u'88595'b'885915'u'885915'b'ISO8859-10'u'ISO8859-10'b'ISO8859-11'u'ISO8859-11'b'ISO8859-13'u'ISO8859-13'b'ISO8859-14'u'ISO8859-14'b'ISO8859-16'u'ISO8859-16'b'ISO8859-3'u'ISO8859-3'b'ISO8859-4'u'ISO8859-4'b'ISO8859-6'u'ISO8859-6'b'ISO8859-7'u'ISO8859-7'b'ISO8859-8'u'ISO8859-8'b'ISO8859-9'u'ISO8859-9'b'SJIS'u'SJIS'b'TACTIS'u'TACTIS'b'eucKR'u'eucKR'b'KOI8-R'u'KOI8-R'b'KOI8-T'u'KOI8-T'b'koi8_t'u'koi8_t'b'KOI8-U'u'KOI8-U'b'koi8_u'u'koi8_u'b'RK1048'u'RK1048'b'az_AZ.KOI8-C'u'az_AZ.KOI8-C'b'a3'u'a3'b'a3_az'u'a3_az'b'a3_az.koic'u'a3_az.koic'b'aa_DJ.ISO8859-1'u'aa_DJ.ISO8859-1'b'aa_dj'u'aa_dj'b'aa_ER.UTF-8'u'aa_ER.UTF-8'b'aa_er'u'aa_er'b'aa_ET.UTF-8'u'aa_ET.UTF-8'b'aa_et'u'aa_et'b'af_ZA.ISO8859-1'u'af_ZA.ISO8859-1'b'af'u'af'b'af_za'u'af_za'b'agr_PE.UTF-8'u'agr_PE.UTF-8'b'agr_pe'u'agr_pe'b'ak_GH.UTF-8'u'ak_GH.UTF-8'b'ak_gh'u'ak_gh'b'am_ET.UTF-8'u'am_ET.UTF-8'b'am'u'am'b'am_et'u'am_et'b'en_US.ISO8859-1'u'en_US.ISO8859-1'b'american'u'american'b'an_ES.ISO8859-15'u'an_ES.ISO8859-15'b'an_es'u'an_es'b'anp_IN.UTF-8'u'anp_IN.UTF-8'b'anp_in'u'anp_in'b'ar_AA.ISO8859-6'u'ar_AA.ISO8859-6'b'ar'u'ar'b'ar_aa'u'ar_aa'b'ar_AE.ISO8859-6'u'ar_AE.ISO8859-6'b'ar_ae'u'ar_ae'b'ar_BH.ISO8859-6'u'ar_BH.ISO8859-6'b'ar_bh'u'ar_bh'b'ar_DZ.ISO8859-6'u'ar_DZ.ISO8859-6'b'ar_dz'u'ar_dz'b'ar_EG.ISO8859-6'u'ar_EG.ISO8859-6'b'ar_eg'u'ar_eg'b'ar_IN.UTF-8'u'ar_IN.UTF-8'b'ar_in'u'ar_in'b'ar_IQ.ISO8859-6'u'ar_IQ.ISO8859-6'b'ar_iq'u'ar_iq'b'ar_JO.ISO8859-6'u'ar_JO.ISO8859-6'b'ar_jo'u'ar_jo'b'ar_KW.ISO8859-6'u'ar_KW.ISO8859-6'b'ar_kw'u'ar_kw'b'ar_LB.ISO8859-6'u'ar_LB.ISO8859-6'b'ar_lb'u'ar_lb'b'ar_LY.ISO8859-6'u'ar_LY.ISO8859-6'b'ar_ly'u'ar_ly'b'ar_MA.ISO8859-6'u'ar_MA.ISO8859-6'b'ar_ma'u'ar_ma'b'ar_OM.ISO8859-6'u'ar_OM.ISO8859-6'b'ar_om'u'ar_om'b'ar_QA.ISO8859-6'u'ar_QA.ISO8859-6'b'ar_qa'u'ar_qa'b'ar_SA.ISO8859-6'u'ar_SA.ISO8859-6'b'ar_sa'u'ar_sa'b'ar_SD.ISO8859-6'u'ar_SD.ISO8859-6'b'ar_sd'u'ar_sd'b'ar_SS.UTF-8'u'ar_SS.UTF-8'b'ar_ss'u'ar_ss'b'ar_SY.ISO8859-6'u'ar_SY.ISO8859-6'b'ar_sy'u'ar_sy'b'ar_TN.ISO8859-6'u'ar_TN.ISO8859-6'b'ar_tn'u'ar_tn'b'ar_YE.ISO8859-6'u'ar_YE.ISO8859-6'b'ar_ye'u'ar_ye'b'as_IN.UTF-8'u'as_IN.UTF-8'b'as_in'u'as_in'b'ast_ES.ISO8859-15'u'ast_ES.ISO8859-15'b'ast_es'u'ast_es'b'ayc_PE.UTF-8'u'ayc_PE.UTF-8'b'ayc_pe'u'ayc_pe'b'az_AZ.ISO8859-9E'u'az_AZ.ISO8859-9E'b'az'u'az'b'az_az'u'az_az'b'az_az.iso88599e'u'az_az.iso88599e'b'az_IR.UTF-8'u'az_IR.UTF-8'b'az_ir'u'az_ir'b'be_BY.CP1251'u'be_BY.CP1251'b'be'u'be'b'be_BY.UTF-8@latin'u'be_BY.UTF-8@latin'b'be@latin'u'be@latin'b'bg_BG.UTF-8'u'bg_BG.UTF-8'b'be_bg.utf8'u'be_bg.utf8'b'be_by'u'be_by'b'be_by@latin'u'be_by@latin'b'bem_ZM.UTF-8'u'bem_ZM.UTF-8'b'bem_zm'u'bem_zm'b'ber_DZ.UTF-8'u'ber_DZ.UTF-8'b'ber_dz'u'ber_dz'b'ber_MA.UTF-8'u'ber_MA.UTF-8'b'ber_ma'u'ber_ma'b'bg_BG.CP1251'u'bg_BG.CP1251'b'bg'u'bg'b'bg_bg'u'bg_bg'b'bhb_IN.UTF-8'u'bhb_IN.UTF-8'b'bhb_in.utf8'u'bhb_in.utf8'b'bho_IN.UTF-8'u'bho_IN.UTF-8'b'bho_in'u'bho_in'b'bho_NP.UTF-8'u'bho_NP.UTF-8'b'bho_np'u'bho_np'b'bi_VU.UTF-8'u'bi_VU.UTF-8'b'bi_vu'u'bi_vu'b'bn_BD.UTF-8'u'bn_BD.UTF-8'b'bn_bd'u'bn_bd'b'bn_IN.UTF-8'u'bn_IN.UTF-8'b'bn_in'u'bn_in'b'bo_CN.UTF-8'u'bo_CN.UTF-8'b'bo_cn'u'bo_cn'b'bo_IN.UTF-8'u'bo_IN.UTF-8'b'bo_in'u'bo_in'b'nb_NO.ISO8859-1'u'nb_NO.ISO885
9-1'b'bokmal'u'bokmal'b'bokmål'u'bokmål'b'br_FR.ISO8859-1'u'br_FR.ISO8859-1'b'br_fr'u'br_fr'b'brx_IN.UTF-8'u'brx_IN.UTF-8'b'brx_in'u'brx_in'b'bs_BA.ISO8859-2'u'bs_BA.ISO8859-2'b'bs'u'bs'b'bs_ba'u'bs_ba'b'bulgarian'u'bulgarian'b'byn_ER.UTF-8'u'byn_ER.UTF-8'b'byn_er'u'byn_er'b'fr_CA.ISO8859-1'u'fr_CA.ISO8859-1'b'c-french'u'c-french'b'c.ascii'u'c.ascii'b'c.en'u'c.en'b'c.iso88591'u'c.iso88591'b'en_US.UTF-8'u'en_US.UTF-8'b'c.utf8'u'c.utf8'b'c_c'u'c_c'b'c_c.c'u'c_c.c'b'ca_ES.ISO8859-1'u'ca_ES.ISO8859-1'b'ca'u'ca'b'ca_AD.ISO8859-1'u'ca_AD.ISO8859-1'b'ca_ad'u'ca_ad'b'ca_es'u'ca_es'b'ca_ES.UTF-8@valencia'u'ca_ES.UTF-8@valencia'b'ca_es@valencia'u'ca_es@valencia'b'ca_FR.ISO8859-1'u'ca_FR.ISO8859-1'b'ca_fr'u'ca_fr'b'ca_IT.ISO8859-1'u'ca_IT.ISO8859-1'b'ca_it'u'ca_it'b'catalan'u'catalan'b'ce_RU.UTF-8'u'ce_RU.UTF-8'b'ce_ru'u'ce_ru'b'cextend'u'cextend'b'zh_CN.eucCN'u'zh_CN.eucCN'b'chinese-s'u'chinese-s'b'zh_TW.eucTW'u'zh_TW.eucTW'b'chinese-t'u'chinese-t'b'chr_US.UTF-8'u'chr_US.UTF-8'b'chr_us'u'chr_us'b'ckb_IQ.UTF-8'u'ckb_IQ.UTF-8'b'ckb_iq'u'ckb_iq'b'cmn_TW.UTF-8'u'cmn_TW.UTF-8'b'cmn_tw'u'cmn_tw'b'crh_UA.UTF-8'u'crh_UA.UTF-8'b'crh_ua'u'crh_ua'b'hr_HR.ISO8859-2'u'hr_HR.ISO8859-2'b'croatian'u'croatian'b'cs_CZ.ISO8859-2'u'cs_CZ.ISO8859-2'b'cs'u'cs'b'cs_cs'u'cs_cs'b'cs_cz'u'cs_cz'b'csb_PL.UTF-8'u'csb_PL.UTF-8'b'csb_pl'u'csb_pl'b'cv_RU.UTF-8'u'cv_RU.UTF-8'b'cv_ru'u'cv_ru'b'cy_GB.ISO8859-1'u'cy_GB.ISO8859-1'b'cy'u'cy'b'cy_gb'u'cy_gb'b'cz'u'cz'b'cz_cz'u'cz_cz'b'czech'u'czech'b'da_DK.ISO8859-1'u'da_DK.ISO8859-1'b'da'u'da'b'da_dk'u'da_dk'b'danish'u'danish'b'dansk'u'dansk'b'de_DE.ISO8859-1'u'de_DE.ISO8859-1'b'de'u'de'b'de_AT.ISO8859-1'u'de_AT.ISO8859-1'b'de_at'u'de_at'b'de_BE.ISO8859-1'u'de_BE.ISO8859-1'b'de_be'u'de_be'b'de_CH.ISO8859-1'u'de_CH.ISO8859-1'b'de_ch'u'de_ch'b'de_de'u'de_de'b'de_IT.ISO8859-1'u'de_IT.ISO8859-1'b'de_it'u'de_it'b'de_LI.UTF-8'u'de_LI.UTF-8'b'de_li.utf8'u'de_li.utf8'b'de_LU.ISO8859-1'u'de_LU.ISO8859-1'b'de_lu'u'de_lu'b'deutsch'u'deutsch'b'doi_IN.UTF-8'u'doi_IN.UTF-8'b'doi_in'u'doi_in'b'nl_NL.ISO8859-1'u'nl_NL.ISO8859-1'b'dutch'u'dutch'b'nl_BE.ISO8859-1'u'nl_BE.ISO8859-1'b'dutch.iso88591'u'dutch.iso88591'b'dv_MV.UTF-8'u'dv_MV.UTF-8'b'dv_mv'u'dv_mv'b'dz_BT.UTF-8'u'dz_BT.UTF-8'b'dz_bt'u'dz_bt'b'ee_EE.ISO8859-4'u'ee_EE.ISO8859-4'b'ee'u'ee'b'ee_ee'u'ee_ee'b'et_EE.ISO8859-1'u'et_EE.ISO8859-1'b'eesti'u'eesti'b'el_GR.ISO8859-7'u'el_GR.ISO8859-7'b'el'u'el'b'el_CY.ISO8859-7'u'el_CY.ISO8859-7'b'el_cy'u'el_cy'b'el_gr'u'el_gr'b'el_GR.ISO8859-15'u'el_GR.ISO8859-15'b'el_gr@euro'u'el_gr@euro'b'en_AG.UTF-8'u'en_AG.UTF-8'b'en_ag'u'en_ag'b'en_AU.ISO8859-1'u'en_AU.ISO8859-1'b'en_au'u'en_au'b'en_BE.ISO8859-1'u'en_BE.ISO8859-1'b'en_be'u'en_be'b'en_BW.ISO8859-1'u'en_BW.ISO8859-1'b'en_bw'u'en_bw'b'en_CA.ISO8859-1'u'en_CA.ISO8859-1'b'en_ca'u'en_ca'b'en_DK.ISO8859-1'u'en_DK.ISO8859-1'b'en_dk'u'en_dk'b'en_DL.UTF-8'u'en_DL.UTF-8'b'en_dl.utf8'u'en_dl.utf8'b'en_GB.ISO8859-1'u'en_GB.ISO8859-1'b'en_gb'u'en_gb'b'en_HK.ISO8859-1'u'en_HK.ISO8859-1'b'en_hk'u'en_hk'b'en_IE.ISO8859-1'u'en_IE.ISO8859-1'b'en_ie'u'en_ie'b'en_IL.UTF-8'u'en_IL.UTF-8'b'en_il'u'en_il'b'en_IN.ISO8859-1'u'en_IN.ISO8859-1'b'en_in'u'en_in'b'en_NG.UTF-8'u'en_NG.UTF-8'b'en_ng'u'en_ng'b'en_NZ.ISO8859-1'u'en_NZ.ISO8859-1'b'en_nz'u'en_nz'b'en_PH.ISO8859-1'u'en_PH.ISO8859-1'b'en_ph'u'en_ph'b'en_SC.UTF-8'u'en_SC.UTF-8'b'en_sc.utf8'u'en_sc.utf8'b'en_SG.ISO8859-1'u'en_SG.ISO8859-1'b'en_sg'u'en_sg'b'en_uk'u'en_uk'b'en_us'u'en_us'b'en_US.ISO8859-15'u'en_US.ISO8859-15'b'en_us@euro@euro'u'en_us@euro@euro'b'en_ZA.ISO8859-1'u'en_ZA.ISO8859-1'b'en_za'u'en_za'b'en_ZM.UTF-8'u
'en_ZM.UTF-8'b'en_zm'u'en_zm'b'en_ZW.ISO8859-1'u'en_ZW.ISO8859-1'b'en_zw'u'en_zw'b'en_ZS.UTF-8'u'en_ZS.UTF-8'b'en_zw.utf8'u'en_zw.utf8'b'eng_gb'u'eng_gb'b'en_EN.ISO8859-1'u'en_EN.ISO8859-1'b'english'u'english'b'english.iso88591'u'english.iso88591'b'english_uk'u'english_uk'b'english_united-states'u'english_united-states'b'english_united-states.437'u'english_united-states.437'b'english_us'u'english_us'b'eo_XX.ISO8859-3'u'eo_XX.ISO8859-3'b'eo'u'eo'b'eo.UTF-8'u'eo.UTF-8'b'eo.utf8'u'eo.utf8'b'eo_EO.ISO8859-3'u'eo_EO.ISO8859-3'b'eo_eo'u'eo_eo'b'eo_US.UTF-8'u'eo_US.UTF-8'b'eo_us.utf8'u'eo_us.utf8'b'eo_xx'u'eo_xx'b'es_ES.ISO8859-1'u'es_ES.ISO8859-1'b'es'u'es'b'es_AR.ISO8859-1'u'es_AR.ISO8859-1'b'es_ar'u'es_ar'b'es_BO.ISO8859-1'u'es_BO.ISO8859-1'b'es_bo'u'es_bo'b'es_CL.ISO8859-1'u'es_CL.ISO8859-1'b'es_cl'u'es_cl'b'es_CO.ISO8859-1'u'es_CO.ISO8859-1'b'es_co'u'es_co'b'es_CR.ISO8859-1'u'es_CR.ISO8859-1'b'es_cr'u'es_cr'b'es_CU.UTF-8'u'es_CU.UTF-8'b'es_cu'u'es_cu'b'es_DO.ISO8859-1'u'es_DO.ISO8859-1'b'es_do'u'es_do'b'es_EC.ISO8859-1'u'es_EC.ISO8859-1'b'es_ec'u'es_ec'b'es_es'u'es_es'b'es_GT.ISO8859-1'u'es_GT.ISO8859-1'b'es_gt'u'es_gt'b'es_HN.ISO8859-1'u'es_HN.ISO8859-1'b'es_hn'u'es_hn'b'es_MX.ISO8859-1'u'es_MX.ISO8859-1'b'es_mx'u'es_mx'b'es_NI.ISO8859-1'u'es_NI.ISO8859-1'b'es_ni'u'es_ni'b'es_PA.ISO8859-1'u'es_PA.ISO8859-1'b'es_pa'u'es_pa'b'es_PE.ISO8859-1'u'es_PE.ISO8859-1'b'es_pe'u'es_pe'b'es_PR.ISO8859-1'u'es_PR.ISO8859-1'b'es_pr'u'es_pr'b'es_PY.ISO8859-1'u'es_PY.ISO8859-1'b'es_py'u'es_py'b'es_SV.ISO8859-1'u'es_SV.ISO8859-1'b'es_sv'u'es_sv'b'es_US.ISO8859-1'u'es_US.ISO8859-1'b'es_us'u'es_us'b'es_UY.ISO8859-1'u'es_UY.ISO8859-1'b'es_uy'u'es_uy'b'es_VE.ISO8859-1'u'es_VE.ISO8859-1'b'es_ve'u'es_ve'b'estonian'u'estonian'b'et_EE.ISO8859-15'u'et_EE.ISO8859-15'b'et'u'et'b'et_ee'u'et_ee'b'eu_ES.ISO8859-1'u'eu_ES.ISO8859-1'b'eu'u'eu'b'eu_es'u'eu_es'b'eu_FR.ISO8859-1'u'eu_FR.ISO8859-1'b'eu_fr'u'eu_fr'b'fa_IR.UTF-8'u'fa_IR.UTF-8'b'fa'u'fa'b'fa_ir'u'fa_ir'b'fa_IR.ISIRI-3342'u'fa_IR.ISIRI-3342'b'fa_ir.isiri3342'u'fa_ir.isiri3342'b'ff_SN.UTF-8'u'ff_SN.UTF-8'b'ff_sn'u'ff_sn'b'fi_FI.ISO8859-15'u'fi_FI.ISO8859-15'b'fi'u'fi'b'fi_fi'u'fi_fi'b'fil_PH.UTF-8'u'fil_PH.UTF-8'b'fil_ph'u'fil_ph'b'fi_FI.ISO8859-1'u'fi_FI.ISO8859-1'b'finnish'u'finnish'b'fo_FO.ISO8859-1'u'fo_FO.ISO8859-1'b'fo'u'fo'b'fo_fo'u'fo_fo'b'fr_FR.ISO8859-1'u'fr_FR.ISO8859-1'b'fr'u'fr'b'fr_BE.ISO8859-1'u'fr_BE.ISO8859-1'b'fr_be'u'fr_be'b'fr_ca'u'fr_ca'b'fr_CH.ISO8859-1'u'fr_CH.ISO8859-1'b'fr_ch'u'fr_ch'b'fr_fr'u'fr_fr'b'fr_LU.ISO8859-1'u'fr_LU.ISO8859-1'b'fr_lu'u'fr_lu'b'français'u'français'b'fre_fr'u'fre_fr'b'french'u'french'b'french.iso88591'u'french.iso88591'b'french_france'u'french_france'b'fur_IT.UTF-8'u'fur_IT.UTF-8'b'fur_it'u'fur_it'b'fy_DE.UTF-8'u'fy_DE.UTF-8'b'fy_de'u'fy_de'b'fy_NL.UTF-8'u'fy_NL.UTF-8'b'fy_nl'u'fy_nl'b'ga_IE.ISO8859-1'u'ga_IE.ISO8859-1'b'ga'u'ga'b'ga_ie'u'ga_ie'b'gl_ES.ISO8859-1'u'gl_ES.ISO8859-1'b'galego'u'galego'b'galician'u'galician'b'gd_GB.ISO8859-1'u'gd_GB.ISO8859-1'b'gd'u'gd'b'gd_gb'u'gd_gb'b'ger_de'u'ger_de'b'german'u'german'b'german.iso88591'u'german.iso88591'b'german_germany'u'german_germany'b'gez_ER.UTF-8'u'gez_ER.UTF-8'b'gez_er'u'gez_er'b'gez_ET.UTF-8'u'gez_ET.UTF-8'b'gez_et'u'gez_et'b'gl'u'gl'b'gl_es'u'gl_es'b'gu_IN.UTF-8'u'gu_IN.UTF-8'b'gu_in'u'gu_in'b'gv_GB.ISO8859-1'u'gv_GB.ISO8859-1'b'gv'u'gv'b'gv_gb'u'gv_gb'b'ha_NG.UTF-8'u'ha_NG.UTF-8'b'ha_ng'u'ha_ng'b'hak_TW.UTF-8'u'hak_TW.UTF-8'b'hak_tw'u'hak_tw'b'he_IL.ISO8859-8'u'he_IL.ISO8859-8'b'he'u'he'b'he_il'u'he_il'b'hi_IN.ISCII-DEV'u'hi_IN.ISCII-DEV'b'hi'u'hi'b'hi_in'u'hi_in'
b'hi_in.isciidev'u'hi_in.isciidev'b'hif_FJ.UTF-8'u'hif_FJ.UTF-8'b'hif_fj'u'hif_fj'b'hne_IN.UTF-8'u'hne_IN.UTF-8'b'hne'u'hne'b'hne_in'u'hne_in'b'hr_hr'u'hr_hr'b'hrvatski'u'hrvatski'b'hsb_DE.ISO8859-2'u'hsb_DE.ISO8859-2'b'hsb_de'u'hsb_de'b'ht_HT.UTF-8'u'ht_HT.UTF-8'b'ht_ht'u'ht_ht'b'hu_HU.ISO8859-2'u'hu_HU.ISO8859-2'b'hu'u'hu'b'hu_hu'u'hu_hu'b'hungarian'u'hungarian'b'hy_AM.UTF-8'u'hy_AM.UTF-8'b'hy_am'u'hy_am'b'hy_AM.ARMSCII_8'u'hy_AM.ARMSCII_8'b'hy_am.armscii8'u'hy_am.armscii8'b'ia.UTF-8'u'ia.UTF-8'b'ia'u'ia'b'ia_FR.UTF-8'u'ia_FR.UTF-8'b'ia_fr'u'ia_fr'b'is_IS.ISO8859-1'u'is_IS.ISO8859-1'b'icelandic'u'icelandic'b'id_ID.ISO8859-1'u'id_ID.ISO8859-1'b'id_id'u'id_id'b'ig_NG.UTF-8'u'ig_NG.UTF-8'b'ig_ng'u'ig_ng'b'ik_CA.UTF-8'u'ik_CA.UTF-8'b'ik_ca'u'ik_ca'b'in_id'u'in_id'b'is_is'u'is_is'b'iso8859-1'u'iso8859-1'b'iso8859-15'u'iso8859-15'b'it_IT.ISO8859-1'u'it_IT.ISO8859-1'b'it'u'it'b'it_CH.ISO8859-1'u'it_CH.ISO8859-1'b'it_ch'u'it_ch'b'it_it'u'it_it'b'italian'u'italian'b'iu_CA.NUNACOM-8'u'iu_CA.NUNACOM-8'b'iu'u'iu'b'iu_ca'u'iu_ca'b'iu_ca.nunacom8'u'iu_ca.nunacom8'b'iw'u'iw'b'iw_il'u'iw_il'b'iw_IL.UTF-8'u'iw_IL.UTF-8'b'iw_il.utf8'u'iw_il.utf8'b'ja_JP.eucJP'u'ja_JP.eucJP'b'ja'u'ja'b'ja_jp'u'ja_jp'b'ja_jp.euc'u'ja_jp.euc'b'ja_JP.SJIS'u'ja_JP.SJIS'b'ja_jp.mscode'u'ja_jp.mscode'b'ja_jp.pck'u'ja_jp.pck'b'japan'u'japan'b'japanese'u'japanese'b'japanese-euc'u'japanese-euc'b'japanese.euc'u'japanese.euc'b'jp_jp'u'jp_jp'b'ka_GE.GEORGIAN-ACADEMY'u'ka_GE.GEORGIAN-ACADEMY'b'ka'u'ka'b'ka_ge'u'ka_ge'b'ka_ge.georgianacademy'u'ka_ge.georgianacademy'b'ka_GE.GEORGIAN-PS'u'ka_GE.GEORGIAN-PS'b'ka_ge.georgianps'u'ka_ge.georgianps'b'ka_ge.georgianrs'u'ka_ge.georgianrs'b'kab_DZ.UTF-8'u'kab_DZ.UTF-8'b'kab_dz'u'kab_dz'b'kk_KZ.ptcp154'u'kk_KZ.ptcp154'b'kk_kz'u'kk_kz'b'kl_GL.ISO8859-1'u'kl_GL.ISO8859-1'b'kl'u'kl'b'kl_gl'u'kl_gl'b'km_KH.UTF-8'u'km_KH.UTF-8'b'km_kh'u'km_kh'b'kn_IN.UTF-8'u'kn_IN.UTF-8'b'kn'u'kn'b'kn_in'u'kn_in'b'ko_KR.eucKR'u'ko_KR.eucKR'b'ko'u'ko'b'ko_kr'u'ko_kr'b'ko_kr.euc'u'ko_kr.euc'b'kok_IN.UTF-8'u'kok_IN.UTF-8'b'kok_in'u'kok_in'b'korean.euc'u'korean.euc'b'ks_IN.UTF-8'u'ks_IN.UTF-8'b'ks'u'ks'b'ks_in'u'ks_in'b'ks_IN.UTF-8@devanagari'u'ks_IN.UTF-8@devanagari'b'ks_in@devanagari.utf8'u'ks_in@devanagari.utf8'b'ku_TR.ISO8859-9'u'ku_TR.ISO8859-9'b'ku_tr'u'ku_tr'b'kw_GB.ISO8859-1'u'kw_GB.ISO8859-1'b'kw'u'kw'b'kw_gb'u'kw_gb'b'ky_KG.UTF-8'u'ky_KG.UTF-8'b'ky'u'ky'b'ky_kg'u'ky_kg'b'lb_LU.UTF-8'u'lb_LU.UTF-8'b'lb_lu'u'lb_lu'b'lg_UG.ISO8859-10'u'lg_UG.ISO8859-10'b'lg_ug'u'lg_ug'b'li_BE.UTF-8'u'li_BE.UTF-8'b'li_be'u'li_be'b'li_NL.UTF-8'u'li_NL.UTF-8'b'li_nl'u'li_nl'b'lij_IT.UTF-8'u'lij_IT.UTF-8'b'lij_it'u'lij_it'b'lt_LT.ISO8859-13'u'lt_LT.ISO8859-13'b'lithuanian'u'lithuanian'b'ln_CD.UTF-8'u'ln_CD.UTF-8'b'ln_cd'u'ln_cd'b'lo_LA.MULELAO-1'u'lo_LA.MULELAO-1'b'lo'u'lo'b'lo_la'u'lo_la'b'lo_LA.IBM-CP1133'u'lo_LA.IBM-CP1133'b'lo_la.cp1133'u'lo_la.cp1133'b'lo_la.ibmcp1133'u'lo_la.ibmcp1133'b'lo_la.mulelao1'u'lo_la.mulelao1'b'lt_lt'u'lt_lt'b'lv_LV.ISO8859-13'u'lv_LV.ISO8859-13'b'lv'u'lv'b'lv_lv'u'lv_lv'b'lzh_TW.UTF-8'u'lzh_TW.UTF-8'b'lzh_tw'u'lzh_tw'b'mag_IN.UTF-8'u'mag_IN.UTF-8'b'mag_in'u'mag_in'b'mai_IN.UTF-8'u'mai_IN.UTF-8'b'mai'u'mai'b'mai_in'u'mai_in'b'mai_NP.UTF-8'u'mai_NP.UTF-8'b'mai_np'u'mai_np'b'mfe_MU.UTF-8'u'mfe_MU.UTF-8'b'mfe_mu'u'mfe_mu'b'mg_MG.ISO8859-15'u'mg_MG.ISO8859-15'b'mg_mg'u'mg_mg'b'mhr_RU.UTF-8'u'mhr_RU.UTF-8'b'mhr_ru'u'mhr_ru'b'mi_NZ.ISO8859-1'u'mi_NZ.ISO8859-1'b'mi'u'mi'b'mi_nz'u'mi_nz'b'miq_NI.UTF-8'u'miq_NI.UTF-8'b'miq_ni'u'miq_ni'b'mjw_IN.UTF-8'u'mjw_IN.UTF-8'b'mjw_in'u'mjw_in'b'mk_MK.ISO8859-5'u'mk_MK.ISO8859
-5'b'mk'u'mk'b'mk_mk'u'mk_mk'b'ml_IN.UTF-8'u'ml_IN.UTF-8'b'ml'u'ml'b'ml_in'u'ml_in'b'mn_MN.UTF-8'u'mn_MN.UTF-8'b'mn_mn'u'mn_mn'b'mni_IN.UTF-8'u'mni_IN.UTF-8'b'mni_in'u'mni_in'b'mr_IN.UTF-8'u'mr_IN.UTF-8'b'mr'u'mr'b'mr_in'u'mr_in'b'ms_MY.ISO8859-1'u'ms_MY.ISO8859-1'b'ms'u'ms'b'ms_my'u'ms_my'b'mt_MT.ISO8859-3'u'mt_MT.ISO8859-3'b'mt'u'mt'b'mt_mt'u'mt_mt'b'my_MM.UTF-8'u'my_MM.UTF-8'b'my_mm'u'my_mm'b'nan_TW.UTF-8'u'nan_TW.UTF-8'b'nan_tw'u'nan_tw'b'nb'u'nb'b'nb_no'u'nb_no'b'nds_DE.UTF-8'u'nds_DE.UTF-8'b'nds_de'u'nds_de'b'nds_NL.UTF-8'u'nds_NL.UTF-8'b'nds_nl'u'nds_nl'b'ne_NP.UTF-8'u'ne_NP.UTF-8'b'ne_np'u'ne_np'b'nhn_MX.UTF-8'u'nhn_MX.UTF-8'b'nhn_mx'u'nhn_mx'b'niu_NU.UTF-8'u'niu_NU.UTF-8'b'niu_nu'u'niu_nu'b'niu_NZ.UTF-8'u'niu_NZ.UTF-8'b'niu_nz'u'niu_nz'b'nl'u'nl'b'nl_AW.UTF-8'u'nl_AW.UTF-8'b'nl_aw'u'nl_aw'b'nl_be'u'nl_be'b'nl_nl'u'nl_nl'b'nn_NO.ISO8859-1'u'nn_NO.ISO8859-1'b'nn'u'nn'b'nn_no'u'nn_no'b'no_NO.ISO8859-1'u'no_NO.ISO8859-1'b'no'u'no'b'ny_NO.ISO8859-1'u'ny_NO.ISO8859-1'b'no@nynorsk'u'no@nynorsk'b'no_no'u'no_no'b'no_no.iso88591@bokmal'u'no_no.iso88591@bokmal'b'no_no.iso88591@nynorsk'u'no_no.iso88591@nynorsk'b'norwegian'u'norwegian'b'nr_ZA.ISO8859-1'u'nr_ZA.ISO8859-1'b'nr'u'nr'b'nr_za'u'nr_za'b'nso_ZA.ISO8859-15'u'nso_ZA.ISO8859-15'b'nso'u'nso'b'nso_za'u'nso_za'b'ny'u'ny'b'ny_no'u'ny_no'b'nynorsk'u'nynorsk'b'oc_FR.ISO8859-1'u'oc_FR.ISO8859-1'b'oc'u'oc'b'oc_fr'u'oc_fr'b'om_ET.UTF-8'u'om_ET.UTF-8'b'om_et'u'om_et'b'om_KE.ISO8859-1'u'om_KE.ISO8859-1'b'om_ke'u'om_ke'b'or_IN.UTF-8'u'or_IN.UTF-8'b'or_in'u'or_in'b'os_RU.UTF-8'u'os_RU.UTF-8'b'os_ru'u'os_ru'b'pa_IN.UTF-8'u'pa_IN.UTF-8'b'pa'u'pa'b'pa_in'u'pa_in'b'pa_PK.UTF-8'u'pa_PK.UTF-8'b'pa_pk'u'pa_pk'b'pap_AN.UTF-8'u'pap_AN.UTF-8'b'pap_an'u'pap_an'b'pap_AW.UTF-8'u'pap_AW.UTF-8'b'pap_aw'u'pap_aw'b'pap_CW.UTF-8'u'pap_CW.UTF-8'b'pap_cw'u'pap_cw'b'pd_US.ISO8859-1'u'pd_US.ISO8859-1'b'pd'u'pd'b'pd_DE.ISO8859-1'u'pd_DE.ISO8859-1'b'pd_de'u'pd_de'b'pd_us'u'pd_us'b'ph_PH.ISO8859-1'u'ph_PH.ISO8859-1'b'ph'u'ph'b'ph_ph'u'ph_ph'b'pl_PL.ISO8859-2'u'pl_PL.ISO8859-2'b'pl'u'pl'b'pl_pl'u'pl_pl'b'polish'u'polish'b'pt_PT.ISO8859-1'u'pt_PT.ISO8859-1'b'portuguese'u'portuguese'b'pt_BR.ISO8859-1'u'pt_BR.ISO8859-1'b'portuguese_brazil'u'portuguese_brazil'b'posix-utf2'u'posix-utf2'b'pp_AN.ISO8859-1'u'pp_AN.ISO8859-1'b'pp'u'pp'b'pp_an'u'pp_an'b'ps_AF.UTF-8'u'ps_AF.UTF-8'b'ps_af'u'ps_af'b'pt'u'pt'b'pt_br'u'pt_br'b'pt_pt'u'pt_pt'b'quz_PE.UTF-8'u'quz_PE.UTF-8'b'quz_pe'u'quz_pe'b'raj_IN.UTF-8'u'raj_IN.UTF-8'b'raj_in'u'raj_in'b'ro_RO.ISO8859-2'u'ro_RO.ISO8859-2'b'ro'u'ro'b'ro_ro'u'ro_ro'b'romanian'u'romanian'b'ru_RU.UTF-8'u'ru_RU.UTF-8'b'ru'u'ru'b'ru_ru'u'ru_ru'b'ru_UA.KOI8-U'u'ru_UA.KOI8-U'b'ru_ua'u'ru_ua'b'rumanian'u'rumanian'b'ru_RU.KOI8-R'u'ru_RU.KOI8-R'b'russian'u'russian'b'rw_RW.ISO8859-1'u'rw_RW.ISO8859-1'b'rw'u'rw'b'rw_rw'u'rw_rw'b'sa_IN.UTF-8'u'sa_IN.UTF-8'b'sa_in'u'sa_in'b'sat_IN.UTF-8'u'sat_IN.UTF-8'b'sat_in'u'sat_in'b'sc_IT.UTF-8'u'sc_IT.UTF-8'b'sc_it'u'sc_it'b'sd_IN.UTF-8'u'sd_IN.UTF-8'b'sd'u'sd'b'sd_in'u'sd_in'b'sd_IN.UTF-8@devanagari'u'sd_IN.UTF-8@devanagari'b'sd_in@devanagari.utf8'u'sd_in@devanagari.utf8'b'sd_PK.UTF-8'u'sd_PK.UTF-8'b'sd_pk'u'sd_pk'b'se_NO.UTF-8'u'se_NO.UTF-8'b'se_no'u'se_no'b'sr_RS.UTF-8@latin'u'sr_RS.UTF-8@latin'b'serbocroatian'u'serbocroatian'b'sgs_LT.UTF-8'u'sgs_LT.UTF-8'b'sgs_lt'u'sgs_lt'b'sh'u'sh'b'sr_CS.ISO8859-2'u'sr_CS.ISO8859-2'b'sh_ba.iso88592@bosnia'u'sh_ba.iso88592@bosnia'b'sh_HR.ISO8859-2'u'sh_HR.ISO8859-2'b'sh_hr'u'sh_hr'b'sh_hr.iso88592'u'sh_hr.iso88592'b'sh_sp'u'sh_sp'b'sh_yu'u'sh_yu'b'shn_MM.UTF-8'u'shn_MM.UTF-8'b'shn_mm'u'shn_mm'b'
shs_CA.UTF-8'u'shs_CA.UTF-8'b'shs_ca'u'shs_ca'b'si_LK.UTF-8'u'si_LK.UTF-8'b'si'u'si'b'si_lk'u'si_lk'b'sid_ET.UTF-8'u'sid_ET.UTF-8'b'sid_et'u'sid_et'b'sinhala'u'sinhala'b'sk_SK.ISO8859-2'u'sk_SK.ISO8859-2'b'sk'u'sk'b'sk_sk'u'sk_sk'b'sl_SI.ISO8859-2'u'sl_SI.ISO8859-2'b'sl'u'sl'b'sl_CS.ISO8859-2'u'sl_CS.ISO8859-2'b'sl_cs'u'sl_cs'b'sl_si'u'sl_si'b'slovak'u'slovak'b'slovene'u'slovene'b'slovenian'u'slovenian'b'sm_WS.UTF-8'u'sm_WS.UTF-8'b'sm_ws'u'sm_ws'b'so_DJ.ISO8859-1'u'so_DJ.ISO8859-1'b'so_dj'u'so_dj'b'so_ET.UTF-8'u'so_ET.UTF-8'b'so_et'u'so_et'b'so_KE.ISO8859-1'u'so_KE.ISO8859-1'b'so_ke'u'so_ke'b'so_SO.ISO8859-1'u'so_SO.ISO8859-1'b'so_so'u'so_so'b'sr_CS.ISO8859-5'u'sr_CS.ISO8859-5'b'sp'u'sp'b'sp_yu'u'sp_yu'b'spanish'u'spanish'b'spanish_spain'u'spanish_spain'b'sq_AL.ISO8859-2'u'sq_AL.ISO8859-2'b'sq'u'sq'b'sq_al'u'sq_al'b'sq_MK.UTF-8'u'sq_MK.UTF-8'b'sq_mk'u'sq_mk'b'sr_RS.UTF-8'u'sr_RS.UTF-8'b'sr'u'sr'b'sr@cyrillic'u'sr@cyrillic'b'sr_CS.UTF-8@latin'u'sr_CS.UTF-8@latin'b'sr@latn'u'sr@latn'b'sr_CS.UTF-8'u'sr_CS.UTF-8'b'sr_cs'u'sr_cs'b'sr_cs.iso88592@latn'u'sr_cs.iso88592@latn'b'sr_cs@latn'u'sr_cs@latn'b'sr_ME.UTF-8'u'sr_ME.UTF-8'b'sr_me'u'sr_me'b'sr_rs'u'sr_rs'b'sr_rs@latn'u'sr_rs@latn'b'sr_sp'u'sr_sp'b'sr_yu'u'sr_yu'b'sr_CS.CP1251'u'sr_CS.CP1251'b'sr_yu.cp1251@cyrillic'u'sr_yu.cp1251@cyrillic'b'sr_yu.iso88592'u'sr_yu.iso88592'b'sr_yu.iso88595'u'sr_yu.iso88595'b'sr_yu.iso88595@cyrillic'u'sr_yu.iso88595@cyrillic'b'sr_yu.microsoftcp1251@cyrillic'u'sr_yu.microsoftcp1251@cyrillic'b'sr_yu.utf8'u'sr_yu.utf8'b'sr_yu.utf8@cyrillic'u'sr_yu.utf8@cyrillic'b'sr_yu@cyrillic'u'sr_yu@cyrillic'b'ss_ZA.ISO8859-1'u'ss_ZA.ISO8859-1'b'ss'u'ss'b'ss_za'u'ss_za'b'st_ZA.ISO8859-1'u'st_ZA.ISO8859-1'b'st'u'st'b'st_za'u'st_za'b'sv_SE.ISO8859-1'u'sv_SE.ISO8859-1'b'sv'u'sv'b'sv_FI.ISO8859-1'u'sv_FI.ISO8859-1'b'sv_fi'u'sv_fi'b'sv_se'u'sv_se'b'sw_KE.UTF-8'u'sw_KE.UTF-8'b'sw_ke'u'sw_ke'b'sw_TZ.UTF-8'u'sw_TZ.UTF-8'b'sw_tz'u'sw_tz'b'swedish'u'swedish'b'szl_PL.UTF-8'u'szl_PL.UTF-8'b'szl_pl'u'szl_pl'b'ta_IN.TSCII-0'u'ta_IN.TSCII-0'b'ta'u'ta'b'ta_in'u'ta_in'b'ta_in.tscii'u'ta_in.tscii'b'ta_in.tscii0'u'ta_in.tscii0'b'ta_LK.UTF-8'u'ta_LK.UTF-8'b'ta_lk'u'ta_lk'b'tcy_IN.UTF-8'u'tcy_IN.UTF-8'b'tcy_in.utf8'u'tcy_in.utf8'b'te_IN.UTF-8'u'te_IN.UTF-8'b'te'u'te'b'te_in'u'te_in'b'tg_TJ.KOI8-C'u'tg_TJ.KOI8-C'b'tg'u'tg'b'tg_tj'u'tg_tj'b'th_TH.ISO8859-11'u'th_TH.ISO8859-11'b'th'u'th'b'th_th'u'th_th'b'th_TH.TIS620'u'th_TH.TIS620'b'th_th.tactis'u'th_th.tactis'b'th_th.tis620'u'th_th.tis620'b'the_NP.UTF-8'u'the_NP.UTF-8'b'the_np'u'the_np'b'ti_ER.UTF-8'u'ti_ER.UTF-8'b'ti_er'u'ti_er'b'ti_ET.UTF-8'u'ti_ET.UTF-8'b'ti_et'u'ti_et'b'tig_ER.UTF-8'u'tig_ER.UTF-8'b'tig_er'u'tig_er'b'tk_TM.UTF-8'u'tk_TM.UTF-8'b'tk_tm'u'tk_tm'b'tl_PH.ISO8859-1'u'tl_PH.ISO8859-1'b'tl'u'tl'b'tl_ph'u'tl_ph'b'tn_ZA.ISO8859-15'u'tn_ZA.ISO8859-15'b'tn'u'tn'b'tn_za'u'tn_za'b'to_TO.UTF-8'u'to_TO.UTF-8'b'to_to'u'to_to'b'tpi_PG.UTF-8'u'tpi_PG.UTF-8'b'tpi_pg'u'tpi_pg'b'tr_TR.ISO8859-9'u'tr_TR.ISO8859-9'b'tr'u'tr'b'tr_CY.ISO8859-9'u'tr_CY.ISO8859-9'b'tr_cy'u'tr_cy'b'tr_tr'u'tr_tr'b'ts_ZA.ISO8859-1'u'ts_ZA.ISO8859-1'b'ts'u'ts'b'ts_za'u'ts_za'b'tt_RU.TATAR-CYR'u'tt_RU.TATAR-CYR'b'tt'u'tt'b'tt_ru'u'tt_ru'b'tt_ru.tatarcyr'u'tt_ru.tatarcyr'b'tt_RU.UTF-8@iqtelif'u'tt_RU.UTF-8@iqtelif'b'tt_ru@iqtelif'u'tt_ru@iqtelif'b'turkish'u'turkish'b'ug_CN.UTF-8'u'ug_CN.UTF-8'b'ug_cn'u'ug_cn'b'uk_UA.KOI8-U'u'uk_UA.KOI8-U'b'uk'u'uk'b'uk_ua'u'uk_ua'b'en_US.utf'u'en_US.utf'b'univ'u'univ'b'universal.utf8@ucs4'u'universal.utf8@ucs4'b'unm_US.UTF-8'u'unm_US.UTF-8'b'unm_us'u'unm_us'b'ur_PK.CP1256'u'ur_PK.CP1256'b'ur'u'ur
'b'ur_IN.UTF-8'u'ur_IN.UTF-8'b'ur_in'u'ur_in'b'ur_pk'u'ur_pk'b'uz_UZ.UTF-8'u'uz_UZ.UTF-8'b'uz'u'uz'b'uz_uz'u'uz_uz'b'uz_uz@cyrillic'u'uz_uz@cyrillic'b've_ZA.UTF-8'u've_ZA.UTF-8'b've'u've'b've_za'u've_za'b'vi_VN.TCVN'u'vi_VN.TCVN'b'vi'u'vi'b'vi_vn'u'vi_vn'b'vi_vn.tcvn'u'vi_vn.tcvn'b'vi_vn.tcvn5712'u'vi_vn.tcvn5712'b'vi_VN.VISCII'u'vi_VN.VISCII'b'vi_vn.viscii'u'vi_vn.viscii'b'vi_vn.viscii111'u'vi_vn.viscii111'b'wa_BE.ISO8859-1'u'wa_BE.ISO8859-1'b'wa'u'wa'b'wa_be'u'wa_be'b'wae_CH.UTF-8'u'wae_CH.UTF-8'b'wae_ch'u'wae_ch'b'wal_ET.UTF-8'u'wal_ET.UTF-8'b'wal_et'u'wal_et'b'wo_SN.UTF-8'u'wo_SN.UTF-8'b'wo_sn'u'wo_sn'b'xh_ZA.ISO8859-1'u'xh_ZA.ISO8859-1'b'xh'u'xh'b'xh_za'u'xh_za'b'yi_US.CP1255'u'yi_US.CP1255'b'yi'u'yi'b'yi_us'u'yi_us'b'yo_NG.UTF-8'u'yo_NG.UTF-8'b'yo_ng'u'yo_ng'b'yue_HK.UTF-8'u'yue_HK.UTF-8'b'yue_hk'u'yue_hk'b'yuw_PG.UTF-8'u'yuw_PG.UTF-8'b'yuw_pg'u'yuw_pg'b'zh'u'zh'b'zh_CN.gb2312'u'zh_CN.gb2312'b'zh_cn'u'zh_cn'b'zh_TW.big5'u'zh_TW.big5'b'zh_cn.big5'u'zh_cn.big5'b'zh_cn.euc'u'zh_cn.euc'b'zh_HK.big5hkscs'u'zh_HK.big5hkscs'b'zh_hk'u'zh_hk'b'zh_hk.big5hk'u'zh_hk.big5hk'b'zh_SG.GB2312'u'zh_SG.GB2312'b'zh_sg'u'zh_sg'b'zh_SG.GBK'u'zh_SG.GBK'b'zh_sg.gbk'u'zh_sg.gbk'b'zh_tw'u'zh_tw'b'zh_tw.euc'u'zh_tw.euc'b'zh_tw.euctw'u'zh_tw.euctw'b'zu_ZA.ISO8859-1'u'zu_ZA.ISO8859-1'b'zu'u'zu'b'zu_za'u'zu_za'b'af_ZA'u'af_ZA'b'sq_AL'u'sq_AL'b'gsw_FR'u'gsw_FR'b'am_ET'u'am_ET'b'ar_SA'u'ar_SA'b'ar_IQ'u'ar_IQ'b'ar_EG'u'ar_EG'b'ar_LY'u'ar_LY'b'ar_DZ'u'ar_DZ'b'ar_MA'u'ar_MA'b'ar_TN'u'ar_TN'b'ar_OM'u'ar_OM'b'ar_YE'u'ar_YE'b'ar_SY'u'ar_SY'b'ar_JO'u'ar_JO'b'ar_LB'u'ar_LB'b'ar_KW'u'ar_KW'b'ar_AE'u'ar_AE'b'ar_BH'u'ar_BH'b'ar_QA'u'ar_QA'b'hy_AM'u'hy_AM'b'as_IN'u'as_IN'b'az_AZ'u'az_AZ'b'ba_RU'u'ba_RU'b'eu_ES'u'eu_ES'b'be_BY'u'be_BY'b'bn_IN'u'bn_IN'b'bs_BA'u'bs_BA'b'br_FR'u'br_FR'b'bg_BG'u'bg_BG'b'ca_ES'u'ca_ES'b'zh_CHS'u'zh_CHS'b'zh_TW'u'zh_TW'b'zh_CN'u'zh_CN'b'zh_HK'u'zh_HK'b'zh_SG'u'zh_SG'b'zh_MO'u'zh_MO'b'zh_CHT'u'zh_CHT'b'co_FR'u'co_FR'b'hr_HR'u'hr_HR'b'hr_BA'u'hr_BA'b'cs_CZ'u'cs_CZ'b'da_DK'u'da_DK'b'gbz_AF'u'gbz_AF'b'div_MV'u'div_MV'b'nl_NL'u'nl_NL'b'nl_BE'u'nl_BE'b'en_US'u'en_US'b'en_GB'u'en_GB'b'en_AU'u'en_AU'b'en_CA'u'en_CA'b'en_NZ'u'en_NZ'b'en_IE'u'en_IE'b'en_ZA'u'en_ZA'b'en_JA'u'en_JA'b'en_CB'u'en_CB'b'en_BZ'u'en_BZ'b'en_TT'u'en_TT'b'en_ZW'u'en_ZW'b'en_PH'u'en_PH'b'en_IN'u'en_IN'b'en_MY'u'en_MY'b'et_EE'u'et_EE'b'fo_FO'u'fo_FO'b'fil_PH'u'fil_PH'b'fi_FI'u'fi_FI'b'fr_FR'u'fr_FR'b'fr_BE'u'fr_BE'b'fr_CA'u'fr_CA'b'fr_CH'u'fr_CH'b'fr_LU'u'fr_LU'b'fr_MC'u'fr_MC'b'fy_NL'u'fy_NL'b'gl_ES'u'gl_ES'b'ka_GE'u'ka_GE'b'de_DE'u'de_DE'b'de_CH'u'de_CH'b'de_AT'u'de_AT'b'de_LU'u'de_LU'b'de_LI'u'de_LI'b'el_GR'u'el_GR'b'kl_GL'u'kl_GL'b'gu_IN'u'gu_IN'b'ha_NG'u'ha_NG'b'he_IL'u'he_IL'b'hi_IN'u'hi_IN'b'hu_HU'u'hu_HU'b'is_IS'u'is_IS'b'id_ID'u'id_ID'b'iu_CA'u'iu_CA'b'ga_IE'u'ga_IE'b'it_IT'u'it_IT'b'it_CH'u'it_CH'b'ja_JP'u'ja_JP'b'kn_IN'u'kn_IN'b'kk_KZ'u'kk_KZ'b'kh_KH'u'kh_KH'b'qut_GT'u'qut_GT'b'rw_RW'u'rw_RW'b'kok_IN'u'kok_IN'b'ko_KR'u'ko_KR'b'ky_KG'u'ky_KG'b'lo_LA'u'lo_LA'b'lv_LV'u'lv_LV'b'lt_LT'u'lt_LT'b'dsb_DE'u'dsb_DE'b'lb_LU'u'lb_LU'b'mk_MK'u'mk_MK'b'ms_MY'u'ms_MY'b'ms_BN'u'ms_BN'b'ml_IN'u'ml_IN'b'mt_MT'u'mt_MT'b'mi_NZ'u'mi_NZ'b'arn_CL'u'arn_CL'b'mr_IN'u'mr_IN'b'moh_CA'u'moh_CA'b'mn_MN'u'mn_MN'b'mn_CN'u'mn_CN'b'ne_NP'u'ne_NP'b'nb_NO'u'nb_NO'b'nn_NO'u'nn_NO'b'oc_FR'u'oc_FR'b'or_IN'u'or_IN'b'ps_AF'u'ps_AF'b'fa_IR'u'fa_IR'b'pl_PL'u'pl_PL'b'pt_BR'u'pt_BR'b'pt_PT'u'pt_PT'b'pa_IN'u'pa_IN'b'quz_BO'u'quz_BO'b'quz_EC'u'quz_EC'b'quz_PE'u'quz_PE'b'ro_RO'u'ro_RO'b'rm_CH'u'rm_CH'b'ru_RU'u'ru_RU'b'smn_FI'u'smn_FI'b'smj_NO'u'smj_NO'b'smj_SE'u'smj_SE'b'se
_NO'u'se_NO'b'se_SE'u'se_SE'b'se_FI'u'se_FI'b'sms_FI'u'sms_FI'b'sma_NO'u'sma_NO'b'sma_SE'u'sma_SE'b'sa_IN'u'sa_IN'b'sr_SP'u'sr_SP'b'sr_BA'u'sr_BA'b'si_LK'u'si_LK'b'ns_ZA'u'ns_ZA'b'tn_ZA'u'tn_ZA'b'sk_SK'u'sk_SK'b'sl_SI'u'sl_SI'b'es_ES'u'es_ES'b'es_MX'u'es_MX'b'es_GT'u'es_GT'b'es_CR'u'es_CR'b'es_PA'u'es_PA'b'es_DO'u'es_DO'b'es_VE'u'es_VE'b'es_CO'u'es_CO'b'es_PE'u'es_PE'b'es_AR'u'es_AR'b'es_EC'u'es_EC'b'es_CL'u'es_CL'b'es_UR'u'es_UR'b'es_PY'u'es_PY'b'es_BO'u'es_BO'b'es_SV'u'es_SV'b'es_HN'u'es_HN'b'es_NI'u'es_NI'b'es_PR'u'es_PR'b'es_US'u'es_US'b'sw_KE'u'sw_KE'b'sv_SE'u'sv_SE'b'sv_FI'u'sv_FI'b'syr_SY'u'syr_SY'b'tg_TJ'u'tg_TJ'b'tmz_DZ'u'tmz_DZ'b'ta_IN'u'ta_IN'b'tt_RU'u'tt_RU'b'te_IN'u'te_IN'b'th_TH'u'th_TH'b'bo_BT'u'bo_BT'b'bo_CN'u'bo_CN'b'tr_TR'u'tr_TR'b'tk_TM'u'tk_TM'b'ug_CN'u'ug_CN'b'uk_UA'u'uk_UA'b'wen_DE'u'wen_DE'b'ur_PK'u'ur_PK'b'ur_IN'u'ur_IN'b'uz_UZ'u'uz_UZ'b'vi_VN'u'vi_VN'b'cy_GB'u'cy_GB'b'wo_SN'u'wo_SN'b'xh_ZA'u'xh_ZA'b'sah_RU'u'sah_RU'b'ii_CN'u'ii_CN'b'yo_NG'u'yo_NG'b'zu_ZA'u'zu_ZA'b' Test function. + 'u' Test function. + 'b'LC_'u'LC_'b'Locale defaults as determined by getdefaultlocale():'u'Locale defaults as determined by getdefaultlocale():'b'Language: 'u'Language: 'b'(undefined)'u'(undefined)'b'Encoding: 'u'Encoding: 'b'Locale settings on startup:'u'Locale settings on startup:'b' Language: 'u' Language: 'b' Encoding: 'u' Encoding: 'b'Locale settings after calling resetlocale():'u'Locale settings after calling resetlocale():'b'Locale settings after calling setlocale(LC_ALL, ""):'u'Locale settings after calling setlocale(LC_ALL, ""):'b'NOTE:'u'NOTE:'b'setlocale(LC_ALL, "") does not support the default locale'u'setlocale(LC_ALL, "") does not support the default locale'b'given in the OS environment variables.'u'given in the OS environment variables.'b'Locale aliasing:'u'Locale aliasing:'b'Number formatting:'u'Number formatting:'Synchronization primitives.Context manager. + + This enables the following idiom for acquiring and releasing a + lock around a block: + + with (yield from lock): + + + while failing loudly when accidentally using: + + with lock: + + + Deprecated, use 'async with' statement: + async with lock: + + _ContextManagerMixin"yield from" should be used as context manager expression'with (yield from lock)' is deprecated use 'async with lock' instead"'with (yield from lock)' is deprecated ""use 'async with lock' instead"__acquire_ctx'with await lock' is deprecated use 'async with lock' instead"'with await lock' is deprecated "Primitive lock objects. + + A primitive lock is a synchronization primitive that is not owned + by a particular coroutine when locked. A primitive lock is in one + of two states, 'locked' or 'unlocked'. + + It is created in the unlocked state. It has two basic methods, + acquire() and release(). When the state is unlocked, acquire() + changes the state to locked and returns immediately. When the + state is locked, acquire() blocks until a call to release() in + another coroutine changes it to unlocked, then the acquire() call + resets it to locked and returns. The release() method should only + be called in the locked state; it changes the state to unlocked + and returns immediately. If an attempt is made to release an + unlocked lock, a RuntimeError will be raised. + + When more than one coroutine is blocked in acquire() waiting for + the state to turn to unlocked, only one coroutine proceeds when a + release() call resets the state to unlocked; first coroutine which + is blocked in acquire() is being processed. 
+ + acquire() is a coroutine and should be called with 'await'. + + Locks also support the asynchronous context management protocol. + 'async with lock' statement should be used. + + Usage: + + lock = Lock() + ... + await lock.acquire() + try: + ... + finally: + lock.release() + + Context manager usage: + + lock = Lock() + ... + async with lock: + ... + + Lock objects can be tested for locking state: + + if not lock.locked(): + await lock.acquire() + else: + # lock is acquired + ... + + _lockedThe loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10."The loop argument is deprecated since Python 3.8, ""and scheduled for removal in Python 3.10."unlocked, waiters:]>Return True if lock is acquired.Acquire a lock. + + This method blocks until the lock is unlocked, then sets it to + locked and returns True. + _wake_up_firstRelease a lock. + + When the lock is locked, reset it to unlocked, and return. + If any other coroutines are blocked waiting for the lock to become + unlocked, allow exactly one of them to proceed. + + When invoked on an unlocked lock, a RuntimeError is raised. + + There is no return value. + Lock is not acquired.Wake up the first waiter if it isn't done.Asynchronous equivalent to threading.Event. + + Class implementing event objects. An event manages a flag that can be set + to true with the set() method and reset to false with the clear() method. + The wait() method blocks until the flag is true. The flag is initially + false. + is_setReturn True if and only if the internal flag is true.Set the internal flag to true. All coroutines waiting for it to + become true are awakened. Coroutine that call wait() once the flag is + true will not block at all. + Reset the internal flag to false. Subsequently, coroutines calling + wait() will block until set() is called to set the internal flag + to true again.Block until the internal flag is true. + + If the internal flag is true on entry, return True + immediately. Otherwise, block until another coroutine calls + set() to set the flag to true, then return True. + Asynchronous equivalent to threading.Condition. + + This class implements condition variable objects. A condition variable + allows one or more coroutines to wait until they are notified by another + coroutine. + + A new Lock object is created and used as the underlying lock. + loop argument must agree with lockWait until notified. + + If the calling coroutine has not acquired the lock when this + method is called, a RuntimeError is raised. + + This method releases the underlying lock, and then blocks + until it is awakened by a notify() or notify_all() call for + the same condition variable in another coroutine. Once + awakened, it re-acquires the lock and returns True. + cannot wait on un-acquired lockwait_forWait until a predicate becomes true. + + The predicate should be a callable which result will be + interpreted as a boolean value. The final predicate value is + the return value. + notifyBy default, wake up one coroutine waiting on this condition, if any. + If the calling coroutine has not acquired the lock when this method + is called, a RuntimeError is raised. + + This method wakes up at most n of the coroutines waiting for the + condition variable; it is a no-op if no coroutines are waiting. + + Note: an awakened coroutine does not actually return from its + wait() call until it can reacquire the lock. Since notify() does + not release the lock, its caller should. 
+ cannot notify on un-acquired lockWake up all threads waiting on this condition. This method acts + like notify(), but wakes up all waiting threads instead of one. If the + calling thread has not acquired the lock when this method is called, + a RuntimeError is raised. + A Semaphore implementation. + + A semaphore manages an internal counter which is decremented by each + acquire() call and incremented by each release() call. The counter + can never go below zero; when acquire() finds that it is zero, it blocks, + waiting until some other thread calls release(). + + Semaphores also support the context management protocol. + + The optional argument gives the initial value for the internal + counter; it defaults to 1. If the value given is less than 0, + ValueError is raised. + Semaphore initial value must be >= 0unlocked, value:_wake_up_nextReturns True if semaphore can not be acquired immediately.Acquire a semaphore. + + If the internal counter is larger than zero on entry, + decrement it by one and return True immediately. If it is + zero on entry, block, waiting until some other coroutine has + called release() to make it larger than 0, and then return + True. + Release a semaphore, incrementing the internal counter by one. + When it was zero on entry and another coroutine is waiting for it to + become larger than zero again, wake up that coroutine. + A bounded semaphore implementation. + + This raises ValueError in release() if it would increase the value + above the initial value. + _bound_valueBoundedSemaphore released too many times# We have no use for the "as ..." clause in the with# statement for locks.# Crudely prevent reuse.# This must exist because __enter__ exists, even though that# always raises; that's how the with-statement works.# This is not a coroutine. It is meant to enable the idiom:# with (yield from lock):# # as an alternative to:# yield from lock.acquire()# try:# finally:# lock.release()# Deprecated, use 'async with' statement:# async with lock:# The flag is needed for legacy asyncio.iscoroutine()# To make "with await lock" work.# Finally block should be called before the CancelledError# handling as we don't want CancelledError to call# _wake_up_first() and attempt to wake up itself.# .done() necessarily means that a waiter will wake up later on and# either take the lock, or, if it was cancelled and lock wasn't# taken already, will hit this again and wake up a new waiter.# Export the lock's locked(), acquire() and release() methods.# Must reacquire lock even if wait is cancelled# See the similar code in Queue.get.b'Synchronization primitives.'u'Synchronization primitives.'b'Event'u'Event'b'Condition'u'Condition'b'Semaphore'u'Semaphore'b'BoundedSemaphore'u'BoundedSemaphore'b'Context manager. + + This enables the following idiom for acquiring and releasing a + lock around a block: + + with (yield from lock): + + + while failing loudly when accidentally using: + + with lock: + + + Deprecated, use 'async with' statement: + async with lock: + + 'u'Context manager. 
+ + This enables the following idiom for acquiring and releasing a + lock around a block: + + with (yield from lock): + + + while failing loudly when accidentally using: + + with lock: + + + Deprecated, use 'async with' statement: + async with lock: + + 'b'"yield from" should be used as context manager expression'u'"yield from" should be used as context manager expression'b''with (yield from lock)' is deprecated use 'async with lock' instead'u''with (yield from lock)' is deprecated use 'async with lock' instead'b''with await lock' is deprecated use 'async with lock' instead'u''with await lock' is deprecated use 'async with lock' instead'b'Primitive lock objects. + + A primitive lock is a synchronization primitive that is not owned + by a particular coroutine when locked. A primitive lock is in one + of two states, 'locked' or 'unlocked'. + + It is created in the unlocked state. It has two basic methods, + acquire() and release(). When the state is unlocked, acquire() + changes the state to locked and returns immediately. When the + state is locked, acquire() blocks until a call to release() in + another coroutine changes it to unlocked, then the acquire() call + resets it to locked and returns. The release() method should only + be called in the locked state; it changes the state to unlocked + and returns immediately. If an attempt is made to release an + unlocked lock, a RuntimeError will be raised. + + When more than one coroutine is blocked in acquire() waiting for + the state to turn to unlocked, only one coroutine proceeds when a + release() call resets the state to unlocked; first coroutine which + is blocked in acquire() is being processed. + + acquire() is a coroutine and should be called with 'await'. + + Locks also support the asynchronous context management protocol. + 'async with lock' statement should be used. + + Usage: + + lock = Lock() + ... + await lock.acquire() + try: + ... + finally: + lock.release() + + Context manager usage: + + lock = Lock() + ... + async with lock: + ... + + Lock objects can be tested for locking state: + + if not lock.locked(): + await lock.acquire() + else: + # lock is acquired + ... + + 'u'Primitive lock objects. + + A primitive lock is a synchronization primitive that is not owned + by a particular coroutine when locked. A primitive lock is in one + of two states, 'locked' or 'unlocked'. + + It is created in the unlocked state. It has two basic methods, + acquire() and release(). When the state is unlocked, acquire() + changes the state to locked and returns immediately. When the + state is locked, acquire() blocks until a call to release() in + another coroutine changes it to unlocked, then the acquire() call + resets it to locked and returns. The release() method should only + be called in the locked state; it changes the state to unlocked + and returns immediately. If an attempt is made to release an + unlocked lock, a RuntimeError will be raised. + + When more than one coroutine is blocked in acquire() waiting for + the state to turn to unlocked, only one coroutine proceeds when a + release() call resets the state to unlocked; first coroutine which + is blocked in acquire() is being processed. + + acquire() is a coroutine and should be called with 'await'. + + Locks also support the asynchronous context management protocol. + 'async with lock' statement should be used. + + Usage: + + lock = Lock() + ... + await lock.acquire() + try: + ... + finally: + lock.release() + + Context manager usage: + + lock = Lock() + ... + async with lock: + ... 
+ + Lock objects can be tested for locking state: + + if not lock.locked(): + await lock.acquire() + else: + # lock is acquired + ... + + 'b'The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.'u'The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.'b'locked'u'locked'b'unlocked'u'unlocked'b', waiters:'u', waiters:'b']>'u']>'b'Return True if lock is acquired.'u'Return True if lock is acquired.'b'Acquire a lock. + + This method blocks until the lock is unlocked, then sets it to + locked and returns True. + 'u'Acquire a lock. + + This method blocks until the lock is unlocked, then sets it to + locked and returns True. + 'b'Release a lock. + + When the lock is locked, reset it to unlocked, and return. + If any other coroutines are blocked waiting for the lock to become + unlocked, allow exactly one of them to proceed. + + When invoked on an unlocked lock, a RuntimeError is raised. + + There is no return value. + 'u'Release a lock. + + When the lock is locked, reset it to unlocked, and return. + If any other coroutines are blocked waiting for the lock to become + unlocked, allow exactly one of them to proceed. + + When invoked on an unlocked lock, a RuntimeError is raised. + + There is no return value. + 'b'Lock is not acquired.'u'Lock is not acquired.'b'Wake up the first waiter if it isn't done.'u'Wake up the first waiter if it isn't done.'b'Asynchronous equivalent to threading.Event. + + Class implementing event objects. An event manages a flag that can be set + to true with the set() method and reset to false with the clear() method. + The wait() method blocks until the flag is true. The flag is initially + false. + 'u'Asynchronous equivalent to threading.Event. + + Class implementing event objects. An event manages a flag that can be set + to true with the set() method and reset to false with the clear() method. + The wait() method blocks until the flag is true. The flag is initially + false. + 'b'Return True if and only if the internal flag is true.'u'Return True if and only if the internal flag is true.'b'Set the internal flag to true. All coroutines waiting for it to + become true are awakened. Coroutine that call wait() once the flag is + true will not block at all. + 'u'Set the internal flag to true. All coroutines waiting for it to + become true are awakened. Coroutine that call wait() once the flag is + true will not block at all. + 'b'Reset the internal flag to false. Subsequently, coroutines calling + wait() will block until set() is called to set the internal flag + to true again.'u'Reset the internal flag to false. Subsequently, coroutines calling + wait() will block until set() is called to set the internal flag + to true again.'b'Block until the internal flag is true. + + If the internal flag is true on entry, return True + immediately. Otherwise, block until another coroutine calls + set() to set the flag to true, then return True. + 'u'Block until the internal flag is true. + + If the internal flag is true on entry, return True + immediately. Otherwise, block until another coroutine calls + set() to set the flag to true, then return True. + 'b'Asynchronous equivalent to threading.Condition. + + This class implements condition variable objects. A condition variable + allows one or more coroutines to wait until they are notified by another + coroutine. + + A new Lock object is created and used as the underlying lock. + 'u'Asynchronous equivalent to threading.Condition. 
+ + This class implements condition variable objects. A condition variable + allows one or more coroutines to wait until they are notified by another + coroutine. + + A new Lock object is created and used as the underlying lock. + 'b'loop argument must agree with lock'u'loop argument must agree with lock'b'Wait until notified. + + If the calling coroutine has not acquired the lock when this + method is called, a RuntimeError is raised. + + This method releases the underlying lock, and then blocks + until it is awakened by a notify() or notify_all() call for + the same condition variable in another coroutine. Once + awakened, it re-acquires the lock and returns True. + 'u'Wait until notified. + + If the calling coroutine has not acquired the lock when this + method is called, a RuntimeError is raised. + + This method releases the underlying lock, and then blocks + until it is awakened by a notify() or notify_all() call for + the same condition variable in another coroutine. Once + awakened, it re-acquires the lock and returns True. + 'b'cannot wait on un-acquired lock'u'cannot wait on un-acquired lock'b'Wait until a predicate becomes true. + + The predicate should be a callable which result will be + interpreted as a boolean value. The final predicate value is + the return value. + 'u'Wait until a predicate becomes true. + + The predicate should be a callable which result will be + interpreted as a boolean value. The final predicate value is + the return value. + 'b'By default, wake up one coroutine waiting on this condition, if any. + If the calling coroutine has not acquired the lock when this method + is called, a RuntimeError is raised. + + This method wakes up at most n of the coroutines waiting for the + condition variable; it is a no-op if no coroutines are waiting. + + Note: an awakened coroutine does not actually return from its + wait() call until it can reacquire the lock. Since notify() does + not release the lock, its caller should. + 'u'By default, wake up one coroutine waiting on this condition, if any. + If the calling coroutine has not acquired the lock when this method + is called, a RuntimeError is raised. + + This method wakes up at most n of the coroutines waiting for the + condition variable; it is a no-op if no coroutines are waiting. + + Note: an awakened coroutine does not actually return from its + wait() call until it can reacquire the lock. Since notify() does + not release the lock, its caller should. + 'b'cannot notify on un-acquired lock'u'cannot notify on un-acquired lock'b'Wake up all threads waiting on this condition. This method acts + like notify(), but wakes up all waiting threads instead of one. If the + calling thread has not acquired the lock when this method is called, + a RuntimeError is raised. + 'u'Wake up all threads waiting on this condition. This method acts + like notify(), but wakes up all waiting threads instead of one. If the + calling thread has not acquired the lock when this method is called, + a RuntimeError is raised. + 'b'A Semaphore implementation. + + A semaphore manages an internal counter which is decremented by each + acquire() call and incremented by each release() call. The counter + can never go below zero; when acquire() finds that it is zero, it blocks, + waiting until some other thread calls release(). + + Semaphores also support the context management protocol. + + The optional argument gives the initial value for the internal + counter; it defaults to 1. If the value given is less than 0, + ValueError is raised. 
+ 'u'A Semaphore implementation. + + A semaphore manages an internal counter which is decremented by each + acquire() call and incremented by each release() call. The counter + can never go below zero; when acquire() finds that it is zero, it blocks, + waiting until some other thread calls release(). + + Semaphores also support the context management protocol. + + The optional argument gives the initial value for the internal + counter; it defaults to 1. If the value given is less than 0, + ValueError is raised. + 'b'Semaphore initial value must be >= 0'u'Semaphore initial value must be >= 0'b'unlocked, value:'u'unlocked, value:'b'Returns True if semaphore can not be acquired immediately.'u'Returns True if semaphore can not be acquired immediately.'b'Acquire a semaphore. + + If the internal counter is larger than zero on entry, + decrement it by one and return True immediately. If it is + zero on entry, block, waiting until some other coroutine has + called release() to make it larger than 0, and then return + True. + 'u'Acquire a semaphore. + + If the internal counter is larger than zero on entry, + decrement it by one and return True immediately. If it is + zero on entry, block, waiting until some other coroutine has + called release() to make it larger than 0, and then return + True. + 'b'Release a semaphore, incrementing the internal counter by one. + When it was zero on entry and another coroutine is waiting for it to + become larger than zero again, wake up that coroutine. + 'u'Release a semaphore, incrementing the internal counter by one. + When it was zero on entry and another coroutine is waiting for it to + become larger than zero again, wake up that coroutine. + 'b'A bounded semaphore implementation. + + This raises ValueError in release() if it would increase the value + above the initial value. + 'u'A bounded semaphore implementation. + + This raises ValueError in release() if it would increase the value + above the initial value. + 'b'BoundedSemaphore released too many times'u'BoundedSemaphore released too many times'u'asyncio.locks'u'locks'A simple log mechanism styled after PEP 282.Logthreshold%s wrong log level_global_logset_verbosity# The class here is styled after PEP 282 so that it could later be# replaced with a standard Python logging implementation.# emulate backslashreplace error handler# return the old threshold for use from testsb'A simple log mechanism styled after PEP 282.'u'A simple log mechanism styled after PEP 282.'b'%s wrong log level'u'%s wrong log level'u'distutils.log'Logging configuration.# Name the logger after the package.b'Logging configuration.'u'Logging configuration.'u'asyncio.log'Interface to the liblzma compression library. + +This module provides a class for reading and writing compressed files, +classes for incremental (de)compression, and convenience functions for +one-shot (de)compression. + +These classes and functions support both the XZ and legacy LZMA +container formats, as well as raw compressed data streams. +LZMAFileA file object providing transparent LZMA (de)compression. + + An LZMAFile can act as a wrapper for an existing file object, or + refer directly to a named file on disk. + + Note that LZMAFile provides a *binary* file interface - data read + is returned as bytes, and data to be written must be given as bytes. + presetOpen an LZMA-compressed file in binary mode. 
+ distutils.log: A simple log mechanism styled after PEP 282. The class is
+ styled after PEP 282 so that it could later be replaced with a standard
+ Python logging implementation; unknown levels raise "%s wrong log level",
+ output emulates the backslashreplace error handler, and set_verbosity()
+ returns the old threshold for use from tests.
+
+ asyncio.log: Logging configuration. The logger is named after the package.
+
+ lzma: Interface to the liblzma compression library. This module provides a
+ class for reading and writing compressed files, classes for incremental
+ (de)compression, and convenience functions for one-shot (de)compression.
+ These classes and functions support both the XZ and legacy LZMA container
+ formats, as well as raw compressed data streams.
+
+ LZMAFile: A file object providing transparent LZMA (de)compression. An
+ LZMAFile can act as a wrapper for an existing file object, or refer
+ directly to a named file on disk. Note that LZMAFile provides a *binary*
+ file interface: data read is returned as bytes, and data to be written
+ must be given as bytes.
+
+ LZMAFile constructor: Open an LZMA-compressed file in binary mode.
+ filename can be either an actual file name (given as a str, bytes, or
+ PathLike object), in which case the named file is opened, or it can be an
+ existing file object to read from or write to. mode can be "r" for
+ reading (default), "w" for (over)writing, "x" for creating exclusively,
+ or "a" for appending; these can equivalently be given as "rb", "wb", "xb"
+ and "ab" respectively. format specifies the container format to use for
+ the file: if mode is "r" this defaults to FORMAT_AUTO, otherwise the
+ default is FORMAT_XZ. check specifies the integrity check to use and can
+ only be used when opening a file for writing ("Cannot specify an
+ integrity check when opening a file for reading"); for FORMAT_XZ the
+ default is CHECK_CRC64, while FORMAT_ALONE and FORMAT_RAW do not support
+ integrity checks, so check must be omitted or be CHECK_NONE. When opening
+ a file for reading, the *preset* argument is not meaningful and should be
+ omitted ("Cannot specify a preset compression level when opening a file
+ for reading"); the *filters* argument should also be omitted, except when
+ format is FORMAT_RAW (in which case it is required). When opening a file
+ for writing, the settings used by the compressor can be specified either
+ as a preset compression level (with the *preset* argument) or in detail
+ as a custom filter chain (with the *filters* argument). For FORMAT_XZ and
+ FORMAT_ALONE the default is to use the PRESET_DEFAULT preset level; for
+ FORMAT_RAW the caller must always specify a filter chain, since the raw
+ compressor does not support preset compression levels. preset (if
+ provided) should be an integer in the range 0-9, optionally OR-ed with
+ the constant PRESET_EXTREME. filters (if provided) should be a sequence
+ of dicts; each dict should have an entry for "id" indicating the ID of
+ the filter, plus additional entries for options to the filter.
+
+ LZMAFile.read(size=-1): Read up to size uncompressed bytes from the file.
+ If size is negative or omitted, read until EOF is reached. Returns b"" if
+ the file is already at EOF.
+
+ LZMAFile.read1(size=-1): Read up to size uncompressed bytes, while trying
+ to avoid making multiple reads from the underlying stream. Reads up to a
+ buffer's worth of data if size is negative. Returns b"" if the file is at
+ EOF.
+
+ LZMAFile.write(data): Write a bytes object to the file. Returns the
+ number of uncompressed bytes written, which is always len(data). Note
+ that due to buffering, the file on disk may not reflect the data written
+ until close() is called.
+
+ LZMAFile.seek(offset, whence=0): Change the file position. The new
+ position is specified by offset, relative to the position indicated by
+ whence. Possible values for whence are: 0: start of stream (default),
+ offset must not be negative; 1: current stream position; 2: end of
+ stream, offset must not be positive. Returns the new file position. Note
+ that seeking is emulated, so depending on the parameters this operation
+ may be extremely slow.
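+ Assuming only the LZMAFile behaviour documented above, a small round trip
+ through a named file might look like this (the path "example.xz" is
+ illustrative):
+
+     import lzma
+
+     data = b"hello " * 100
+
+     # Binary interface: data written must be bytes.
+     with lzma.LZMAFile("example.xz", "wb") as f:
+         f.write(data)
+
+     # read() with a negative/omitted size reads until EOF.
+     with lzma.LZMAFile("example.xz", "rb") as f:
+         assert f.read() == data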
+ lzma.open(): Open an LZMA-compressed file in binary or text mode.
+ filename can be either an actual file name (given as a str, bytes, or
+ PathLike object), in which case the named file is opened, or it can be an
+ existing file object to read from or write to. The mode argument can be
+ "r", "rb" (default), "w", "wb", "x", "xb", "a", or "ab" for binary mode,
+ or "rt", "wt", "xt", or "at" for text mode. The format, check, preset and
+ filters arguments specify the compression settings, as for
+ LZMACompressor, LZMADecompressor and LZMAFile. For binary mode, this
+ function is equivalent to the LZMAFile constructor: LZMAFile(filename,
+ mode, ...); in this case the encoding, errors and newline arguments must
+ not be provided. For text mode, an LZMAFile object is created, and
+ wrapped in an io.TextIOWrapper instance with the specified encoding,
+ error handling behavior, and line ending(s).
+
+ lzma.compress(): Compress a block of data. Refer to LZMACompressor's
+ docstring for a description of the optional arguments *format*, *check*,
+ *preset* and *filters*. For incremental compression, use an
+ LZMACompressor instead.
+
+ lzma.decompress(): Decompress a block of data. Refer to
+ LZMADecompressor's docstring for a description of the optional arguments
+ *format*, *check* and *filters*. For incremental decompression, use an
+ LZMADecompressor instead.
+
+ Implementation notes recovered from the dump: the reader relies on the
+ undocumented fact that BufferedReader.peek() always returns at least one
+ byte (except at EOF), and leftover data that is not a valid LZMA/XZ
+ stream is ignored.
+
+ Other names recorded for the module: CHECK_NONE, CHECK_CRC32,
+ CHECK_CRC64, CHECK_SHA256, CHECK_ID_MAX, CHECK_UNKNOWN; FILTER_LZMA1,
+ FILTER_LZMA2, FILTER_DELTA, FILTER_X86, FILTER_IA64, FILTER_ARM,
+ FILTER_ARMTHUMB, FILTER_POWERPC, FILTER_SPARC; FORMAT_AUTO, FORMAT_XZ,
+ FORMAT_ALONE, FORMAT_RAW; MF_HC3, MF_HC4, MF_BT2, MF_BT3, MF_BT4;
+ MODE_FAST, MODE_NORMAL; PRESET_DEFAULT, PRESET_EXTREME; LZMACompressor,
+ LZMADecompressor, LZMAFile, LZMAError, is_check_supported.
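+ The one-shot helpers and the text-mode behaviour of lzma.open() described
+ above can be sketched as follows (file name and values are illustrative):
+
+     import lzma
+
+     blob = lzma.compress(b"payload", preset=6)   # FORMAT_XZ by default
+     assert lzma.decompress(blob) == b"payload"
+
+     # Text mode wraps the LZMAFile in an io.TextIOWrapper.
+     with lzma.open("notes.txt.xz", "wt", encoding="utf-8") as f:
+         f.write("compressed text\n")
+     with lzma.open("notes.txt.xz", "rt", encoding="utf-8") as f:
+         assert f.read() == "compressed text\n"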
+ importlib.machinery: The machinery of importlib: finders, loaders, hooks,
+ etc. all_suffixes() returns a list of all recognized module suffixes for
+ this process. The dump also records strings for the analyzed example
+ project nearby: python, main.py, example.src.main, example.src, example,
+ src.main, src.
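+ The all_suffixes() helper mentioned above is directly callable; the exact
+ output below is illustrative and depends on the interpreter build:
+
+     import importlib.machinery
+
+     # Source, bytecode and extension-module suffixes for this process.
+     print(importlib.machinery.all_suffixes())  # e.g. ['.py', '.pyc', '.so']
+     print(importlib.machinery.SOURCE_SUFFIXES) # ['.py']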
+ unittest.main: Unittest main program. TestProgram is a command-line
+ program that runs a set of tests; this is primarily for making test
+ modules conveniently executable.
+
+ Usage examples recorded for the command line (MAIN_EXAMPLES):
+   %(prog)s test_module              - run tests from test_module
+   %(prog)s module.TestClass         - run tests from module.TestClass
+   %(prog)s module.Class.test_method - run specified test method
+   %(prog)s path/to/test_file.py     - run tests from test_file.py
+ and for running a module directly (MODULE_EXAMPLES):
+   %(prog)s                          - run default set of tests
+   %(prog)s MyTestSuite              - run suite 'MyTestSuite'
+   %(prog)s MyTestCase.testSomething - run MyTestCase.testSomething
+   %(prog)s MyTestCase               - run all 'test*' test methods in
+                                       MyTestCase
+
+ Recognised options: -v/--verbose (Verbose output), --quiet (Quiet
+ output), --locals (Show local variables in tracebacks), --failfast (Stop
+ on first fail or error), --catch (Catch Ctrl-C and display results so
+ far), -b/--buffer (Buffer stdout and stderr during tests), and -k (Only
+ run tests which match the given substring; a bare NAME is converted to
+ the pattern *NAME*). Positional arguments are "a list of any number of
+ test modules, classes and test methods." The "%s discover" subcommand
+ adds -s/--start-directory (Directory to start discovery, '.' default),
+ -p/--pattern (Pattern to match tests, 'test*.py' default) and
+ -t/--top-level-directory (Top level directory of project, defaults to
+ start directory); for test discovery all test modules must be importable
+ from the top level directory of the project.
+
+ Implementation notes recovered from the comments: on Linux / Mac OS X
+ 'foo.PY' is not importable, but on Windows it is, so path-to-name
+ conversion uses a case-insensitive match (a better check would verify
+ that the name is a valid Python module name); both '\' and '/' are
+ replaced as path separators rather than relying on os.path.sep; even if
+ DeprecationWarnings are ignored by default they are printed anyway unless
+ other warnings settings are given via the warnings argument or the -W
+ flag (if the user passed no value, self.warnings is None and the
+ behaviour is unchanged); "python -m unittest -v" still works for test
+ discovery; createTests() loads tests from self.module and command-line
+ arguments for test discovery are handled separately; and the runner may
+ be a class that does not accept the tb_locals, verbosity, buffer or
+ failfast arguments, or an already constructed TestRunner instance.
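+ A minimal sketch of how these options are normally exercised; the test
+ case is illustrative and not part of the dump:
+
+     import unittest
+
+     class ExampleTest(unittest.TestCase):
+         def test_upper(self):
+             self.assertEqual("abc".upper(), "ABC")
+
+     if __name__ == "__main__":
+         # Roughly what "python -m unittest -v this_module" does:
+         # unittest.main() parses argv, builds the suite and runs it.
+         unittest.main(verbosity=2)
+
+ From a shell, discovery would typically be invoked as
+ "python -m unittest discover -s tests -p 'test*.py'".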
+ marshal: This module contains functions that can read and write Python
+ values in a binary format. The format is specific to Python, but
+ independent of machine architecture issues. Not all Python object types
+ are supported; in general, only objects whose value is independent from a
+ particular invocation of Python can be written and read by this module.
+ The following types are supported: None, integers, floating point
+ numbers, strings, bytes, bytearrays, tuples, lists, sets, dictionaries,
+ and code objects, where it should be understood that tuples, lists and
+ dictionaries are only supported as long as the values contained therein
+ are themselves supported; and recursive lists and dictionaries should not
+ be written (they will cause infinite loops).
+ Variables: version indicates the format that the module uses. Version 0
+ is the historical format, version 1 shares interned strings and version 2
+ uses a binary format for floating point numbers; version 3 shares common
+ object references (new in version 3.4).
+ Functions: dump() writes a value to a file, load() reads a value from a
+ file, dumps() marshals a value as a bytes object, and loads() reads a
+ value from a bytes-like object.
+
+ math: This module provides access to the mathematical functions defined
+ by the C standard. The dump records the extension module
+ /Users/pwntester/.pyenv/versions/3.8.13/lib/python3.8/lib-dynload/math.cpython-38-darwin.so
+ along with the exported names acos, acosh, asin, asinh, atan, atan2,
+ atanh, ceil, comb, cos, cosh, degrees, dist, erf, erfc, exp, expm1, fabs,
+ factorial, floor, fmod, frexp, fsum, gcd, hypot, isclose, isfinite,
+ isqrt, ldexp, lgamma, log1p, log2, perm, radians, sin, sinh, tan, tanh,
+ trunc, and the constants e = 2.718281828459045, pi = 3.141592653589793
+ and tau = 6.283185307179586.
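+ A short sketch (illustrative values only) exercising the marshal round
+ trip and a few of the math functions listed above:
+
+     import marshal
+     import math
+
+     value = {"answer": 42, "items": [1, 2.5, "three"]}
+     assert marshal.loads(marshal.dumps(value)) == value  # bytes round trip
+
+     with open("value.marshal", "wb") as f:                # file round trip
+         marshal.dump(value, f)
+     with open("value.marshal", "rb") as f:
+         assert marshal.load(f) == value
+
+     assert math.isclose(math.hypot(3.0, 4.0), 5.0)
+     assert math.comb(5, 2) == 10 and math.isqrt(17) == 4
+     assert math.isclose(math.tau, 2 * math.pi)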
+ encodings.mbcs: Python 'mbcs' Codec for Windows. Cloned by Mark Hammond
+ (mhammond@skippinet.com.au) from ascii.py, which was written by
+ Marc-Andre Lemburg (mal@lemburg.com). (c) Copyright CNRI, All Rights
+ Reserved. NO WARRANTY. The codec functions mbcs_encode and mbcs_decode
+ are imported explicitly so that an ImportError is raised on non-Windows
+ systems; the module defines the Codec APIs (including IncrementalEncoder,
+ IncrementalDecoder and the stream classes) and the encodings module API.
+
+ email.message: Basic message object for the email package object model.
+ The module also records the SEMISPACE separator and the tspecials pattern
+ [ \(\)<>@,;:\\"/\[\]\?=] used to decide when parameter values need
+ quoting.
+
+ _formatparam(): Convenience function to format and return a key=value
+ pair. This will quote the value if needed or if quote is true. If value
+ is a three tuple (charset, language, value), it will be encoded according
+ to RFC 2231 rules. If it contains non-ascii characters it will likewise
+ be encoded according to RFC 2231 rules, using the utf-8 charset and a
+ null language.
+
+ Message: Basic message object. A message object is defined as something
+ that has a bunch of RFC 2822 headers and a payload. It may optionally
+ have an envelope header (a.k.a. Unix-From or From_ header). If the
+ message is a container (i.e. a multipart or a message/rfc822), then the
+ payload is a list of Message objects, otherwise it is a string. Message
+ objects implement part of the `mapping' interface, which assumes there is
+ exactly one occurrence of the header per message. Some headers do in fact
+ appear multiple times (e.g. Received) and for those headers, you must use
+ the explicit API to set or get all the headers. Not all of the mapping
+ methods are implemented.
+
+ Message.as_string(unixfrom=False, maxheaderlen=0, policy=None): Return
+ the entire formatted message as a string. Optional 'unixfrom', when true,
+ means include the Unix From_ envelope header. For backward compatibility
+ reasons, if maxheaderlen is not specified it defaults to 0, so you must
+ override it explicitly if you want a different maxheaderlen. 'policy' is
+ passed to the Generator instance used to serialize the message; if it is
+ not specified the policy associated with the message instance is used. If
+ the message object contains binary data that is not encoded according to
+ RFC standards, the non-compliant data will be replaced by unicode
+ "unknown character" code points.
+
+ Message.as_bytes(unixfrom=False, policy=None): Return the entire
+ formatted message as a bytes object. Optional 'unixfrom', when true,
+ means include the Unix From_ envelope header. 'policy' is passed to the
+ BytesGenerator instance used to serialize the message; if not specified
+ the policy associated with the message instance is used.
+
+ Message.is_multipart(): Return True if the message consists of multiple
+ parts.
+
+ Message.attach(payload): Add the given payload to the current payload.
+ The current payload will always be a list of objects after this method is
+ called. If you want to set the payload to a scalar object, use
+ set_payload() instead. Calling it on a message with a non-multipart
+ payload raises "Attach is not valid on a message with a non-multipart
+ payload".
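+ Assuming only the Message behaviour described above, a tiny sketch of
+ building and serialising a message (header values are illustrative):
+
+     from email.message import Message
+
+     msg = Message()
+     msg["Subject"] = "recovered docstrings"
+     msg["To"] = "someone@example.com"
+     msg.set_payload("hello\n")       # scalar payload; see set_payload below
+
+     print(msg.is_multipart())        # -> False
+     print(msg.as_string())           # headers, a blank line, then the body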
+ Message.get_payload(i=None, decode=False): Return a reference to the
+ payload. The payload will either be a list object or a string. If you
+ mutate the list object, you modify the message's payload in place.
+ Optional i returns that index into the payload. Optional decode is a flag
+ indicating whether the payload should be decoded or not, according to the
+ Content-Transfer-Encoding header (default is False). When True and the
+ message is not a multipart, the payload will be decoded if this header's
+ value is `quoted-printable' or `base64' (the uuencode spellings
+ x-uuencode, uuencode, uue and x-uue are also handled). If some other
+ encoding is used, or the header is missing, or if the payload has bogus
+ data (i.e. bogus base64 or uuencoded data), the payload is returned
+ as-is. If the message is a multipart and the decode flag is True, then
+ None is returned. Asking for an index on a non-list payload raises
+ "Expected list, got %s".
+
+ Message.set_payload(payload, charset=None): Set the payload to the given
+ value. Optional charset sets the message's default character set; see
+ set_charset() for details.
+
+ Message.set_charset(charset): Set the charset of the payload to a given
+ character set. charset can be a Charset instance, a string naming a
+ character set, or None. If it is a string it will be converted to a
+ Charset instance. If charset is None, the charset parameter will be
+ removed from the Content-Type field. Anything else will generate a
+ TypeError. The message will be assumed to be of type text/* encoded with
+ charset.input_charset. It will be converted to charset.output_charset and
+ encoded properly, if needed, when generating the plain text
+ representation of the message. MIME headers (MIME-Version, Content-Type,
+ Content-Transfer-Encoding) will be added as needed.
+
+ Message.get_charset(): Return the Charset instance associated with the
+ message's payload.
+
+ Mapping interface (partial): __len__() returns the total number of
+ headers, including duplicates. __getitem__(name) gets a header value,
+ returning None if the header is missing instead of raising an exception;
+ note that if the header appeared multiple times, exactly which occurrence
+ gets returned is undefined, so use get_all() to get all the values
+ matching a header field name. __setitem__(name, val) sets the value of a
+ header; this does not overwrite an existing header with the same field
+ name, so use __delitem__() first to delete any existing headers (if the
+ policy defines a max_count, exceeding it raises "There may be at most {}
+ {} headers in a message"). __delitem__(name) deletes all occurrences of a
+ header, if present, and does not raise an exception if the header is
+ missing. keys() returns a list of all the message's header field names;
+ values() returns a list of all the message's header values; items()
+ returns all the message's header fields and values. All three are sorted
+ in the order they appeared in the original message, or were added to the
+ message, and may contain duplicates; any fields deleted and re-inserted
+ are always appended to the header list. get(name, failobj=None) gets a
+ header value like __getitem__() but returns failobj instead of None when
+ the field is missing.
+
+ Message.set_raw(name, value) stores name and value in the model without
+ modification, and Message.raw_items() returns the (name, value) header
+ pairs without modification; both are "internal" APIs, intended only for
+ use by a parser or a generator respectively.
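+ A minimal sketch of the payload/charset behaviour above; the text is
+ illustrative, and non-ASCII payloads are handled the same way:
+
+     from email.message import Message
+
+     msg = Message()
+     msg.set_payload("hello, world", charset="utf-8")  # also sets headers
+
+     print(msg["Content-Type"])               # text/plain; charset="utf-8"
+     print(msg["Content-Transfer-Encoding"])  # base64, chosen by the charset
+     print(msg.get_payload())                 # the transfer-encoded body
+     print(msg.get_payload(decode=True))      # -> b'hello, world'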
+ Message.get_all(name, failobj=None): Return a list of all the values for
+ the named field. These will be sorted in the order they appeared in the
+ original message, and may contain duplicates. Any fields deleted and
+ re-inserted are always appended to the header list. If no such fields
+ exist, failobj is returned (defaults to None).
+
+ Message.add_header(_name, _value, **_params): Extended header setting.
+ name is the header field to add. Keyword arguments can be used to set
+ additional parameters for the header field, with underscores converted to
+ dashes. Normally the parameter will be added as key="value" unless value
+ is None, in which case only the key will be added. If a parameter value
+ contains non-ASCII characters it can be specified as a three-tuple of
+ (charset, language, value), in which case it will be encoded according to
+ RFC 2231 rules; otherwise it will be encoded using the utf-8 charset and
+ a language of ''. Examples:
+   msg.add_header('content-disposition', 'attachment', filename='bud.gif')
+   msg.add_header('content-disposition', 'attachment',
+                  filename=('utf-8', '', 'Fußballer.ppt'))
+   msg.add_header('content-disposition', 'attachment',
+                  filename='Fußballer.ppt')
+
+ Message.replace_header(_name, _value): Replace a header. Replace the
+ first matching header found in the message, retaining header order and
+ case. If no matching header was found, a KeyError is raised.
+
+ Message.get_content_type(): Return the message's content type. The
+ returned string is coerced to lower case of the form `maintype/subtype'.
+ If there was no Content-Type header in the message, the default type as
+ given by get_default_type() will be returned. Since according to RFC 2045
+ messages always have a default type, this will always return a value.
+ RFC 2045 defines a message's default type to be text/plain unless it
+ appears inside a multipart/digest container, in which case it would be
+ message/rfc822.
+
+ Message.get_content_maintype() returns the `maintype' part of the string
+ returned by get_content_type(), and Message.get_content_subtype() returns
+ the `subtype' part.
+
+ Message.get_default_type(): Return the `default' content type. Most
+ messages have a default content type of text/plain, except for messages
+ that are subparts of multipart/digest containers; such subparts have a
+ default content type of message/rfc822.
+
+ Message.set_default_type(ctype): Set the `default' content type. ctype
+ should be either "text/plain" or "message/rfc822", although this is not
+ enforced. The default content type is not stored in the Content-Type
+ header.
+
+ Message.get_params(failobj=None, header='content-type', unquote=True):
+ Return the message's Content-Type parameters, as a list. The elements of
+ the returned list are 2-tuples of key/value pairs, as split on the `='
+ sign. The left hand side of the `=' is the key, while the right hand side
+ is the value. If there is no `=' sign in the parameter the value is the
+ empty string. The value is as described in the get_param() method.
+ Optional failobj is the object to return if there is no Content-Type
+ header. Optional header is the header to search instead of Content-Type.
+ If unquote is True, the value is unquoted.
+ Message.get_param(param, failobj=None, header='content-type',
+ unquote=True): Return the parameter value if found in the Content-Type
+ header. Optional failobj is the object to return if there is no
+ Content-Type header, or the Content-Type header has no such parameter.
+ Optional header is the header to search instead of Content-Type.
+ Parameter keys are always compared case insensitively. The return value
+ can either be a string, or a 3-tuple if the parameter was RFC 2231
+ encoded. When it's a 3-tuple, the elements of the value are of the form
+ (CHARSET, LANGUAGE, VALUE). Note that both CHARSET and LANGUAGE can be
+ None, in which case you should consider VALUE to be encoded in the
+ us-ascii charset. You can usually ignore LANGUAGE. The parameter value
+ (either the returned string, or the VALUE item in the 3-tuple) is always
+ unquoted, unless unquote is set to False. If your application doesn't
+ care whether the parameter was RFC 2231 encoded, it can turn the return
+ value into a string as follows:
+   rawparam = msg.get_param('foo')
+   param = email.utils.collapse_rfc2231_value(rawparam)
+
+ Message.set_param(param, value, header='Content-Type', requote=True,
+ charset=None, language=''): Set a parameter in the Content-Type header.
+ If the parameter already exists in the header, its value will be replaced
+ with the new value. If header is Content-Type and has not yet been
+ defined for this message, it will be set to "text/plain" and the new
+ parameter and value will be appended as per RFC 2045. An alternate header
+ can be specified in the header argument, and all parameters will be
+ quoted as necessary unless requote is False. If charset is specified, the
+ parameter will be encoded according to RFC 2231. Optional language
+ specifies the RFC 2231 language, defaulting to the empty string; both
+ charset and language should be strings.
+
+ Message.del_param(param, header='content-type', requote=True): Remove the
+ given parameter completely from the Content-Type header. The header will
+ be re-written in place without the parameter or its value. All values
+ will be quoted as necessary unless requote is False. Optional header
+ specifies an alternative to the Content-Type header.
+
+ Message.set_type(type, header='Content-Type', requote=True): Set the main
+ type and subtype for the Content-Type header. type must be a string in
+ the form "maintype/subtype", otherwise a ValueError is raised. This
+ method replaces the Content-Type header, keeping all the parameters in
+ place. If requote is False, this leaves the existing header's quoting as
+ is; otherwise the parameters will be quoted (the default). An alternative
+ header can be specified in the header argument. When the Content-Type
+ header is set, we'll always also add a MIME-Version header.
+
+ Message.get_filename(failobj=None): Return the filename associated with
+ the payload if present. The filename is extracted from the
+ Content-Disposition header's `filename' parameter, and it is unquoted. If
+ that header is missing the `filename' parameter, this method falls back
+ to looking for the `name' parameter.
+
+ Message.get_boundary(failobj=None): Return the boundary associated with
+ the payload if present. The boundary is extracted from the Content-Type
+ header's `boundary' parameter, and it is unquoted.
+
+ Message.set_boundary(boundary): Set the boundary parameter in
+ Content-Type to 'boundary'. This is subtly different than deleting the
+ Content-Type header and adding a new one with a new boundary parameter
+ via add_header(). The main difference is that using the set_boundary()
+ method preserves the order of the Content-Type header in the original
+ message. HeaderParseError ("No Content-Type header found") is raised if
+ the message has no Content-Type header.
+
+ Message.get_content_charset(failobj=None): Return the charset parameter
+ of the Content-Type header. The returned string is always coerced to
+ lower case. If there is no Content-Type header, or if that header has no
+ charset parameter, failobj is returned.
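+ A small sketch of the Content-Type parameter helpers above; the boundary
+ strings are illustrative:
+
+     from email.message import Message
+
+     msg = Message()
+     msg.add_header("Content-Type", "multipart/mixed", boundary="XXXX")
+     print(msg.get_param("boundary"))   # -> XXXX
+
+     msg.set_param("charset", "us-ascii")   # appended to Content-Type
+     msg.set_boundary("YYYY")               # header rewritten in place
+     print(msg.get_boundary())              # -> YYYY
+     print(msg["Content-Type"])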
+ Message.get_charsets(failobj=None): Return a list containing the
+ charset(s) used in this message. The returned list of items describes the
+ Content-Type headers' charset parameter for this message and all the
+ subparts in its payload. Each item will either be a string (the value of
+ the charset parameter in the Content-Type header of that part) or the
+ value of the 'failobj' parameter (defaults to None), if the part does not
+ have a main MIME type of "text", or the charset is not defined. The list
+ will contain one string for each part of the message, plus one for the
+ container message (i.e. self), so that a non-multipart message will still
+ return a list of length 1.
+
+ Message.get_content_disposition(): Return the message's
+ content-disposition if it exists, or None. The return values can be
+ either 'inline', 'attachment' or None according to RFC 2183.
+
+ Message.walk() is attached from email.iterators.
+
+ MIMEPart / EmailMessage (these use email.policy.default when no policy is
+ given):
+
+ MIMEPart.as_string(unixfrom=False, maxheaderlen=None, policy=None):
+ Return the entire formatted message as a string. Optional 'unixfrom',
+ when true, means include the Unix From_ envelope header. maxheaderlen is
+ retained for backward compatibility with the base Message class, but
+ defaults to None, meaning that the policy value for max_line_length
+ controls the header maximum length. 'policy' is passed to the Generator
+ instance used to serialize the message; if it is not specified the policy
+ associated with the message instance is used.
+
+ MIMEPart.is_attachment(): True if the part's Content-Disposition is
+ 'attachment'.
+
+ MIMEPart.get_body(preferencelist=('related', 'html', 'plain')): Return
+ the best candidate mime part for display as the 'body' of the message. Do
+ a depth first search, starting with self, looking for the first part
+ matching each of the items in preferencelist, and return the part
+ corresponding to the first item that has a match, or None if no items
+ have a match. If 'related' is not included in preferencelist, consider
+ the root part of any multipart/related encountered as a candidate match.
+ Ignore parts with 'Content-Disposition: attachment'.
+
+ MIMEPart.iter_attachments(): Return an iterator over the non-main parts
+ of a multipart. Skip the first of each occurrence of text/plain,
+ text/html, multipart/related, or multipart/alternative in the multipart
+ (unless they have a 'Content-Disposition: attachment' header) and include
+ all remaining subparts in the returned iterator. When applied to a
+ multipart/related, return all parts except the root part. Return an empty
+ iterator when applied to a multipart/alternative or a non-multipart.
+
+ MIMEPart.iter_parts(): Return an iterator over all immediate subparts of
+ a multipart; return an empty iterator for a non-multipart.
+
+ MIMEPart.get_content() delegates to the part's content manager. The
+ multipart conversion helpers make_related(), make_alternative() and
+ make_mixed() restructure the message (raising "Cannot convert {} to {}"
+ for disallowed subtype transitions), while add_related(),
+ add_alternative() and add_attachment() add subparts with the appropriate
+ Content-Disposition ('inline' or 'attachment'); clear_content() removes
+ the payload and the content- headers.
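+ Assuming the EmailMessage behaviour described above (and the standard
+ content manager behind set_content/add_attachment), a sketch of building
+ a message with an attachment and pulling the body back out; all content
+ is illustrative:
+
+     from email.message import EmailMessage
+
+     msg = EmailMessage()                  # uses email.policy.default
+     msg["Subject"] = "report"
+     msg.set_content("plain text body\n")  # the text/plain part
+     msg.add_attachment(b"\x00\x01", maintype="application",
+                        subtype="octet-stream", filename="blob.bin")
+
+     body = msg.get_body(preferencelist=("plain",))
+     print(body.get_content())             # -> plain text body
+     for part in msg.iter_attachments():
+         print(part.get_filename(), part.get_content_type())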
+ Implementation notes recovered from the email.message comments:
+
+ - The tspecials regular expression matches `special' characters in
+   parameters, the existence of which force quoting of the parameter
+   value. Header-parameter splitting may be too simple: it isn't strictly
+   RFC 2045 (section 5.1) compliant, but it catches most headers found in
+   the wild. Values may be Header instances and are stringified for now. A
+   tuple is used for RFC 2231 encoded parameter values, where items are
+   (charset, language, value); charset is a string, not a Charset
+   instance, and RFC 2231 encoded values are never quoted, per the RFC.
+ - _formatparam differs from utils.collapse_rfc2231_value() because it
+   does not try to convert the value to a unicode string; get_param() and
+   get_params() are both currently defined to return the tuple in the face
+   of RFC 2231 parameters.
+ - The constructor sets up defaults for multipart messages, the default
+   content type (text/plain) and the Unix From_ line.
+ - get_payload() logic table (based on the email 5.0.0 code):
+       i     decode  is_multipart  result
+       None  True    True          None
+       i     True    True          None
+       None  False   True          _payload (a list)
+       i     False   True          _payload element i (a Message)
+       i     False   False         error (not a list)
+       i     True    False         error (not a list)
+       None  False   False         _payload
+       None  True    False         _payload decoded (bytes)
+   Barry planned to factor out the 'decode' case, but that isn't so easy
+   now that 8 bit data is handled, which needs to be converted in both the
+   decode and non-decode path. For backward compatibility an isinstance
+   test and the "Expected list" error message are used instead of the more
+   logical is_multipart test; the CTE value might be a Header, so it is
+   stringified; the payload may be bytes; non-compliant unicode payloads
+   are turned into bytes in a way guaranteed not to fail (this won't
+   happen for RFC compliant messages, i.e. messages containing only ASCII
+   code points in the unicode input); decode_b is "a bit of a hack" and
+   should probably be factored out somewhere; decoding problems cause the
+   payload to be returned as-is; and one branch exists only for backward
+   compatibility, allowing unicode through even though that won't work
+   correctly if the message is serialized.
+ - Comments also mark the partial MAPPING INTERFACE, the "internal"
+   methods (public API, but only intended for use by a parser or
+   generator, not normal application code), and "additional useful stuff",
+   with a note to use those three methods instead of the three above.
+ - get_content_type() should have no parameters; RFC 2045 section 5.2 says
+   that if the stored value is invalid, text/plain is used.
+ - _get_params_preserve is like get_params() but preserves the quoting of
+   values (with an open question whether it should be part of the public
+   interface); a parameter without '=' must have been a bare attribute.
+ - Setting the Content-Type also adds a MIME-Version header; set_type()
+   skips the first parameter because it is the old type.
+ - RFC 2046 says that boundaries may begin but not end in whitespace. If
+   there was no Content-Type header, set_boundary() raises an exception
+   because it does not know what type to set; if the original Content-Type
+   header had no boundary attribute, one is tacked on the end; otherwise
+   the existing Content-Type header is replaced with the new value.
+ - get_content_charset(): an RFC 2231 encoded value is decoded and had
+   better end up as ascii; LookupError is raised if the charset isn't
+   known to Python and UnicodeError if the encoded text contains a
+   character not in the charset; charset characters must be in the
+   us-ascii range; RFC 2046 section 4.1.2 says charsets are not case
+   sensitive.
+ - walk() is attached as a method (i.e. def walk(self): ...).
+ - Certain malformed messages can have their content type set to
+   `multipart/*` but still have a single part body, in which case
+   payload.copy() can fail with AttributeError; the payload is then not a
+   list but most probably a string.
+ - In iter_attachments(), for multipart/related everything but the root is
+   treated as an attachment; the root may be indicated by 'start', and if
+   there is no start, or the named start cannot be found, the first
+   subpart is treated as the root. Otherwise the remaining logic of
+   get_body() is more or less inverted, which only really works in edge
+   cases (e.g. non-text related or alternatives) if the sending agent sets
+   content-disposition; only the first example of each candidate type is
+   skipped.
+ - When adding a subpart and there is existing content, it is moved to the
+   first subpart.