Dataset schema. Each column is listed with its dtype and the observed value range;
for string columns the range refers to string length in characters.

column            dtype    min    max      notes
complexity        int64    1      56
n_identifiers     int64    1      114
code              string   19     12.7k    function source
path              string   8      134
n_ast_nodes       int64    12     2.35k
ast_errors        string   0      4.01k    empty for most rows
repo              string   3      28
documentation     dict     -      -        docstring, language, n_whitespaces, n_words, vocab_size
n_words           int64    2      866
language          string   -      -        1 distinct value (Python)
vocab_size        int64    2      323
commit_id         string   40     40
file_name         string   5      79
id                int64    243    338k
nloc              int64    1      228
token_counts      int64    5      1.4k
fun_name          string   1      77
url               string   31     60
commit_message    string   3      15.3k
n_whitespaces     int64    1      3.23k
n_ast_errors      int64    0      20
d_id              int64    74     121k
ast_levels        int64    4      29
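Each record that follows lists its fields one per line, in the column order given above; the ast_errors field is simply absent when it is empty. As a point of orientation, below is a minimal sketch of how a dataset with this schema could be loaded and inspected with the Hugging Face datasets library. The dataset identifier is a placeholder (this excerpt does not say where the dataset is hosted); only the column names from the schema above are assumed.

from datasets import load_dataset

# Placeholder hub path -- substitute the dataset's real identifier.
ds = load_dataset("your-org/python-function-metrics", split="train")

row = ds[0]
# Provenance of the snippet: repository, file path and function name.
print(row["repo"], row["path"], row["fun_name"])
# Size and complexity metrics computed for the function.
print(row["nloc"], row["token_counts"], row["complexity"], row["ast_levels"])
# The nested documentation record holds the docstring and its statistics.
print(row["documentation"]["docstring"])
# The raw function source itself.
print(row["code"])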
3
7
def get_default_cache_location() -> str:
    if "LUDWIG_CACHE" in os.environ and os.environ["LUDWIG_CACHE"]:
        return os.environ["LUDWIG_CACHE"]
    else:
        return str(Path.home().joinpath(".ludwig_cache"))
ludwig/datasets/loaders/dataset_loader.py
81
ludwig
{ "docstring": "Returns a path to the default LUDWIG_CACHE location, or $HOME/.ludwig_cache.", "language": "en", "n_whitespaces": 9, "n_words": 10, "vocab_size": 10 }
15
Python
14
e4fc06f986e03919d9aef3ab55c05fee5a6b9d3a
dataset_loader.py
8,086
6
44
get_default_cache_location
https://github.com/ludwig-ai/ludwig.git
Config-first Datasets API (ludwig.datasets refactor) (#2479) * Adds README and stub for reading dataset configs. * Adds __init__.py for configs, moves circular import into function scope in ludwig/datasets/__init__.py * Print config files in datasets folder. * First pass at automatic archive extraction. * Implemented downloading and extract. * Refactor DatasetConfig into its own file. * Fixed bugs downloading kaggle dataset. * Makes registry store dataset instances, not classes. Also comments out import_submodules for testing. * Typo fix. * Only pass data files on to load_unprocessed_dataframe, symlink directories. * Downloading dataset files into existing directory if exists. * Refactor: make datasets fully config-first, lazy load dataset loaders. * Implemented agnews custom loader. * Implements train/validation/test split by files, and globbing support * Adds _glob_multiple * Adds adult_census_income, agnews, allstate_claims_severity. * Implements sha256 verification, adds more datasets up to creditcard_fraud. * Adds checksums, dbpedia, electricity * Fixes gzip file name returned as string not list, adds up to forest_cover dataset. * Adds datasets up to reuters_r8 * Adds all datasets which don't require a custom class. * Restore dataset import behavior by implementing module __getattr__ * Adds KDD datasets. * Adds ieee_fraud. * Adds imbalanced_insurance, insurance_lite. * Adds mnist. * Completes implementation of all of the built-in datasets. * Made cache_dir optional, read from environment variable if set. * Upgrades datasets tests. * Adds test for new dataset config API. Also adds scripts for dataset link checking. * Fixes loading allstate claims severity dataset. * Use @lru_cache(1), @cache not supported in python < 3.9 * Deletes dataset registry, updates automl test utils * Fix imports of datasets API. * Adds more detail to sha256: docstring and basic README * Copy-paste link oops. * Fixes handling of nested archive types like .tar.bz Also adds a LUDWIG_CACHE and export to the README * Adds link for twitter bots. * Fix order of splits in README.md * typo * Adds verify as a phase in doc string. * Support .pqt, .pq extensions for parquet. * Handle nested archives with longer file extensions like .csv.zip * Handle nested .gz types properly too. Check all extensions with .endswith * Handle all archive types with .endswith * Update ludwig/datasets/loaders/split_loaders.py Co-authored-by: Joppe Geluykens <[email protected]> * Adds explanation for export, fixes preserve_paths (should be relative to processed_dataset_dir) * Resolve preserved paths relative to raw dataset dir before move. * Catch runtime exception from extracting sub-archives. Co-authored-by: Daniel Treiman <[email protected]> Co-authored-by: Joppe Geluykens <[email protected]>
38
0
1,336
14
1
6
def correlate(a, v, mode='valid'):
    return multiarray.correlate2(a, v, mode)
numpy/core/numeric.py
37
numpy
{ "docstring": "\n Cross-correlation of two 1-dimensional sequences.\n\n This function computes the correlation as generally defined in signal\n processing texts::\n\n .. math:: c_k = \\sum_n a_{n+k} * \\overline{v_n}\n\n with a and v sequences being zero-padded where necessary and\n :math:`\\overline x` denoting complex conjugation.\n\n Parameters\n ----------\n a, v : array_like\n Input sequences.\n mode : {'valid', 'same', 'full'}, optional\n Refer to the `convolve` docstring. Note that the default\n is 'valid', unlike `convolve`, which uses 'full'.\n old_behavior : bool\n `old_behavior` was removed in NumPy 1.10. If you need the old\n behavior, use `multiarray.correlate`.\n\n Returns\n -------\n out : ndarray\n Discrete cross-correlation of `a` and `v`.\n\n See Also\n --------\n convolve : Discrete, linear convolution of two one-dimensional sequences.\n multiarray.correlate : Old, no conjugate, version of correlate.\n scipy.signal.correlate : uses FFT which has superior performance on large arrays. \n\n Notes\n -----\n The definition of correlation above is not unique and sometimes correlation\n may be defined differently. Another common definition is::\n\n .. math:: c'_k = \\sum_n a_{n} * \\overline{v_{n+k}\n\n which is related to :math:`c_k` by :math:`c'_k = c_{-k}`.\n\n `numpy.correlate` may perform slowly in large arrays (i.e. n = 1e5) because it does\n not use the FFT to compute the convolution; in that case, `scipy.signal.correlate` might\n be preferable.\n \n\n Examples\n --------\n >>> np.correlate([1, 2, 3], [0, 1, 0.5])\n array([3.5])\n >>> np.correlate([1, 2, 3], [0, 1, 0.5], \"same\")\n array([2. , 3.5, 3. ])\n >>> np.correlate([1, 2, 3], [0, 1, 0.5], \"full\")\n array([0.5, 2. , 3.5, 3. , 0. ])\n\n Using complex sequences:\n\n >>> np.correlate([1+1j, 2, 3-1j], [0, 1, 0.5j], 'full')\n array([ 0.5-0.5j, 1.0+0.j , 1.5-1.5j, 3.0-1.j , 0.0+0.j ])\n\n Note that you get the time reversed, complex conjugated result\n when the two input sequences change places, i.e.,\n ``c_{va}[k] = c^{*}_{av}[-k]``:\n\n >>> np.correlate([0, 1, 0.5j], [1+1j, 2, 3-1j], 'full')\n array([ 0.0+0.j , 3.0+1.j , 1.5+1.5j, 1.0+0.j , 0.5+0.5j])\n\n ", "language": "en", "n_whitespaces": 491, "n_words": 293, "vocab_size": 195 }
8
Python
7
24d653f11a55f76b125a91d7d4523052ef14b9b9
numeric.py
160,229
2
23
correlate
https://github.com/numpy/numpy.git
DOC: Use math mode
14
0
38,576
7
2
2
def default(val, default):
    return val if val is not None else default
parlai/core/params.py
27
ParlAI
{ "docstring": "\n shorthand for explicit None check for optional arguments.\n ", "language": "en", "n_whitespaces": 15, "n_words": 8, "vocab_size": 7 }
12
Python
11
ecdfbd0bb2ab76876e9fd3817d4502c3938a2ade
params.py
195,055
2
17
default
https://github.com/facebookresearch/ParlAI.git
Decoder-Only Transformer (#4329) * quick and dirty decoder-only implementation * fix decoder_only incremental decoding * remove unused code, add some comments, propogate func signature change * consolidate code in decoder.py * unify encoder_state * export PassThroughEncoder * add missing build_ functions * defaults in TransformerDecoderLayer __init__ * comments, consolidating more logic, simplified forward_layers args * resize token embeddings and unit test * attempt to suppress some unused import warnings * padded_tensor fp16 friendly * autoformat * decoder_only -> decoder * more documentation * update name in test * add missing dict args * more argument massaging * update TestBartDistillation::test_narrow_distillation_losses numbers * update TestTransformerDistillation::test_narrow_distillation_losses numbers * fix _pad_tensor in seeker Co-authored-by: klshuster <[email protected]>
18
0
47,174
7
1
3
def actors(self):
    # TODO(jungong) : remove this API once WorkerSet.remote_workers()
    # and WorkerSet._remote_workers() are removed.
    return self.__actors
rllib/utils/actor_manager.py
21
ray
{ "docstring": "Access the underlying actors being managed.\n\n Warning (jungong): This API should almost never be used.\n It is only exposed for testing and backward compatibility reasons.\n Remote actors managed by this class should never be accessed directly.\n ", "language": "en", "n_whitespaces": 64, "n_words": 36, "vocab_size": 32 }
17
Python
16
b84dac2609bd587c43ed17bb6fa18fb7241a41de
actor_manager.py
135,643
2
10
actors
https://github.com/ray-project/ray.git
Refactor ActorManager to store underlying remote actors in dict. (#29953) Signed-off-by: Jun Gong <[email protected]>
45
0
30,681
6
14
34
def _send(self, msg, endpoint="events", quiet=False, from_log=False, create=True):
    if msg.get("eid", None) is None:
        msg["eid"] = self.env
        self.env_list.add(self.env)
    if msg.get("eid", None) is not None:
        self.env_list.add(msg["eid"])
    # TODO investigate send use cases, then deprecate
    if not self.send:
        return msg, endpoint
    if "win" in msg and msg["win"] is None and create:
        msg["win"] = "window_" + get_rand_id()
    if not from_log:
        self._log(msg, endpoint)
    if self.offline:
        # If offline, don't even try to post
        return msg["win"] if "win" in msg else True
    try:
        return self._handle_post(
            "{0}:{1}{2}/{3}".format(
                self.server, self.port, self.base_url, endpoint
            ),
            data=json.dumps(msg),
        )
    except (requests.RequestException, requests.ConnectionError, requests.Timeout):
        if self.raise_exceptions:
            raise ConnectionError("Error connecting to Visdom server")
        else:
            if self.raise_exceptions is None:
                warnings.warn(
                    "Visdom is eventually changing to default to raising "
                    "exceptions rather than ignoring/printing. This change"
                    " is expected to happen by July 2018. Please set "
                    "`raise_exceptions` to False to retain current "
                    "behavior.",
                    PendingDeprecationWarning,
                )
            if not quiet:
                print("Exception in user code:")
                print("-" * 60)
                traceback.print_exc()
            return False
py/visdom/__init__.py
412
visdom
{ "docstring": "\n This function sends specified JSON request to the Tornado server. This\n function should generally not be called by the user, unless you want to\n build the required JSON yourself. `endpoint` specifies the destination\n Tornado server endpoint for the request.\n\n If `create=True`, then if `win=None` in the message a new window will be\n created with a random name. If `create=False`, `win=None` indicates the\n operation should be applied to all windows.\n ", "language": "en", "n_whitespaces": 126, "n_words": 69, "vocab_size": 51 }
153
Python
107
5b8b7f267cfaf76a2a39a727ef31a62b3909a093
__init__.py
106,875
39
244
_send
https://github.com/fossasia/visdom.git
apply black py to all python files
712
0
22,492
17
7
22
def add_edge(self, u_for_edge, v_for_edge, key=None, **attr):
    u, v = u_for_edge, v_for_edge
    # add nodes
    if u not in self._succ:
        if u is None:
            raise ValueError("None cannot be a node")
        self._succ[u] = self.adjlist_inner_dict_factory()
        self._pred[u] = self.adjlist_inner_dict_factory()
        self._node[u] = self.node_attr_dict_factory()
    if v not in self._succ:
        if v is None:
            raise ValueError("None cannot be a node")
        self._succ[v] = self.adjlist_inner_dict_factory()
        self._pred[v] = self.adjlist_inner_dict_factory()
        self._node[v] = self.node_attr_dict_factory()
    if key is None:
        key = self.new_edge_key(u, v)
    if v in self._succ[u]:
        keydict = self._adj[u][v]
        datadict = keydict.get(key, self.edge_attr_dict_factory())
        datadict.update(attr)
        keydict[key] = datadict
    else:
        # selfloops work this way without special treatment
        datadict = self.edge_attr_dict_factory()
        datadict.update(attr)
        keydict = self.edge_key_dict_factory()
        keydict[key] = datadict
        self._succ[u][v] = keydict
        self._pred[v][u] = keydict
    return key
networkx/classes/multidigraph.py
390
networkx
{ "docstring": "Add an edge between u and v.\n\n The nodes u and v will be automatically added if they are\n not already in the graph.\n\n Edge attributes can be specified with keywords or by directly\n accessing the edge's attribute dictionary. See examples below.\n\n Parameters\n ----------\n u_for_edge, v_for_edge : nodes\n Nodes can be, for example, strings or numbers.\n Nodes must be hashable (and not None) Python objects.\n key : hashable identifier, optional (default=lowest unused integer)\n Used to distinguish multiedges between a pair of nodes.\n attr : keyword arguments, optional\n Edge data (or labels or objects) can be assigned using\n keyword arguments.\n\n Returns\n -------\n The edge key assigned to the edge.\n\n See Also\n --------\n add_edges_from : add a collection of edges\n\n Notes\n -----\n To replace/update edge data, use the optional key argument\n to identify a unique edge. Otherwise a new edge will be created.\n\n NetworkX algorithms designed for weighted graphs cannot use\n multigraphs directly because it is not clear how to handle\n multiedge weights. Convert to Graph using edge attribute\n 'weight' to enable weighted graph algorithms.\n\n Default keys are generated using the method `new_edge_key()`.\n This method can be overridden by subclassing the base class and\n providing a custom `new_edge_key()` method.\n\n Examples\n --------\n The following all add the edge e=(1, 2) to graph G:\n\n >>> G = nx.MultiDiGraph()\n >>> e = (1, 2)\n >>> key = G.add_edge(1, 2) # explicit two-node form\n >>> G.add_edge(*e) # single edge as tuple of two nodes\n 1\n >>> G.add_edges_from([(1, 2)]) # add edges from iterable container\n [2]\n\n Associate data to edges using keywords:\n\n >>> key = G.add_edge(1, 2, weight=3)\n >>> key = G.add_edge(1, 2, key=0, weight=4) # update data for key=0\n >>> key = G.add_edge(1, 3, weight=7, capacity=15, length=342.7)\n\n For non-string attribute keys, use subscript notation.\n\n >>> ekey = G.add_edge(1, 2)\n >>> G[1][2][0].update({0: 5})\n >>> G.edges[1, 2, 0].update({0: 5})\n ", "language": "en", "n_whitespaces": 677, "n_words": 301, "vocab_size": 186 }
112
Python
58
6ab4e54e696ae65534e1c3329930df8beee03573
multidigraph.py
176,523
29
246
add_edge
https://github.com/networkx/networkx.git
Fixed wrong dict factory usage on MultiDiGraph (#5456) * Fixed the issue that the wrong dict factory on a MultiDiGraph was used for edge attributes (edge_key_dict_factory instead of edge_attr_dict_factory) Extended tests to typecheck the dict factories and added a test that incorporates custom dict factories on a MultiDiGraph * Mypy ignore inferred types in MDG subclass. Co-authored-by: Fabian Ball <[email protected]> Co-authored-by: Ross Barnowski <[email protected]>
425
0
41,942
12
2
15
def _get_kwargs(self) -> Dict[str, Union[int, Tuple[int, int]]]:
    retval = {kword: self._kwarg_mapping[kword]
              for kword in self._kwarg_requirements[self._blur_type]}
    logger.trace("BlurMask kwargs: %s", retval)  # type: ignore
    return retval


_HASHES_SEEN: Dict[str, Dict[str, int]] = {}
lib/align/detected_face.py
107
faceswap
{ "docstring": " dict: the valid keyword arguments for the requested :attr:`_blur_type` ", "language": "en", "n_whitespaces": 10, "n_words": 9, "vocab_size": 8 }
30
Python
26
5e73437be47f2410439a3c6716de96354e6a0c94
detected_face.py
101,222
6
56
_get_kwargs
https://github.com/deepfakes/faceswap.git
lib.align updates: - alignments.py - Add typed dicts for imported alignments - Explicitly check for presence of thumb value in alignments dict - linting - detected_face.py - Typing - Linting - Legacy support for pre-aligned face - Update dependencies to new property names
75
0
20,642
10
1
4
def getencoder(encoding):
    return lookup(encoding).encode
python3.10.4/Lib/codecs.py
24
XX-Net
{ "docstring": " Lookup up the codec for the given encoding and return\n its encoder function.\n\n Raises a LookupError in case the encoding cannot be found.\n\n ", "language": "en", "n_whitespaces": 41, "n_words": 23, "vocab_size": 20 }
4
Python
4
8198943edd73a363c266633e1aa5b2a9e9c9f526
codecs.py
221,361
2
13
getencoder
https://github.com/XX-net/XX-Net.git
add python 3.10.4 for windows
10
0
56,376
8
1
4
def reset(self):
    self._landmarks = {}
    self._tk_faces = {}
tools/manual/faceviewer/viewport.py
33
faceswap
{ "docstring": " Reset all the cached objects on a face size change. ", "language": "en", "n_whitespaces": 11, "n_words": 10, "vocab_size": 10 }
8
Python
6
5e73437be47f2410439a3c6716de96354e6a0c94
viewport.py
101,267
3
18
reset
https://github.com/deepfakes/faceswap.git
lib.align updates: - alignments.py - Add typed dicts for imported alignments - Explicitly check for presence of thumb value in alignments dict - linting - detected_face.py - Typing - Linting - Legacy support for pre-aligned face - Update dependencies to new property names
29
0
20,686
7
4
17
def on_mode_entered(self, mode):
    if (config.val.tabs.mode_on_change == 'restore' and
            mode in modeman.INPUT_MODES):
        tab = self.widget.currentWidget()
        if tab is not None:
            assert isinstance(tab, browsertab.AbstractTab), tab
            tab.data.input_mode = mode
qutebrowser/mainwindow/tabbedbrowser.py
96
qutebrowser
{ "docstring": "Save input mode when tabs.mode_on_change = restore.", "language": "en", "n_whitespaces": 6, "n_words": 7, "vocab_size": 7 }
26
Python
21
a20bb67a878b2e68abf8268c1b0a27f018d01352
tabbedbrowser.py
320,800
7
60
on_mode_entered
https://github.com/qutebrowser/qutebrowser.git
mypy: Upgrade to PyQt5-stubs 5.15.6.0 For some unknown reason, those new stubs cause a *lot* of things now to be checked by mypy which formerly probably got skipped due to Any being implied somewhere. The stubs themselves mainly improved, with a couple of regressions too. In total, there were some 337 (!) new mypy errors. This commit fixes almost all of them, and the next commit improves a fix to get things down to 0 errors again. Overview of the changes: ==== qutebrowser/app.py - Drop type ignore due to improved stubs. ==== qutebrowser/browser/browsertab.py - Specify the type of _widget members more closely than just QWidget. This is debatable: I suppose the abstract stuff shouldn't need to know anything about the concrete backends at all. But it seems like we cut some corners when initially implementing things, and put some code in browsertab.py just because the APIs of both backends happened to be compatible. Perhaps something to reconsider once we drop QtWebKit and hopefully implement a dummy backend. - Add an additional assertion in AbstractAction.run_string. This is already covered by the isinstance(member, self.action_base) above it, but that's too dynamic for mypy to understand. - Fix the return type of AbstractScroller.pos_px, which is a QPoint (with x and y components), not a single int. - Fix the return type of AbstractScroller.pos_perc, which is a Tuple (with x and y components), not a single int. - Fix the argument types of AbstractScroller.to_perc, as it's possible to pass fractional percentages too. - Specify the type for AbstractHistoryPrivate._history. See above (_widget) re this being debatable. - Fix the return type of AbstractTabPrivate.event_target(), which can be None (see #3888). - Fix the return type of AbstractTabPrivate.run_js_sync, which is Any (the JS return value), not None. - Fix the argument type for AbstractTabPrivate.toggle_inspector: position can be None to use the last used position. - Declare the type of sub-objects of AbstractTab. - Fix the return value of AbstractTab.icon(), which is the QIcon, not None. ==== qutebrowser/browser/commands.py - Make sure the active window is a MainWindow (with a .win_id attribute). ==== qutebrowser/browser/downloadview.py - Add _model() which makes sure that self.model() is a DownloadModel, not None or any other model. This is needed because other methods access a variety of custom attributes on it, e.g. last_index(). ==== qutebrowser/browser/greasemonkey.py - Add an ignore for AbstractDownload.requested_url which we patch onto the downloads. Probably would be nicer to add it as a proper attribute which always gets set by the DownloadManager. ==== qutebrowser/browser/hints.py - Remove type ignores for QUrl.toString(). - Add a new type ignore for combining different URL flags (which works, but is not exactly type safe... still probably a regression in the stubs). - Make sure the things we get back from self._get_keyparser are what we actually expect. Probably should introduce a TypedDict (and/or overloads for _get_keyparser with typing.Literal) to teach mypy about the exact return value. See #7098. This is needed because we access Hint/NormalKeyParser-specific attributes such as .set_inhibited_timout() or .update_bindings(). ==== qutebrowser/browser/inspector.py - Similar changes than in browsertab.py to make some types where we share API (e.g. .setPage()) more concrete. Didn't work out unfortunately, see next commit. ==== qutebrowser/browser/network/pac.py - Remove now unneeded type ignore for signal. 
==== qutebrowser/browser/qtnetworkdownloads.py - Make sure that downloads is a qtnetworkdownloads.DownloadItem (rather than an AbstractDownload), so that we can call ._uses_nam() on it. ==== qutebrowser/browser/qutescheme.py - Remove now unneeded type ignore for QUrl flags. ==== qutebrowser/browser/urlmarks.py - Specify the type of UrlMarkManager._lineparser, as those only get initialized in _init_lineparser of subclasses, so mypy doesn't know it's supposed to exist. ==== qutebrowser/browser/webelem.py - New casts to turn single KeyboardModifier (enum) entries into KeyboardModifiers (flags). Might not be needed anymore with Qt 6. - With that, casting the final value is now unneeded. ==== qutebrowser/browser/webengine/notification.py - Remove now unneeded type ignore for signal. - Make sure the self.sender() we get in HerbeNotificationAdapter._on_finished() is a QProcess, not just any QObject. ==== qutebrowser/browser/webengine/webenginedownloads.py - Remove now unneeded type ignores for signals. ==== qutebrowser/browser/webengine/webengineelem.py - Specify the type of WebEngineElement._tab. - Remove now unneeded type ignore for mixed flags. ==== qutebrowser/browser/webengine/webengineinspector.py - See changes to inspector.py and next commit. - Remove now unneeded type ignore for signal. ==== qutebrowser/browser/webengine/webenginequtescheme.py - Remove now unneeded type ignore for mixed flags. ==== qutebrowser/browser/webengine/webenginesettings.py - Ignore access of .setter attribute which we patch onto QWebEngineProfile. Would be nice to have a subclass or wrapper-class instead. ==== qutebrowser/browser/webengine/webenginetab.py - Specified the type of _widget members more closely than just QWidget. See browsertab.py changes for details. - Remove some now-unneeded type ignores for creating FindFlags. - Specify more concrete types for WebEngineTab members where we actually need to access WebEngine-specific attributes. - Make sure the page we get is our custom WebEnginePage subclass, not just any QWebEnginePage. This is needed because we access custom attributes on it. ==== qutebrowser/browser/webengine/webview.py - Make sure the page we get is our custom WebEnginePage subclass, not just any QWebEnginePage. This is needed because we access custom attributes on it. ==== qutebrowser/browser/webkit/network/networkreply.py - Remove now unneeded type ignores for signals. ==== qutebrowser/browser/webkit/webkitinspector.py - See changes to inspector.py and next commit. ==== qutebrowser/browser/webkit/webkittab.py - Specify the type of _widget members more closely than just QWidget. See browsertab.py changes for details. - Add a type ignore for WebKitAction because our workaround needs to treat them as ints (which is allowed by PyQt, even if not type-safe). - Add new ignores for findText calls: The text is a QString and can be None; the flags are valid despite mypy thinking they aren't (stubs regression?). - Specify the type for WebKitHistoryPrivate._history, because we access WebKit-specific attributes. See above (_widget) re this being debatable. - Make mypy aware that .currentFrame() and .frameAt() can return None (stubs regression?). - Make sure the .page() and .page().networkAccessManager() are our subclasses rather than the more generic QtWebKit objects, as we use custom attributes. - Add new type ignores for signals (stubs regression!) 
==== qutebrowser/browser/webkit/webpage.py - Make sure the .networkAccessManager() is our subclass rather than the more generic QtWebKit object, as we use custom attributes. - Replace a cast by a type ignore. The cast didn't work anymore. ==== qutebrowser/browser/webkit/webview.py - Make sure the .page() is our subclass rather than the more generic QtWebKit object, as we use custom attributes. ==== qutebrowser/commands/userscripts.py - Remove now unneeded type ignore for signal. ==== qutebrowser/completion/completer.py - Add a new _completion() getter (which ensures it actually gets the completion view) rather than accessing the .parent() directly (which could be any QObject). ==== qutebrowser/completion/completiondelegate.py - Make sure self.parent() is a CompletionView (no helper method as there is only one instance). - Remove a now-unneeded type ignore for adding QSizes. ==== qutebrowser/completion/completionwidget.py - Add a ._model() getter which ensures that we get a CompletionModel (with custom attributes) rather than Qt's .model() which can be any QAbstractItemModel (or None). - Removed a now-unneeded type ignore for OR-ing flags. ==== qutebrowser/completion/models/completionmodel.py - Remove now unneeded type ignores for signals. - Ignore a complaint about .set_pattern() not being defined. Completion categories don't share any common parent class, so it would be good to introduce a typing.Protocol for this. See #7098. ==== qutebrowser/components/misccommands.py - Removed a now-unneeded type ignore for OR-ing flags. ==== qutebrowser/components/readlinecommands.py - Make sure QApplication.instance() is a QApplication (and not just a QCoreApplication). This includes the former "not None" check. ==== qutebrowser/components/scrollcommands.py - Add basic annotation for "funcs" dict. Could have a callable protocol to specify it needs a count kwarg, see #7098. ==== qutebrowser/config/stylesheet.py - Correctly specify that stylesheet apply to QWidgets, not any QObject. - Ignore an attr-defined for obj.STYLESHEET. Perhaps could somehow teach mypy about this with overloads and protocols (stylesheet for set_register being None => STYLESHEET needs to be defined, otherwise anything goes), but perhaps not worth the troble. See #7098. ==== qutebrowser/keyinput/keyutils.py - Remove some now-unneeded type ignores and add a cast for using a single enum value as flags. Might need to look at this again with Qt 6 support. ==== qutebrowser/keyinput/modeman.py - Add a FIXME for using a TypedDict, see comments for hints.py above. ==== qutebrowser/mainwindow/mainwindow.py - Remove now-unneeded type ignores for calling with OR-ed flags. - Improve where we cast from WindowType to WindowFlags, no int needed - Use new .tab_bar() getter, see below. ==== qutebrowser/mainwindow/prompt.py - Remove now-unneeded type ignores for calling with OR-ed flags. ==== qutebrowser/mainwindow/statusbar/bar.py - Adjust type ignores around @pyqtProperty. The fact one is still needed seems like a stub regression. ==== qutebrowser/mainwindow/statusbar/command.py - Fix type for setText() override (from QLineEdit): text can be None (QString in C++). ==== qutebrowser/mainwindow/statusbar/url.py - Adjust type ignores around @pyqtProperty. The fact one is still needed seems like a stub regression. ==== qutebrowser/mainwindow/tabbedbrowser.py - Specify that TabDeque manages browser tabs, not any QWidgets. It accesses AbstractTab-specific attributes. - Make sure that the .tabBar() we get is a tabwidget.TabBar, as we access .maybe_hide. 
- Fix the annotations for stored marks: Scroll positions are a QPoint, not int. - Add _current_tab() and _tab_by_idx() wrappers for .currentWidget() and .widget(), which ensures that the return values are valid AbstractTabs (or None for _tab_by_idx). This is needed because we access AbstractTab-specific attributes. - For some places, where the tab can be None, continue using .currentTab() but add asserts. - Remove some now-unneeded [unreachable] ignores, as mypy knows about the None possibility now. ==== qutebrowser/mainwindow/tabwidget.py - Add new tab_bar() and _tab_by_idx() helpers which check that the .tabBar() and .widget() are of type TabBar and AbstractTab, respectively. - Add additional assertions where we expect ._tab_by_idx() to never be None. - Remove dead code in get_tab_fields for handling a None y scroll position. I was unable to find any place in the code where this could be set to None. - Remove some now-unneeded type ignores and casts, as mypy now knows that _type_by_idx() could be None. - Work around a strange instance where mypy complains about not being able to find the type of TabBar.drag_in_progress from TabWidget._toggle_visibility, despite it clearly being shown as a bool *inside* that class without any annotation. - Add a ._tab_widget() getter in TabBar which ensures that the .parent() is in fact a TabWidget. ==== qutebrowser/misc/crashsignal.py - Remove now unneeded type ignores for signals. ==== qutebrowser/misc/editor.py - Remove now unneeded type ignores for signals. ==== qutebrowser/misc/ipc.py - Remove now unneeded type ignores for signals. - Add new type ignores for .error() which is both a signal and a getter (stub regression?). Won't be relevant for Qt 6 anymore, as the signal was renamed to errorOccurred in 5.15. ==== qutebrowser/misc/objects.py - Make sure mypy knows that objects.app is our custom Application (with custom attributes) rather than any QApplication. ==== qutebrowser/utils/objreg.py - Ignore attr-defined for .win_id attributes. Maybe could add a typing.Protocol, but ideally, the whole objreg stuff should die one day anyways. ==== tests/unit/completion/test_completer.py - Make CompletionWidgetStub inherit from CompletionView so that it passes the new isinstance() asserts in completer.py (see above).
107
0
117,362
12
1
29
def test_shared_embedding_column_with_non_sequence_categorical(self):
    with tf.Graph().as_default():
        vocabulary_size = 3
        sparse_input_a = tf.compat.v1.SparseTensorValue(
            # example 0, ids [2]
            # example 1, ids [0, 1]
            indices=((0, 0), (1, 0), (1, 1)),
            values=(2, 0, 1),
            dense_shape=(2, 2),
        )
        sparse_input_b = tf.compat.v1.SparseTensorValue(
            # example 0, ids [2]
            # example 1, ids [0, 1]
            indices=((0, 0), (1, 0), (1, 1)),
            values=(2, 0, 1),
            dense_shape=(2, 2),
        )
        categorical_column_a = (
            tf.feature_column.categorical_column_with_identity(
                key="aaa", num_buckets=vocabulary_size
            )
        )
        categorical_column_b = (
            tf.feature_column.categorical_column_with_identity(
                key="bbb", num_buckets=vocabulary_size
            )
        )
        shared_embedding_columns = tf.feature_column.shared_embeddings(
            [categorical_column_a, categorical_column_b], dimension=2
        )
        sequence_input_layer = ksfc.SequenceFeatures(
            shared_embedding_columns
        )
        with self.assertRaisesRegex(
            ValueError,
            r"In embedding_column: aaa_shared_embedding\. "
            r"categorical_column must "
            r"be of type SequenceCategoricalColumn to use "
            r"SequenceFeatures\.",
        ):
            _, _ = sequence_input_layer(
                {"aaa": sparse_input_a, "bbb": sparse_input_b}
            )
keras/feature_column/sequence_feature_column_test.py
332
keras
{ "docstring": "Tests that error is raised for non-sequence shared embedding\n column.", "language": "en", "n_whitespaces": 16, "n_words": 10, "vocab_size": 10 }
115
Python
64
6fafb567af4e4d9f42974d0b6c55b18bc03e17eb
sequence_feature_column_test.py
278,120
39
218
test_shared_embedding_column_with_non_sequence_categorical
https://github.com/keras-team/keras.git
resolve line-too-long in feature_column
696
0
82,378
15
6
51
def linear_mpc_control(xref, xbar, x0, dref):
    x = cvxpy.Variable((NX, T + 1))
    u = cvxpy.Variable((NU, T))
    cost = 0.0
    constraints = []
    for t in range(T):
        cost += cvxpy.quad_form(u[:, t], R)
        if t != 0:
            cost += cvxpy.quad_form(xref[:, t] - x[:, t], Q)
        A, B, C = get_linear_model_matrix(
            xbar[2, t], xbar[3, t], dref[0, t])
        constraints += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t] + C]
        if t < (T - 1):
            cost += cvxpy.quad_form(u[:, t + 1] - u[:, t], Rd)
            constraints += [cvxpy.abs(u[1, t + 1] - u[1, t]) <= MAX_DSTEER * DT]
    cost += cvxpy.quad_form(xref[:, T] - x[:, T], Qf)
    constraints += [x[:, 0] == x0]
    constraints += [x[2, :] <= MAX_SPEED]
    constraints += [x[2, :] >= MIN_SPEED]
    constraints += [cvxpy.abs(u[0, :]) <= MAX_ACCEL]
    constraints += [cvxpy.abs(u[1, :]) <= MAX_STEER]
    prob = cvxpy.Problem(cvxpy.Minimize(cost), constraints)
    prob.solve(solver=cvxpy.ECOS, verbose=False)
    if prob.status == cvxpy.OPTIMAL or prob.status == cvxpy.OPTIMAL_INACCURATE:
        ox = get_nparray_from_matrix(x.value[0, :])
        oy = get_nparray_from_matrix(x.value[1, :])
        ov = get_nparray_from_matrix(x.value[2, :])
        oyaw = get_nparray_from_matrix(x.value[3, :])
        oa = get_nparray_from_matrix(u.value[0, :])
        odelta = get_nparray_from_matrix(u.value[1, :])
    else:
        print("Error: Cannot solve mpc..")
        oa, odelta, ox, oy, oyaw, ov = None, None, None, None, None, None
    return oa, odelta, ox, oy, oyaw, ov
PathTracking/model_predictive_speed_and_steer_control/model_predictive_speed_and_steer_control.py
699
PythonRobotics
{ "docstring": "\n linear mpc control\n\n xref: reference point\n xbar: operational point\n x0: initial state\n dref: reference steer angle\n ", "language": "en", "n_whitespaces": 35, "n_words": 16, "vocab_size": 14 }
201
Python
108
c05a4fdada59fd97332417c4b99515118bfef45c
model_predictive_speed_and_steer_control.py
19,224
35
476
linear_mpc_control
https://github.com/AtsushiSakai/PythonRobotics.git
Fix ModuleNotFoundError when executing test in the tests folder and little improve MPC controller (#619) * Fix ModuleNotFoundError when executing test in the tests folder Signed-off-by: Trung Kien <[email protected]> * Improve model_predictive_speed_and_steer_control - Fix typo - Using @ for matrix multiplication instead of * with have been deprecated in CVXPY1.1 - Fix missing conftest module in test file Signed-off-by: Trung Kien <[email protected]>
414
0
2,919
17
1
4
def uses_after_args(self) -> Namespace:
    return self.peas_args['uses_after']
jina/peapods/pods/__init__.py
28
jina
{ "docstring": "Get the arguments for the `uses_after` of this Pod.\n\n\n .. # noqa: DAR201\n ", "language": "en", "n_whitespaces": 27, "n_words": 13, "vocab_size": 12 }
6
Python
6
933415bfa1f9eb89f935037014dfed816eb9815d
__init__.py
9,895
7
15
uses_after_args
https://github.com/jina-ai/jina.git
feat: star routing (#3900) * feat(proto): adjust proto for star routing (#3844) * feat(proto): adjust proto for star routing * feat(proto): generate proto files * feat(grpc): refactor grpclet interface (#3846) * feat: refactor connection pool for star routing (#3872) * feat(k8s): add more labels to k8s deployments * feat(network): refactor connection pool * feat(network): refactor k8s pool * feat: star routing graph gateway (#3877) * feat: star routing - refactor grpc data runtime (#3887) * feat(runtimes): refactor grpc dataruntime * fix(tests): adapt worker runtime tests * fix(import): fix import * feat(proto): enable sending multiple lists (#3891) * feat: star routing gateway (#3893) * feat: star routing gateway all protocols (#3897) * test: add streaming and prefetch tests (#3901) * feat(head): new head runtime for star routing (#3899) * feat(head): new head runtime * feat(head): new head runtime * style: fix overload and cli autocomplete * feat(network): improve proto comments Co-authored-by: Jina Dev Bot <[email protected]> * feat(worker): merge docs in worker runtime (#3905) * feat(worker): merge docs in worker runtime * feat(tests): assert after clean up * feat(tests): star routing runtime integration tests (#3908) * fix(tests): fix integration tests * test: test runtimes fast slow request (#3910) * feat(zmq): purge zmq, zed, routing_table (#3915) * feat(zmq): purge zmq, zed, routing_table * style: fix overload and cli autocomplete * feat(zmq): adapt comment in dependency list * style: fix overload and cli autocomplete * fix(tests): fix type tests Co-authored-by: Jina Dev Bot <[email protected]> * test: add test gateway to worker connection (#3921) * feat(pea): adapt peas for star routing (#3918) * feat(pea): adapt peas for star routing * style: fix overload and cli autocomplete * feat(pea): add tests * feat(tests): add failing head pea test Co-authored-by: Jina Dev Bot <[email protected]> * feat(tests): integration tests for peas (#3923) * feat(tests): integration tests for peas * feat(pea): remove _inner_pea function * feat: star routing container pea (#3922) * test: rescue tests (#3942) * fix: fix streaming tests (#3945) * refactor: move docker run to run (#3948) * feat: star routing pods (#3940) * feat(pod): adapt pods for star routing * feat(pods): adapt basepod to star routing * feat(pod): merge pod and compound pod * feat(tests): fix tests * style: fix overload and cli autocomplete * feat(test): add container pea int test * feat(ci): remove more unnecessary tests * fix(tests): remove jinad runtime * feat(ci): remove latency tracking * fix(ci): fix ci def * fix(runtime): enable runtime to be exited * fix(tests): wrap runtime test in process * fix(runtimes): remove unused runtimes * feat(runtimes): improve cancel wait * fix(ci): build test pip again in ci * fix(tests): fix a test * fix(test): run async in its own process * feat(pod): include shard in activate msg * fix(pea): dont join * feat(pod): more debug out * feat(grpc): manage channels properly * feat(pods): remove exitfifo * feat(network): add simple send retry mechanism * fix(network): await pool close * fix(test): always close grpc server in worker * fix(tests): remove container pea from tests * fix(tests): reorder tests * fix(ci): split tests * fix(ci): allow alias setting * fix(test): skip a test * feat(pods): address comments Co-authored-by: Jina Dev Bot <[email protected]> * test: unblock skipped test (#3957) * feat: jinad pea (#3949) * feat: jinad pea * feat: jinad pea * test: remote peas * test: toplogy tests 
with jinad * ci: parallel jobs * feat(tests): add pod integration tests (#3958) * feat(tests): add pod integration tests * fix(tests): make tests less flaky * fix(test): fix test * test(pea): remote pea topologies (#3961) * test(pea): remote pea simple topology * test: remote pea topologies * refactor: refactor streamer result handling (#3960) * feat(k8s): adapt K8s Pod for StarRouting (#3964) * test: optimize k8s test * test: increase timeout and use different namespace * test: optimize k8s test * test: build and load image when needed * test: refactor k8s test * test: fix image name error * test: fix k8s image load * test: fix typoe port expose * test: update tests in connection pool and handling * test: remove unused fixture * test: parameterize docker images * test: parameterize docker images * test: parameterize docker images * feat(k8s): adapt k8s pod for star routing * fix(k8s): dont overwrite add/remove function in pool * fix(k8s): some fixes * fix(k8s): some more fixes * fix(k8s): linting * fix(tests): fix tests * fix(tests): fix k8s unit tests * feat(k8s): complete k8s integration test * feat(k8s): finish k8s tests * feat(k8s): fix test * fix(tests): fix test with no name * feat(k8s): unify create/replace interface * feat(k8s): extract k8s port constants * fix(tests): fix tests * fix(tests): wait for runtime being ready in tests * feat(k8s): address comments Co-authored-by: bwanglzu <[email protected]> * feat(flow): adapt Flow for StarRouting (#3986) * feat(flow): add routes * feat(flow): adapt flow to star routing * style: fix overload and cli autocomplete * feat(flow): handle empty topologies * feat(k8s): allow k8s pool disabling * style: fix overload and cli autocomplete * fix(test): fix test with mock * fix(tests): fix more tests * feat(flow): clean up tests * style: fix overload and cli autocomplete * fix(tests): fix more tests * feat: add plot function (#3994) * fix(tests): avoid hanging tests * feat(flow): add type hinting * fix(test): fix duplicate exec name in test * fix(tests): fix more tests * fix(tests): enable jinad test again * fix(tests): random port fixture * fix(style): replace quotes Co-authored-by: Jina Dev Bot <[email protected]> Co-authored-by: Joan Fontanals <[email protected]> * feat(ci): bring back ci (#3997) * feat(ci): enable ci again * style: fix overload and cli autocomplete * feat(ci): add latency tracking * feat(ci): bring back some tests * fix(tests): remove invalid port test * feat(ci): disable daemon and distributed tests * fix(tests): fix entrypoint in hub test * fix(tests): wait for gateway to be ready * fix(test): fix more tests * feat(flow): do rolling update and scale sequentially * fix(tests): fix more tests * style: fix overload and cli autocomplete * feat: star routing hanging pods (#4011) * fix: try to handle hanging pods better * test: hanging pods test work * fix: fix topology graph problem * test: add unit test to graph * fix(tests): fix k8s tests * fix(test): fix k8s test * fix(test): fix k8s pool test * fix(test): fix k8s test * fix(test): fix k8s connection pool setting * fix(tests): make runtime test more reliable * fix(test): fix routes test * fix(tests): make rolling update test less flaky * feat(network): gurantee unique ports * feat(network): do round robin for shards * fix(ci): increase pytest timeout to 10 min Co-authored-by: Jina Dev Bot <[email protected]> Co-authored-by: Joan Fontanals <[email protected]> * fix(ci): fix ci file * feat(daemon): jinad pod for star routing * Revert "feat(daemon): jinad pod for star routing" 
This reverts commit ed9b37ac862af2e2e8d52df1ee51c0c331d76f92. * feat(daemon): remote jinad pod support (#4042) * feat(daemon): add pod tests for star routing * feat(daemon): add remote pod test * test(daemon): add remote pod arguments test * test(daemon): add async scale test * test(daemon): add rolling update test * test(daemon): fix host * feat(proto): remove message proto (#4051) * feat(proto): remove message proto * fix(tests): fix tests * fix(tests): fix some more tests * fix(tests): fix more tests * fix(tests): fix more tests * fix(tests): fix more tests * fix(tests): fix more tests * feat(proto): put docs back in data * fix(proto): clean up * feat(proto): clean up * fix(tests): skip latency tracking * fix(test): fix hub test * fix(tests): fix k8s test * fix(test): some test clean up * fix(style): clean up style issues * feat(proto): adjust for rebase * fix(tests): bring back latency tracking * fix(tests): fix merge accident * feat(proto): skip request serialization (#4074) * feat: add reduce to star routing (#4070) * feat: add reduce on shards to head runtime * test: add reduce integration tests with fixed order * feat: add reduce on needs * chore: get_docs_matrix_from_request becomes public * style: fix overload and cli autocomplete * docs: remove undeterministic results warning * fix: fix uses_after * test: assert correct num docs after reducing in test_external_pod * test: correct asserts after reduce in test_rolling_update * fix: no reduce if uses_after_address is set * fix: get_docs_from_request only if needed * fix: fix tests after merge * refactor: move reduce from data_request_handler to head * style: fix overload and cli autocomplete * chore: apply suggestions * fix: fix asserts * chore: minor test fix * chore: apply suggestions * test: remove flow tests with external executor (pea) * fix: fix test_expected_messages_routing * fix: fix test_func_joiner * test: adapt k8s test Co-authored-by: Jina Dev Bot <[email protected]> * fix(k8s): fix static pool config * fix: use custom protoc doc generator image (#4088) * fix: use custom protoc doc generator image * fix(docs): minor doc improvement * fix(docs): use custom image * fix(docs): copy docarray * fix: doc building local only * fix: timeout doc building * fix: use updated args when building ContainerPea * test: add container PeaFactory test * fix: force pea close on windows (#4098) * fix: dont reduce if uses exist (#4099) * fix: dont use reduce if uses exist * fix: adjust reduce tests * fix: adjust more reduce tests * fix: fix more tests * fix: adjust more tests * fix: ignore non jina resources (#4101) * feat(executor): enable async executors (#4102) * feat(daemon): daemon flow on star routing (#4096) * test(daemon): add remote flow test * feat(daemon): call scale in daemon * feat(daemon): remove tail args and identity * test(daemon): rename scalable executor * test(daemon): add a small delay in async test * feat(daemon): scale partial flow only * feat(daemon): call scale directly in partial flow store * test(daemon): use asyncio sleep * feat(daemon): enable flow level distributed tests * test(daemon): fix jinad env workspace config * test(daemon): fix pod test use new port rolling update * feat(daemon): enable distribuetd tests * test(daemon): remove duplicate tests and zed runtime test * test(daemon): fix stores unit test * feat(daemon): enable part of distributed tests * feat(daemon): enable part of distributed tests * test: correct test paths * test(daemon): add client test for remote flows * test(daemon): send a request 
with jina client * test(daemon): assert async generator * test(daemon): small interval between tests * test(daemon): add flow test for container runtime * test(daemon): add flow test for container runtime * test(daemon): fix executor name * test(daemon): fix executor name * test(daemon): use async client fetch result * test(daemon): finish container flow test * test(daemon): enable distributed in ci * test(daemon): enable distributed in ci * test(daemon): decare flows and pods * test(daemon): debug ci if else * test(daemon): debug ci if else * test(daemon): decare flows and pods * test(daemon): correct test paths * test(daemon): add small delay for async tests * fix: star routing fixes (#4100) * docs: update docs * fix: fix Request.__repr__ * docs: update flow remarks * docs: fix typo * test: add non_empty_fields test * chore: remove non_empty_fields test * feat: polling per endpoint (#4111) * feat(polling): polling per endpoint configurable * fix: adjust tests * feat(polling): extend documentation * style: fix overload and cli autocomplete * fix: clean up * fix: adjust more tests * fix: remove repeat from flaky test * fix: k8s test * feat(polling): address pr feedback * feat: improve docs Co-authored-by: Jina Dev Bot <[email protected]> * feat(grpc): support connect grpc server via ssl tunnel (#4092) * feat(grpc): support ssl grpc connect if port is 443 * fix(grpc): use https option instead of detect port automatically * chore: fix typo * fix: update jina/peapods/networking.py Co-authored-by: Joan Fontanals <[email protected]> * fix: update jina/peapods/networking.py Co-authored-by: Joan Fontanals <[email protected]> * fix: update jina/peapods/networking.py Co-authored-by: Joan Fontanals <[email protected]> * test(networking): add test for peapods networking * fix: address comments Co-authored-by: Joan Fontanals <[email protected]> * feat(polling): unify polling args (#4113) * fix: several issues for jinad pods (#4119) * fix: activate for jinad pods * fix: dont expose worker pod in partial daemon * fix: workspace setting * fix: containerized flows * fix: hub test * feat(daemon): remote peas on star routing (#4112) * test(daemon): fix request in peas * test(daemon): fix request in peas * test(daemon): fix sync async client test * test(daemon): enable remote peas test * test(daemon): replace send message to send request * test(daemon): declare pea tests in ci * test(daemon): use pea args fixture * test(daemon): head pea use default host * test(daemon): fix peas topologies * test(daemon): fix pseudo naming * test(daemon): use default host as host * test(daemon): fix executor path * test(daemon): add remote worker back * test(daemon): skip local remote remote topology * fix: jinad pea test setup * fix: jinad pea tests * fix: remove invalid assertion Co-authored-by: jacobowitz <[email protected]> * feat: enable daemon tests again (#4132) * feat: enable daemon tests again * fix: remove bogy empty script file * fix: more jinad test fixes * style: fix overload and cli autocomplete * fix: scale and ru in jinad * fix: fix more jinad tests Co-authored-by: Jina Dev Bot <[email protected]> * fix: fix flow test * fix: improve pea tests reliability (#4136) Co-authored-by: Joan Fontanals <[email protected]> Co-authored-by: Jina Dev Bot <[email protected]> Co-authored-by: Deepankar Mahapatro <[email protected]> Co-authored-by: bwanglzu <[email protected]> Co-authored-by: AlaeddineAbdessalem <[email protected]> Co-authored-by: Zhaofeng Miao <[email protected]>
20
0
1,762
7
1
2
def captured_stderr():
    return captured_output("stderr")
django/test/utils.py
23
django
{ "docstring": "Capture the output of sys.stderr:\n\n with captured_stderr() as stderr:\n print(\"hello\", file=sys.stderr)\n self.assertEqual(stderr.getvalue(), \"hello\\n\")\n ", "language": "en", "n_whitespaces": 29, "n_words": 13, "vocab_size": 13 }
4
Python
4
9c19aff7c7561e3a82978a272ecdaad40dda5c00
utils.py
206,492
2
10
captured_stderr
https://github.com/django/django.git
Refs #33476 -- Reformatted code with Black.
10
0
51,543
8
3
20
def testBestCheckpointsOnlyNan(self):
    keep_checkpoints_num = 2
    checkpoint_manager = self.checkpoint_manager(keep_checkpoints_num)
    checkpoints = [
        _TrackedCheckpoint(
            dir_or_data=i,
            storage_mode=CheckpointStorage.PERSISTENT,
            metrics=self.mock_result(float("nan"), i),
        )
        for i in range(4)
    ]
    for checkpoint in checkpoints:
        checkpoint_manager.on_checkpoint(checkpoint)
    best_checkpoints = checkpoint_manager.best_checkpoints()
    # best_checkpoints is sorted from worst to best
    self.assertEqual(len(best_checkpoints), keep_checkpoints_num)
    self.assertEqual(best_checkpoints[0].dir_or_data, 2)
    self.assertEqual(best_checkpoints[1].dir_or_data, 3)
python/ray/tune/tests/test_checkpoint_manager.py
173
ray
{ "docstring": "\n Tests that checkpoints with only nan priority are handled correctly.\n ", "language": "en", "n_whitespaces": 25, "n_words": 10, "vocab_size": 10 }
44
Python
38
8affbc7be6fdce169264b8db5b0276dbcc719f6d
test_checkpoint_manager.py
141,374
17
110
testBestCheckpointsOnlyNan
https://github.com/ray-project/ray.git
[tune/train] Consolidate checkpoint manager 3: Ray Tune (#24430) **Update**: This PR is now part 3 of a three PR group to consolidate the checkpoints. 1. Part 1 adds the common checkpoint management class #24771 2. Part 2 adds the integration for Ray Train #24772 3. This PR builds on #24772 and includes all changes. It moves the Ray Tune integration to use the new common checkpoint manager class. Old PR description: This PR consolidates the Ray Train and Tune checkpoint managers. These concepts previously did something very similar but in different modules. To simplify maintenance in the future, we've consolidated the common core. - This PR keeps full compatibility with the previous interfaces and implementations. This means that for now, Train and Tune will have separate CheckpointManagers that both extend the common core - This PR prepares Tune to move to a CheckpointStrategy object - In follow-up PRs, we can further unify interfacing with the common core, possibly removing any train- or tune-specific adjustments (e.g. moving to setup on init rather on runtime for Ray Train) Co-authored-by: Antoni Baum <[email protected]>
210
0
32,343
15
1
5
def slow(test_case):
    return unittest.skipUnless(_run_slow_tests, "test is slow")(test_case)
src/transformers/testing_utils.py
33
transformers
{ "docstring": "\n Decorator marking a test as slow.\n\n Slow tests are skipped by default. Set the RUN_SLOW environment variable to a truthy value to run them.\n\n ", "language": "en", "n_whitespaces": 34, "n_words": 24, "vocab_size": 22 }
7
Python
7
57e6464ac9a31156f1c93e59107323e6ec01309e
testing_utils.py
37,497
2
18
slow
https://github.com/huggingface/transformers.git
Update all require decorators to use skipUnless when possible (#16999)
13
0
6,802
9
3
10
def calc_first_derivative(self, x):
    if x < self.x[0]:
        return None
    elif x > self.x[-1]:
        return None
    i = self.__search_index(x)
    dx = x - self.x[i]
    dy = self.b[i] + 2.0 * self.c[i] * dx + 3.0 * self.d[i] * dx ** 2.0
    return dy
PathPlanning/CubicSpline/cubic_spline_planner.py
131
PythonRobotics
{ "docstring": "\n Calc first derivative at given x.\n\n if x is outside the input x, return None\n\n Returns\n -------\n dy : float\n first derivative for given x.\n ", "language": "en", "n_whitespaces": 79, "n_words": 25, "vocab_size": 21 }
42
Python
27
def289b723e9216830c2a7b2577cb31b55710167
cubic_spline_planner.py
19,357
9
91
calc_first_derivative
https://github.com/AtsushiSakai/PythonRobotics.git
enhance cubic spline path doc (#698) * enhance cublic spline path doc * enhance cublic spline path doc * enhance cublic spline path doc * enhance cublic spline path doc * enhance cublic spline path doc * enhance cublic spline path doc * enhance cublic spline path doc * enhance cublic spline path doc * enhance cublic spline path doc * enhance cublic spline path doc * enhance cublic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc
113
0
2,945
12
2
5
def countedArray(expr, intExpr=None):
    arrayExpr = Forward()
.venv/lib/python3.8/site-packages/pip/_vendor/pyparsing.py
27
transferlearning
{ "docstring": "Helper to define a counted list of expressions.\n\n This helper defines a pattern of the form::\n\n integer expr expr expr...\n\n where the leading integer tells how many expr expressions follow.\n The matched tokens returns the array of expr tokens as a list - the\n leading count token is suppressed.\n\n If ``intExpr`` is specified, it should be a pyparsing expression\n that produces an integer value.\n\n Example::\n\n countedArray(Word(alphas)).parseString('2 ab cd ef') # -> ['ab', 'cd']\n\n # in this parser, the leading integer value is given in binary,\n # '10' indicating that 2 values are in the array\n binaryConstant = Word('01').setParseAction(lambda t: int(t[0], 2))\n countedArray(Word(alphas), intExpr=binaryConstant).parseString('10 ab cd ef') # -> ['ab', 'cd']\n ", "language": "en", "n_whitespaces": 178, "n_words": 110, "vocab_size": 75 }
6
Python
6
f638f5d0e6c8ebed0e69a6584bc7f003ec646580
pyparsing.py
63,284
10
85
countedArray
https://github.com/jindongwang/transferlearning.git
upd; format
12
0
13,233
8
8
5
def test_overlapping_function_names(self) -> None:
    ops = [
        helper.make_opsetid("", 10),
        helper.make_opsetid("local", 10)
    ]
onnx/test/compose_test.py
50
onnx
{ "docstring": "\n Tests error checking when the name of local function entries overlaps\n ", "language": "en", "n_whitespaces": 26, "n_words": 11, "vocab_size": 11 }
12
Python
12
83fa57c74edfd13ddac9548b8a12f9e3e2ed05bd
compose_test.py
255,414
97
814
test_overlapping_function_names
https://github.com/onnx/onnx.git
Use Python type annotations rather than comments (#3962) * These have been supported since Python 3.5. ONNX doesn't support Python < 3.6, so we can use the annotations. Diffs generated by https://pypi.org/project/com2ann/. Signed-off-by: Gary Miguel <[email protected]> * Remove MYPY conditional logic in gen_proto.py It breaks the type annotations and shouldn't be needed. Signed-off-by: Gary Miguel <[email protected]> * Get rid of MYPY bool from more scripts Signed-off-by: Gary Miguel <[email protected]> * move Descriptors class above where its referenced in type annotation Signed-off-by: Gary Miguel <[email protected]> * fixes Signed-off-by: Gary Miguel <[email protected]> * remove extra blank line Signed-off-by: Gary Miguel <[email protected]> * fix type annotations Signed-off-by: Gary Miguel <[email protected]> * fix type annotation in gen_docs Signed-off-by: Gary Miguel <[email protected]> * fix Operators.md Signed-off-by: Gary Miguel <[email protected]> * fix TestCoverage.md Signed-off-by: Gary Miguel <[email protected]> * fix protoc-gen-mypy.py Signed-off-by: Gary Miguel <[email protected]>
55
0
74,754
10
24
49
def get_conn(self) -> Any:
    in_cluster = self._coalesce_param(
        self.in_cluster, self.conn_extras.get("extra__kubernetes__in_cluster") or None
    )
    cluster_context = self._coalesce_param(
        self.cluster_context, self.conn_extras.get("extra__kubernetes__cluster_context") or None
    )
    kubeconfig_path = self._coalesce_param(
        self.config_file, self.conn_extras.get("extra__kubernetes__kube_config_path") or None
    )
    kubeconfig = self.conn_extras.get("extra__kubernetes__kube_config") or None
    num_selected_configuration = len([o for o in [in_cluster, kubeconfig, kubeconfig_path] if o])
    if num_selected_configuration > 1:
        raise AirflowException(
            "Invalid connection configuration. Options kube_config_path, "
            "kube_config, in_cluster are mutually exclusive. "
            "You can only use one option at a time."
        )
    disable_verify_ssl = self._coalesce_param(
        self.disable_verify_ssl, _get_bool(self._get_field("disable_verify_ssl"))
    )
    disable_tcp_keepalive = self._coalesce_param(
        self.disable_tcp_keepalive, _get_bool(self._get_field("disable_tcp_keepalive"))
    )
    # BEGIN apply settings from core kubernetes configuration
    # this section should be removed in next major release
    deprecation_warnings: List[Tuple[str, Any]] = []
    if disable_verify_ssl is None and self._deprecated_core_disable_verify_ssl is True:
        deprecation_warnings.append(('verify_ssl', False))
        disable_verify_ssl = self._deprecated_core_disable_verify_ssl
    # by default, hook will try in_cluster first. so we only need to
    # apply core airflow config and alert when False and in_cluster not otherwise set.
    if in_cluster is None and self._deprecated_core_in_cluster is False:
        deprecation_warnings.append(('in_cluster', self._deprecated_core_in_cluster))
        in_cluster = self._deprecated_core_in_cluster
    if not cluster_context and self._deprecated_core_cluster_context:
        deprecation_warnings.append(('cluster_context', self._deprecated_core_cluster_context))
        cluster_context = self._deprecated_core_cluster_context
    if not kubeconfig_path and self._deprecated_core_config_file:
        deprecation_warnings.append(('config_file', self._deprecated_core_config_file))
        kubeconfig_path = self._deprecated_core_config_file
    if disable_tcp_keepalive is None and self._deprecated_core_disable_tcp_keepalive is True:
        deprecation_warnings.append(('enable_tcp_keepalive', False))
        disable_tcp_keepalive = True
    if deprecation_warnings:
        self._deprecation_warning_core_param(deprecation_warnings)
    # END apply settings from core kubernetes configuration
    if disable_verify_ssl is True:
        _disable_verify_ssl()
    if disable_tcp_keepalive is not True:
        _enable_tcp_keepalive()
    if in_cluster:
        self.log.debug("loading kube_config from: in_cluster configuration")
        config.load_incluster_config()
        return client.ApiClient()
    if kubeconfig_path is not None:
        self.log.debug("loading kube_config from: %s", kubeconfig_path)
        config.load_kube_config(
            config_file=kubeconfig_path,
            client_configuration=self.client_configuration,
            context=cluster_context,
        )
        return client.ApiClient()
    if kubeconfig is not None:
        with tempfile.NamedTemporaryFile() as temp_config:
            self.log.debug("loading kube_config from: connection kube_config")
            temp_config.write(kubeconfig.encode())
            temp_config.flush()
            config.load_kube_config(
                config_file=temp_config.name,
                client_configuration=self.client_configuration,
                context=cluster_context,
            )
        return client.ApiClient()
    return self._get_default_client(cluster_context=cluster_context)
airflow/providers/cncf/kubernetes/hooks/kubernetes.py
759
airflow
{ "docstring": "Returns kubernetes api session for use with requests", "language": "en", "n_whitespaces": 7, "n_words": 8, "vocab_size": 8 }
267
Python
146
60eb9e106f5915398eafd6aa339ec710c102dc09
kubernetes.py
42,787
71
460
get_conn
https://github.com/apache/airflow.git
Use KubernetesHook to create api client in KubernetesPodOperator (#20578) Add support for k8s hook in KPO; use it always (even when no conn id); continue to consider the core k8s settings that KPO already takes into account but emit deprecation warning about them. KPO historically takes into account a few settings from core airflow cfg (e.g. verify ssl, tcp keepalive, context, config file, and in_cluster). So to use the hook to generate the client, somehow the hook has to take these settings into account. But we don't want the hook to consider these settings in general. So we read them in KPO and if necessary patch the hook and warn.
1,032
0
7,735
13
1
6
def get_feedback(): labels = DOCUMENT_STORE.get_all_labels() return labels @router.delete("/feedback")
rest_api/controller/feedback.py
41
@router.delete("/feedback")
haystack
{ "docstring": "\n This endpoint allows the API user to retrieve all the feedback that has been submitted\n through the `POST /feedback` endpoint.\n ", "language": "en", "n_whitespaces": 30, "n_words": 20, "vocab_size": 18 }
8
Python
7
4e940be85902dc93f3924662ba83111df72bb4d3
feedback.py
256,622
3
14
get_feedback
https://github.com/deepset-ai/haystack.git
Allow Linux CI to push changes to forks (#2182) * Add explicit reference to repo name to allow CI to push code back * Run test matrix only on tested code changes * Isolate the bot to check if it works * Clarify situation with a comment * Simplify autoformat.yml * Add code and docs check * Add git pull to make sure to fetch changes if they were created * Add cache to autoformat.yml too * Add information on forks in CONTRIBUTING.md * Add a note about code quality tools in CONTRIBUTING.md * Add image file types to the CI exclusion list Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
16
1
74,903
8
1
2
def boxmean(self): return self["boxmean"]
packages/python/plotly/plotly/graph_objs/_box.py
22
plotly.py
{ "docstring": "\n If True, the mean of the box(es)' underlying distribution is\n drawn as a dashed line inside the box(es). If \"sd\" the standard\n deviation is also drawn. Defaults to True when `mean` is set.\n Defaults to \"sd\" when `sd` is set Otherwise defaults to False.\n\n The 'boxmean' property is an enumeration that may be specified as:\n - One of the following enumeration values:\n [True, 'sd', False]\n\n Returns\n -------\n Any\n ", "language": "en", "n_whitespaces": 156, "n_words": 68, "vocab_size": 52 }
4
Python
4
43e3a4011080911901176aab919c0ecf5046ddd3
_box.py
226,332
2
11
boxmean
https://github.com/plotly/plotly.py.git
switch to black .22
18
0
58,005
7
1
28
def test_task_result_with_error(self): result1 = TaskResult.objects.create( task_id=str(uuid.uuid4()), task_name="documents.tasks.some_task", status=celery.states.SUCCESS, result={ "exc_type": "ConsumerError", "exc_message": ["test.pdf: Not consuming test.pdf: It is a duplicate."], "exc_module": "documents.consumer", }, ) _ = PaperlessTask.objects.create(attempted_task=result1) response = self.client.get(self.ENDPOINT) self.assertEqual(response.status_code, 200) self.assertEqual(len(response.data), 1) returned_data = response.data[0] self.assertEqual( returned_data["result"], "test.pdf: Not consuming test.pdf: It is a duplicate.", )
src/documents/tests/test_api.py
206
paperless-ngx
{ "docstring": "\n GIVEN:\n - A celery task completed with an exception\n WHEN:\n - API call is made to get tasks\n THEN:\n - The returned result is the exception info\n ", "language": "en", "n_whitespaces": 89, "n_words": 27, "vocab_size": 23 }
48
Python
38
5b66ef0a748fd5570361a2a1ed6147e0462568d2
test_api.py
320,069
20
124
test_task_result_with_error
https://github.com/paperless-ngx/paperless-ngx.git
Updates how task_args and task_kwargs are parsed, adds testing to cover everything I can think of
240
0
117,054
13
1
6
def serialize_for_task_group(self) -> Tuple[DagAttributeTypes, Any]: raise NotImplementedError()
airflow/models/taskmixin.py
29
airflow
{ "docstring": "This is used by SerializedTaskGroup to serialize a task group's content.", "language": "en", "n_whitespaces": 10, "n_words": 11, "vocab_size": 11 }
7
Python
7
2fdc23333909096d427171002582e2906f8bbc0a
taskmixin.py
43,876
3
17
serialize_for_task_group
https://github.com/apache/airflow.git
Fix remaining mypy issues in "core" Airflow (#20795) Co-authored-by: Josh Fell <[email protected]> Co-authored-by: Tzu-ping Chung <[email protected]> Co-authored-by: Jarek Potiuk <[email protected]>
21
0
8,079
7
1
13
def bind(self, *args, **kwargs):
python/ray/actor.py
29
""" For Ray DAG building that creates static graph from decorated
ray
{ "docstring": "\n For Ray DAG building that creates static graph from decorated", "language": "en", "n_whitespaces": 17, "n_words": 10, "vocab_size": 10 }
4
Python
4
f8b0ab7e78246e4dddf6c0095f3f7a5e409988ba
actor.py
141,858
5
39
bind
https://github.com/ray-project/ray.git
[Ray DAG] Add documentation in `more options` section (#25528)
11
1
32,500
5
14
23
def _build_repr_df(self, num_rows, num_cols): # Fast track for empty dataframe. if len(self.index) == 0 or (self._is_dataframe and len(self.columns) == 0): return pandas.DataFrame( index=self.index, columns=self.columns if self._is_dataframe else None, ) if len(self.index) <= num_rows: row_indexer = slice(None) else: # Add one here so that pandas automatically adds the dots # It turns out to be faster to extract 2 extra rows and columns than to # build the dots ourselves. num_rows_for_head = num_rows // 2 + 1 num_rows_for_tail = ( num_rows_for_head if len(self.index) > num_rows else len(self.index) - num_rows_for_head if len(self.index) - num_rows_for_head >= 0 else None ) row_indexer = list(range(len(self.index))[:num_rows_for_head]) + ( list(range(len(self.index))[-num_rows_for_tail:]) if num_rows_for_tail is not None else [] ) if self._is_dataframe: if len(self.columns) <= num_cols: col_indexer = slice(None) else: num_cols_for_front = num_cols // 2 + 1 num_cols_for_back = ( num_cols_for_front if len(self.columns) > num_cols else len(self.columns) - num_cols_for_front if len(self.columns) - num_cols_for_front >= 0 else None ) col_indexer = list(range(len(self.columns))[:num_cols_for_front]) + ( list(range(len(self.columns))[-num_cols_for_back:]) if num_cols_for_back is not None else [] ) indexer = row_indexer, col_indexer else: indexer = row_indexer return self.iloc[indexer]._query_compiler.to_pandas()
modin/pandas/base.py
473
modin
{ "docstring": "\n Build pandas DataFrame for string representation.\n\n Parameters\n ----------\n num_rows : int\n Number of rows to show in string representation. If number of\n rows in this dataset is greater than `num_rows` then half of\n `num_rows` rows from the beginning and half of `num_rows` rows\n from the end are shown.\n num_cols : int\n Number of columns to show in string representation. If number of\n columns in this dataset is greater than `num_cols` then half of\n `num_cols` columns from the beginning and half of `num_cols`\n columns from the end are shown.\n\n Returns\n -------\n pandas.DataFrame or pandas.Series\n A pandas dataset with `num_rows` or fewer rows and `num_cols` or fewer columns.\n ", "language": "en", "n_whitespaces": 269, "n_words": 106, "vocab_size": 46 }
173
Python
84
2ebc9cf51bfc773e3d4c898f5a33c0f60ad7ebc5
base.py
155,341
43
295
_build_repr_df
https://github.com/modin-project/modin.git
REFACTOR-#5310: Remove some hasattr('columns') checks. (#5311) Signed-off-by: mvashishtha <[email protected]>
786
0
36,343
22
5
13
def discover(cls, **kwargs): context = kwargs.pop('context', None) if context and kwargs: raise ValueError("cannot accept context and kwargs") context = context or DistributionFinder.Context(**kwargs) return itertools.chain.from_iterable( resolver(context) for resolver in cls._discover_resolvers() )
python3.10.4/Lib/importlib/metadata/__init__.py
101
XX-Net
{ "docstring": "Return an iterable of Distribution objects for all packages.\n\n Pass a ``context`` or pass keyword arguments for constructing\n a context.\n\n :context: A ``DistributionFinder.Context`` object.\n :return: Iterable of Distribution objects for all packages.\n ", "language": "en", "n_whitespaces": 67, "n_words": 32, "vocab_size": 24 }
30
Python
24
8198943edd73a363c266633e1aa5b2a9e9c9f526
__init__.py
218,217
8
60
discover
https://github.com/XX-net/XX-Net.git
add python 3.10.4 for windows
94
0
55,211
10
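The `discover` record above chains results from every registered distribution finder. A minimal, standalone usage sketch of the same machinery through the public `importlib.metadata` API (output depends on the environment; not part of the dataset record):

```python
# Illustrative only: enumerate installed distributions via importlib.metadata,
# which internally relies on Distribution.discover() shown in the record above.
from importlib.metadata import distributions

names = sorted({dist.metadata["Name"] for dist in distributions() if dist.metadata["Name"]})
print(names[:5])  # first few installed package names, environment-dependent
```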
6
15
def ensure_string_list(self, option): r val = getattr(self, option) if val is None: return elif isinstance(val, str): setattr(self, option, re.split(r',\s*|\s+', val)) else: if isinstance(val, list): ok = all(isinstance(v, str) for v in val) else: ok = False if not ok: raise DistutilsOptionError( "'%s' must be a list of strings (got %r)" % (option, val))
python3.10.4/Lib/distutils/cmd.py
144
XX-Net
{ "docstring": "Ensure that 'option' is a list of strings. If 'option' is\n currently a string, we split it either on /,\\s*/ or /\\s+/, so\n \"foo bar baz\", \"foo,bar,baz\", and \"foo, bar baz\" all become\n [\"foo\", \"bar\", \"baz\"].\n ", "language": "en", "n_whitespaces": 67, "n_words": 36, "vocab_size": 32 }
53
Python
44
8198943edd73a363c266633e1aa5b2a9e9c9f526
cmd.py
222,607
20
92
ensure_string_list
https://github.com/XX-net/XX-Net.git
add python 3.10.4 for windows
229
0
56,670
15
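As the `ensure_string_list` docstring above describes, a comma- or whitespace-separated option string is normalised with `re.split(r',\s*|\s+', val)`. A small sketch of that split behaviour (illustrative, not part of distutils itself):

```python
import re

# The same pattern ensure_string_list() uses to normalise option strings.
pattern = r',\s*|\s+'
for raw in ("foo bar baz", "foo,bar,baz", "foo, bar baz"):
    print(raw, "->", re.split(pattern, raw))
# All three inputs split to ['foo', 'bar', 'baz'].
```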
3
27
def _read(**kwargs) -> DataFrame: Engine.subscribe(_update_engine) from modin.core.execution.dispatching.factories.dispatcher import FactoryDispatcher try: pd_obj = FactoryDispatcher.read_csv_glob(**kwargs) except AttributeError: raise AttributeError("read_csv_glob() is only implemented for pandas on Ray.") # This happens when `read_csv` returns a TextFileReader object for iterating through if isinstance(pd_obj, pandas.io.parsers.TextFileReader): reader = pd_obj.read pd_obj.read = lambda *args, **kwargs: DataFrame( query_compiler=reader(*args, **kwargs) ) return pd_obj return DataFrame(query_compiler=pd_obj) read_csv_glob = _make_parser_func(sep=",")
modin/experimental/pandas/io.py
176
modin
{ "docstring": "\n General documentation is available in `modin.pandas.read_csv`.\n\n This experimental feature provides parallel reading from multiple csv files which are\n defined by glob pattern.\n\n Parameters\n ----------\n **kwargs : dict\n Keyword arguments in `modin.pandas.read_csv`.\n\n Returns\n -------\n modin.DataFrame\n\n Examples\n --------\n >>> import modin.experimental.pandas as pd\n >>> df = pd.read_csv_glob(\"s3://dask-data/nyc-taxi/2015/yellow_tripdata_2015-1*\")\n UserWarning: `read_*` implementation has mismatches with pandas:\n Data types of partitions are different! Please refer to the troubleshooting section of the Modin documentation to fix this issue.\n VendorID tpep_pickup_datetime ... total_amount congestion_surcharge\n 0 1.0 2020-10-01 00:09:08 ... 4.30 0.0\n 1 1.0 2020-10-01 00:09:19 ... 13.30 2.5\n 2 1.0 2020-10-01 00:30:00 ... 15.36 2.5\n 3 2.0 2020-10-01 00:56:46 ... -3.80 0.0\n 4 2.0 2020-10-01 00:56:46 ... 3.80 0.0\n ... ... ... ... ... ...\n 4652008 NaN 2020-12-31 23:44:35 ... 43.95 2.5\n 4652009 NaN 2020-12-31 23:41:36 ... 20.17 2.5\n 4652010 NaN 2020-12-31 23:01:17 ... 78.98 0.0\n 4652011 NaN 2020-12-31 23:31:29 ... 39.50 0.0\n 4652012 NaN 2020-12-31 23:12:48 ... 20.64 0.0\n\n [4652013 rows x 18 columns]\n ", "language": "en", "n_whitespaces": 680, "n_words": 158, "vocab_size": 110 }
58
Python
51
dcee13d57ebf9a006460deedb734c15791acae7a
io.py
153,838
50
100
_read
https://github.com/modin-project/modin.git
REFACTOR-#4510: Align experimental and regular IO modules initializations (#4511) Signed-off-by: alexander3774 <[email protected]>
134
0
35,651
15
13
11
def _to_csv_check_support(kwargs): path_or_buf = kwargs["path_or_buf"] compression = kwargs["compression"] if not isinstance(path_or_buf, str): return False # case when the pointer is placed at the beginning of the file. if "r" in kwargs["mode"] and "+" in kwargs["mode"]: return False # encodings with BOM don't support; # instead of one mark in result bytes we will have them by the number of partitions # so we should fallback in pandas for `utf-16`, `utf-32` with all aliases, in instance # (`utf_32_be`, `utf_16_le` and so on) if kwargs["encoding"] is not None: encoding = kwargs["encoding"].lower() if "u" in encoding or "utf" in encoding: if "16" in encoding or "32" in encoding: return False if compression is None or not compression == "infer": return False if any((path_or_buf.endswith(ext) for ext in [".gz", ".bz2", ".zip", ".xz"])): return False return True
modin/core/execution/ray/implementations/pandas_on_ray/io/io.py
232
modin
{ "docstring": "\n Check if parallel version of ``to_csv`` could be used.\n\n Parameters\n ----------\n kwargs : dict\n Keyword arguments passed to ``.to_csv()``.\n\n Returns\n -------\n bool\n Whether parallel version of ``to_csv`` is applicable.\n ", "language": "en", "n_whitespaces": 108, "n_words": 29, "vocab_size": 25 }
131
Python
80
0faf4675140415e17d4112f9d0d37cfe87770b9e
io.py
152,978
17
126
_to_csv_check_support
https://github.com/modin-project/modin.git
REFACTOR-#3871: move related to pandas functionality into 'PandasOnRayIO' class (#3872) Signed-off-by: Anatoly Myachev <[email protected]>
329
0
35,220
12
5
19
def test_start_by_longest(self): if ok_ljspeech: dataloader, _ = self._create_dataloader(2, c.r, 0, True) dataloader.dataset.preprocess_samples() for i, data in enumerate(dataloader): if i == self.max_loader_iter: break mel_lengths = data["mel_lengths"] if i == 0: max_len = mel_lengths[0] print(mel_lengths) self.assertTrue(all(max_len >= mel_lengths))
tests/data_tests/test_loader.py
136
TTS
{ "docstring": "Test start_by_longest option.\n\n Ther first item of the fist batch must be longer than all the other items.\n ", "language": "en", "n_whitespaces": 32, "n_words": 18, "vocab_size": 17 }
36
Python
30
ef63c995248fb854d1efae73acfbdcf75666c263
test_loader.py
262,200
12
84
test_start_by_longest
https://github.com/coqui-ai/TTS.git
Implement `start_by_longest` option for TTSDataset
196
0
77,141
14
2
12
def get_uncertainty(mask_pred, labels): if mask_pred.shape[1] == 1: gt_class_logits = mask_pred.clone() else: inds = torch.arange(mask_pred.shape[0], device=mask_pred.device) gt_class_logits = mask_pred[inds, labels].unsqueeze(1) return -torch.abs(gt_class_logits)
mmdet/models/utils/point_sample.py
106
mmdetection
{ "docstring": "Estimate uncertainty based on pred logits.\n\n We estimate uncertainty as L1 distance between 0.0 and the logits\n prediction in 'mask_pred' for the foreground class in `classes`.\n\n Args:\n mask_pred (Tensor): mask predication logits, shape (num_rois,\n num_classes, mask_height, mask_width).\n\n labels (list[Tensor]): Either predicted or ground truth label for\n each predicted mask, of length num_rois.\n\n Returns:\n scores (Tensor): Uncertainty scores with the most uncertain\n locations having the highest uncertainty score,\n shape (num_rois, 1, mask_height, mask_width)\n ", "language": "en", "n_whitespaces": 152, "n_words": 72, "vocab_size": 59 }
21
Python
18
c576e5d570bf64a99e2c6817ed7b5c0084a44a55
point_sample.py
244,084
7
67
get_uncertainty
https://github.com/open-mmlab/mmdetection.git
[Enhance] Take point sample related functions out of mask_point_head (#7353) add point sample replace function in mask_point_head
54
0
70,232
13
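The `get_uncertainty` record above scores uncertainty as the negative absolute distance of the chosen class logit from 0. A toy sketch of that scoring rule (assumes PyTorch is installed; the shapes and values are made up for illustration):

```python
import torch

# Logits for 2 ROIs, 3 classes, on a 1x1 "mask" so the numbers are easy to read.
mask_pred = torch.tensor([[[[2.5]], [[0.1]], [[-1.0]]],
                          [[[0.3]], [[4.0]], [[0.0]]]])
labels = torch.tensor([0, 2])  # chosen class per ROI (illustrative)

inds = torch.arange(mask_pred.shape[0])
gt_class_logits = mask_pred[inds, labels].unsqueeze(1)
uncertainty = -torch.abs(gt_class_logits)
print(uncertainty.squeeze())
# The ROI whose chosen-class logit is closest to 0 gets the highest
# (least negative) uncertainty score.
```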
1
5
def _is_textIO(stream): return isinstance(stream, io.TextIOBase)
python3.10.4/Lib/http/client.py
26
XX-Net
{ "docstring": "Test whether a file-like object is a text or a binary stream.\n ", "language": "en", "n_whitespaces": 19, "n_words": 12, "vocab_size": 10 }
5
Python
5
8198943edd73a363c266633e1aa5b2a9e9c9f526
client.py
217,708
2
15
_is_textIO
https://github.com/XX-net/XX-Net.git
add python 3.10.4 for windows
19
0
54,893
8
1
17
def test_pagination_offset_without_orderby(self): response = self.get_response( self.organization.slug, field=f"count({TransactionMetricKey.MEASUREMENTS_LCP.value})", groupBy="transaction", cursor=Cursor(0, 1), statsPeriod="1h", useCase="performance", ) assert response.status_code == 200, response.data
tests/sentry/api/endpoints/test_organization_metric_data.py
97
sentry
{ "docstring": "\n Test that ensures a successful response is returned even when requesting an offset\n without an orderBy\n ", "language": "en", "n_whitespaces": 38, "n_words": 16, "vocab_size": 15 }
18
Python
18
35ec251212b82e5d9468062a3ab5945d8e739002
test_organization_metric_data.py
85,794
10
55
test_pagination_offset_without_orderby
https://github.com/getsentry/sentry.git
feat(metrics): Support rate for derived metric [TET-129 TET-127] (#38792) Adds support for operation `rate` to be able to compute performance related metrics such as tpm, tps, epm, eps This PR achieves this by: - Defining rate as a derived operation that produces its own SnQL rather than trying to compute the data sketch aggregate and using that directly - Replaces `filter_conditions_func` that used to just produce a snql condition to be used a conditional aggregate with `snql_func` that instead produces a SnQL function - Replaces the logic in `get_entity` on MetricsExpression to determine the entity from the MRI rather than from the aggregate applied
112
0
18,044
13
1
17
def _finalize_sample_weight(self, sample_weight, y):
sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py
36
"""Finalize sample weight. Used by subclasses to adjustuseful for
scikit-learn
{ "docstring": "Finalize sample weight.\n\n Used by subclasses to adjust sample_weights. This is useful for implementing", "language": "en", "n_whitespaces": 20, "n_words": 14, "vocab_size": 14 }
4
Python
4
7fda68d45734d41e47da1f57d23348ae8de655b0
gradient_boosting.py
260,913
2
12
_finalize_sample_weight
https://github.com/scikit-learn/scikit-learn.git
FEA Adds class_weight to HistGradientBoostingClassifier (#22014) Co-authored-by: Olivier Grisel <[email protected]> Co-authored-by: jeremie du boisberranger <[email protected]>
11
2
76,560
7
2
10
def frame_has_multiple_faces(self, frame_name): if not frame_name: retval = False else: retval = bool(len(self._data.get(frame_name, {}).get("faces", [])) > 1) logger.trace("'%s': %s", frame_name, retval) return retval
lib/align/alignments.py
97
faceswap
{ "docstring": " Check whether a given frame_name exists within the alignments :attr:`data` and contains\n more than 1 face.\n\n Parameters\n ----------\n frame_name: str\n The frame_name name to check. This should be the base name of the frame, not the full\n path\n\n Returns\n -------\n bool\n ``True`` if the given frame_name exists within the alignments :attr:`data` and has more\n than 1 face associated with it, otherwise ``False``\n ", "language": "en", "n_whitespaces": 163, "n_words": 62, "vocab_size": 45 }
23
Python
20
5e73437be47f2410439a3c6716de96354e6a0c94
alignments.py
101,212
7
58
frame_has_multiple_faces
https://github.com/deepfakes/faceswap.git
lib.align updates: - alignments.py - Add typed dicts for imported alignments - Explicitly check for presence of thumb value in alignments dict - linting - detected_face.py - Typing - Linting - Legacy support for pre-aligned face - Update dependencies to new property names
80
0
20,633
19
5
7
def get_qt_library_info(namespace): if namespace == 'PyQt5': return pyqt5_library_info if namespace == 'PyQt6': return pyqt6_library_info elif namespace == 'PySide2': return pyside2_library_info elif namespace == 'PySide6': return pyside6_library_info raise ValueError(f'Invalid namespace: {namespace}!') # add_qt_dependencies # -------------------- # Generic implementation that finds the Qt 5/6 dependencies based on the hook name of a PyQt5/PyQt6/PySide2/PySide6 # hook. Returns (hiddenimports, binaries, datas). Typical usage: # ``hiddenimports, binaries, datas = add_qt5_dependencies(__file__)``.
PyInstaller/utils/hooks/qt/__init__.py
84
pyinstaller
{ "docstring": "\n Return QtLibraryInfo instance for the given namespace.\n ", "language": "en", "n_whitespaces": 14, "n_words": 7, "vocab_size": 7 }
65
Python
48
d789a7daa7712716c89259b987349917a89aece7
__init__.py
264,021
10
40
get_qt_library_info
https://github.com/pyinstaller/pyinstaller.git
hookutils: reorganize the Qt hook utilities Reorganize the Qt module information to provide information necessary to deal with variations between different python Qt bindings (PySide2, PyQt5, PySide6, and PyQt6). Replace the existing table-like dictionary with list of entries, which is easier to format and document. From this list, we now generate two dictionaries; one that maps Qt module (shared library) names to the module info entries (the same role as the old dictionary), and one that maps python module names to the module info entries. The latter is necessary to accommodate python modules that do not have corresponding Qt shared libraries (header-only Qt modules, such as QtAxContainer; or statically-linked module, such as QSci), but we still need to provide information about plugins or translation files. The new information list is based on manual inspection of source code for Qt 5.15 and 6.3, and should provide comprehensive information about all plugin names and translation file basenames. In addition, most of the helper functions, which take a reference to the `QtLibraryInfo` class as their first argument, have been turned into methods of the `QtLibraryInfo` class. The corresponding hooks have also been adjusted.
106
0
77,563
9
1
16
def create_deepbooru_process(threshold=0.5): from modules import shared # prevents circular reference shared.deepbooru_process_manager = multiprocessing.Manager() shared.deepbooru_process_queue = shared.deepbooru_process_manager.Queue() shared.deepbooru_process_return = shared.deepbooru_process_manager.dict() shared.deepbooru_process_return["value"] = -1 shared.deepbooru_process = multiprocessing.Process(target=deepbooru_process, args=(shared.deepbooru_process_queue, shared.deepbooru_process_return, threshold)) shared.deepbooru_process.start()
modules/deepbooru.py
140
stable-diffusion-webui
{ "docstring": "\n Creates deepbooru process. A queue is created to send images into the process. This enables multiple images\n to be processed in a row without reloading the model or creating a new process. To return the data, a shared\n dictionary is created to hold the tags created. To wait for tags to be returned, a value of -1 is assigned\n to the dictionary and the method adding the image to the queue should wait for this value to be updated with\n the tags.\n ", "language": "en", "n_whitespaces": 105, "n_words": 82, "vocab_size": 50 }
29
Python
25
1f92336be768d235c18a82acb2195b7135101ae7
deepbooru.py
152,835
8
87
create_deepbooru_process
https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
refactored the deepbooru module to improve speed on running multiple interrogations in a row. Added the option to generate deepbooru tags for textual inversion preprocessing.
54
0
35,197
11
5
49
def _estimate_files_encoding_ratio(self) -> float: if not DatasetContext.get_current().decoding_size_estimation: return PARQUET_ENCODING_RATIO_ESTIMATE_DEFAULT # Sample a few rows from Parquet files to estimate the encoding ratio. # Launch tasks to sample multiple files remotely in parallel. # Evenly distributed to sample N rows in i-th row group in i-th file. # TODO(ekl/cheng) take into account column pruning. start_time = time.perf_counter() num_files = len(self._pq_ds.pieces) num_samples = int(num_files * PARQUET_ENCODING_RATIO_ESTIMATE_SAMPLING_RATIO) min_num_samples = min( PARQUET_ENCODING_RATIO_ESTIMATE_MIN_NUM_SAMPLES, num_files ) max_num_samples = min( PARQUET_ENCODING_RATIO_ESTIMATE_MAX_NUM_SAMPLES, num_files ) num_samples = max(min(num_samples, max_num_samples), min_num_samples) # Evenly distributed to choose which file to sample, to avoid biased prediction # if data is skewed. file_samples = [ self._pq_ds.pieces[idx] for idx in np.linspace(0, num_files - 1, num_samples).astype(int).tolist() ] sample_piece = cached_remote_fn(_sample_piece) futures = [] for idx, sample in enumerate(file_samples): # Sample i-th row group in i-th file. futures.append(sample_piece.remote(_SerializedPiece(sample), idx)) sample_ratios = ray.get(futures) ratio = np.mean(sample_ratios) sampling_duration = time.perf_counter() - start_time if sampling_duration > 5: logger.info( "Parquet input size estimation took " f"{round(sampling_duration, 2)} seconds." ) logger.debug(f"Estimated Parquet encoding ratio from sampling is {ratio}.") return max(ratio, PARQUET_ENCODING_RATIO_ESTIMATE_LOWER_BOUND)
python/ray/data/datasource/parquet_datasource.py
339
ray
{ "docstring": "Return an estimate of the Parquet files encoding ratio.\n\n To avoid OOMs, it is safer to return an over-estimate than an underestimate.\n ", "language": "en", "n_whitespaces": 36, "n_words": 22, "vocab_size": 20 }
170
Python
110
e19cf164fd51c4f6bf730e999cba46b30c39ff83
parquet_datasource.py
125,650
35
198
_estimate_files_encoding_ratio
https://github.com/ray-project/ray.git
[Datasets] Use sampling to estimate in-memory data size for Parquet data source (#26868)
488
0
27,939
15
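The sampling logic in the record above clamps the number of sampled files between a minimum and maximum and then picks evenly spaced file indices. A standalone sketch of just that index selection (NumPy only; the constants are illustrative stand-ins for the PARQUET_ENCODING_RATIO_* defaults, not their real values):

```python
import numpy as np

# Illustrative stand-ins for the module-level sampling constants.
SAMPLING_RATIO = 0.01
MIN_NUM_SAMPLES = 2
MAX_NUM_SAMPLES = 10

num_files = 437
num_samples = int(num_files * SAMPLING_RATIO)  # 4
num_samples = max(min(num_samples, min(MAX_NUM_SAMPLES, num_files)),
                  min(MIN_NUM_SAMPLES, num_files))  # clamp into [2, 10]

# Evenly spaced file indices, to avoid a biased estimate on skewed data.
sample_indices = np.linspace(0, num_files - 1, num_samples).astype(int).tolist()
print(sample_indices)  # [0, 145, 290, 436]
```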
4
11
def johnson_lindenstrauss_min_dim(n_samples, *, eps=0.1): eps = np.asarray(eps) n_samples = np.asarray(n_samples) if np.any(eps <= 0.0) or np.any(eps >= 1): raise ValueError("The JL bound is defined for eps in ]0, 1[, got %r" % eps) if np.any(n_samples) <= 0: raise ValueError( "The JL bound is defined for n_samples greater than zero, got %r" % n_samples ) denominator = (eps**2 / 2) - (eps**3 / 3) return (4 * np.log(n_samples) / denominator).astype(np.int64)
sklearn/random_projection.py
177
scikit-learn
{ "docstring": "Find a 'safe' number of components to randomly project to.\n\n The distortion introduced by a random projection `p` only changes the\n distance between two points by a factor (1 +- eps) in an euclidean space\n with good probability. The projection `p` is an eps-embedding as defined\n by:\n\n (1 - eps) ||u - v||^2 < ||p(u) - p(v)||^2 < (1 + eps) ||u - v||^2\n\n Where u and v are any rows taken from a dataset of shape (n_samples,\n n_features), eps is in ]0, 1[ and p is a projection by a random Gaussian\n N(0, 1) matrix of shape (n_components, n_features) (or a sparse\n Achlioptas matrix).\n\n The minimum number of components to guarantee the eps-embedding is\n given by:\n\n n_components >= 4 log(n_samples) / (eps^2 / 2 - eps^3 / 3)\n\n Note that the number of dimensions is independent of the original\n number of features but instead depends on the size of the dataset:\n the larger the dataset, the higher is the minimal dimensionality of\n an eps-embedding.\n\n Read more in the :ref:`User Guide <johnson_lindenstrauss>`.\n\n Parameters\n ----------\n n_samples : int or array-like of int\n Number of samples that should be a integer greater than 0. If an array\n is given, it will compute a safe number of components array-wise.\n\n eps : float or ndarray of shape (n_components,), dtype=float, \\\n default=0.1\n Maximum distortion rate in the range (0,1 ) as defined by the\n Johnson-Lindenstrauss lemma. If an array is given, it will compute a\n safe number of components array-wise.\n\n Returns\n -------\n n_components : int or ndarray of int\n The minimal number of components to guarantee with good probability\n an eps-embedding with n_samples.\n\n Examples\n --------\n >>> from sklearn.random_projection import johnson_lindenstrauss_min_dim\n >>> johnson_lindenstrauss_min_dim(1e6, eps=0.5)\n 663\n\n >>> johnson_lindenstrauss_min_dim(1e6, eps=[0.5, 0.1, 0.01])\n array([ 663, 11841, 1112658])\n\n >>> johnson_lindenstrauss_min_dim([1e4, 1e5, 1e6], eps=0.1)\n array([ 7894, 9868, 11841])\n\n References\n ----------\n\n .. [1] https://en.wikipedia.org/wiki/Johnson%E2%80%93Lindenstrauss_lemma\n\n .. [2] Sanjoy Dasgupta and Anupam Gupta, 1999,\n \"An elementary proof of the Johnson-Lindenstrauss Lemma.\"\n http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.45.3654\n\n ", "language": "en", "n_whitespaces": 522, "n_words": 318, "vocab_size": 187 }
69
Python
50
1fc86b6aacd89da44a3b4e8abf7c3e2ba4336ffe
random_projection.py
258,948
12
112
johnson_lindenstrauss_min_dim
https://github.com/scikit-learn/scikit-learn.git
MNT Update black to stable version (#22474)
133
0
75,490
12
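The Johnson-Lindenstrauss bound in the record above reduces to evaluating 4*log(n_samples) / (eps^2/2 - eps^3/3). A quick sanity check of that formula against the example values quoted in the record's docstring (NumPy only; illustrative, not the scikit-learn implementation itself):

```python
import numpy as np

def jl_min_dim(n_samples, eps):
    # Same bound as johnson_lindenstrauss_min_dim: 4*log(n) / (eps^2/2 - eps^3/3).
    denominator = (eps ** 2 / 2) - (eps ** 3 / 3)
    return int(4 * np.log(n_samples) / denominator)

print(jl_min_dim(1e6, 0.5))  # 663, matching the docstring example
print(jl_min_dim(1e6, 0.1))  # 11841
```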
4
14
def verify_mac_libraries_dont_reference_chkstack(): if not _is_mac(): return nm = subprocess.run( ["nm", "-g", r.Rlocation("org_tensorflow/tensorflow/compiler/xla/python/xla_extension.so") ], capture_output=True, text=True, check=False) if nm.returncode != 0: raise RuntimeError(f"nm process failed: {nm.stdout} {nm.stderr}") if "____chkstk_darwin" in nm.stdout: raise RuntimeError( "Mac wheel incorrectly depends on symbol ____chkstk_darwin, which " "means that it isn't compatible with older MacOS versions.")
build/build_wheel.py
136
jax
{ "docstring": "Verifies that xla_extension.so doesn't depend on ____chkstk_darwin.\n\n We don't entirely know why this happens, but in some build environments\n we seem to target the wrong Mac OS version.\n https://github.com/google/jax/issues/3867\n\n This check makes sure we don't release wheels that have this dependency.\n ", "language": "en", "n_whitespaces": 46, "n_words": 41, "vocab_size": 37 }
50
Python
47
17de89b16ac5ee05aee03115d858e67489eab973
build_wheel.py
120,553
15
69
verify_mac_libraries_dont_reference_chkstack
https://github.com/google/jax.git
feat: refactor code using pyupgrade This PR upgrades legacy Python code to 3.7+ code using pyupgrade: ```sh pyupgrade --py37-plus --keep-runtime-typing **.py ``` a
90
0
26,889
12
2
8
def test_timeout_does_not_wait_for_completion_for_sync_flows(self, tmp_path): if sys.version_info[1] == 11: pytest.xfail("The engine returns _after_ sleep finishes in Python 3.11") canary_file = tmp_path / "canary"
tests/test_flows.py
52
prefect
{ "docstring": "\n Sync flows are cancelled when they change instructions. The flow will return\n immediately when the timeout is reached, but the thread it executes in will\n continue until the next instruction is reached. `time.sleep` will return then\n the thread will be interrupted.\n ", "language": "en", "n_whitespaces": 77, "n_words": 41, "vocab_size": 31 }
21
Python
21
a7bd9cadd5038383449b0e75a87bb23a73b278d8
test_flows.py
59,586
14
96
test_timeout_does_not_wait_for_completion_for_sync_flows
https://github.com/PrefectHQ/prefect.git
Add support for Python 3.11 (#7304) Co-authored-by: Chris Guidry <[email protected]>
53
0
11,913
10
2
15
def get_target_distribution_details(filters): target_details = {} for d in frappe.db.sql( , (filters.from_fiscal_year, filters.to_fiscal_year), as_dict=1, ): target_details.setdefault(d.name, {}).setdefault(d.month, flt(d.percentage_allocation)) return target_details # Get actual details from gl entry
erpnext/accounts/report/budget_variance_report/budget_variance_report.py
97
erpnext
{ "docstring": "\n\t\t\tselect\n\t\t\t\tmd.name,\n\t\t\t\tmdp.month,\n\t\t\t\tmdp.percentage_allocation\n\t\t\tfrom\n\t\t\t\t`tabMonthly Distribution Percentage` mdp,\n\t\t\t\t`tabMonthly Distribution` md\n\t\t\twhere\n\t\t\t\tmdp.parent = md.name\n\t\t\t\tand md.fiscal_year between %s and %s\n\t\t\torder by\n\t\t\t\tmd.fiscal_year\n\t\t", "language": "en", "n_whitespaces": 13, "n_words": 25, "vocab_size": 21 }
26
Python
25
494bd9ef78313436f0424b918f200dab8fc7c20b
budget_variance_report.py
65,174
22
63
get_target_distribution_details
https://github.com/frappe/erpnext.git
style: format code with black
16
0
13,816
12
2
10
def get_list_context(context=None): return { "global_number_format": frappe.db.get_default("number_format") or "#,###.##", "currency": frappe.db.get_default("currency"), "currency_symbols": json.dumps( dict( frappe.db.sql( ) ) ), "row_template": "templates/includes/transaction_row.html", "get_list": get_transaction_list, }
erpnext/controllers/website_list_for_contact.py
111
erpnext
{ "docstring": "select name, symbol\n\t\t\tfrom tabCurrency where enabled=1", "language": "en", "n_whitespaces": 5, "n_words": 7, "vocab_size": 7 }
22
Python
21
494bd9ef78313436f0424b918f200dab8fc7c20b
website_list_for_contact.py
65,703
15
61
get_list_context
https://github.com/frappe/erpnext.git
style: format code with black
9
0
13,991
14
2
6
def _assert_float_dtype(dtype): dtype = tf.as_dtype(dtype) if not dtype.is_floating: raise ValueError(f"Expected floating point type, got {dtype}.") return dtype
keras/initializers/initializers_v2.py
53
keras
{ "docstring": "Validate and return floating point type based on `dtype`.\n\n `dtype` must be a floating point type.\n\n Args:\n dtype: The data type to validate.\n\n Returns:\n Validated type.\n\n Raises:\n ValueError: if `dtype` is not a floating point type.\n ", "language": "en", "n_whitespaces": 66, "n_words": 36, "vocab_size": 27 }
17
Python
16
84afc5193d38057e2e2badf9c889ea87d80d8fbf
initializers_v2.py
272,164
5
28
_assert_float_dtype
https://github.com/keras-team/keras.git
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
36
0
80,966
11
1
4
def removeChild(self, node): raise NotImplementedError
.venv/lib/python3.8/site-packages/pip/_vendor/html5lib/treebuilders/base.py
18
transferlearning
{ "docstring": "Remove node from the children of the current node\n\n :arg node: the child node to remove\n\n ", "language": "en", "n_whitespaces": 30, "n_words": 16, "vocab_size": 12 }
5
Python
5
f638f5d0e6c8ebed0e69a6584bc7f003ec646580
base.py
62,599
2
10
removeChild
https://github.com/jindongwang/transferlearning.git
upd; format
19
0
13,010
6
1
13
async def test_import_flow_triggered_but_no_ecobee_conf(hass): flow = config_flow.EcobeeFlowHandler() flow.hass = hass flow.hass.data[DATA_ECOBEE_CONFIG] = {} result = await flow.async_step_import(import_data=None) assert result["type"] == data_entry_flow.FlowResultType.FORM assert result["step_id"] == "user"
tests/components/ecobee/test_config_flow.py
101
core
{ "docstring": "Test expected result if import flow triggers but ecobee.conf doesn't exist.", "language": "en", "n_whitespaces": 10, "n_words": 11, "vocab_size": 11 }
24
Python
19
7cd68381f1d4f58930ffd631dfbfc7159d459832
test_config_flow.py
315,806
7
58
test_import_flow_triggered_but_no_ecobee_conf
https://github.com/home-assistant/core.git
Search/replace RESULT_TYPE_* by FlowResultType enum (#74642)
45
0
114,384
10
7
22
def exception_handler(exc, context): if isinstance(exc, Http404): exc = exceptions.NotFound(*(exc.args)) elif isinstance(exc, PermissionDenied): exc = exceptions.PermissionDenied(*(exc.args)) if isinstance(exc, exceptions.APIException): headers = {} if getattr(exc, 'auth_header', None): headers['WWW-Authenticate'] = exc.auth_header if getattr(exc, 'wait', None): headers['Retry-After'] = '%d' % exc.wait if isinstance(exc.detail, (list, dict)): data = exc.detail else: data = {'detail': exc.detail} set_rollback() return Response(data, status=exc.status_code, headers=headers) return None
rest_framework/views.py
246
django-rest-framework
{ "docstring": "\n Returns the response that should be used for any given exception.\n\n By default we handle the REST framework `APIException`, and also\n Django's built-in `Http404` and `PermissionDenied` exceptions.\n\n Any unhandled exceptions may return `None`, which will cause a 500 error\n to be raised.\n ", "language": "en", "n_whitespaces": 61, "n_words": 42, "vocab_size": 39 }
56
Python
39
56946fac8f29aa44ce84391f138d63c4c8a2a285
views.py
48,680
18
152
exception_handler
https://github.com/encode/django-rest-framework.git
Preserve exception messages for wrapped Django exceptions (#8051) * Preserve messages for wrapped Django exceptions * Fix the test * Update test_generics.py * Update test_generics.py Co-authored-by: Tom Christie <[email protected]>
178
0
9,566
14
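The `exception_handler` record above is DRF's default hook; projects commonly wrap it to customise error payloads rather than replace it. A hedged sketch of that wrapping pattern (the module path and the extra field are illustrative assumptions, not taken from the record):

```python
# myproject/utils.py -- hypothetical module path.
from rest_framework.views import exception_handler

def custom_exception_handler(exc, context):
    # Delegate to DRF's default handler first, then decorate its output.
    response = exception_handler(exc, context)
    if response is not None:
        response.data["status_code"] = response.status_code  # illustrative extra field
    return response
```

Such a wrapper is registered through settings, e.g. `REST_FRAMEWORK = {"EXCEPTION_HANDLER": "myproject.utils.custom_exception_handler"}`.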
5
16
def check_all_decorator_order(): errors = [] for fname in os.listdir(PATH_TO_TESTS): if fname.endswith(".py"): filename = os.path.join(PATH_TO_TESTS, fname) new_errors = check_decorator_order(filename) errors += [f"- {filename}, line {i}" for i in new_errors] if len(errors) > 0: msg = "\n".join(errors) raise ValueError( "The parameterized decorator (and its variants) should always be first, but this is not the case in the" f" following files:\n{msg}" )
utils/check_repo.py
148
transformers
{ "docstring": "Check that in all test files, the slow decorator is always last.", "language": "en", "n_whitespaces": 11, "n_words": 12, "vocab_size": 12 }
59
Python
51
afe5d42d8d1d80af911ed980c2936bfe887078f6
check_repo.py
38,408
13
78
check_all_decorator_order
https://github.com/huggingface/transformers.git
Black preview (#17217) * Black preview * Fixup too! * Fix check copies * Use the same version as the CI * Bump black
154
0
6,970
13
1
18
def ready_to_fulfill(self): statuses = {OrderStatus.UNFULFILLED, OrderStatus.PARTIALLY_FULFILLED} payments = Payment.objects.filter(is_active=True).values("id") return self.filter( Exists(payments.filter(order_id=OuterRef("id"))), status__in=statuses, total_gross_amount__lte=F("total_paid_amount"), )
saleor/order/models.py
110
saleor
{ "docstring": "Return orders that can be fulfilled.\n\n Orders ready to fulfill are fully paid but unfulfilled (or partially\n fulfilled).\n ", "language": "en", "n_whitespaces": 39, "n_words": 18, "vocab_size": 18 }
15
Python
14
34bf03ce99d46e84c0990f009025b436c1f0386c
models.py
25,927
8
66
ready_to_fulfill
https://github.com/saleor/saleor.git
Optimize order filtering by ready to fulfill status (#9113)
83
0
4,925
15
1
7
def fit_transform(self, X, y=None): self._validate_params() return self._fit_transform(X, compute_sources=True)
sklearn/decomposition/_fastica.py
45
scikit-learn
{ "docstring": "Fit the model and recover the sources from X.\n\n Parameters\n ----------\n X : array-like of shape (n_samples, n_features)\n Training data, where `n_samples` is the number of samples\n and `n_features` is the number of features.\n\n y : Ignored\n Not used, present for API consistency by convention.\n\n Returns\n -------\n X_new : ndarray of shape (n_samples, n_components)\n Estimated sources obtained by transforming the data with the\n estimated unmixing matrix.\n ", "language": "en", "n_whitespaces": 177, "n_words": 66, "vocab_size": 49 }
8
Python
8
4cc347d4d0cbbfdcbd353f08842e0668fed78c9f
_fastica.py
260,360
3
28
fit_transform
https://github.com/scikit-learn/scikit-learn.git
MAINT Use _validate_params in FastICA (#23711) Co-authored-by: Guillaume Lemaitre <[email protected]> Co-authored-by: jeremiedbb <[email protected]>
29
0
76,206
8
17
38
def _text2settings(self): t2xs = [ (self.t2f, "font"), (self.t2s, "slant"), (self.t2w, "weight"), (self.t2c, "color"), ] setting_args = {arg: getattr(self, arg) for _, arg in t2xs} settings = self._get_settings_from_t2xs(t2xs) settings.extend(self._get_settings_from_gradient(setting_args)) # Handle overlaps settings.sort(key=lambda setting: setting.start) for index, setting in enumerate(settings): if index + 1 == len(settings): break next_setting = settings[index + 1] if setting.end > next_setting.start: new_setting = self._merge_settings(setting, next_setting, setting_args) new_index = index + 1 while ( new_index < len(settings) and settings[new_index].start < new_setting.start ): new_index += 1 settings.insert(new_index, new_setting) # Set all text settings (default font, slant, weight) temp_settings = settings.copy() start = 0 for setting in settings: if setting.start != start: temp_settings.append(TextSetting(start, setting.start, **setting_args)) start = setting.end if start != len(self.text): temp_settings.append(TextSetting(start, len(self.text), **setting_args)) settings = sorted(temp_settings, key=lambda setting: setting.start) if re.search(r"\n", self.text): line_num = 0 for start, end in self._find_indexes("\n", self.text): for setting in settings: if setting.line_num == -1: setting.line_num = line_num if start < setting.end: line_num += 1 new_setting = copy.copy(setting) setting.end = end new_setting.start = end new_setting.line_num = line_num settings.append(new_setting) settings.sort(key=lambda setting: setting.start) break for setting in settings: if setting.line_num == -1: setting.line_num = 0 return settings
manim/mobject/svg/text_mobject.py
612
manim
{ "docstring": "Converts the texts and styles to a setting for parsing.", "language": "en", "n_whitespaces": 9, "n_words": 10, "vocab_size": 10 }
182
Python
99
902e7eb4f0147b5882a613b67467e38a1d47f01e
text_mobject.py
189,494
52
389
_text2settings
https://github.com/ManimCommunity/manim.git
Hide more private methods from the docs. (#2468) * hide privs from text_mobject.py * hide privs from tex_mobject.py * hide privs from code_mobject.py * hide privs from svg_mobject.py * remove SVGPath and utils from __init__.py * don't import string_to_numbers * hide privs from geometry.py * hide privs from matrix.py * hide privs from numbers.py * hide privs from three_dimensions.py * forgot underscore under set_stroke_width_from_length * there were more i missed * unhide a method that was used in docs * forgot other text2hash * remove svg_path from docs
888
0
46,094
18
2
16
def decoder(self, side): input_ = Input(shape=(8, 8, 512)) var_x = input_ var_x = UpscaleBlock(256, activation="leakyrelu")(var_x) var_x = UpscaleBlock(128, activation="leakyrelu")(var_x) var_x = UpscaleBlock(64, activation="leakyrelu")(var_x) var_x = Conv2DOutput(3, 5, name=f"face_out_{side}")(var_x) outputs = [var_x] if self.learn_mask: var_y = input_ var_y = UpscaleBlock(256, activation="leakyrelu")(var_y) var_y = UpscaleBlock(128, activation="leakyrelu")(var_y) var_y = UpscaleBlock(64, activation="leakyrelu")(var_y) var_y = Conv2DOutput(1, 5, name=f"mask_out_{side}")(var_y) outputs.append(var_y) return KerasModel(input_, outputs=outputs, name=f"decoder_{side}")
plugins/train/model/original.py
283
faceswap
{ "docstring": " The original Faceswap Decoder Network.\r\n\r\n The decoders for the original model have separate weights for each side \"A\" and \"B\", so two\r\n instances are created in :func:`build_model`, one for each side.\r\n\r\n Parameters\r\n ----------\r\n side: str\r\n Either `\"a` or `\"b\"`. This is used for naming the decoder model.\r\n\r\n Returns\r\n -------\r\n :class:`keras.models.Model`\r\n The Keras decoder model. This will be called twice, once for each side.\r\n ", "language": "en", "n_whitespaces": 149, "n_words": 63, "vocab_size": 49 }
58
Python
29
aa39234538a8f83e6aa2b60b8275a570e8876ac2
original.py
100,464
16
168
decoder
https://github.com/deepfakes/faceswap.git
Update all Keras Imports to be conditional (#1214) * Remove custom keras importer * first round keras imports fix * launcher.py: Remove KerasFinder references * 2nd round keras imports update (lib and extract) * 3rd round keras imports update (train) * remove KerasFinder from tests * 4th round keras imports update (tests)
194
0
19,938
14
6
21
def check_toname_in_config_by_regex(config_string, to_name, control_type=None): c = parse_config(config_string) if control_type: check_list = [control_type] else: check_list = list(c.keys()) for control in check_list: item = c[control].get('regex', {}) for to_name_item in c[control]['to_name']: expression = to_name_item for key in item: expression = expression.replace(key, item[key]) pattern = re.compile(expression) full_match = pattern.fullmatch(to_name) if full_match: return True return False
label_studio/core/label_config.py
179
label-studio
{ "docstring": "\n Check if to_name is in config including regex filter\n :return: True if to_name is fullmatch to some pattern ion config\n ", "language": "en", "n_whitespaces": 30, "n_words": 20, "vocab_size": 16 }
51
Python
35
583b3cb3b03a36a30b3ce9fe96eb4fb28548a070
label_config.py
178,014
17
112
check_toname_in_config_by_regex
https://github.com/heartexlabs/label-studio.git
fix: DEV-1462: Fix changing label config for repeater tag (#2725) * fix: DEV-1462: Fix changing label config for repeater tag with created annotations
182
0
42,572
15
1
10
async def test_hub_not_support_wireless(hass, mock_device_registry_devices): await setup_mikrotik_entry(hass, support_wireless=False) device_1 = hass.states.get("device_tracker.device_1") assert device_1 assert device_1.state == "home" # device_2 is added from DHCP device_2 = hass.states.get("device_tracker.device_2") assert device_2 assert device_2.state == "home"
tests/components/mikrotik/test_device_tracker.py
95
core
{ "docstring": "Test device_trackers created when hub doesn't support wireless.", "language": "en", "n_whitespaces": 7, "n_words": 8, "vocab_size": 8 }
31
Python
22
b09aaba421d6d6178d582bef9ea363017e55639d
test_device_tracker.py
315,483
8
53
test_hub_not_support_wireless
https://github.com/home-assistant/core.git
Add type hints and code cleanup for mikrotik (#74296) * Add type hints and code cleanup for mikrotik * update test and increase coverage * move setup_mikrotik_entry to __init__.py
58
0
114,071
9
2
9
def test_finditer(): matches = list(finditer(re.compile(rb"\d+"), b"0123 4567 890 12 3 4")) aligned = [i.group() for i in matches] assert aligned == [b"0123", b"567", b"890", b"12"]
tests/unit/test_bytecode.py
84
pyinstaller
{ "docstring": "\n Test that bytecode.finditer() yields matches only that start on an even byte (``match.start() % 2 == 0``).\n\n There are 3 permutations here when considering a match:\n - A match starts on an even byte:\n That's good! Include that sequence.\n - A single character match starts on an odd byte:\n Ignore it. It's a false positive.\n - A multi-character match starts on an odd byte:\n This match will be a false positive but there may be a genuine match shortly afterwards (in the case of the\n # test below - it'll be the next character) which overlaps with this one so we must override regex's\n behaviour of ignoring overlapping matches to prevent these from getting lost.\n ", "language": "en", "n_whitespaces": 169, "n_words": 115, "vocab_size": 82 }
25
Python
23
dc12cb59559f99110917bcbd21c9960ab57d994f
test_bytecode.py
263,956
4
52
test_finditer
https://github.com/pyinstaller/pyinstaller.git
tests: fix test_finditer Have the test use bytestrings instead of strings. Also assert that the bytecode string passed to bytecode.finditer() is in fact a bytestring.
37
0
77,525
13
2
12
def add_axes(self, animate=False, color=WHITE, **kwargs): axes = Axes(color=color, axis_config={"unit_size": 1}) if animate: self.play(Create(axes)) self.add(axes) return axes
manim/scene/vector_space_scene.py
86
manim
{ "docstring": "\n Adds a pair of Axes to the Scene.\n\n Parameters\n ----------\n animate : bool, optional\n Whether or not to animate the addition of the axes through Create.\n color : bool, optional\n The color of the axes. Defaults to WHITE.\n ", "language": "en", "n_whitespaces": 103, "n_words": 38, "vocab_size": 26 }
16
Python
15
5789be81609c8bf6d98d1d87d4061477d0cd37b9
vector_space_scene.py
189,404
6
53
add_axes
https://github.com/ManimCommunity/manim.git
Fix `add_axes` in :class:`~.VectorScene`. (#2444) * fix bug * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
62
0
46,041
12
17
31
def _score(estimator, X_test, y_test, scorer, error_score="raise"): if isinstance(scorer, dict): # will cache method calls if needed. scorer() returns a dict scorer = _MultimetricScorer(scorers=scorer, raise_exc=(error_score == "raise")) try: if y_test is None: scores = scorer(estimator, X_test) else: scores = scorer(estimator, X_test, y_test) except Exception: if isinstance(scorer, _MultimetricScorer): # If `_MultimetricScorer` raises exception, the `error_score` # parameter is equal to "raise". raise else: if error_score == "raise": raise else: scores = error_score warnings.warn( "Scoring failed. The score on this train-test partition for " f"these parameters will be set to {error_score}. Details: \n" f"{format_exc()}", UserWarning, ) # Check non-raised error messages in `_MultimetricScorer` if isinstance(scorer, _MultimetricScorer): exception_messages = [ (name, str_e) for name, str_e in scores.items() if isinstance(str_e, str) ] if exception_messages: # error_score != "raise" for name, str_e in exception_messages: scores[name] = error_score warnings.warn( "Scoring failed. The score on this train-test partition for " f"these parameters will be set to {error_score}. Details: \n" f"{str_e}", UserWarning, ) error_msg = "scoring must return a number, got %s (%s) instead. (scorer=%s)" if isinstance(scores, dict): for name, score in scores.items(): if hasattr(score, "item"): with suppress(ValueError): # e.g. unwrap memmapped scalars score = score.item() if not isinstance(score, numbers.Number): raise ValueError(error_msg % (score, type(score), name)) scores[name] = score else: # scalar if hasattr(scores, "item"): with suppress(ValueError): # e.g. unwrap memmapped scalars scores = scores.item() if not isinstance(scores, numbers.Number): raise ValueError(error_msg % (scores, type(scores), scorer)) return scores
sklearn/model_selection/_validation.py
506
scikit-learn
{ "docstring": "Compute the score(s) of an estimator on a given test set.\n\n Will return a dict of floats if `scorer` is a dict, otherwise a single\n float is returned.\n ", "language": "en", "n_whitespaces": 37, "n_words": 28, "vocab_size": 23 }
228
Python
126
d17d0f9f721dd030f7405023a838edb564ac1a4c
_validation.py
261,817
51
296
_score
https://github.com/scikit-learn/scikit-learn.git
FIX `cross_validate` with multimetric scoring returns the non-failed scorers results even if some fail (#23101) Co-authored-by: Guillaume Lemaitre <[email protected]>
863
0
77,015
21
1
15
def test_socket_options_address_in_use_problem(qlocalserver, short_tmpdir): servername = str(short_tmpdir / 'x') s1 = QLocalServer() ok = s1.listen(servername) assert ok s2 = QLocalServer() s2.setSocketOptions(QLocalServer.SocketOption.UserAccessOption) ok = s2.listen(servername) print(s2.errorString()) # We actually would expect ok == False here - but we want the test to fail # when the Qt bug is fixed. assert ok
tests/unit/misc/test_ipc.py
112
qutebrowser
{ "docstring": "Qt seems to ignore AddressInUseError when using socketOptions.\n\n With this test we verify this bug still exists. If it fails, we can\n probably start using setSocketOptions again.\n ", "language": "en", "n_whitespaces": 36, "n_words": 27, "vocab_size": 24 }
50
Python
38
0877fb0d78635692e481c8bde224fac5ad0dd430
test_ipc.py
321,412
10
64
test_socket_options_address_in_use_problem
https://github.com/qutebrowser/qutebrowser.git
Run scripts/dev/rewrite_enums.py
86
0
117,701
10
3
9
def create_dummy_object(name, backend_name): if name.isupper(): return DUMMY_CONSTANT.format(name) elif name.islower(): return DUMMY_FUNCTION.format(name, backend_name) else: return DUMMY_CLASS.format(name, backend_name)
utils/check_dummies.py
80
transformers
{ "docstring": "Create the code for the dummy object corresponding to `name`.", "language": "en", "n_whitespaces": 9, "n_words": 10, "vocab_size": 9 }
16
Python
13
1b730c3d11fdad0180ee9f9d3da9cff933c3b264
check_dummies.py
34,075
7
49
create_dummy_object
https://github.com/huggingface/transformers.git
Better dummies (#15148) * Better dummies * See if this fixes the issue * Fix quality * Style * Add doc for DummyObject
49
0
6,195
10
1
5
def require_torch_bf16(test_case): return unittest.skipUnless( is_torch_bf16_available(), "test requires torch>=1.10, using Ampere GPU or newer arch with cuda>=11.0 or using CPU", )(test_case)
src/transformers/testing_utils.py
38
transformers
{ "docstring": "Decorator marking a test that requires torch>=1.10, using Ampere GPU or newer arch with cuda>=11.0 or using CPU.", "language": "en", "n_whitespaces": 17, "n_words": 18, "vocab_size": 16 }
20
Python
18
34097b3304d79ace845316d4929220623279c8bc
testing_utils.py
31,115
5
21
require_torch_bf16
https://github.com/huggingface/transformers.git
Extend Transformers Trainer Class to Enable CPU AMP and Integrate Intel Extension for PyTorch (#17138) * init PR * fix import ipex * minor fix on bf16 * refine optimizer * refine args notes * refine code * refine ipex optimize args * refine half_precision_backend * black format * isort format * isort format files * flake8 format * doc builder format * refine codes * remove jit and optim bits * black preview format * Update src/transformers/trainer.py Co-authored-by: Sylvain Gugger <[email protected]> * refine code * refine notes * Update src/transformers/trainer.py Co-authored-by: Sylvain Gugger <[email protected]> * Update src/transformers/trainer.py Co-authored-by: Sylvain Gugger <[email protected]> * code refine * add ipex ut * add performance cpu doc * link to the cpu doc from main perf doc * install ipex into CI's docker * Update perf_train_cpu.mdx * Update docs/source/en/perf_train_cpu.mdx Co-authored-by: Stas Bekman <[email protected]> * Update perf_train_cpu.mdx * Update perf_train_cpu.mdx Co-authored-by: Sylvain Gugger <[email protected]> Co-authored-by: Stas Bekman <[email protected]> Co-authored-by: Stas Bekman <[email protected]>
43
0
5,682
10
2
28
def milp(c, *, integrality=None, bounds=None, constraints=None, options=None): r args_iv = _milp_iv(c, integrality, bounds, constraints, options) c, integrality, lb, ub, indptr, indices, data, b_l, b_u, options = args_iv highs_res = _highs_wrapper(c, indptr, indices, data, b_l, b_u, lb, ub, integrality, options) res = {} # Convert to scipy-style status and message highs_status = highs_res.get('status', None) highs_message = highs_res.get('message', None) status, message = _highs_to_scipy_status_message(highs_status, highs_message) res['status'] = status res['message'] = message res['success'] = res['status'] in {0, 2, 3} x = highs_res.get('x', None) res['x'] = np.array(x) if x is not None else None res['fun'] = highs_res.get('fun', None) res['mip_node_count'] = highs_res.get('mip_node_count', None) res['mip_dual_bound'] = highs_res.get('mip_dual_bound', None) res['mip_gap'] = highs_res.get('mip_gap', None) return OptimizeResult(res)
scipy/optimize/_milp.py
357
scipy
{ "docstring": "\n Mixed-integer linear programming\n\n Solves problems of the following form:\n\n .. math::\n\n \\min_x \\ & c^T x \\\\\n \\mbox{such that} \\ & b_l \\leq A x \\leq b_u,\\\\\n & l \\leq x \\leq u, \\\\\n & x_i \\in \\mathbb{Z}, i \\in X_i\n\n where :math:`x` is a vector of decision variables;\n :math:`c`, :math:`b_l`, :math:`b_u`, :math:`l`, and :math:`u` are vectors;\n :math:`A` is a matrix, and :math:`X_i` is the set of indices of\n decision variables that must be integral. (In this context, a\n variable that can assume only integer values is said to be \"integral\";\n it has an \"integrality\" constraint.)\n\n Alternatively, that's:\n\n minimize::\n\n c @ x\n\n such that::\n\n b_l <= A @ x <= b_u\n l <= x <= u\n Specified elements of x must be integers\n\n By default, ``l = 0`` and ``u = np.inf`` unless specified with\n ``bounds``.\n\n Parameters\n ----------\n c : 1D array_like\n The coefficients of the linear objective function to be minimized.\n `c` is converted to a double precision array before the problem is\n solved.\n integrality : 1D array_like, optional\n Indicates the type of integrality constraint on each decision variable.\n\n ``0`` : Continuous variable; no integrality constraint.\n\n ``1`` : Integer variable; decision variable must be an integer\n within `bounds`.\n\n ``2`` : Semi-continuous variable; decision variable must be within\n `bounds` or take value ``0``.\n\n ``3`` : Semi-integer variable; decision variable must be an integer\n within `bounds` or take value ``0``.\n\n By default, all variables are continuous. `integrality` is converted\n to an array of integers before the problem is solved.\n\n bounds : scipy.optimize.Bounds, optional\n Bounds on the decision variables. Lower and upper bounds are converted\n to double precision arrays before the problem is solved. The\n ``keep_feasible`` parameter of the `Bounds` object is ignored. If\n not specified, all decision variables are constrained to be\n non-negative.\n constraints : sequence of scipy.optimize.LinearConstraint, optional\n Linear constraints of the optimization problem. Arguments may be\n one of the following:\n\n 1. A single `LinearConstraint` object\n 2. A single tuple that can be converted to a `LinearConstraint` object\n as ``LinearConstraint(*constraints)``\n 3. A sequence composed entirely of objects of type 1. and 2.\n\n Before the problem is solved, all values are converted to double\n precision, and the matrices of constraint coefficients are converted to\n instances of `scipy.sparse.csc_array`. The ``keep_feasible`` parameter\n of `LinearConstraint` objects is ignored.\n options : dict, optional\n A dictionary of solver options. The following keys are recognized.\n\n disp : bool (default: ``False``)\n Set to ``True`` if indicators of optimization status are to be\n printed to the console during optimization.\n presolve : bool (default: ``True``)\n Presolve attempts to identify trivial infeasibilities,\n identify trivial unboundedness, and simplify the problem before\n sending it to the main solver.\n time_limit : float, optional\n The maximum number of seconds allotted to solve the problem.\n Default is no time limit.\n\n Returns\n -------\n res : OptimizeResult\n An instance of :class:`scipy.optimize.OptimizeResult`. 
The object\n is guaranteed to have the following attributes.\n\n status : int\n An integer representing the exit status of the algorithm.\n\n ``0`` : Optimal solution found.\n\n ``1`` : Iteration or time limit reached.\n\n ``2`` : Problem is infeasible.\n\n ``3`` : Problem is unbounded.\n\n ``4`` : Other; see message for details.\n\n success : bool\n ``True`` when an optimal solution is found, the problem is\n determined to be infeasible, or the problem is determined\n to be unbounded.\n\n message : str\n A string descriptor of the exit status of the algorithm.\n\n The following attributes will also be present, but the values may be\n ``None``, depending on the solution status.\n\n x : ndarray\n The values of the decision variables that minimize the\n objective function while satisfying the constraints.\n fun : float\n The optimal value of the objective function ``c @ x``.\n mip_node_count : int\n The number of subproblems or \"nodes\" solved by the MILP solver.\n mip_dual_bound : float\n The MILP solver's final estimate of the lower bound on the optimal\n solution.\n mip_gap : float\n The difference between the final objective function value and the\n final dual bound.\n\n Notes\n -----\n `milp` is a wrapper of the HiGHS linear optimization software [1]_. The\n algorithm is deterministic, and it typically finds the global optimum of\n moderately challenging mixed-integer linear programs (when it exists).\n\n References\n ----------\n .. [1] Huangfu, Q., Galabova, I., Feldmeier, M., and Hall, J. A. J.\n \"HiGHS - high performance software for linear optimization.\"\n Accessed 12/25/2021 at https://www.maths.ed.ac.uk/hall/HiGHS/#guide\n .. [2] Huangfu, Q. and Hall, J. A. J. \"Parallelizing the dual revised\n simplex method.\" Mathematical Programming Computation, 10 (1),\n 119-142, 2018. DOI: 10.1007/s12532-017-0130-5\n\n Examples\n --------\n Consider the problem at\n https://en.wikipedia.org/wiki/Integer_programming#Example, which is\n expressed as a maximization problem of two variables. Since `milp` requires\n that the problem be expressed as a minimization problem, the objective\n function coefficients on the decision variables are:\n\n >>> c = -np.array([0, 1])\n\n Note the negative sign: we maximize the original objective function\n by minimizing the negative of the objective function.\n\n We collect the coefficients of the constraints into arrays like:\n\n >>> A = np.array([[-1, 1], [3, 2], [2, 3]])\n >>> b_u = np.array([1, 12, 12])\n >>> b_l = np.full_like(b_u, -np.inf)\n\n Because there is no lower limit on these constraints, we have defined a\n variable ``b_l`` full of values representing negative infinity. This may\n be unfamiliar to users of `scipy.optimize.linprog`, which only accepts\n \"less than\" (or \"upper bound\") inequality constraints of the form\n ``A_ub @ x <= b_u``. 
By accepting both ``b_l`` and ``b_u`` of constraints\n ``b_l <= A_ub @ x <= b_u``, `milp` makes it easy to specify \"greater than\"\n inequality constraints, \"less than\" inequality constraints, and equality\n constraints concisely.\n\n These arrays are collected into a single `LinearConstraint` object like:\n\n >>> from scipy.optimize import LinearConstraint\n >>> constraints = LinearConstraint(A, b_l, b_u)\n\n The non-negativity bounds on the decision variables are enforced by\n default, so we do not need to provide an argument for `bounds`.\n\n Finally, the problem states that both decision variables must be integers:\n\n >>> integrality = np.ones_like(c)\n\n We solve the problem like:\n >>> from scipy.optimize import milp\n >>> res = milp(c=c, constraints=constraints, integrality=integrality)\n >>> res.x\n [1.0, 2.0]\n\n Note that had we solved the relaxed problem (without integrality\n constraints):\n >>> res = milp(c=c, constraints=constraints) # OR:\n >>> # from scipy.optimize import linprog; res = linprog(c, A, b_u)\n >>> res.x\n [1.8, 2.8]\n\n we would not have obtained the correct solution by rounding to the nearest\n integers.\n\n ", "language": "en", "n_whitespaces": 1938, "n_words": 1026, "vocab_size": 475 }
107
Python
69
131c4e78b3f093ad3d415ebcc1fb42bbbde72470
_milp.py
242,082
226
232
milp
https://github.com/scipy/scipy.git
MAINT: optimize: milp: update error messages [skip ci]
245
0
69,782
9
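For context, the record's docstring example can be run end to end roughly as follows. This is only a sketch based on the `scipy.optimize.milp` call signature described in the docstring above (c, constraints, integrality) and its Wikipedia integer-programming instance; the printed values are expectations, not part of the dataset record.

import numpy as np
from scipy.optimize import milp, LinearConstraint

# Maximize x1 (the docstring's example) by minimizing its negative.
c = -np.array([0, 1])
A = np.array([[-1, 1], [3, 2], [2, 3]])
b_u = np.array([1, 12, 12])
b_l = np.full(b_u.shape, -np.inf)            # these rows have no lower limit
constraints = LinearConstraint(A, b_l, b_u)
integrality = np.ones_like(c)                # both variables must be integers

res = milp(c=c, constraints=constraints, integrality=integrality)
print(res.status, res.fun, res.x)            # expected: status 0, x close to [1., 2.]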
6
25
def all_reduce_sum_gradients(grads_and_vars): grads_and_vars = list(grads_and_vars) filtered_grads_and_vars = filter_empty_gradients(grads_and_vars) if filtered_grads_and_vars: if tf.__internal__.distribute.strategy_supports_no_merge_call(): grads = [pair[0] for pair in filtered_grads_and_vars] reduced = tf.distribute.get_replica_context().all_reduce( tf.distribute.ReduceOp.SUM, grads ) else: # TODO(b/183257003): Remove this branch reduced = tf.distribute.get_replica_context().merge_call( _all_reduce_sum_fn, args=(filtered_grads_and_vars,) ) else: reduced = [] # Copy 'reduced' but add None gradients back in reduced_with_nones = [] reduced_pos = 0 for g, v in grads_and_vars: if g is None: reduced_with_nones.append((None, v)) else: reduced_with_nones.append((reduced[reduced_pos], v)) reduced_pos += 1 assert reduced_pos == len(reduced), "Failed to add all gradients" return reduced_with_nones
keras/optimizers/optimizer_v2/utils.py
247
keras
{ "docstring": "Returns all-reduced gradients aggregated via summation.\n\n Args:\n grads_and_vars: List of (gradient, variable) pairs.\n\n Returns:\n List of (gradient, variable) pairs where gradients have been all-reduced.\n ", "language": "en", "n_whitespaces": 43, "n_words": 24, "vocab_size": 19 }
84
Python
59
84afc5193d38057e2e2badf9c889ea87d80d8fbf
utils.py
275,620
25
153
all_reduce_sum_gradients
https://github.com/keras-team/keras.git
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
281
0
81,433
16
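As a rough illustration of the None-preserving bookkeeping in the function above (re-inserting empty gradients after the reduction), here is a framework-free sketch; the sample pairs and the stand-in "all-reduce" are invented and nothing here touches `tf.distribute`.

def restore_nones(grads_and_vars, reduced):
    # Walk the original (grad, var) pairs and splice the reduced gradients back
    # in, keeping a (None, var) placeholder wherever the gradient was None.
    out, pos = [], 0
    for g, v in grads_and_vars:
        if g is None:
            out.append((None, v))
        else:
            out.append((reduced[pos], v))
            pos += 1
    assert pos == len(reduced), "Failed to add all gradients"
    return out

pairs = [(1.0, "w0"), (None, "w1"), (3.0, "w2")]
reduced = [g * 2 for g, _ in pairs if g is not None]   # stand-in for the SUM all-reduce
print(restore_nones(pairs, reduced))                    # [(2.0, 'w0'), (None, 'w1'), (6.0, 'w2')]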
1
3
def dev(self): return self._get_dev()
ivy/core/container.py
23
ivy
{ "docstring": "\n The device to which the arrays in the container belong, with None returned if the devices are not consistent\n ", "language": "en", "n_whitespaces": 34, "n_words": 19, "vocab_size": 17 }
4
Python
4
d743336b1f3654cd0315f380f43eed4116997c1d
container.py
213,577
2
12
dev
https://github.com/unifyai/ivy.git
renamed dev_str arg to dev for all methods.
18
0
53,660
7
1
8
def _zero_in_bounds(self): vmin, vmax = self._axes.yaxis._scale.limit_range_for_scale(0, 1, 1e-5) return vmin == 0
lib/matplotlib/projections/polar.py
48
matplotlib
{ "docstring": "\n Return True if zero is within the valid values for the\n scale of the radial axis.\n ", "language": "en", "n_whitespaces": 38, "n_words": 16, "vocab_size": 14 }
12
Python
12
4507ae544155fb6fc9fa594faf1b8c8a23a85426
polar.py
110,754
3
32
_zero_in_bounds
https://github.com/matplotlib/matplotlib.git
Allow polar scales where zero is not in valid interval
33
0
24,277
11
5
14
def _watch_glob(self, directory, patterns): prefix = "glob" if not directory.exists(): if not directory.parent.exists(): logger.warning( "Unable to watch directory %s as neither it or its parent exist.", directory, ) return prefix = "glob-parent-%s" % directory.name patterns = ["%s/%s" % (directory.name, pattern) for pattern in patterns] directory = directory.parent expression = ["anyof"] for pattern in patterns: expression.append(["match", pattern, "wholename"]) self._subscribe(directory, "%s:%s" % (prefix, directory), expression)
django/utils/autoreload.py
181
django
{ "docstring": "\n Watch a directory with a specific glob. If the directory doesn't yet\n exist, attempt to watch the parent directory and amend the patterns to\n include this. It's important this method isn't called more than one per\n directory when updating all subscriptions. Subsequent calls will\n overwrite the named subscription, so it must include all possible glob\n expressions.\n ", "language": "en", "n_whitespaces": 106, "n_words": 56, "vocab_size": 46 }
63
Python
49
9c19aff7c7561e3a82978a272ecdaad40dda5c00
autoreload.py
206,541
16
108
_watch_glob
https://github.com/django/django.git
Refs #33476 -- Reformatted code with Black.
243
0
51,559
12
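To make the parent-directory fallback above concrete, a stand-alone sketch of the watchman-style expression it builds is shown below; the directory and pattern are hypothetical, only one level of fallback is modelled, and the `_subscribe` call is replaced by a return value.

from pathlib import Path

def build_watch_expression(directory: Path, patterns):
    # Mirrors the logic above: if the directory is missing, watch its parent
    # and prefix each pattern with the missing directory's name.
    if not directory.exists():
        patterns = ["%s/%s" % (directory.name, p) for p in patterns]
        directory = directory.parent
    expression = ["anyof"]
    for pattern in patterns:
        expression.append(["match", pattern, "wholename"])
    return directory, expression

# Assuming "/tmp/project/templates" does not exist yet:
print(build_watch_expression(Path("/tmp/project/templates"), ["*.html"]))
# (PosixPath('/tmp/project'), ['anyof', ['match', 'templates/*.html', 'wholename']])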
4
19
def edit_focus_options(self) -> typing.Sequence[str]: flow = self.master.view.focus.flow focus_options = [] if isinstance(flow, tcp.TCPFlow): focus_options = ["tcp-message"] elif isinstance(flow, http.HTTPFlow): focus_options = [ "cookies", "urlencoded form", "multipart form", "path", "method", "query", "reason", "request-headers", "response-headers", "request-body", "response-body", "status_code", "set-cookies", "url", ] elif isinstance(flow, dns.DNSFlow): raise exceptions.CommandError("Cannot edit DNS flows yet, please submit a patch.") return focus_options
mitmproxy/tools/console/consoleaddons.py
181
mitmproxy
{ "docstring": "\n Possible components for console.edit.focus.\n ", "language": "en", "n_whitespaces": 23, "n_words": 4, "vocab_size": 4 }
54
Python
44
fab7016b318d7c37fc30cef9c0567b9b620b883e
consoleaddons.py
251,073
28
104
edit_focus_options
https://github.com/mitmproxy/mitmproxy.git
beautify flowtable dns entries this isn't perfect (the whole table needs to be refactored properly), but good enough for now.
357
0
73,587
11
1
10
def default_user(factories): return factories.create_user(email="admin@localhost", is_superuser=True) @pytest.mark.django_db @pytest.fixture(scope="function")
src/sentry/utils/pytest/fixtures.py
60
@pytest.mark.django_db @pytest.fixture(scope="function")
sentry
{ "docstring": "A default (super)user with email ``admin@localhost`` and password ``admin``.\n\n :returns: A :class:`sentry.models.user.User` instance.\n ", "language": "en", "n_whitespaces": 19, "n_words": 13, "vocab_size": 12 }
7
Python
7
9acf84fbe1c7ffba0faec907ad3219141086949f
fixtures.py
98,369
2
19
default_user
https://github.com/getsentry/sentry.git
feat(appconnect): Introduce an endpoint to trigger refresh of builds (#33457) Normally the builds are refreshed once an hour, however we're adding a way to trigger this manually. This endpoint is still severely ratelimited. This also includes the UI code to add a button for this endpoint. NATIVE-139 Co-authored-by: Priscila Oliveira <[email protected]>
11
1
19,565
9
2
20
def imshow_rgb(self, r, g, b, **kwargs): if not (r.shape == g.shape == b.shape): raise ValueError( f'Input shapes ({r.shape}, {g.shape}, {b.shape}) do not match') RGB = np.dstack([r, g, b]) R = np.zeros_like(RGB) R[:, :, 0] = r G = np.zeros_like(RGB) G[:, :, 1] = g B = np.zeros_like(RGB) B[:, :, 2] = b im_rgb = self.RGB.imshow(RGB, **kwargs) im_r = self.R.imshow(R, **kwargs) im_g = self.G.imshow(G, **kwargs) im_b = self.B.imshow(B, **kwargs) return im_rgb, im_r, im_g, im_b
lib/mpl_toolkits/axes_grid1/axes_rgb.py
272
matplotlib
{ "docstring": "\n Create the four images {rgb, r, g, b}.\n\n Parameters\n ----------\n r, g, b : array-like\n The red, green, and blue arrays.\n **kwargs :\n Forwarded to `~.Axes.imshow` calls for the four images.\n\n Returns\n -------\n rgb : `~matplotlib.image.AxesImage`\n r : `~matplotlib.image.AxesImage`\n g : `~matplotlib.image.AxesImage`\n b : `~matplotlib.image.AxesImage`\n ", "language": "en", "n_whitespaces": 152, "n_words": 45, "vocab_size": 32 }
73
Python
52
df6f95703b60348e01603f98a439b133da2938a0
axes_rgb.py
109,887
16
165
imshow_rgb
https://github.com/matplotlib/matplotlib.git
Improve mpl_toolkit documentation
197
0
23,796
12
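The channel-splitting trick above (stacking r, g, b and zeroing the other two channels for the single-channel panels) can be reproduced with plain NumPy; the random 4x4 arrays below are placeholders for real image channels.

import numpy as np

rng = np.random.default_rng(0)
r, g, b = (rng.random((4, 4)) for _ in range(3))

RGB = np.dstack([r, g, b])               # full-colour image, shape (4, 4, 3)
R = np.zeros_like(RGB); R[:, :, 0] = r   # red-only view
G = np.zeros_like(RGB); G[:, :, 1] = g   # green-only view
B = np.zeros_like(RGB); B[:, :, 2] = b   # blue-only view

print(RGB.shape, G[:, :, 0].max())       # G's red channel stays zero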
1
27
def forward(self, qkv): bs, width, length = qkv.shape assert width % (3 * self.n_heads) == 0 ch = width // (3 * self.n_heads) # q, k, v = qkv.reshape(bs * self.n_heads, ch * 3, length).split(ch, dim=1) q, k, v = paddle.reshape(qkv, [bs * self.n_heads, ch * 3, length]).split(3, axis=1) scale = 1 / math.sqrt(math.sqrt(ch)) weight = paddle.einsum("bct,bcs->bts", q * scale, k * scale) # More stable with f16 than dividing afterwards weight = paddle.cast(nn.functional.softmax(paddle.cast(weight, 'float32'), axis=-1), weight.dtype) a = paddle.einsum("bts,bcs->bct", weight, v) # return a.reshape(bs, -1, length) return paddle.reshape(a, [bs, -1, length])
modules/image/text_to_image/disco_diffusion_cnclip_vitb16/reverse_diffusion/model/unet.py
253
PaddleHub
{ "docstring": "\n Apply QKV attention.\n\n :param qkv: an [N x (H * 3 * C) x T] tensor of Qs, Ks, and Vs.\n :return: an [N x (H * C) x T] tensor after attention.\n ", "language": "en", "n_whitespaces": 62, "n_words": 33, "vocab_size": 21 }
92
Python
63
f4d6e64cdc132ae868699a0ba442f4ab1d304a14
unet.py
49,867
10
158
forward
https://github.com/PaddlePaddle/PaddleHub.git
add disco_diffusion_cnclip_vitb16 module
177
0
9,938
13
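The einsum-based attention above can be sanity-checked with a NumPy re-implementation of the same arithmetic (1/sqrt(sqrt(ch)) scaling applied to both q and k, softmax over the key axis); the shapes and random input are arbitrary and no Paddle code is involved.

import math
import numpy as np

def qkv_attention(qkv, n_heads):
    bs, width, length = qkv.shape
    ch = width // (3 * n_heads)
    q, k, v = np.split(qkv.reshape(bs * n_heads, ch * 3, length), 3, axis=1)
    scale = 1 / math.sqrt(math.sqrt(ch))
    weight = np.einsum("bct,bcs->bts", q * scale, k * scale)
    weight = np.exp(weight - weight.max(axis=-1, keepdims=True))
    weight = weight / weight.sum(axis=-1, keepdims=True)      # softmax over s
    a = np.einsum("bts,bcs->bct", weight, v)
    return a.reshape(bs, -1, length)

out = qkv_attention(np.random.rand(2, 3 * 4 * 8, 5), n_heads=4)
print(out.shape)   # (2, 32, 5)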
1
3
def callbacks(self, callbacks_class) -> "TrainerConfig": self.callbacks_class = callbacks_class return self
rllib/agents/trainer_config.py
31
ray
{ "docstring": "Sets the callbacks configuration.\n\n Args:\n callbacks_class: Callbacks class, whose methods will be run during\n various phases of training and environment sample collection.\n See the `DefaultCallbacks` class and\n `examples/custom_metrics_and_callbacks.py` for more usage information.\n\n Returns:\n This updated TrainerConfig object.\n ", "language": "en", "n_whitespaces": 125, "n_words": 37, "vocab_size": 35 }
10
Python
10
2eaa54bd763ae0e63158ae0d939633c804394b78
trainer_config.py
147,576
14
17
callbacks
https://github.com/ray-project/ray.git
[RLlib] POC: Config objects instead of dicts (PPO only). (#23491)
31
0
34,012
7
1
5
def notify_conversion_complete(self, status="Converting") -> None: self.progress = 95 self.update(status)
spotdl/download/progress_handler.py
41
spotify-downloader
{ "docstring": "\n Notifies the progress handler that the song has been converted.\n\n ### Arguments\n - status: The status to display.\n ", "language": "en", "n_whitespaces": 47, "n_words": 18, "vocab_size": 17 }
9
Python
9
14e467160c852095efb0107a25eb1b136fda3ea8
progress_handler.py
30,336
9
23
notify_conversion_complete
https://github.com/spotDL/spotify-downloader.git
download the audio stream using yt-dlp not ffmpeg
30
0
5,493
7
2
13
def ft_load_params_from_file(self) -> None: if self._ft_params_from_file: # Set parameters from Hyperopt results file params = self._ft_params_from_file self.minimal_roi = params.get('roi', getattr(self, 'minimal_roi', {})) self.stoploss = params.get('stoploss', {}).get( 'stoploss', getattr(self, 'stoploss', -0.1)) trailing = params.get('trailing', {}) self.trailing_stop = trailing.get( 'trailing_stop', getattr(self, 'trailing_stop', False)) self.trailing_stop_positive = trailing.get( 'trailing_stop_positive', getattr(self, 'trailing_stop_positive', None)) self.trailing_stop_positive_offset = trailing.get( 'trailing_stop_positive_offset', getattr(self, 'trailing_stop_positive_offset', 0)) self.trailing_only_offset_is_reached = trailing.get( 'trailing_only_offset_is_reached', getattr(self, 'trailing_only_offset_is_reached', 0.0))
freqtrade/strategy/hyper.py
256
freqtrade
{ "docstring": "\n Load Parameters from parameter file\n Should/must run before config values are loaded in strategy_resolver.\n ", "language": "en", "n_whitespaces": 36, "n_words": 14, "vocab_size": 14 }
62
Python
42
386d3e035337cea7cbe9e38a5a8100fa79948fbb
hyper.py
149,922
21
157
ft_load_params_from_file
https://github.com/freqtrade/freqtrade.git
Rename stop/roi loading method
280
0
34,590
13
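The attribute mapping above can be illustrated with a small dictionary shaped like a hyperopt parameter file; the values are invented and the strategy object is a bare class rather than a freqtrade strategy.

params = {
    "roi": {"0": 0.05, "30": 0.02},
    "stoploss": {"stoploss": -0.08},
    "trailing": {"trailing_stop": True, "trailing_stop_positive": 0.01},
}

class Strategy:
    minimal_roi = {"0": 0.10}        # defaults that the file overrides
    stoploss = -0.10
    trailing_stop = False

s = Strategy()
s.minimal_roi = params.get("roi", getattr(s, "minimal_roi", {}))
s.stoploss = params.get("stoploss", {}).get("stoploss", getattr(s, "stoploss", -0.1))
trailing = params.get("trailing", {})
s.trailing_stop = trailing.get("trailing_stop", getattr(s, "trailing_stop", False))
print(s.minimal_roi, s.stoploss, s.trailing_stop)   # {'0': 0.05, '30': 0.02} -0.08 True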
1
3
def default_training_arg(self): return self._default_training_arg
keras/utils/layer_utils.py
19
keras
{ "docstring": "The default value given to the \"training\" argument.", "language": "en", "n_whitespaces": 7, "n_words": 8, "vocab_size": 8 }
4
Python
4
84afc5193d38057e2e2badf9c889ea87d80d8fbf
layer_utils.py
276,951
2
10
default_training_arg
https://github.com/keras-team/keras.git
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
18
0
81,798
6
1
5
def add_distribution(self, distribution): self.adjacency_list[distribution] = [] self.reverse_list[distribution] = [] #self.missing[distribution] = []
.venv/lib/python3.8/site-packages/pip/_vendor/distlib/database.py
44
transferlearning
{ "docstring": "Add the *distribution* to the graph.\n\n :type distribution: :class:`distutils2.database.InstalledDistribution`\n or :class:`distutils2.database.EggInfoDistribution`\n ", "language": "en", "n_whitespaces": 52, "n_words": 11, "vocab_size": 10 }
12
Python
8
f638f5d0e6c8ebed0e69a6584bc7f003ec646580
database.py
61,958
3
26
add_distribution
https://github.com/jindongwang/transferlearning.git
upd; format
40
0
12,779
8
1
14
def _getargspec(target): fullargspecs = getfullargspec(target) argspecs = ArgSpec( args=fullargspecs.args, varargs=fullargspecs.varargs, keywords=fullargspecs.varkw, defaults=fullargspecs.defaults, ) return argspecs else: _getargspec = _inspect.getargspec
keras/utils/tf_inspect.py
78
keras
{ "docstring": "A python3 version of getargspec.\n\n Calls `getfullargspec` and assigns args, varargs,\n varkw, and defaults to a python 2/3 compatible `ArgSpec`.\n\n The parameter name 'varkw' is changed to 'keywords' to fit the\n `ArgSpec` struct.\n\n Args:\n target: the target object to inspect.\n\n Returns:\n An ArgSpec with args, varargs, keywords, and defaults parameters\n from FullArgSpec.\n ", "language": "en", "n_whitespaces": 128, "n_words": 52, "vocab_size": 43 }
19
Python
16
84afc5193d38057e2e2badf9c889ea87d80d8fbf
tf_inspect.py
277,063
9
43
_getargspec
https://github.com/keras-team/keras.git
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
100
0
81,840
10
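A rough picture of the FullArgSpec-to-ArgSpec conversion the docstring describes, using a plain namedtuple in place of the module's own ArgSpec type; the sample function `f` is hypothetical.

from collections import namedtuple
from inspect import getfullargspec

ArgSpec = namedtuple("ArgSpec", ["args", "varargs", "keywords", "defaults"])

def getargspec_compat(target):
    full = getfullargspec(target)
    # 'varkw' is renamed to 'keywords' to match the old ArgSpec struct.
    return ArgSpec(args=full.args, varargs=full.varargs,
                   keywords=full.varkw, defaults=full.defaults)

def f(a, b=1, *rest, **options):
    return a

print(getargspec_compat(f))
# ArgSpec(args=['a', 'b'], varargs='rest', keywords='options', defaults=(1,))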
1
3
def proj_version(self): return self._get_spatialite_func("proj4_version()")
django/contrib/gis/db/backends/spatialite/operations.py
26
django
{ "docstring": "Return the version of the PROJ library used by SpatiaLite.", "language": "en", "n_whitespaces": 9, "n_words": 10, "vocab_size": 9 }
4
Python
4
9c19aff7c7561e3a82978a272ecdaad40dda5c00
operations.py
203,882
2
13
proj_version
https://github.com/django/django.git
Refs #33476 -- Reformatted code with Black.
18
0
50,571
8
3
9
def build_pattern(): #bare = set() for module, replace in list(MAPPING.items()): for old_attr, new_attr in list(replace.items()): LOOKUP[(module, old_attr)] = new_attr #bare.add(module) #bare.add(old_attr) #yield % (module, module) yield % (module, old_attr, old_attr) yield % (module, old_attr) #yield % alternates(bare)
python3.10.4/Lib/lib2to3/fixes/fix_renames.py
104
XX-Net
{ "docstring": "\n # import_name< 'import' (module=%r\n # | dotted_as_names< any* module=%r any* >) >\n # \n import_from< 'from' module_name=%r 'import'\n ( attr_name=%r | import_as_name< attr_name=%r 'as' any >) >\n \n power< module_name=%r trailer< '.' attr_name=%r > any* >\n bare_name=%s", "language": "en", "n_whitespaces": 178, "n_words": 35, "vocab_size": 22 }
37
Python
24
8198943edd73a363c266633e1aa5b2a9e9c9f526
fix_renames.py
218,720
11
60
build_pattern
https://github.com/XX-net/XX-Net.git
add python 3.10.4 for windows
122
0
55,450
12
1
5
def close_tilt_position(self) -> PowerviewShadeMove: return PowerviewShadeMove(self._shade.close_position_tilt, {})
homeassistant/components/hunterdouglas_powerview/cover.py
34
core
{ "docstring": "Return the close tilt position and required additional positions.", "language": "en", "n_whitespaces": 8, "n_words": 9, "vocab_size": 9 }
7
Python
7
3ab294e8efc00c9f3cda2993318bb582ba675f8c
cover.py
288,765
3
20
close_tilt_position
https://github.com/home-assistant/core.git
Powerview refactor prep for all shade types (#79862)
21
0
87,917
9
1
2
def starts(self): return self["starts"]
packages/python/plotly/plotly/graph_objs/_streamtube.py
22
plotly.py
{ "docstring": "\n The 'starts' property is an instance of Starts\n that may be specified as:\n - An instance of :class:`plotly.graph_objs.streamtube.Starts`\n - A dict of string/value properties that will be passed\n to the Starts constructor\n\n Supported dict properties:\n\n x\n Sets the x components of the starting position\n of the streamtubes\n xsrc\n Sets the source reference on Chart Studio Cloud\n for `x`.\n y\n Sets the y components of the starting position\n of the streamtubes\n ysrc\n Sets the source reference on Chart Studio Cloud\n for `y`.\n z\n Sets the z components of the starting position\n of the streamtubes\n zsrc\n Sets the source reference on Chart Studio Cloud\n for `z`.\n\n Returns\n -------\n plotly.graph_objs.streamtube.Starts\n ", "language": "en", "n_whitespaces": 508, "n_words": 107, "vocab_size": 51 }
4
Python
4
43e3a4011080911901176aab919c0ecf5046ddd3
_streamtube.py
228,263
2
11
starts
https://github.com/plotly/plotly.py.git
switch to black .22
18
0
59,936
7
1
6
def normalization(channels, swish=0.0): return GroupNorm32(num_channels=channels, num_groups=32, swish=swish) ## go
src/diffusers/models/unet_ldm.py
40
diffusers
{ "docstring": "\n Make a standard normalization layer, with an optional swish activation.\n\n :param channels: number of input channels. :return: an nn.Module for normalization.\n ", "language": "en", "n_whitespaces": 31, "n_words": 21, "vocab_size": 20 }
9
Python
9
4261c3aadfc23ee5b123b80ab7d8680a013acb66
unet_ldm.py
335,461
2
27
normalization
https://github.com/huggingface/diffusers.git
Make style
14
0
120,765
8
1
6
def check_override(object, parentclass, attribute): return getattr(type(object), attribute) == getattr(parentclass, attribute)
freqtrade/resolvers/strategy_resolver.py
42
freqtrade
{ "docstring": "\n Checks if a object overrides the parent class attribute.\n ", "language": "en", "n_whitespaces": 16, "n_words": 9, "vocab_size": 9 }
10
Python
9
fe62a71f4c621aa06cfbda1f2aec19ab01062ebd
strategy_resolver.py
149,120
2
27
check_override
https://github.com/freqtrade/freqtrade.git
Simplify implementation of "check_override" by extracting it to function
16
0
34,371
10
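A minimal stand-alone illustration of the comparison in the function above; the Parent/Child names are hypothetical and unrelated to freqtrade's strategy classes.

class Parent:
    def populate(self):
        return "parent"

class Child(Parent):
    def populate(self):
        return "child"

def check_override(obj, parentclass, attribute):
    return getattr(type(obj), attribute) == getattr(parentclass, attribute)

# The expression compares the class attribute with the parent's version, so it
# evaluates to True while the attribute is still inherited unchanged and False
# once a subclass supplies its own implementation.
print(check_override(Parent(), Parent, "populate"))   # True
print(check_override(Child(), Parent, "populate"))    # False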
3
9
def get_cost_centers(filters): order_by = "" if filters.get("budget_against") == "Cost Center": order_by = "order by lft" if filters.get("budget_against") in ["Cost Center", "Project"]: return frappe.db.sql_list( .format( tab=filters.get("budget_against"), order_by=order_by ), filters.get("company"), ) else: return frappe.db.sql_list( .format( tab=filters.get("budget_against") ) ) # nosec # Get dimension & target details
erpnext/accounts/report/budget_variance_report/budget_variance_report.py
168
erpnext
{ "docstring": "\n\t\t\t\tselect\n\t\t\t\t\tname\n\t\t\t\tfrom\n\t\t\t\t\t`tab{tab}`\n\t\t\t\twhere\n\t\t\t\t\tcompany = %s\n\t\t\t\t{order_by}\n\t\t\t\n\t\t\t\tselect\n\t\t\t\t\tname\n\t\t\t\tfrom\n\t\t\t\t\t`tab{tab}`\n\t\t\t", "language": "en", "n_whitespaces": 2, "n_words": 13, "vocab_size": 9 }
44
Python
34
494bd9ef78313436f0424b918f200dab8fc7c20b
budget_variance_report.py
65,181
30
91
get_cost_centers
https://github.com/frappe/erpnext.git
style: format code with black
27
0
13,819
16
4
11
def paginator_number(cl, i): if i == cl.paginator.ELLIPSIS: return format_html("{} ", cl.paginator.ELLIPSIS) elif i == cl.page_num: return format_html('<span class="this-page">{}</span> ', i) else: return format_html( '<a href="{}"{}>{}</a> ', cl.get_query_string({PAGE_VAR: i}), mark_safe(' class="end"' if i == cl.paginator.num_pages else ""), i, )
django/contrib/admin/templatetags/admin_list.py
128
django
{ "docstring": "\n Generate an individual page index link in a paginated list.\n ", "language": "en", "n_whitespaces": 17, "n_words": 10, "vocab_size": 10 }
38
Python
30
9c19aff7c7561e3a82978a272ecdaad40dda5c00
admin_list.py
203,487
12
78
paginator_number
https://github.com/django/django.git
Refs #33476 -- Reformatted code with Black.
122
0
50,406
16
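The three rendering branches above (ellipsis, current page, plain link) produce markup along these lines; this sketch uses plain string formatting instead of Django's `format_html`, a made-up "?p=" query string, and assumes the paginator's ellipsis is the "…" character.

ELLIPSIS = "…"

def paginator_number(i, page_num, num_pages, ellipsis=ELLIPSIS):
    if i == ellipsis:
        return "%s " % ellipsis
    if i == page_num:
        return '<span class="this-page">%s</span> ' % i
    end = ' class="end"' if i == num_pages else ""
    return '<a href="?p=%s"%s>%s</a> ' % (i, end, i)

for i in (1, "…", 4, 5):
    print(paginator_number(i, page_num=4, num_pages=5), end="")
# <a href="?p=1">1</a> … <span class="this-page">4</span> <a href="?p=5" class="end">5</a>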
1
27
def test_deepspeed_multigpu_single_file(tmpdir): model = BoringModel() checkpoint_path = os.path.join(tmpdir, "model.pt") trainer = Trainer(default_root_dir=tmpdir, fast_dev_run=True) trainer.fit(model) trainer.save_checkpoint(checkpoint_path) trainer = Trainer( default_root_dir=tmpdir, strategy=DeepSpeedStrategy(stage=3), gpus=1, fast_dev_run=True, precision=16 ) strategy = trainer.strategy assert isinstance(strategy, DeepSpeedStrategy) assert not strategy.load_full_weights with pytest.raises(MisconfigurationException, match="DeepSpeed was unable to load the checkpoint."): trainer.test(model, ckpt_path=checkpoint_path) trainer = Trainer( default_root_dir=tmpdir, strategy=DeepSpeedStrategy(stage=3, load_full_weights=True), gpus=1, fast_dev_run=True, precision=16, ) strategy = trainer.strategy assert isinstance(strategy, DeepSpeedStrategy) assert strategy.load_full_weights trainer.test(model, ckpt_path=checkpoint_path)
tests/strategies/test_deepspeed_strategy.py
270
lightning
{ "docstring": "Test to ensure that DeepSpeed loads from a single file checkpoint.", "language": "en", "n_whitespaces": 10, "n_words": 11, "vocab_size": 11 }
64
Python
41
650c710efacd633fa283955145342bb64063c883
test_deepspeed_strategy.py
241,567
25
175
test_deepspeed_multigpu_single_file
https://github.com/Lightning-AI/lightning.git
Rename training plugin test files & names to strategy (#11303)
167
0
69,594
12
3
23
def test_f2py_only(capfd, retreal_f77, monkeypatch): foutl = get_io_paths(retreal_f77, mname="test") ipath = foutl.finp toskip = "t0 t4 t8 sd s8 s4" tokeep = "td s0" monkeypatch.setattr( sys, "argv", f'f2py {ipath} -m test only: {tokeep}'.split()) with util.switchdir(ipath.parent): f2pycli() out, err = capfd.readouterr() for skey in toskip.split(): assert ( f'buildmodule: Could not find the body of interfaced routine "{skey}". Skipping.' in err) for rkey in tokeep.split(): assert f'Constructing wrapper function "{rkey}"' in out
numpy/f2py/tests/test_f2py2e.py
183
numpy
{ "docstring": "Test that functions can be kept by only:\n CLI :: only:\n ", "language": "en", "n_whitespaces": 17, "n_words": 11, "vocab_size": 10 }
69
Python
60
729ad4f92420231e2a7009b3223c6c7620b8b808
test_f2py2e.py
160,134
17
98
test_f2py_only
https://github.com/numpy/numpy.git
TST: Initialize f2py2e tests of the F2PY CLI (#20668) Increases F2PY coverage by around 15 percent. For the CLI itself it covers the major features (around 70 percent), with the exception of mostly numpy.distutils stuff. More importantly, sets the groundwork for #20056, in that passing the same testsuite should indicate feature parity.
184
0
38,506
13
4
10
def _is_all_dates(self) -> bool: if needs_i8_conversion(self.dtype): return True elif self.dtype != _dtype_obj: # TODO(ExtensionIndex): 3rd party EA might override? # Note: this includes IntervalIndex, even when the left/right # contain datetime-like objects. return False elif self._is_multi: return False return is_datetime_array(ensure_object(self._values))
pandas/core/indexes/base.py
76
pandas
{ "docstring": "\n Whether or not the index values only consist of dates.\n ", "language": "en", "n_whitespaces": 25, "n_words": 10, "vocab_size": 10 }
40
Python
33
06dac44e91bb099319fa6c421df8111b189d26a6
base.py
164,487
11
44
_is_all_dates
https://github.com/pandas-dev/pandas.git
CLN: Index methods incorrectly assuming object dtype (#45767)
142
0
39,562
10
1
4
def copy(self) -> Animation: return deepcopy(self) # Methods for interpolation, the mean of an Animation # TODO: stop using alpha as parameter name in different meanings.
manim/animation/animation.py
26
manim
{ "docstring": "Create a copy of the animation.\n\n Returns\n -------\n Animation\n A copy of ``self``\n ", "language": "en", "n_whitespaces": 52, "n_words": 13, "vocab_size": 11 }
26
Python
25
daf23c9d1031b12d9c119b8f6b7e60727d7f9242
animation.py
189,517
9
13
copy
https://github.com/ManimCommunity/manim.git
Upgraded typehints (#2429) * Future Annotations * Delete template_twitter_post.py * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Apply suggestions from code review * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed broken RTD Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
46
0
46,111
7
1
14
def test_change_one_time_keys(self) -> None: local_user = "@boris:" + self.hs.hostname device_id = "xyz" keys = { "alg1:k1": "key1", "alg2:k2": {"key": "key2", "signatures": {"k1": "sig1"}}, "alg2:k3": {"key": "key3"}, } res = self.get_success( self.handler.upload_keys_for_user( local_user, device_id, {"one_time_keys": keys} ) ) self.assertDictEqual( res, {"one_time_key_counts": {"alg1": 1, "alg2": 2, "signed_curve25519": 0}} ) # Error when changing string key self.get_failure( self.handler.upload_keys_for_user( local_user, device_id, {"one_time_keys": {"alg1:k1": "key2"}} ), SynapseError, ) # Error when replacing dict key with string self.get_failure( self.handler.upload_keys_for_user( local_user, device_id, {"one_time_keys": {"alg2:k3": "key2"}} ), SynapseError, ) # Error when replacing string key with dict self.get_failure( self.handler.upload_keys_for_user( local_user, device_id, {"one_time_keys": {"alg1:k1": {"key": "key"}}}, ), SynapseError, ) # Error when replacing dict key self.get_failure( self.handler.upload_keys_for_user( local_user, device_id, { "one_time_keys": { "alg2:k2": {"key": "key3", "signatures": {"k1": "sig1"}} } }, ), SynapseError, )
tests/handlers/test_e2e_keys.py
407
synapse
{ "docstring": "attempts to change one-time-keys should be rejected", "language": "en", "n_whitespaces": 6, "n_words": 7, "vocab_size": 7 }
124
Python
61
5dd949bee6158a8b651db9f2ae417a62c8184bfd
test_e2e_keys.py
247,617
49
229
test_change_one_time_keys
https://github.com/matrix-org/synapse.git
Add type hints to some tests/handlers files. (#12224)
680
0
71,783
18
3
10
def __call__(self, *other): rv = Cycle(*other) for k, v in zip(list(self.keys()), [rv[self[k]] for k in self.keys()]): rv[k] = v return rv
sympy/combinatorics/permutations.py
93
sympy
{ "docstring": "Return product of cycles processed from R to L.\n\n Examples\n ========\n\n >>> from sympy.combinatorics import Cycle\n >>> Cycle(1, 2)(2, 3)\n (1 3 2)\n\n An instance of a Cycle will automatically parse list-like\n objects and Permutations that are on the right. It is more\n flexible than the Permutation in that all elements need not\n be present:\n\n >>> a = Cycle(1, 2)\n >>> a(2, 3)\n (1 3 2)\n >>> a(2, 3)(4, 5)\n (1 3 2)(4 5)\n\n ", "language": "en", "n_whitespaces": 179, "n_words": 74, "vocab_size": 54 }
21
Python
16
498015021131af4dbb07eb110e5badaba8250c7b
permutations.py
196,180
5
59
__call__
https://github.com/sympy/sympy.git
Updated import locations
60
0
47,680
11
1
6
async def test_default_discovery_abort_on_new_unique_flow(hass, manager): mock_integration(hass, MockModule("comp")) mock_entity_platform(hass, "config_flow.comp", None)
tests/test_config_entries.py
45
core
{ "docstring": "Test that a flow using default discovery is aborted when a second flow with unique ID is created.", "language": "en", "n_whitespaces": 17, "n_words": 18, "vocab_size": 15 }
9
Python
9
7cd68381f1d4f58930ffd631dfbfc7159d459832
test_config_entries.py
316,438
21
165
test_default_discovery_abort_on_new_unique_flow
https://github.com/home-assistant/core.git
Search/replace RESULT_TYPE_* by FlowResultType enum (#74642)
18
0
115,016
10
3
10
def _get_roles_path(self): roles_path = context.CLIARGS['roles_path'] if context.CLIARGS['basedir'] is not None: subdir = os.path.join(context.CLIARGS['basedir'], "roles") if os.path.isdir(subdir): roles_path = (subdir,) + roles_path roles_path = roles_path + (context.CLIARGS['basedir'],) return roles_path
lib/ansible/cli/doc.py
126
ansible
{ "docstring": "\n Add any 'roles' subdir in playbook dir to the roles search path.\n And as a last resort, add the playbook dir itself. Order being:\n - 'roles' subdir of playbook dir\n - DEFAULT_ROLES_PATH (default in cliargs)\n - playbook dir (basedir)\n NOTE: This matches logic in RoleDefinition._load_role_path() method.\n ", "language": "en", "n_whitespaces": 108, "n_words": 46, "vocab_size": 33 }
28
Python
18
29b5eb6ba9fb652ddd8dd06cdd8f2e80e2098063
doc.py
266,511
8
75
_get_roles_path
https://github.com/ansible/ansible.git
updated metadata dump to do full docs dump (#76170) * minor refactor in other options by pushing common code into functions * consolidate coll_filter * more normalizing loader * dont pass plugin_loader, its global import * Also dump roles and collections * adjusted tests to new err msg * disable namespace filter (unused)
104
0
78,449
12
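A stand-alone sketch of the search-path ordering the docstring describes; the paths are hypothetical and `context.CLIARGS` is replaced by plain function arguments.

import os

def get_roles_path(roles_path, basedir=None):
    # Order: <basedir>/roles (if it exists), then the configured roles_path,
    # then the basedir itself as a last resort.
    if basedir is not None:
        subdir = os.path.join(basedir, "roles")
        if os.path.isdir(subdir):
            roles_path = (subdir,) + roles_path
        roles_path = roles_path + (basedir,)
    return roles_path

print(get_roles_path(("/etc/ansible/roles",), basedir="/home/user/playbooks"))
# ('/etc/ansible/roles', '/home/user/playbooks') when no 'roles' subdir exists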
6
20
def compile_sample(self, batch_size, samples=None, images=None, masks=None): num_images = self._config.get("preview_images", 14) num_images = min(batch_size, num_images) if batch_size is not None else num_images retval = {} for side in ("a", "b"): logger.debug("Compiling samples: (side: '%s', samples: %s)", side, num_images) side_images = images[side] if images is not None else self._target[side] side_masks = masks[side] if masks is not None else self._masks[side] side_samples = samples[side] if samples is not None else self._samples[side] retval[side] = [side_samples[0:num_images], side_images[0:num_images], side_masks[0:num_images]] return retval
plugins/train/trainer/_base.py
225
faceswap
{ "docstring": " Compile the preview samples for display.\n\n Parameters\n ----------\n batch_size: int\n The requested batch size for each training iterations\n samples: dict, optional\n Dictionary for side \"a\", \"b\" of :class:`numpy.ndarray`. The sample images that should\n be used for creating the preview. If ``None`` then the samples will be generated from\n the internal random image generator. Default: ``None``\n images: dict, optional\n Dictionary for side \"a\", \"b\" of :class:`numpy.ndarray`. The target images that should\n be used for creating the preview. If ``None`` then the targets will be generated from\n the internal random image generator. Default: ``None``\n masks: dict, optional\n Dictionary for side \"a\", \"b\" of :class:`numpy.ndarray`. The masks that should be used\n for creating the preview. If ``None`` then the masks will be generated from the\n internal random image generator. Default: ``None``\n\n Returns\n -------\n list\n The list of samples, targets and masks as :class:`numpy.ndarrays` for creating a\n preview image\n ", "language": "en", "n_whitespaces": 349, "n_words": 145, "vocab_size": 58 }
74
Python
48
c1512fd41d86ef47a5d1ce618d6d755ef7cbacdf
_base.py
100,396
13
153
compile_sample
https://github.com/deepfakes/faceswap.git
Update code to support Tensorflow versions up to 2.8 (#1213) * Update maximum tf version in setup + requirements * - bump max version of tf version in launcher - standardise tf version check * update keras get_custom_objects for tf>2.6 * bugfix: force black text in GUI file dialogs (linux) * dssim loss - Move to stock tf.ssim function * Update optimizer imports for compatibility * fix logging for tf2.8 * Fix GUI graphing for TF2.8 * update tests * bump requirements.txt versions * Remove limit on nvidia-ml-py * Graphing bugfixes - Prevent live graph from displaying if data not yet available * bugfix: Live graph. Collect loss labels correctly * fix: live graph - swallow inconsistent loss errors * Bugfix: Prevent live graph from clearing during training * Fix graphing for AMD
225
0
19,880
11
1
19
def test_delete_alias_admin(self): # Create an alias from a different user. self._create_alias(self.test_user) # Delete the user's alias as the admin. result = self.get_success( self.handler.delete_association( create_requester(self.admin_user), self.room_alias ) ) self.assertEqual(self.room_id, result) # Confirm the alias is gone. self.get_failure( self.handler.get_association(self.room_alias), synapse.api.errors.SynapseError, )
tests/handlers/test_directory.py
117
synapse
{ "docstring": "A server admin can delete an alias created by another user.", "language": "en", "n_whitespaces": 10, "n_words": 11, "vocab_size": 11 }
39
Python
31
02d708568b476f2f7716000b35c0adfa4cbd31b3
test_directory.py
246,741
12
72
test_delete_alias_admin
https://github.com/matrix-org/synapse.git
Replace assertEquals and friends with non-deprecated versions. (#12092)
168
0
71,337
13
6
16
def _validate_resource_path(path): invalid = ( os.path.pardir in path.split(posixpath.sep) or posixpath.isabs(path) or ntpath.isabs(path) ) if not invalid: return msg = "Use of .. or absolute path in a resource path is not allowed." # Aggressively disallow Windows absolute paths if ntpath.isabs(path) and not posixpath.isabs(path): raise ValueError(msg) # for compatibility, warn; in future # raise ValueError(msg) warnings.warn( msg[:-1] + " and will raise exceptions in a future release.", DeprecationWarning, stacklevel=4, )
.venv/lib/python3.8/site-packages/pip/_vendor/pkg_resources/__init__.py
151
transferlearning
{ "docstring": "\n Validate the resource paths according to the docs.\n https://setuptools.readthedocs.io/en/latest/pkg_resources.html#basic-resource-access\n\n >>> warned = getfixture('recwarn')\n >>> warnings.simplefilter('always')\n >>> vrp = NullProvider._validate_resource_path\n >>> vrp('foo/bar.txt')\n >>> bool(warned)\n False\n >>> vrp('../foo/bar.txt')\n >>> bool(warned)\n True\n >>> warned.clear()\n >>> vrp('/foo/bar.txt')\n >>> bool(warned)\n True\n >>> vrp('foo/../../bar.txt')\n >>> bool(warned)\n True\n >>> warned.clear()\n >>> vrp('foo/f../bar.txt')\n >>> bool(warned)\n False\n\n Windows path separators are straight-up disallowed.\n >>> vrp(r'\\\\foo/bar.txt')\n Traceback (most recent call last):\n ...\n ValueError: Use of .. or absolute path in a resource path \\\nis not allowed.\n\n >>> vrp(r'C:\\\\foo/bar.txt')\n Traceback (most recent call last):\n ...\n ValueError: Use of .. or absolute path in a resource path \\\nis not allowed.\n\n Blank values are allowed\n\n >>> vrp('')\n >>> bool(warned)\n False\n\n Non-string values are not.\n\n >>> vrp(None)\n Traceback (most recent call last):\n ...\n AttributeError: ...\n ", "language": "en", "n_whitespaces": 409, "n_words": 123, "vocab_size": 58 }
69
Python
48
f638f5d0e6c8ebed0e69a6584bc7f003ec646580
__init__.py
63,033
16
87
_validate_resource_path
https://github.com/jindongwang/transferlearning.git
upd; format
234
0
13,106
13
5
10
def _round(self, places, rounding): if places <= 0: raise ValueError("argument should be at least 1 in _round") if self._is_special or not self: return Decimal(self) ans = self._rescale(self.adjusted()+1-places, rounding) # it can happen that the rescale alters the adjusted exponent; # for example when rounding 99.97 to 3 significant figures. # When this happens we end up with an extra 0 at the end of # the number; a second rescale fixes this. if ans.adjusted() != self.adjusted(): ans = ans._rescale(ans.adjusted()+1-places, rounding) return ans
python3.10.4/Lib/_pydecimal.py
141
XX-Net
{ "docstring": "Round a nonzero, nonspecial Decimal to a fixed number of\n significant figures, using the given rounding mode.\n\n Infinities, NaNs and zeros are returned unaltered.\n\n This operation is quiet: it raises no flags, and uses no\n information from the context.\n\n ", "language": "en", "n_whitespaces": 74, "n_words": 39, "vocab_size": 35 }
82
Python
66
8198943edd73a363c266633e1aa5b2a9e9c9f526
_pydecimal.py
219,783
9
84
_round
https://github.com/XX-net/XX-Net.git
add python 3.10.4 for windows
185
0
55,798
14
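The carry case mentioned in the comments above (rounding 99.97 to 3 significant figures) can be observed through the public decimal API with a 3-digit context; the repr shown in the comment is my expectation, not taken from the record.

from decimal import Decimal, Context, ROUND_HALF_EVEN

ctx = Context(prec=3, rounding=ROUND_HALF_EVEN)
x = Decimal("99.97")
print(ctx.create_decimal(x))   # expected Decimal('100'): the carry 99.97 -> 100
                               # bumps the adjusted exponent, which is why the
                               # private _round above rescales a second time.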
1
5
def exterior_angle(self): return 2*S.Pi/self._n
sympy/geometry/polygon.py
28
sympy
{ "docstring": "Measure of the exterior angles.\n\n Returns\n =======\n\n exterior_angle : number\n\n See Also\n ========\n\n sympy.geometry.line.LinearEntity.angle_between\n\n Examples\n ========\n\n >>> from sympy import RegularPolygon, Point\n >>> rp = RegularPolygon(Point(0, 0), 4, 8)\n >>> rp.exterior_angle\n pi/4\n\n ", "language": "en", "n_whitespaces": 123, "n_words": 32, "vocab_size": 29 }
4
Python
4
498015021131af4dbb07eb110e5badaba8250c7b
polygon.py
196,294
2
16
exterior_angle
https://github.com/sympy/sympy.git
Updated import locations
18
0
47,794
8
1
24
def test_context_for_resolved_crash_rate_alert(self): status = TriggerStatus.RESOLVED incident = self.create_incident() alert_rule = self.create_alert_rule( aggregate="percentage(sessions_crashed, sessions) AS _crash_rate_alert_aggregate", threshold_type=AlertRuleThresholdType.BELOW, query="", ) alert_rule_trigger = self.create_alert_rule_trigger(alert_rule) action = self.create_alert_rule_trigger_action( alert_rule_trigger=alert_rule_trigger, triggered_for_incident=incident ) generated_email_context = generate_incident_trigger_email_context( self.project, incident, action.alert_rule_trigger, status, IncidentStatus.CLOSED ) assert generated_email_context["aggregate"] == "percentage(sessions_crashed, sessions)" assert generated_email_context["threshold"] == 100 assert generated_email_context["threshold_direction_string"] == ">"
tests/sentry/incidents/test_action_handlers.py
168
sentry
{ "docstring": "\n Test that ensures the resolved notification contains the correct threshold string\n ", "language": "en", "n_whitespaces": 26, "n_words": 11, "vocab_size": 10 }
49
Python
38
146fba432a32568be7d0b884dae0c39a6c33a11f
test_action_handlers.py
96,409
18
102
test_context_for_resolved_crash_rate_alert
https://github.com/getsentry/sentry.git
fix(metric_alerts): Make sure critical triggers resolve properly when no action is set on a warning trigger (#31883) ### Problem If we have an alert set up like: - Warning: 50. Action: None - Critical: 100. Action: Slack Then if we go from critical -> warning state the slack resolve action will fail to fire. ### Cause The reason this happens is related to a previous fix. For an alert like - Warning: 50. Action: Slack - Critical: 100. Action: Slack When going from critical -> warning the critical action would be marked as resolved. This would cause a slack notification with `Resolved` to be sent to the channel. This is misleading, because the alert is still active, just in the warning state. What we want here is to fire a warning notification instead. The initial fix for this was that when we resolved a critical trigger, we’d check and see whether there was an active warning trigger. If so, we’d send a warning trigger fire to our actions, rather than a critical trigger resolve. This works ok for many cases, but fails when the actions on the warning trigger are different to those on the critical trigger. ### Fix Substituting the warning trigger for the critical trigger causes us subtle bugs. So, instead of this, when triggering fires/resolves on our action handlers we will also pass along the incident state change that the trigger/resolve caused the incident to go into. So if a critical trigger resolves, we check what state it would have put the incident in. If there’s a warning trigger, then the state is warning. If no warning trigger, the state is closed. This state is then used to appropriately generate the messages that we send to users via our various actions. So now, If we have an alert set up like: - Warning: 50. Action: None - Critical: 100. Action: Slack If this goes from - critical -> warning OR critical -> resolved we will send `IncidentStatus.WARNING` to any actions related to the critical trigger. - warning -> resolved We do nothing since there are no actions on the warning trigger If we have an alert set up like: - Warning: 50. Action: Slack - Critical: 100. Action: Slack If this goes from: - critical -> warning: critical trigger, `IncidentStatus.Warning` - warning -> resolved: warning trigger, `IncidentStatus.Closed` - critical -> resolved: Since we de-dupe triggers to avoid spamming the user, we will select the warning trigger here, and send `IncidentStatus.closed` If we have an alert set up like: - Warning: 50. Action: Slack - Critical: 100. Action: Pagerduty If this goes from: - critical -> warning: critical trigger, `IncidentStatus.Warning` sent to Pagerduty. Nothing sent to Slack - warning -> resolved: warning trigger, `IncidentStatus.Closed` sent to Slack. Nothing sent to Pagerduty - critical -> resolved: Critical trigger, `IncidentStatus.Warning` sent to Pagerduty. Warning trigger, `IncidentStatus.Closed` sent to Slack. We don’t de-dupe here since the actions are different.
195
0
19,309
10
1
6
def test_causal_lm_model_past_with_attn_mask(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_causal_lm_model_past_with_attn_mask(*config_and_inputs)
templates/adding_a_new_model/cookiecutter-template-{{cookiecutter.modelname}}/test_modeling_tf_{{cookiecutter.lowercase_modelname}}.py
43
transformers
{ "docstring": "Test the causal LM model with `past_key_values` and `attention_mask`", "language": "en", "n_whitespaces": 8, "n_words": 9, "vocab_size": 9 }
6
Python
6
8635407bc724c45142c1f91dbc9ef3ea681e1a56
test_modeling_tf_{{cookiecutter.lowercase_modelname}}.py
35,529
3
24
test_causal_lm_model_past_with_attn_mask
https://github.com/huggingface/transformers.git
Fix tf.concatenate + test past_key_values for TF models (#15774) * fix wrong method name tf.concatenate * add tests related to causal LM / decoder * make style and quality * clean-up * Fix TFBertModel's extended_attention_mask when past_key_values is provided * Fix tests * fix copies * More tf.int8 -> tf.int32 in TF test template * clean-up * Update TF test template * revert the previous commit + update the TF test template * Fix TF template extended_attention_mask when past_key_values is provided * Fix some styles manually * clean-up * Fix ValueError: too many values to unpack in the test * Fix more: too many values to unpack in the test * Add a comment for extended_attention_mask when there is past_key_values * Fix TFElectra extended_attention_mask when past_key_values is provided * Add tests to other TF models * Fix for TF Electra test: add prepare_config_and_inputs_for_decoder * Fix not passing training arg to lm_head in TFRobertaForCausalLM * Fix tests (with past) for TF Roberta * add testing for pask_key_values for TFElectra model Co-authored-by: ydshieh <[email protected]>
27
0
6,472
9
4
11
def should_show(self, securitymanager) -> bool: if self.roles: user_roles = {r.name for r in securitymanager.current_user.roles} if not user_roles.intersection(set(self.roles)): return False return True
airflow/www/utils.py
77
airflow
{ "docstring": "Determine if the user should see the message based on their role membership", "language": "en", "n_whitespaces": 12, "n_words": 13, "vocab_size": 12 }
21
Python
19
3e9828022b03b60d9e112f1f64340a528c8407e3
utils.py
44,455
7
48
should_show
https://github.com/apache/airflow.git
Simplify fab has access lookup (#19294) * Use FAB models. * Remove incorrect conversions to new permission naming scheme. * Fix missing FAB renames. * Remove unused FAB compatibility fixes in models.py. * Set perms directly on user objects. * Set perms properties on User model. * Rename missed old naming scheme conversion. * Remove unused imports. * Remove unused imports. * Remeve get_user_roles() method. * Make permissions eagerload. * Remove unused imports. * Clarify query params. * Modify sort logic so MSSQL passes. * Add text modifier to order_by values. * Remove calls to get_*_dags. * Add back execution_date * Add back comma to match rest of file. * Remove unused permission functions. * Fix failing tests. * Pass user object to current_app.appbuilder.sm.has_all_dags_access. * Remove attempts to fix query. * Update the api_connexion query builders. * Add typing. * Apply sorts directly to model objects. * Apply sorts directly to model objects. * Standardize custom sort code. * Code review * Augment xcom docs (#20755) * Fix relationship join bug in FAB/SecurityManager with SQLA 1.4 (#21296) This is fixed in SQLA 1.4.19, but the fix makes the intent clearer here anyway. * Docs: Fix task order in overview example (#21282) * Update stat_name_handler documentation (#21298) Previously stat_name_handler was under the scheduler section of the configuration but it was moved to the metrics section since 2.0.0. * Update recipe for Google Cloud SDK (#21268) * Use FAB models. * Remove incorrect conversions to new permission naming scheme. * Fix missing FAB renames. * Remove unused FAB compatibility fixes in models.py. * Set perms directly on user objects. * Set perms properties on User model. * Rename missed old naming scheme conversion. * Remove unused imports. * Remove unused imports. * Remeve get_user_roles() method. * Make permissions eagerload. * Remove unused imports. * Clarify query params. * Modify sort logic so MSSQL passes. * Add text modifier to order_by values. * Remove calls to get_*_dags. * Add back execution_date * Add back comma to match rest of file. * Remove unused permission functions. * Fix failing tests. * Pass user object to current_app.appbuilder.sm.has_all_dags_access. * Remove attempts to fix query. * Update the api_connexion query builders. * Add typing. * Apply sorts directly to model objects. * Apply sorts directly to model objects. * Standardize custom sort code. * Make sure joined fields prefetch. * Dont use cached_property, since its only on > 3.8. Co-authored-by: Ash Berlin-Taylor <[email protected]> Co-authored-by: Lewis John McGibbney <[email protected]> Co-authored-by: Ash Berlin-Taylor <[email protected]> Co-authored-by: Lucia Kasman <[email protected]> Co-authored-by: Fran Sánchez <[email protected]> Co-authored-by: Kamil Breguła <[email protected]>
79
0
8,273
13
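The role check above boils down to a set intersection; a self-contained sketch with stand-in objects (no Flask-AppBuilder security manager, hypothetical role names) looks roughly like this.

from types import SimpleNamespace

def should_show(message_roles, current_user):
    # Show the message only if the user holds at least one of the required roles.
    if message_roles:
        user_roles = {r.name for r in current_user.roles}
        if not user_roles.intersection(set(message_roles)):
            return False
    return True

user = SimpleNamespace(roles=[SimpleNamespace(name="Viewer")])
print(should_show({"Admin", "Op"}, user))   # False
print(should_show(None, user))              # True (no restriction)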