Schema of the dataset preview. For int64 features, min/max are the observed value range; for string features, min/max are the observed string-length range.

| feature | dtype | min | max |
|---|---|---|---|
| id | int64 | 20 | 338k |
| vocab_size | int64 | 2 | 671 |
| ast_levels | int64 | 4 | 32 |
| nloc | int64 | 1 | 451 |
| n_ast_nodes | int64 | 12 | 5.6k |
| n_identifiers | int64 | 1 | 186 |
| n_ast_errors | int64 | 0 | 10 |
| n_words | int64 | 2 | 2.17k |
| n_whitespaces | int64 | 2 | 13.8k |
| fun_name | string | 2 | 73 |
| commit_message | string | 51 | 15.3k |
| url | string | 31 | 59 |
| code | string | 51 | 31k |
| ast_errors | string | 0 | 1.46k |
| token_counts | int64 | 6 | 3.32k |
| file_name | string | 5 | 56 |
| language | string | — | — |
| path | string | 7 | 134 |
| commit_id | string | 40 | 40 |
| repo | string | 3 | 28 |
| complexity | int64 | 1 | 153 |

`language` is a categorical string feature with a single class; every row in the preview below is Python. `commit_id` is always a 40-character git SHA.
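
Rows with this schema can be consumed programmatically with the Hugging Face `datasets` library. The sketch below is illustrative only: `user/python-commit-functions` is a hypothetical placeholder, since this preview does not name the dataset; the `load_dataset`, `features`, and `filter` calls themselves are standard API.

```python
# Minimal sketch of loading a dataset with the schema above.
# NOTE: "user/python-commit-functions" is a hypothetical placeholder id --
# the real dataset identifier is not shown in this preview.
from datasets import load_dataset

ds = load_dataset("user/python-commit-functions", split="train")
print(ds.features)  # feature names and dtypes, matching the table above

row = ds[0]
print(row["fun_name"], row["repo"], row["complexity"])

# Example query: keep straight-line functions (cyclomatic complexity 1)
# that parsed cleanly (no AST errors).
simple = ds.filter(lambda r: r["complexity"] == 1 and r["n_ast_errors"] == 0)
print(len(simple))
```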

Preview rows:

id: 185,436 | vocab_size: 4 | ast_levels: 8 | nloc: 2 | n_ast_nodes: 23 | n_identifiers: 2 | n_ast_errors: 0 | n_words: 4 | n_whitespaces: 10
fun_name: test_checkboxes
commit_message:
Checkbox polishing + fix auto-width in Horizontal layout (#942) * checkbox widget * fixes * Checkbox additions, fix content width in horizontal layout * Update docs, add tests for checkbox * Remove some test code * Small renaming of test class Co-authored-by: Will McGugan <[email protected]>
url: https://github.com/Textualize/textual.git
code:
```python
def test_checkboxes(snap_compare):
    assert snap_compare("docs/examples/widgets/checkbox.py")
```
token_counts: 11 | file_name: test_snapshots.py | language: Python | path: tests/snapshot_tests/test_snapshots.py
commit_id: 4a0dc49bca6fdae8880c105ccf816180e65dc2a6 | repo: textual | complexity: 1
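
The per-function counts in these records (`nloc`, `token_counts`, `complexity`) are standard code metrics. As an illustration, assuming lizard-compatible definitions (an assumption — the preview does not say which tool produced the counters), the first record's numbers can be recomputed with the `lizard` package:

```python
# Sketch: recomputing per-function metrics for the first record with the
# `lizard` code-metrics tool. Assumption: the dataset's counters use
# lizard-compatible definitions; this is not stated in the preview.
import lizard

source = '''def test_checkboxes(snap_compare):
    assert snap_compare("docs/examples/widgets/checkbox.py")
'''

info = lizard.analyze_file.analyze_source_code("test_snapshots.py", source)
fn = info.function_list[0]
# Compare with the record above: nloc: 2, token_counts: 11, complexity: 1
print(fn.name, fn.nloc, fn.token_count, fn.cyclomatic_complexity)
```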

id: 95,593 | vocab_size: 26 | ast_levels: 9 | nloc: 9 | n_ast_nodes: 92 | n_identifiers: 13 | n_ast_errors: 0 | n_words: 28 | n_whitespaces: 92
fun_name: send
commit_message:
feat(notifications): Nudge Notifications (#30409) Co-authored-by: getsantry[bot] <66042841+getsantry[bot]@users.noreply.github.com>
url: https://github.com/getsentry/sentry.git
code:
```python
def send(self) -> None:
    from sentry.notifications.notify import notify

    participants_by_provider = self.get_participants()
    if not participants_by_provider:
        return

    context = self.get_context()
    for provider, recipients in participants_by_provider.items():
        safe_execute(notify, provider, self, recipients, context)
```
token_counts: 58 | file_name: base.py | language: Python | path: src/sentry/notifications/notifications/base.py
commit_id: 2a5e2fd78a7e1a963e2827f90c64f353928d79b4 | repo: sentry | complexity: 3

id: 153,895 | vocab_size: 24 | ast_levels: 14 | nloc: 15 | n_ast_nodes: 127 | n_identifiers: 18 | n_ast_errors: 0 | n_words: 33 | n_whitespaces: 238
fun_name: binary_operation
commit_message:
PERF-#4182, FIX-#4059: Add cell-wise execution for binary ops, fix bin ops for empty dataframes (#4391) Signed-off-by: Alexey Prutskov <[email protected]>
url: https://github.com/modin-project/modin.git
code:
```python
def binary_operation(cls, left, func, right):
    [part.drain_call_queue() for part in right.flatten()]
    func = cls.preprocess_func(func)
    return np.array(
        [
            [
                part.apply(
                    func,
                    right[row_idx][col_idx]._data,
                )
                for col_idx, part in enumerate(left[row_idx])
            ]
            for row_idx in range(len(left))
        ]
    )
```
token_counts: 84 | file_name: partition_manager.py | language: Python | path: modin/core/dataframe/pandas/partitioning/partition_manager.py
commit_id: 8f35ab57996d18f2c1f001d624682952853454cd | repo: modin | complexity: 4

id: 293,362 | vocab_size: 8 | ast_levels: 6 | nloc: 3 | n_ast_nodes: 25 | n_identifiers: 4 | n_ast_errors: 0 | n_words: 8 | n_whitespaces: 22
fun_name: discovery_hash
commit_message:
Add MQTT notify platform (#64728) * Mqtt Notify service draft * fix updates * Remove TARGET config parameter * do not use protected attributes * complete tests * device support for auto discovery * Add targets attribute and support for data param * Add tests and resolve naming issues * CONF_COMMAND_TEMPLATE from .const * Use mqtt as default service name * make sure service has a unique name * pylint error * fix type error * Conditional device removal and test * Improve tests * update description has_notify_services() * Use TypedDict for service config * casting- fix discovery - hass.data * cleanup * move MqttNotificationConfig after the schemas * fix has_notify_services * do not test log for reg update * Improve casting types * Simplify obtaining the device_id Co-authored-by: Erik Montnemery <[email protected]> * await not needed Co-authored-by: Erik Montnemery <[email protected]> * Improve casting types and naming * cleanup_device_registry signature change and black * remove not needed condition Co-authored-by: Erik Montnemery <[email protected]>
url: https://github.com/home-assistant/core.git
code:
```python
def discovery_hash(self) -> tuple | None:
    return self._discovery_hash
```
token_counts: 14 | file_name: notify.py | language: Python | path: homeassistant/components/mqtt/notify.py
commit_id: e574a3ef1d855304b2a78c389861c421b1548d74 | repo: core | complexity: 1

id: 293,993 | vocab_size: 9 | ast_levels: 6 | nloc: 9 | n_ast_nodes: 28 | n_identifiers: 5 | n_ast_errors: 0 | n_words: 10 | n_whitespaces: 24
fun_name: in_progress
commit_message:
Add update entity platform (#68248) Co-authored-by: Glenn Waters <[email protected]>
url: https://github.com/home-assistant/core.git
code:
```python
def in_progress(self) -> bool | int | None:
    return self._attr_in_progress
```
token_counts: 16 | file_name: __init__.py | language: Python | path: homeassistant/components/update/__init__.py
commit_id: 073fb40b79cf8aa06790fdceb23b6857db888c99 | repo: core | complexity: 1

id: 257,868 | vocab_size: 89 | ast_levels: 17 | nloc: 25 | n_ast_nodes: 556 | n_identifiers: 26 | n_ast_errors: 0 | n_words: 153 | n_whitespaces: 369
fun_name: get_type
commit_message:
Classify pipeline's type based on its components (#3132) * Add pipeline get_type mehod * Add pipeline uptime * Add pipeline telemetry event sending * Send pipeline telemetry once a day (at most) * Add pipeline invocation counter, change invocation counter logic * Update allowed telemetry parameters - allow pipeline parameters * PR review: add unit test
url: https://github.com/deepset-ai/haystack.git
code:
```python
def get_type(self) -> str:
    # values of the dict are functions evaluating whether components of this pipeline match the pipeline type
    # specified by dict keys
    pipeline_types = {
        "GenerativeQAPipeline": lambda x: {"Generator", "Retriever"} <= set(x.keys()),
        "FAQPipeline": lambda x: {"Docs2Answers"} <= set(x.keys()),
        "ExtractiveQAPipeline": lambda x: {"Reader", "Retriever"} <= set(x.keys()),
        "SearchSummarizationPipeline": lambda x: {"Retriever", "Summarizer"} <= set(x.keys()),
        "TranslationWrapperPipeline": lambda x: {"InputTranslator", "OutputTranslator"} <= set(x.keys()),
        "RetrieverQuestionGenerationPipeline": lambda x: {"Retriever", "QuestionGenerator"} <= set(x.keys()),
        "QuestionAnswerGenerationPipeline": lambda x: {"QuestionGenerator", "Reader"} <= set(x.keys()),
        "DocumentSearchPipeline": lambda x: {"Retriever"} <= set(x.keys()),
        "QuestionGenerationPipeline": lambda x: {"QuestionGenerator"} <= set(x.keys()),
        "MostSimilarDocumentsPipeline": lambda x: len(x.values()) == 1
        and isinstance(list(x.values())[0], BaseDocumentStore),
    }
    retrievers = [type(comp).__name__ for comp in self.components.values() if isinstance(comp, BaseRetriever)]
    doc_stores = [type(comp).__name__ for comp in self.components.values() if isinstance(comp, BaseDocumentStore)]

    pipeline_type = next(
        (p_type for p_type, eval_f in pipeline_types.items() if eval_f(self.components)), "Unknown pipeline"
    )
    retrievers_used = retrievers if retrievers else "None"
    doc_stores_used = doc_stores if doc_stores else "None"
    return f"{pipeline_type} (retriever: {retrievers_used}, doc_store: {doc_stores_used})"
```
token_counts: 317 | file_name: base.py | language: Python | path: haystack/pipelines/base.py
commit_id: 938e6fda5b686ec49c52cb23f786a74d9321e048 | repo: haystack | complexity: 10

id: 290,236 | vocab_size: 8 | ast_levels: 9 | nloc: 3 | n_ast_nodes: 35 | n_identifiers: 6 | n_ast_errors: 0 | n_words: 8 | n_whitespaces: 22
fun_name: available
commit_message:
Bump nexia to 2.0.6 (#81474) * Bump nexia to 2.0.6 - Marks thermostat unavailable when it is offline * is property
url: https://github.com/home-assistant/core.git
code:
```python
def available(self) -> bool:
    return super().available and self._thermostat.is_online
```
token_counts: 20 | file_name: entity.py | language: Python | path: homeassistant/components/nexia/entity.py
commit_id: b313f3794692fd5edd97e5637efabb1efeeff14d | repo: core | complexity: 2

id: 168,150 | vocab_size: 13 | ast_levels: 12 | nloc: 5 | n_ast_nodes: 46 | n_identifiers: 8 | n_ast_errors: 0 | n_words: 13 | n_whitespaces: 27
fun_name: _disabled
commit_message:
TYP: annotate functions that always error with NoReturn (#48002)
url: https://github.com/pandas-dev/pandas.git
code:
```python
def _disabled(self, *args, **kwargs) -> NoReturn:
    raise TypeError(f"'{type(self).__name__}' does not support mutable operations.")
```
token_counts: 20 | file_name: frozen.py | language: Python | path: pandas/core/indexes/frozen.py
commit_id: 6ba2a67556526db2e5b0b60a566b5f2039cf4a46 | repo: pandas | complexity: 1

id: 133,141 | vocab_size: 2 | ast_levels: 6 | nloc: 18 | n_ast_nodes: 13 | n_identifiers: 2 | n_ast_errors: 0 | n_words: 2 | n_whitespaces: 5
fun_name: test_multiple_callbacks
commit_message:
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
url: https://github.com/ray-project/ray.git
code (truncated in the preview):
```python
def test_multiple_callbacks(ray_start_1_cpu):
```
token_counts: 98 | file_name: test_dask_callback.py | language: Python | path: python/ray/util/dask/tests/test_dask_callback.py
commit_id: 7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065 | repo: ray | complexity: 1

id: 130,503 | vocab_size: 69 | ast_levels: 13 | nloc: 22 | n_ast_nodes: 162 | n_identifiers: 17 | n_ast_errors: 0 | n_words: 87 | n_whitespaces: 384
fun_name: destroy_autoscaler_workers
commit_message:
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
url: https://github.com/ray-project/ray.git
code:
```python
def destroy_autoscaler_workers(self):
    if self.autoscaler is None:
        return  # Nothing to clean up.
    if self.autoscaling_config is None:
        # This is a logic error in the program. Can't do anything.
        logger.error("Monitor: Cleanup failed due to lack of autoscaler config.")
        return
    logger.info("Monitor: Exception caught. Taking down workers...")
    clean = False
    while not clean:
        try:
            teardown_cluster(
                config_file=self.autoscaling_config,
                yes=True,  # Non-interactive.
                workers_only=True,  # Retain head node for logs.
                override_cluster_name=None,
                keep_min_workers=True,  # Retain minimal amount of workers.
            )
            clean = True
            logger.info("Monitor: Workers taken down.")
        except Exception:
            logger.error("Monitor: Cleanup exception. Trying again...")
            time.sleep(2)
```
token_counts: 92 | file_name: monitor.py | language: Python | path: python/ray/autoscaler/_private/monitor.py
commit_id: 7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065 | repo: ray | complexity: 5

id: 278,879 | vocab_size: 28 | ast_levels: 11 | nloc: 4 | n_ast_nodes: 51 | n_identifiers: 6 | n_ast_errors: 0 | n_words: 33 | n_whitespaces: 55
fun_name: _policy_equivalent_to_dtype
commit_message:
Remove pylint comments. PiperOrigin-RevId: 452353044
url: https://github.com/keras-team/keras.git
code:
```python
def _policy_equivalent_to_dtype(policy):
    # We use type() instead of isinstance because a subclass of Policy is never
    # equivalent to a dtype.
    return type(policy) == Policy and (
        policy.name == "_infer" or _is_convertible_to_dtype(policy.name)
    )
```
token_counts: 28 | file_name: policy.py | language: Python | path: keras/mixed_precision/policy.py
commit_id: 3613c3defc39c236fb1592c4f7ba1a9cc887343a | repo: keras | complexity: 3

id: 79,828 | vocab_size: 10 | ast_levels: 10 | nloc: 4 | n_ast_nodes: 50 | n_identifiers: 5 | n_ast_errors: 0 | n_words: 11 | n_whitespaces: 43
fun_name: _get_image_dimensions
commit_message:
feat: use Willow instead of Pillow for images. Override all Django code calling Pillow, so that we can more easily implement SVG support when it lands in Willow.
url: https://github.com/wagtail/wagtail.git
code:
```python
def _get_image_dimensions(self):
    if not hasattr(self, "_dimensions_cache"):
        self._dimensions_cache = self.get_image_dimensions()
    return self._dimensions_cache
```
token_counts: 28 | file_name: models.py | language: Python | path: wagtail/images/models.py
commit_id: 912747f6aeb585a688b9d3536cf7bd7e1c94487c | repo: wagtail | complexity: 2

id: 85,040 | vocab_size: 68 | ast_levels: 16 | nloc: 23 | n_ast_nodes: 223 | n_identifiers: 23 | n_ast_errors: 0 | n_words: 81 | n_whitespaces: 377
fun_name: test_archiving_interrupted
commit_message:
ruff: Fix B017 `assertRaises(Exception):` should be considered evil. Signed-off-by: Anders Kaseorg <[email protected]>
url: https://github.com/zulip/zulip.git
code:
```python
def test_archiving_interrupted(self) -> None:
    expired_msg_ids = self._make_expired_zulip_messages(7)
    expired_usermsg_ids = self._get_usermessage_ids(expired_msg_ids)

    # Insert an exception near the end of the archiving process of a chunk:
    with mock.patch(
        "zerver.lib.retention.delete_messages", side_effect=Exception("delete_messages error")
    ):
        with self.assertRaisesRegex(Exception, r"^delete_messages error$"):
            # Specify large chunk_size to ensure things happen in a single batch
            archive_messages(chunk_size=1000)

        # Archiving code has been executed, but because we got an exception, things should have been rolled back:
        self._verify_archive_data([], [])

        self.assertEqual(
            set(Message.objects.filter(id__in=expired_msg_ids).values_list("id", flat=True)),
            set(expired_msg_ids),
        )
        self.assertEqual(
            set(
                UserMessage.objects.filter(id__in=expired_usermsg_ids).values_list(
                    "id", flat=True
                )
            ),
            set(expired_usermsg_ids),
        )
```
token_counts: 132 | file_name: test_retention.py | language: Python | path: zerver/tests/test_retention.py
commit_id: 033d2615f6614b06c8268fe60c6ee2a37892c204 | repo: zulip | complexity: 1

id: 224,462 | vocab_size: 8 | ast_levels: 6 | nloc: 2 | n_ast_nodes: 24 | n_identifiers: 6 | n_ast_errors: 0 | n_words: 8 | n_whitespaces: 22
fun_name: on_page_markdown
commit_message:
Move plugin events docs into source code + refactor * Create real (no-op) methods for each event in the base class. * Refactor event dispatcher to not check for methods' existence, instead just call them. * Move documentation from Markdown into docstrings of these methods. * Activate the 'mkdocstrings' plugin. * Use 'mkdocstrings' to insert documentation from those docstrings into the site.
url: https://github.com/mkdocs/mkdocs.git
code:
```python
def on_page_markdown(self, markdown, page, config, files):
    return markdown
```
token_counts: 16 | file_name: plugins.py | language: Python | path: mkdocs/plugins.py
commit_id: f79b34d174e41084391868e7b503f5c61b8b1bdf | repo: mkdocs | complexity: 1

id: 259,566 | vocab_size: 6 | ast_levels: 8 | nloc: 2 | n_ast_nodes: 29 | n_identifiers: 4 | n_ast_errors: 0 | n_words: 6 | n_whitespaces: 12
fun_name: completeness_score
commit_message:
DOC Ensure completeness_score passes numpydoc validation (#23016)
url: https://github.com/scikit-learn/scikit-learn.git
code:
```python
def completeness_score(labels_true, labels_pred):
    return homogeneity_completeness_v_measure(labels_true, labels_pred)[1]
```
token_counts: 18 | file_name: _supervised.py | language: Python | path: sklearn/metrics/cluster/_supervised.py
commit_id: 59428da95751fa92d46687850932f8f6ab1b4f3d | repo: scikit-learn | complexity: 1

id: 165,251 | vocab_size: 8 | ast_levels: 13 | nloc: 5 | n_ast_nodes: 44 | n_identifiers: 5 | n_ast_errors: 0 | n_words: 8 | n_whitespaces: 22
fun_name: has_dropped_na
commit_message:
BUG: Fix some cases of groupby(...).transform with dropna=True (#45953)
url: https://github.com/pandas-dev/pandas.git
code:
```python
def has_dropped_na(self) -> bool:
    return bool((self.group_info[0] < 0).any())
```
token_counts: 26 | file_name: ops.py | language: Python | path: pandas/core/groupby/ops.py
commit_id: 1efa4fb9cca4e313b644c66608a08cf768b4ed04 | repo: pandas | complexity: 1

id: 260,585 | vocab_size: 14 | ast_levels: 7 | nloc: 3 | n_ast_nodes: 34 | n_identifiers: 5 | n_ast_errors: 0 | n_words: 14 | n_whitespaces: 42
fun_name: fit
commit_message:
MNT TrucatedSVD uses _validate_parameters (#23987) Co-authored-by: jeremiedbb <[email protected]>
url: https://github.com/scikit-learn/scikit-learn.git
code:
```python
def fit(self, X, y=None):
    # param validation is done in fit_transform
    self.fit_transform(X)
    return self
```
token_counts: 20 | file_name: _truncated_svd.py | language: Python | path: sklearn/decomposition/_truncated_svd.py
commit_id: 7da7ba603d42398c6e7cf89ea5336b8aabac7bae | repo: scikit-learn | complexity: 1

id: 177,959 | vocab_size: 11 | ast_levels: 6 | nloc: 2 | n_ast_nodes: 19 | n_identifiers: 3 | n_ast_errors: 0 | n_words: 11 | n_whitespaces: 20
fun_name: forwards
commit_message:
fix: DEV-2589: Move calculate_stats_all_orgs to rq_workers, swap migration (#2569)
url: https://github.com/heartexlabs/label-studio.git
code:
```python
def forwards(apps, schema_editor):
    # This migration was moved to 0024_manual_migrate_counters_again.py
    return
```
token_counts: 9 | file_name: 0018_manual_migrate_counters.py | language: Python | path: label_studio/tasks/migrations/0018_manual_migrate_counters.py
commit_id: 55f09c794840fcd99659a1ab8f9319560d769495 | repo: label-studio | complexity: 1

id: 147,576 | vocab_size: 10 | ast_levels: 7 | nloc: 14 | n_ast_nodes: 31 | n_identifiers: 3 | n_ast_errors: 0 | n_words: 10 | n_whitespaces: 31
fun_name: callbacks
commit_message:
[RLlib] POC: Config objects instead of dicts (PPO only). (#23491)
url: https://github.com/ray-project/ray.git
code:
```python
def callbacks(self, callbacks_class) -> "TrainerConfig":
    self.callbacks_class = callbacks_class
    return self
```
token_counts: 17 | file_name: trainer_config.py | language: Python | path: rllib/agents/trainer_config.py
commit_id: 2eaa54bd763ae0e63158ae0d939633c804394b78 | repo: ray | complexity: 1

id: 12,673 | vocab_size: 170 | ast_levels: 12 | nloc: 95 | n_ast_nodes: 534 | n_identifiers: 27 | n_ast_errors: 0 | n_words: 294 | n_whitespaces: 874
fun_name: mixin_pod_parser
commit_message:
feat: allow to pass a list of port monitoring to replicas (#4961)
url: https://github.com/jina-ai/jina.git
code:
```python
def mixin_pod_parser(parser, port_monitoring=True):
    gp = add_arg_group(parser, title='Pod')

    gp.add_argument(
        '--runtime-cls',
        type=str,
        default='WorkerRuntime',
        help='The runtime class to run inside the Pod',
    )
    gp.add_argument(
        '--timeout-ready',
        type=int,
        default=600000,
        help='The timeout in milliseconds of a Pod waits for the runtime to be ready, -1 for waiting '
        'forever',
    )
    gp.add_argument(
        '--env',
        action=KVAppendAction,
        metavar='KEY: VALUE',
        nargs='*',
        help='The map of environment variables that are available inside runtime',
    )

    # hidden CLI used for internal only
    gp.add_argument(
        '--shard-id',
        type=int,
        default=0,
        help='defines the shard identifier for the executor. It is used as suffix for the workspace path of the executor`'
        if _SHOW_ALL_ARGS
        else argparse.SUPPRESS,
    )
    gp.add_argument(
        '--pod-role',
        type=PodRoleType.from_string,
        choices=list(PodRoleType),
        default=PodRoleType.WORKER,
        help='The role of this Pod in a Deployment'
        if _SHOW_ALL_ARGS
        else argparse.SUPPRESS,
    )
    gp.add_argument(
        '--noblock-on-start',
        action='store_true',
        default=False,
        help='If set, starting a Pod/Deployment does not block the thread/process. It then relies on '
        '`wait_start_success` at outer function for the postpone check.'
        if _SHOW_ALL_ARGS
        else argparse.SUPPRESS,
    )
    gp.add_argument(
        '--shards',
        type=int,
        default=1,
        help='The number of shards in the deployment running at the same time. For more details check '
        'https://docs.jina.ai/fundamentals/flow/create-flow/#complex-flow-topologies',
    )
    gp.add_argument(
        '--replicas',
        type=int,
        default=1,
        help='The number of replicas in the deployment',
    )
    gp.add_argument(
        '--port',
        type=int,
        default=helper.random_port(),
        help='The port for input data to bind to, default is a random port between [49152, 65535]',
    )
    gp.add_argument(
        '--monitoring',
        action='store_true',
        default=False,
        help='If set, spawn an http server with a prometheus endpoint to expose metrics',
    )
    if port_monitoring:
        gp.add_argument(
            '--port-monitoring',
            type=int,
            default=helper.random_port(),
            dest='port_monitoring',
            help=f'The port on which the prometheus server is exposed, default is a random port between [49152, 65535]',
        )
    gp.add_argument(
        '--retries',
        type=int,
        default=-1,
        dest='retries',
        help=f'Number of retries per gRPC call. If <0 it defaults to max(3, num_replicas)',
    )
    gp.add_argument(
        '--floating',
        action='store_true',
        default=False,
        help='If set, the current Pod/Deployment can not be further chained, '
        'and the next `.add()` will chain after the last Pod/Deployment not this current one.',
    )
```
token_counts: 326 | file_name: pod.py | language: Python | path: jina/parsers/orchestrate/pod.py
commit_id: 27a3f942c7f228f072c35832aa9e4fb1d30a6118 | repo: jina | complexity: 5

id: 135,648 | vocab_size: 16 | ast_levels: 14 | nloc: 10 | n_ast_nodes: 61 | n_identifiers: 10 | n_ast_errors: 0 | n_words: 16 | n_whitespaces: 48
fun_name: ignore_ray_errors
commit_message:
Refactor ActorManager to store underlying remote actors in dict. (#29953) Signed-off-by: Jun Gong <[email protected]>
url: https://github.com/ray-project/ray.git
code:
```python
def ignore_ray_errors(self) -> Iterator[ResultOrError]:
    return self._Iterator(
        [r for r in self.result_or_errors if not isinstance(r.get(), RayError)]
    )
```
token_counts: 38 | file_name: actor_manager.py | language: Python | path: rllib/utils/actor_manager.py
commit_id: b84dac2609bd587c43ed17bb6fa18fb7241a41de | repo: ray | complexity: 3

id: 250,754 | vocab_size: 16 | ast_levels: 9 | nloc: 14 | n_ast_nodes: 65 | n_identifiers: 7 | n_ast_errors: 0 | n_words: 20 | n_whitespaces: 92
fun_name: load
commit_message:
Rotate stream files (#5097) * Example addon for saving streamed data including a small bug fix to make it work. * Revert "Example addon for saving streamed data including a small bug fix to make it work." This reverts commit 02ab78def9a52eaca1a89d0757cd9475ce250eaa. * Add support for rotating stream files every hour or day * Added tests * Modified to change the stream file every time the formating string changes as time moves on. * Update to more compact version * simplify save addon logic * make mypy happy * fix compatibility with Python 3.8 Co-authored-by: Maximilian Hils <[email protected]>
url: https://github.com/mitmproxy/mitmproxy.git
code:
```python
def load(self, loader):
    loader.add_option(
        "save_stream_file",
        typing.Optional[str],
        None,
    )
    loader.add_option(
        "save_stream_filter",
        typing.Optional[str],
        None,
        "Filter which flows are written to file.",
    )
```
token_counts: 41 | file_name: save.py | language: Python | path: mitmproxy/addons/save.py
commit_id: 3a5550a09cd40d76acfe71aa45c7a8309525ad51 | repo: mitmproxy | complexity: 1

id: 269,121 | vocab_size: 64 | ast_levels: 17 | nloc: 14 | n_ast_nodes: 159 | n_identifiers: 19 | n_ast_errors: 0 | n_words: 86 | n_whitespaces: 156
fun_name: validate_per_replica_inputs
commit_message:
Rework a test to avoid instantiating DistributedValues directly. PiperOrigin-RevId: 438824819
url: https://github.com/keras-team/keras.git
code:
```python
def validate_per_replica_inputs(distribution_strategy, x):
    # Convert the inputs and targets into a list of PerReplica objects.
    per_replica_list = tf.nest.flatten(x)
    x_values_list = []
    for x in per_replica_list:
        # At this point x should contain only tensors.
        x_values = distribution_strategy.unwrap(x)
        for value in x_values:
            if not tf.is_tensor(value):
                raise ValueError('Dataset input to the model should be tensors instead '
                                 'they are of type {}'.format(type(value)))

        if not tf.executing_eagerly():
            # Validate that the shape and dtype of all the elements in x are the same.
            validate_all_tensor_shapes(x, x_values)
            validate_all_tensor_types(x, x_values)

        x_values_list.append(x_values[0])
    return x_values_list
```
token_counts: 94 | file_name: distributed_training_utils_v1.py | language: Python | path: keras/distribute/distributed_training_utils_v1.py
commit_id: 2d7dc6080f0824200e317f255e3290da60e0f98a | repo: keras | complexity: 5

id: 156,724 | vocab_size: 33 | ast_levels: 14 | nloc: 10 | n_ast_nodes: 147 | n_identifiers: 12 | n_ast_errors: 0 | n_words: 42 | n_whitespaces: 139
fun_name: transpose
commit_message:
Don't include docs in ``Array`` methods, just refer to module docs (#9244) Co-authored-by: James Bourbeau <[email protected]>
url: https://github.com/dask/dask.git
code:
```python
def transpose(self, *axes):
    from dask.array.routines import transpose

    if not axes:
        axes = None
    elif len(axes) == 1 and isinstance(axes[0], Iterable):
        axes = axes[0]
    if (axes == tuple(range(self.ndim))) or (axes == tuple(range(-self.ndim, 0))):
        # no transpose necessary
        return self
    else:
        return transpose(self, axes=axes)
```
token_counts: 93 | file_name: core.py | language: Python | path: dask/array/core.py
commit_id: 2820bae493a49cb1d0a6e376985c5473b8f04fa8 | repo: dask | complexity: 6

id: 109,425 | vocab_size: 21 | ast_levels: 11 | nloc: 7 | n_ast_nodes: 80 | n_identifiers: 10 | n_ast_errors: 0 | n_words: 23 | n_whitespaces: 84
fun_name: set_connectionstyle
commit_message:
Harmonize docstrings for boxstyle/connectionstyle/arrowstyle. - Rely on `__init_subclass__` to avoid the need for the out-of-order `interpd.update`/`dedent_interpd`. - Use consistent wording for all setters, and add ACCEPTS list in all cases. - Move get_boxstyle right next to set_boxstyle (consistently with the other setters/getters). - Make the type check in the setters consistent in all cases (check for str, not for forcing inheritance from the private _Base). - Support `set_connectionstyle()` as equivalent to `set_connectionstyle(None)`, consistently with the other two setters.
url: https://github.com/matplotlib/matplotlib.git
code:
```python
def set_connectionstyle(self, connectionstyle=None, **kwargs):
    if connectionstyle is None:
        return ConnectionStyle.pprint_styles()
    self._connector = (
        ConnectionStyle(connectionstyle, **kwargs)
        if isinstance(connectionstyle, str) else connectionstyle)
    self.stale = True
```
token_counts: 51 | file_name: patches.py | language: Python | path: lib/matplotlib/patches.py
commit_id: 0dc472b4c7cdcc1e88228988fff17762c90f1cb9 | repo: matplotlib | complexity: 3

id: 258,627 | vocab_size: 10 | ast_levels: 10 | nloc: 3 | n_ast_nodes: 50 | n_identifiers: 9 | n_ast_errors: 0 | n_words: 11 | n_whitespaces: 32
fun_name: _sample_hiddens
commit_message:
ENH Replaced RandomState with Generator compatible calls (#22271)
url: https://github.com/scikit-learn/scikit-learn.git
code:
```python
def _sample_hiddens(self, v, rng):
    p = self._mean_hiddens(v)
    return rng.uniform(size=p.shape) < p
```
token_counts: 31 | file_name: _rbm.py | language: Python | path: sklearn/neural_network/_rbm.py
commit_id: 254ea8c453cd2100ade07644648f1f00392611a6 | repo: scikit-learn | complexity: 1

id: 247,584 | vocab_size: 55 | ast_levels: 9 | nloc: 12 | n_ast_nodes: 152 | n_identifiers: 18 | n_ast_errors: 0 | n_words: 76 | n_whitespaces: 192
fun_name: test_cancellation_while_holding_read_lock
commit_message:
Add cancellation support to `ReadWriteLock` (#12120) Also convert `ReadWriteLock` to use async context managers. Signed-off-by: Sean Quah <[email protected]>
url: https://github.com/matrix-org/synapse.git
code:
```python
def test_cancellation_while_holding_read_lock(self):
    rwlock = ReadWriteLock()
    key = "key"

    # 1. A reader takes the lock and blocks.
    reader_d, _, _ = self._start_blocking_reader(rwlock, key, "read completed")

    # 2. A writer waits for the reader to complete.
    writer_d, _ = self._start_nonblocking_writer(rwlock, key, "write completed")
    self.assertFalse(writer_d.called)

    # 3. The reader is cancelled.
    reader_d.cancel()
    self.failureResultOf(reader_d, CancelledError)

    # 4. The writer should take the lock and complete.
    self.assertTrue(
        writer_d.called, "Writer is stuck waiting for a cancelled reader"
    )
    self.assertEqual("write completed", self.successResultOf(writer_d))
```
token_counts: 88 | file_name: test_rwlock.py | language: Python | path: tests/util/test_rwlock.py
commit_id: 605d161d7d585847fd1bb98d14d5281daeac8e86 | repo: synapse | complexity: 1

id: 42,077 | vocab_size: 34 | ast_levels: 15 | nloc: 11 | n_ast_nodes: 165 | n_identifiers: 22 | n_ast_errors: 0 | n_words: 46 | n_whitespaces: 138
fun_name: adjust_legend_subtitles
commit_message:
Workaround for matplotlib rc_context issue (#2925) * Workaround for matplotlib rc_context issue Fixes #2914 * Add some additional comments about this workaround
url: https://github.com/mwaskom/seaborn.git
code:
```python
def adjust_legend_subtitles(legend):
    # Legend title not in rcParams until 3.0
    font_size = plt.rcParams.get("legend.title_fontsize", None)
    hpackers = legend.findobj(mpl.offsetbox.VPacker)[0].get_children()
    for hpack in hpackers:
        draw_area, text_area = hpack.get_children()
        handles = draw_area.get_children()
        if not all(artist.get_visible() for artist in handles):
            draw_area.set_width(0)
            for text in text_area.get_children():
                if font_size is not None:
                    text.set_size(font_size)
```
token_counts: 100 | file_name: utils.py | language: Python | path: seaborn/utils.py
commit_id: 6460a21555ba6557e1f6f06f4d677d9c19148169 | repo: seaborn | complexity: 6

id: 31,387 | vocab_size: 7 | ast_levels: 13 | nloc: 2 | n_ast_nodes: 54 | n_identifiers: 9 | n_ast_errors: 0 | n_words: 7 | n_whitespaces: 12
fun_name: fill_with_neg_inf
commit_message:
Not use -1e4 as attn mask (#17306) * Use torch.finfo(self.dtype).min * for GPTNeoX * for Albert * For Splinter * Update src/transformers/models/data2vec/modeling_data2vec_audio.py Co-authored-by: Patrick von Platen <[email protected]> * fix -inf used in Bart-like models * Fix a few remaining -inf * more fix * clean up * For CLIP * For FSMT * clean up * fix test * Add dtype argument and use it for LayoutLMv3 * update FlaxLongT5Attention Co-authored-by: ydshieh <[email protected]> Co-authored-by: Patrick von Platen <[email protected]>
url: https://github.com/huggingface/transformers.git
code:
```python
def fill_with_neg_inf(t):
    return t.float().fill_(torch.finfo(t.dtype).min).type_as(t)


# Public API
```
token_counts: 31 | file_name: modeling_fsmt.py | language: Python | path: src/transformers/models/fsmt/modeling_fsmt.py
commit_id: d3cb28886ac68beba9a6646b422a4d727b056c0c | repo: transformers | complexity: 1

id: 52,513 | vocab_size: 11 | ast_levels: 9 | nloc: 2 | n_ast_nodes: 44 | n_identifiers: 7 | n_ast_errors: 0 | n_words: 11 | n_whitespaces: 25
fun_name: encode
commit_message:
Add Diffsinger Module (#2120) * add diffsinger * update README * update README
url: https://github.com/PaddlePaddle/PaddleHub.git
code:
```python
def encode(self, s):
    return [int(w) + self._num_reserved_ids for w in s.split()]
```
token_counts: 27 | file_name: text_encoder.py | language: Python | path: modules/audio/svs/diffsinger/utils/text_encoder.py
commit_id: 7eef3bfde63d03acbd1fc9a15a5e56bef47c0ef7 | repo: PaddleHub | complexity: 2

id: 136,752 | vocab_size: 42 | ast_levels: 12 | nloc: 172 | n_ast_nodes: 181 | n_identifiers: 25 | n_ast_errors: 0 | n_words: 54 | n_whitespaces: 150
fun_name: test_logs_manager_resolve_file
commit_message:
[core][observability] Refactor ray log API (#30422) This PR changes a few ray log semantics based on this: https://docs.google.com/document/d/1mwLz589IZ4LlPh218dDTMskec9hp3r40hw7ROYq3eVo/edit Change ray logs with various ids to ray logs <subcommand> i.e. ray logs worker --pid=x and ray logs actor --id=x and ray logs cluster <file_name> Added suffix options for querying logs through pid/actor id to differentiate .out and .err files. Alias ray logs ... to ray logs cluster ... so that ray logs : print help ray logs cluster: show all logs on head node ray logs <glob> same as ray logs cluster <glob>: list/get files by filename.
url: https://github.com/ray-project/ray.git
code (truncated in the preview):
```python
async def test_logs_manager_resolve_file(logs_manager):
    node_id = NodeID(b"1" * 28)
    logs_client = logs_manager.data_source_client
    logs_client.get_all_registered_agent_ids = MagicMock()
    logs_client.get_all_registered_agent_ids.return_value = [node_id.hex()]
    expected_filename = "filename"
    log_file_name, n = await logs_manager.resolve_filename(
        node_id=node_id,
        log_filename=expected_filename,
        actor_id=None,
        task_id=None,
        pid=None,
        get_actor_fn=lambda _: True,
        timeout=10,
    )
    assert log_file_name == expected_filename
    assert n == node_id
    # Actor doesn't exist.
    with pytest.raises(ValueError):
        actor_id = ActorID(b"2" * 16)
```
token_counts: 874 | file_name: test_state_api_log.py | language: Python | path: python/ray/tests/test_state_api_log.py
commit_id: acff8b6fa615bfe6463fd76660b14ef2bbafda42 | repo: ray | complexity: 1

id: 131,590 | vocab_size: 43 | ast_levels: 6 | nloc: 37 | n_ast_nodes: 45 | n_identifiers: 8 | n_ast_errors: 1 | n_words: 61 | n_whitespaces: 85
fun_name: test_run_driver_twice
commit_message:
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
url: https://github.com/ray-project/ray.git
code (truncated in the preview):
```python
def test_run_driver_twice(ray_start_regular):
    # We used to have issue 2165 and 2288:
    # https://github.com/ray-project/ray/issues/2165
    # https://github.com/ray-project/ray/issues/2288
    # both complain that driver will hang when run for the second time.
    # This test is used to verify the fix for above issue, it will run the
    # same driver for twice and verify whether both of them succeed.
    address_info = ray_start_regular
    driver_script =
```
ast_errors: driver_script = """
token_counts: 37 | file_name: test_multi_node_3.py | language: Python | path: python/ray/tests/test_multi_node_3.py
commit_id: 7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065 | repo: ray | complexity: 2

id: 85,028 | vocab_size: 29 | ast_levels: 13 | nloc: 10 | n_ast_nodes: 96 | n_identifiers: 10 | n_ast_errors: 0 | n_words: 32 | n_whitespaces: 97
fun_name: test_create_realm_no_creation_key
commit_message:
realm_creation: Rework error pages. The previous error page was inadequate for serving the two different scenarios where we show errors in realm_creations, in particular containing a misleading sentence about realm creation being disabled (even in the case where it was actually enabled and the user simply had an expired link).
url: https://github.com/zulip/zulip.git
code:
```python
def test_create_realm_no_creation_key(self) -> None:
    email = "[email protected]"

    with self.settings(OPEN_REALM_CREATION=False):
        # Create new realm with the email, but no creation key.
        result = self.client_post("/new/", {"email": email})
        self.assertEqual(result.status_code, 200)
        self.assert_in_response("Organization creation link required", result)
```
token_counts: 53 | file_name: test_signup.py | language: Python | path: zerver/tests/test_signup.py
commit_id: 582d5b0aa31ac79a5ee1af95b2e71c4bfc53d5aa | repo: zulip | complexity: 1

id: 215,983 | vocab_size: 20 | ast_levels: 12 | nloc: 5 | n_ast_nodes: 68 | n_identifiers: 8 | n_ast_errors: 1 | n_words: 20 | n_whitespaces: 38
fun_name: is_photonos
commit_message:
Update to latest ``pyupgrade`` hook. Stop skipping it on CI. Signed-off-by: Pedro Algarvio <[email protected]>
url: https://github.com/saltstack/salt.git
code:
```python
def is_photonos():
    (osname, osrelease, oscodename) = (
        x.strip('"').strip("'") for x in linux_distribution()
    )
    return osname == "VMware Photon OS"


@real_memoize
```
ast_errors: @real_memoize
token_counts: 36 | file_name: platform.py | language: Python | path: salt/utils/platform.py
commit_id: f2a783643de61cac1ff3288b40241e5ce6e1ddc8 | repo: salt | complexity: 2

id: 259,057 | vocab_size: 24 | ast_levels: 10 | nloc: 6 | n_ast_nodes: 94 | n_identifiers: 11 | n_ast_errors: 0 | n_words: 29 | n_whitespaces: 55
fun_name: _array_indexing
commit_message:
MNT Clean fixes and compat for old versions of our dependencies (#22642) Co-authored-by: Olivier Grisel <[email protected]>
url: https://github.com/scikit-learn/scikit-learn.git
code:
```python
def _array_indexing(array, key, key_dtype, axis):
    if issparse(array) and key_dtype == "bool":
        key = np.asarray(key)
    if isinstance(key, tuple):
        key = list(key)
    return array[key] if axis == 0 else array[:, key]
```
token_counts: 60 | file_name: __init__.py | language: Python | path: sklearn/utils/__init__.py
commit_id: 34f9dbf54164e3c62d68765fe45f27f067a45562 | repo: scikit-learn | complexity: 5

id: 244,274 | vocab_size: 33 | ast_levels: 12 | nloc: 11 | n_ast_nodes: 130 | n_identifiers: 17 | n_ast_errors: 0 | n_words: 47 | n_whitespaces: 139
fun_name: forward_single
commit_message:
[Feature] Support DDOD: Disentangle Your Dense Object Detector(ACM MM2021 oral) (#7279) * add ddod feature * add ddod feature * modify new * [Feature] modify ddod code0225 * [Feature] modify ddod code0226 * [Feature] modify ddod code0228 * [Feature] modify ddod code0228#7279 * [Feature] modify ddod code0301 * [Feature] modify ddod code0301 test draft * [Feature] modify ddod code0301 test * [Feature] modify ddod code0301 extra * [Feature] modify ddod code0301 delete src/mmtrack * [Feature] modify ddod code0302 * [Feature] modify ddod code0302(2) * [Feature] modify ddod code0303 * [Feature] modify ddod code0303(2) * [Feature] modify ddod code0303(3) * [Feature] modify ddod code0305 * [Feature] modify ddod code0305(2) delete diou * [Feature] modify ddod code0305(3) * modify ddod code0306 * [Feature] modify ddod code0307 * [Feature] modify ddod code0311 * [Feature] modify ddod code0311(2) * [Feature] modify ddod code0313 * update * [Feature] modify ddod code0319 * fix * fix lint * [Feature] modify ddod code0321 * update readme * [0502] compute common vars at once for get_target * [0504] update ddod conflicts * [0518] seperate reg and cls loss and get_target compute * [0518] merge ATSSCostAssigner to ATSSAssigner * [0518] refine ATSSAssigner * [0518] refine ATSSAssigner 2 * [0518] refine ATSSAssigner 2 * [0518] refine ATSSAssigner 3 * [0519] fix bugs * update * fix lr * update weight Co-authored-by: hha <[email protected]>
url: https://github.com/open-mmlab/mmdetection.git
code:
```python
def forward_single(self, x, scale):
    cls_feat = x
    reg_feat = x
    for cls_conv in self.cls_convs:
        cls_feat = cls_conv(cls_feat)
    for reg_conv in self.reg_convs:
        reg_feat = reg_conv(reg_feat)
    cls_score = self.atss_cls(cls_feat)
    # we just follow atss, not apply exp in bbox_pred
    bbox_pred = scale(self.atss_reg(reg_feat)).float()
    iou_pred = self.atss_iou(reg_feat)
    return cls_score, bbox_pred, iou_pred
```
token_counts: 79 | file_name: ddod_head.py | language: Python | path: mmdet/models/dense_heads/ddod_head.py
commit_id: 151a803ed0119560f59dbe7b73824dbdcae08fc6 | repo: mmdetection | complexity: 3

id: 291,307 | vocab_size: 8 | ast_levels: 6 | nloc: 3 | n_ast_nodes: 25 | n_identifiers: 4 | n_ast_errors: 0 | n_words: 8 | n_whitespaces: 22
fun_name: native_value
commit_message:
Add `text` platform (#79454) Co-authored-by: Franck Nijhof <[email protected]> Co-authored-by: Franck Nijhof <[email protected]>
url: https://github.com/home-assistant/core.git
code:
```python
def native_value(self) -> str | None:
    return self._attr_native_value
```
token_counts: 14 | file_name: __init__.py | language: Python | path: homeassistant/components/text/__init__.py
commit_id: 003e4224c89a6da381960dc5347750d1521d85c9 | repo: core | complexity: 1

id: 8,423 | vocab_size: 4 | ast_levels: 8 | nloc: 2 | n_ast_nodes: 24 | n_identifiers: 4 | n_ast_errors: 0 | n_words: 4 | n_whitespaces: 18
fun_name: to_dict
commit_message:
Config Object (#2426) * Fixed loss instances across features * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed binary OneOfImplementation * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Flake 8 * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix custom loss components * Fix gbm category * Remove config object code, out of scope * Fixed more tests * Fixed incorrect text preproc default, added clip to category feature level * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixes additional tests * Cache jsonschema validator to reduce memory pressure * Fix imports * Skip neuropod test * Added upgrade audio to default preproc back compat and cleaned up * Small nits * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Change backfill constant for audio * Add docstring to compute feature hash * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Unused import * Another backfill constant change * Unused import * remove default population functions * Added config object test * rewired build_inputs * rewired combiner in ecd, added logic to config object * Refactored ecd.py * Fixing up merge_with_defaults, need metadata changes in master * Refactored defaults section and mega upgraded config obj * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed some formatting * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed feature col, proc col, and render config from defaults.py * Fix duplicate import * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Added config initializer to merge defaults flow * Refactored update_config_with_metadata * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Added dict conversion method to config object and refactored merge config function in config_utils * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Refactored until preproc entrypoint * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed update_config_with_metadata * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Removed load config base feature method - no longer necessary * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Formatting * Fixed input size assignment * Temp fix * Fixed pretrained encoder path referencing temp until preproc refactor * Solved the WORST BUG EVER * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Switch reduce_input to None for sequence tagger * Fixed another one * Fixed typo * Various test fixes * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Flake 8 * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed excess defaults params issue * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Minor fixes * [pre-commit.ci] 
auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed some defaults tests * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed more tests * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed more tests * Formatting * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * More test fixes * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed defaults tests * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix more tests * Flake 8 * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix more tests * Fixed more tests * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed more tests * Fixed more tests * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * fixing ghost tests attempt * Deep copy to smash the ghost failures * Copied top level modules now too * Started fixing hyperopt * Fixed Hyperopt Issues * Flake 8 * Remove commented out code * Address Piero feedback * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Flake 8 * Removed merge with defaults * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed various issues with preprocessing and splitting positioning * Fixed hyperopt issues * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Refactored api pipeline to use all config obj references * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed more tests * Flake 8 * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix more tests * Fixed auto tune learning rate and batch size * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed sequence feature tests * Fixed image feature test * Fixed last test * flake 8 * Marshmallowify Config object, remove manual to dict method, add Factory method constructors * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Validate config within config object * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * All Travis feedback addressed * Using all new constructors now * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * removed from class attributes * Added deep copies back and piped repr inheritance * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Format * Small error fix, moved back compat into Config Object * Flake8 * Docstring for hyperopt defaults method * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Address Joppe feedback * Revert "Address Joppe feedback" This reverts commit 42f1665ef917d062a010550bb960594c355285ff. 
* Fix tests * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Flake8 * fix test * Small improvement * Changed repr for input features, added feature enabling/disabling * Added feature enabling/disabling, and better reprs for SDK dev * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Flake 8 * Added rich to requirements.txt * Add some more CO tests and comment more on CO code * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix explain issue * Julian feedback * Added TODOs for future refactor PRs * Fix explain test failure, test shared state improvement and bug fix, remove unncessary code from convert_submodules * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * implement Daniel's feedback * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix residual errors * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Error fix * Using mixins now so no loose attributes on defaults, fixed height width schema restrictions * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Removed unnecessary filtering from defaults schema logic * Piero's simplification and cleanup * Flake 8 * Fix test and update docstrings from Pieros change * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Address most of Justin's feedback * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix tests and more feedback implementation * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Address feedback * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Renamed files to correspond to ModelConfig class name * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Missing constant import * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed incorrect merge conflict resolution * Flake8 * Fix remaining tests (except old models training from trainer type removal) * Fixed old models not validating trainer type * Add output_feature=False to test_hyperopt_ray.py * Implement Kabir's feedback * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Travis Addair <[email protected]> Co-authored-by: w4nderlust <[email protected]>
url: https://github.com/ludwig-ai/ludwig.git
code:
```python
def to_dict(self):
    return convert_submodules(self.__dict__)
```
token_counts: 13 | file_name: model_config.py | language: Python | path: ludwig/schema/model_config.py
commit_id: 4d2d81f9fdefc52eea6a9bf0826a6f2ffc8d681b | repo: ludwig | complexity: 1

id: 27,990 | vocab_size: 62 | ast_levels: 16 | nloc: 21 | n_ast_nodes: 271 | n_identifiers: 24 | n_ast_errors: 0 | n_words: 94 | n_whitespaces: 360
fun_name: preprocess
commit_message:
Better media thumbnails including WebP support (#9988) * Add thumbnail app * Update get_thumbnail_size method and add tests * Add logic for creating thumbnails * Update logic for getting thumbnail * Allow defining format for tumbnail generation * Clear handle_thumbnail views * Add prepare_image_proxy_url method * Use ImageField for user avatar * Allow defining thumbnail format when querying user avatar * Use ImageField for category backgound_image * Use ImageField for Collection backgound_image * Use ImageField for ProductMedia image * Ensure that thumbnails are deleted when category background_image is changed or deleted * Ensure that thumbnails are deleted when collection background_image is changed or deleted * Update product media deleteion task and failing tests * Delete thumbnail from storage when thumbnail objects is deleted * Fix import in product test_bulk_delete * Drop create_thumbnails command * Update Product.thumbnail resolver * Update OrderLine thumbnail resolver * Add missing ADDED_IN_35 and PREVIEW_FEATURE labels * Update account and product signals - ensure the image is deleted from storage * Refactor product_images methods * Add signal for product media image delete * Drop create_thumbnails method and not longer valid settings fields * Clean the ProcessedImage class * Drop versatileimagefield from INSTALLED_APPS * Update changelog * Drop comments from ThumbnailFormat * Add get_image_or_proxy_url method * Apply reiew suggestions - add ThumbnailField and use get_image_or_proxy_ur when it's possible * Update changelog * Replace ADDED_IN_35 with ADDED_IN_36 label * Update changelog Co-authored-by: Marcin Gębala <[email protected]>
url: https://github.com/saleor/saleor.git
code:
```python
def preprocess(self, image, image_format):
    format = self.format or image_format
    save_kwargs = {"format": format}

    # Ensuring image is properly rotated
    if hasattr(image, "_getexif"):
        exif_datadict = image._getexif()  # returns None if no EXIF data
        if exif_datadict is not None:
            exif = dict(exif_datadict.items())
            orientation = exif.get(self.EXIF_ORIENTATION_KEY, None)
            if orientation == 3:
                image = image.transpose(Image.ROTATE_180)
            elif orientation == 6:
                image = image.transpose(Image.ROTATE_270)
            elif orientation == 8:
                image = image.transpose(Image.ROTATE_90)

    # Ensure any embedded ICC profile is preserved
    save_kwargs["icc_profile"] = image.info.get("icc_profile")

    if hasattr(self, "preprocess_%s" % format):
        image, addl_save_kwargs = getattr(self, "preprocess_%s" % format)(
            image=image
        )
        save_kwargs.update(addl_save_kwargs)

    return image, save_kwargs
```
token_counts: 162 | file_name: utils.py | language: Python | path: saleor/thumbnail/utils.py
commit_id: 5d1a36b9aaf408016957db04f86397b2e53c2500 | repo: saleor | complexity: 8

id: 189,440 | vocab_size: 38 | ast_levels: 12 | nloc: 23 | n_ast_nodes: 163 | n_identifiers: 10 | n_ast_errors: 0 | n_words: 42 | n_whitespaces: 319
fun_name: _get_formatter
commit_message:
Hide more private methods from the docs. (#2468) * hide privs from text_mobject.py * hide privs from tex_mobject.py * hide privs from code_mobject.py * hide privs from svg_mobject.py * remove SVGPath and utils from __init__.py * don't import string_to_numbers * hide privs from geometry.py * hide privs from matrix.py * hide privs from numbers.py * hide privs from three_dimensions.py * forgot underscore under set_stroke_width_from_length * there were more i missed * unhidea method that was used in docs * forgot other text2hash * remove svg_path from docs
url: https://github.com/ManimCommunity/manim.git
code:
```python
def _get_formatter(self, **kwargs):
    config = {
        attr: getattr(self, attr)
        for attr in [
            "include_sign",
            "group_with_commas",
            "num_decimal_places",
        ]
    }
    config.update(kwargs)
    return "".join(
        [
            "{",
            config.get("field_name", ""),
            ":",
            "+" if config["include_sign"] else "",
            "," if config["group_with_commas"] else "",
            ".",
            str(config["num_decimal_places"]),
            "f",
            "}",
        ],
    )
```
token_counts: 92 | file_name: numbers.py | language: Python | path: manim/mobject/numbers.py
commit_id: 902e7eb4f0147b5882a613b67467e38a1d47f01e | repo: manim | complexity: 4

id: 87,194 | vocab_size: 27 | ast_levels: 12 | nloc: 14 | n_ast_nodes: 135 | n_identifiers: 16 | n_ast_errors: 0 | n_words: 30 | n_whitespaces: 184
fun_name: test_get_dynamic_sampling_after_migrating_to_new_plan_default_biases
commit_message:
feat(ds): Support new DS behaviour in project_details endpoint (#40387) Supports new adaptive dynamic sampling behaviour alongside the deprecated dynamic sampling behaviour and achieves that through feature flag differentiation This PR achieve that through the following: - Introducing a new `DynamicSamplingBiasSerializer` which is composed of id representing the bias name and a boolean flag indicating whether that particular flag is active or not - Modifies current existing behavior for both old sampling flag and new sampling flag. Essentially the new setup entails that to be on the old dynamic sampling, the following flags need to be enabled "organizations:server-side-sampling" and "organizations:server-side-sampling-ui", and to be on the new dynamic sampling configurations, you need the following flags to be enabled "organizations:dynamic-sampling-basic" and "organizations:server-side-sampling" P.S. 1: These flags will be replaced "organizations:server-side-sampling-ui" -> "organizations:dynamic-sampling-deprecated" "organizations:server-side-sampling-basic" -> "organizations:dynamic-sampling" Hence, these feature flags need to be updated once this PR lands https://github.com/getsentry/sentry/pull/40388 P.S. 2: If a project is on the new plan and the old plan, the new plan takes precedence - Introduces default biases that are enabled by default and can be overwritten. The motivation to do this is to be able to add new biases that are enabled by default, and both the GET and PUT request honor this list - `GET` and `POST` endpoint does a dictionary update of user's stored biases on the default biases that are hardcoded, and returns them to the UI/ relay. This means that the introduced project option "sentry:dynamic_sampling_biases" might not have all the toggles enabled/disabled through the UI but only the ones that a customer chose to modify Followup: - This new feature flag behaviour needs to be reflected in ProjectConfig computations
url: https://github.com/getsentry/sentry.git
code:
```python
def test_get_dynamic_sampling_after_migrating_to_new_plan_default_biases(self):
    self.project.update_option("sentry:dynamic_sampling", self.dynamic_sampling_data)

    with Feature(
        {
            self.universal_ds_flag: True,
            self.old_ds_flag: True,
            self.new_ds_flag: True,
        }
    ):
        response = self.get_success_response(
            self.organization.slug, self.project.slug, method="get"
        )
        assert response.data["dynamicSampling"] is None
        assert response.data["dynamicSamplingBiases"] == DEFAULT_BIASES
```
token_counts: 83 | file_name: test_project_details.py | language: Python | path: tests/sentry/api/endpoints/test_project_details.py
commit_id: 5462ee11ad11ebb9a50323befcd286816d7898c8 | repo: sentry | complexity: 1

id: 19,360 | vocab_size: 25 | ast_levels: 11 | nloc: 9 | n_ast_nodes: 115 | n_identifiers: 9 | n_ast_errors: 0 | n_words: 36 | n_whitespaces: 107
fun_name: calc_second_derivative
commit_message:
enhance cubic spline path doc (#698) * enhance cublic spline path doc * enhance cublic spline path doc * enhance cublic spline path doc * enhance cublic spline path doc * enhance cublic spline path doc * enhance cublic spline path doc * enhance cublic spline path doc * enhance cublic spline path doc * enhance cublic spline path doc * enhance cublic spline path doc * enhance cublic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc * enhance cubic spline path doc
url: https://github.com/AtsushiSakai/PythonRobotics.git
code:
```python
def calc_second_derivative(self, x):
    if x < self.x[0]:
        return None
    elif x > self.x[-1]:
        return None

    i = self.__search_index(x)
    dx = x - self.x[i]
    ddy = 2.0 * self.c[i] + 6.0 * self.d[i] * dx
    return ddy
```
token_counts: 78 | file_name: cubic_spline_planner.py | language: Python | path: PathPlanning/CubicSpline/cubic_spline_planner.py
commit_id: def289b723e9216830c2a7b2577cb31b55710167 | repo: PythonRobotics | complexity: 3

id: 80,457 | vocab_size: 45 | ast_levels: 14 | nloc: 21 | n_ast_nodes: 358 | n_identifiers: 36 | n_ast_errors: 0 | n_words: 62 | n_whitespaces: 273
fun_name: final_run_hook
commit_message:
Make task logic use consistent artifact dir location
url: https://github.com/ansible/awx.git
code:
```python
def final_run_hook(self, instance, status, private_data_dir, fact_modification_times):
    instance.log_lifecycle("finalize_run")
    artifact_dir = os.path.join(private_data_dir, 'artifacts', str(self.instance.id))
    job_profiling_dir = os.path.join(artifact_dir, 'playbook_profiling')
    awx_profiling_dir = '/var/log/tower/playbook_profiling/'
    collections_info = os.path.join(artifact_dir, 'collections.json')
    ansible_version_file = os.path.join(artifact_dir, 'ansible_version.txt')
    if not os.path.exists(awx_profiling_dir):
        os.mkdir(awx_profiling_dir)
    if os.path.isdir(job_profiling_dir):
        shutil.copytree(job_profiling_dir, os.path.join(awx_profiling_dir, str(instance.pk)))
    if os.path.exists(collections_info):
        with open(collections_info) as ee_json_info:
            ee_collections_info = json.loads(ee_json_info.read())
            instance.installed_collections = ee_collections_info
            instance.save(update_fields=['installed_collections'])
    if os.path.exists(ansible_version_file):
        with open(ansible_version_file) as ee_ansible_info:
            ansible_version_info = ee_ansible_info.readline()
            instance.ansible_version = ansible_version_info
            instance.save(update_fields=['ansible_version'])
```
token_counts: 214 | file_name: jobs.py | language: Python | path: awx/main/tasks/jobs.py
commit_id: 8fac1c18c8c0ad04d4a32ed3e7f24e883b6e5261 | repo: awx | complexity: 5

id: 289,724 | vocab_size: 8 | ast_levels: 10 | nloc: 4 | n_ast_nodes: 41 | n_identifiers: 4 | n_ast_errors: 0 | n_words: 8 | n_whitespaces: 29
fun_name: async_added_to_hass
commit_message:
Use DataUpdateCoordinator in scrape (#80593) * Add DataUpdateCoordinator to scrape * Fix tests
url: https://github.com/home-assistant/core.git
code:
```python
async def async_added_to_hass(self) -> None:
    await super().async_added_to_hass()
    self._async_update_from_rest_data()
```
token_counts: 21 | file_name: sensor.py | language: Python | path: homeassistant/components/scrape/sensor.py
commit_id: 64d6d04ade1ae851e1bd1fe63f89ae62c0de55ab | repo: core | complexity: 1

id: 19,673 | vocab_size: 34 | ast_levels: 15 | nloc: 12 | n_ast_nodes: 157 | n_identifiers: 18 | n_ast_errors: 0 | n_words: 42 | n_whitespaces: 149
fun_name: dist_is_in_project
commit_message:
Issue 4993 Add standard pre commit hooks and apply linting. (#4994) * Add .pre-commit-config.yaml to the project and exclude tests (for now). This does not include the MyPy linting that pip does but does include everything else.
url: https://github.com/pypa/pipenv.git
code:
```python
def dist_is_in_project(self, dist):
    # type: (pkg_resources.Distribution) -> bool
    from .environments import normalize_pipfile_path as _normalized

    prefixes = [
        _normalized(prefix)
        for prefix in self.base_paths["libdirs"].split(os.pathsep)
        if _normalized(prefix).startswith(_normalized(self.prefix.as_posix()))
    ]
    location = self.locate_dist(dist)
    if not location:
        return False
    location = _normalized(make_posix(location))
    return any(location.startswith(prefix) for prefix in prefixes)
```
token_counts: 95 | file_name: environment.py | language: Python | path: pipenv/environment.py
commit_id: 9a3b3ce70621af6f9adaa9eeac9cf83fa149319c | repo: pipenv | complexity: 5

id: 187,499 | vocab_size: 10 | ast_levels: 12 | nloc: 7 | n_ast_nodes: 67 | n_identifiers: 11 | n_ast_errors: 0 | n_words: 10 | n_whitespaces: 34
fun_name: test_failure_schema
commit_message:
plugin.api.validate: add re.Pattern validation Use the `search()` method and return `None` or a `re.Match` instance. This avoids having to explicitly define the validation pattern ```py validate.transform(re.compile(...).search) ```
url: https://github.com/streamlink/streamlink.git
code (last argument truncated in the preview):
```python
def test_failure_schema(self):
    with pytest.raises(validate.ValidationError) as cm:
        validate.validate(re.compile(r"foo"), 123)
    assert_validationerror(cm.value, )
```
token_counts: 39 | file_name: test_api_validate.py | language: Python | path: tests/test_api_validate.py
commit_id: b083eb3709c3abf0e3b7c0a2524ddacb5ac80f2e | repo: streamlink | complexity: 1

id: 81,001 | vocab_size: 46 | ast_levels: 21 | nloc: 17 | n_ast_nodes: 246 | n_identifiers: 33 | n_ast_errors: 0 | n_words: 57 | n_whitespaces: 156
fun_name: reap
commit_message:
Specifically abort the reaper if instance not registered
url: https://github.com/ansible/awx.git
code:
```python
def reap(instance=None, status='failed', excluded_uuids=[]):
    me = instance
    if me is None:
        try:
            me = Instance.objects.me()
        except RuntimeError as e:
            logger.warning(f'Local instance is not registered, not running reaper: {e}')
            return
    now = tz_now()
    workflow_ctype_id = ContentType.objects.get_for_model(WorkflowJob).id
    jobs = UnifiedJob.objects.filter(
        (Q(status='running') | Q(status='waiting', modified__lte=now - timedelta(seconds=60)))
        & (Q(execution_node=me.hostname) | Q(controller_node=me.hostname))
        & ~Q(polymorphic_ctype_id=workflow_ctype_id)
    ).exclude(celery_task_id__in=excluded_uuids)
    for j in jobs:
        reap_job(j, status)
```
token_counts: 147 | file_name: reaper.py | language: Python | path: awx/main/dispatch/reaper.py
commit_id: fe5736dc7f9566fad26bcc43241e50355a391fd6 | repo: awx | complexity: 4

id: 28,452 | vocab_size: 16 | ast_levels: 8 | nloc: 6 | n_ast_nodes: 67 | n_identifiers: 7 | n_ast_errors: 0 | n_words: 20 | n_whitespaces: 44
fun_name: test_delete_files_from_storage_task_files_not_existing_files
commit_message:
Fix the migration for removing media marked as to remove (#10429) * Add celery task for removing multiple files from storage * Fix the migration for removing media marked as to remove
url: https://github.com/saleor/saleor.git
code:
```python
def test_delete_files_from_storage_task_files_not_existing_files(media_root):
    # given
    path = "random/test-path"
    path_2 = "random/test-path-2"
    assert not default_storage.exists(path)
    assert not default_storage.exists(path_2)

    # when
    delete_files_from_storage_task([path, path_2])
```
token_counts: 36 | file_name: test_tasks.py | language: Python | path: saleor/core/tests/test_tasks.py
commit_id: 2611883cda3b84ccbfcbf37221f5b62a08bc9af1 | repo: saleor | complexity: 1

id: 288,884 | vocab_size: 67 | ast_levels: 22 | nloc: 75 | n_ast_nodes: 475 | n_identifiers: 33 | n_ast_errors: 0 | n_words: 108 | n_whitespaces: 1,553
fun_name: test_velux_cover_setup
commit_message:
Migrate HomeKit Controller to use stable identifiers (#80064)
url: https://github.com/home-assistant/core.git
code:
```python
async def test_velux_cover_setup(hass):
    accessories = await setup_accessories_from_file(hass, "velux_gateway.json")
    await setup_test_accessories(hass, accessories)

    await assert_devices_and_entities_created(
        hass,
        DeviceTestInfo(
            unique_id=HUB_TEST_ACCESSORY_ID,
            name="VELUX Gateway",
            model="VELUX Gateway",
            manufacturer="VELUX",
            sw_version="70",
            hw_version="",
            serial_number="a1a11a1",
            devices=[
                DeviceTestInfo(
                    name="VELUX Window",
                    model="VELUX Window",
                    manufacturer="VELUX",
                    sw_version="48",
                    hw_version="",
                    serial_number="1111111a114a111a",
                    unique_id="00:00:00:00:00:00:aid:3",
                    devices=[],
                    entities=[
                        EntityTestInfo(
                            entity_id="cover.velux_window_roof_window",
                            friendly_name="VELUX Window Roof Window",
                            unique_id="00:00:00:00:00:00_3_8",
                            supported_features=CoverEntityFeature.CLOSE
                            | CoverEntityFeature.SET_POSITION
                            | CoverEntityFeature.OPEN,
                            state="closed",
                        ),
                    ],
                ),
                DeviceTestInfo(
                    name="VELUX Sensor",
                    model="VELUX Sensor",
                    manufacturer="VELUX",
                    sw_version="16",
                    hw_version="",
                    serial_number="a11b111",
                    unique_id="00:00:00:00:00:00:aid:2",
                    devices=[],
                    entities=[
                        EntityTestInfo(
                            entity_id="sensor.velux_sensor_temperature_sensor",
                            friendly_name="VELUX Sensor Temperature sensor",
                            capabilities={"state_class": SensorStateClass.MEASUREMENT},
                            unique_id="00:00:00:00:00:00_2_8",
                            unit_of_measurement=TEMP_CELSIUS,
                            state="18.9",
                        ),
                        EntityTestInfo(
                            entity_id="sensor.velux_sensor_humidity_sensor",
                            friendly_name="VELUX Sensor Humidity sensor",
                            capabilities={"state_class": SensorStateClass.MEASUREMENT},
                            unique_id="00:00:00:00:00:00_2_11",
                            unit_of_measurement=PERCENTAGE,
                            state="58",
                        ),
                        EntityTestInfo(
                            entity_id="sensor.velux_sensor_carbon_dioxide_sensor",
                            friendly_name="VELUX Sensor Carbon Dioxide sensor",
                            capabilities={"state_class": SensorStateClass.MEASUREMENT},
                            unique_id="00:00:00:00:00:00_2_14",
                            unit_of_measurement=CONCENTRATION_PARTS_PER_MILLION,
                            state="400",
                        ),
                    ],
                ),
            ],
            entities=[],
        ),
    )
```
token_counts: 290 | file_name: test_velux_gateway.py | language: Python | path: tests/components/homekit_controller/specific_devices/test_velux_gateway.py
commit_id: f23b1750e85f07091eb896a0b12b8f95e5646338 | repo: core | complexity: 1

id: 195,147 | vocab_size: 7 | ast_levels: 9 | nloc: 2 | n_ast_nodes: 39 | n_identifiers: 4 | n_ast_errors: 0 | n_words: 7 | n_whitespaces: 21
fun_name: _get_text
commit_message:
Friends Dataset Teacher Add `speakers` field and flag to exclude speaker labels in `text` (#4693) * add speakers field; add flag to exclude speaker labels in text * fix logic * update test fixture * refactor out method
url: https://github.com/facebookresearch/ParlAI.git
code:
```python
def _get_text(self, utterance):
    return utterance['text'].replace('\n', ' ')
```
token_counts: 20 | file_name: agents.py | language: Python | path: parlai/tasks/friends/agents.py
commit_id: 982acb50f99261133ffa90ba04c569881389db6d | repo: ParlAI | complexity: 1

id: 178,803 | vocab_size: 71 | ast_levels: 15 | nloc: 28 | n_ast_nodes: 242 | n_identifiers: 26 | n_ast_errors: 0 | n_words: 94 | n_whitespaces: 362
fun_name: getWindowsShortPathName
commit_message:
Windows: Fix, need to ignore permission denied when shortening paths. * We shorten paths that may be inaccessible anyway, then it does not make a difference we assume.
url: https://github.com/Nuitka/Nuitka.git
code:
```python
def getWindowsShortPathName(filename):
    import ctypes.wintypes

    GetShortPathNameW = ctypes.windll.kernel32.GetShortPathNameW
    GetShortPathNameW.argtypes = [
        ctypes.wintypes.LPCWSTR,
        ctypes.wintypes.LPWSTR,
        ctypes.wintypes.DWORD,
    ]
    GetShortPathNameW.restype = ctypes.wintypes.DWORD

    output_buf_size = 0
    while True:
        output_buf = ctypes.create_unicode_buffer(output_buf_size)
        needed = GetShortPathNameW(
            os.path.abspath(filename), output_buf, output_buf_size
        )

        if needed == 0:
            # Windows only code, pylint: disable=I0021,undefined-variable

            # Permission denied.
            if ctypes.GetLastError() == 5:
                return filename

            raise WindowsError(
                ctypes.GetLastError(), ctypes.FormatError(ctypes.GetLastError())
            )

        if output_buf_size >= needed:
            # Short paths should be ASCII. Don't return unicode without a need,
            # as e.g. Scons hates that in environment variables.
            if str is bytes:
                return output_buf.value.encode("utf8")
            else:
                return output_buf.value
        else:
            output_buf_size = needed
```
token_counts: 149 | file_name: FileOperations.py | language: Python | path: nuitka/utils/FileOperations.py
commit_id: 5e056fe7f99b902cf7ac0539d6639d8fc666e0ee | repo: Nuitka | complexity: 6

id: 259,553 | vocab_size: 154 | ast_levels: 15 | nloc: 30 | n_ast_nodes: 524 | n_identifiers: 46 | n_ast_errors: 1 | n_words: 238 | n_whitespaces: 390
fun_name: ols_ridge_dataset
commit_message:
TST tight and clean tests for Ridge (#22910)

* MNT replace pinvh by solve
* DOC more info for svd solver
* TST rewrite test_ridge
* MNT remove test_ridge_singular
* MNT restructure into several tests
* MNT remove test_toy_ridge_object
* MNT remove test_ridge_sparse_svd (this is tested in test_ridge_fit_intercept_sparse_error)
* TST exclude cholesky from singular problem
* CLN two fixes
* MNT parametrize test_ridge_sample_weights
* MNT restructure test_ridge_sample_weights
* CLN tighten tolerance for sag solver
* CLN try to fix saga tolerance
* CLN make test_ridge_sample_weights nicer
* MNT remove test_ridge_regression_sample_weights
* MNT rename to test_ridge_regression_sample_weights
* CLN make test_ridge_regression_unpenalized pass for all random seeds
* CLN make tests pass for all random seeds
* DOC fix typos
* TST skip cholesky for singular problems
* MNT move up test_ridge_regression_sample_weights
* CLN set skip reason as comment
https://github.com/scikit-learn/scikit-learn.git
def ols_ridge_dataset(global_random_seed, request):
    # Make larger dim more than double as big as the smaller one.
    # This helps when constructing singular matrices like (X, X).
    if request.param == "long":
        n_samples, n_features = 12, 4
    else:
        n_samples, n_features = 4, 12
    k = min(n_samples, n_features)
    rng = np.random.RandomState(global_random_seed)
    X = make_low_rank_matrix(
        n_samples=n_samples, n_features=n_features, effective_rank=k
    )
    X[:, -1] = 1  # last column acts as intercept
    U, s, Vt = linalg.svd(X)
    assert np.all(s) > 1e-3  # to be sure
    U1, U2 = U[:, :k], U[:, k:]
    Vt1, _ = Vt[:k, :], Vt[k:, :]

    if request.param == "long":
        # Add a term that vanishes in the product X'y
        coef_ols = rng.uniform(low=-10, high=10, size=n_features)
        y = X @ coef_ols
        y += U2 @ rng.normal(size=n_samples - n_features) ** 2
    else:
        y = rng.uniform(low=-10, high=10, size=n_samples)
        # w = X'(XX')^-1 y = V s^-1 U' y
        coef_ols = Vt1.T @ np.diag(1 / s) @ U1.T @ y

    # Add penalty alpha * ||coef||_2^2 for alpha=1 and solve via normal equations.
    # Note that the problem is well conditioned such that we get accurate results.
    alpha = 1
    d = alpha * np.identity(n_features)
    d[-1, -1] = 0  # intercept gets no penalty
    coef_ridge = linalg.solve(X.T @ X + d, X.T @ y)

    # To be sure
    R_OLS = y - X @ coef_ols
    R_Ridge = y - X @ coef_ridge
    assert np.linalg.norm(R_OLS) < np.linalg.norm(R_Ridge)

    return X, y, coef_ols, coef_ridge


@pytest.mark.parametrize("solver", SOLVERS)
@pytest.mark.parametrize("fit_intercept", [True, False])
@pytest.mark.parametrize("solver", SOLVERS) @pytest.mark.parametrize("fit_intercept", [True, False])
306
test_ridge.py
Python
sklearn/linear_model/tests/test_ridge.py
6528e14085d059f9d0c94f93378e7e3c0b967f27
scikit-learn
3
143,133
6
6
3
22
4
0
6
20
metric
[tune/structure] Refactor `suggest` into `search` package (#26074)

This PR renames the `suggest` package to `search` and alters the layout slightly. In the new package, the higher-level abstractions are on the top level and the search algorithms have their own subdirectories.

In a future refactor, we can turn algorithms such as PBT into actual `SearchAlgorithm` classes and move them into the `search` package.

The main reason to keep algorithms and searchers in the same directory is to avoid user confusion - for a user, `Bayesopt` is as much a search algorithm as e.g. `PBT`, so it doesn't make sense to split them up.
https://github.com/ray-project/ray.git
def metric(self) -> str:
    return self._metric
12
searcher.py
Python
python/ray/tune/search/searcher.py
75d08b06328d213656e7280639b35ccecdfc34d0
ray
1
264,299
24
13
6
80
13
0
25
79
export_template
Refactor generic views; add plugins dev documentation
https://github.com/netbox-community/netbox.git
def export_template(self, template, request):
    try:
        return template.render_to_response(self.queryset)
    except Exception as e:
        messages.error(request, f"There was an error rendering the selected export template ({template.name}): {e}")
        return redirect(request.path)
42
bulk_views.py
Python
netbox/netbox/views/generic/bulk_views.py
54834c47f8870e7faabcd847c3270da0bd3d2884
netbox
2
135,916
4
6
2
19
3
0
4
18
experiment_name
[air/wandb] Deprecate Wandb mixin, move to `setup_wandb()` function (#29828)

The wandb trainable mixin is clunky to use and can't be used with training loops of Ray AIR trainers. Instead we introduce a utility method `setup_wandb()` which initializes a wandb session with (overwritable) default values. This method achieves maximum flexibility for users to interact with the wandb API as they are used to.

Additionally, we return a mock-API object for non rank-zero workers so that logging is not duplicated in distributed settings. This can also be disabled.

Further, this PR moves the wandb integration into `ray.air.integrations.wandb` and restructures the code so that the callback imports from this location.

As a note, this PR introduces `session.get_experiment_name()` as a way to retrieve the experiment name of the current run - this is needed to set up the default group name and seems to be a non-intrusive addition to the session API. But open to suggestions here.

Signed-off-by: Kai Fricke <[email protected]>
https://github.com/ray-project/ray.git
def experiment_name(self):
    return self._experiment_name
10
function_trainable.py
Python
python/ray/tune/trainable/function_trainable.py
e7d9c242afab6028a2dd87def73389969d1dbd70
ray
1
56,987
15
12
12
65
10
0
15
110
activate
add initial test coverage for KubernetesClusterConfig
https://github.com/PrefectHQ/prefect.git
def activate(self) -> None:
    try:
        load_kube_config_from_dict(
            config_dict=self.config,
            context=self.context,
        )
    except AttributeError as ae:
        print(str(ae))
        raise
38
kubernetes.py
Python
src/prefect/blocks/kubernetes.py
5472ef18cf79bf24ed4c0175a97c70cbca7bb92a
prefect
2
33,579
11
12
7
78
4
0
16
45
get_model
create Past CI results as tables for GitHub issue (#18953)

* create Past CI results as tables for GitHub issue

Co-authored-by: ydshieh <[email protected]>
https://github.com/huggingface/transformers.git
def get_model(test):
    test = test.split("::")[0]
    if test.startswith("tests/models/"):
        test = test.split("/")[2]
    else:
        test = None
    return test
43
get_ci_error_statistics.py
Python
utils/get_ci_error_statistics.py
367026000bbe9957f95eb1eb7d9649d78ac0b468
transformers
2
82,124
38
12
12
125
13
0
52
180
get_reason_if_failed
No InventoryUpdates when source Project is failed (#13063)

Previously, in some cases, an InventoryUpdate sourced by an SCM project would still run and be successful even after the project it is sourced from failed to update. This would happen because the InventoryUpdate would revert the project back to its last working revision. This behavior is confusing and inconsistent with how we handle jobs (which just refuse to launch when the project is failed).

This change pulls out the logic that the job launch serializer and RunJob#pre_run_hook had implemented (independently) to check if the project is in a failed state, and puts it into a method on the Project model. This is then checked in the project launch serializer as well as the inventory update serializer, along with SourceControlMixin#sync_and_copy as a fallback for things that don't run the serializer validation (such as scheduled jobs and WFJT jobs).

Signed-off-by: Rick Elrod <[email protected]>
https://github.com/ansible/awx.git
def get_reason_if_failed(self):
    if self.status not in ('error', 'failed'):
        return None
    latest_update = self.project_updates.last()
    if latest_update is not None and latest_update.failed:
        failed_validation_tasks = latest_update.project_update_events.filter(
            event='runner_on_failed',
            play="Perform project signature/checksum verification",
        )
        if failed_validation_tasks:
            return _("Last project update failed due to signature validation failure.")
    return _("Missing a revision to run due to failed project update.")
69
projects.py
Python
awx/main/models/projects.py
1c65339a24cab91d0ab6c49824dff3925160b23f
awx
5
208,058
49
11
9
190
16
0
71
148
apply
Canvas Header Stamping (#7384)

* Strip down the header-stamping PR to the basics.
* Serialize groups.
* Add groups to result backend meta data.
* Fix spelling mistake.
* Revert changes to canvas.py
* Revert changes to app/base.py
* Add stamping implementation to canvas.py
* Send task to AMQP with groups.
* Successfully pass single group to result.
* _freeze_gid dict merge fixed
* First draft of the visitor API.
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* OptionsVisitor created
* Fixed canvas.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Added test for simple test for chord and fixed chord implementation
* Changed _IMMUTABLE_OPTIONS
* Fixed chord interface
* Fixed chord interface
* Fixed chord interface
* Fixed chord interface
* Fixed list order
* Fixed tests (stamp test and chord test), fixed order in groups
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Fixed lint and elements
* Changed implementation of stamp API and fix lint
* Added documentation to Stamping API. Added chord with groups test
* Implemented stamping inside replace and added test for an implementation
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Added additional tests for chord, improved coverage
* Added additional tests for chord, improved coverage
* Added additional tests for chord, improved coverage
* Split into subtests
* Group stamping rollback
* group.id is None fixed
* Added integration test
* Added integration test
* apply_async fixed
* Integration test and test_chord fixed
* Lint fixed
* chord freeze fixed
* Minor fixes.
* Chain apply_async fixed and tests fixed
* lint fixed
* Added integration test for chord
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* type -> isinstance
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Redo header stamping (#7341):
  * _freeze_gid dict merge fixed
  * OptionsVisitor created
  * Fixed canvas.py
  * [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
  * Added test for simple test for chord and fixed chord implementation
  * Changed _IMMUTABLE_OPTIONS
  * Fixed chord interface
  * Fixed chord interface
  * Fixed chord interface
  * Fixed chord interface
  * Fixed list order
  * Fixed tests (stamp test and chord test), fixed order in groups
  * [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
  * Fixed lint and elements
  * Changed implementation of stamp API and fix lint
  * Added documentation to Stamping API. Added chord with groups test
  * Implemented stamping inside replace and added test for an implementation
  * [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
  * Added additional tests for chord, improved coverage
  * Added additional tests for chord, improved coverage
  * Added additional tests for chord, improved coverage
  * Split into subtests
  * Group stamping rollback
  * group.id is None fixed
  * Added integration test
  * Added integration test
  * apply_async fixed
  * Integration test and test_chord fixed
  * Lint fixed
  * chord freeze fixed
  * Minor fixes.
  * Chain apply_async fixed and tests fixed
  * lint fixed
  * Added integration test for chord
  * [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
  * type -> isinstance
  * [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci

  Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
  Co-authored-by: Omer Katz <[email protected]>

* Added stamping mechanism
* Manual stamping improved
* flake8 fixed
* Added subtests
* Add comma.
* Moved groups to stamps
* Fixed chord and added test for that
* Strip down the header-stamping PR to the basics.
* Serialize groups.
* Add groups to result backend meta data.
* Fix spelling mistake.
* Revert changes to canvas.py
* Revert changes to app/base.py
* Add stamping implementation to canvas.py
* Send task to AMQP with groups.
* Successfully pass single group to result.
* _freeze_gid dict merge fixed
* First draft of the visitor API.
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* OptionsVisitor created
* Fixed canvas.py
* Added test for simple test for chord and fixed chord implementation
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Changed _IMMUTABLE_OPTIONS
* Fixed chord interface
* Fixed chord interface
* Fixed chord interface
* Fixed chord interface
* Fixed list order
* Fixed tests (stamp test and chord test), fixed order in groups
* Fixed lint and elements
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Changed implementation of stamp API and fix lint
* Added documentation to Stamping API. Added chord with groups test
* Implemented stamping inside replace and added test for an implementation
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Added additional tests for chord, improved coverage
* Added additional tests for chord, improved coverage
* Added additional tests for chord, improved coverage
* Split into subtests
* Group stamping rollback
* group.id is None fixed
* Added integration test
* Added integration test
* apply_async fixed
* Integration test and test_chord fixed
* Lint fixed
* chord freeze fixed
* Minor fixes.
* Chain apply_async fixed and tests fixed
* lint fixed
* Added integration test for chord
* type -> isinstance
* Added stamping mechanism
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Manual stamping improved
* fail_ci_if_error uncommented
* flake8 fixed
* Added subtests
* Changes
* Add comma.
* Fixed chord and added test for that
* canvas.py fixed
* Test chord.py fixed
* Fixed stamped_headers
* collections import fixed
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* collections import fixed
* Update celery/backends/base.py

  Co-authored-by: Omer Katz <[email protected]>

* amqp.py fixed
* Refrain from using deprecated import path.
* Fix test_complex_chain regression.

  Whenever we stamp a group we need to freeze it first if it wasn't already frozen. Somewhere along the line, the group id changed because we were freezing twice. This commit places the stamping operation after preparing the chain's steps, which fixes the problem somehow. We don't know why yet.

* Fixed integration tests
* Fixed integration tests
* Fixed integration tests
* Fixed integration tests
* Fixed issues with maybe_list. Add documentation
* Fixed potential issue with integration tests
* Fixed issues with _regen
* Fixed issues with _regen
* Fixed test_generator issues
* Fixed _regen stamping
* Fixed _regen stamping
* Fixed TimeOut issue
* Fixed TimeOut issue
* Fixed TimeOut issue
* Update docs/userguide/canvas.rst

  Co-authored-by: Omer Katz <[email protected]>

* Fixed Couchbase
* Better stamping intro
* New GroupVisitor example
* Adjust documentation.

Co-authored-by: Naomi Elstein <[email protected]>
Co-authored-by: Omer Katz <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Asif Saif Uddin <[email protected]>
Co-authored-by: Omer Katz <[email protected]>
https://github.com/celery/celery.git
def apply(self, args=None, kwargs=None, **options):
    args = args if args else ()
    kwargs = kwargs if kwargs else {}
    groups = self.options.get("groups")
    stamped_headers = self.options.get("stamped_headers")
    self.stamp(visitor=GroupStampingVisitor(groups=groups, stamped_headers=stamped_headers))
    # Extra options set to None are dismissed
    options = {k: v for k, v in options.items() if v is not None}
    # For callbacks: extra args are prepended to the stored args.
    args, kwargs, options = self._merge(args, kwargs, options)
    return self.type.apply(args, kwargs, **options)
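A hypothetical usage sketch (names are illustrative, not from the PR) showing the merge semantics noted in the comment above: call-time args are prepended to the args stored in the signature, and `.apply()` runs the task eagerly:

```python
from celery import Celery

app = Celery("demo")  # no broker needed for eager .apply()

@app.task
def add(x, y):
    return x + y

sig = add.s(2)            # partial signature, stores args=(2,)
result = sig.apply((3,))  # call-time 3 is prepended -> computes add(3, 2)
print(result.get())       # 5
```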
122
canvas.py
Python
celery/canvas.py
1c4ff33bd22cf94e297bd6449a06b5a30c2c1fbc
celery
5
288,859
63
10
20
252
14
0
109
181
test_ecobee3_add_sensors_at_runtime
Migrate HomeKit Controller to use stable identifiers (#80064)
https://github.com/home-assistant/core.git
async def test_ecobee3_add_sensors_at_runtime(hass):
    entity_registry = er.async_get(hass)

    # Set up a base Ecobee 3 with no additional sensors.
    # There shouldn't be any entities but climate visible.
    accessories = await setup_accessories_from_file(hass, "ecobee3_no_sensors.json")
    await setup_test_accessories(hass, accessories)

    climate = entity_registry.async_get("climate.homew")
    assert climate.unique_id == "00:00:00:00:00:00_1_16"

    occ1 = entity_registry.async_get("binary_sensor.kitchen")
    assert occ1 is None

    occ2 = entity_registry.async_get("binary_sensor.porch")
    assert occ2 is None

    occ3 = entity_registry.async_get("binary_sensor.basement")
    assert occ3 is None

    # Now added 3 new sensors at runtime - sensors should appear and climate
    # shouldn't be duplicated.
    accessories = await setup_accessories_from_file(hass, "ecobee3.json")
    await device_config_changed(hass, accessories)

    occ1 = entity_registry.async_get("binary_sensor.kitchen")
    assert occ1.unique_id == "00:00:00:00:00:00_2_56"

    occ2 = entity_registry.async_get("binary_sensor.porch")
    assert occ2.unique_id == "00:00:00:00:00:00_3_56"

    occ3 = entity_registry.async_get("binary_sensor.basement")
    assert occ3.unique_id == "00:00:00:00:00:00_4_56"
138
test_ecobee3.py
Python
tests/components/homekit_controller/specific_devices/test_ecobee3.py
f23b1750e85f07091eb896a0b12b8f95e5646338
core
1
281,119
21
11
18
97
18
0
24
158
call_dev
Crypto menu refactor (#1119)

* enabled some crypto commands in dd to be called independent of source loaded
* support for coin_map_df in all dd functions + load ta and plot chart refactor
* updated tests and removed coingecko scraping where possible
* removed ref of command from hugo
* updated pycoingecko version
* refactoring load
* refactored load to fetch prices; pred can run independent of source now
* load by default usd on cp/cg and usdt on cb/bin
* updated to rich for formatting and updated dependencies
* fixed changes requested
* update docs
* revert discord requirements
* removed absolute from calculate change for price
* fixing pr issues
* fix loading issue when similar coins exist, move coins to home, fill n/a
* update docs for coins
* adds load to ta and pred menu
https://github.com/OpenBB-finance/OpenBBTerminal.git
def call_dev(self, other_args):
    parser = argparse.ArgumentParser(
        add_help=False,
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
        prog="dev",
        description=,
    )
    ns_parser = parse_known_args_and_warn(
        parser, other_args, EXPORT_ONLY_RAW_DATA_ALLOWED
    )
    if ns_parser:
        pycoingecko_view.display_dev(
            self.coin_map_df["CoinGecko"], ns_parser.export
        )
61
dd_controller.py
Python
gamestonk_terminal/cryptocurrency/due_diligence/dd_controller.py
ea964109d654394cc0a5237e6ec5510ba6404097
OpenBBTerminal
2
20,763
6
7
3
27
3
0
6
20
unsplit
check point progress on only bringing in pip==22.0.4 (#4966)

* vendor in pip==22.0.4
* updating vendor packaging version
* update pipdeptree to fix pipenv graph with new version of pip.
* Vendoring of pip-shims 0.7.0
* Vendoring of requirementslib 1.6.3
* Update pip index safety restrictions patch for pip==22.0.4
* Update patches
* exclude pyproject.toml from black to see if that helps.
* Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def unsplit(self) -> None:
    del self._children[:]
15
layout.py
Python
pipenv/patched/notpip/_vendor/rich/layout.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
1
300,613
28
13
10
107
15
0
33
106
timestamp_custom
Fail template functions when no default specified (#71687)
https://github.com/home-assistant/core.git
def timestamp_custom(value, date_format=DATE_STR_FORMAT, local=True, default=_SENTINEL):
    try:
        date = dt_util.utc_from_timestamp(value)
        if local:
            date = dt_util.as_local(date)
        return date.strftime(date_format)
    except (ValueError, TypeError):
        # If timestamp can't be converted
        if default is _SENTINEL:
            raise_no_default("timestamp_custom", value)
        return default
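Hypothetical calls (an editor's sketch, assuming the helper above is importable from homeassistant.helpers.template): format a Unix timestamp with a custom strftime pattern, or fall back to an explicit default when the input can't be converted:

```python
print(timestamp_custom(1640995200, "%Y-%m-%d", local=False))         # 2022-01-01
print(timestamp_custom("bad-input", "%Y-%m-%d", default="unknown"))  # unknown
```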
66
template.py
Python
homeassistant/helpers/template.py
4885331509eeffe50f42d76b234996467b06170f
core
4
106,953
53
8
13
214
19
0
110
215
rotate
Micro-optimize rotation transform.

The following test script shows a ~3x speedup.

```python
import math, numpy as np

mtx = np.array([[.1, .2, .3], [.4, .5, .6], [0, 0, 1]])
theta = np.pi / 4

def rotate(mtx, theta):
    a = math.cos(theta)
    b = math.sin(theta)
    rotate_mtx = np.array([[a, -b, 0.0], [b, a, 0.0], [0.0, 0.0, 1.0]], float)
    return np.dot(rotate_mtx, mtx)

def rfast(mtx, theta):
    a = math.cos(theta)
    b = math.sin(theta)
    (xx, xy, x0), (yx, yy, y0), _ = mtx.tolist()
    # mtx = [[a -b 0], [b a 0], [0 0 1]] * mtx
    mtx[0, 0] = a * xx - b * yx
    mtx[0, 1] = a * xy - b * yy
    mtx[0, 2] = a * x0 - b * y0
    mtx[1, 0] = b * xx + a * yx
    mtx[1, 1] = b * xy + a * yy
    mtx[1, 2] = b * x0 + a * y0
    return mtx

%timeit rotate(mtx, theta)
%timeit rfast(mtx, theta)
```
https://github.com/matplotlib/matplotlib.git
def rotate(self, theta):
    a = math.cos(theta)
    b = math.sin(theta)
    mtx = self._mtx
    # Operating and assigning one scalar at a time is much faster.
    (xx, xy, x0), (yx, yy, y0), _ = mtx.tolist()
    # mtx = [[a -b 0], [b a 0], [0 0 1]] * mtx
    mtx[0, 0] = a * xx - b * yx
    mtx[0, 1] = a * xy - b * yy
    mtx[0, 2] = a * x0 - b * y0
    mtx[1, 0] = b * xx + a * yx
    mtx[1, 1] = b * xy + a * yy
    mtx[1, 2] = b * x0 + a * y0
    self.invalidate()
    return self
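A correctness check to complement the speed benchmark in the commit message (an editor's sketch, not from matplotlib): the scalar-at-a-time update is the same linear map as the naive matrix product:

```python
import math
import numpy as np

theta = np.pi / 4
mtx = np.array([[.1, .2, .3], [.4, .5, .6], [0., 0., 1.]])
a, b = math.cos(theta), math.sin(theta)
rotate_mtx = np.array([[a, -b, 0.], [b, a, 0.], [0., 0., 1.]])
expected = rotate_mtx @ mtx  # new array, computed before mutating mtx

(xx, xy, x0), (yx, yy, y0), _ = mtx.tolist()
mtx[0, 0] = a * xx - b * yx
mtx[0, 1] = a * xy - b * yy
mtx[0, 2] = a * x0 - b * y0
mtx[1, 0] = b * xx + a * yx
mtx[1, 1] = b * xy + a * yy
mtx[1, 2] = b * x0 + a * y0
assert np.allclose(mtx, expected)  # row 2 ([0, 0, 1]) is unchanged by the map
```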
143
transforms.py
Python
lib/matplotlib/transforms.py
ff120cdc5aef1d609913678b1ac8c26e6f30691e
matplotlib
1
113,296
88
15
45
734
23
0
191
730
import_data_test
[FEAT]: resume waiting/running, dedup on tuner side (TPE-only) (#4931)
https://github.com/microsoft/nni.git
def import_data_test(self, tuner_factory, stype="choice_str", support_middle=True):
    if stype == "choice_str":
        search_space = {
            "choice_str": {
                "_type": "choice",
                "_value": ["cat", "dog", "elephant", "cow", "sheep", "panda", "tiger"]
            }
        }
    elif stype == "choice_num":
        search_space = {
            "choice_num": {
                "_type": "choice",
                "_value": [10, 20, 30, 40, 50, 60]
            }
        }
    else:
        raise RuntimeError("Unexpected stype")
    tuner = tuner_factory()
    self.assertIsInstance(tuner, Tuner)
    tuner.update_search_space(search_space)

    # import data at the beginning
    if stype == "choice_str":
        data = [{"parameter": {"choice_str": "cat"}, "value": 1.1},
                {"parameter": {"choice_str": "dog"}, "value": {"default": 1.2, "tmp": 2}}]
    else:
        data = [{"parameter": {"choice_num": 20}, "value": 1.1},
                {"parameter": {"choice_num": 60}, "value": {"default": 1.2, "tmp": 2}}]
    tuner.import_data(data)
    logger.info("Imported data successfully at the beginning")

    # generate parameters
    parameters = tuner.generate_multiple_parameters(list(range(3)))
    for i in range(3):
        tuner.receive_trial_result(i, parameters[i], random.uniform(-100, 100))
    if not support_middle:
        return

    # import data in the middle
    if stype == "choice_str":
        data = [{"parameter": {"choice_str": "cat"}, "value": 1.1},
                {"parameter": {"choice_str": "dog"}, "value": {"default": 1.2, "tmp": 2}},
                {"parameter": {"choice_str": "cow"}, "value": 1.3}]
    else:
        data = [{"parameter": {"choice_num": 20}, "value": 1.1},
                {"parameter": {"choice_num": 60}, "value": {"default": 1.2, "tmp": 2}},
                {"parameter": {"choice_num": 50}, "value": 1.3}]
    tuner.import_data(data)
    logger.info("Imported data successfully in the middle")

    # generate parameters again
    parameters = tuner.generate_multiple_parameters([3])
    tuner.receive_trial_result(3, parameters[0], random.uniform(-100, 100))
429
test_builtin_tuners.py
Python
test/ut/sdk/test_builtin_tuners.py
d03c411c8eb865a840242d59260a942d041e2fba
nni
7
155,163
8
10
3
49
8
0
8
17
wait
FEAT-#5053: Add pandas on unidist execution with MPI backend (#5059) Signed-off-by: Igoshev, Iaroslav <[email protected]>
https://github.com/modin-project/modin.git
def wait(obj_refs):
    unique_refs = list(set(obj_refs))
    return unidist.wait(unique_refs, num_returns=len(unique_refs))
29
utils.py
Python
modin/core/execution/unidist/common/utils.py
193505fdf0c984743397ba3df56262f30aee13a8
modin
1
288,487
13
9
5
43
4
0
15
47
template
Refactor bayesian observations using dataclass (#79590)

* refactor
* remove some changes
* remove typehint
* improve codestyle
* move docstring to comment
* < 88 chars
* avoid short var names
* more readable
* fix rename
* Update homeassistant/components/bayesian/helpers.py

  Co-authored-by: epenet <[email protected]>

* Update homeassistant/components/bayesian/binary_sensor.py

  Co-authored-by: epenet <[email protected]>

* Update homeassistant/components/bayesian/binary_sensor.py

  Co-authored-by: epenet <[email protected]>

* no intermediate
* comment why set before list

Co-authored-by: epenet <[email protected]>
https://github.com/home-assistant/core.git
def template(self) -> str | None:
    if self.value_template is not None:
        return self.value_template.template
    return None
26
helpers.py
Python
homeassistant/components/bayesian/helpers.py
dd1463da287f591652e47b00eee0c5b77f5f5b7c
core
2
249,242
21
10
12
109
14
0
22
97
test_media_does_not_exist
Use literals in place of `HTTPStatus` constants in tests (#13479)

Replace
- `HTTPStatus.NOT_FOUND`
- `HTTPStatus.FORBIDDEN`
- `HTTPStatus.UNAUTHORIZED`
- `HTTPStatus.CONFLICT`
- `HTTPStatus.CREATED`

Signed-off-by: Dirk Klimpel <[email protected]>
https://github.com/matrix-org/synapse.git
def test_media_does_not_exist(self) -> None:
    url = "/_synapse/admin/v1/media/%s/%s" % (self.server_name, "12345")

    channel = self.make_request(
        "DELETE",
        url,
        access_token=self.admin_user_tok,
    )

    self.assertEqual(404, channel.code, msg=channel.json_body)
    self.assertEqual(Codes.NOT_FOUND, channel.json_body["errcode"])
67
test_media.py
Python
tests/rest/admin/test_media.py
1595052b2681fb86c1c1b9a6028c1bc0d38a2e4b
synapse
1
21,744
8
9
11
38
5
0
8
14
inline_table
Update tomlkit==0.9.2

Used: python -m invoke vendoring.update --package=tomlkit
https://github.com/pypa/pipenv.git
def inline_table() -> InlineTable:
    return InlineTable(Container(), Trivia(), new=True)
22
api.py
Python
pipenv/vendor/tomlkit/api.py
8faa74cdc9da20cfdcc69f5ec29b91112c95b4c9
pipenv
1
323,146
78
15
28
341
31
0
115
408
_pad_across_processes
[Trainer] Add init version of paddlenlp trainer and apply finetune for ernie-1.0 pretraining. (#1761)

* add some datasets for finetune.
* support fine tune for all tasks.
* add trainer prototype.
* init version for paddlenlp trainer.
* refine trainer.
* update for some details.
* support multi-cards training evaluation.
* support load from ckpt.
* support for export inference model.
* first version of trainer.
* seq cls support clue.
* trainer support for token classification and question answering tasks.
* fix as reviews.

Co-authored-by: Zeyu Chen <[email protected]>
https://github.com/PaddlePaddle/PaddleNLP.git
def _pad_across_processes(self, tensor, pad_index=-100):
    if isinstance(tensor, (list, tuple)):
        return type(tensor)(self._pad_across_processes(
            t, pad_index=pad_index) for t in tensor)
    elif isinstance(tensor, dict):
        return type(tensor)({
            k: self._pad_across_processes(v, pad_index=pad_index)
            for k, v in tensor.items()
        })
    elif not isinstance(tensor, paddle.Tensor):
        raise TypeError(
            f"Can't pad the values of type {type(tensor)}, only of nested list/tuple/dicts of tensors."
        )

    if len(tensor.shape) < 2:
        return tensor
    # Gather all sizes
    size = paddle.to_tensor(tensor.shape)[None]
    sizes = self._nested_gather(size).cpu()
    max_size = max(s[1] for s in sizes)
    if tensor.shape[1] == max_size:
        return tensor

    # Then pad to the maximum size
    old_size = tensor.shape
    new_size = list(old_size)
    new_size[1] = max_size
    # new_tensor = tensor.new_zeros(tuple(new_size)) + pad_index
    new_tensor = paddle.zeros(
        tuple(new_size), dtype=tensor.dtype) + pad_index
    new_tensor[:, :old_size[1]] = tensor
    return new_tensor
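A standalone sketch of the padding step above (an editor's illustration; shapes and values are hypothetical, and it assumes paddlepaddle is installed): grow dimension 1 of a tensor to a target size, filling new cells with the pad index:

```python
import paddle

t = paddle.to_tensor([[1, 2], [3, 4]])
max_size = 4  # pretend another process holds tensors with 4 columns
new_size = list(t.shape)
new_size[1] = max_size
new_tensor = paddle.zeros(tuple(new_size), dtype=t.dtype) - 100  # pad_index=-100
new_tensor[:, : t.shape[1]] = t
print(new_tensor.numpy())  # [[1 2 -100 -100] [3 4 -100 -100]]
```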
214
trainer_base.py
Python
paddlenlp/trainer/trainer_base.py
44a290e94d1becd1f09fddc3d873f9e19c9d6919
PaddleNLP
9
124,808
103
16
22
221
17
0
140
490
prepare_for_shutdown
docs: Fix a few typos (#26556)

There are small typos in:
- doc/source/data/faq.rst
- python/ray/serve/replica.py

Fixes:
- Should read `successfully` rather than `succssifully`.
- Should read `pseudo` rather than `psuedo`.
https://github.com/ray-project/ray.git
async def prepare_for_shutdown(self):
    while True:
        # Sleep first because we want to make sure all the routers receive
        # the notification to remove this replica first.
        await asyncio.sleep(self._shutdown_wait_loop_s)
        method_stat = self._get_handle_request_stats()
        # The handle_request method wasn't even invoked.
        if method_stat is None:
            break
        # The handle_request method has 0 inflight requests.
        if method_stat["running"] + method_stat["pending"] == 0:
            break
        else:
            logger.info(
                "Waiting for an additional "
                f"{self._shutdown_wait_loop_s}s to shut down because "
                f"there are {self.num_ongoing_requests} ongoing requests."
            )

    # Explicitly call the del method to trigger clean up.
    # We set the del method to noop after successfully calling it so the
    # destructor is called only once.
    try:
        if hasattr(self.callable, "__del__"):
            # Make sure to accept `async def __del__(self)` as well.
            await sync_to_async(self.callable.__del__)()
    except Exception as e:
        logger.exception(f"Exception during graceful shutdown of replica: {e}")
    finally:
        if hasattr(self.callable, "__del__"):
            del self.callable.__del__
110
replica.py
Python
python/ray/serve/replica.py
e42dc7943e11449e224419b0bae846766c06bbab
ray
8
45,056
6
10
3
38
7
0
6
20
all_weight_rules
Refactor TriggerRule & WeightRule classes to inherit from Enum (#21264)

closes: #19905
related: #5302, #18627

Co-authored-by: Tzu-ping Chung <[email protected]>
Co-authored-by: Tzu-ping Chung <[email protected]>
https://github.com/apache/airflow.git
def all_weight_rules(cls) -> Set[str]:
    return set(cls.__members__.values())
22
weight_rule.py
Python
airflow/utils/weight_rule.py
9ad4de835cbbd296b9dbd1ff0ea88c1cd0050263
airflow
1
299,993
76
12
21
234
22
0
104
285
_get_content_filter
Relax dlna_dmr filtering when browsing media (#69576)

* Fix incorrect types of test data structures
* Loosen MIME-type filtering for async_browse_media
* Add option to not filter results when browsing media

  Some devices do not report all that they support, and in this case filtering will hide media that's actually playable. Most devices are OK, though, and it's better to hide what they can't play. Add an option, off by default, to show all media.

* Fix linting issues
https://github.com/home-assistant/core.git
def _get_content_filter(self) -> Callable[[BrowseMedia], bool]:
    if not self._device or not self._device.sink_protocol_info:
        # Nothing is specified by the renderer, so show everything
        _LOGGER.debug("Get content filter with no device or sink protocol info")
        return lambda _: True

    _LOGGER.debug("Get content filter for %s", self._device.sink_protocol_info)
    if self._device.sink_protocol_info[0] == "*":
        # Renderer claims it can handle everything, so show everything
        return lambda _: True

    # Convert list of things like "http-get:*:audio/mpeg;codecs=mp3:*"
    # to just "audio/mpeg"
    content_types = set[str]()
    for protocol_info in self._device.sink_protocol_info:
        protocol, _, content_format, _ = protocol_info.split(":", 3)
        # Transform content_format for better generic matching
        content_format = content_format.lower().replace("/x-", "/", 1)
        content_format = content_format.partition(";")[0]
        if protocol in STREAMABLE_PROTOCOLS:
            content_types.add(content_format)
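A self-contained sketch of the protocolInfo parsing shown above (an editor's illustration; the input strings are made up, and `"http-get"` stands in for the component's STREAMABLE_PROTOCOLS constant):

```python
infos = [
    "http-get:*:audio/mpeg;codecs=mp3:*",
    "http-get:*:audio/x-wav:*",
    "internal:*:video/mp4:*",  # not a streamable protocol, gets skipped
]
content_types = set()
for protocol_info in infos:
    protocol, _, content_format, _ = protocol_info.split(":", 3)
    content_format = content_format.lower().replace("/x-", "/", 1)
    content_format = content_format.partition(";")[0]
    if protocol == "http-get":
        content_types.add(content_format)
print(content_types)  # {'audio/mpeg', 'audio/wav'}
```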
143
media_player.py
Python
homeassistant/components/dlna_dmr/media_player.py
eebf3acb93507f8f706f8043d57fdfd09942a750
core
6
304,811
35
10
17
107
10
0
52
231
_queued_event_check
Improve type hint in flic binary sensor entity (#77161)
https://github.com/home-assistant/core.git
def _queued_event_check(self, click_type, time_diff):
    time_string = f"{time_diff:d} {'second' if time_diff == 1 else 'seconds'}"

    if time_diff > self._timeout:
        _LOGGER.warning(
            "Queued %s dropped for %s. Time in queue was %s",
            click_type,
            self._address,
            time_string,
        )
        return True

    _LOGGER.info(
        "Queued %s allowed for %s. Time in queue was %s",
        click_type,
        self._address,
        time_string,
    )
    return False
55
binary_sensor.py
Python
homeassistant/components/flic/binary_sensor.py
3031caafed9811e0b3da146c2ee5a8a7f0080b5e
core
2
3,604
9
7
5
34
5
0
10
42
state_checkpoint_interval
:tada: Source Looker: Migrate to native CDK (#9609)
https://github.com/airbytehq/airbyte.git
def state_checkpoint_interval(self) -> Optional[int]:
    if self._is_finished:
        return 1
    return 100
20
streams.py
Python
airbyte-integrations/connectors/source-looker/source_looker/streams.py
27b5ba338656b9adbfc8ebd90960a200a14d5935
airbyte
2
141,965
50
10
13
153
26
0
55
116
test_syncer_callback_wait_for_all_error
[tune] Refactor Syncer / deprecate Sync client (#25655)

This PR includes / depends on #25709

The two concepts of Syncer and SyncClient are confusing, as is the current API for passing custom sync functions. This PR refactors Tune's syncing behavior. The Sync client concept is hard deprecated. Instead, we offer a well defined Syncer API that can be extended to provide their own syncing functionality. However, the default will be to use Ray AIR's file transfer utilities.

New API:
- Users can pass `syncer=CustomSyncer` which implements the `Syncer` API
- Otherwise our off-the-shelf syncing is used
- As before, syncing to cloud disables syncing to driver

Changes:
- Sync client is removed
- Syncer interface introduced
- _DefaultSyncer is a wrapper around the URI upload/download API from Ray AIR
- SyncerCallback only uses remote tasks to synchronize data
- Rsync syncing is fully deprecated and removed
- Docker and kubernetes-specific syncing is fully deprecated and removed
- Testing is improved to use `file://` URIs instead of mock sync clients
https://github.com/ray-project/ray.git
def test_syncer_callback_wait_for_all_error(ray_start_2_cpus, temp_data_dirs):
    tmp_source, tmp_target = temp_data_dirs

    syncer_callback = TestSyncerCallback(
        sync_period=0,
        local_logdir_override=tmp_target,
    )

    trial1 = MockTrial(trial_id="a", logdir=tmp_source)

    # Inject FailingProcess into callback
    sync_process = syncer_callback._get_trial_sync_process(trial1)
    sync_process.should_fail = True

    # This sync will fail because the remote location does not exist
    syncer_callback.on_trial_result(iteration=1, trials=[], trial=trial1, result={})

    with pytest.raises(TuneError) as e:
        syncer_callback.wait_for_all()
    assert "At least one" in e
92
test_syncer_callback.py
Python
python/ray/tune/tests/test_syncer_callback.py
6313ddc47cf9df4df8c8907997df559850a1b874
ray
1
275,131
4
6
2
19
3
0
4
18
variable_dtype
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def variable_dtype(self):
    return self._variable_dtype
10
policy.py
Python
keras/mixed_precision/policy.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
1
248,360
25
12
36
139
9
0
41
299
test_query_3pe_authenticates_token
Add authentication to thirdparty bridge APIs (#12746) Co-authored-by: Brendan Abolivier <[email protected]>
https://github.com/matrix-org/synapse.git
def test_query_3pe_authenticates_token(self):
    SUCCESS_RESULT_USER = [
        {
            "protocol": PROTOCOL,
            "userid": "@a:user",
            "fields": {
                "more": "fields",
            },
        }
    ]
    SUCCESS_RESULT_LOCATION = [
        {
            "protocol": PROTOCOL,
            "alias": "#a:room",
            "fields": {
                "more": "fields",
            },
        }
    ]

    URL_USER = f"{URL}/_matrix/app/unstable/thirdparty/user/{PROTOCOL}"
    URL_LOCATION = f"{URL}/_matrix/app/unstable/thirdparty/location/{PROTOCOL}"

    self.request_url = None
178
test_api.py
Python
tests/appservice/test_api.py
6855024e0a363ff09d50586dcf1b089b77ac3b0c
synapse
1
34,270
26
12
11
112
11
0
32
149
_clean_text
Add FastTokenizer to REALM (#15211)

* Remove BertTokenizer abstraction
* Add FastTokenizer to REALM
* Fix config archive map
* Fix copies
* Update realm.mdx
* Apply suggestions from code review
https://github.com/huggingface/transformers.git
def _clean_text(self, text):
    output = []
    for char in text:
        cp = ord(char)
        if cp == 0 or cp == 0xFFFD or _is_control(char):
            continue
        if _is_whitespace(char):
            output.append(" ")
        else:
            output.append(char)
    return "".join(output)
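A runnable sketch of the same cleaning rules (an editor's illustration; the `_is_control` / `_is_whitespace` helpers below are simplified stand-ins for the library's private helpers, not the real ones):

```python
import unicodedata

def _is_control(char):
    # Tabs and newlines are treated as whitespace, not control characters.
    return char not in "\t\n\r" and unicodedata.category(char).startswith("C")

def _is_whitespace(char):
    return char in " \t\n\r" or unicodedata.category(char) == "Zs"

text = "hello\tworld\x00\ufffd."
output = []
for char in text:
    cp = ord(char)
    if cp == 0 or cp == 0xFFFD or _is_control(char):
        continue  # drop NUL, U+FFFD, and control characters
    output.append(" " if _is_whitespace(char) else char)
print("".join(output))  # 'hello world.'
```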
65
tokenization_realm.py
Python
src/transformers/models/realm/tokenization_realm.py
841d979190319098adc8101f9820a02ee3be4c8b
transformers
6
191,408
19
9
5
52
7
0
21
36
test_document_lookups_dont_exist
Harrison/add react chain (#24)

from https://arxiv.org/abs/2210.03629

still need to think if docstore abstraction makes sense
https://github.com/hwchase17/langchain.git
def test_document_lookups_dont_exist() -> None:
    page = Document(page_content=_PAGE_CONTENT)
    # Start with lookup on "harrison".
    output = page.lookup("harrison")
    assert output == "No Results"
27
test_document.py
Python
tests/unit_tests/docstore/test_document.py
ce7b14b84381c766ae42a0f71953b2a56c024dbb
langchain
1
52,343
46
11
20
244
29
0
57
351
tracking
Update mot modules (#2111)

* remove fluid api
* fix readme
https://github.com/PaddlePaddle/PaddleHub.git
def tracking(self, video_stream, output_dir='mot_result', visualization=True, draw_threshold=0.5, use_gpu=False):
    self.video_stream = video_stream
    self.output_dir = output_dir
    self.visualization = visualization
    self.draw_threshold = draw_threshold
    self.use_gpu = use_gpu

    cfg = load_config(os.path.join(self.directory, 'config', 'fairmot_dla34_30e_1088x608.yml'))
    check_config(cfg)

    place = 'gpu:0' if use_gpu else 'cpu'
    place = paddle.set_device(place)
    paddle.disable_static()
    tracker = StreamTracker(cfg, mode='test')

    # load weights
    tracker.load_weights_jde(self.pretrained_model)
    signal.signal(signal.SIGINT, self.signalhandler)

    # inference
    tracker.videostream_predict(video_stream=video_stream,
                                output_dir=output_dir,
                                data_type='mot',
                                model_type='FairMOT',
                                visualization=visualization,
                                draw_threshold=draw_threshold)
152
module.py
Python
modules/video/multiple_object_tracking/fairmot_dla34/module.py
e8f5bf8eab3aa1159fe0d96d87be64a2ba6fd7f2
PaddleHub
2
299,420
6
10
2
54
8
1
6
11
kpl_properties_data_fixture
Insteon Device Control Panel (#70834)

Co-authored-by: Paulus Schoutsen <[email protected]>
https://github.com/home-assistant/core.git
def kpl_properties_data_fixture():
    return json.loads(load_fixture("insteon/kpl_properties.json"))


@pytest.fixture(name="iolinc_properties_data", scope="session")
@pytest.fixture(name="iolinc_properties_data", scope="session")
15
test_api_properties.py
Python
tests/components/insteon/test_api_properties.py
a9ca774e7ed1d8fe502a53d5b765c1d9b393a524
core
1
181,660
10
9
11
54
6
0
16
73
test_dense2_with_non_sparse_components
Revert "Deployed 7ccda9a with MkDocs version: 1.3.0" This reverts commit bd9629c40e01241766197119b581a99409b07068.
https://github.com/EpistasisLab/tpot.git
def test_dense2_with_non_sparse_components():
    fit_then_transform(
        dense2_partial_1h, dense2, categorical_features=[True, True, False]
    )
    fit_then_transform_dense(
        dense2_partial_1h, dense2, categorical_features=[True, True, False]
    )
37
one_hot_encoder_tests.py
Python
tests/one_hot_encoder_tests.py
388616b6247ca4ea8de4e2f340d6206aee523541
tpot
1
316,448
13
13
30
61
10
0
13
39
test_flow_with_default_discovery
Search/replace RESULT_TYPE_* by FlowResultType enum (#74642)
https://github.com/home-assistant/core.git
async def test_flow_with_default_discovery(hass, manager, discovery_source):
    mock_integration(
        hass,
        MockModule("comp", async_setup_entry=AsyncMock(return_value=True)),
    )
    mock_entity_platform(hass, "config_flow.comp", None)
225
test_config_entries.py
Python
tests/test_config_entries.py
7cd68381f1d4f58930ffd631dfbfc7159d459832
core
1
181,736
23
10
13
99
18
0
24
91
test_fit_3
Revert "Deployed 7ccda9a with MkDocs version: 1.3.0" This reverts commit bd9629c40e01241766197119b581a99409b07068.
https://github.com/EpistasisLab/tpot.git
def test_fit_3():
    tpot_obj = TPOTClassifier(
        random_state=42,
        population_size=1,
        offspring_size=2,
        generations=1,
        subsample=0.8,
        verbosity=0,
        config_dict='TPOT light'
    )
    tpot_obj.fit(training_features, training_target)

    assert isinstance(tpot_obj._optimized_pipeline, creator.Individual)
    assert not (tpot_obj._start_datetime is None)
67
tpot_tests.py
Python
tests/tpot_tests.py
388616b6247ca4ea8de4e2f340d6206aee523541
tpot
1
292,452
81
13
19
176
19
0
112
352
async_added_to_hass
Add dlna_dms integration to support DLNA Digital Media Servers (#66437)
https://github.com/home-assistant/core.git
async def async_added_to_hass(self) -> None:
    # Try to connect to the last known location, but don't worry if not available
    if not self._device and self.location:
        try:
            await self.device_connect()
        except UpnpError as err:
            LOGGER.debug("Couldn't connect immediately: %r", err)

    # Get SSDP notifications for only this device
    self.config_entry.async_on_unload(
        await ssdp.async_register_callback(
            self.hass, self.async_ssdp_callback, {"USN": self.usn}
        )
    )

    # async_upnp_client.SsdpListener only reports byebye once for each *UDN*
    # (device name) which often is not the USN (service within the device)
    # that we're interested in. So also listen for byebye advertisements for
    # the UDN, which is reported in the _udn field of the combined_headers.
    self.config_entry.async_on_unload(
        await ssdp.async_register_callback(
            self.hass,
            self.async_ssdp_callback,
            {"_udn": self.udn, "NTS": NotificationSubType.SSDP_BYEBYE},
        )
    )
102
dms.py
Python
homeassistant/components/dlna_dms/dms.py
b19bf9b147f4321e89d1f7f01e68337f2102f460
core
4
69,015
4
8
2
25
3
0
4
2
get_doctypes_for_bank_reconciliation
fix: Remove Expense Claim from Bank Reconciliation

- add hooks `get_matching_queries` and `bank_reconciliation_doctypes` to extend the functionality in other apps
https://github.com/frappe/erpnext.git
def get_doctypes_for_bank_reconciliation():
    return frappe.get_hooks("bank_reconciliation_doctypes")
12
bank_transaction.py
Python
erpnext/accounts/doctype/bank_transaction/bank_transaction.py
466bf998354b7452fbbebb62b751a93c41aa2f8f
erpnext
1
276,031
24
12
8
111
18
0
30
118
_unblock_model_reconstruction
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def _unblock_model_reconstruction(self, layer_id, layer):
    for model_id, v in self.model_layer_dependencies.items():
        _, layers = v
        if layer_id not in layers:
            continue
        layers[layers.index(layer_id)] = layer
        if all(isinstance(x, base_layer.Layer) for x in layers):
            self._models_to_reconstruct.append(model_id)
71
load.py
Python
keras/saving/saved_model/load.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
5
245,504
8
8
18
34
6
0
8
14
autocast_box_type
[Refactor] Refactor pipelines with boxlist. (#8562)

* Refactor pipelines and data_preprocesser by boxlist
* Refactor browse_dataset.py
* Update
* Update
* Update
* Update
* update
* Update
* Change with_box_wrapped to with_boxlist
* Fix comments
* Fix commits
* Update UT
https://github.com/open-mmlab/mmdetection.git
def autocast_box_type(dst_box_type='hbox') -> Callable:
    _, box_type_cls = get_box_type(dst_box_type)
22
box_type.py
Python
mmdet/structures/bbox/box_type.py
af063a6f25ddae4de90646f86b2db824f3d00138
mmdetection
1
245,255
51
12
26
239
28
0
68
287
load_data_list
[Refactor] refactor XMLDataset and VOCDataset, and add VOCMetric
https://github.com/open-mmlab/mmdetection.git
def load_data_list(self) -> List[dict]:
    assert self._metainfo.get('CLASSES', None) is not None, \
        'CLASSES in `XMLDataset` can not be None.'
    self.cat2label = {
        cat: i
        for i, cat in enumerate(self._metainfo['CLASSES'])
    }

    data_list = []
    img_ids = mmcv.list_from_file(
        self.ann_file, file_client_args=self.file_client_args)
    for img_id in img_ids:
        file_name = osp.join(self.img_subdir, f'{img_id}.jpg')
        xml_path = osp.join(self.sub_data_root, self.ann_subdir,
                            f'{img_id}.xml')

        raw_img_info = {}
        raw_img_info['img_id'] = img_id
        raw_img_info['file_name'] = file_name
        raw_img_info['xml_path'] = xml_path

        parsed_data_info = self.parse_data_info(raw_img_info)
        data_list.append(parsed_data_info)
    return data_list
144
xml_style.py
Python
mmdet/datasets/xml_style.py
2d9e2a00bc8214f670fe13293201a050f7e58be5
mmdetection
3
155,986
6
9
2
38
5
0
6
20
ordered
Add `Series.str`, `Series.dt`, and `Series.cat` accessors to docs (#8757)

- Refactors the dataframe accessor implementation to define all methods/properties statically on import. This makes it so that methods can be accessed on the class itself, rather than only on instances.
- Use a different descriptor for adding the accessor classes to the `Series` class, which lets the accessor methods be accessed via e.g. `dd.Series.str.cat`.
- Fix a bug in `dd.Series.str.rsplit`
- General cleanliness improvements for the accessor code
- Update the api docs to include the accessors

Co-authored-by: Jim Crist-Harif <[email protected]>
https://github.com/dask/dask.git
def ordered(self):
    return self._delegate_property(self._series._meta, "cat", "ordered")
21
categorical.py
Python
dask/dataframe/categorical.py
9634da11a5a6e5eb64cf941d2088aabffe504adb
dask
1
22,223
70
14
23
177
17
0
98
279
get_dependencies_from_cache
Rename notpip to pip. Vendor in pip-22.2.1 and latest requirementslib and vistir.
https://github.com/pypa/pipenv.git
def get_dependencies_from_cache(ireq):
    if ireq.editable or not is_pinned_requirement(ireq):
        return
    if ireq not in DEPENDENCY_CACHE:
        return
    cached = set(DEPENDENCY_CACHE[ireq])

    # Preserving sanity: Run through the cache and make sure every entry is
    # valid. If this fails, something is wrong with the cache. Drop it.
    try:
        broken = False
        for line in cached:
            dep_ireq = shims.InstallRequirement.from_line(line)
            name = canonicalize_name(dep_ireq.name)
            if _marker_contains_extra(dep_ireq):
                broken = True  # The "extra =" marker breaks everything.
            elif name == canonicalize_name(ireq.name):
                broken = True  # A package cannot depend on itself.
            if broken:
                break
    except Exception:
        broken = True

    if broken:
        del DEPENDENCY_CACHE[ireq]
        return

    return cached
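A tiny sketch of the name normalization behind the self-dependency check above (an editor's illustration; it assumes the `packaging` library, which pipenv vendors, is importable):

```python
from packaging.utils import canonicalize_name

# Case and separator differences collapse to one canonical form.
assert canonicalize_name("Requests") == canonicalize_name("requests")
assert canonicalize_name("typing_extensions") == canonicalize_name("Typing-Extensions")
print(canonicalize_name("Typing_Extensions"))  # typing-extensions
```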
105
dependencies.py
Python
pipenv/vendor/requirementslib/models/dependencies.py
cd5a9683be69c86c8f3adcd13385a9bc5db198ec
pipenv
10
106,241
44
17
15
243
22
0
59
212
_n_descramble
Implement n-param descrambling using JSInterp

Fixes #29326, closes #29790, closes #30004, closes #30024, closes #30052, closes #30088, closes #30097, closes #30102, closes #30109, closes #30119, closes #30125, closes #30128, closes #30162, closes #30173, closes #30186, closes #30192, closes #30221, closes #30239, closes #30539, closes #30552.
https://github.com/ytdl-org/youtube-dl.git
def _n_descramble(self, n_param, player_url, video_id):
    sig_id = ('nsig_value', n_param)
    if sig_id in self._player_cache:
        return self._player_cache[sig_id]

    try:
        player_id = ('nsig', player_url)
        if player_id not in self._player_cache:
            self._player_cache[player_id] = self._extract_n_function(video_id, player_url)
        func = self._player_cache[player_id]
        self._player_cache[sig_id] = func(n_param)
        if self._downloader.params.get('verbose', False):
            self._downloader.to_screen('[debug] [%s] %s' % (
                self.IE_NAME,
                'Decrypted nsig {0} => {1}'.format(n_param, self._player_cache[sig_id])))
        return self._player_cache[sig_id]
    except Exception as e:
        raise ExtractorError(traceback.format_exc(), cause=e, video_id=video_id)
155
youtube.py
Python
youtube_dl/extractor/youtube.py
af9e72507ea38e5ab3fa2751ed09ec88021260cb
youtube-dl
5
32,145
59
13
15
206
8
0
96
183
booleans_processing
TF: remove graph mode distinction when processing boolean options (#18102)
https://github.com/huggingface/transformers.git
def booleans_processing(config, **kwargs):
    final_booleans = {}

    # Pure conv models (such as ConvNext) do not have `output_attentions`. If the signature has
    # `output_attentions`, it will be present here in `kwargs`, even if unset (in that case, as `None`)
    if "output_attentions" in kwargs:
        final_booleans["output_attentions"] = (
            kwargs["output_attentions"]
            if kwargs["output_attentions"] is not None
            else config.output_attentions
        )

    final_booleans["output_hidden_states"] = (
        kwargs["output_hidden_states"]
        if kwargs["output_hidden_states"] is not None
        else config.output_hidden_states
    )
    final_booleans["return_dict"] = (
        kwargs["return_dict"] if kwargs["return_dict"] is not None else config.return_dict
    )

    if "use_cache" in kwargs:
        final_booleans["use_cache"] = (
            kwargs["use_cache"]
            if kwargs["use_cache"] is not None
            else getattr(config, "use_cache", None)
        )
    return final_booleans
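A hypothetical demo of the fallback rule (an editor's sketch; `Cfg` is a stand-in for a real model config): explicit keyword values win, `None` falls back to the config:

```python
class Cfg:
    output_attentions = False
    output_hidden_states = True
    return_dict = True
    use_cache = True

out = booleans_processing(
    Cfg(),
    output_attentions=None,      # unset -> falls back to config (False)
    output_hidden_states=False,  # explicit -> kept (False)
    return_dict=None,            # unset -> falls back to config (True)
)
print(out)  # {'output_attentions': False, 'output_hidden_states': False, 'return_dict': True}
```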
120
modeling_tf_utils.py
Python
src/transformers/modeling_tf_utils.py
fcefa200b2d9636d98fd21ea3b176a09fe801c29
transformers
7
195,057
81
15
20
389
47
0
105
334
rank_eval_label_candidates
Decoder-Only Transformer (#4329)

* quick and dirty decoder-only implementation
* fix decoder_only incremental decoding
* remove unused code, add some comments, propagate func signature change
* consolidate code in decoder.py
* unify encoder_state
* export PassThroughEncoder
* add missing build_ functions
* defaults in TransformerDecoderLayer __init__
* comments, consolidating more logic, simplified forward_layers args
* resize token embeddings and unit test
* attempt to suppress some unused import warnings
* padded_tensor fp16 friendly
* autoformat
* decoder_only -> decoder
* more documentation
* update name in test
* add missing dict args
* more argument massaging
* update TestBartDistillation::test_narrow_distillation_losses numbers
* update TestTransformerDistillation::test_narrow_distillation_losses numbers
* fix _pad_tensor in seeker

Co-authored-by: klshuster <[email protected]>
https://github.com/facebookresearch/ParlAI.git
def rank_eval_label_candidates(self, batch, batchsize):
    # compute roughly ppl to rank candidates
    cand_choices = []
    cand_choices_scores = []
    encoder_states = self.model.encoder(*self._encoder_input(batch))
    for i in range(batchsize):
        num_cands = len(batch.candidate_vecs[i])
        enc = self.model.reorder_encoder_states(encoder_states, [i] * num_cands)
        cands, _ = self._pad_tensor(batch.candidate_vecs[i], is_label=True)
        cands = cands.to(batch.text_vec.device)
        scores, _ = self.model.decode_forced(enc, cands)
        score_view = scores.reshape(num_cands * cands.size(1), -1)
        cand_losses = F.cross_entropy(
            score_view, cands.view(-1), reduction='none'
        ).view(num_cands, cands.size(1))
        # now cand_losses is cands x seqlen size, but we still need to
        # check padding and such
        mask = (cands != self.NULL_IDX).float()
        cand_scores = (cand_losses * mask).sum(dim=1) / (mask.sum(dim=1) + 1e-9)
        sorted_scores, ordering = cand_scores.sort()
        cand_choices.append([batch.candidates[i][o] for o in ordering])
        cand_choices_scores.append(sorted_scores.tolist())

    return cand_choices, cand_choices_scores
249
torch_generator_agent.py
Python
parlai/core/torch_generator_agent.py
ecdfbd0bb2ab76876e9fd3817d4502c3938a2ade
ParlAI
3
3,753
10
9
10
46
10
0
10
31
_get_current_throttle_value
🎉 🎉 Source FB Marketing: performance and reliability fixes (#9805)

* Facebook Marketing performance improvement
* add comments and little refactoring
* fix integration tests with the new config
* improve job status handling, limit concurrency to 10
* fix campaign jobs, refactor manager
* big refactoring of async jobs, support random order of slices
* update source _read_incremental to hook new state logic
* fix issues with timeout
* remove debugging and clean up, improve retry logic
* merge changes from #8234
* fix call super _read_increment
* generalize batch execution, add use_batch flag
* improve coverage, do some refactoring of spec
* update test, remove overrides of source
* add split by AdSet
* add smaller insights
* fix end_date < start_date case
* add account_id to PK
* add notes
* fix new streams
* fix reversed incremental stream
* update spec.json for SAT
* upgrade CDK and bump version

Co-authored-by: Dmytro Rezchykov <[email protected]>
Co-authored-by: Eugene Kulak <[email protected]>
https://github.com/airbytehq/airbyte.git
def _get_current_throttle_value(self) -> float:
    throttle = self._api.api.ads_insights_throttle
    return min(throttle.per_account, throttle.per_application)
28
async_job_manager.py
Python
airbyte-integrations/connectors/source-facebook-marketing/source_facebook_marketing/streams/async_job_manager.py
a3aae8017a0a40ff2006e2567f71dccb04c997a5
airbyte
1
138,432
6
6
3
22
4
0
6
20
base_dir
[Datasets] Add Path Partitioning Support for All Content Types (#23624)

Adds a content-type-agnostic partition parser with support for filtering files. Also adds some corner-case bug fixes and usability improvements for supporting more robust input path types.
https://github.com/ray-project/ray.git
def base_dir(self) -> str:
    return self._base_dir
12
partitioning.py
Python
python/ray/data/datasource/partitioning.py
9f4cb9b3c9c27ae21bf7807595973231b6814648
ray
1
777
16
15
6
70
14
0
18
36
test_object_with_id_binary_deserialization
Refactored store interface to eliminate confusion with __getitem__

- Fixed serde tests affected by protobuf magic bytes
https://github.com/OpenMined/PySyft.git
def test_object_with_id_binary_deserialization() -> None:
    obj = sy.deserialize(blob=blob_bytes, from_bytes=True)
    assert obj == ObjectWithID(
        id=UID(value=uuid.UUID(int=333779996850170035686993356951732753684))
    )


# ----------------------- CHILDREN -----------------------
42
object_test.py
Python
packages/syft/tests/syft/core/common/object_test.py
b61c1fc4b83fc740d3d9d0d84d0ca6022a3c49bb
PySyft
1
269,421
25
8
2
149
18
0
33
38
decode_predictions
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def decode_predictions(preds, top=5):
    return imagenet_utils.decode_predictions(preds, top=top)


preprocess_input.__doc__ = imagenet_utils.PREPROCESS_INPUT_DOC.format(
    mode="",
    ret=imagenet_utils.PREPROCESS_INPUT_RET_DOC_CAFFE,
    error=imagenet_utils.PREPROCESS_INPUT_ERROR_DOC,
)
decode_predictions.__doc__ = imagenet_utils.decode_predictions.__doc__

DOC = 

setattr(ResNet50, "__doc__", ResNet50.__doc__ + DOC)
setattr(ResNet101, "__doc__", ResNet101.__doc__ + DOC)
setattr(ResNet152, "__doc__", ResNet152.__doc__ + DOC)
20
resnet.py
Python
keras/applications/resnet.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
1
105,909
31
13
10
143
24
0
36
91
parquet_to_arrow
Multiprocessed dataset builder [WIP] (#5107)

* multiprocessing-compatible naming scheme and refactor
* multiprocessed shard writing for GeneratorBasedBuilder
* multiprocessed shard writing for ArrowBasedBuilder
* style
* multiprocessed dataset loading
* compatibility with non-sharded datasets
* bugfix
* bugfix
* removed unused import
* fixed bad ordering
* less misleading tqdm
* fix gen_kwargs distribution + read shards
* minor
* minor2
* support beam datasets
* docstrings + minor
* add iflatmap_unordered for parallel write & progress updates
* use 1 tqdm bar receiving updates from subprocesses
* docs
* add test_iflatmap_unordered
* style
* test arrow_reader.py
* fix test_iflatmap_unordered
* add Beam test_download_and_prepare_sharded
* test gen_kwargs distribution
* test download_and_prepare with num_proc
* style
* improve test
* don't close the pool
* fix multiprocessing on windows
* keep multiprocessing disabled by default
* again + docs
* more docs
* more docs
* some var renaming
* style
* Apply suggestions from code review

  Co-authored-by: Mario Šaško <[email protected]>

* Apply suggestions from code review

  Co-authored-by: Mario Šaško <[email protected]>

* added utils/sharding.py
* style
* style

Co-authored-by: Quentin Lhoest <[email protected]>
Co-authored-by: Quentin Lhoest <[email protected]>
Co-authored-by: Mario Šaško <[email protected]>
https://github.com/huggingface/datasets.git
def parquet_to_arrow(source, destination) -> List[int]:
    stream = None if isinstance(destination, str) else destination
    with ArrowWriter(path=destination, stream=stream) as writer:
        parquet_file = pa.parquet.ParquetFile(source)
        for record_batch in parquet_file.iter_batches():
            pa_table = pa.Table.from_batches([record_batch])
            writer.write_table(pa_table)
        num_bytes, num_examples = writer.finalize()
    return num_bytes, num_examples
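A hypothetical usage sketch (file names are illustrative; the function above relies on the `datasets`-internal ArrowWriter, so the library itself is assumed installed):

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Write a tiny parquet file, then convert it to the Arrow format on disk.
pq.write_table(pa.table({"a": [1, 2, 3]}), "tiny.parquet")
num_bytes, num_examples = parquet_to_arrow("tiny.parquet", "tiny.arrow")
print(num_bytes, num_examples)  # e.g. (<size in bytes>, 3)
```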
89
arrow_writer.py
Python
src/datasets/arrow_writer.py
2945690ea731f85a356220a71cdc630281c676f4
datasets
3