Column           Dtype           Min     Max
id               int64           20      338k
vocab_size       int64           2       671
ast_levels       int64           4       32
nloc             int64           1       451
n_ast_nodes      int64           12      5.6k
n_identifiers    int64           1       186
n_ast_errors     int64           0       10
n_words          int64           2       2.17k
n_whitespaces    int64           2       13.8k
fun_name         stringlengths   2       73
commit_message   stringlengths   51      15.3k
url              stringlengths   31      59
code             stringlengths   51      31k
ast_errors       stringlengths   0       1.46k
token_counts     int64           6       3.32k
file_name        stringlengths   5       56
language         stringclasses   1 value
path             stringlengths   7       134
commit_id        stringlengths   40      40
repo             stringlengths   3       28
complexity       int64           1       153
143,848
14
11
5
63
11
0
14
53
_extra_input_signature_def
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
https://github.com/ray-project/ray.git
def _extra_input_signature_def(self):
    feed_dict = self.extra_compute_action_feed_dict()
    return {
        k.name: tf1.saved_model.utils.build_tensor_info(k)
        for k in feed_dict.keys()
    }
38
tf_policy.py
Python
rllib/policy/tf_policy.py
7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
ray
2
310,124
53
14
39
310
28
0
83
316
test_summary_correctly_updated
fix: 17track package summary status is not updated when there are no more packages in that summary (#64421) * 17track package status is not updated when there are no packages * 17track package status is not updated when there are no packages * 17track package status is not updated when there are no packages
https://github.com/home-assistant/core.git
async def test_summary_correctly_updated(hass):
    package = Package(
        tracking_number="456",
        destination_country=206,
        friendly_name="friendly name 1",
        info_text="info text 1",
        location="location 1",
        timestamp="2020-08-10 10:32",
        origin_country=206,
        package_type=2,
        status=30,
    )
    ProfileMock.package_list = [package]

    await _setup_seventeentrack(hass, summary_data=DEFAULT_SUMMARY)

    assert len(hass.states.async_entity_ids()) == 8
    for state in hass.states.async_all():
        if state.entity_id == "sensor.seventeentrack_package_456":
            break
    assert state.state == "0"
    assert (
        len(
            hass.states.get(
                "sensor.seventeentrack_packages_ready_to_be_picked_up"
            ).attributes["packages"]
        )
        == 1
    )

    ProfileMock.package_list = []
    ProfileMock.summary_data = NEW_SUMMARY_DATA

    await _goto_future(hass)

    assert len(hass.states.async_entity_ids()) == 7
    for state in hass.states.async_all():
        assert state.state == "1"
    assert (
        hass.states.get(
            "sensor.seventeentrack_packages_ready_to_be_picked_up"
        ).attributes["packages"]
        is None
    )
186
test_sensor.py
Python
tests/components/seventeentrack/test_sensor.py
6176bb954c4aa68b33c9db487dbb5712059f4b38
core
4
323,214
55
17
18
194
21
0
64
386
acquire
Add model parallel for FasterGPT. (#1755) * Add model parallel for FasterGPT. * Make GPT model parallel runable * Make FT model parallel optional. * Fix _write_setup_file when kwargs is not empty. * Fix ext_utils.load * Add file lock for model parallel. * Fix model_parallel.flag in CMakeLists.txt. * Use a separated lib for model parallel. * Make from_pretrained get partial model. * Make model parallel support layer group in python. * Fix fit_partial_model when model having keys state not including. Add api docs for model parallel. * Fix the default world_size when mpi is not available. * Add demo for GPT model parallel. * Fix default global ft_para_conf. * Fix GPTModel state_dict wrapper for layer parallel. * Set seed for tensor parallel. * Fix location of gpt.h in cmake. * Fix seed in gpt.h * Fix NV FT GPT embedding. * Add more cases in gpt_mp_sample.py * Fix seed in ker_curand_setupLauncher. Put build dir of FG in PPNLP_HOME with digest of current path. * Refine gpt_mp_sample.py
https://github.com/PaddlePaddle/PaddleNLP.git
def acquire(self):
    start_time = time.time()
    while True:
        try:
            self.fd = os.open(self.lock_file_path,
                              os.O_CREAT | os.O_EXCL | os.O_RDWR)
            self.is_locked = True  # moved to ensure tag only when locked
            break
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise
            if self.timeout is None:
                raise FileLockException("Could not acquire lock on {}".
                                        format(self.lock_file_path))
            if self.timeout > 0 and (time.time() -
                                     start_time) >= self.timeout:
                raise FileLockException("Timeout occured.")
            time.sleep(self.delay)
116
file_lock.py
Python
paddlenlp/utils/file_lock.py
c541f4ba1fcab8304c7ac4efdce3d63a2e478176
PaddleNLP
7
171,627
20
14
13
163
4
0
36
142
render_pep440
BLD: use nonvendor versioneer (#49924) * BLD: remove vendored versioneer * run vis * move config to pyproject.toml * add versioneer to deps * run pyupgrade * fix isort and pylint * fix ci * fix env
https://github.com/pandas-dev/pandas.git
def render_pep440(pieces):
    if pieces["closest-tag"]:
        rendered = pieces["closest-tag"]
        if pieces["distance"] or pieces["dirty"]:
            rendered += plus_or_dot(pieces)
            rendered += f"{pieces['distance']}.g{pieces['short']}"
            if pieces["dirty"]:
                rendered += ".dirty"
    else:
        # exception #1
        rendered = f"0+untagged.{pieces['distance']}.g{pieces['short']}"
        if pieces["dirty"]:
            rendered += ".dirty"
    return rendered
65
_version.py
Python
pandas/_version.py
e2df99823758210fb2b7c4aba39e23f3445f7cd3
pandas
6
139,165
23
11
27
104
13
0
27
86
workflow_logging_context
[Workflow]Make workflow logs publish to the correct driver. (#24089) All workflow tasks are executed as remote functions that submitted from WorkflowManagmentActor. WorkflowManagmentActor is a detached long-running actor whose owner is the first driver in the cluster that runs the very first workflow execution. Therefore, for new drivers that run workflows, the loggings won't be properly published back to the driver because loggings are saved and published based on job_id and the job_id is always the first driver's job_id as the ownership goes like: first_driver -> WorkflowManagmentActor -> workflow executions using remote functions. To solve this, during workflow execution, we pass the actual driver's job_id along with execution, and re-configure the logging files on each worker that runs the remote functions. Notice that we need to do this in multiple places as a workflow task is executed with more than one remote functions that are running in different workers.
https://github.com/ray-project/ray.git
def workflow_logging_context(job_id) -> None:
    node = ray.worker._global_node
    original_out_file, original_err_file = node.get_log_file_handles(
        get_worker_log_file_name("WORKER")
    )
    out_file, err_file = node.get_log_file_handles(
        get_worker_log_file_name("WORKER", job_id)
    )
    try:
        configure_log_file(out_file, err_file)
        yield
    finally:
        configure_log_file(original_out_file, original_err_file)
60
workflow_context.py
Python
python/ray/workflow/workflow_context.py
e8fc66af348f2afd2b578fe1c6776cc88ea82499
ray
2
271,851
45
12
15
134
16
0
58
150
verify_dataset_shuffled
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def verify_dataset_shuffled(x):
    assert isinstance(x, tf.data.Dataset)
    graph_def = get_dataset_graph_def(x)
    for node in graph_def.node:
        if node.op.startswith("ShuffleDataset"):
            return True
    # Also check graph_def.library.function for ds.interleave or ds.flat_map
    for function in graph_def.library.function:
        for node in function.node_def:
            if node.op.startswith("ShuffleDataset"):
                return True
    logging.warning(
        "Expected a shuffled dataset but input dataset `x` is "
        "not shuffled. Please invoke `shuffle()` on input dataset."
    )
    return False
79
training_utils_v1.py
Python
keras/engine/training_utils_v1.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
6
264,296
123
17
40
565
47
0
192
698
get
Refactor generic views; add plugins dev documentation
https://github.com/netbox-community/netbox.git
def get(self, request):
    model = self.queryset.model
    content_type = ContentType.objects.get_for_model(model)

    if self.filterset:
        self.queryset = self.filterset(request.GET, self.queryset).qs

    # Compile a dictionary indicating which permissions are available to the current user for this model
    permissions = {}
    for action in ('add', 'change', 'delete', 'view'):
        perm_name = get_permission_for_model(model, action)
        permissions[action] = request.user.has_perm(perm_name)

    if 'export' in request.GET:

        # Export the current table view
        if request.GET['export'] == 'table':
            table = self.get_table(request, permissions)
            columns = [name for name, _ in table.selected_columns]
            return self.export_table(table, columns)

        # Render an ExportTemplate
        elif request.GET['export']:
            template = get_object_or_404(ExportTemplate, content_type=content_type, name=request.GET['export'])
            return self.export_template(template, request)

        # Check for YAML export support on the model
        elif hasattr(model, 'to_yaml'):
            response = HttpResponse(self.export_yaml(), content_type='text/yaml')
            filename = 'netbox_{}.yaml'.format(self.queryset.model._meta.verbose_name_plural)
            response['Content-Disposition'] = 'attachment; filename="{}"'.format(filename)
            return response

        # Fall back to default table/YAML export
        else:
            table = self.get_table(request, permissions)
            return self.export_table(table)

    # Render the objects table
    table = self.get_table(request, permissions)
    configure_table(table, request)

    # If this is an HTMX request, return only the rendered table HTML
    if is_htmx(request):
        return render(request, 'htmx/table.html', {
            'table': table,
        })

    context = {
        'content_type': content_type,
        'table': table,
        'permissions': permissions,
        'action_buttons': self.action_buttons,
        'filter_form': self.filterset_form(request.GET, label_suffix='') if self.filterset_form else None,
    }
    context.update(self.get_extra_context(request))

    return render(request, self.template_name, context)

#
# Export methods
#
342
bulk_views.py
Python
netbox/netbox/views/generic/bulk_views.py
54834c47f8870e7faabcd847c3270da0bd3d2884
netbox
10
286,002
11
8
9
48
8
1
11
16
get_project_ids
Add 3 Token Terminal commands (#2447) * add crypto/ov/fun * add tokenterminal to dependencies * update website content * add to main.yml * fix tests * add tests * Update _index.md * Update _index.md * fix tests * fix test * List hint added * improve code based on Jose input * fix tests * requirements for token terminal * add source and fix source bug * some improvements * colors bars * fix dependencies * update kaleido version * update setuptools for pkg_resources * replace pkg_resources by importlib_metadata * Added fixes * Fixed tests * fix stuff for Josecas Co-authored-by: Colin Delahunty <[email protected]> Co-authored-by: colin99d <[email protected]>
https://github.com/OpenBB-finance/OpenBBTerminal.git
def get_project_ids() -> List[str]:
    return [project["project_id"] for project in PROJECTS_DATA]


@log_start_end(log=logger)
@log_start_end(log=logger)
21
tokenterminal_model.py
Python
openbb_terminal/cryptocurrency/due_diligence/tokenterminal_model.py
7979b1fc071a1c3e7463044bea617d7305b4a17e
OpenBBTerminal
2
85,425
28
11
9
191
27
0
32
102
test_delete_performance_issue
feat(perf issues): Prevent deleting and merging (#38479) * Prevent deleting, discarding, and merging in single and bulk operations for performance issues.
https://github.com/getsentry/sentry.git
def test_delete_performance_issue(self):
    self.login_as(user=self.user)
    group = self.create_group(type=GroupType.PERFORMANCE_SLOW_SPAN.value)
    GroupHash.objects.create(project=group.project, hash="x" * 32, group=group)

    url = f"/api/0/issues/{group.id}/"

    response = self.client.delete(url, format="json")
    assert response.status_code == 400, response.content

    # Ensure it's still there
    assert Group.objects.filter(id=group.id).exists()
    assert GroupHash.objects.filter(group_id=group.id).exists()
114
test_group_details.py
Python
tests/sentry/api/endpoints/test_group_details.py
dfe1d3442af1535cc2d4f8a511ee5733b3887572
sentry
1
275,037
4
6
2
16
3
0
4
18
dynamic_counter
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def dynamic_counter(self):
    raise NotImplementedError
8
loss_scale_optimizer.py
Python
keras/mixed_precision/loss_scale_optimizer.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
1
295,912
17
12
5
84
10
0
18
37
test_missing_tones_dict
Add EntityFeature enum to Siren (#69585) Co-authored-by: Franck Nijhof <[email protected]>
https://github.com/home-assistant/core.git
async def test_missing_tones_dict(hass):
    siren = MockSirenEntity(SirenEntityFeature.TONES, {1: "a", 2: "b"})
    siren.hass = hass
    with pytest.raises(ValueError):
        process_turn_on_params(siren, {"tone": 3})
47
test_init.py
Python
tests/components/siren/test_init.py
a61ac3ddc6d65522dfa1eb599adf73420a9267dc
core
1
85,913
40
14
15
166
19
0
68
247
get_matching_frame_actions
fix(grouping): Exception matcher with no frames (#38994) We used to pass `-1` as a frame index for exception matchers, which worked by accident because `-1` is a valid list index in Python, except when the list of frames was empty. Replace `-1` by `None` and make sure we do not attempt to access the list of frames in the exception matcher, by giving it its own `matches_frame` override. Fixes SENTRY-VWW
https://github.com/getsentry/sentry.git
def get_matching_frame_actions(self, frames, platform, exception_data=None, cache=None):
    if not self.matchers:
        return []

    # 1 - Check if exception matchers match
    for m in self._exception_matchers:
        if not m.matches_frame(frames, None, platform, exception_data, cache):
            return []

    rv = []

    # 2 - Check if frame matchers match
    for idx, frame in enumerate(frames):
        if all(
            m.matches_frame(frames, idx, platform, exception_data, cache)
            for m in self._other_matchers
        ):
            for action in self.actions:
                rv.append((idx, action))

    return rv
112
__init__.py
Python
src/sentry/grouping/enhancer/__init__.py
686675f81bf9402bc9b671e61ea0481b0c5c3468
sentry
8
160,327
73
12
16
239
17
0
104
183
eye
BUG: lib: Allow type uint64 for eye() arguments. Closes gh-9982. (Plus a few small PEP 8 fixes.)
https://github.com/numpy/numpy.git
def eye(N, M=None, k=0, dtype=float, order='C', *, like=None):
    if like is not None:
        return _eye_with_like(N, M=M, k=k, dtype=dtype, order=order, like=like)
    if M is None:
        M = N
    m = zeros((N, M), dtype=dtype, order=order)
    if k >= M:
        return m
    # Ensure M and k are integers, so we don't get any surprise casting
    # results in the expressions `M-k` and `M+1` used below. This avoids
    # a problem with inputs with type (for example) np.uint64.
    M = operator.index(M)
    k = operator.index(k)
    if k >= 0:
        i = k
    else:
        i = (-k) * M
    m[:M-k].flat[i::M+1] = 1
    return m


_eye_with_like = array_function_dispatch(
    _eye_dispatcher
)(eye)
146
twodim_base.py
Python
numpy/lib/twodim_base.py
f9355942f6ef7c5d27691c4571096234efb67a2b
numpy
5
271,557
95
13
12
84
5
0
148
337
_validate_target_and_loss
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def _validate_target_and_loss(self, y, loss):
    # `self.loss` references the loss added via `compile` call. If users have
    # provided such, the target must be provided; otherwise it's a user error.
    # Note that `self.loss` does not include losses added via `add_loss`, and it
    # is a valid use when such loss from `add_loss` exists and target does not.
    if self.loss and y is None:
        raise ValueError(
            "Target data is missing. Your model was compiled with "
            f"loss={self.loss}, "
            "and therefore expects target data to be provided in `fit()`."
        )
    # For training, there must be compiled loss or regularization loss to exist
    # in order to apply the gradients. If one is not found, it means no loss
    # was supplied via `compile` or `add_loss`.
    elif loss is None:
        raise ValueError(
            "No loss found. You may have forgotten to provide a `loss` argument "
            "in the `compile()` method."
        )
38
training.py
Python
keras/engine/training.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
4
276,843
22
12
18
78
8
0
24
56
func_load
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def func_load(code, defaults=None, closure=None, globs=None):
    if isinstance(code, (tuple, list)):  # unpack previous dump
        code, defaults, closure = code
        if isinstance(defaults, list):
            defaults = tuple(defaults)
147
generic_utils.py
Python
keras/utils/generic_utils.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
7
181,893
20
13
8
95
7
0
28
64
float_range
Revert "Deployed 7ccda9a with MkDocs version: 1.3.0" This reverts commit bd9629c40e01241766197119b581a99409b07068.
https://github.com/EpistasisLab/tpot.git
def float_range(value):
    try:
        value = float(value)
    except Exception:
        raise argparse.ArgumentTypeError('Invalid float value: \'{}\''.format(value))
    if value < 0.0 or value > 1.0:
        raise argparse.ArgumentTypeError('Invalid float value: \'{}\''.format(value))
    return value
56
driver.py
Python
tpot/driver.py
388616b6247ca4ea8de4e2f340d6206aee523541
tpot
4
268,980
14
10
6
61
8
0
15
23
config_for_enable_caching_device
Reorganize RNN layers, cells and wrappers into smaller logically organized files hosted under an `rnn` directory. PiperOrigin-RevId: 428841673
https://github.com/keras-team/keras.git
def config_for_enable_caching_device(rnn_cell):
    default_enable_caching_device = tf.compat.v1.executing_eagerly_outside_functions(
    )
    if rnn_cell._enable_caching_device != default_enable_caching_device:
        return {'enable_caching_device': rnn_cell._enable_caching_device}
    return {}
35
rnn_utils.py
Python
keras/layers/rnn/rnn_utils.py
01c906c4178db5ae03b7eb2d298a052c952a0667
keras
2
298,605
33
9
7
134
25
1
38
63
test_update_hvac_mode
Use climate enums in gree (#70655) * Use climate enums in gree * Adjust tests
https://github.com/home-assistant/core.git
async def test_update_hvac_mode(hass, discovery, device, mock_now, hvac_mode):
    device().power = hvac_mode != HVACMode.OFF
    device().mode = HVAC_MODES_REVERSE.get(hvac_mode)

    await async_setup_gree(hass)

    state = hass.states.get(ENTITY_ID)
    assert state is not None
    assert state.state == hvac_mode


@pytest.mark.parametrize(
    "fan_mode",
    (FAN_AUTO, FAN_LOW, FAN_MEDIUM_LOW, FAN_MEDIUM, FAN_MEDIUM_HIGH, FAN_HIGH),
)
@pytest.mark.parametrize( "fan_mode", (FAN_AUTO, FAN_LOW, FAN_MEDIUM_LOW, FAN_MEDIUM, FAN_MEDIUM_HIGH, FAN_HIGH), )
63
test_climate.py
Python
tests/components/gree/test_climate.py
23c5bd97793af4eed9806a237593b482f8e1b932
core
1
267,080
22
16
9
104
14
0
24
95
retry
ansible-test - Fix subprocess management. (#77638) * Run code-smell sanity tests in UTF-8 Mode. * Update subprocess use in sanity test programs. * Use raw_command instead of run_command with always=True set. * Add more capture=True usage. * Don't expose stdin to subprocesses. * Capture more output. Warn on retry. * Add more captures. * Capture coverage cli output. * Capture windows and network host checks. * Be explicit about interactive usage. * Use a shell for non-captured, non-interactive subprocesses. * Add integration test to assert no TTY. * Add unit test to assert no TTY. * Require blocking stdin/stdout/stderr. * Use subprocess.run in ansible-core sanity tests. * Remove unused arg. * Be explicit with subprocess.run check=False. * Add changelog.
https://github.com/ansible/ansible.git
def retry(func, ex_type=SubprocessError, sleep=10, attempts=10, warn=True):
    for dummy in range(1, attempts):
        try:
            return func()
        except ex_type as ex:
            if warn:
                display.warning(str(ex))
            time.sleep(sleep)

    return func()
65
util.py
Python
test/lib/ansible_test/_internal/util.py
62d03c8e752ee35057031a91d7028e0a2e5d43e4
ansible
4
81,378
34
10
9
69
9
0
38
108
event_processing_finished
Split TaskManager into - DependencyManager spawns dependencies if necessary - WorkflowManager processes running workflows to see if a new job is ready to spawn - TaskManager starts tasks if unblocked and has execution capacity
https://github.com/ansible/awx.git
def event_processing_finished(self):
    if self.status in ACTIVE_STATES:
        return False  # tally of events is only available at end of run
    try:
        event_qs = self.get_event_queryset()
    except NotImplementedError:
        return True  # Model without events, such as WFJT
    return self.emitted_events == event_qs.count()
45
unified_jobs.py
Python
awx/main/models/unified_jobs.py
431b9370dfbbbcb64dee0b4ebc8af7df12740d08
awx
3
20,916
15
9
2
52
7
2
15
27
TypeAlias
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def TypeAlias(self, parameters):
    raise TypeError(f"{self} is not subscriptable")


# 3.7-3.8
elif sys.version_info[:2] >= (3, 7):
elif sys.version_info[:2] >= (3, 7):sys
14
typing_extensions.py
Python
pipenv/patched/notpip/_vendor/typing_extensions.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
1
224,055
75
17
27
303
28
0
106
411
_load_theme_config
Remove spaces at the ends of docstrings, normalize quotes
https://github.com/mkdocs/mkdocs.git
def _load_theme_config(self, name):
    theme_dir = utils.get_theme_dir(name)
    self.dirs.append(theme_dir)

    try:
        file_path = os.path.join(theme_dir, 'mkdocs_theme.yml')
        with open(file_path, 'rb') as f:
            theme_config = utils.yaml_load(f)
            if theme_config is None:
                theme_config = {}
    except OSError as e:
        log.debug(e)
        raise ValidationError(
            f"The theme '{name}' does not appear to have a configuration file. "
            f"Please upgrade to a current version of the theme."
        )

    log.debug(f"Loaded theme configuration for '{name}' from '{file_path}': {theme_config}")

    parent_theme = theme_config.pop('extends', None)
    if parent_theme:
        themes = utils.get_theme_names()
        if parent_theme not in themes:
            raise ValidationError(
                f"The theme '{name}' inherits from '{parent_theme}', which does not appear to be installed. "
                f"The available installed themes are: {', '.join(themes)}"
            )
        self._load_theme_config(parent_theme)

    self.static_templates.update(theme_config.pop('static_templates', []))
    self._vars.update(theme_config)
155
theme.py
Python
mkdocs/theme.py
e7f07cc82ab2be920ab426ba07456d8b2592714d
mkdocs
5
47,693
32
16
16
195
15
0
42
166
test_default_args
Replace usage of `DummyOperator` with `EmptyOperator` (#22974) * Replace usage of `DummyOperator` with `EmptyOperator`
https://github.com/apache/airflow.git
def test_default_args():
    execution_date = pendulum.parse("20201109")

    with DAG(
        dag_id='example_task_group_default_args',
        start_date=execution_date,
        default_args={
            "owner": "dag",
        },
    ):
        with TaskGroup("group1", default_args={"owner": "group"}):
            task_1 = EmptyOperator(task_id='task_1')
            task_2 = EmptyOperator(task_id='task_2', owner='task')
            task_3 = EmptyOperator(task_id='task_3', default_args={"owner": "task"})

            assert task_1.owner == 'group'
            assert task_2.owner == 'task'
            assert task_3.owner == 'task'
103
test_task_group.py
Python
tests/utils/test_task_group.py
49e336ae0302b386a2f47269a6d13988382d975f
airflow
1
275,981
12
9
6
68
9
0
14
60
trackable_children
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def trackable_children(self, serialization_cache):
    if not utils.should_save_traces():
        return {}

    children = self.objects_to_serialize(serialization_cache)
    children.update(self.functions_to_serialize(serialization_cache))
    return children
40
base_serialization.py
Python
keras/saving/saved_model/base_serialization.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
2
35,104
19
10
4
35
5
0
19
51
copy
Constrained Beam Search [without disjunctive decoding] (#15416) * added classes to get started with constrained beam search * in progress, think i can directly force tokens now but not yet with the round robin * think now i have total control, now need to code the bank selection * technically works as desired, need to optimize and fix design choices leading to undersirable outputs * complete PR #1 without disjunctive decoding * removed incorrect tests * Delete k.txt * Delete test.py * Delete test.sh * revert changes to test scripts * genutils * full implementation with testing, no disjunctive yet * shifted docs * passing all tests realistically ran locally * removing accidentally included print statements * fixed source of error in initial PR test * fixing the get_device() vs device trap * fixed documentation docstrings about constrained_beam_search * fixed tests having failing for Speech2TextModel's floating point inputs * fix cuda long tensor * added examples and testing for them and founx & fixed a bug in beam_search and constrained_beam_search * deleted accidentally added test halting code with assert False * code reformat * Update tests/test_generation_utils.py Co-authored-by: Patrick von Platen <[email protected]> * Update tests/test_generation_utils.py Co-authored-by: Patrick von Platen <[email protected]> * Update tests/test_generation_utils.py Co-authored-by: Patrick von Platen <[email protected]> * Update tests/test_generation_utils.py Co-authored-by: Patrick von Platen <[email protected]> * Update tests/test_generation_utils.py * fixing based on comments on PR * took out the testing code that should but work fails without the beam search moditification ; style changes * fixing comments issues * docstrings for ConstraintListState * typo in PhrsalConstraint docstring * docstrings improvements Co-authored-by: Patrick von Platen <[email protected]>
https://github.com/huggingface/transformers.git
def copy(self, stateful=False):
    raise NotImplementedError(
        f"{self.__class__} is an abstract class. Only classes inheriting this class can be called."
    )
16
generation_beam_constraints.py
Python
src/transformers/generation_beam_constraints.py
2b5603f6ac58f0cd3b2116c01d6b9f62575248b2
transformers
1
78,341
28
10
5
61
8
0
32
85
test_settings_no_request_no_use_default
Add generic settings to compliment site-specific settings (#8327)
https://github.com/wagtail/wagtail.git
def test_settings_no_request_no_use_default(self):
    context = {}

    # Without a request in the context, and without use_default_site, this
    # should bail with an error
    template = '{{ settings("tests.testsitesetting").title }}'
    with self.assertRaises(RuntimeError):
        self.render(template, context, request_context=False)
33
test_templates.py
Python
wagtail/contrib/settings/tests/site_specific/test_templates.py
d967eccef28ce47f60d26be1c28f2d83a25f40b0
wagtail
1
97,856
11
9
3
45
6
0
11
32
render_warning
ref(py): Split up large file (#32862) Co-authored-by: getsantry[bot] <66042841+getsantry[bot]@users.noreply.github.com>
https://github.com/getsentry/sentry.git
def render_warning(self, message):
    context = {"error": message}
    return render_to_response("sentry/pipeline-provider-error.html", context, self.request)
26
base.py
Python
src/sentry/pipeline/base.py
d246d2b6d3e014270941209e54f2f12e09ad9a81
sentry
1
82,608
92
15
36
379
39
0
152
404
get_page_from_request
fix: Prefer titles matching request language (#7144) * prefer titles matching request language * add comments on use of annotate * fix wayward imports * Add changelog entry Co-authored-by: Vinit Kumar <[email protected]> Co-authored-by: Mark Walker <[email protected]>
https://github.com/django-cms/django-cms.git
def get_page_from_request(request, use_path=None, clean_path=None):
    from cms.utils.page_permissions import user_can_view_page_draft

    if not bool(use_path) and hasattr(request, '_current_page_cache'):
        # The following is set by CurrentPageMiddleware
        return request._current_page_cache

    if clean_path is None:
        clean_path = not bool(use_path)

    draft = use_draft(request)
    preview = 'preview' in request.GET
    path = request.path_info if use_path is None else use_path

    if clean_path:
        pages_root = reverse("pages-root")

        if path.startswith(pages_root):
            path = path[len(pages_root):]

        # strip any final slash
        if path.endswith("/"):
            path = path[:-1]

    site = get_current_site()
    request_language_code = getattr(request, "LANGUAGE_CODE", None)
    page = get_page_from_path(
        site, path, preview, draft, language_code=request_language_code
    )

    if draft and page and not user_can_view_page_draft(request.user, page):
        page = get_page_from_path(
            site, path, preview, draft=False, language_code=request_language_code
        )

    # For public pages, check if any parent is hidden due to published dates
    # In this case the selected page is not reachable
    if page and not draft:
        now = timezone.now()
        unpublished_ancestors = (
            page
            .get_ancestor_pages()
            .filter(
                Q(publication_date__gt=now) | Q(publication_end_date__lt=now),
            )
        )
        if unpublished_ancestors.exists():
            page = None

    return page
235
page.py
Python
cms/utils/page.py
06c9a85df486581f152dbf11bbf40a1c6c5e6cd3
django-cms
14
132,826
63
15
15
162
25
0
79
312
__call__
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
https://github.com/ray-project/ray.git
def __call__(self, checkpoint):
    if not self.runner:
        return

    if checkpoint.storage == Checkpoint.PERSISTENT and checkpoint.value:
        checkpoint_path = checkpoint.value

        logger.debug(
            "Trial %s: Deleting checkpoint %s", self.trial_id, checkpoint_path
        )

        # TODO(ujvl): Batch remote deletes.
        # We first delete the remote checkpoint. If it is on the same
        # node as the driver, it will also remove the local copy.
        ray.get(self.runner.delete_checkpoint.remote(checkpoint_path))

        # Delete local copy, if any exists.
        if os.path.exists(checkpoint_path):
            try:
                checkpoint_dir = TrainableUtil.find_checkpoint_dir(checkpoint_path)
                shutil.rmtree(checkpoint_dir)
            except FileNotFoundError:
                logger.debug("Local checkpoint dir not found during deletion.")
95
trial.py
Python
python/ray/tune/trial.py
7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
ray
6
320,678
40
10
5
205
18
1
70
190
test_filename_rubout
Add :rl-rubout and :rl-filename-rubout Closes #4561
https://github.com/qutebrowser/qutebrowser.git
def test_filename_rubout(os_sep, monkeypatch, lineedit, text, deleted, rest):
    monkeypatch.setattr(os, "sep", os_sep)
    _validate_deletion(lineedit,
                       readlinecommands.rl_filename_rubout, [],
                       text, deleted, rest)


@pytest.mark.parametrize('text, deleted, rest', [
    pytest.param('test foobar| delete', ' delete', 'test foobar|', marks=fixme),
    ('test foobar| delete', ' ', 'test foobar|delete'),  # wrong
    pytest.param('test foo|delete bar', 'delete', 'test foo| bar', marks=fixme),
    ('test foo|delete bar', 'delete ', 'test foo|bar'),  # wrong
    pytest.param('test foo<bar> delete', ' delete', 'test foobar|', marks=fixme),
    ('test foo<bar>delete', 'bardelete', 'test foo|'),  # wrong
])
@pytest.mark.parametrize('text, deleted, rest', [ pytest.param('test foobar| delete', ' delete', 'test foobar|', marks=fixme), ('test foobar| delete', ' ', 'test foobar|delete'), # wrong pytest.param('test foo|delete bar', 'delete', 'test foo| bar', marks=fixme), ('test foo|delete bar', 'delete ', 'test foo|bar'), # wrong pytest.param('test foo<bar> delete', ' delete', 'test foobar|', marks=fixme), ('test foo<bar>delete', 'bardelete', 'test foo|'), # wrong ])
43
test_readlinecommands.py
Python
tests/unit/components/test_readlinecommands.py
ab65c542a0551abf105eeb58803cd08bd040753b
qutebrowser
1
41,878
47
13
18
198
16
0
72
249
_map_attributes
Downgrade exception on mapping list length mismatch to warning (#2856) * Downgrade exception on mapping list length mismatch to warning * Lint * Fix pairplot test * Set stacklevel to report warning in user code
https://github.com/mwaskom/seaborn.git
def _map_attributes(self, arg, levels, defaults, attr):
    if arg is True:
        lookup_table = dict(zip(levels, defaults))
    elif isinstance(arg, dict):
        missing = set(levels) - set(arg)
        if missing:
            err = f"These `{attr}` levels are missing values: {missing}"
            raise ValueError(err)
        lookup_table = arg
    elif isinstance(arg, Sequence):
        arg = self._check_list_length(levels, arg, attr)
        lookup_table = dict(zip(levels, arg))
    elif arg:
        err = f"This `{attr}` argument was not understood: {arg}"
        raise ValueError(err)
    else:
        lookup_table = {}

    return lookup_table


# =========================================================================== #
115
_oldcore.py
Python
seaborn/_oldcore.py
563e96d3be1eaee8db8dfbccf7eed1f1c66dfd31
seaborn
6
300,836
42
11
9
119
17
0
48
133
process_new_events
Clean up accessing dispatcher helpers via hass (#72014) Clean up accessing ditpatcher helpers via hass
https://github.com/home-assistant/core.git
def process_new_events(self, new_values_dict) -> None:
    self.async_set_available_state(True)

    # Process any stateless events (via device_triggers)
    async_fire_triggers(self, new_values_dict)

    for (aid, cid), value in new_values_dict.items():
        accessory = self.current_state.setdefault(aid, {})
        accessory[cid] = value

    # self.current_state will be replaced by entity_map in a future PR
    # For now we update both
    self.entity_map.process_changes(new_values_dict)

    async_dispatcher_send(self.hass, self.signal_state_updated)
74
connection.py
Python
homeassistant/components/homekit_controller/connection.py
c8f700c80319cef81a9a817c1b9111887ea98b1a
core
2
22,222
59
16
20
240
24
0
75
202
get_dependencies
Rename notpip to pip. Vendor in pip-22.2.1 and latest requirementslib and vistir.
https://github.com/pypa/pipenv.git
def get_dependencies(ireq, sources=None, parent=None):
    # type: (Union[InstallRequirement, InstallationCandidate], Optional[List[Dict[S, Union[S, bool]]]], Optional[AbstractDependency]) -> Set[S, ...]
    if not isinstance(ireq, shims.InstallRequirement):
        name = getattr(ireq, "project_name", getattr(ireq, "project", ireq.name))
        version = getattr(ireq, "version", None)
        if not version:
            ireq = shims.InstallRequirement.from_line("{0}".format(name))
        else:
            ireq = shims.InstallRequirement.from_line("{0}=={1}".format(name, version))
    pip_options = get_pip_options(sources=sources)
    getters = [
        get_dependencies_from_cache,
        get_dependencies_from_wheel_cache,
        get_dependencies_from_json,
        functools.partial(get_dependencies_from_index, pip_options=pip_options),
    ]
    for getter in getters:
        deps = getter(ireq)
        if deps is not None:
            return deps
    raise RuntimeError("failed to get dependencies for {}".format(ireq))
150
dependencies.py
Python
pipenv/vendor/requirementslib/models/dependencies.py
cd5a9683be69c86c8f3adcd13385a9bc5db198ec
pipenv
5
119,312
54
15
16
252
16
0
83
184
odd_ext
Add some functions for spectral analysis. This commit adds "stft", "csd", and "welch" functions in scipy.signal.
https://github.com/google/jax.git
def odd_ext(x, n, axis=-1):
    if n < 1:
        return x
    if n > x.shape[axis] - 1:
        raise ValueError(
            f"The extension length n ({n}) is too big. "
            f"It must not exceed x.shape[axis]-1, which is {x.shape[axis] - 1}.")
    left_end = lax.slice_in_dim(x, 0, 1, axis=axis)
    left_ext = jnp.flip(lax.slice_in_dim(x, 1, n + 1, axis=axis), axis=axis)
    right_end = lax.slice_in_dim(x, -1, None, axis=axis)
    right_ext = jnp.flip(lax.slice_in_dim(x, -(n + 1), -1, axis=axis), axis=axis)
    ext = jnp.concatenate((2 * left_end - left_ext,
                           x,
                           2 * right_end - right_ext),
                          axis=axis)
    return ext
159
signal.py
Python
jax/_src/scipy/signal.py
e085370ec4137cf0f73c5163cb664bc4e1c46082
jax
3
34,604
38
14
4
90
13
0
44
58
create_position_ids_from_input_ids
Add XGLM models (#14876) * add xglm * update vocab size * fix model name * style and tokenizer * typo * no mask token * fix pos embed compute * fix args * fix tokenizer * fix positions * fix tokenization * style and dic fixes * fix imports * add fast tokenizer * update names * add pt tests * fix tokenizer * fix typo * fix tokenizer import * fix fast tokenizer * fix tokenizer * fix converter * add tokenizer test * update checkpoint names * fix tokenizer tests * fix slow tests * add copied from comments * rst -> mdx * flax model * update flax tests * quality * style * doc * update index and readme * fix copies * fix doc * update toctrr * fix indent * minor fixes * fix config doc * don't save embed_pos weights * Apply suggestions from code review Co-authored-by: Sylvain Gugger <[email protected]> Co-authored-by: Patrick von Platen <[email protected]> * address Sylvains commnets, few doc fixes * fix check_repo * align order of arguments * fix copies * fix labels * remove unnecessary mapping * fix saving tokenizer Co-authored-by: Sylvain Gugger <[email protected]> Co-authored-by: Patrick von Platen <[email protected]>
https://github.com/huggingface/transformers.git
def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
    # The series of casts and type-conversions here are carefully balanced to both work with ONNX export and XLA.
    mask = input_ids.ne(padding_idx).int()
    incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask
    return incremental_indices.long() + padding_idx


# Copied from transformers.models.m2m_100.modeling_m2m_100.M2M100SinusoidalPositionalEmbedding with M2M100->XGLM
55
modeling_xglm.py
Python
src/transformers/models/xglm/modeling_xglm.py
d25e25ee2b63ebfcd099deb689a5a7272574a10f
transformers
1
101,448
20
15
7
110
15
0
21
75
_build_tabs
Bugfix: convert - Gif Writer - Fix non-launch error on Gif Writer - convert plugins - linting - convert/fs_media/preview/queue_manager - typing - Change convert items from dict to Dataclass
https://github.com/deepfakes/faceswap.git
def _build_tabs(self) -> None:
    logger.debug("Build Tabs")
    for section in self.config_tools.sections:
        tab = ttk.Notebook(self)
        self._tabs[section] = {"tab": tab}
        self.add(tab, text=section.replace("_", " ").title())
64
preview.py
Python
tools/preview/preview.py
1022651eb8a7741014f5d2ec7cbfe882120dfa5f
faceswap
2
295,982
9
9
3
46
7
1
9
14
token_expiry
Refresh google calendar tokens with invalid expiration times (#69679) * Refresh google calendar tokens with invalid expiration times * Update tests/components/google/conftest.py Co-authored-by: Martin Hjelmare <[email protected]> * Remove unnecessary async methods in functions being touched already Co-authored-by: Martin Hjelmare <[email protected]>
https://github.com/home-assistant/core.git
def token_expiry() -> datetime.datetime:
    return utcnow() + datetime.timedelta(days=7)


@pytest.fixture
@pytest.fixture
22
conftest.py
Python
tests/components/google/conftest.py
06d2aeec6b153a104b275c73068cf05a7b5c0c6b
core
1
148,501
131
18
44
672
40
0
181
675
start
Implement previous backtest result reuse when config and strategy did not change.
https://github.com/freqtrade/freqtrade.git
def start(self) -> None:
    data: Dict[str, Any] = {}

    data, timerange = self.load_bt_data()
    self.load_bt_data_detail()
    logger.info("Dataload complete. Calculating indicators")

    run_ids = {
        strategy.get_strategy_name(): get_strategy_run_id(strategy)
        for strategy in self.strategylist
    }

    # Load previous result that will be updated incrementally.
    if self.config.get('timerange', '-').endswith('-'):
        self.config['no_backtest_cache'] = True
        logger.warning('Backtest result caching disabled due to use of open-ended timerange.')

    if not self.config.get('no_backtest_cache', False):
        self.results = find_existing_backtest_stats(
            self.config['user_data_dir'] / 'backtest_results', run_ids)

    for strat in self.strategylist:
        if self.results and strat.get_strategy_name() in self.results['strategy']:
            # When previous result hash matches - reuse that result and skip backtesting.
            logger.info(f'Reusing result of previous backtest for {strat.get_strategy_name()}')
            continue
        min_date, max_date = self.backtest_one_strategy(strat, data, timerange)

    # Update old results with new ones.
    if len(self.all_results) > 0:
        results = generate_backtest_stats(
            data, self.all_results, min_date=min_date, max_date=max_date)
        if self.results:
            self.results['metadata'].update(results['metadata'])
            self.results['strategy'].update(results['strategy'])
            self.results['strategy_comparison'].extend(results['strategy_comparison'])
        else:
            self.results = results

        if self.config.get('export', 'none') == 'trades':
            store_backtest_stats(self.config['exportfilename'], self.results)

    # Results may be mixed up now. Sort them so they follow --strategy-list order.
    if 'strategy_list' in self.config and len(self.results) > 0:
        self.results['strategy_comparison'] = sorted(
            self.results['strategy_comparison'],
            key=lambda c: self.config['strategy_list'].index(c['key']))
        self.results['strategy'] = dict(
            sorted(self.results['strategy'].items(),
                   key=lambda kv: self.config['strategy_list'].index(kv[0])))

    if len(self.strategylist) > 0:
        # Show backtest results
        show_backtest_results(self.config, self.results)
391
backtesting.py
Python
freqtrade/optimize/backtesting.py
16861db653ec8166f73fc8480894f186a137e7bd
freqtrade
13
158,185
25
13
5
82
11
0
26
46
evaluate_accuracy
[PaddlePaddle] Merge master into Paddle branch (#1186) * change 15.2 title in chinese version (#1109) change title ’15.2. 情感分析:使用递归神经网络‘ to ’15.2. 情感分析:使用循环神经网络‘ * 修改部分语义表述 (#1105) * Update r0.17.5 (#1120) * Bump versions in installation * 94行typo: (“bert.mall”)->(“bert.small”) (#1129) * line 313: "bert.mall" -> "bert.small" (#1130) * fix: update language as native reader (#1114) * Fix the translation of "stride" (#1115) * Update index.md (#1118) 修改部分语义表述 * Update self-attention-and-positional-encoding.md (#1133) 依照本书的翻译习惯,将pooling翻译成汇聚 * maybe a comment false (#1149) * maybe a little false * maybe a little false * A minor bug in the rcnn section (Chinese edition) (#1148) * Update bert.md (#1137) 一个笔误 # 假设batch_size=2,num_pred_positions=3 # 那么batch_idx应该是np.repeat( [0,1], 3 ) = [0,0,0,1,1,1] * Update calculus.md (#1135) * fix typo in git documentation (#1106) * fix: Update the Chinese translation in lr-scheduler.md (#1136) * Update lr-scheduler.md * Update chapter_optimization/lr-scheduler.md Co-authored-by: goldmermaid <[email protected]> Co-authored-by: goldmermaid <[email protected]> * fix translation for kaggle-house-price.md (#1107) * fix translation for kaggle-house-price.md * fix translation for kaggle-house-price.md Signed-off-by: sunhaizhou <[email protected]> * Update weight-decay.md (#1150) * Update weight-decay.md 关于“k多选d”这一部分,中文读者使用排列组合的方式可能更容易理解 关于“给定k个变量,阶数的个数为...”这句话是有歧义的,不是很像中国话,应该是说“阶数为d的项的个数为...”。 并增加了一句对“因此即使是阶数上的微小变化,比如从$2$到$3$,也会显著增加我们模型的复杂性。”的解释 解释为何会增加复杂性以及为何需要细粒度工具。 * Update chapter_multilayer-perceptrons/weight-decay.md yep Co-authored-by: goldmermaid <[email protected]> * Update chapter_multilayer-perceptrons/weight-decay.md yep Co-authored-by: goldmermaid <[email protected]> Co-authored-by: goldmermaid <[email protected]> * Fix a spelling error (#1161) * Update gru.md (#1152) The key distinction between vanilla RNNs and GRUs is that the latter support gating of the hidden state. 翻译错误 * Unify the function naming (#1113) Unify naming of the function 'init_xavier()'. * Update mlp-concise.md (#1166) * Update mlp-concise.md 语句不通顺 * Update environment.md 语序异常 * Update config.ini * fix the imprecise description (#1168) Co-authored-by: yuande <yuande> * fix typo in chapter_natural-language-processing-pretraining/glove.md (#1175) * Fix some typos. (#1163) * Update batch-norm.md (#1170) fixing typos u->x in article * Update linear-regression.md (#1090) We invoke Stuart Russell and Peter Norvig who, in their classic AI text book Artificial Intelligence: A Modern Approach :cite:Russell.Norvig.2016, pointed out that 原译文把who也直接翻译出来了。 * Update mlp.md (#1117) * Update mlp.md 修改部分语义表述 * Update chapter_multilayer-perceptrons/mlp.md Co-authored-by: goldmermaid <[email protected]> * Update chapter_multilayer-perceptrons/mlp.md Co-authored-by: Aston Zhang <[email protected]> Co-authored-by: goldmermaid <[email protected]> * Correct a translation error. (#1091) * Correct a translation error. 
* Update chapter_computer-vision/image-augmentation.md Co-authored-by: Aston Zhang <[email protected]> * Update aws.md (#1121) * Update aws.md * Update chapter_appendix-tools-for-deep-learning/aws.md Co-authored-by: Aston Zhang <[email protected]> * Update image-augmentation.md (#1093) * Update anchor.md (#1088) fix a minor issue in code * Update anchor.md * Update image-augmentation.md * fix typo and improve translation in chapter_linear-networks\softmax-regression.md (#1087) * Avoid `torch.meshgrid` user warning (#1174) Avoids the following user warning: ```python ~/anaconda3/envs/torch/lib/python3.10/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2228.) return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined] ``` * bump to 2.0.0-beta1 * Update sequence.md * bump beta1 on readme * Add latex code block background to config * BLD: Bump python support version 3.9 (#1183) * BLD: Bump python support version 3.9 * Remove clear and manually downgrade protobuf 4.21.4 to 3.19.4 * BLD: Bump torch and tensorflow * Update Jenkinsfile * Update chapter_installation/index.md * Update chapter_installation/index.md Co-authored-by: Aston Zhang <[email protected]> * Update config.ini * Update INFO.md * Update INFO.md * Drop mint to show code in pdf, use Inconsolata font, apply code cell color (#1187) * resolve the conflicts * revise from publisher (#1089) * revise from publisher * d2l api * post_latex * revise from publisher * revise ch11 * Delete d2l-Copy1.bib * clear cache * rm d2lbook clear * debug anchor * keep original d2l doc Co-authored-by: Ubuntu <[email protected]> Co-authored-by: Aston Zhang <[email protected]> Co-authored-by: Aston Zhang <[email protected]> * 重复语句 (#1188) Co-authored-by: Aston Zhang <[email protected]> * Improve expression for chapter_preliminaries/pandas.md (#1184) * Update pandas.md * Improve expression * Improve expression * Update chapter_preliminaries/pandas.md Co-authored-by: Aston Zhang <[email protected]> * Improce expression for chapter_preliminaries/linear-algebra.md (#1185) * Improce expression * Improve code comments * Update chapter_preliminaries/linear-algebra.md * Update chapter_preliminaries/linear-algebra.md * Update chapter_preliminaries/linear-algebra.md * Update chapter_preliminaries/linear-algebra.md Co-authored-by: Aston Zhang <[email protected]> * Fix multibox_detection bugs * Update d2l to 0.17.5 version * restore older version * Upgrade pandas * change to python3.8 * Test warning log * relocate warning log * test logs filtering * Update gru.md * Add DeprecationWarning filter * Test warning log * Update attention mechanisms & computational performance * Update multilayer perceptron& linear & convolution networks & computer vision * Update recurrent&optimition&nlp pretraining & nlp applications * ignore warnings * Update index.md * Update linear networks * Update multilayer perceptrons&deep learning computation * Update preliminaries * Check and Add warning filter * Update kaggle-cifar10.md * Update object-detection-dataset.md * Update ssd.md fcn.md * Update hybridize.md * Update hybridize.md Signed-off-by: sunhaizhou <[email protected]> Co-authored-by: zhou201505013 <[email protected]> Co-authored-by: Xinwei Liu <[email protected]> Co-authored-by: Anirudh Dagar <[email protected]> Co-authored-by: Aston Zhang <[email protected]> Co-authored-by: hugo_han <[email protected]> 
Co-authored-by: gyro永不抽风 <[email protected]> Co-authored-by: CanChengZheng <[email protected]> Co-authored-by: linlin <[email protected]> Co-authored-by: iuk <[email protected]> Co-authored-by: yoos <[email protected]> Co-authored-by: Mr. Justice Lawrence John Wargrave <[email protected]> Co-authored-by: Chiyuan Fu <[email protected]> Co-authored-by: Sunhuashan <[email protected]> Co-authored-by: Haiker Sun <[email protected]> Co-authored-by: Ming Liu <[email protected]> Co-authored-by: goldmermaid <[email protected]> Co-authored-by: silenceZheng66 <[email protected]> Co-authored-by: Wenchao Yan <[email protected]> Co-authored-by: Kiki2049 <[email protected]> Co-authored-by: Krahets <[email protected]> Co-authored-by: friedmainfunction <[email protected]> Co-authored-by: Jameson <[email protected]> Co-authored-by: P. Yao <[email protected]> Co-authored-by: Yulv-git <[email protected]> Co-authored-by: Liu,Xiao <[email protected]> Co-authored-by: YIN, Gang <[email protected]> Co-authored-by: Joe-HZ <[email protected]> Co-authored-by: lybloveyou <[email protected]> Co-authored-by: VigourJiang <[email protected]> Co-authored-by: zxhd863943427 <[email protected]> Co-authored-by: LYF <[email protected]> Co-authored-by: Aston Zhang <[email protected]> Co-authored-by: xiaotinghe <[email protected]> Co-authored-by: Ubuntu <[email protected]> Co-authored-by: Holly-Max <[email protected]> Co-authored-by: HinGwenWoong <[email protected]> Co-authored-by: Shuai Zhang <[email protected]>
https://github.com/d2l-ai/d2l-zh.git
def evaluate_accuracy(net, data_iter):
    metric = Accumulator(2)  # No. of correct predictions, no. of predictions
    for X, y in data_iter:
        metric.add(accuracy(net(X), y), d2l.size(y))
    return metric[0] / metric[1]
52
mxnet.py
Python
d2l/mxnet.py
b64b41d8c1ac23c43f7a4e3f9f6339d6f0012ab2
d2l-zh
2
287,724
7
9
3
31
4
0
7
21
async_terminate_apps
Add Button platform to Bravia TV (#78093) * Add Button platform to Bravia TV * Add button.py to coveragerc * improve callable type
https://github.com/home-assistant/core.git
async def async_terminate_apps(self) -> None:
    await self.client.terminate_apps()
16
coordinator.py
Python
homeassistant/components/braviatv/coordinator.py
ab4c1ebfd6ab79abfc4e214853f71afba2380099
core
1
139,515
14
8
8
34
6
0
14
29
extra_compute_grad_fetches
[RLlib] Introduce new policy base classes. (#24742)
https://github.com/ray-project/ray.git
def extra_compute_grad_fetches(self) -> Dict[str, Any]:
    return {LEARNER_STATS_KEY: {}}  # e.g, stats, td error, etc.
20
torch_policy_v2.py
Python
rllib/policy/torch_policy_v2.py
bc3a1d35cf6e9a5fd7eef908a8e76aefb80ce6a9
ray
1
248,550
68
12
54
488
17
0
145
717
test_join_rules_public
EventAuthTestCase: build events for the right room version In practice, when we run the auth rules, all of the events have the right room version. Let's stop building Room V1 events for these tests and use the right version.
https://github.com/matrix-org/synapse.git
def test_join_rules_public(self):
    creator = "@creator:example.com"
    pleb = "@joiner:example.com"

    auth_events = {
        ("m.room.create", ""): _create_event(RoomVersions.V6, creator),
        ("m.room.member", creator): _join_event(RoomVersions.V6, creator),
        ("m.room.join_rules", ""): _join_rules_event(
            RoomVersions.V6, creator, "public"
        ),
    }

    # Check join.
    event_auth.check_auth_rules_for_event(
        RoomVersions.V6,
        _join_event(RoomVersions.V6, pleb),
        auth_events.values(),
    )

    # A user cannot be force-joined to a room.
    with self.assertRaises(AuthError):
        event_auth.check_auth_rules_for_event(
            RoomVersions.V6,
            _member_event(RoomVersions.V6, pleb, "join", sender=creator),
            auth_events.values(),
        )

    # Banned should be rejected.
    auth_events[("m.room.member", pleb)] = _member_event(
        RoomVersions.V6, pleb, "ban"
    )
    with self.assertRaises(AuthError):
        event_auth.check_auth_rules_for_event(
            RoomVersions.V6,
            _join_event(RoomVersions.V6, pleb),
            auth_events.values(),
        )

    # A user who left can re-join.
    auth_events[("m.room.member", pleb)] = _member_event(
        RoomVersions.V6, pleb, "leave"
    )
    event_auth.check_auth_rules_for_event(
        RoomVersions.V6,
        _join_event(RoomVersions.V6, pleb),
        auth_events.values(),
    )

    # A user can send a join if they're in the room.
    auth_events[("m.room.member", pleb)] = _member_event(
        RoomVersions.V6, pleb, "join"
    )
    event_auth.check_auth_rules_for_event(
        RoomVersions.V6,
        _join_event(RoomVersions.V6, pleb),
        auth_events.values(),
    )

    # A user can accept an invite.
    auth_events[("m.room.member", pleb)] = _member_event(
        RoomVersions.V6, pleb, "invite", sender=creator
    )
    event_auth.check_auth_rules_for_event(
        RoomVersions.V6,
        _join_event(RoomVersions.V6, pleb),
        auth_events.values(),
    )
309
test_event_auth.py
Python
tests/test_event_auth.py
2959184a42398277ff916206235b844a8f7be5d7
synapse
1
178,014
35
15
17
179
21
0
51
182
check_toname_in_config_by_regex
fix: DEV-1462: Fix changing label config for repeater tag (#2725) * fix: DEV-1462: Fix changing label config for repeater tag with created annotations
https://github.com/heartexlabs/label-studio.git
def check_toname_in_config_by_regex(config_string, to_name, control_type=None):
    c = parse_config(config_string)
    if control_type:
        check_list = [control_type]
    else:
        check_list = list(c.keys())
    for control in check_list:
        item = c[control].get('regex', {})
        for to_name_item in c[control]['to_name']:
            expression = to_name_item
            for key in item:
                expression = expression.replace(key, item[key])
            pattern = re.compile(expression)
            full_match = pattern.fullmatch(to_name)
            if full_match:
                return True
    return False
112
label_config.py
Python
label_studio/core/label_config.py
583b3cb3b03a36a30b3ce9fe96eb4fb28548a070
label-studio
6
100,928
18
12
6
85
12
0
19
73
ask_multi_load
Core updates - Change loss loading mechanism - Autosize tooltips based on content size - Random linting + code modernisation
https://github.com/deepfakes/faceswap.git
def ask_multi_load(filepath, filetypes):
    filenames = FileHandler("filename_multi", filetypes).return_file
    if filenames:
        final_names = " ".join(f"\"{fname}\"" for fname in filenames)
        logger.debug(final_names)
        filepath.set(final_names)
46
control_helper.py
Python
lib/gui/control_helper.py
bad5025aea1adb9126580e14e064e6c99089243d
faceswap
3
38,106
64
18
33
349
24
0
105
564
tokenize
Black preview (#17217) * Black preview * Fixup too! * Fix check copies * Use the same version as the CI * Bump black
https://github.com/huggingface/transformers.git
def tokenize(self, x):
    if isinstance(x, list) and all([isinstance(_x, list) for _x in x]):
        d = None
        for l in x:
            t = self.tokenizer(
                l,
                padding="max_length",
                max_length=384,
                truncation=True,
                return_tensors="pt",
            )
            t["sizes"] = torch.tensor([len(l)])
            if d is not None:
                for k in d.keys():
                    d[k] = torch.cat((d[k], t[k]), 0)
            else:
                d = t

        d["start_token_id"] = torch.tensor(self.tokenizer.convert_tokens_to_ids("[E]"))
        d["end_token_id"] = torch.tensor(self.tokenizer.convert_tokens_to_ids("[/E]"))

    elif isinstance(x, list) and all([isinstance(_x, str) for _x in x]):
        d = self.tokenizer(
            x,
            padding="max_length",
            max_length=384,
            truncation=True,
            return_tensors="pt",
        )

    else:
        raise Exception(
            "Type of parameter x was not recognized! Only `list of strings` for query or `list of lists of"
            " strings` for supports are supported."
        )

    return d
219
tokenizer_utils.py
Python
examples/research_projects/fsner/src/fsner/tokenizer_utils.py
afe5d42d8d1d80af911ed980c2936bfe887078f6
transformers
10
266,410
331
24
134
1,651
111
0
624
3,924
run
end_play: end the current play only (#76674) Fixes #76672
https://github.com/ansible/ansible.git
def run(self):
    result = 0
    entrylist = []
    entry = {}
    try:
        # preload become/connection/shell to set config defs cached
        list(connection_loader.all(class_only=True))
        list(shell_loader.all(class_only=True))
        list(become_loader.all(class_only=True))

        for playbook in self._playbooks:

            # deal with FQCN
            resource = _get_collection_playbook_path(playbook)
            if resource is not None:
                playbook_path = resource[1]
                playbook_collection = resource[2]
            else:
                playbook_path = playbook
                # not fqcn, but might still be colleciotn playbook
                playbook_collection = _get_collection_name_from_path(playbook)

            if playbook_collection:
                display.warning("running playbook inside collection {0}".format(playbook_collection))
                AnsibleCollectionConfig.default_collection = playbook_collection
            else:
                AnsibleCollectionConfig.default_collection = None

            pb = Playbook.load(playbook_path, variable_manager=self._variable_manager, loader=self._loader)
            # FIXME: move out of inventory
            self._inventory.set_playbook_basedir(os.path.realpath(os.path.dirname(playbook_path)))

            if self._tqm is None:  # we are doing a listing
                entry = {'playbook': playbook_path}
                entry['plays'] = []
            else:
                # make sure the tqm has callbacks loaded
                self._tqm.load_callbacks()
                self._tqm.send_callback('v2_playbook_on_start', pb)

            i = 1
            plays = pb.get_plays()
            display.vv(u'%d plays in %s' % (len(plays), to_text(playbook_path)))

            for play in plays:
                if play._included_path is not None:
                    self._loader.set_basedir(play._included_path)
                else:
                    self._loader.set_basedir(pb._basedir)

                # clear any filters which may have been applied to the inventory
                self._inventory.remove_restriction()

                # Allow variables to be used in vars_prompt fields.
                all_vars = self._variable_manager.get_vars(play=play)
                templar = Templar(loader=self._loader, variables=all_vars)
                setattr(play, 'vars_prompt', templar.template(play.vars_prompt))

                # FIXME: this should be a play 'sub object' like loop_control
                if play.vars_prompt:
                    for var in play.vars_prompt:
                        vname = var['name']
                        prompt = var.get("prompt", vname)
                        default = var.get("default", None)
                        private = boolean(var.get("private", True))
                        confirm = boolean(var.get("confirm", False))
                        encrypt = var.get("encrypt", None)
                        salt_size = var.get("salt_size", None)
                        salt = var.get("salt", None)
                        unsafe = var.get("unsafe", None)

                        if vname not in self._variable_manager.extra_vars:
                            if self._tqm:
                                self._tqm.send_callback('v2_playbook_on_vars_prompt', vname, private, prompt, encrypt, confirm, salt_size, salt, default, unsafe)
                                play.vars[vname] = display.do_var_prompt(vname, private, prompt, encrypt, confirm, salt_size, salt, default, unsafe)
                            else:  # we are either in --list-<option> or syntax check
                                play.vars[vname] = default

                # Post validate so any play level variables are templated
                all_vars = self._variable_manager.get_vars(play=play)
                templar = Templar(loader=self._loader, variables=all_vars)
                play.post_validate(templar)

                if context.CLIARGS['syntax']:
                    continue

                if self._tqm is None:
                    # we are just doing a listing
                    entry['plays'].append(play)
                else:
                    self._tqm._unreachable_hosts.update(self._unreachable_hosts)

                    previously_failed = len(self._tqm._failed_hosts)
                    previously_unreachable = len(self._tqm._unreachable_hosts)

                    break_play = False
                    # we are actually running plays
                    batches = self._get_serialized_batches(play)
                    if len(batches) == 0:
                        self._tqm.send_callback('v2_playbook_on_play_start', play)
                        self._tqm.send_callback('v2_playbook_on_no_hosts_matched')
                    for batch in batches:
                        # restrict the inventory to the hosts in the serialized batch
                        self._inventory.restrict_to_hosts(batch)
                        # and run it...
                        try:
                            result = self._tqm.run(play=play)
                        except AnsibleEndPlay as e:
                            result = e.result
                            break

                        # break the play if the result equals the special return code
                        if result & self._tqm.RUN_FAILED_BREAK_PLAY != 0:
                            result = self._tqm.RUN_FAILED_HOSTS
                            break_play = True

                        # check the number of failures here, to see if they're above the maximum
                        # failure percentage allowed, or if any errors are fatal. If either of those
                        # conditions are met, we break out, otherwise we only break out if the entire
                        # batch failed
                        failed_hosts_count = len(self._tqm._failed_hosts) + len(self._tqm._unreachable_hosts) - \
                            (previously_failed + previously_unreachable)

                        if len(batch) == failed_hosts_count:
                            break_play = True
                            break

                        # update the previous counts so they don't accumulate incorrectly
                        # over multiple serial batches
                        previously_failed += len(self._tqm._failed_hosts) - previously_failed
                        previously_unreachable += len(self._tqm._unreachable_hosts) - previously_unreachable

                        # save the unreachable hosts from this batch
                        self._unreachable_hosts.update(self._tqm._unreachable_hosts)

                    if break_play:
                        break

                i = i + 1  # per play

            if entry:
                entrylist.append(entry)  # per playbook

            # send the stats callback for this playbook
            if self._tqm is not None:
                if C.RETRY_FILES_ENABLED:
                    retries = set(self._tqm._failed_hosts.keys())
                    retries.update(self._tqm._unreachable_hosts.keys())
                    retries = sorted(retries)
                    if len(retries) > 0:
                        if C.RETRY_FILES_SAVE_PATH:
                            basedir = C.RETRY_FILES_SAVE_PATH
                        elif playbook_path:
                            basedir = os.path.dirname(os.path.abspath(playbook_path))
                        else:
                            basedir = '~/'

                        (retry_name, _) = os.path.splitext(os.path.basename(playbook_path))
                        filename = os.path.join(basedir, "%s.retry" % retry_name)
                        if self._generate_retry_inventory(filename, retries):
                            display.display("\tto retry, use: --limit @%s\n" % filename)

                self._tqm.send_callback('v2_playbook_on_stats', self._tqm._stats)

            # if the last result wasn't zero, break out of the playbook file name loop
            if result != 0:
                break

        if entrylist:
            return entrylist

    finally:
        if self._tqm is not None:
            self._tqm.cleanup()
        if self._loader:
            self._loader.cleanup_all_tmp_files()

    if context.CLIARGS['syntax']:
        display.display("No issues encountered")
        return result

    if context.CLIARGS['start_at_task'] and not self._tqm._start_at_done:
        display.error(
            "No matching task \"%s\" found."
            " Note: --start-at-task can only follow static includes."
            % context.CLIARGS['start_at_task']
        )

    return result
1,002
playbook_executor.py
Python
lib/ansible/executor/playbook_executor.py
f78deccec2d4b5447f32d4fc67eaa549f479ccaa
ansible
34
119,595
13
8
2
63
10
1
16
17
_coo_matmat
[sparse] change call signature of coo primitive wrappers
https://github.com/google/jax.git
def _coo_matmat(data, row, col, B, *, spinfo, transpose=False):
  return coo_matmat_p.bind(data, row, col, B, spinfo=spinfo, transpose=transpose)

@coo_matmat_p.def_impl
@coo_matmat_p.def_impl
41
coo.py
Python
jax/experimental/sparse/coo.py
424536dcf421a8dd4b6dfd7ddffc066fec7661c7
jax
1
215,816
22
12
5
91
14
0
24
55
test_symlink_exists_file
Add some functional tests Add functional tests for the following: - file.readlink - file.replace - file.symlink Remove unit tests for file.replace as they are duplicated in the added functional test
https://github.com/saltstack/salt.git
def test_symlink_exists_file(file, source):
    with pytest.helpers.temp_file("symlink.txt", contents="Source content") as target:
        with pytest.raises(CommandExecutionError) as exc:
            file.symlink(source, target)
        assert "Existing path is not a symlink:" in exc.value.message
50
test_symlink.py
Python
tests/pytests/functional/modules/file/test_symlink.py
a35b29b2651bf33c5d5b45e64bc7765ffde4aff4
salt
1
110,159
15
10
3
43
5
0
15
41
nargs_error
Factor out error generation for function calls with wrong nargs. ... matching the wording for standard functions. Note that nargs_error returns the exception without raising it itself to make the control flow clearer on the caller side.
https://github.com/matplotlib/matplotlib.git
def nargs_error(name, takes, given):
    return TypeError(f"{name}() takes {takes} positional arguments but "
                     f"{given} were given")
18
__init__.py
Python
lib/matplotlib/_api/__init__.py
973e475ef85524c5e9cef0638c90ca9a159935e4
matplotlib
1
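A minimal usage sketch for the helper above (assumes a matplotlib version that ships this private helper in matplotlib._api; the "plot" name is only illustrative):

from matplotlib._api import nargs_error

# nargs_error builds the TypeError without raising it, so the caller decides when to raise.
err = nargs_error("plot", 2, 3)
assert isinstance(err, TypeError)
assert str(err) == "plot() takes 2 positional arguments but 3 were given"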
121,221
78
14
20
338
30
0
102
199
conv_shape_tuple
lax.conv_general_dilated: validate negative paddings
https://github.com/google/jax.git
def conv_shape_tuple(lhs_shape, rhs_shape, strides, pads, batch_group_count=1):
  if isinstance(pads, str):
    pads = lax.padtype_to_pads(lhs_shape[2:], rhs_shape[2:], strides, pads)
  if len(pads) != len(lhs_shape) - 2:
    msg = "Wrong number of explicit pads for convolution: expected {}, got {}."
    raise TypeError(msg.format(len(lhs_shape) - 2, len(pads)))

  lhs_padded = np.add(lhs_shape[2:], np.sum(np.array(pads).reshape(-1, 2), axis=1))
  if np.any(lhs_padded < 0):
    raise ValueError("Negative padding is larger than the size of the corresponding dimension: "
                     f"got padding={pads} for lhs_shape[2:]={lhs_shape[2:]}")
  out_space = core.stride_shape(lhs_padded, rhs_shape[2:], strides)
  out_space = np.maximum(0, out_space)
  if batch_group_count > 1:
    assert lhs_shape[0] % batch_group_count == 0
    out_shape_0 = lhs_shape[0] // batch_group_count
  else:
    out_shape_0 = lhs_shape[0]
  out_shape = (out_shape_0, rhs_shape[0])
  return tuple(out_shape + tuple(out_space))
210
convolution.py
Python
jax/_src/lax/convolution.py
489596c0e268bf37a7f6c2cb86822f38d24eecc9
jax
5
185,776
7
8
10
27
4
0
7
13
test_widget_remove_order
Add a unit test for removal ordering via DOMQuery.remove
https://github.com/Textualize/textual.git
async def test_widget_remove_order():
    removals: list[str] = []
87
test_widget_removing.py
Python
tests/test_widget_removing.py
d3e7f5ad994a92ae1734caea8bb66cfb043fcfc4
textual
1
37,528
50
14
22
337
29
0
80
186
nested_simplify
Replace dict/BatchEncoding instance checks by Mapping (#17014) * Replace dict/BatchEncoding instance checks by Mapping * Typo
https://github.com/huggingface/transformers.git
def nested_simplify(obj, decimals=3):
    import numpy as np

    if isinstance(obj, list):
        return [nested_simplify(item, decimals) for item in obj]
    elif isinstance(obj, np.ndarray):
        return nested_simplify(obj.tolist())
    elif isinstance(obj, Mapping):
        return {nested_simplify(k, decimals): nested_simplify(v, decimals) for k, v in obj.items()}
    elif isinstance(obj, (str, int, np.int64)):
        return obj
    elif obj is None:
        return obj
    elif is_torch_available() and isinstance(obj, torch.Tensor):
        return nested_simplify(obj.tolist(), decimals)
    elif is_tf_available() and tf.is_tensor(obj):
        return nested_simplify(obj.numpy().tolist())
    elif isinstance(obj, float):
        return round(obj, decimals)
    elif isinstance(obj, (np.int32, np.float32)):
        return nested_simplify(obj.item(), decimals)
    else:
        raise Exception(f"Not supported: {type(obj)}")
213
testing_utils.py
Python
src/transformers/testing_utils.py
18df440709f1b19d1c5617c0d987c5ff8fd0915d
transformers
14
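A behaviour sketch for the helper above (assumes a transformers version that includes the Mapping change; the field names are only illustrative):

from collections import OrderedDict
from transformers.testing_utils import nested_simplify

# Any Mapping (not just a plain dict / BatchEncoding) is simplified recursively,
# with floats rounded to the requested number of decimals.
rounded = nested_simplify(OrderedDict(loss=0.123456, logits=[1.00004, 2.0]))
assert rounded == {"loss": 0.123, "logits": [1.0, 2.0]}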
20,451
46
13
26
248
23
0
75
244
guess_lexer_for_filename
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def guess_lexer_for_filename(_fn, _text, **options):
    fn = basename(_fn)
    primary = {}
    matching_lexers = set()
    for lexer in _iter_lexerclasses():
        for filename in lexer.filenames:
            if _fn_matches(fn, filename):
                matching_lexers.add(lexer)
                primary[lexer] = True
        for filename in lexer.alias_filenames:
            if _fn_matches(fn, filename):
                matching_lexers.add(lexer)
                primary[lexer] = False
    if not matching_lexers:
        raise ClassNotFound('no lexer for filename %r found' % fn)
    if len(matching_lexers) == 1:
        return matching_lexers.pop()(**options)
    result = []
    for lexer in matching_lexers:
        rv = lexer.analyse_text(_text)
        if rv == 1.0:
            return lexer(**options)
        result.append((rv, lexer))
179
__init__.py
Python
pipenv/patched/notpip/_vendor/pygments/lexers/__init__.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
10
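A short usage sketch: the same function is the public pygments.lexers.guess_lexer_for_filename in upstream Pygments (the record above shows pipenv's vendored copy), so plain usage looks like this:

from pygments.lexers import guess_lexer_for_filename

lexer = guess_lexer_for_filename("example.py", "print('hello')\n")
print(lexer.name)  # expected: "Python"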
111,538
55
12
21
269
30
1
68
197
test_issue4674
Refactor KB for easier customization (#11268) * Add implementation of batching + backwards compatibility fixes. Tests indicate issue with batch disambiguation for custom singular entity lookups. * Fix tests. Add distinction w.r.t. batch size. * Remove redundant and add new comments. * Adjust comments. Fix variable naming in EL prediction. * Fix mypy errors. * Remove KB entity type config option. Change return types of candidate retrieval functions to Iterable from Iterator. Fix various other issues. * Update spacy/pipeline/entity_linker.py Co-authored-by: Paul O'Leary McCann <[email protected]> * Update spacy/pipeline/entity_linker.py Co-authored-by: Paul O'Leary McCann <[email protected]> * Update spacy/kb_base.pyx Co-authored-by: Paul O'Leary McCann <[email protected]> * Update spacy/kb_base.pyx Co-authored-by: Paul O'Leary McCann <[email protected]> * Update spacy/pipeline/entity_linker.py Co-authored-by: Paul O'Leary McCann <[email protected]> * Add error messages to NotImplementedErrors. Remove redundant comment. * Fix imports. * Remove redundant comments. * Rename KnowledgeBase to InMemoryLookupKB and BaseKnowledgeBase to KnowledgeBase. * Fix tests. * Update spacy/errors.py Co-authored-by: Sofie Van Landeghem <[email protected]> * Move KB into subdirectory. * Adjust imports after KB move to dedicated subdirectory. * Fix config imports. * Move Candidate + retrieval functions to separate module. Fix other, small issues. * Fix docstrings and error message w.r.t. class names. Fix typing for candidate retrieval functions. * Update spacy/kb/kb_in_memory.pyx Co-authored-by: Sofie Van Landeghem <[email protected]> * Update spacy/ml/models/entity_linker.py Co-authored-by: Sofie Van Landeghem <[email protected]> * Fix typing. * Change typing of mentions to be Span instead of Union[Span, str]. * Update docs. * Update EntityLinker and _architecture docs. * Update website/docs/api/entitylinker.md Co-authored-by: Paul O'Leary McCann <[email protected]> * Adjust message for E1046. * Re-add section for Candidate in kb.md, add reference to dedicated page. * Update docs and docstrings. * Re-add section + reference for KnowledgeBase.get_alias_candidates() in docs. * Update spacy/kb/candidate.pyx * Update spacy/kb/kb_in_memory.pyx * Update spacy/pipeline/legacy/entity_linker.py * Remove canididate.md. Remove mistakenly added config snippet in entity_linker.py. Co-authored-by: Paul O'Leary McCann <[email protected]> Co-authored-by: Sofie Van Landeghem <[email protected]>
https://github.com/explosion/spaCy.git
def test_issue4674():
    nlp = English()
    kb = InMemoryLookupKB(nlp.vocab, entity_vector_length=3)
    vector1 = [0.9, 1.1, 1.01]
    vector2 = [1.8, 2.25, 2.01]
    with pytest.warns(UserWarning):
        kb.set_entities(
            entity_list=["Q1", "Q1"],
            freq_list=[32, 111],
            vector_list=[vector1, vector2],
        )
    assert kb.get_size_entities() == 1
    # dumping to file & loading back in
    with make_tempdir() as d:
        dir_path = ensure_path(d)
        if not dir_path.exists():
            dir_path.mkdir()
        file_path = dir_path / "kb"
        kb.to_disk(str(file_path))
        kb2 = InMemoryLookupKB(nlp.vocab, entity_vector_length=3)
        kb2.from_disk(str(file_path))
    assert kb2.get_size_entities() == 1


@pytest.mark.issue(6730)
@pytest.mark.issue(6730)
166
test_entity_linker.py
Python
spacy/tests/pipeline/test_entity_linker.py
1f23c615d7a7326ca5a38a7d768b8b70caaa0e17
spaCy
2
50,918
107
19
50
611
48
0
172
610
postprocess
update yolov3_darknet53_vehicles (#1957) * update yolov3_darknet53_vehicles * update gpu config * update * add clean func * update save inference model
https://github.com/PaddlePaddle/PaddleHub.git
def postprocess(paths, images, data_out, score_thresh, label_names, output_dir, handle_id, visualization=True): results = data_out.copy_to_cpu() lod = data_out.lod()[0] check_dir(output_dir) if paths: assert type(paths) is list, "type(paths) is not list." if handle_id < len(paths): unhandled_paths = paths[handle_id:] unhandled_paths_num = len(unhandled_paths) else: unhandled_paths_num = 0 if images is not None: if handle_id < len(images): unhandled_paths = None unhandled_paths_num = len(images) - handle_id else: unhandled_paths_num = 0 output = list() for index in range(len(lod) - 1): output_i = {'data': []} if unhandled_paths and index < unhandled_paths_num: org_img_path = unhandled_paths[index] org_img = Image.open(org_img_path) else: org_img = images[index - unhandled_paths_num] org_img = org_img.astype(np.uint8) org_img = Image.fromarray(org_img[:, :, ::-1]) if visualization: org_img_path = get_save_image_name(org_img, output_dir, 'image_numpy_{}'.format((handle_id + index))) org_img.save(org_img_path) org_img_height = org_img.height org_img_width = org_img.width result_i = results[lod[index]:lod[index + 1]] for row in result_i: if len(row) != 6: continue if row[1] < score_thresh: continue category_id = int(row[0]) confidence = row[1] bbox = row[2:] dt = {} dt['label'] = label_names[category_id] dt['confidence'] = float(confidence) dt['left'], dt['top'], dt['right'], dt['bottom'] = clip_bbox(bbox, org_img_width, org_img_height) output_i['data'].append(dt) output.append(output_i) if visualization: output_i['save_path'] = draw_bounding_box_on_image(org_img_path, output_i['data'], output_dir) return output
380
processor.py
Python
modules/image/object_detection/yolov3_darknet53_vehicles/processor.py
7a847a39b1da6e6867031f52f713d92391b9729d
PaddleHub
13
255,400
43
13
15
229
23
0
54
209
test_error_opset_import_mismatch
Use Python type annotations rather than comments (#3962) * These have been supported since Python 3.5. ONNX doesn't support Python < 3.6, so we can use the annotations. Diffs generated by https://pypi.org/project/com2ann/. Signed-off-by: Gary Miguel <[email protected]> * Remove MYPY conditional logic in gen_proto.py It breaks the type annotations and shouldn't be needed. Signed-off-by: Gary Miguel <[email protected]> * Get rid of MYPY bool from more scripts Signed-off-by: Gary Miguel <[email protected]> * move Descriptors class above where its referenced in type annotation Signed-off-by: Gary Miguel <[email protected]> * fixes Signed-off-by: Gary Miguel <[email protected]> * remove extra blank line Signed-off-by: Gary Miguel <[email protected]> * fix type annotations Signed-off-by: Gary Miguel <[email protected]> * fix type annotation in gen_docs Signed-off-by: Gary Miguel <[email protected]> * fix Operators.md Signed-off-by: Gary Miguel <[email protected]> * fix TestCoverage.md Signed-off-by: Gary Miguel <[email protected]> * fix protoc-gen-mypy.py Signed-off-by: Gary Miguel <[email protected]>
https://github.com/onnx/onnx.git
def test_error_opset_import_mismatch(self) -> None:
        m1, m2 = _load_model(m1_def), _load_model(m2_def)
        m1 = helper.make_model(m1.graph, producer_name='test',
                               opset_imports=[helper.make_opsetid("", 10)])
        m2 = helper.make_model(m2.graph, producer_name='test',
                               opset_imports=[helper.make_opsetid("", 15)])

        io_map = [("B00", "B01"), ("B10", "B11"), ("B20", "B21")]
        self.assertRaises(ValueError, compose.merge_models, m1, m2, io_map)

        # Converting to the same Operator set version, should work
        m1 = version_converter.convert_version(m1, 15)
        m3 = compose.merge_models(m1, m2, io_map=io_map)
        checker.check_model(m3)
142
compose_test.py
Python
onnx/test/compose_test.py
83fa57c74edfd13ddac9548b8a12f9e3e2ed05bd
onnx
1
319,856
17
10
4
63
11
0
17
56
test_load_corrupt_file
Updates the classifier to catch warnings from scikit-learn and rebuild the model file when this happens
https://github.com/paperless-ngx/paperless-ngx.git
def test_load_corrupt_file(self, patched_pickle_load):
        # First load is the schema version
        patched_pickle_load.side_effect = [DocumentClassifier.FORMAT_VERSION, OSError()]

        with self.assertRaises(ClassifierModelCorruptError):
            self.classifier.load()
36
test_classifier.py
Python
src/documents/tests/test_classifier.py
77fbbe95ffb965525136982846f50e3ad8244de9
paperless-ngx
1
186,604
14
9
9
55
6
0
20
85
ipv4_enabled
Fully type certbot-nginx module (#9124) * Work in progress * Fix type * Work in progress * Work in progress * Work in progress * Work in progress * Work in progress * Oups. * Fix typing in UnspacedList * Fix logic * Finish typing * List certbot-nginx as fully typed in tox * Fix lint * Fix checks * Organize imports * Fix typing for Python 3.6 * Fix checks * Fix lint * Update certbot-nginx/certbot_nginx/_internal/configurator.py Co-authored-by: alexzorin <[email protected]> * Update certbot-nginx/certbot_nginx/_internal/configurator.py Co-authored-by: alexzorin <[email protected]> * Fix signature of deploy_cert regarding the installer interface * Update certbot-nginx/certbot_nginx/_internal/obj.py Co-authored-by: alexzorin <[email protected]> * Fix types * Update certbot-nginx/certbot_nginx/_internal/parser.py Co-authored-by: alexzorin <[email protected]> * Precise type * Precise _coerce possible inputs/outputs * Fix type * Update certbot-nginx/certbot_nginx/_internal/http_01.py Co-authored-by: ohemorange <[email protected]> * Fix type * Remove an undesirable implementation. * Fix type Co-authored-by: alexzorin <[email protected]> Co-authored-by: ohemorange <[email protected]>
https://github.com/certbot/certbot.git
def ipv4_enabled(self) -> bool:
        if not self.addrs:
            return True
        for a in self.addrs:
            if not a.ipv6:
                return True
        return False
33
obj.py
Python
certbot-nginx/certbot_nginx/_internal/obj.py
16aad35d31a887dab157f9d4f5e0fe9218d06064
certbot
4
176,485
22
12
5
99
11
0
25
60
test_basic
Update black (#5438) * CI: sync up black dev requirements version with precommit * Run black Co-authored-by: Jarrod Millman <[email protected]>
https://github.com/networkx/networkx.git
def test_basic(self):
        trees = [(nx.full_rary_tree(2, 2**2 - 1), 0) for i in range(2)]
        actual = nx.join(trees)
        expected = nx.full_rary_tree(2, 2**3 - 1)
        assert nx.is_isomorphic(actual, expected)
64
test_operations.py
Python
networkx/algorithms/tree/tests/test_operations.py
f6755ffa00211b523c6c0bec5398bc6c3c43c8b1
networkx
2
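A standalone sketch of the same operation (assumes a NetworkX version where the function is exposed as nx.join, as in the test above; later releases rename it to join_trees):

import networkx as nx

# Joining two rooted full binary trees under a fresh root yields the next-size full binary tree.
t1 = nx.full_rary_tree(2, 2**2 - 1)
t2 = nx.full_rary_tree(2, 2**2 - 1)
joined = nx.join([(t1, 0), (t2, 0)])
assert nx.is_isomorphic(joined, nx.full_rary_tree(2, 2**3 - 1))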
296,688
5
15
2
53
6
0
5
11
format_target_temperature
Fix #69952: Daikin AC Temperature jumps after being set (#70326)
https://github.com/home-assistant/core.git
def format_target_temperature(target_temperature):
    return str(round(float(target_temperature), 1)).rstrip("0").rstrip(".")
29
climate.py
Python
homeassistant/components/daikin/climate.py
b0ed42a5a58976ebe82b5bbbb60c499648a1718b
core
1
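A behaviour sketch for the helper above (derived directly from the code shown): values are rounded to one decimal, then trailing zeros and a trailing dot are stripped.

assert format_target_temperature("22.00") == "22"
assert format_target_temperature("21.50") == "21.5"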
70,730
21
11
4
70
8
0
23
58
test_not_collapsed_with_legacy
Update Wagtail test cases to match slim sidebar capabilities and implementation details
https://github.com/wagtail/wagtail.git
def test_not_collapsed_with_legacy(self):
        # Sidebar should not be collapsed because the feature flag is not enabled
        self.client.cookies['wagtail_sidebar_collapsed'] = '1'
        response = self.client.get(reverse('wagtailadmin_home'))
        self.assertNotContains(response, 'sidebar-collapsed')
37
test_menu.py
Python
wagtail/admin/tests/test_menu.py
18c4d7c81356dbd5c4503db2ea24b21492512317
wagtail
1
47,623
21
12
9
83
14
0
21
128
test_pool_slots_property
Replace usage of `DummyOperator` with `EmptyOperator` (#22974) * Replace usage of `DummyOperator` with `EmptyOperator`
https://github.com/apache/airflow.git
def test_pool_slots_property(self):
        with pytest.raises(ValueError, match="pool slots .* cannot be less than 1"):
            dag = models.DAG(dag_id='test_run_pooling_task')
            EmptyOperator(
                task_id='test_run_pooling_task_op',
                dag=dag,
                pool='test_pool',
                pool_slots=0,
            )
47
test_taskinstance.py
Python
tests/models/test_taskinstance.py
49e336ae0302b386a2f47269a6d13988382d975f
airflow
1
163,655
127
16
47
435
46
0
191
651
putmask
BUG: setting pd.NA into Series casts to object (#45431)
https://github.com/pandas-dev/pandas.git
def putmask(self, mask, new) -> list[Block]: orig_mask = mask values = cast(np.ndarray, self.values) mask, noop = validate_putmask(values.T, mask) assert not isinstance(new, (ABCIndex, ABCSeries, ABCDataFrame)) if new is lib.no_default: new = self.fill_value new = self._standardize_fill_value(new) if self._can_hold_element(new): putmask_without_repeat(values.T, mask, new) return [self] elif np_version_under1p20 and infer_dtype_from(new)[0].kind in ["m", "M"]: # using putmask with object dtype will incorrectly cast to object # Having excluded self._can_hold_element, we know we cannot operate # in-place, so we are safe using `where` return self.where(new, ~mask) elif noop: return [self] elif self.ndim == 1 or self.shape[0] == 1: # no need to split columns if not is_list_like(new): # putmask_smart can't save us the need to cast return self.coerce_to_target_dtype(new).putmask(mask, new) # This differs from # `self.coerce_to_target_dtype(new).putmask(mask, new)` # because putmask_smart will check if new[mask] may be held # by our dtype. nv = putmask_smart(values.T, mask, new).T return [self.make_block(nv)] else: is_array = isinstance(new, np.ndarray) res_blocks = [] nbs = self._split() for i, nb in enumerate(nbs): n = new if is_array: # we have a different value per-column n = new[:, i : i + 1] submask = orig_mask[:, i : i + 1] rbs = nb.putmask(submask, n) res_blocks.extend(rbs) return res_blocks
275
blocks.py
Python
pandas/core/internals/blocks.py
3510b1fd2a9cf752638f4af751bdeb33496db766
pandas
11
109,118
92
14
27
281
19
1
100
285
xkcd
Simplify impl. of functions optionally used as context managers. We can actually just put the "exit" logic into an ExitStack callback. If the return value is never `__enter__`'d via a "with" statement, it is never `__exit__`'d either.
https://github.com/matplotlib/matplotlib.git
def xkcd(scale=1, length=100, randomness=2):
    # This cannot be implemented in terms of contextmanager() or rc_context()
    # because this needs to work as a non-contextmanager too.

    if rcParams['text.usetex']:
        raise RuntimeError(
            "xkcd mode is not compatible with text.usetex = True")

    stack = ExitStack()
    stack.callback(dict.update, rcParams, rcParams.copy())

    from matplotlib import patheffects
    rcParams.update({
        'font.family': ['xkcd', 'xkcd Script', 'Humor Sans', 'Comic Neue',
                        'Comic Sans MS'],
        'font.size': 14.0,
        'path.sketch': (scale, length, randomness),
        'path.effects': [
            patheffects.withStroke(linewidth=4, foreground="w")],
        'axes.linewidth': 1.5,
        'lines.linewidth': 2.0,
        'figure.facecolor': 'white',
        'grid.linewidth': 0.0,
        'axes.grid': False,
        'axes.unicode_minus': False,
        'axes.edgecolor': 'black',
        'xtick.major.size': 8,
        'xtick.major.width': 3,
        'ytick.major.size': 8,
        'ytick.major.width': 3,
    })

    return stack


## Figures ##

@_api.make_keyword_only("3.6", "facecolor")
@_api.make_keyword_only("3.6", "facecolor")
158
pyplot.py
Python
lib/matplotlib/pyplot.py
2d918ba09155810194bb4ba136369082ad46c8c8
matplotlib
2
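A usage sketch: the function above is exposed as plt.xkcd(), which works both as a context manager (the ExitStack return value) and as a plain call.

import matplotlib.pyplot as plt

with plt.xkcd():
    fig, ax = plt.subplots()
    ax.plot([0, 1, 2], [0, 1, 4])
# Calling plt.xkcd() without "with" leaves the sketch style enabled instead.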
157,202
38
14
23
200
13
0
41
111
test_use_nullable_dtypes
Add support for `use_nullable_dtypes` to `dd.read_parquet` (#9617)
https://github.com/dask/dask.git
def test_use_nullable_dtypes(tmp_path, engine):
    df = pd.DataFrame(
        {
            "a": pd.Series([1, 2, pd.NA, 3, 4], dtype="Int64"),
            "b": pd.Series([True, pd.NA, False, True, False], dtype="boolean"),
            "c": pd.Series([0.1, 0.2, 0.3, pd.NA, 0.4], dtype="Float64"),
            "d": pd.Series(["a", "b", "c", "d", pd.NA], dtype="string"),
        }
    )
    ddf = dd.from_pandas(df, npartitions=2)
257
test_parquet.py
Python
dask/dataframe/io/tests/test_parquet.py
b1e468e8645baee30992fbfa84250d816ac1098a
dask
3
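A minimal usage sketch of the option this test covers (assumes a Dask release that includes the commit above; "example.parquet" is a placeholder path, and later releases may rename the keyword):

import dask.dataframe as dd

# Read parquet data into pandas nullable extension dtypes (Int64, boolean, Float64, string).
ddf = dd.read_parquet("example.parquet", use_nullable_dtypes=True)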
107,623
113
14
26
418
49
0
169
501
update_positions
Cleanup AnnotationBbox. Inline _update_position_xybox into update_positions. Avoid unpacking x,y pairs where unnecessary. Don't bother copying arrowprops, as we don't actually modify it. Reuse mutation scale for both patch and arrow. Clarify the doc for frameon. Various small extra cleanups.
https://github.com/matplotlib/matplotlib.git
def update_positions(self, renderer): x, y = self.xybox if isinstance(self.boxcoords, tuple): xcoord, ycoord = self.boxcoords x1, y1 = self._get_xy(renderer, x, y, xcoord) x2, y2 = self._get_xy(renderer, x, y, ycoord) ox0, oy0 = x1, y2 else: ox0, oy0 = self._get_xy(renderer, x, y, self.boxcoords) w, h, xd, yd = self.offsetbox.get_extent(renderer) fw, fh = self._box_alignment self.offsetbox.set_offset((ox0 - fw * w + xd, oy0 - fh * h + yd)) bbox = self.offsetbox.get_window_extent(renderer) self.patch.set_bounds(bbox.bounds) mutation_scale = renderer.points_to_pixels(self.get_fontsize()) self.patch.set_mutation_scale(mutation_scale) if self.arrowprops: # Use FancyArrowPatch if self.arrowprops has "arrowstyle" key. # Adjust the starting point of the arrow relative to the textbox. # TODO: Rotation needs to be accounted. arrow_begin = bbox.p0 + bbox.size * self._arrow_relpos arrow_end = self._get_position_xy(renderer) # The arrow (from arrow_begin to arrow_end) will be first clipped # by patchA and patchB, then shrunk by shrinkA and shrinkB (in # points). If patch A is not set, self.bbox_patch is used. self.arrow_patch.set_positions(arrow_begin, arrow_end) if "mutation_scale" in self.arrowprops: mutation_scale = renderer.points_to_pixels( self.arrowprops["mutation_scale"]) # Else, use fontsize-based mutation_scale defined above. self.arrow_patch.set_mutation_scale(mutation_scale) patchA = self.arrowprops.get("patchA", self.patch) self.arrow_patch.set_patchA(patchA)
264
offsetbox.py
Python
lib/matplotlib/offsetbox.py
924d7c7f9900d8839e66616791121237101e7b57
matplotlib
4
180,449
63
12
31
419
27
0
83
412
test_component_functions
detect all types of null default value (#1685) * detect all types of null default value * fix test * address review comments
https://github.com/gradio-app/gradio.git
def test_component_functions(self):
        radio_input = gr.Radio(["a", "b", "c"])
        self.assertEqual(radio_input.preprocess("c"), "c")
        self.assertEqual(radio_input.preprocess_example("a"), "a")
        self.assertEqual(radio_input.serialize("a", True), "a")
        with tempfile.TemporaryDirectory() as tmpdirname:
            to_save = radio_input.save_flagged(tmpdirname, "radio_input", "a", None)
            self.assertEqual(to_save, "a")
            restored = radio_input.restore_flagged(tmpdirname, to_save, None)
            self.assertEqual(restored, "a")

        self.assertIsInstance(radio_input.generate_sample(), str)

        radio_input = gr.Radio(
            choices=["a", "b", "c"], default="a", label="Pick Your One Input"
        )
        self.assertEqual(
            radio_input.get_config(),
            {
                "choices": ["a", "b", "c"],
                "value": None,
                "name": "radio",
                "show_label": True,
                "label": "Pick Your One Input",
                "style": {},
                "elem_id": None,
                "visible": True,
                "interactive": None,
            },
        )

        with self.assertRaises(ValueError):
            wrong_type = gr.Radio(["a", "b"], type="unknown")
            wrong_type.preprocess(0)
235
test_components.py
Python
test/test_components.py
a2b84199d88f84fd2dc515e092e79380ed7cef50
gradio
1
189,528
10
8
11
40
5
0
10
38
stop_submobject_movement
Upgraded typehints (#2429) * Future Annotations * Delete template_twitter_post.py * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Apply suggestions from code review * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed broken RTD Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
https://github.com/ManimCommunity/manim.git
def stop_submobject_movement(self) -> VectorField:
        self.remove_updater(self.submob_movement_updater)
        self.submob_movement_updater = None
        return self
23
vector_field.py
Python
manim/mobject/vector_field.py
daf23c9d1031b12d9c119b8f6b7e60727d7f9242
manim
1
107,875
14
11
3
57
6
0
14
35
_extend_upper
FIX: Handle inverted colorbar axes with extensions This fixes the colorbar extensions to use the proper color when the long axis is inverted.
https://github.com/matplotlib/matplotlib.git
def _extend_upper(self):
        minmax = "min" if self._long_axis().get_inverted() else "max"
        return self.extend in ('both', minmax)
31
colorbar.py
Python
lib/matplotlib/colorbar.py
ec374f5148631e4d392ed7e6d4c454d163a62f21
matplotlib
2
260,354
24
10
11
135
14
0
34
119
fit
MAINT Use _validate_params in SparsePCA and MiniBatchSparsePCA (#23710) Co-authored-by: Guillaume Lemaitre <[email protected]> Co-authored-by: jeremiedbb <[email protected]>
https://github.com/scikit-learn/scikit-learn.git
def fit(self, X, y=None):
        self._validate_params()

        random_state = check_random_state(self.random_state)
        X = self._validate_data(X)

        self.mean_ = X.mean(axis=0)
        X = X - self.mean_

        if self.n_components is None:
            n_components = X.shape[1]
        else:
            n_components = self.n_components

        return self._fit(X, n_components, random_state)
85
_sparse_pca.py
Python
sklearn/decomposition/_sparse_pca.py
db6123fe40400828918037f3fae949bfcc4d9d05
scikit-learn
2
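A plain usage sketch of the estimator whose fit() is shown above (public scikit-learn API; the random data is only illustrative):

import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.RandomState(0)
X = rng.randn(30, 5)
spca = SparsePCA(n_components=2, random_state=0).fit(X)
print(spca.components_.shape)  # (2, 5): n_components x n_features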
86,890
32
14
17
168
24
0
39
206
get_repo
feat(github): Log Github integration errors (#39993) We are flying blind without this. This helps the debugging of issues.
https://github.com/getsentry/sentry.git
def get_repo(self, integration, organization, event):
        try:
            project_id = event["project"]["id"]
        except KeyError:
            logger.info(
                "gitlab.webhook.missing-projectid", extra={"integration_id": integration.id}
            )
            logger.exception("Missing project ID.")
            raise Http404()

        external_id = "{}:{}".format(integration.metadata["instance"], project_id)
        try:
            repo = Repository.objects.get(
                organization_id=organization.id, provider=PROVIDER_NAME, external_id=external_id
            )
        except Repository.DoesNotExist:
            return None
        return repo
100
webhooks.py
Python
src/sentry/integrations/gitlab/webhooks.py
746c20250f419a227bed0d174791e9c9b75daa13
sentry
3
93,332
60
16
19
186
27
1
62
293
handle_subscription_metrics_logger
refs(metric_alerts): Consolidate `QueryDatasets` and `Dataset` (#36894) This refactor pr removes `QueryDatasets` and just uses `Dataset` everywhere. `QueryDatasets` existed before `Dataset`, but `Dataset` is now more widely used and is more up to date. The values here are the same, `Dataset` just supports a few more datasets. We already make sure that only datasets that are valid for alerts can be passed to the alert rules api, so this won't allow people to attempt to create alerts on datasets that don't support them.
https://github.com/getsentry/sentry.git
def handle_subscription_metrics_logger(subscription_update, subscription):
    from sentry.incidents.subscription_processor import SubscriptionProcessor

    try:
        if subscription.snuba_query.dataset == Dataset.Metrics.value:
            processor = SubscriptionProcessor(subscription)
            # XXX: Temporary hack so that we can extract these values without raising an exception
            processor.reset_trigger_counts = lambda *arg, **kwargs: None
            aggregation_value = processor.get_aggregation_value(subscription_update)

            logger.info(
                "handle_subscription_metrics_logger.message",
                extra={
                    "subscription_id": subscription.id,
                    "dataset": subscription.snuba_query.dataset,
                    "snuba_subscription_id": subscription.subscription_id,
                    "result": subscription_update,
                    "aggregation_value": aggregation_value,
                },
            )
    except Exception:
        logger.exception("Failed to log subscription results")


@register_subscriber(INCIDENTS_SNUBA_SUBSCRIPTION_TYPE)
@register_subscriber(INCIDENTS_SNUBA_SUBSCRIPTION_TYPE)
106
tasks.py
Python
src/sentry/incidents/tasks.py
e1482001662b446c7c2be7c9daa19cba562c615c
sentry
3
186,683
26
9
16
130
14
0
36
122
_set_locations
Add typing to certbot.apache (#9071) * Add typing to certbot.apache Co-authored-by: Adrien Ferrand <[email protected]>
https://github.com/certbot/certbot.git
def _set_locations(self) -> Dict[str, str]:
        default: str = self.loc["root"]

        temp: str = os.path.join(self.root, "ports.conf")
        if os.path.isfile(temp):
            listen = temp
            name = temp
        else:
            listen = default
            name = default

        return {"default": default, "listen": listen, "name": name}
77
parser.py
Python
certbot-apache/certbot_apache/_internal/parser.py
7d9e9a49005de7961e84d2a7c608db57dbab3046
certbot
2
127,552
19
11
20
77
12
0
22
37
test_placement_group_parent
Migrate the deprecated placement_group option to PlacementGroupSchedulingStrategy (#28437) placement_group option is deprecated, use PlacementGroupSchedulingStrategy instead.
https://github.com/ray-project/ray.git
def test_placement_group_parent(ray_4_node_4_cpu, placement_group_capture_child_tasks):
    num_workers = 2
    bundle = {"CPU": 1}
    bundles = [bundle.copy() for _ in range(num_workers + 1)]
    placement_group = ray.util.placement_group(bundles)
109
test_backend.py
Python
python/ray/train/tests/test_backend.py
57cdbb1769a9c32972ba0ec9e7e857eeea961869
ray
4
314,093
10
10
5
47
4
0
10
38
async_will_remove_from_hass
Fix cover, light, select, sensor, switch type hints in zha (#73770) * Fix zha sensor type hints * Fix zha entity type hints * Fix switch type hints * Fix light type hints * Fix cover type hints * Fix select type hints
https://github.com/home-assistant/core.git
async def async_will_remove_from_hass(self) -> None:
        assert self._cancel_refresh_handle
        self._cancel_refresh_handle()
        await super().async_will_remove_from_hass()
25
light.py
Python
homeassistant/components/zha/light.py
243905ae3e10f21c9bc8cbde565532e1b7b9112f
core
1
48,202
49
12
22
330
42
0
65
227
test_find_executable_task_instances_negative_open_pool_slots
Pools with negative open slots should not block other pools (#23143)
https://github.com/apache/airflow.git
def test_find_executable_task_instances_negative_open_pool_slots(self, dag_maker):
        set_default_pool_slots(0)

        self.scheduler_job = SchedulerJob(subdir=os.devnull)
        session = settings.Session()

        pool1 = Pool(pool='pool1', slots=1)
        pool2 = Pool(pool='pool2', slots=1)
        session.add(pool1)
        session.add(pool2)

        dag_id = 'SchedulerJobTest.test_find_executable_task_instances_negative_open_pool_slots'
        with dag_maker(dag_id=dag_id):
            op1 = EmptyOperator(task_id='op1', pool='pool1')
            op2 = EmptyOperator(task_id='op2', pool='pool2', pool_slots=2)

        dr1 = dag_maker.create_dagrun(run_type=DagRunType.SCHEDULED)

        ti1 = dr1.get_task_instance(op1.task_id, session)
        ti2 = dr1.get_task_instance(op2.task_id, session)
        ti1.state = State.SCHEDULED
        ti2.state = State.RUNNING
        session.flush()

        res = self.scheduler_job._executable_task_instances_to_queued(max_tis=1, session=session)
        assert 1 == len(res)
        assert res[0].key == ti1.key

        session.rollback()
200
test_scheduler_job.py
Python
tests/jobs/test_scheduler_job.py
7132be2f11db24161940f57613874b4af86369c7
airflow
1
258,549
9
11
3
45
7
0
9
34
staged_predict_proba
DOC Fix incorrect heading underline length in docstrings (#22278)
https://github.com/scikit-learn/scikit-learn.git
def staged_predict_proba(self, X):
        for raw_predictions in self._staged_raw_predict(X):
            yield self._loss.predict_proba(raw_predictions)
27
gradient_boosting.py
Python
sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py
24106c2149683efeb642c8c1317152d7fe5be162
scikit-learn
2
42,746
65
17
28
235
19
0
84
334
provide_gcp_credential_file_as_context
Ensure @contextmanager decorates generator func (#23103)
https://github.com/apache/airflow.git
def provide_gcp_credential_file_as_context(self) -> Generator[Optional[str], None, None]:
        key_path: Optional[str] = self._get_field('key_path', None)
        keyfile_dict: Optional[str] = self._get_field('keyfile_dict', None)
        if key_path and keyfile_dict:
            raise AirflowException(
                "The `keyfile_dict` and `key_path` fields are mutually exclusive. "
                "Please provide only one value."
            )
        elif key_path:
            if key_path.endswith('.p12'):
                raise AirflowException('Legacy P12 key file are not supported, use a JSON key file.')
            with patch_environ({CREDENTIALS: key_path}):
                yield key_path
        elif keyfile_dict:
            with tempfile.NamedTemporaryFile(mode='w+t') as conf_file:
                conf_file.write(keyfile_dict)
                conf_file.flush()
                with patch_environ({CREDENTIALS: conf_file.name}):
                    yield conf_file.name
        else:
            # We will use the default service account credentials.
            yield None
133
base_google.py
Python
airflow/providers/google/common/hooks/base_google.py
e58985598f202395098e15b686aec33645a906ff
airflow
6
287,994
94
16
114
985
22
1
246
1,907
test_migration_1_1_to_1_4
Add serial_number to device registry entries (#77713)
https://github.com/home-assistant/core.git
async def test_migration_1_1_to_1_4(hass, hass_storage): hass_storage[device_registry.STORAGE_KEY] = { "version": 1, "minor_version": 1, "data": { "devices": [ { "config_entries": ["1234"], "connections": [["Zigbee", "01.23.45.67.89"]], "entry_type": "service", "id": "abcdefghijklm", "identifiers": [["serial", "12:34:56:AB:CD:EF"]], "manufacturer": "manufacturer", "model": "model", "name": "name", "sw_version": "version", }, # Invalid entry type { "config_entries": [None], "connections": [], "entry_type": "INVALID_VALUE", "id": "invalid-entry-type", "identifiers": [["serial", "mock-id-invalid-entry"]], "manufacturer": None, "model": None, "name": None, "sw_version": None, }, ], "deleted_devices": [ { "config_entries": ["123456"], "connections": [], "entry_type": "service", "id": "deletedid", "identifiers": [["serial", "12:34:56:AB:CD:FF"]], "manufacturer": "manufacturer", "model": "model", "name": "name", "sw_version": "version", } ], }, } await device_registry.async_load(hass) registry = device_registry.async_get(hass) # Test data was loaded entry = registry.async_get_or_create( config_entry_id="1234", connections={("Zigbee", "01.23.45.67.89")}, identifiers={("serial", "12:34:56:AB:CD:EF")}, ) assert entry.id == "abcdefghijklm" # Update to trigger a store entry = registry.async_get_or_create( config_entry_id="1234", connections={("Zigbee", "01.23.45.67.89")}, identifiers={("serial", "12:34:56:AB:CD:EF")}, sw_version="new_version", ) assert entry.id == "abcdefghijklm" # Check we store migrated data await flush_store(registry._store) assert hass_storage[device_registry.STORAGE_KEY] == { "version": device_registry.STORAGE_VERSION_MAJOR, "minor_version": device_registry.STORAGE_VERSION_MINOR, "key": device_registry.STORAGE_KEY, "data": { "devices": [ { "area_id": None, "config_entries": ["1234"], "configuration_url": None, "connections": [["Zigbee", "01.23.45.67.89"]], "disabled_by": None, "entry_type": "service", "hw_version": None, "id": "abcdefghijklm", "identifiers": [["serial", "12:34:56:AB:CD:EF"]], "manufacturer": "manufacturer", "model": "model", "name": "name", "name_by_user": None, "serial_number": None, "sw_version": "new_version", "via_device_id": None, }, { "area_id": None, "config_entries": [None], "configuration_url": None, "connections": [], "disabled_by": None, "entry_type": None, "hw_version": None, "id": "invalid-entry-type", "identifiers": [["serial", "mock-id-invalid-entry"]], "manufacturer": None, "model": None, "name_by_user": None, "name": None, "serial_number": None, "sw_version": None, "via_device_id": None, }, ], "deleted_devices": [ { "config_entries": ["123456"], "connections": [], "id": "deletedid", "identifiers": [["serial", "12:34:56:AB:CD:FF"]], "orphaned_timestamp": None, } ], }, } @pytest.mark.parametrize("load_registries", [False])
@pytest.mark.parametrize("load_registries", [False])
519
test_device_registry.py
Python
tests/helpers/test_device_registry.py
cba3b6ad944408b9ffd906f4da5e5f5fd615b174
core
1
248,057
67
12
21
250
18
0
97
293
test_thread_edit_latest_event
Misc. clean-ups to the relations code (#12519) * Corrects some typos / copy & paste errors in tests. * Clarifies docstrings. * Removes an unnecessary method.
https://github.com/matrix-org/synapse.git
def test_thread_edit_latest_event(self) -> None:
        # Create a thread and edit the last event.
        channel = self._send_relation(
            RelationTypes.THREAD,
            "m.room.message",
            content={"msgtype": "m.text", "body": "A threaded reply!"},
        )
        threaded_event_id = channel.json_body["event_id"]

        new_body = {"msgtype": "m.text", "body": "I've been edited!"}
        channel = self._send_relation(
            RelationTypes.REPLACE,
            "m.room.message",
            content={"msgtype": "m.text", "body": "foo", "m.new_content": new_body},
            parent_id=threaded_event_id,
        )

        # Fetch the thread root, to get the bundled aggregation for the thread.
        relations_dict = self._get_bundled_aggregations()

        # We expect that the edit message appears in the thread summary in the
        # unsigned relations section.
        self.assertIn(RelationTypes.THREAD, relations_dict)

        thread_summary = relations_dict[RelationTypes.THREAD]
        self.assertIn("latest_event", thread_summary)
        latest_event_in_thread = thread_summary["latest_event"]
        self.assertEqual(latest_event_in_thread["content"]["body"], "I've been edited!")
138
test_relations.py
Python
tests/rest/client/test_relations.py
185da8f0f2db8e4d502a904942cbd8a6840e27c8
synapse
1
8,573
51
14
11
217
23
0
57
117
_split
Add H&M fashion recommendation dataset (#2708) * allow individual file downloads from kaggle * pipe download_filenames to kaggle download fn * add dataset config for H&M Fashion Recommendations * add custom loader * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * use local backend instead of mock * add docstring for sample * fix titanic test * move negative_sample to ludwig.data * do not negative sample in loader Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
https://github.com/ludwig-ai/ludwig.git
def _split(df):
    splitter = get_splitter("datetime", column="year_month", probabilities=(0.7, 0.2, 0.1))

    if not isinstance(df, pd.DataFrame):
        df = df.compute()

    train_dfs, val_dfs, test_dfs = [], [], []
    for customer_id in df["customer_id"].unique():
        # Split per customer_id to ensure that interactions for a customer are across all splits
        train_df, val_df, test_df = splitter.split(df[df["customer_id"] == customer_id], backend=LocalBackend())
        train_dfs.append(train_df)
        val_dfs.append(val_df)
        test_dfs.append(test_df)

    return pd.concat(train_dfs), pd.concat(val_dfs), pd.concat(test_dfs)
141
hm_fashion_recommendations.py
Python
ludwig/datasets/loaders/hm_fashion_recommendations.py
abfdc05018cc4dec5a2fed20ad09e94f1749fca9
ludwig
3
132,860
40
14
18
185
20
0
67
256
_get_next_trial
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
https://github.com/ray-project/ray.git
def _get_next_trial(self):
        no_trials_unfinished = True
        no_trials_pending = True
        for trial in self._live_trials:
            if not trial.is_finished():
                no_trials_unfinished = False
            if trial.status == Trial.PENDING:
                no_trials_pending = False
            if not no_trials_unfinished and not no_trials_pending:
                break
        wait_for_trial = no_trials_unfinished and not self._search_alg.is_finished()
        # Only fetch a new trial if we have no pending trial
        if wait_for_trial or no_trials_pending:
            self._update_trial_queue(blocking=wait_for_trial)
        with warn_if_slow("choose_trial_to_run"):
            trial = self._scheduler_alg.choose_trial_to_run(self)
            if trial:
                logger.debug("Running trial {}".format(trial))
        return trial
107
trial_runner.py
Python
python/ray/tune/trial_runner.py
7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
ray
10
43,207
31
13
15
152
24
0
35
192
test__build_query
Don't rely on current ORM structure for db clean command (#23574) For command DB clean, by not relying on the ORM models, we will be able to use the command even when the metadatabase is not yet upgraded to the version of Airflow you have installed. Additionally we archive all rows before deletion.
https://github.com/apache/airflow.git
def test__build_query(self, table_name, date_add_kwargs, expected_to_delete, external_trigger):
        base_date = pendulum.DateTime(2022, 1, 1, tzinfo=pendulum.timezone('UTC'))
        create_tis(
            base_date=base_date,
            num_tis=10,
            external_trigger=external_trigger,
        )
        with create_session() as session:
            clean_before_date = base_date.add(**date_add_kwargs)
            query = _build_query(
                **config_dict[table_name].__dict__,
                clean_before_timestamp=clean_before_date,
                session=session,
            )
            assert len(query.all()) == expected_to_delete
98
test_db_cleanup.py
Python
tests/utils/test_db_cleanup.py
95bd6b71cc9f5da377e272707f7b68000d980939
airflow
1
160,128
32
11
10
131
18
0
35
97
test_build_dir
TST: Initialize f2py2e tests of the F2PY CLI (#20668) Increases F2PY coverage by around 15 percent. For the CLI itself it covers the major features (around 70 percent), with the exception of mostly numpy.distutils stuff. More importantly, sets the groundwork for #20056, in that passing the same testsuite should indicate feature parity.
https://github.com/numpy/numpy.git
def test_build_dir(capfd, hello_world_f90, monkeypatch):
    ipath = Path(hello_world_f90)
    mname = "blah"
    odir = "tttmp"
    monkeypatch.setattr(sys, "argv",
                        f'f2py -m {mname} {ipath} --build-dir {odir}'.split())

    with util.switchdir(ipath.parent):
        f2pycli()
        out, _ = capfd.readouterr()
        assert f"Wrote C/API module \"{mname}\"" in out
64
test_f2py2e.py
Python
numpy/f2py/tests/test_f2py2e.py
729ad4f92420231e2a7009b3223c6c7620b8b808
numpy
1
298,212
29
11
14
127
19
0
32
115
help_test_reload_with_config
Do not depend MQTT CI tests on debug logs (#84783) * Do not depend MQTT CI tests on debug logs * Leave Clean up expire as debug message
https://github.com/home-assistant/core.git
async def help_test_reload_with_config(hass, caplog, tmp_path, config):
    new_yaml_config_file = tmp_path / "configuration.yaml"
    new_yaml_config = yaml.dump(config)
    new_yaml_config_file.write_text(new_yaml_config)
    assert new_yaml_config_file.read_text() == new_yaml_config

    with patch.object(hass_config, "YAML_CONFIG_FILE", new_yaml_config_file):
        await hass.services.async_call(
            "mqtt",
            SERVICE_RELOAD,
            {},
            blocking=True,
        )
        await hass.async_block_till_done()
82
test_common.py
Python
tests/components/mqtt/test_common.py
ee66ffc8deaa6d383becc60c0418f63a7cfa4dc9
core
1
171,851
4
7
46
25
4
0
4
18
get_loc
DEPR: Remove method and tolerance in Index.get_loc, bump xarray (#49630) * DEPR: Remove method and tolerance in Index.get_loc * note xarray bump * Fix tests * Fix refactor in period * Lighter parameterization * xfail xarray test * Just use get_indexer
https://github.com/pandas-dev/pandas.git
def get_loc(self, key):
        self._check_indexing_error(key)
308
multi.py
Python
pandas/core/indexes/multi.py
bc987e708b9856f5d5c8cf3096e1e2bcf23e1121
pandas
14
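A usage sketch of the public behaviour after the removal described in the commit message above: get_loc() now takes only the label, without method/tolerance keywords.

import pandas as pd

idx = pd.Index(["a", "b", "c"])
assert idx.get_loc("b") == 1

mi = pd.MultiIndex.from_tuples([("x", 1), ("x", 2), ("y", 1)])
assert mi.get_loc(("x", 2)) == 1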
192,296
13
10
4
63
14
0
14
42
test_probe_video_from_memory
Improve test_video_reader (#5498) * Improve test_video_reader * Fix linter error
https://github.com/pytorch/vision.git
def test_probe_video_from_memory(self, test_video, config):
        _, video_tensor = _get_video_tensor(VIDEO_DIR, test_video)
        probe_result = torch.ops.video_reader.probe_video_from_memory(video_tensor)
        self.check_probe_result(probe_result, config)
40
test_video_reader.py
Python
test/test_video_reader.py
c50d48845f7b1ca86d6a3b7f37a59be0ae11e36b
vision
1
42,510
51
11
15
154
15
0
73
224
__lazymodule_import
Prevent LazyLoader from modifying nltk.__dict__ Allows pytest --doctest-modules nltk to be executed
https://github.com/nltk/nltk.git
def __lazymodule_import(self):
        # Load and register module
        local_name = self.__lazymodule_name  # e.g. "toolbox"
        full_name = self.__name__  # e.g. "nltk.toolbox"
        if self.__lazymodule_loaded:
            return self.__lazymodule_locals[local_name]
        if _debug:
            print("LazyModule: Loading module %r" % full_name)
        self.__lazymodule_locals[local_name] = module = __import__(
            full_name, self.__lazymodule_locals, self.__lazymodule_globals, "*"
        )

        # Fill namespace with all symbols from original module to
        # provide faster access.
        self.__dict__.update(module.__dict__)

        # Set import flag
        self.__dict__["__lazymodule_loaded"] = 1

        if _debug:
            print("LazyModule: Module %r loaded" % full_name)
        return module
89
lazyimport.py
Python
nltk/lazyimport.py
0fbbff998ed4b91b2f640a5193161642d98898cd
nltk
4
60,450
41
16
13
193
19
0
67
163
print_table
Balanced joint maximum mean discrepancy for deep transfer learning
https://github.com/jindongwang/transferlearning.git
def print_table(table, max_width):
    max_widths = [max_width] * len(table[0])
    column_widths = [max(printed_len(row[j]) + 1 for row in table)
                     for j in range(len(table[0]))]
    column_widths = [min(w, max_w) for w, max_w in zip(column_widths, max_widths)]

    for row in table:
        row_str = ''
        right_col = 0
        for cell, width in zip(row, column_widths):
            right_col += width
            row_str += cell + ' '
            row_str += ' ' * max(right_col - printed_len(row_str), 0)
        print(row_str)
123
summarize.py
Python
code/deep/BJMMD/caffe/tools/extra/summarize.py
cc4d0564756ca067516f71718a3d135996525909
transferlearning
6
303,393
12
8
6
59
5
0
17
67
vehicle_name
Refactor volvooncall to (mostly) use DataUpdateCoordinator (#75885) Co-authored-by: Martin Hjelmare <[email protected]>
https://github.com/home-assistant/core.git
def vehicle_name(self, vehicle):
        if vehicle.registration_number and vehicle.registration_number != "UNKNOWN":
            return vehicle.registration_number
        if vehicle.vin:
            return vehicle.vin
        return "Volvo"
34
__init__.py
Python
homeassistant/components/volvooncall/__init__.py
b5a6ee3c567aa50633ef47d342af685fb75e5219
core
4
149,759
18
12
7
126
12
0
22
79
fill_predictions
add freqao backend machinery, user interface, documentation
https://github.com/freqtrade/freqtrade.git
def fill_predictions(self, len_dataframe):
        filler = np.zeros(len_dataframe - len(self.predictions))  # startup_candle_count
        self.predictions = np.append(filler, self.predictions)
        self.do_predict = np.append(filler, self.do_predict)
        self.target_mean = np.append(filler, self.target_mean)
        self.target_std = np.append(filler, self.target_std)

        return
80
data_handler.py
Python
freqtrade/freqai/data_handler.py
fc837c4daa27a18ff0e86128f4d52089b88fa5fb
freqtrade
1
89,350
20
12
10
84
12
0
20
126
test_get_dynamic_sampling_default_biases
fix(dyn-sampling): Backend code clean up (#42001) We are consolidating server-side-sampling and dynamic-sampling flags into only dynamic-sampling. The flag is being controlled by plan
https://github.com/getsentry/sentry.git
def test_get_dynamic_sampling_default_biases(self):
        with Feature(
            {
                self.new_ds_flag: True,
            }
        ):
            response = self.get_success_response(
                self.organization.slug, self.project.slug, method="get"
            )
            assert response.data["dynamicSamplingBiases"] == DEFAULT_BIASES
50
test_project_details.py
Python
tests/sentry/api/endpoints/test_project_details.py
6fc6106b6a57149a5bae3c0f4677349cfbae1155
sentry
1
280,789
5
7
2
27
5
0
5
19
save
Add serialization support to FeatureSpace. PiperOrigin-RevId: 496914744
https://github.com/keras-team/keras.git
def save(self, filepath):
        saving_lib.save_model(self, filepath)
16
feature_space.py
Python
keras/utils/feature_space.py
799f70761eeb8155dc25c6afce8c1d22b38367b0
keras
1
100,441
56
15
24
294
30
0
77
360
detect_rnet
Update all Keras Imports to be conditional (#1214) * Remove custom keras importer * first round keras imports fix * launcher.py: Remove KerasFinder references * 2nd round keras imports update (lib and extract) * 3rd round keras imports update (train) * remove KerasFinder from tests * 4th round keras imports update (tests)
https://github.com/deepfakes/faceswap.git
def detect_rnet(self, images, rectangle_batch, height, width):
        ret = []
        # TODO: batching
        for idx, rectangles in enumerate(rectangle_batch):
            if not rectangles:
                ret.append([])
                continue
            image = images[idx]
            crop_number = 0
            predict_24_batch = []
            for rect in rectangles:
                crop_img = image[int(rect[1]):int(rect[3]), int(rect[0]):int(rect[2])]
                scale_img = cv2.resize(crop_img, (24, 24))
                predict_24_batch.append(scale_img)
                crop_number += 1

            predict_24_batch = np.array(predict_24_batch)
            output = self.rnet.predict(predict_24_batch, batch_size=128)

            cls_prob = output[0]
            cls_prob = np.array(cls_prob)
            roi_prob = output[1]
            roi_prob = np.array(roi_prob)
            ret.append(filter_face_24net(
                cls_prob, roi_prob, rectangles, width, height, self.threshold[1]
            ))
        return ret
193
mtcnn.py
Python
plugins/extract/detect/mtcnn.py
aa39234538a8f83e6aa2b60b8275a570e8876ac2
faceswap
4
81,964
50
16
11
137
13
0
60
186
extract_data
Register pages for the Instance peers and install bundle endpoints This includes exposing a new interface for Page objects, Page.bytes, to return the full bytestring contents of the response.
https://github.com/ansible/awx.git
def extract_data(self, response):
        try:
            data = response.json()
        except ValueError as e:
            # If there was no json to parse
            data = {}
            if response.text or response.status_code not in (200, 202, 204):
                text = response.text
                if len(text) > 1024:
                    text = text[:1024] + '... <<< Truncated >>> ...'
                log.debug("Unable to parse JSON response ({0.status_code}): {1} - '{2}'".format(response, e, text))

        return data
83
page.py
Python
awxkit/awxkit/api/pages/page.py
68a44529b6b77d2d43d7099b654560bfd8bbf518
awx
5
106,933
18
12
7
73
11
0
18
37
_isolated_tk_test
TST: Remove numpy cpu disabling from some subprocess tests This removes the NPY_DISABLE_CPU_FEATURES flag from the sphinx and tk tests as they emit warnings on CI which leads to failure from the subprocess. These don't need to be disabled on these tests, so remove them from the environment variables that are passed in.
https://github.com/matplotlib/matplotlib.git
def _isolated_tk_test(success_count, func=None):
    if func is None:
        return functools.partial(_isolated_tk_test, success_count)

    # Remove decorators.
    source = re.search(r"(?ms)^def .*", inspect.getsource(func)).group(0)
56
test_backend_tk.py
Python
lib/matplotlib/tests/test_backend_tk.py
d6f68757c234d33f341adc9e3bd65053094cf748
matplotlib
2
70,901
21
9
7
103
11
0
28
91
test_is_html_renderer
Improve asserts in wagtail. These improvements were based on flake8-assertive, which compiled an extensive list of patterns to replace with more precise assertions. This should make the error messages better in case of failures.
https://github.com/wagtail/wagtail.git
def test_is_html_renderer(self):
        # TableBlock with default table_options
        block1 = TableBlock()
        self.assertIs(block1.is_html_renderer(), False)

        # TableBlock with altered table_options
        new_options = self.default_table_options.copy()
        new_options['renderer'] = 'html'
        block2 = TableBlock(table_options=new_options)
        self.assertIs(block2.is_html_renderer(), True)
58
tests.py
Python
wagtail/contrib/table_block/tests.py
a0ef2477a68f2deb83cdc9a0bb709cb644be028b
wagtail
1
268,996
11
10
3
58
9
0
12
15
top_k_categorical_matches
making util methods for all the categorical accuracies
https://github.com/keras-team/keras.git
def top_k_categorical_matches(y_true, y_pred, k=5):
    y_true = tf.math.argmax(y_true, axis=-1)
    return sparse_top_k_categorical_matches(y_true, y_pred, k=k)
38
metrics_utils.py
Python
keras/utils/metrics_utils.py
33f395aeceaad95910ce6f0931621bb82d3e967c
keras
1
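A hedged usage sketch of the utility above (assumes a Keras build that ships keras.utils.metrics_utils with this helper; the tensors and the expected output are only illustrative):

import tensorflow as tf
from keras.utils import metrics_utils

y_true = tf.constant([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
y_pred = tf.constant([[0.1, 0.3, 0.6], [0.3, 0.2, 0.5]])
# One-hot labels are converted to class indices, then checked against the top-k predictions.
matches = metrics_utils.top_k_categorical_matches(y_true, y_pred, k=2)
# Expected per-sample match indicators: class 2 is in the first sample's top-2 (1.0),
# class 1 is not in the second sample's top-2 (0.0).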
276,480
18
13
8
98
9
0
19
67
basic_sequential
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def basic_sequential():
    model = keras.Sequential(
        [
            keras.layers.Dense(3, activation="relu", input_shape=(3,)),
            keras.layers.Dense(2, activation="softmax"),
        ]
    )
    return ModelFn(model, (None, 3), (None, 2))
64
model_architectures.py
Python
keras/tests/model_architectures.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
1
267,989
15
12
3
74
10
0
17
31
env_dict
ansible-test - Use more native type hints. (#78435) * ansible-test - Use more native type hints. Simple search and replace to switch from comments to native type hints for return types of functions with no arguments. * ansible-test - Use more native type hints. Conversion of simple single-line function annotation type comments to native type hints. * ansible-test - Use more native type hints. Conversion of single-line function annotation type comments with default values to native type hints. * ansible-test - Use more native type hints. Manual conversion of type annotation comments for functions which have pylint directives.
https://github.com/ansible/ansible.git
def env_dict(self) -> t.Dict[str, str]:
        return dict((item[0], item[1]) for item in [e.split('=', 1) for e in self.env])
49
docker_util.py
Python
test/lib/ansible_test/_internal/docker_util.py
3eb0485dd92c88cc92152d3656d94492db44b183
ansible
3