Dataset schema (one row per function; for int64 columns the two numbers are the minimum and maximum values in the data, for string columns the shortest and longest lengths):

column          dtype    min   max
id              int64    20    338k
vocab_size      int64    2     671
ast_levels      int64    4     32
nloc            int64    1     451
n_ast_nodes     int64    12    5.6k
n_identifiers   int64    1     186
n_ast_errors    int64    0     10
n_words         int64    2     2.17k
n_whitespaces   int64    2     13.8k
fun_name        string   2     73
commit_message  string   51    15.3k
url             string   31    59
code            string   51    31k
ast_errors      string   0     1.46k
token_counts    int64    6     3.32k
file_name       string   5     56
language        string   1 distinct value (Python)
path            string   7     134
commit_id       string   40    40
repo            string   3     28
complexity      int64    1     153
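Read row-wise, the schema describes one function per record, keyed by the commit that touched it. As a hedged illustration, assuming this preview comes from a Hugging Face `datasets`-compatible dump (the dataset identifier below is a placeholder, not the real one), the integer columns make the rows easy to slice:

from datasets import load_dataset

# Placeholder identifier: substitute the dataset's real path.
ds = load_dataset("user/python-functions-by-commit", split="train")

# Keep small, cleanly parsed functions: no AST errors, low cyclomatic
# complexity, and few source lines, per the columns documented above.
subset = ds.filter(
    lambda row: row["n_ast_errors"] == 0
    and row["complexity"] <= 5
    and row["nloc"] <= 20
)
print(subset[0]["repo"], subset[0]["fun_name"])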
Sample rows:

id: 130,603
vocab_size: 14
ast_levels: 9
nloc: 8
n_ast_nodes: 55
n_identifiers: 10
n_ast_errors: 0
n_words: 14
n_whitespaces: 36
fun_name: get_blocks_with_metadata
commit_message:
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
url: https://github.com/ray-project/ray.git
code:
def get_blocks_with_metadata(self) -> List[Tuple[ObjectRef[Block], BlockMetadata]]:
    self.get_blocks()  # Force bulk evaluation in LazyBlockList.
    return list(self.iter_blocks_with_metadata())
token_counts: 33
file_name: block_list.py
language: Python
path: python/ray/data/impl/block_list.py
commit_id: 7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
repo: ray
complexity: 1
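The metric columns track what per-function static analyzers report; reading them as lizard-style measurements is an assumption, not something the preview states, but the first row's complexity can be cross-checked against its code field:

import lizard  # pip install lizard

# The return annotation is dropped here for brevity.
snippet = (
    "def get_blocks_with_metadata(self):\n"
    "    self.get_blocks()  # Force bulk evaluation in LazyBlockList.\n"
    "    return list(self.iter_blocks_with_metadata())\n"
)
analysis = lizard.analyze_file.analyze_source_code("block_list.py", snippet)
fn = analysis.function_list[0]
print(fn.cyclomatic_complexity)  # 1, matching the row's complexity
# nloc=8 in the row versus three visible lines suggests the original
# function carried a docstring that the stored code field strips.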

id: 163,783
vocab_size: 7
ast_levels: 7
nloc: 5
n_ast_nodes: 26
n_identifiers: 5
n_ast_errors: 0
n_words: 7
n_whitespaces: 21
fun_name: _standardize_dtype
commit_message:
REF: share IntegerArray/FloatingArray coerce_to_array (#45596)
url: https://github.com/pandas-dev/pandas.git
code:
def _standardize_dtype(cls, dtype) -> NumericDtype:
    raise AbstractMethodError(cls)
token_counts: 15
file_name: numeric.py
language: Python
path: pandas/core/arrays/numeric.py
commit_id: fbc2ab6da69bb2ba18629aa20f1f2fe5fc85d3df
repo: pandas
complexity: 1

id: 166,972
vocab_size: 42
ast_levels: 12
nloc: 15
n_ast_nodes: 198
n_identifiers: 18
n_ast_errors: 1
n_words: 64
n_whitespaces: 146
fun_name: array_likes
commit_message:
DOC: Added docstrings to fixtures defined in array module (#47211)
url: https://github.com/pandas-dev/pandas.git
code:
def array_likes(request):
    # GH#24539 recognize e.g xarray, dask, ...
    arr = np.array([1, 2, 3], dtype=np.int64)
    name = request.param
    if name == "memoryview":
        data = memoryview(arr)
    elif name == "array":
        # stdlib array
        import array
        data = array.array("i", arr)
    elif name == "dask":
        import dask.array
        data = dask.array.array(arr)
    elif name == "xarray":
        import xarray as xr
        data = xr.DataArray(arr)
    return arr, data


@pytest.mark.parametrize("dtype", ["M8[ns]", "m8[ns]"])
ast_errors: @pytest.mark.parametrize("dtype", ["M8[ns]", "m8[ns]"])
token_counts: 99
file_name: test_datetimelike.py
language: Python
path: pandas/tests/arrays/test_datetimelike.py
commit_id: 89be1f053b695c4ce1c0569f737caf3f03c12128
repo: pandas
complexity: 5

id: 189,387
vocab_size: 2
ast_levels: 6
nloc: 8
n_ast_nodes: 13
n_identifiers: 2
n_ast_errors: 0
n_words: 2
n_whitespaces: 9
fun_name: test_npm_login_always_auth_error_ignored
commit_message:
Ignore errors setting always-auth in npm config set The release of NPM 9 has dropped the support of deprecated or unsupported parameters in the `npm config set` command. This includes the `always-auth` parameter. Since `always-auth=true` is being set internally via the `aws codeartifact login --tool npm` command, this results in users of NPM 9 getting an exception. this blocks users to use login command entirely with NPM 9. This flag is ignored by NPM 7 and 8. Since the NPM login command is also used for yarn users, it was not previously removed. This change is to catch and ignore an exception while setting `always-auth` flag for the `npm config set` command. Users with NPM < 9 will still set the parameter in their `.npmrc` file, while users with NPM >= 9 will not.
url: https://github.com/aws/aws-cli.git
code:
def test_npm_login_always_auth_error_ignored(self):
token_counts: 63
file_name: test_codeartifact_login.py
language: Python
path: tests/functional/codeartifact/test_codeartifact_login.py
commit_id: bc5de54af76aca8eda1666ff274b19b31f080fde
repo: aws-cli
complexity: 1

id: 153,230
vocab_size: 12
ast_levels: 11
nloc: 5
n_ast_nodes: 59
n_identifiers: 10
n_ast_errors: 0
n_words: 13
n_whitespaces: 56
fun_name: get_path
commit_message:
FIX-#4177: Support read_feather from pathlike objects. (#4179) Signed-off-by: mvashishtha <[email protected]>
url: https://github.com/modin-project/modin.git
code:
def get_path(cls, file_path):
    if isinstance(file_path, str) and S3_ADDRESS_REGEX.search(file_path):
        return file_path
    else:
        return os.path.abspath(file_path)
token_counts: 36
file_name: file_dispatcher.py
language: Python
path: modin/core/io/file_dispatcher.py
commit_id: c4e419b7b8ed52f010ac74ff5db9e2e66adae2dc
repo: modin
complexity: 3

id: 9,935
vocab_size: 10
ast_levels: 9
nloc: 4
n_ast_nodes: 55
n_identifiers: 10
n_ast_errors: 0
n_words: 12
n_whitespaces: 40
fun_name: Call
commit_message:
feat: star routing (#3900) * feat(proto): adjust proto for star routing (#3844) * feat(proto): adjust proto for star routing * feat(proto): generate proto files * feat(grpc): refactor grpclet interface (#3846) * feat: refactor connection pool for star routing (#3872) * feat(k8s): add more labels to k8s deployments * feat(network): refactor connection pool * feat(network): refactor k8s pool * feat: star routing graph gateway (#3877) * feat: star routing - refactor grpc data runtime (#3887) * feat(runtimes): refactor grpc dataruntime * fix(tests): adapt worker runtime tests * fix(import): fix import * feat(proto): enable sending multiple lists (#3891) * feat: star routing gateway (#3893) * feat: star routing gateway all protocols (#3897) * test: add streaming and prefetch tests (#3901) * feat(head): new head runtime for star routing (#3899) * feat(head): new head runtime * feat(head): new head runtime * style: fix overload and cli autocomplete * feat(network): improve proto comments Co-authored-by: Jina Dev Bot <[email protected]> * feat(worker): merge docs in worker runtime (#3905) * feat(worker): merge docs in worker runtime * feat(tests): assert after clean up * feat(tests): star routing runtime integration tests (#3908) * fix(tests): fix integration tests * test: test runtimes fast slow request (#3910) * feat(zmq): purge zmq, zed, routing_table (#3915) * feat(zmq): purge zmq, zed, routing_table * style: fix overload and cli autocomplete * feat(zmq): adapt comment in dependency list * style: fix overload and cli autocomplete * fix(tests): fix type tests Co-authored-by: Jina Dev Bot <[email protected]> * test: add test gateway to worker connection (#3921) * feat(pea): adapt peas for star routing (#3918) * feat(pea): adapt peas for star routing * style: fix overload and cli autocomplete * feat(pea): add tests * feat(tests): add failing head pea test Co-authored-by: Jina Dev Bot <[email protected]> * feat(tests): integration tests for peas (#3923) * feat(tests): integration tests for peas * feat(pea): remove _inner_pea function * feat: star routing container pea (#3922) * test: rescue tests (#3942) * fix: fix streaming tests (#3945) * refactor: move docker run to run (#3948) * feat: star routing pods (#3940) * feat(pod): adapt pods for star routing * feat(pods): adapt basepod to star routing * feat(pod): merge pod and compound pod * feat(tests): fix tests * style: fix overload and cli autocomplete * feat(test): add container pea int test * feat(ci): remove more unnecessary tests * fix(tests): remove jinad runtime * feat(ci): remove latency tracking * fix(ci): fix ci def * fix(runtime): enable runtime to be exited * fix(tests): wrap runtime test in process * fix(runtimes): remove unused runtimes * feat(runtimes): improve cancel wait * fix(ci): build test pip again in ci * fix(tests): fix a test * fix(test): run async in its own process * feat(pod): include shard in activate msg * fix(pea): dont join * feat(pod): more debug out * feat(grpc): manage channels properly * feat(pods): remove exitfifo * feat(network): add simple send retry mechanism * fix(network): await pool close * fix(test): always close grpc server in worker * fix(tests): remove container pea from tests * fix(tests): reorder tests * fix(ci): split tests * fix(ci): allow alias setting * fix(test): skip a test * feat(pods): address comments Co-authored-by: Jina Dev Bot <[email protected]> * test: unblock skipped test (#3957) * feat: jinad pea (#3949) * feat: jinad pea * feat: jinad pea * test: remote peas * test: toplogy tests 
with jinad * ci: parallel jobs * feat(tests): add pod integration tests (#3958) * feat(tests): add pod integration tests * fix(tests): make tests less flaky * fix(test): fix test * test(pea): remote pea topologies (#3961) * test(pea): remote pea simple topology * test: remote pea topologies * refactor: refactor streamer result handling (#3960) * feat(k8s): adapt K8s Pod for StarRouting (#3964) * test: optimize k8s test * test: increase timeout and use different namespace * test: optimize k8s test * test: build and load image when needed * test: refactor k8s test * test: fix image name error * test: fix k8s image load * test: fix typoe port expose * test: update tests in connection pool and handling * test: remove unused fixture * test: parameterize docker images * test: parameterize docker images * test: parameterize docker images * feat(k8s): adapt k8s pod for star routing * fix(k8s): dont overwrite add/remove function in pool * fix(k8s): some fixes * fix(k8s): some more fixes * fix(k8s): linting * fix(tests): fix tests * fix(tests): fix k8s unit tests * feat(k8s): complete k8s integration test * feat(k8s): finish k8s tests * feat(k8s): fix test * fix(tests): fix test with no name * feat(k8s): unify create/replace interface * feat(k8s): extract k8s port constants * fix(tests): fix tests * fix(tests): wait for runtime being ready in tests * feat(k8s): address comments Co-authored-by: bwanglzu <[email protected]> * feat(flow): adapt Flow for StarRouting (#3986) * feat(flow): add routes * feat(flow): adapt flow to star routing * style: fix overload and cli autocomplete * feat(flow): handle empty topologies * feat(k8s): allow k8s pool disabling * style: fix overload and cli autocomplete * fix(test): fix test with mock * fix(tests): fix more tests * feat(flow): clean up tests * style: fix overload and cli autocomplete * fix(tests): fix more tests * feat: add plot function (#3994) * fix(tests): avoid hanging tests * feat(flow): add type hinting * fix(test): fix duplicate exec name in test * fix(tests): fix more tests * fix(tests): enable jinad test again * fix(tests): random port fixture * fix(style): replace quotes Co-authored-by: Jina Dev Bot <[email protected]> Co-authored-by: Joan Fontanals <[email protected]> * feat(ci): bring back ci (#3997) * feat(ci): enable ci again * style: fix overload and cli autocomplete * feat(ci): add latency tracking * feat(ci): bring back some tests * fix(tests): remove invalid port test * feat(ci): disable daemon and distributed tests * fix(tests): fix entrypoint in hub test * fix(tests): wait for gateway to be ready * fix(test): fix more tests * feat(flow): do rolling update and scale sequentially * fix(tests): fix more tests * style: fix overload and cli autocomplete * feat: star routing hanging pods (#4011) * fix: try to handle hanging pods better * test: hanging pods test work * fix: fix topology graph problem * test: add unit test to graph * fix(tests): fix k8s tests * fix(test): fix k8s test * fix(test): fix k8s pool test * fix(test): fix k8s test * fix(test): fix k8s connection pool setting * fix(tests): make runtime test more reliable * fix(test): fix routes test * fix(tests): make rolling update test less flaky * feat(network): gurantee unique ports * feat(network): do round robin for shards * fix(ci): increase pytest timeout to 10 min Co-authored-by: Jina Dev Bot <[email protected]> Co-authored-by: Joan Fontanals <[email protected]> * fix(ci): fix ci file * feat(daemon): jinad pod for star routing * Revert "feat(daemon): jinad pod for star routing" 
This reverts commit ed9b37ac862af2e2e8d52df1ee51c0c331d76f92. * feat(daemon): remote jinad pod support (#4042) * feat(daemon): add pod tests for star routing * feat(daemon): add remote pod test * test(daemon): add remote pod arguments test * test(daemon): add async scale test * test(daemon): add rolling update test * test(daemon): fix host * feat(proto): remove message proto (#4051) * feat(proto): remove message proto * fix(tests): fix tests * fix(tests): fix some more tests * fix(tests): fix more tests * fix(tests): fix more tests * fix(tests): fix more tests * fix(tests): fix more tests * feat(proto): put docs back in data * fix(proto): clean up * feat(proto): clean up * fix(tests): skip latency tracking * fix(test): fix hub test * fix(tests): fix k8s test * fix(test): some test clean up * fix(style): clean up style issues * feat(proto): adjust for rebase * fix(tests): bring back latency tracking * fix(tests): fix merge accident * feat(proto): skip request serialization (#4074) * feat: add reduce to star routing (#4070) * feat: add reduce on shards to head runtime * test: add reduce integration tests with fixed order * feat: add reduce on needs * chore: get_docs_matrix_from_request becomes public * style: fix overload and cli autocomplete * docs: remove undeterministic results warning * fix: fix uses_after * test: assert correct num docs after reducing in test_external_pod * test: correct asserts after reduce in test_rolling_update * fix: no reduce if uses_after_address is set * fix: get_docs_from_request only if needed * fix: fix tests after merge * refactor: move reduce from data_request_handler to head * style: fix overload and cli autocomplete * chore: apply suggestions * fix: fix asserts * chore: minor test fix * chore: apply suggestions * test: remove flow tests with external executor (pea) * fix: fix test_expected_messages_routing * fix: fix test_func_joiner * test: adapt k8s test Co-authored-by: Jina Dev Bot <[email protected]> * fix(k8s): fix static pool config * fix: use custom protoc doc generator image (#4088) * fix: use custom protoc doc generator image * fix(docs): minor doc improvement * fix(docs): use custom image * fix(docs): copy docarray * fix: doc building local only * fix: timeout doc building * fix: use updated args when building ContainerPea * test: add container PeaFactory test * fix: force pea close on windows (#4098) * fix: dont reduce if uses exist (#4099) * fix: dont use reduce if uses exist * fix: adjust reduce tests * fix: adjust more reduce tests * fix: fix more tests * fix: adjust more tests * fix: ignore non jina resources (#4101) * feat(executor): enable async executors (#4102) * feat(daemon): daemon flow on star routing (#4096) * test(daemon): add remote flow test * feat(daemon): call scale in daemon * feat(daemon): remove tail args and identity * test(daemon): rename scalable executor * test(daemon): add a small delay in async test * feat(daemon): scale partial flow only * feat(daemon): call scale directly in partial flow store * test(daemon): use asyncio sleep * feat(daemon): enable flow level distributed tests * test(daemon): fix jinad env workspace config * test(daemon): fix pod test use new port rolling update * feat(daemon): enable distribuetd tests * test(daemon): remove duplicate tests and zed runtime test * test(daemon): fix stores unit test * feat(daemon): enable part of distributed tests * feat(daemon): enable part of distributed tests * test: correct test paths * test(daemon): add client test for remote flows * test(daemon): send a request 
with jina client * test(daemon): assert async generator * test(daemon): small interval between tests * test(daemon): add flow test for container runtime * test(daemon): add flow test for container runtime * test(daemon): fix executor name * test(daemon): fix executor name * test(daemon): use async client fetch result * test(daemon): finish container flow test * test(daemon): enable distributed in ci * test(daemon): enable distributed in ci * test(daemon): decare flows and pods * test(daemon): debug ci if else * test(daemon): debug ci if else * test(daemon): decare flows and pods * test(daemon): correct test paths * test(daemon): add small delay for async tests * fix: star routing fixes (#4100) * docs: update docs * fix: fix Request.__repr__ * docs: update flow remarks * docs: fix typo * test: add non_empty_fields test * chore: remove non_empty_fields test * feat: polling per endpoint (#4111) * feat(polling): polling per endpoint configurable * fix: adjust tests * feat(polling): extend documentation * style: fix overload and cli autocomplete * fix: clean up * fix: adjust more tests * fix: remove repeat from flaky test * fix: k8s test * feat(polling): address pr feedback * feat: improve docs Co-authored-by: Jina Dev Bot <[email protected]> * feat(grpc): support connect grpc server via ssl tunnel (#4092) * feat(grpc): support ssl grpc connect if port is 443 * fix(grpc): use https option instead of detect port automatically * chore: fix typo * fix: update jina/peapods/networking.py Co-authored-by: Joan Fontanals <[email protected]> * fix: update jina/peapods/networking.py Co-authored-by: Joan Fontanals <[email protected]> * fix: update jina/peapods/networking.py Co-authored-by: Joan Fontanals <[email protected]> * test(networking): add test for peapods networking * fix: address comments Co-authored-by: Joan Fontanals <[email protected]> * feat(polling): unify polling args (#4113) * fix: several issues for jinad pods (#4119) * fix: activate for jinad pods * fix: dont expose worker pod in partial daemon * fix: workspace setting * fix: containerized flows * fix: hub test * feat(daemon): remote peas on star routing (#4112) * test(daemon): fix request in peas * test(daemon): fix request in peas * test(daemon): fix sync async client test * test(daemon): enable remote peas test * test(daemon): replace send message to send request * test(daemon): declare pea tests in ci * test(daemon): use pea args fixture * test(daemon): head pea use default host * test(daemon): fix peas topologies * test(daemon): fix pseudo naming * test(daemon): use default host as host * test(daemon): fix executor path * test(daemon): add remote worker back * test(daemon): skip local remote remote topology * fix: jinad pea test setup * fix: jinad pea tests * fix: remove invalid assertion Co-authored-by: jacobowitz <[email protected]> * feat: enable daemon tests again (#4132) * feat: enable daemon tests again * fix: remove bogy empty script file * fix: more jinad test fixes * style: fix overload and cli autocomplete * fix: scale and ru in jinad * fix: fix more jinad tests Co-authored-by: Jina Dev Bot <[email protected]> * fix: fix flow test * fix: improve pea tests reliability (#4136) Co-authored-by: Joan Fontanals <[email protected]> Co-authored-by: Jina Dev Bot <[email protected]> Co-authored-by: Deepankar Mahapatro <[email protected]> Co-authored-by: bwanglzu <[email protected]> Co-authored-by: AlaeddineAbdessalem <[email protected]> Co-authored-by: Zhaofeng Miao <[email protected]>
url: https://github.com/jina-ai/jina.git
code:
def Call(self, request_iterator, context):
    context.set_code(grpc.StatusCode.UNIMPLEMENTED)
    context.set_details('Method not implemented!')
    raise NotImplementedError('Method not implemented!')
token_counts: 31
file_name: jina_pb2_grpc.py
language: Python
path: jina/proto/jina_pb2_grpc.py
commit_id: 933415bfa1f9eb89f935037014dfed816eb9815d
repo: jina
complexity: 1

id: 46,560
vocab_size: 26
ast_levels: 14
nloc: 7
n_ast_nodes: 122
n_identifiers: 13
n_ast_errors: 0
n_words: 28
n_whitespaces: 57
fun_name: pool_import
commit_message:
More explicit messages for pools and exceptions (#22569)
url: https://github.com/apache/airflow.git
code:
def pool_import(args):
    if not os.path.exists(args.file):
        raise SystemExit(f"Missing pools file {args.file}")
    pools, failed = pool_import_helper(args.file)
    if len(failed) > 0:
        raise SystemExit(f"Failed to update pool(s): {', '.join(failed)}")
    print(f"Uploaded {len(pools)} pool(s)")
token_counts: 54
file_name: pool_command.py
language: Python
path: airflow/cli/commands/pool_command.py
commit_id: 7418720ce173ca5d0c5f5197c168e43258af8cc3
repo: airflow
complexity: 3

id: 209,109
vocab_size: 19
ast_levels: 9
nloc: 2
n_ast_nodes: 44
n_identifiers: 8
n_ast_errors: 0
n_words: 20
n_whitespaces: 29
fun_name: calc_tcpao_traffic_key
commit_message:
Support TCP-MD5 and TCP-AO (#3358) Support TCP-MD5 and TCP-AO
url: https://github.com/secdev/scapy.git
code:
def calc_tcpao_traffic_key(p, alg, master_key, sisn, disn):
    # type: (Packet, TCPAOAlg, bytes, int, int) -> bytes
    return alg.kdf(master_key, build_context_from_packet(p, sisn, disn))
token_counts: 30
file_name: tcpao.py
language: Python
path: scapy/contrib/tcpao.py
commit_id: 20ac1d00389d0735e6d8cd1347f0a53f478144ba
repo: scapy
complexity: 1

id: 281,462
vocab_size: 24
ast_levels: 10
nloc: 37
n_ast_nodes: 220
n_identifiers: 15
n_ast_errors: 0
n_words: 63
n_whitespaces: 126
fun_name: print_help
commit_message:
Terminal Wide Rich (#1161) * My idea for how we handle Rich moving forward * remove independent consoles * FIxed pylint issues * add a few vars * Switched print to console * More transitions * Changed more prints * Replaced all prints * Fixing tabulate * Finished replace tabulate * Finished removing rich from Tabulate * add Panel around menu * add GST watermark under feature flag * Fixed 46 tests * Delete test_screener[False].yaml * Delete test_screener[True].yaml * Fixed the rest of the tests * add help and source color vars and use rgb * rich on stocks/options * update rich on disc, dps, sia * rich in gov, ins and scr menus * ba and ca menus with rich * Fixed import issue * Fixed some tests * removed termcolor * Removed prettytable * add rich to remaining stocks menus * FIxed linting issue * Added James' changes * Updated dependencies * Add rich to cryptocurrency menu * refactor economy and forex * refactor etf with rich * refactor mfunds * refactor rich rest * not specify style so default color works well on any background * Fixing mypy issues * Updated tests * More test fixes * James' test fixes * Updating tests : stocks/screener - fix cassettes using BR * Updating tests : crypto * Updating tests : disable DEBUG_MODE * Updating tests : stocks/fa/yfinance * minor fixes that escape * Improve the rich table function (that replaces tabulate :D ) * Fixed bad code * delete rogue file + dcf fix + NoConsole * sia mypy * fuck you linter * fuck you linter pt 2 * skip hehe * i hate the black linter * ubuntu mypy attempt * Update : rich_config + gtff * Updating tests : conftest * Updating tests : stocks * Update : rich_config * Updating : rich_config * make panel configurable for Theodore :b * colors update * Merged * Updating : rich_config + feature_flags * Updating : rich_config * Updating tests : stocks * Updating : feature_flags Co-authored-by: DidierRLopes <[email protected]> Co-authored-by: Chavithra PARANA <[email protected]> Co-authored-by: james <[email protected]> Co-authored-by: jose-donato <[email protected]>
url: https://github.com/OpenBB-finance/OpenBBTerminal.git
code:
def print_help(self):
    has_account_start = "[unvl]" if self.address_type != "account" else ""
    has_account_end = "[unvl]" if self.address_type != "account" else ""
    has_token_start = "[unvl]" if self.address_type != "token" else ""
    has_token_end = "[unvl]" if self.address_type != "token" else ""
    has_tx_start = "[unvl]" if self.address_type != "tx" else ""
    has_tx_end = "[unvl]" if self.address_type != "tx" else ""
    help_text = f
    console.print(text=help_text, menu="Cryptocurrency - Onchain")
token_counts: 88
file_name: onchain_controller.py
language: Python
path: gamestonk_terminal/cryptocurrency/onchain/onchain_controller.py
commit_id: 82747072c511beb1b2672846ae2ee4aec53eb562
repo: OpenBBTerminal
complexity: 7

id: 278,615
vocab_size: 6
ast_levels: 7
nloc: 2
n_ast_nodes: 32
n_identifiers: 4
n_ast_errors: 1
n_words: 6
n_whitespaces: 11
fun_name: preprocess_input
commit_message:
Remove pylint comments. PiperOrigin-RevId: 452353044
url: https://github.com/keras-team/keras.git
code:
def preprocess_input(x, data_format=None):
    return x


@keras_export("keras.applications.efficientnet.decode_predictions")
ast_errors: @keras_export("keras.applications.efficientnet.decode_predictions")
token_counts: 12
file_name: efficientnet.py
language: Python
path: keras/applications/efficientnet.py
commit_id: 3613c3defc39c236fb1592c4f7ba1a9cc887343a
repo: keras
complexity: 1

id: 160,873
vocab_size: 50
ast_levels: 11
nloc: 17
n_ast_nodes: 181
n_identifiers: 17
n_ast_errors: 0
n_words: 78
n_whitespaces: 258
fun_name: round
commit_message:
ENH: Adding __array_ufunc__ capability to MaskedArrays. This enables any ufunc numpy operations that are called on a MaskedArray to use the masked version of that function automatically without needing to resort to np.ma.func() calls.
url: https://github.com/numpy/numpy.git
code:
def round(self, decimals=0, out=None):
    stored_out = None
    if isinstance(out, MaskedArray):
        stored_out = out
        out = getdata(out)
    result = self._data.round(decimals=decimals, out=out).view(type(self))
    if result.ndim > 0:
        result._mask = self._mask
        result._update_from(self)
    elif self._mask:
        # Return masked when the scalar is masked
        result = masked
    # No explicit output: we're done
    if out is None:
        return result
    if stored_out is not None:
        # We got in a masked array originally, so we need to return one
        out = stored_out
    out.__setmask__(self._mask)
    return out
token_counts: 112
file_name: core.py
language: Python
path: numpy/ma/core.py
commit_id: 6d77c591c59b5678f14ae5af2127eebb7d2415bc
repo: numpy
complexity: 6

id: 107,368
vocab_size: 4
ast_levels: 9
nloc: 2
n_ast_nodes: 30
n_identifiers: 4
n_ast_errors: 0
n_words: 4
n_whitespaces: 18
fun_name: minorformatter
commit_message:
MNT: make colorbars locators and formatters properties
url: https://github.com/matplotlib/matplotlib.git
code:
def minorformatter(self):
    return self._long_axis().get_minor_formatter()
token_counts: 16
file_name: colorbar.py
language: Python
path: lib/matplotlib/colorbar.py
commit_id: 6010bb43ed01c48c7c403569dd210490b236a853
repo: matplotlib
complexity: 1

id: 320,084
vocab_size: 38
ast_levels: 13
nloc: 13
n_ast_nodes: 141
n_identifiers: 15
n_ast_errors: 0
n_words: 39
n_whitespaces: 157
fun_name: test_basic_parse_docx
commit_message:
Enables some basic live testing against a tika server with actual sample documents to catch some more errors mocking won't catch
url: https://github.com/paperless-ngx/paperless-ngx.git
code:
def test_basic_parse_docx(self):
    test_file = self.SAMPLE_DIR / Path("sample.docx")
    self.parser.parse(
        test_file,
        "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
    )
    self.assertEqual(
        self.parser.text,
        "This is an DOCX test document, also made September 14, 2022",
    )
    self.assertIsNotNone(self.parser.archive_path)
    with open(self.parser.archive_path, "rb") as f:
        self.assertTrue(b"PDF-" in f.read()[:10])
    # self.assertEqual(self.parser.date, datetime.datetime(2022, 9, 14))
token_counts: 81
file_name: test_live_tika.py
language: Python
path: src/paperless_tika/tests/test_live_tika.py
commit_id: 9c0c734b34881e23ffe31cfd7bc6cc1606cca400
repo: paperless-ngx
complexity: 1

id: 55,959
vocab_size: 4
ast_levels: 7
nloc: 2
n_ast_nodes: 22
n_identifiers: 3
n_ast_errors: 0
n_words: 4
n_whitespaces: 18
fun_name: BlockDocumentReference
commit_message:
Nested Block Schemas (PrefectHQ/orion#1846) * Adds models and migration for block schema and block document references * Adds customization to the generation of a block schema's fields * Adds ability to reconstruct block schema fields on read * Adds ability to reconstruct block schema when read by checksum * Adds schema reconstruction when reading multiple block schemas * Adds ordering to query of recursive CTE * Refactors to make code path and purpose easier to follow
url: https://github.com/PrefectHQ/prefect.git
code:
def BlockDocumentReference(self):
    return self.orm.BlockDocumentReference
token_counts: 12
file_name: interface.py
language: Python
path: src/prefect/orion/database/interface.py
commit_id: a05e44c89acf0b6073ac876479be24a5e51d7754
repo: prefect
complexity: 1

id: 107,160
vocab_size: 30
ast_levels: 11
nloc: 9
n_ast_nodes: 127
n_identifiers: 16
n_ast_errors: 1
n_words: 34
n_whitespaces: 93
fun_name: test_constrained_layout3
commit_message:
ENH: implement and use base layout_engine for more flexible layout.
url: https://github.com/matplotlib/matplotlib.git
code:
def test_constrained_layout3():
    fig, axs = plt.subplots(2, 2, layout="constrained")
    for nn, ax in enumerate(axs.flat):
        pcm = example_pcolor(ax, fontsize=24)
        if nn == 3:
            pad = 0.08
        else:
            pad = 0.02  # default
        fig.colorbar(pcm, ax=ax, pad=pad)


@image_comparison(['constrained_layout4.png'])
ast_errors: @image_comparison(['constrained_layout4.png'])
token_counts: 74
file_name: test_constrainedlayout.py
language: Python
path: lib/matplotlib/tests/test_constrainedlayout.py
commit_id: ec4dfbc3c83866f487ff0bc9c87b0d43a1c02b22
repo: matplotlib
complexity: 3

id: 167,071
vocab_size: 6
ast_levels: 8
nloc: 5
n_ast_nodes: 33
n_identifiers: 7
n_ast_errors: 0
n_words: 6
n_whitespaces: 20
fun_name: _is_boolean
commit_message:
ENH: Incorproate ArrowDtype into ArrowExtensionArray (#47034)
url: https://github.com/pandas-dev/pandas.git
code:
def _is_boolean(self) -> bool:
    return pa.types.is_boolean(self.pyarrow_dtype)
token_counts: 19
file_name: dtype.py
language: Python
path: pandas/core/arrays/arrow/dtype.py
commit_id: f30c7d78237007c2a4ee30e78d4cdcac7caa83f6
repo: pandas
complexity: 1

id: 174,310
vocab_size: 21
ast_levels: 16
nloc: 13
n_ast_nodes: 130
n_identifiers: 13
n_ast_errors: 0
n_words: 25
n_whitespaces: 101
fun_name: _iter_egg_info_extras
commit_message:
An importlib.metadata-based backend This is not tested at all, but passed Mypy.
url: https://github.com/pypa/pip.git
code:
def _iter_egg_info_extras(self) -> Iterable[str]:
    requires_txt = self._dist.read_text("requires.txt")
    if requires_txt is None:
        return
    for line in requires_txt.splitlines():
        line = line.strip()
        if line.startswith("[") and line.endswith("]"):
            yield line.strip("[]").partition(":")[0]
token_counts: 73
file_name: importlib.py
language: Python
path: src/pip/_internal/metadata/importlib.py
commit_id: 60a7ad3a276d568d056dafc38e95e0d5f6ff2c7c
repo: pip
complexity: 5

id: 249,092
vocab_size: 22
ast_levels: 11
nloc: 14
n_ast_nodes: 160
n_identifiers: 13
n_ast_errors: 0
n_words: 23
n_whitespaces: 112
fun_name: test_limit
commit_message:
Use literals in place of `HTTPStatus` constants in tests (#13469)
url: https://github.com/matrix-org/synapse.git
code:
def test_limit(self) -> None:
    channel = self.make_request(
        "GET",
        self.url + "?limit=5",
        access_token=self.admin_user_tok,
    )
    self.assertEqual(200, channel.code, msg=channel.json_body)
    self.assertEqual(channel.json_body["total"], 20)
    self.assertEqual(len(channel.json_body["event_reports"]), 5)
    self.assertEqual(channel.json_body["next_token"], 5)
    self._check_fields(channel.json_body["event_reports"])
token_counts: 98
file_name: test_event_reports.py
language: Python
path: tests/rest/admin/test_event_reports.py
commit_id: c97042f7eef3748e17c90e48a4122389a89c4735
repo: synapse
complexity: 1

id: 20,361
vocab_size: 9
ast_levels: 8
nloc: 2
n_ast_nodes: 47
n_identifiers: 9
n_ast_errors: 0
n_words: 12
n_whitespaces: 26
fun_name: _draw_text
commit_message:
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
url: https://github.com/pypa/pipenv.git
code:
def _draw_text(self, pos, text, font, text_fg, text_bg):
    self.drawables.append((pos, text, font, text_fg, text_bg))
token_counts: 34
file_name: img.py
language: Python
path: pipenv/patched/notpip/_vendor/pygments/formatters/img.py
commit_id: f3166e673fe8d40277b804d35d77dcdb760fc3b3
repo: pipenv
complexity: 1

id: 167,015
vocab_size: 54
ast_levels: 16
nloc: 26
n_ast_nodes: 213
n_identifiers: 23
n_ast_errors: 0
n_words: 75
n_whitespaces: 368
fun_name: _get_data_from_filepath
commit_message:
Raise `FileNotFoundError` in `read_json` if input looks like file path but file is missing (#46718) * raise FileNotFoundError in _get_data_from_filepath() * update tests test_read_non_existent + test_read_expands_user_home_dir * add changelog entry in doc/source/whatsnew/v1.5.0.rst * use pandas.io.common._compression_to_extension instead of hard-coded extensions * move changelog entry from IO to other API changes * fix ImportError from _compression_to_extension -> _extension_to_compression rename * add test read_json very long file path * remove extra period in extension checking Co-authored-by: Matthew Roeschke <[email protected]>
url: https://github.com/pandas-dev/pandas.git
code:
def _get_data_from_filepath(self, filepath_or_buffer):
    # if it is a string but the file does not exist, it might be a JSON string
    filepath_or_buffer = stringify_path(filepath_or_buffer)
    if (
        not isinstance(filepath_or_buffer, str)
        or is_url(filepath_or_buffer)
        or is_fsspec_url(filepath_or_buffer)
        or file_exists(filepath_or_buffer)
    ):
        self.handles = get_handle(
            filepath_or_buffer,
            "r",
            encoding=self.encoding,
            compression=self.compression,
            storage_options=self.storage_options,
            errors=self.encoding_errors,
        )
        filepath_or_buffer = self.handles.handle
    elif (
        isinstance(filepath_or_buffer, str)
        and filepath_or_buffer.lower().endswith(
            (".json",) + tuple(f".json{c}" for c in _extension_to_compression)
        )
        and not file_exists(filepath_or_buffer)
    ):
        raise FileNotFoundError(f"File {filepath_or_buffer} does not exist")
    return filepath_or_buffer
token_counts: 130
file_name: _json.py
language: Python
path: pandas/io/json/_json.py
commit_id: 67045903306ac4a1cab108177e92df30d99912b4
repo: pandas
complexity: 9

id: 189,698
vocab_size: 6
ast_levels: 9
nloc: 18
n_ast_nodes: 29
n_identifiers: 4
n_ast_errors: 0
n_words: 6
n_whitespaces: 20
fun_name: get_lines
commit_message:
Improved structure of the :mod:`.mobject` module (#2476) * group graphing and update its references * group text and update its references * group opengl and update its references * group three_d and update its references * group geometry and update (most) references * move some chaning.py + updater files into animation * refactor arc.py * refactor line.py * refactor polygram.py * refactor tips.py * black + isort * import new files in __init__.py * refactor places where geometry was used * black + isort again * remove unused imports * update reference.rst * add descriptions to files * fix circular imports * forgot ArrowTip * fix tests * fix doctests * satisfy mypy? * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * fix ALL merge conflicts * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * one VMobject import slipped through * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * re-add imports to `manim/opengl/__init__.py` * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * fix reference manual * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * ignore unknown directive type * fix arrow tip imports in docstrings Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Benjamin Hackl <[email protected]>
url: https://github.com/ManimCommunity/manim.git
code:
def get_lines(self) -> VGroup:
    return VGroup(*self.lines)
token_counts: 16
file_name: line.py
language: Python
path: manim/mobject/geometry/line.py
commit_id: e040bcacd38378386749db18aeba575b93f4ebca
repo: manim
complexity: 1

id: 107,140
vocab_size: 52
ast_levels: 12
nloc: 24
n_ast_nodes: 213
n_identifiers: 15
n_ast_errors: 0
n_words: 80
n_whitespaces: 400
fun_name: set_layout_engine
commit_message:
ENH: implement and use base layout_engine for more flexible layout.
url: https://github.com/matplotlib/matplotlib.git
code:
def set_layout_engine(self, layout=None, **kwargs):
    if layout is None:
        if mpl.rcParams['figure.autolayout']:
            layout = 'tight'
        elif mpl.rcParams['figure.constrained_layout.use']:
            layout = 'constrained'
        else:
            self._layout_engine = None
            return
    if layout == 'tight':
        new_layout_engine = TightLayoutEngine(**kwargs)
    elif layout == 'constrained':
        new_layout_engine = ConstrainedLayoutEngine(**kwargs)
    elif isinstance(layout, LayoutEngine):
        new_layout_engine = layout
    else:
        raise ValueError(f"Invalid value for 'layout': {layout!r}")
    if self._check_layout_engines_compat(self._layout_engine, new_layout_engine):
        self._layout_engine = new_layout_engine
    else:
        raise RuntimeError('Colorbar layout of new layout engine not '
                           'compatible with old engine, and a colorbar '
                           'has been created. Engine not changed.')
token_counts: 117
file_name: figure.py
language: Python
path: lib/matplotlib/figure.py
commit_id: ec4dfbc3c83866f487ff0bc9c87b0d43a1c02b22
repo: matplotlib
complexity: 8

id: 83,021
vocab_size: 23
ast_levels: 11
nloc: 12
n_ast_nodes: 157
n_identifiers: 16
n_ast_errors: 0
n_words: 26
n_whitespaces: 89
fun_name: test_subscribe_to_stream_post_policy_admins_stream
commit_message:
streams: Add notifications for posting policy changes. An explanatory note on the changes in zulip.yaml and curl_param_value_generators is warranted here. In our automated tests for our curl examples, the test for the API endpoint that changes the posting permissions of a stream comes before our existing curl test for adding message reactions. Since there is an extra notification message due to the change in posting permissions, the message IDs used in tests that come after need to be incremented by 1. This is a part of #20289.
url: https://github.com/zulip/zulip.git
code:
def test_subscribe_to_stream_post_policy_admins_stream(self) -> None:
    member = self.example_user("AARON")
    stream = self.make_stream("stream1")
    do_change_stream_post_policy(stream, Stream.STREAM_POST_POLICY_ADMINS, acting_user=member)
    result = self.common_subscribe_to_streams(member, ["stream1"])
    self.assert_json_success(result)
    json = result.json()
    self.assertEqual(json["subscribed"], {member.email: ["stream1"]})
    self.assertEqual(json["already_subscribed"], {})
token_counts: 92
file_name: test_subs.py
language: Python
path: zerver/tests/test_subs.py
commit_id: c30458e1740c7878e436037f61431884e54b349d
repo: zulip
complexity: 1

id: 48,676
vocab_size: 13
ast_levels: 11
nloc: 6
n_ast_nodes: 60
n_identifiers: 7
n_ast_errors: 0
n_words: 14
n_whitespaces: 68
fun_name: __getattr__
commit_message:
Fix infinite recursion with deepcopy on Request (#8684)
url: https://github.com/encode/django-rest-framework.git
code:
def __getattr__(self, attr):
    try:
        _request = self.__getattribute__("_request")
        return getattr(_request, attr)
    except AttributeError:
        return self.__getattribute__(attr)
token_counts: 35
file_name: request.py
language: Python
path: rest_framework/request.py
commit_id: d507cd851c4b1185e3dc720de9ba6a642459d738
repo: django-rest-framework
complexity: 2

id: 88,596
vocab_size: 35
ast_levels: 12
nloc: 12
n_ast_nodes: 171
n_identifiers: 16
n_ast_errors: 0
n_words: 55
n_whitespaces: 165
fun_name: test_file_not_found_error
commit_message:
ref(stacktrace_link): Add more than one code mapping in the tests (#41409) Include more than one code mapping in the setup code. Cleaning up a bit how we tag the transactions. This makes the PR for WOR-2395 a little easier to read.
url: https://github.com/getsentry/sentry.git
code:
def test_file_not_found_error(self):
    response = self.get_success_response(
        self.organization.slug, self.project.slug, qs_params={"file": self.filepath}
    )
    assert response.data["config"] == self.expected_configurations(self.code_mapping1)
    assert not response.data["sourceUrl"]
    # XXX: This depends on what was the last attempted code mapping
    assert response.data["error"] == "stack_root_mismatch"
    assert response.data["integrations"] == [serialized_integration(self.integration)]
    # XXX: This depends on what was the last attempted code mapping
    assert (
        response.data["attemptedUrl"]
        == f"https://example.com/{self.repo.name}/blob/master/src/sentry/src/sentry/utils/safe.py"
    )
token_counts: 95
file_name: test_project_stacktrace_link.py
language: Python
path: tests/sentry/api/endpoints/test_project_stacktrace_link.py
commit_id: 2e0d2c856eb17a842c67d88363bed92c99578c20
repo: sentry
complexity: 1

id: 291,219
vocab_size: 10
ast_levels: 7
nloc: 4
n_ast_nodes: 35
n_identifiers: 5
n_ast_errors: 0
n_words: 10
n_whitespaces: 32
fun_name: state
commit_message:
Add type hints to template states (#82582) * Add type hints to template states * Undo rename * Remove invalid mypy issue link
url: https://github.com/home-assistant/core.git
code:
def state(self) -> str:  # type: ignore[override]
    self._collect_state()
    return self._state.state
token_counts: 19
file_name: template.py
language: Python
path: homeassistant/helpers/template.py
commit_id: aa02a53ac667d08c66a536baf139993bcfe4d7d6
repo: core
complexity: 1

id: 249,509
vocab_size: 23
ast_levels: 14
nloc: 17
n_ast_nodes: 161
n_identifiers: 25
n_ast_errors: 0
n_words: 24
n_whitespaces: 275
fun_name: test_need_validated_email
commit_message:
Support enabling/disabling pushers (from MSC3881) (#13799) Partial implementation of MSC3881
url: https://github.com/matrix-org/synapse.git
code:
def test_need_validated_email(self):
    with self.assertRaises(SynapseError) as cm:
        self.get_success_or_raise(
            self.hs.get_pusherpool().add_or_update_pusher(
                user_id=self.user_id,
                access_token=self.token_id,
                kind="email",
                app_id="m.email",
                app_display_name="Email Notifications",
                device_display_name="[email protected]",
                pushkey="[email protected]",
                lang=None,
                data={},
            )
        )
    self.assertEqual(400, cm.exception.code)
    self.assertEqual(Codes.THREEPID_NOT_FOUND, cm.exception.errcode)
token_counts: 99
file_name: test_email.py
language: Python
path: tests/push/test_email.py
commit_id: 8ae42ab8fa3c6b52d74c24daa7ca75a478fa4fbb
repo: synapse
complexity: 1

id: 6,454
vocab_size: 37
ast_levels: 17
nloc: 12
n_ast_nodes: 211
n_identifiers: 21
n_ast_errors: 0
n_words: 49
n_whitespaces: 133
fun_name: flatten_dict
commit_message:
Fix python 3 compatibility issues (#1838) * Use time.process_time instead of time.clock for Python 3.8 compatibility. * Import ABC from collections.abc for Python 3.10 compatibility
url: https://github.com/ludwig-ai/ludwig.git
code:
def flatten_dict(d, parent_key="", sep="."):
    items = []
    for k, v in d.items():
        new_key = parent_key + sep + k if parent_key else k
        if isinstance(v, collections.abc.MutableMapping):
            items.extend(flatten_dict(v, new_key, sep=sep).items())
        elif isinstance(v, list):
            list_mapping = {str(i): item for i, item in enumerate(v)}
            items.extend(flatten_dict(list_mapping, new_key, sep=sep).items())
        else:
            items.append((new_key, v))
    return dict(items)
token_counts: 134
file_name: data_utils.py
language: Python
path: ludwig/utils/data_utils.py
commit_id: 6a63fd367b20c90206212241eed27616bf28a3b8
repo: ludwig
complexity: 6

id: 176,647
vocab_size: 11
ast_levels: 13
nloc: 6
n_ast_nodes: 69
n_identifiers: 7
n_ast_errors: 0
n_words: 13
n_whitespaces: 48
fun_name: is_weakly_connected
commit_message:
Added examples in weakly_connected.py (#5593) * documentation * added examples * Update branchings.py * Update branchings.py * simplified
url: https://github.com/networkx/networkx.git
code:
def is_weakly_connected(G):
    if len(G) == 0:
        raise nx.NetworkXPointlessConcept(
        )
    return len(list(weakly_connected_components(G))[0]) == len(G)
token_counts: 40
file_name: weakly_connected.py
language: Python
path: networkx/algorithms/components/weakly_connected.py
commit_id: 680557b4fe8a86a36288e3cf89bbc52728f496fa
repo: networkx
complexity: 2

id: 269,957
vocab_size: 14
ast_levels: 14
nloc: 8
n_ast_nodes: 75
n_identifiers: 9
n_ast_errors: 0
n_words: 17
n_whitespaces: 65
fun_name: _is_generator_like
commit_message:
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
url: https://github.com/keras-team/keras.git
code:
def _is_generator_like(data):
    return (
        hasattr(data, "__next__")
        or hasattr(data, "next")
        or isinstance(
            data, (Sequence, tf.compat.v1.data.Iterator, tf.data.Iterator)
        )
    )
token_counts: 47
file_name: callbacks.py
language: Python
path: keras/callbacks.py
commit_id: 84afc5193d38057e2e2badf9c889ea87d80d8fbf
repo: keras
complexity: 3

id: 178,524
vocab_size: 47
ast_levels: 12
nloc: 16
n_ast_nodes: 170
n_identifiers: 15
n_ast_errors: 0
n_words: 59
n_whitespaces: 219
fun_name: onModuleSourceCode
commit_message:
Plugins: Slight more helpful error message in case tensorflow works
url: https://github.com/Nuitka/Nuitka.git
code:
def onModuleSourceCode(self, module_name, source_code):
    if module_name != "tensorflow":
        return source_code
    source_lines = source_code.splitlines()
    found_insert = False
    for i, l in enumerate(source_lines):
        if l.startswith("def ") and "_running_from_pip_package():" in l:
            source_lines.insert(i, "_site_packages_dirs = []")
            source_lines.insert(i, "from tensorflow.python import keras")
            found_insert = True
            break
    if found_insert is True:
        self.info("Patched 'running-from-pip' path magic.")
    else:
        self.sysexit("Did not find 'running-from-pip' path magic code.")
    return "\n".join(source_lines)
token_counts: 95
file_name: TensorflowPlugin.py
language: Python
path: nuitka/plugins/standard/TensorflowPlugin.py
commit_id: ab7014c6457b2b65010aea41512ca75d93847c9a
repo: Nuitka
complexity: 6

id: 297,431
vocab_size: 64
ast_levels: 13
nloc: 33
n_ast_nodes: 273
n_identifiers: 24
n_ast_errors: 1
n_words: 103
n_whitespaces: 507
fun_name: _async_update_input_state
commit_message:
Fix HomeKit media players when entity has duplicate sources (#83890) fixes #83852 fixes #83698
url: https://github.com/home-assistant/core.git
code:
def _async_update_input_state(self, hk_state, new_state):
    # Set active input
    if not self.support_select_source or not self.sources:
        return
    source = new_state.attributes.get(self.source_key)
    source_name = cleanup_name_for_homekit(source)
    _LOGGER.debug("%s: Set current input to %s", self.entity_id, source_name)
    if source_name in self.sources:
        index = self.sources.index(source_name)
        self.char_input_source.set_value(index)
        return
    possible_sources = self._get_ordered_source_list_from_state(new_state)
    if source in possible_sources:
        index = possible_sources.index(source)
        if index >= MAXIMUM_SOURCES:
            _LOGGER.debug(
                "%s: Source %s and above are not supported",
                self.entity_id,
                MAXIMUM_SOURCES,
            )
        else:
            _LOGGER.debug(
                "%s: Sources out of sync. Rebuilding Accessory",
                self.entity_id,
            )
            # Sources are out of sync, recreate the accessory
            self.async_reset()
        return
    _LOGGER.debug(
        "%s: Source %s does not exist the source list: %s",
        self.entity_id,
        source,
        possible_sources,
    )
    self.char_input_source.set_value(0)


@TYPES.register("ActivityRemote")
ast_errors: @TYPES.register("ActivityRemote")
token_counts: 159
file_name: type_remotes.py
language: Python
path: homeassistant/components/homekit/type_remotes.py
commit_id: 692a73255520997a99124e33f1b20b8c99fa40cd
repo: core
complexity: 6

id: 317,252
vocab_size: 13
ast_levels: 10
nloc: 6
n_ast_nodes: 64
n_identifiers: 12
n_ast_errors: 0
n_words: 13
n_whitespaces: 52
fun_name: async_save_entity_map
commit_message:
Restore accessory state into pairing using new HKC methods (#75276)
url: https://github.com/home-assistant/core.git
code:
def async_save_entity_map(self) -> None:
    entity_storage: EntityMapStorage = self.hass.data[ENTITY_MAP]
    entity_storage.async_create_or_update_map(
        self.unique_id, self.config_num, self.entity_map.serialize()
    )
token_counts: 40
file_name: connection.py
language: Python
path: homeassistant/components/homekit_controller/connection.py
commit_id: b9c8d65940ec47a82332b8b1a67301da018ccadf
repo: core
complexity: 1

id: 189,458
vocab_size: 29
ast_levels: 12
nloc: 17
n_ast_nodes: 155
n_identifiers: 23
n_ast_errors: 0
n_words: 34
n_whitespaces: 193
fun_name: _gen_line_numbers
commit_message:
Hide more private methods from the docs. (#2468) * hide privs from text_mobject.py * hide privs from tex_mobject.py * hide privs from code_mobject.py * hide privs from svg_mobject.py * remove SVGPath and utils from __init__.py * don't import string_to_numbers * hide privs from geometry.py * hide privs from matrix.py * hide privs from numbers.py * hide privs from three_dimensions.py * forgot underscore under set_stroke_width_from_length * there were more i missed * unhidea method that was used in docs * forgot other text2hash * remove svg_path from docs
url: https://github.com/ManimCommunity/manim.git
code:
def _gen_line_numbers(self):
    line_numbers_array = []
    for line_no in range(0, self.code_json.__len__()):
        number = str(self.line_no_from + line_no)
        line_numbers_array.append(number)
    line_numbers = Paragraph(
        *list(line_numbers_array),
        line_spacing=self.line_spacing,
        alignment="right",
        font_size=self.font_size,
        font=self.font,
        disable_ligatures=True,
        stroke_width=self.stroke_width,
    )
    for i in line_numbers:
        i.set_color(self.default_color)
    return line_numbers
token_counts: 100
file_name: code_mobject.py
language: Python
path: manim/mobject/svg/code_mobject.py
commit_id: 902e7eb4f0147b5882a613b67467e38a1d47f01e
repo: manim
complexity: 3

id: 310,110
vocab_size: 20
ast_levels: 11
nloc: 12
n_ast_nodes: 105
n_identifiers: 13
n_ast_errors: 0
n_words: 22
n_whitespaces: 82
fun_name: test_setup_no_systems_recognized
commit_message:
Get rid of name collision in iaqualink tests (#63642)
url: https://github.com/home-assistant/core.git
code:
async def test_setup_no_systems_recognized(hass, config_entry):
    config_entry.add_to_hass(hass)
    with patch(
        "homeassistant.components.iaqualink.AqualinkClient.login",
        return_value=None,
    ), patch(
        "homeassistant.components.iaqualink.AqualinkClient.get_systems",
        return_value={},
    ):
        await hass.config_entries.async_setup(config_entry.entry_id)
        await hass.async_block_till_done()
    assert config_entry.state is ConfigEntryState.SETUP_ERROR
token_counts: 61
file_name: test_init.py
language: Python
path: tests/components/iaqualink/test_init.py
commit_id: 7520a3fd01c025a89d85ec996a8b39ff501a3eb0
repo: core
complexity: 1

id: 48,246
vocab_size: 11
ast_levels: 7
nloc: 11
n_ast_nodes: 55
n_identifiers: 5
n_ast_errors: 1
n_words: 11
n_whitespaces: 16
fun_name: prepare_build_cache_command
commit_message:
Improve caching for multi-platform images. (#23562) This is another attempt to improve caching performance for multi-platform images as the previous ones were undermined by a bug in buildx multiplatform cache-to implementattion that caused the image cache to be overwritten between platforms, when multiple images were build. The bug is created for the buildx behaviour at https://github.com/docker/buildx/issues/1044 and until it is fixed we have to prpare separate caches for each platform and push them to separate tags. That adds a bit overhead on the building step, but for now it is the simplest way we can workaround the bug if we do not want to manually manipulate manifests and images.
url: https://github.com/apache/airflow.git
code:
def prepare_build_cache_command() -> List[str]:
    return ["buildx", "build", "--builder", "airflow_cache", "--progress=tty"]


@lru_cache(maxsize=None)
ast_errors: @lru_cache(maxsize=None)
token_counts: 22
file_name: run_utils.py
language: Python
path: dev/breeze/src/airflow_breeze/utils/run_utils.py
commit_id: 9a6baab5a271b28b6b3cbf96ffa151ac7dc79013
repo: airflow
complexity: 1

id: 138,050
vocab_size: 21
ast_levels: 12
nloc: 8
n_ast_nodes: 140
n_identifiers: 14
n_ast_errors: 1
n_words: 23
n_whitespaces: 62
fun_name: test_strategy
commit_message:
[air/tune] Internal resource management 1 - Ray AIR resource manager implementation (#30777) Prerequisite to #30016 This PR adds a new Ray AIR resource manager to replace the PlacementGroupManager of Ray Tune. Details can be found in #30016. Specifically, this PR - Adds the main resource manager abstractions - Renames (and moves) PlacementGroupFactory to ResourceRequest - Adds implementations and tests for a placement group based manager and a budget based manager Signed-off-by: Kai Fricke <[email protected]> Signed-off-by: Kai Fricke <[email protected]> Co-authored-by: matthewdeng <[email protected]>
url: https://github.com/ray-project/ray.git
code:
def test_strategy(ray_start_4_cpus, strategy):
    manager = FixedResourceManager()
    req = ResourceRequest([{"CPU": 2}], strategy=strategy)
    if strategy.startswith("STRICT_"):
        with pytest.raises(RuntimeError):
            manager.request_resources(req)
    else:
        manager.request_resources(req)


@pytest.mark.parametrize("strategy", ["STRICT_PACK", "PACK", "SPREAD", "STRICT_SPREAD"])
ast_errors: @pytest.mark.parametrize("strategy", ["STRICT_PACK", "PACK", "SPREAD", "STRICT_SPREAD"])
token_counts: 59
file_name: test_resource_manager_fixed.py
language: Python
path: python/ray/air/tests/test_resource_manager_fixed.py
commit_id: edb17fd2069844f12237c85ba6607afae536401d
repo: ray
complexity: 2

id: 20,586
vocab_size: 49
ast_levels: 16
nloc: 15
n_ast_nodes: 155
n_identifiers: 11
n_ast_errors: 0
n_words: 61
n_whitespaces: 236
fun_name: __getitem__
commit_message:
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
url: https://github.com/pypa/pipenv.git
code:
def __getitem__(self, key):
    # convert single arg keys to tuples
    try:
        if isinstance(key, str_type):
            key = (key,)
        iter(key)
    except TypeError:
        key = (key, key)
    if len(key) > 2:
        raise TypeError(
            "only 1 or 2 index arguments supported ({}{})".format(
                key[:5], "... [{}]".format(len(key)) if len(key) > 5 else ""
            )
        )
    # clip to 2 elements
    ret = self * tuple(key[:2])
    return ret
token_counts: 93
file_name: core.py
language: Python
path: pipenv/patched/notpip/_vendor/pyparsing/core.py
commit_id: f3166e673fe8d40277b804d35d77dcdb760fc3b3
repo: pipenv
complexity: 5

id: 63,937
vocab_size: 44
ast_levels: 20
nloc: 23
n_ast_nodes: 257
n_identifiers: 14
n_ast_errors: 0
n_words: 88
n_whitespaces: 65
fun_name: set_payment_terms_statuses
commit_message:
feat: Payment Terms Status report - calculate status at runtime for payment terms based on invoices - invoices are used in FIFO method
url: https://github.com/frappe/erpnext.git
code:
def set_payment_terms_statuses(sales_orders, invoices):
    for so in sales_orders:
        for inv in [x for x in invoices if x.sales_order == so.name and x.invoice_amount > 0]:
            if so.payment_amount - so.paid_amount > 0:
                amount = so.payment_amount - so.paid_amount
                if inv.invoice_amount >= amount:
                    inv.invoice_amount -= amount
                    so.paid_amount += amount
                    if so.invoices:
                        so.invoices = so.invoices + "," + inv.invoice
                    else:
                        so.invoices = inv.invoice
                    so.status = "Completed"
                    break
                else:
                    so.paid_amount += inv.invoice_amount
                    inv.invoice_amount = 0
                    if so.invoices:
                        so.invoices = so.invoices + "," + inv.invoice
                    else:
                        so.invoices = inv.invoice
                    so.status = "Partly Paid"
    return sales_orders, invoices
token_counts: 158
file_name: payment_terms_status_for_sales_order.py
language: Python
path: erpnext/selling/report/payment_terms_status_for_sales_order/payment_terms_status_for_sales_order.py
commit_id: 1bac7930834d6f688950e836c45305a62e7ecb3f
repo: erpnext
complexity: 10

id: 177,600
vocab_size: 10
ast_levels: 12
nloc: 5
n_ast_nodes: 76
n_identifiers: 11
n_ast_errors: 0
n_words: 11
n_whitespaces: 54
fun_name: completed_annotations
commit_message:
feat: DEV-469: Skip queue (#1693) * DEV-469 Skip queue project setting * DEV-469 review fixes * Merge migrations (DEV-469) * Update requirements-test.txt * Update requirements-test.txt * Update test_exception.py * Revert "Update test_exception.py" This reverts commit b9c686c9bacaf298bafe3a207352cc5260fef737. * Revert "Update requirements-test.txt" This reverts commit 3704d29978761089bcd008506f9e1c30a162bb3a. * Revert "Update requirements-test.txt" This reverts commit 50273847ae2872b31bccc376d04a3afff0efcf21. * Recalc is_labeled after skip_queue change (DEV-469) * Fix migrations (DEV-469) Co-authored-by: Max Tkachenko <[email protected]> Co-authored-by: niklub <[email protected]> Co-authored-by: nik <[email protected]>
url: https://github.com/heartexlabs/label-studio.git
code:
def completed_annotations(self):
    if self.project.skip_queue == self.project.SkipQueue.IGNORE_SKIPPED:
        return self.annotations.filter(Q(ground_truth=False))
    else:
        return self.annotations.filter(Q_finished_annotations)
token_counts: 46
file_name: models.py
language: Python
path: label_studio/tasks/models.py
commit_id: 074af782e6f351c711f18d8ad6a05aa4f632339c
repo: label-studio
complexity: 2

id: 291,939
vocab_size: 9
ast_levels: 7
nloc: 4
n_ast_nodes: 35
n_identifiers: 5
n_ast_errors: 0
n_words: 9
n_whitespaces: 30
fun_name: async_post_interval_update
commit_message:
Add Connectivity sensor to SIA (#64305) * implemented connectivity sensor * further cleanup off update code * cleanup and tighter behaviour for attributes * added seperate connectivity class to binary sensor * callbacks and keys * redid name and unique_id logic, non-breaking result * using entry more in inits * Fix import * fix ping_interval in sia_entity_base * added ping_interval default to next * fixed next Co-authored-by: Martin Hjelmare <[email protected]>
url: https://github.com/home-assistant/core.git
code:
def async_post_interval_update(self, _) -> None:
    self._attr_is_on = False
    self.async_write_ha_state()
token_counts: 20
file_name: binary_sensor.py
language: Python
path: homeassistant/components/sia/binary_sensor.py
commit_id: af4e37339a39badd5596e8bc9ba86d6c1994aa1b
repo: core
complexity: 1

id: 186,372
vocab_size: 32
ast_levels: 17
nloc: 9
n_ast_nodes: 130
n_identifiers: 18
n_ast_errors: 0
n_words: 38
n_whitespaces: 191
fun_name: _verify_no_matching_http_header
commit_message:
Various clean-ups in certbot-apache. Use f-strings. (#9132) * Various clean-ups in certbot-apache. Use f-strings. * Smaller tweaks
url: https://github.com/certbot/certbot.git
code:
def _verify_no_matching_http_header(self, ssl_vhost, header_substring):
    header_path = self.parser.find_dir("Header", None, start=ssl_vhost.path)
    if header_path:
        # "Existing Header directive for virtualhost"
        pat = '(?:[ "]|^)(%s)(?:[ "]|$)' % (header_substring.lower())
        for match in header_path:
            if re.search(pat, self.parser.aug.get(match).lower()):
                raise errors.PluginEnhancementAlreadyPresent(
                    "Existing %s header" % header_substring)
token_counts: 79
file_name: configurator.py
language: Python
path: certbot-apache/certbot_apache/_internal/configurator.py
commit_id: eeca208c8f57304590ac1af80b496e61021aaa45
repo: certbot
complexity: 4

id: 129,112
vocab_size: 3
ast_levels: 6
nloc: 3
n_ast_nodes: 15
n_identifiers: 3
n_ast_errors: 0
n_words: 3
n_whitespaces: 6
fun_name: unsquash_action
commit_message:
[RLlib] Issue 21109: Action unsquashing causes inf/NaN actions for unbounded action spaces. (#21110)
url: https://github.com/ray-project/ray.git
code:
def unsquash_action(action, action_space_struct):
token_counts: 21
file_name: space_utils.py
language: Python
path: rllib/utils/spaces/space_utils.py
commit_id: 35af30a4461410a9031f61c6a1d5a14d84706eda
repo: ray
complexity: 1

id: 247,426
vocab_size: 7
ast_levels: 7
nloc: 8
n_ast_nodes: 24
n_identifiers: 3
n_ast_errors: 0
n_words: 7
n_whitespaces: 21
fun_name: mark_new_data
commit_message:
Spread out sending device lists to remote hosts (#12132)
url: https://github.com/matrix-org/synapse.git
code:
def mark_new_data(self) -> None:
    self._new_data_to_send = True
token_counts: 13
file_name: per_destination_queue.py
language: Python
path: synapse/federation/sender/per_destination_queue.py
commit_id: 423cca9efe06d78aaca5f62fb74ee7e5bceebe49
repo: synapse
complexity: 1

id: 279,091
vocab_size: 47
ast_levels: 13
nloc: 14
n_ast_nodes: 100
n_identifiers: 14
n_ast_errors: 0
n_words: 53
n_whitespaces: 168
fun_name: _on_gcp
commit_message:
Adding tf.distribtue.experimental.PreemptionCheckpointHandler related util. PiperOrigin-RevId: 456391378
url: https://github.com/keras-team/keras.git
code:
def _on_gcp():
    gce_metadata_endpoint = "http://" + os.environ.get(
        _GCE_METADATA_URL_ENV_VARIABLE, "metadata.google.internal"
    )
    try:
        # Timeout in 5 seconds, in case the test environment has connectivity
        # issue. There is not default timeout, which means it might block
        # forever.
        response = requests.get(
            "%s/computeMetadata/v1/%s"
            % (gce_metadata_endpoint, "instance/hostname"),
            headers=GCP_METADATA_HEADER,
            timeout=5,
        )
        return response.status_code
    except requests.exceptions.RequestException:
        return False
token_counts: 57
file_name: distributed_file_utils.py
language: Python
path: keras/distribute/distributed_file_utils.py
commit_id: dee67880be30eb1a5c2dfc7b4e60b32cd4789fbe
repo: keras
complexity: 2

id: 127,553
vocab_size: 90
ast_levels: 17
nloc: 42
n_ast_nodes: 513
n_identifiers: 33
n_ast_errors: 0
n_words: 126
n_whitespaces: 579
fun_name: testClusterAutoscaling
commit_message:
Migrate the deprecated placement_group option to PlacementGroupSchedulingStrategy (#28437) placement_group option is deprecated, use PlacementGroupSchedulingStrategy instead.
url: https://github.com/ray-project/ray.git
code:
def testClusterAutoscaling(self):
    self.cluster.update_config(
        {
            "provider": {"head_resources": {"CPU": 4, "GPU": 0}},
        }
    )
    self.cluster.start()
    self.cluster.connect(client=True, timeout=120)
    self.assertGreater(ray.cluster_resources().get("CPU", 0), 0)
    # Trigger autoscaling
    pg = ray.util.placement_group([{"CPU": 1, "GPU": 1}] * 2)
    timeout = time.monotonic() + 120
    while ray.cluster_resources().get("GPU", 0) < 2:
        if time.monotonic() > timeout:
            raise RuntimeError("Autoscaling failed or too slow.")
        time.sleep(1)
    # Schedule task with resources
    self.assertEquals(
        5,
        ray.get(
            remote_task.options(
                num_cpus=1,
                num_gpus=1,
                scheduling_strategy=PlacementGroupSchedulingStrategy(
                    placement_group=pg
                ),
            ).remote(5)
        ),
    )
    print("Autoscaling worked")
    ray.util.remove_placement_group(pg)
    time.sleep(2)  # Give some time so nodes.json is updated
    self.cluster.kill_node(num=2)
    print("Killed GPU node.")
    pg = ray.util.placement_group([{"CPU": 1, "GPU": 1}] * 2)
    table = ray.util.placement_group_table(pg)
    assert table["state"] == "PENDING"
    timeout = time.monotonic() + 180
    while table["state"] != "CREATED":
        if time.monotonic() > timeout:
            raise RuntimeError("Re-starting killed node failed or too slow.")
        time.sleep(1)
        table = ray.util.placement_group_table(pg)
    print("Node was restarted.")
token_counts: 300
file_name: test_multinode_sync.py
language: Python
path: python/ray/tune/tests/test_multinode_sync.py
commit_id: 57cdbb1769a9c32972ba0ec9e7e857eeea961869
repo: ray
complexity: 5

id: 259,219
vocab_size: 52
ast_levels: 10
nloc: 7
n_ast_nodes: 185
n_identifiers: 17
n_ast_errors: 1
n_words: 80
n_whitespaces: 108
fun_name: test_ohe_infrequent_mixed
commit_message:
ENH Adds infrequent categories to OneHotEncoder (#16018) * ENH Completely adds infrequent categories * STY Linting * STY Linting * DOC Improves wording * DOC Lint * BUG Fixes * CLN Address comments * CLN Address comments * DOC Uses math to description float min_frequency * DOC Adds comment regarding drop * BUG Fixes method name * DOC Clearer docstring * TST Adds more tests * FIX Fixes mege * CLN More pythonic * CLN Address comments * STY Flake8 * CLN Address comments * DOC Fix * MRG * WIP * ENH Address comments * STY Fix * ENH Use functiion call instead of property * ENH Adds counts feature * CLN Rename variables * DOC More details * CLN Remove unneeded line * CLN Less lines is less complicated * CLN Less diffs * CLN Improves readiabilty * BUG Fix * CLN Address comments * TST Fix * CLN Address comments * CLN Address comments * CLN Move docstring to userguide * DOC Better wrapping * TST Adds test to handle_unknown='error' * ENH Spelling error in docstring * BUG Fixes counter with nan values * BUG Removes unneeded test * BUG Fixes issue * ENH Sync with main * DOC Correct settings * DOC Adds docstring * DOC Immprove user guide * DOC Move to 1.0 * DOC Update docs * TST Remove test * DOC Update docstring * STY Linting * DOC Address comments * ENH Neater code * DOC Update explaination for auto * Update sklearn/preprocessing/_encoders.py Co-authored-by: Roman Yurchak <[email protected]> * TST Uses docstring instead of comments * TST Remove call to fit * TST Spelling error * ENH Adds support for drop + infrequent categories * ENH Adds infrequent_if_exist option * DOC Address comments for user guide * DOC Address comments for whats_new * DOC Update docstring based on comments * CLN Update test with suggestions * ENH Adds computed property infrequent_categories_ * DOC Adds where the infrequent column is located * TST Adds more test for infrequent_categories_ * DOC Adds docstring for _compute_drop_idx * CLN Moves _convert_to_infrequent_idx into its own method * TST Increases test coverage * TST Adds failing test * CLN Careful consideration of dropped and inverse_transform * STY Linting * DOC Adds docstrinb about dropping infrequent * DOC Uses only * DOC Numpydoc * TST Includes test for get_feature_names_out * DOC Move whats new * DOC Address docstring comments * DOC Docstring changes * TST Better comments * TST Adds check for handle_unknown='ignore' for infrequent * CLN Make _infrequent_indices private * CLN Change min_frequency default to None * DOC Adds comments * ENH adds support for max_categories=1 * ENH Describe lexicon ordering for ties * DOC Better docstring * STY Fix * CLN Error when explicity dropping an infrequent category * STY Grammar Co-authored-by: Joel Nothman <[email protected]> Co-authored-by: Roman Yurchak <[email protected]> Co-authored-by: Guillaume Lemaitre <[email protected]>
https://github.com/scikit-learn/scikit-learn.git
def test_ohe_infrequent_mixed():
    # X[:, 0] 1 and 2 are infrequent
    # X[:, 1] nothing is infrequent
    X = np.c_[[0, 1, 3, 3, 3, 3, 2, 0, 3], [0, 0, 0, 0, 1, 1, 1, 1, 1]]
    ohe = OneHotEncoder(max_categories=3, drop="if_binary", sparse=False)
    ohe.fit(X)

    X_test = [[3, 0], [1, 1]]
    X_trans = ohe.transform(X_test)

    # feature 1 is binary so it drops a category 0
    assert_allclose(X_trans, [[0, 1, 0, 0], [0, 0, 1, 1]])


# TODO(1.2): Remove filterwarning when get_feature_names is removed.
@pytest.mark.filterwarnings("ignore::FutureWarning:sklearn")
@pytest.mark.filterwarnings("ignore::FutureWarning:sklearn")
122
test_encoders.py
Python
sklearn/preprocessing/tests/test_encoders.py
7f0006c8aad1a09621ad19c3db19c3ff0555a183
scikit-learn
1
192,152
86
14
19
350
40
0
115
215
store_model_weights
Implement is_qat in TorchVision (#5299) * Add is_qat support using a method getter * Switch to an internal _fuse_modules * Fix linter. * Pass is_qat=False on PTQ * Fix bug on ra_sampler flag. * Set is_qat=True for QAT
https://github.com/pytorch/vision.git
def store_model_weights(model, checkpoint_path, checkpoint_key="model", strict=True):
    # Store the new model next to the checkpoint_path
    checkpoint_path = os.path.abspath(checkpoint_path)
    output_dir = os.path.dirname(checkpoint_path)

    # Deep copy to avoid side-effects on the model object.
    model = copy.deepcopy(model)
    checkpoint = torch.load(checkpoint_path, map_location="cpu")

    # Load the weights to the model to validate that everything works
    # and remove unnecessary weights (such as auxiliaries, etc)
    if checkpoint_key == "model_ema":
        del checkpoint[checkpoint_key]["n_averaged"]
        torch.nn.modules.utils.consume_prefix_in_state_dict_if_present(checkpoint[checkpoint_key], "module.")
    model.load_state_dict(checkpoint[checkpoint_key], strict=strict)

    tmp_path = os.path.join(output_dir, str(model.__hash__()))
    torch.save(model.state_dict(), tmp_path)

    sha256_hash = hashlib.sha256()
    with open(tmp_path, "rb") as f:
        # Read and update hash string value in blocks of 4K
        for byte_block in iter(lambda: f.read(4096), b""):
            sha256_hash.update(byte_block)
        hh = sha256_hash.hexdigest()

    output_path = os.path.join(output_dir, "weights-" + str(hh[:8]) + ".pth")
    os.replace(tmp_path, output_path)

    return output_path
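A minimal usage sketch for the helper above; the checkpoint path here is hypothetical and must point to an existing training checkpoint whose state dict matches the model:

import torchvision

model = torchvision.models.resnet50()
# Validates the weights load cleanly, then writes e.g. ".../weights-1a2b3c4d.pth"
# (first 8 hex chars of the SHA-256) next to the input checkpoint.
weights_path = store_model_weights(model, "./checkpoint.pth")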
211
utils.py
Python
references/classification/utils.py
8a16e12f3a7f10d124b26aeb7975cd2bf4a81695
vision
3
198,088
15
11
5
53
6
0
17
37
_imaginary_unit_as_coefficient
add fast path for as_coefficient for imaginary unit
https://github.com/sympy/sympy.git
def _imaginary_unit_as_coefficient(arg):
    if getattr(arg, 'is_real', True):
        return None
    else:
        return arg.as_coefficient(S.ImaginaryUnit)

###############################################################################
########################## TRIGONOMETRIC FUNCTIONS ############################
###############################################################################
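The fast path above skips the comparatively expensive as_coefficient call whenever the argument is known to be real. A small illustration of the underlying behaviour (assuming a standard SymPy install; the helper itself is private to the module):

from sympy import I, S, symbols

x = symbols('x', real=True)
(3 * I).as_coefficient(S.ImaginaryUnit)  # -> 3: non-real argument, coefficient extracted
# For x, is_real is True, so the helper returns None without calling as_coefficient.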
29
trigonometric.py
Python
sympy/functions/elementary/trigonometric.py
91dac5cfd79ca65e7bcffd4af710efea8f7408d1
sympy
2
21,829
8
7
5
25
4
0
8
22
is_kv_sep
Update tomlkit==0.9.2 Used: python -m invoke vendoring.update --package=tomlkit
https://github.com/pypa/pipenv.git
def is_kv_sep(self) -> bool:
    return self in self.KV
14
toml_char.py
Python
pipenv/vendor/tomlkit/toml_char.py
8faa74cdc9da20cfdcc69f5ec29b91112c95b4c9
pipenv
1
27,992
16
12
6
69
5
0
18
68
preprocess_GIF
Better media thumbnails including WebP support (#9988) * Add thumbnail app * Update get_thumbnail_size method and add tests * Add logic for creating thumbnails * Update logic for getting thumbnail * Allow defining format for tumbnail generation * Clear handle_thumbnail views * Add prepare_image_proxy_url method * Use ImageField for user avatar * Allow defining thumbnail format when querying user avatar * Use ImageField for category backgound_image * Use ImageField for Collection backgound_image * Use ImageField for ProductMedia image * Ensure that thumbnails are deleted when category background_image is changed or deleted * Ensure that thumbnails are deleted when collection background_image is changed or deleted * Update product media deleteion task and failing tests * Delete thumbnail from storage when thumbnail objects is deleted * Fix import in product test_bulk_delete * Drop create_thumbnails command * Update Product.thumbnail resolver * Update OrderLine thumbnail resolver * Add missing ADDED_IN_35 and PREVIEW_FEATURE labels * Update account and product signals - ensure the image is deleted from storage * Refactor product_images methods * Add signal for product media image delete * Drop create_thumbnails method and not longer valid settings fields * Clean the ProcessedImage class * Drop versatileimagefield from INSTALLED_APPS * Update changelog * Drop comments from ThumbnailFormat * Add get_image_or_proxy_url method * Apply reiew suggestions - add ThumbnailField and use get_image_or_proxy_ur when it's possible * Update changelog * Replace ADDED_IN_35 with ADDED_IN_36 label * Update changelog Co-authored-by: Marcin Gębala <[email protected]>
https://github.com/saleor/saleor.git
def preprocess_GIF(self, image):
    if "transparency" in image.info:
        save_kwargs = {"transparency": image.info["transparency"]}
    else:
        save_kwargs = {}
    return (image, save_kwargs)
39
utils.py
Python
saleor/thumbnail/utils.py
5d1a36b9aaf408016957db04f86397b2e53c2500
saleor
2
186,359
37
12
13
154
15
0
48
170
_get_vhost_names
Various clean-ups in certbot-apache. Use f-strings. (#9132) * Various clean-ups in certbot-apache. Use f-strings. * Smaller tweaks
https://github.com/certbot/certbot.git
def _get_vhost_names(self, path):
    servername_match = self.parser.find_dir(
        "ServerName", None, start=path, exclude=False)
    serveralias_match = self.parser.find_dir(
        "ServerAlias", None, start=path, exclude=False)

    serveraliases = []
    for alias in serveralias_match:
        serveralias = self.parser.get_arg(alias)
        serveraliases.append(serveralias)

    servername = None
    if servername_match:
        # Get last ServerName as each overwrites the previous
        servername = self.parser.get_arg(servername_match[-1])

    return servername, serveraliases
97
configurator.py
Python
certbot-apache/certbot_apache/_internal/configurator.py
eeca208c8f57304590ac1af80b496e61021aaa45
certbot
3
133,655
16
9
5
64
7
0
21
56
forward
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
https://github.com/ray-project/ray.git
def forward(self, x, sample_theta=False):
    x = self._check_inputs(x)
    theta = self.sample_theta() if sample_theta else self.theta
    scores = x @ theta
    return scores
40
bandit_torch_model.py
Python
rllib/agents/bandit/bandit_torch_model.py
7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
ray
2
108,793
11
11
5
71
9
0
13
64
disconnect
Support multi-figure MultiCursor; prepare improving its signature. Support MultiCursor with Axes spread over different figures. As a consequence, the first parameter of MultiCursor (`canvas`) has become meaningless (requiring the user to pass in `[ax.figure.canvas for ax in axes]` seems pointless); just ignore that argument. While we're at it, also move some parameters of MultiCursor towards being keyword-only, to prepare for a hopefully better signature without the `canvas` parameter at all.
https://github.com/matplotlib/matplotlib.git
def disconnect(self):
    for canvas, info in self._canvas_infos.items():
        for cid in info["cids"]:
            canvas.mpl_disconnect(cid)
        info["cids"].clear()
41
widgets.py
Python
lib/matplotlib/widgets.py
c978fac5c540da07b573fd677bc90224bdbac48f
matplotlib
3
30,925
33
13
10
158
13
0
55
101
get_tree_starting_at
Clean imports to fix test_fetcher (#17531) * Clean imports to fix test_fetcher * Add dependencies printer * Update utils/tests_fetcher.py Co-authored-by: lewtun <[email protected]> * Fix Perceiver import Co-authored-by: lewtun <[email protected]>
https://github.com/huggingface/transformers.git
def get_tree_starting_at(module, edges):
    vertices_seen = [module]
    new_edges = [edge for edge in edges if edge[0] == module and edge[1] != module]
    tree = [module]
    while len(new_edges) > 0:
        tree.append(new_edges)
        final_vertices = list(set(edge[1] for edge in new_edges))
        vertices_seen.extend(final_vertices)
        new_edges = [edge for edge in edges if edge[0] in final_vertices and edge[1] not in vertices_seen]
    return tree
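A small worked example of the traversal above, with hypothetical module names:

edges = [("A", "B"), ("A", "C"), ("B", "D")]
get_tree_starting_at("A", edges)
# -> ["A", [("A", "B"), ("A", "C")], [("B", "D")]]
# Entry 0 is the root; each following entry is the list of edges one hop further out.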
103
tests_fetcher.py
Python
utils/tests_fetcher.py
c4e58cd8bac8f68a1dca0e2e5a2d167111f6a652
transformers
9
101,381
23
12
15
143
17
0
36
92
_get_io_sizes
Bugfix: convert - Gif Writer - Fix non-launch error on Gif Writer - convert plugins - linting - convert/fs_media/preview/queue_manager - typing - Change convert items from dict to Dataclass
https://github.com/deepfakes/faceswap.git
def _get_io_sizes(self) -> Dict[str, int]:
    input_shape = self._model.model.input_shape
    input_shape = [input_shape] if not isinstance(input_shape, list) else input_shape
    output_shape = self._model.model.output_shape
    output_shape = [output_shape] if not isinstance(output_shape, list) else output_shape
    retval = dict(input=input_shape[0][1], output=output_shape[-1][1])
    logger.debug(retval)
    return retval
94
convert.py
Python
scripts/convert.py
1022651eb8a7741014f5d2ec7cbfe882120dfa5f
faceswap
3
294,874
21
12
8
79
11
0
22
94
update_zones
Add overlay options to Tado (#65886) Co-authored-by: Paulus Schoutsen <[email protected]>
https://github.com/home-assistant/core.git
def update_zones(self):
    try:
        zone_states = self.tado.getZoneStates()["zoneStates"]
    except RuntimeError:
        _LOGGER.error("Unable to connect to Tado while updating zones")
        return

    for zone in zone_states:
        self.update_zone(int(zone))
44
__init__.py
Python
homeassistant/components/tado/__init__.py
e76170fbfd691432e51a4e37235a5300cf741749
core
3
100,816
6
6
3
22
4
0
6
20
session_id
Refactoring and TravisCI to Github Actions (#1239) * refactor training * travis to actions
https://github.com/deepfakes/faceswap.git
def session_id(self) -> int:
    return self._session_id
12
model.py
Python
plugins/train/model/_base/model.py
ff6b0209dd5ad57b81b0aca570df7f39a7119bfb
faceswap
1
81,164
17
10
5
76
8
0
17
56
get_delayed_update_fields
Delay update of artifacts and error fields until final job save (#11832) * Delay update of artifacts until final job save Save tracebacks from receptor module to callback object Move receptor traceback check up to be more logical Use new mock_me fixture to avoid DB call with me method Update the special runner message to the delay_update pattern * Move special runner message into post-processing of callback fields
https://github.com/ansible/awx.git
def get_delayed_update_fields(self):
    self.extra_update_fields['emitted_events'] = self.event_ct
    if 'got an unexpected keyword argument' in self.extra_update_fields.get('result_traceback', ''):
        self.delay_update(result_traceback=ANSIBLE_RUNNER_NEEDS_UPDATE_MESSAGE)
    return self.extra_update_fields
42
callback.py
Python
awx/main/tasks/callback.py
452744b67e02823879e722fe574984a2d760ed60
awx
2
22,867
32
15
16
214
10
0
60
239
search_in_toc
VoiceAssistant This is Voice Assistant coded using Python which can do the following: - 1. Speak Text entered by User. 2. Search anything on Google. 3. Search anything on Wikipedia. 4. Read an MS Word(docx) document. 5. Read a book(PDF). 6. Can be used as a Dictator.
https://github.com/geekcomputers/Python.git
def search_in_toc(toc, key, totalpg):
    for i in range(len(toc) - 1):
        topic = toc[i]
        if i != len(toc) - 2:
            if topic[1] == key:
                nexttopic = toc[i + 1]
                return (topic[2], nexttopic[2])
            elif topic[1].lower() == key:
                nexttopic = toc[i + 1]
                return (topic[2], nexttopic[2])
        else:
            if topic[1] == key:
                return (topic[2], totalpg)
            elif topic[1].lower() == key:
                return (topic[2], totalpg)
    return None, None
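A short worked example of the lookup above, using the [level, title, page] entry layout that PDF table-of-contents readers such as PyMuPDF produce (the toc values here are hypothetical):

toc = [[1, "Intro", 1], [1, "Methods", 5], [1, "Results", 9]]
search_in_toc(toc, "intro", totalpg=20)    # -> (1, 5): topic's page, next topic's page
search_in_toc(toc, "unknown", totalpg=20)  # -> (None, None): no title matched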
143
textRead.py
Python
VoiceAssistant/Project_Basic_struct/textRead.py
39c49e07066b2a53e176d555af6a7bf8aabb8a9c
Python
7
122,423
61
12
13
192
34
0
74
145
global_array_to_host_local_array
[JAX] Add RunTimeError to host_local_array_to_global_array PiperOrigin-RevId: 485657586
https://github.com/google/jax.git
def global_array_to_host_local_array(global_inputs, global_mesh, pspecs):
    if not jax.config.jax_array:
        raise RuntimeError(
            "Please enable `jax_array` to use `global_array_to_host_local_array`. "
            "You can use jax.config.update('jax_array', True) or set the "
            "environment variable JAX_ARRAY=1 , or set the `jax_array` boolean "
            "flag to something true-like.")

    def _convert(arr, pspec):
        local_aval = global_mesh._global_to_local(
            pxla._get_array_mapping(pspec), arr.aval)
        return array.ArrayImpl(
            local_aval,
            jax.sharding.MeshPspecSharding(global_mesh.local_mesh, pspec),
            arr._arrays, committed=True)

    flattened_inps, out_tree = tree_flatten(global_inputs)
    out_pspecs = pjit_lib.flatten_axis_resources(
        'output pspecs', out_tree, pspecs, tupled_args=True)
    out = tree_map(_convert, tuple(flattened_inps), out_pspecs)
    return tree_unflatten(out_tree, out)
72
multihost_utils.py
Python
jax/experimental/multihost_utils.py
b467feb250c8920850118ebec19e6eff4634d5f9
jax
2
101,624
6
6
3
26
5
0
6
20
bin_names
Overhaul sort: - Standardize image data reading and writing - Optimize loading (just one pass required) - Make all sort groups binnable (to greater or lesser results) - Add sort by pitch - Deprecate multiple options - linting, docs + locales
https://github.com/deepfakes/faceswap.git
def bin_names(self) -> List[str]:
    return self._bin_names
15
sort_methods.py
Python
tools/sort/sort_methods.py
98d01760e469fd2108eed8d0b0a1ba6297c3177c
faceswap
1
81,239
33
12
6
111
11
0
38
64
to_host_path
Only do substitutions for container path conversions with resolved paths (#12313) * Resolve paths as much as possible before doing replacements * Move unused method out of main code, test symlink
https://github.com/ansible/awx.git
def to_host_path(path, private_data_dir):
    if not os.path.isabs(private_data_dir):
        raise RuntimeError('The private_data_dir path must be absolute')
    if CONTAINER_ROOT != path and Path(CONTAINER_ROOT) not in Path(path).resolve().parents:
        raise RuntimeError(f'Cannot convert path {path} unless it is a subdir of {CONTAINER_ROOT}')
    return path.replace(CONTAINER_ROOT, private_data_dir, 1)
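A quick illustration of the conversion above, assuming CONTAINER_ROOT is the in-container execution root (shown here as '/runner'; the actual constant is defined elsewhere in the module, and the directory names are hypothetical):

# CONTAINER_ROOT = '/runner'  (assumed)
to_host_path('/runner/project/playbook.yml', '/tmp/pdd_abc123')
# -> '/tmp/pdd_abc123/project/playbook.yml'
# Only the leading CONTAINER_ROOT prefix is substituted, exactly once.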
63
test_tasks.py
Python
awx/main/tests/unit/test_tasks.py
4543f6935f4d2e00af20fff6a973cd4e3fa61524
awx
4
267,795
31
12
7
93
11
0
38
95
completions
ansible-test - Use more native type hints. (#78435) * ansible-test - Use more native type hints. Simple search and replace to switch from comments to native type hints for return types of functions with no arguments. * ansible-test - Use more native type hints. Conversion of simple single-line function annotation type comments to native type hints. * ansible-test - Use more native type hints. Conversion of single-line function annotation type comments with default values to native type hints. * ansible-test - Use more native type hints. Manual conversion of type annotation comments for functions which have pylint directives.
https://github.com/ansible/ansible.git
def completions(self) -> t.List[str]:
    completions = self.matches
    continuation = '' if self.list_mode else self.continuation

    if not self.preserve:
        # include the existing prefix to avoid rewriting the word undergoing completion
        completions = [f'{self.consumed}{completion}{continuation}' for completion in completions]

    return completions
47
parsers.py
Python
test/lib/ansible_test/_internal/cli/argparsing/parsers.py
3eb0485dd92c88cc92152d3656d94492db44b183
ansible
4
95,947
26
14
13
173
13
0
53
176
test_quantize_time_jitter
fix(sessions): Prevent query mutation behavior in `_prepare_query_params` (#31422) * fix(sessions): Prevent query mutation behavior in `_prepare_query_params` Fixes the behavior of `_prepare_query_params` that mutates the conditions passed from the query. * Add test that validates the change
https://github.com/getsentry/sentry.git
def test_quantize_time_jitter(self):
    i = j = None
    starting_key = quantize_time(self.now, 0, duration=10)
    for i in range(11):
        current_key = quantize_time(self.now + timedelta(seconds=i), 0, duration=10)
        if current_key != starting_key:
            break

    other_key = quantize_time(self.now, 5, duration=10)
    for j in range(11):
        current_key = quantize_time(self.now + timedelta(seconds=j), 5, duration=10)
        if current_key != other_key:
            break

    assert i != j
113
test_snuba.py
Python
tests/sentry/utils/test_snuba.py
51403cc4c85c9c595a3b2d0ab5c2c1c4e33a3a1e
sentry
5
105,528
5
6
19
19
5
0
5
8
_parse_parallel_sentences
Fix: wmt datasets - fix CWMT zh subsets (#4871) fix cwmt zh subsets
https://github.com/huggingface/datasets.git
def _parse_parallel_sentences(f1, f2, filename1, filename2):
178
wmt_utils.py
Python
datasets/wmt14/wmt_utils.py
54b532a8a2f5353fdb0207578162153f7b2da2ec
datasets
5
247,241
11
10
5
65
8
0
11
43
test_join_local_ratelimit
Add type hints to `tests/rest/client` (#12108) * Add type hints to `tests/rest/client` * newsfile * fix imports * add `test_account.py` * Remove one type hint in `test_report_event.py` * change `on_create_room` to `async` * update new functions in `test_third_party_rules.py` * Add `test_filter.py` * add `test_rooms.py` * change to `assertEquals` to `assertEqual` * lint
https://github.com/matrix-org/synapse.git
def test_join_local_ratelimit(self) -> None:
    for _ in range(3):
        self.helper.create_room_as(self.user_id)

    self.helper.create_room_as(self.user_id, expect_code=429)
40
test_rooms.py
Python
tests/rest/client/test_rooms.py
2ffaf30803f93273a4d8a65c9e6c3110c8433488
synapse
2
91,255
33
12
16
166
14
0
46
202
_get_crash_rate_alert_metrics_aggregation_value_v2
fix(cra-metrics): Count all users in metrics alerts (#34957) Use conditional aggregates in order to get both the total user count and the number of crashed users in the same snuba query. To maintain compatibility until existing subscriptions have been migrated, make the subscription processor able to handle both the old and the new format. The actual migration of existing subscriptions will be in a separate PR.
https://github.com/getsentry/sentry.git
def _get_crash_rate_alert_metrics_aggregation_value_v2(self, subscription_update):
    row = subscription_update["values"]["data"][0]
    total_session_count = row["count"]
    crash_count = row["crashed"]

    if total_session_count == 0:
        self.reset_trigger_counts()
        metrics.incr("incidents.alert_rules.ignore_update_no_session_data")
        return

    if CRASH_RATE_ALERT_MINIMUM_THRESHOLD is not None:
        min_threshold = int(CRASH_RATE_ALERT_MINIMUM_THRESHOLD)
        if total_session_count < min_threshold:
            self.reset_trigger_counts()
            metrics.incr("incidents.alert_rules.ignore_update_count_lower_than_min_threshold")
            return

    aggregation_value = round((1 - crash_count / total_session_count) * 100, 3)

    return aggregation_value
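The aggregation value above is the crash-free percentage derived from the two conditional aggregates the commit describes. A worked example with hypothetical counts:

row = {"count": 200, "crashed": 3}
round((1 - row["crashed"] / row["count"]) * 100, 3)  # -> 98.5 (% of sessions crash-free)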
96
subscription_processor.py
Python
src/sentry/incidents/subscription_processor.py
65f43fd4e0f1821b468547fc08136bbad9cd8446
sentry
4
243,958
61
9
9
132
18
0
76
158
_do_evaluate
[Fix] cannot to save the best checkpoint when the key_score is None (#7101)
https://github.com/open-mmlab/mmdetection.git
def _do_evaluate(self, runner):
    if not self._should_evaluate(runner):
        return

    from mmdet.apis import single_gpu_test

    results = single_gpu_test(runner.model, self.dataloader, show=False)
    runner.log_buffer.output['eval_iter_num'] = len(self.dataloader)
    key_score = self.evaluate(runner, results)
    # the key_score may be `None` so it needs to skip the action to save
    # the best checkpoint
    if self.save_best and key_score:
        self._save_ckpt(runner, key_score)


# Note: Considering that MMCV's EvalHook updated its interface in V1.3.16,
# in order to avoid strong version dependency, we did not directly
# inherit EvalHook but BaseDistEvalHook.
80
eval_hooks.py
Python
mmdet/core/evaluation/eval_hooks.py
82e5cce9ad550571f3a1c55c29203c09682a0079
mmdetection
4
101,266
49
16
17
340
34
0
57
413
_show_mesh
lib.align updates: - alignments.py - Add typed dicts for imported alignments - Explicitly check for presence of thumb value in alignments dict - linting - detected_face.py - Typing - Linting - Legacy support for pre-aligned face - Update dependencies to new property names
https://github.com/deepfakes/faceswap.git
def _show_mesh(self, mesh_ids, face_index, detected_face, top_left):
    state = "normal" if (self._tk_vars["selected_editor"].get() != "Mask"
                         or self._optional_annotations["mesh"]) else "hidden"
    kwargs = dict(polygon=dict(fill="", width=2, outline=self._canvas.control_colors["Mesh"]),
                  line=dict(fill=self._canvas.control_colors["Mesh"], width=2))

    edited = (self._tk_vars["edited"].get()
              and self._tk_vars["selected_editor"].get() not in ("Mask", "View"))
    landmarks = self._viewport.get_landmarks(self.frame_index, face_index, detected_face,
                                             top_left, edited)
    for key, kwarg in kwargs.items():
        for idx, mesh_id in enumerate(mesh_ids[key]):
            self._canvas.coords(mesh_id, *landmarks[key][idx].flatten())
            self._canvas.itemconfig(mesh_id, state=state, **kwarg)
            self._canvas.addtag_withtag(f"active_mesh_{key}", mesh_id)
212
viewport.py
Python
tools/manual/faceviewer/viewport.py
5e73437be47f2410439a3c6716de96354e6a0c94
faceswap
6
189,920
3
9
2
31
5
0
3
17
_update_colors
Fix :meth:`BarChart.change_bar_values` not updating when height is 0. (#2638) * Refactor bar creation into a separate function The function is _create_bar. * Add insert methods for OpenGLMobject/Mobject * Add self._update_colors, improve docs and fix bug Create temporary bar, replace the previous location in the self.bars vgroup with the new bar. Then re-initalize the colours depending on a flag. Also refactor out colour-setting to a method * Apply black and isort * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix flake8 error, double '#' before comment. * Refactor bar creation into a separate function The function is _create_bar. * Add insert methods for OpenGLMobject/Mobject * Add self._update_colors, improve docs and fix bug Create temporary bar, replace the previous location in the self.bars vgroup with the new bar. Then re-initalize the colours depending on a flag. Also refactor out colour-setting to a method * Apply black and isort * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix flake8 error, double '#' before comment. * Implement None-initialization suggestion Removed setting self.bars in `BarChart._add_bars` * Change `temp` to `tmp` and use Type * Use `type` instead of `Type`. - Also, fix setting config.frame_width in __init__, since this causes BarChart's `y_length` to not be re-evaluated if the dimensions of the scene are changed. * Force `bar_colors` to be a list via deprecation - Also update params where `len` is taken to be a Sequence, not iterable * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix docs / some typing, remove try/except - MutableSequence instead of Sequence for self.values, since BarChart.change_bar_values adjusts the value of the sequence, requiring __get_item__. - Use try/except based on reviewer's reccomendation. * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Tristan Schulz <[email protected]>
https://github.com/ManimCommunity/manim.git
def _update_colors(self):
    self.bars.set_color_by_gradient(*self.bar_colors)
17
probability.py
Python
manim/mobject/graphing/probability.py
4cb43a2dc8d871ec8c8af0645eba5da00d3b2d4f
manim
1
125,824
41
12
9
196
20
0
48
75
test_one_hot_encoder_with_max_categories
[AIR] Rename `limit` parameter as `max_categories` (#26977)
https://github.com/ray-project/ray.git
def test_one_hot_encoder_with_max_categories():
    col_a = ["red", "green", "blue", "red"]
    col_b = ["warm", "cold", "hot", "cold"]
    col_c = [1, 10, 5, 10]

    in_df = pd.DataFrame.from_dict({"A": col_a, "B": col_b, "C": col_c})
    ds = ray.data.from_pandas(in_df)

    encoder = OneHotEncoder(["B", "C"], max_categories={"B": 2})
    ds_out = encoder.fit_transform(ds)
    assert len(ds_out.to_pandas().columns) == 1 + 2 + 3
113
test_preprocessors.py
Python
python/ray/data/tests/test_preprocessors.py
55988992b9814352150d253eaedcdfe73c96dc41
ray
1
310,282
8
7
3
25
4
0
8
22
is_on
Add siren platform to devolo Home Control (#53400) * Rework mocking * Add siren platform * Rebase with dev * React on change of default tone * Fix linting error
https://github.com/home-assistant/core.git
def is_on(self) -> bool:
    return self._value != 0
14
siren.py
Python
homeassistant/components/devolo_home_control/siren.py
144371d8437b7288288c4c421804f1fc52e500fd
core
1
244,084
18
13
7
106
12
0
21
54
get_uncertainty
[Enhance] Take point sample related functions out of mask_point_head (#7353) add point sample replace function in mask_point_head
https://github.com/open-mmlab/mmdetection.git
def get_uncertainty(mask_pred, labels):
    if mask_pred.shape[1] == 1:
        gt_class_logits = mask_pred.clone()
    else:
        inds = torch.arange(mask_pred.shape[0], device=mask_pred.device)
        gt_class_logits = mask_pred[inds, labels].unsqueeze(1)
    return -torch.abs(gt_class_logits)
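A quick shape check for the helper above; the (num_rois, num_classes, num_points) layout is assumed from its use in point sampling:

import torch

mask_pred = torch.randn(4, 3, 7)        # 4 RoIs, 3 classes, 7 sampled points (assumed layout)
labels = torch.tensor([0, 2, 1, 0])     # ground-truth class per RoI
u = get_uncertainty(mask_pred, labels)  # shape (4, 1, 7)
# Values are -|logit|, so scores nearest 0 mark the most uncertain points.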
67
point_sample.py
Python
mmdet/models/utils/point_sample.py
c576e5d570bf64a99e2c6817ed7b5c0084a44a55
mmdetection
2
176,536
97
13
206
478
30
0
161
523
draw_networkx
Change default value of arrowstyle for undirected graphs (#5514) * Change default value of arrowstyle for undirected graphs * Update networkx/drawing/nx_pylab.py Co-authored-by: Jarrod Millman <[email protected]> Co-authored-by: Jarrod Millman <[email protected]>
https://github.com/networkx/networkx.git
def draw_networkx(G, pos=None, arrows=None, with_labels=True, **kwds):
    import matplotlib.pyplot as plt

    valid_node_kwds = (
        "nodelist", "node_size", "node_color", "node_shape", "alpha",
        "cmap", "vmin", "vmax", "ax", "linewidths", "edgecolors", "label",
    )
    valid_edge_kwds = (
        "edgelist", "width", "edge_color", "style", "alpha", "arrowstyle",
        "arrowsize", "edge_cmap", "edge_vmin", "edge_vmax", "ax", "label",
        "node_size", "nodelist", "node_shape", "connectionstyle",
        "min_source_margin", "min_target_margin",
    )
    valid_label_kwds = (
        "labels", "font_size", "font_color", "font_family", "font_weight",
        "alpha", "bbox", "ax", "horizontalalignment", "verticalalignment",
    )

    valid_kwds = valid_node_kwds + valid_edge_kwds + valid_label_kwds

    if any([k not in valid_kwds for k in kwds]):
        invalid_args = ", ".join([k for k in kwds if k not in valid_kwds])
        raise ValueError(f"Received invalid argument(s): {invalid_args}")

    node_kwds = {k: v for k, v in kwds.items() if k in valid_node_kwds}
    edge_kwds = {k: v for k, v in kwds.items() if k in valid_edge_kwds}
    label_kwds = {k: v for k, v in kwds.items() if k in valid_label_kwds}

    if pos is None:
        pos = nx.drawing.spring_layout(G)  # default to spring layout

    draw_networkx_nodes(G, pos, **node_kwds)
    draw_networkx_edges(G, pos, arrows=arrows, **edge_kwds)
    if with_labels:
        draw_networkx_labels(G, pos, **label_kwds)
    plt.draw_if_interactive()
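A minimal call through the public networkx API for the function above; with the commit's change, an undirected graph gets no arrowheads unless arrows is set explicitly:

import networkx as nx
import matplotlib.pyplot as plt

G = nx.karate_club_graph()                      # undirected sample graph
nx.draw_networkx(G, node_color="lightblue")     # arrows=None -> no arrowheads here
plt.show()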
284
nx_pylab.py
Python
networkx/drawing/nx_pylab.py
7c6524c9e9f91d2e0ecd18389677f6f43651411d
networkx
13
126,725
22
12
7
123
18
0
26
87
resolve_async_tasks
[Serve] Use Async Handle for DAG Execution (#27411)
https://github.com/ray-project/ray.git
async def resolve_async_tasks(self):
    scanner = _PyObjScanner(source_type=asyncio.Task)
    tasks = scanner.find_nodes((self.args, self.kwargs))
    if len(tasks) > 0:
        resolved = await asyncio.gather(*tasks)
        replacement_table = dict(zip(tasks, resolved))
        self.args, self.kwargs = scanner.replace_nodes(replacement_table)
75
router.py
Python
python/ray/serve/_private/router.py
efee158cecc49fbec5527e75e17faadba4fac48d
ray
2
137,985
22
15
14
115
18
0
23
145
test_gymnasium_old_api_but_wrapped
[RLlib] gymnasium support (new `Env.reset()/step()/seed()/render()` APIs). (#28369)
https://github.com/ray-project/ray.git
def test_gymnasium_old_api_but_wrapped(self):
    from gymnasium.wrappers import EnvCompatibility

    register_env(
        "test",
        lambda env_ctx: EnvCompatibility(GymnasiumOldAPI(env_ctx)),
    )
    algo = (
        PPOConfig()
        .environment(env="test")
        .rollouts(num_envs_per_worker=2, num_rollout_workers=2)
        .build()
    )
    algo.train()
    algo.stop()
67
test_gym_env_apis.py
Python
rllib/tests/backward_compat/test_gym_env_apis.py
8e680c483ce326cefc62e44f68ab1a6948b1c3d2
ray
1
293,173
19
12
6
75
6
0
24
74
get_suggested
Add config flow for switch.light (#67447) * Add config flow for switch.light * Refactor according to code review * Setup light switch from config entry * Improve async_resolve_entity * Prepare for multiple steps * Remove name and options flow from switch light * Check type before adding description to schema keys * Remove options flow enabler * Copy name from the switch * Move helper flows to new file * Improve test coverage * Fix name * Remove dead code from abstract method * Remove manifest 'helper' option * Validate registry entry id before forwarding to light platform * Improve test * Add translations * Improve config entry setup * Log when config entry fails setup * Update homeassistant/components/switch/__init__.py Co-authored-by: Paulus Schoutsen <[email protected]> Co-authored-by: Paulus Schoutsen <[email protected]>
https://github.com/home-assistant/core.git
def get_suggested(schema, key):
    for k in schema.keys():
        if k == key:
            if k.description is None or "suggested_value" not in k.description:
                return None
            return k.description["suggested_value"]
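An illustration of the lookup above with a voluptuous schema dict, assuming voluptuous markers compare equal to their key string (as they do in current releases); the key and suggested value are hypothetical:

import voluptuous as vol

schema = {
    vol.Optional("name", description={"suggested_value": "Kitchen"}): str,
}
get_suggested(schema, "name")   # -> "Kitchen"
get_suggested(schema, "other")  # no marker matches; falls through -> None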
45
test_config_flow.py
Python
tests/components/switch/test_config_flow.py
0c129145486764ca481d167ccdd97c97f775c200
core
5
294,633
29
13
11
123
16
1
31
95
test_form_no_route_to_host
Generic IP Camera configflow 2 (#52360) Co-authored-by: J. Nick Koston <[email protected]>
https://github.com/home-assistant/core.git
async def test_form_no_route_to_host(hass, fakeimg_png, user_flow):
    with patch(
        "homeassistant.components.generic.config_flow.av.open",
        side_effect=OSError(errno.EHOSTUNREACH, "No route to host"),
    ):
        result2 = await hass.config_entries.flow.async_configure(
            user_flow["flow_id"],
            TESTDATA,
        )
    assert result2["type"] == "form"
    assert result2["errors"] == {"stream_source": "stream_no_route_to_host"}


@respx.mock
@respx.mock
65
test_config_flow.py
Python
tests/components/generic/test_config_flow.py
c1a2be72fc8b76b55cfde1823c5688100e397369
core
1
101,218
13
13
4
73
9
0
14
42
frame_has_faces
lib.align updates: - alignments.py - Add typed dicts for imported alignments - Explicitly check for presence of thumb value in alignments dict - linting - detected_face.py - Typing - Linting - Legacy support for pre-aligned face - Update dependencies to new property names
https://github.com/deepfakes/faceswap.git
def frame_has_faces(self, frame_name):
    retval = bool(self._data.get(frame_name, {}).get("faces", []))
    logger.trace("'%s': %s", frame_name, retval)
    return retval
44
alignments.py
Python
lib/align/alignments.py
5e73437be47f2410439a3c6716de96354e6a0c94
faceswap
1
186,612
10
7
8
50
7
0
10
24
parsing_hooks
Fully type certbot-nginx module (#9124) * Work in progress * Fix type * Work in progress * Work in progress * Work in progress * Work in progress * Work in progress * Oups. * Fix typing in UnspacedList * Fix logic * Finish typing * List certbot-nginx as fully typed in tox * Fix lint * Fix checks * Organize imports * Fix typing for Python 3.6 * Fix checks * Fix lint * Update certbot-nginx/certbot_nginx/_internal/configurator.py Co-authored-by: alexzorin <[email protected]> * Update certbot-nginx/certbot_nginx/_internal/configurator.py Co-authored-by: alexzorin <[email protected]> * Fix signature of deploy_cert regarding the installer interface * Update certbot-nginx/certbot_nginx/_internal/obj.py Co-authored-by: alexzorin <[email protected]> * Fix types * Update certbot-nginx/certbot_nginx/_internal/parser.py Co-authored-by: alexzorin <[email protected]> * Precise type * Precise _coerce possible inputs/outputs * Fix type * Update certbot-nginx/certbot_nginx/_internal/http_01.py Co-authored-by: ohemorange <[email protected]> * Fix type * Remove an undesirable implementation. * Fix type Co-authored-by: alexzorin <[email protected]> Co-authored-by: ohemorange <[email protected]>
https://github.com/certbot/certbot.git
def parsing_hooks(cls) -> Tuple[Type["Block"], Type["Sentence"], Type["Statements"]]:
    return Block, Sentence, Statements
30
parser_obj.py
Python
certbot-nginx/certbot_nginx/_internal/parser_obj.py
16aad35d31a887dab157f9d4f5e0fe9218d06064
certbot
1
247,207
34
11
25
227
22
0
45
303
test_delete_email
Add type hints to `tests/rest/client` (#12108) * Add type hints to `tests/rest/client` * newsfile * fix imports * add `test_account.py` * Remove one type hint in `test_report_event.py` * change `on_create_room` to `async` * update new functions in `test_third_party_rules.py` * Add `test_filter.py` * add `test_rooms.py` * change to `assertEquals` to `assertEqual` * lint
https://github.com/matrix-org/synapse.git
def test_delete_email(self) -> None:
    # Add a threepid
    self.get_success(
        self.store.user_add_threepid(
            user_id=self.user_id,
            medium="email",
            address=self.email,
            validated_at=0,
            added_at=0,
        )
    )

    channel = self.make_request(
        "POST",
        b"account/3pid/delete",
        {"medium": "email", "address": self.email},
        access_token=self.user_id_tok,
    )
    self.assertEqual(200, channel.code, msg=channel.result["body"])

    # Get user
    channel = self.make_request(
        "GET",
        self.url_3pid,
        access_token=self.user_id_tok,
    )
    self.assertEqual(200, channel.code, msg=channel.result["body"])
    self.assertFalse(channel.json_body["threepids"])
142
test_account.py
Python
tests/rest/client/test_account.py
2ffaf30803f93273a4d8a65c9e6c3110c8433488
synapse
1
261,904
77
18
26
344
23
0
112
298
split_dataset
Fix the bug in split dataset function (#1251) * Fix the bug in split_dataset * Make eval_split_size configurable * Change test_loader to use load_tts_samples function * Change eval_split_portion to eval_split_size and permits to set the absolute number of samples in eval * Fix samplers unit test * Add data unit test on GitHub workflow
https://github.com/coqui-ai/TTS.git
def split_dataset(items, eval_split_max_size=None, eval_split_size=0.01):
    speakers = [item["speaker_name"] for item in items]
    is_multi_speaker = len(set(speakers)) > 1
    if eval_split_size > 1:
        eval_split_size = int(eval_split_size)
    else:
        if eval_split_max_size:
            eval_split_size = min(eval_split_max_size, int(len(items) * eval_split_size))
        else:
            eval_split_size = int(len(items) * eval_split_size)
    assert eval_split_size > 0, " [!] You do not have enough samples for the evaluation set. You can work around this setting the 'eval_split_size' parameter to a minimum of {}".format(1 / len(items))
    np.random.seed(0)
    np.random.shuffle(items)
    if is_multi_speaker:
        items_eval = []
        speakers = [item["speaker_name"] for item in items]
        speaker_counter = Counter(speakers)
        while len(items_eval) < eval_split_size:
            item_idx = np.random.randint(0, len(items))
            speaker_to_be_removed = items[item_idx]["speaker_name"]
            if speaker_counter[speaker_to_be_removed] > 1:
                items_eval.append(items[item_idx])
                speaker_counter[speaker_to_be_removed] -= 1
                del items[item_idx]
        return items_eval, items
    return items[:eval_split_size], items[eval_split_size:]
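A small usage sketch for the splitter above. With single-speaker data the simple tail split is taken; the sample dicts are hypothetical. A value of eval_split_size greater than 1 would instead be read as an absolute number of eval samples:

items = [{"text": f"utt-{i}", "speaker_name": "spk1"} for i in range(100)]
eval_items, train_items = split_dataset(items, eval_split_size=0.1)
len(eval_items), len(train_items)  # -> (10, 90)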
217
__init__.py
Python
TTS/tts/datasets/__init__.py
28a746497560dc3f1f3415827ef38d7d9d72dbbf
TTS
8
23,119
44
14
23
287
29
0
63
340
newShape
Code formate, Make it more standardized. Delete useless file.
https://github.com/PaddlePaddle/PaddleOCR.git
def newShape(self, value=True):
    if len(self.labelHist) > 0:
        self.labelDialog = LabelDialog(
            parent=self, listItem=self.labelHist)

    if value:
        text = self.labelDialog.popUp(text=self.prevLabelText)
        self.lastLabel = text
    else:
        text = self.prevLabelText

    if text is not None:
        self.prevLabelText = self.stringBundle.getString('tempLabel')
        # generate_color = generateColorByText(text)
        shape = self.canvas.setLastLabel(text, None, None)  # generate_color, generate_color
        self.addLabel(shape)
        if self.beginner():  # Switch to edit mode.
            self.canvas.setEditing(True)
            self.actions.create.setEnabled(True)
            self.actions.undoLastPoint.setEnabled(False)
            self.actions.undo.setEnabled(True)
        else:
            self.actions.editMode.setEnabled(True)
        self.setDirty()
    else:
        # self.canvas.undoLastLine()
        self.canvas.resetAllLines()
174
PPOCRLabel.py
Python
PPOCRLabel/PPOCRLabel.py
5fbe11905fa37407f6436200a6dacf46faf7138c
PaddleOCR
5
288,708
5
8
2
41
7
1
5
10
mock_api_get_fixture
Move AirNow test fixtures to `conftest.py` (#79902) * Move AirNow test fixtures to `conftest.py` * Unnecessary fixture * Better * Linting
https://github.com/home-assistant/core.git
def mock_api_get_fixture(data):
    return AsyncMock(return_value=data)


@pytest.fixture(name="setup_airnow")
@pytest.fixture(name="setup_airnow")
13
conftest.py
Python
tests/components/airnow/conftest.py
8471a71b60de3fa5974a8871e359ace96ade82f3
core
1
111,793
12
9
4
53
9
0
13
45
_resample
Lightning implementation for retiarii oneshot nas (#4479)
https://github.com/microsoft/nni.git
def _resample(self):
    result = self.controller.resample()
    for name, module in self.nas_modules:
        module.sampled = result[name]
32
sampling.py
Python
nni/retiarii/oneshot/pytorch/sampling.py
8b2eb425274cdb4537fbce4a315aec12a378d6db
nni
2
127,998
112
23
60
465
43
0
186
1,257
to_dict
[AIR] Maintain checkpoint type information during serialization (#28387) These changes are needed to fix the errors described in #28134. Signed-off-by: Balaji Veeramani <[email protected]> Co-authored-by: Amog Kamsetty <[email protected]>
https://github.com/ray-project/ray.git
def to_dict(self) -> dict:
    if self._data_dict:
        # If the checkpoint data is already a dict, return
        checkpoint_data = self._data_dict
    elif self._obj_ref:
        # If the checkpoint data is an object reference, resolve
        checkpoint_data = ray.get(self._obj_ref)
    elif self._local_path or self._uri:
        # Else, checkpoint is either on FS or external storage
        with self.as_directory() as local_path:
            checkpoint_data_path = os.path.join(
                local_path, _DICT_CHECKPOINT_FILE_NAME
            )
            if os.path.exists(checkpoint_data_path):
                # If we are restoring a dict checkpoint, load the dict
                # from the checkpoint file.
                with open(checkpoint_data_path, "rb") as f:
                    checkpoint_data = pickle.load(f)

                # If there are additional files in the directory, add them as
                # _DICT_CHECKPOINT_ADDITIONAL_FILE_KEY
                additional_files = {}
                for file_or_dir in os.listdir(local_path):
                    if file_or_dir in [
                        ".",
                        "..",
                        _DICT_CHECKPOINT_FILE_NAME,
                        _CHECKPOINT_METADATA_FILE_NAME,
                    ]:
                        continue
                    additional_files[file_or_dir] = _pack(
                        os.path.join(local_path, file_or_dir)
                    )

                if additional_files:
                    checkpoint_data[
                        _DICT_CHECKPOINT_ADDITIONAL_FILE_KEY
                    ] = additional_files
            else:
                files = [
                    f
                    for f in os.listdir(local_path)
                    if os.path.isfile(os.path.join(local_path, f))
                    and f.endswith(_METADATA_CHECKPOINT_SUFFIX)
                ]
                metadata = {}
                for file in files:
                    with open(os.path.join(local_path, file), "rb") as f:
                        key = file[: -len(_METADATA_CHECKPOINT_SUFFIX)]
                        value = pickle.load(f)
                        metadata[key] = value
                data = _pack(local_path)

                checkpoint_data = {
                    _FS_CHECKPOINT_KEY: data,
                }
                checkpoint_data.update(metadata)
    else:
        raise RuntimeError(f"Empty data for checkpoint {self}")

    return _CheckpointDict(checkpoint_data, metadata=self._metadata)
280
checkpoint.py
Python
python/ray/air/checkpoint.py
5034544d5df5d7d9596b261d7bdffdd28e76fe2b
ray
13
31,031
7
7
4
38
5
0
9
37
as_target_processor
M-CTC-T Model (#16402) * added cbs to notebooks, made copy-paste error fix in generation_utils * initial push for mctc model * mctc feature extractor done * added processor, tokenizer and their tests for MCTC. Have added an MCTC modeling test, adjusting model code accordingly. * added processor, tokenizer and their tests for MCTC. Have added an MCTC modeling test, adjusting model code accordingly. * passing attention, now struggling to figure out how attention masks make sense here * works when excluding attention masks. ask later how one would integrate attention maskshere * bizarre configuration error (model prefix comes first in config dict json and messes up the order) * all passing but bizzarre config dict ordering issue when to_dict * passing all major tests * feature extraction, processor, tokenizer added & tests passing * style & consistency & other logistical fixes * copy paste fix * model after feature extraction working * commiting final feature extraction results; need to fix normalization * feature extraction passing tests; probably should add tests on the specific flashlight-copied functions? * delete print ; format code a bit * fixing tests * passing major tests * fixing styles * completed tokenization test with real example; not sure if these values are entirely correct. * last test fixes from local * reverting accidentally included custom setup configs * remove load tf weights; fix config error * testing couldnt import featureextractor * fix docs * fix docs * resolving comments * style fixes * style fixes * Update to MCTCConv1dSubSampler Co-authored-by: Patrick von Platen <[email protected]> * relposemb fixes * conv1d name issue; expecting config fail with paraentheses * fix config issue * fix config issue * fix config issue * change everything to MCTCT * fixing naming change errors * archive list * copyrights and docs * copyrights and docs * copyrights and docs * merge resolution * move tests, fix to changed optionaldependency structure * test directories changed * fixing tests * how to avoid tf tests? * how to avoid tf tests? * tests passing locally * allow mctctprocessor imported any env * allow mctctprocessor imported any env * fixed second round of feedback, need to fix docs * doc changes not being applied * all fixed * style fix * feedback fixes * fix copies and feature extraction style fix * Update tests/models/visual_bert/test_modeling_visual_bert.py Co-authored-by: Sylvain Gugger <[email protected]> * copy paste huggingface:main visual bert * added eof newline to visual bert; all tests are passing otherwise * fix slow tests by adding attention mask * change model id to speechbrain * make fix-copies * fix readme unwanted deletes * fixing readmes, make fix-copies * consistent M-CTC-T naming * Update src/transformers/models/mctct/__init__.py Co-authored-by: Patrick von Platen <[email protected]> * all fixed but variable naming * adjust double quotes * fixed variable names * copyright and mr quilter * Apply suggestions from code review Co-authored-by: Sylvain Gugger <[email protected]> * correct slow tests * make fix-copies * Update src/transformers/models/mctct/configuration_mctct.py Co-authored-by: Sylvain Gugger <[email protected]> * Update src/transformers/models/mctct/configuration_mctct.py Co-authored-by: Sylvain Gugger <[email protected]> * m-ctc-t not mctct Co-authored-by: Patrick von Platen <[email protected]> Co-authored-by: Sylvain Gugger <[email protected]>
https://github.com/huggingface/transformers.git
def as_target_processor(self):
    self.current_processor = self.tokenizer
    yield
    self.current_processor = self.feature_extractor
21
processing_mctct.py
Python
src/transformers/models/mctct/processing_mctct.py
119e3c0fc83db5803d20d0749eef1220f27cfdc8
transformers
1
88,586
27
12
10
200
26
0
35
113
test_get_replays_activity_field
bug(replays): Add activity to valid field set (#41388) closes: https://github.com/getsentry/replay-backend/issues/204
https://github.com/getsentry/sentry.git
def test_get_replays_activity_field(self):
    project = self.create_project(teams=[self.team])

    replay1_id = uuid.uuid4().hex
    seq1_timestamp = datetime.datetime.now() - datetime.timedelta(seconds=22)
    seq2_timestamp = datetime.datetime.now() - datetime.timedelta(seconds=5)

    self.store_replays(mock_replay(seq1_timestamp, project.id, replay1_id))
    self.store_replays(mock_replay(seq2_timestamp, project.id, replay1_id))

    with self.feature(REPLAYS_FEATURES):
        response = self.client.get(self.url + "?field=activity")
        assert response.status_code == 200
123
test_organization_replay_index.py
Python
tests/sentry/replays/test_organization_replay_index.py
212f58bf21b08b9c8bb8b231307f1790591cfaf9
sentry
1
250,121
7
8
6
36
3
0
7
21
test_request_from_getPeer
Require types in tests.storage. (#14646) Adds missing type hints to `tests.storage` package and does not allow untyped definitions.
https://github.com/matrix-org/synapse.git
def test_request_from_getPeer(self) -> None:
    self._runtest({}, "127.0.0.1", {})
20
test_client_ips.py
Python
tests/storage/test_client_ips.py
3ac412b4e2f8c5ba11dc962b8a9d871c1efdce9b
synapse
1
68,832
27
12
7
105
12
0
32
25
is_negative_with_precision
fix: Respect system precision for user facing balance qty values (#30837) * fix: Respect system precision for user facing balance qty values - `get_precision` -> `set_precision` - Use system wide currency precision for `stock_value` - Round off qty deficiency as per user defined precision (system flt precision), so that it is WYSIWYG for users * fix: Consider system precision when validating future negative qty * test: Immediate Negative Qty precision test - Test for Immediate Negative Qty precision - Stock Entry Negative Qty message: Format available qty in system precision - Pass `stock_uom` as configurable option in `make_item` * test: Future Negative Qty validation with precision * fix: Use `get_field_precision` for currency precision as it used to - `get_field_precision` defaults to number format for precision (maintain old behaviour) - Don't pass `currency` to `get_field_precision` as it's not used anymore
https://github.com/frappe/erpnext.git
def is_negative_with_precision(neg_sle, is_batch=False):
    if not neg_sle:
        return False

    field = "cumulative_total" if is_batch else "qty_after_transaction"
    precision = cint(frappe.db.get_default("float_precision")) or 2
    qty_deficit = flt(neg_sle[0][field], precision)

    return qty_deficit < 0 and abs(qty_deficit) > 0.0001
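A worked example of the precision guard above, assuming a running frappe context with the default float precision of 2 (the ledger-entry dicts are hypothetical):

is_negative_with_precision([{"qty_after_transaction": -0.004}])  # -> False: flt(-0.004, 2) rounds to -0.0, rounding noise
is_negative_with_precision([{"qty_after_transaction": -0.5}])    # -> True: -0.5 survives rounding and exceeds the 0.0001 floor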
65
stock_ledger.py
Python
erpnext/stock/stock_ledger.py
d6078aa911a135ab7bfd99c3246c44f992225ce8
erpnext
5
118,509
6
6
3
19
3
0
6
20
clear
st.memo/singleton: cache-specific clear() functionality (#4184) Gives `@st.memo` and `@st.singleton` a per-cache `clear()` function, similar to Python's [functools.lru_cache API](https://docs.python.org/3/library/functools.html#functools.lru_cache). You can use it like this: ```python @st.experimental_memo def foo(val): return val foo(1), foo(2), foo(3) foo.clear() # Clear foo's cache *only* ``` (This PR also bundles in a few very minor cleanups and missing docstrings for memo + singleton).
https://github.com/streamlit/streamlit.git
def clear(self) -> None:
    raise NotImplementedError
10
cache_utils.py
Python
lib/streamlit/caching/cache_utils.py
b7f417f86ed4ca12c522d8ae5c147f932ffc7d40
streamlit
1
37,476
20
12
5
79
9
0
23
42
require_torch_up_to_2_gpus
Update all require decorators to use skipUnless when possible (#16999)
https://github.com/huggingface/transformers.git
def require_torch_up_to_2_gpus(test_case):
    if not is_torch_available():
        return unittest.skip("test requires PyTorch")(test_case)

    import torch

    return unittest.skipUnless(torch.cuda.device_count() < 3, "test requires 0 or 1 or 2 GPUs")(test_case)
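Typical use of the decorator above on a test method; the test name is hypothetical:

@require_torch_up_to_2_gpus
def test_small_model_forward(self):
    ...  # skipped automatically on machines with 3+ GPUs or without PyTorch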
44
testing_utils.py
Python
src/transformers/testing_utils.py
57e6464ac9a31156f1c93e59107323e6ec01309e
transformers
2
30,537
25
11
9
86
11
1
26
59
invalid_tess_config
tests: Extract some test fixtures for better clarity
https://github.com/ocrmypdf/OCRmyPDF.git
def invalid_tess_config(outdir):
    cfg_file = outdir / 'test.cfg'
    with cfg_file.open('w') as f:
        f.write(
        )
    yield cfg_file


@pytest.mark.slow  # This test sometimes times out in CI
@pytest.mark.parametrize('renderer', RENDERERS)
@pytest.mark.slow # This test sometimes times out in CI @pytest.mark.parametrize('renderer', RENDERERS)
28
test_main.py
Python
tests/test_main.py
5d0cc0a092f93640e1d83baaf1c738768481d208
OCRmyPDF
1
320,926
10
13
4
72
13
0
10
26
test_message_hiding
Add a MessageInfo data class Preparation for #7246
https://github.com/qutebrowser/qutebrowser.git
def test_message_hiding(qtbot, view):
    with qtbot.wait_signal(view._clear_timer.timeout):
        view.show_message(message.MessageInfo(usertypes.MessageLevel.info, 'test'))
    assert not view._messages
42
test_messageview.py
Python
tests/unit/mainwindow/test_messageview.py
5616a99eff34f7074641d1391ed77d6b4b743529
qutebrowser
1
42,796
26
12
12
117
16
0
33
133
test_pod_template_file_system
Use KubernetesHook to create api client in KubernetesPodOperator (#20578) Add support for k8s hook in KPO; use it always (even when no conn id); continue to consider the core k8s settings that KPO already takes into account but emit deprecation warning about them. KPO historically takes into account a few settings from core airflow cfg (e.g. verify ssl, tcp keepalive, context, config file, and in_cluster). So to use the hook to generate the client, somehow the hook has to take these settings into account. But we don't want the hook to consider these settings in general. So we read them in KPO and if necessary patch the hook and warn.
https://github.com/apache/airflow.git
def test_pod_template_file_system(self):
    fixture = sys.path[0] + '/tests/kubernetes/basic_pod.yaml'
    k = KubernetesPodOperator(
        task_id="task" + self.get_current_task_name(),
        in_cluster=False,
        pod_template_file=fixture,
        do_xcom_push=True,
    )

    context = create_context(k)
    result = k.execute(context)
    assert result is not None
    assert result == {"hello": "world"}
70
test_kubernetes_pod_operator.py
Python
kubernetes_tests/test_kubernetes_pod_operator.py
60eb9e106f5915398eafd6aa339ec710c102dc09
airflow
1
3,894
24
12
11
93
18
0
24
63
stream_slices
🎉 New Source: Orb (#9985) * V1 of source_orb connector * add boostrap.md file * add clause on Pagination to bootstrap.md * add SUMMARY documentation * add lookback_window_days connector parameter * Add support for start_date parameter * Add ability to transform record in order to un-nest IDs * Add support for extracting event properties based on connector configuration
https://github.com/airbytehq/airbyte.git
def stream_slices(self, **kwargs) -> Iterable[Optional[Mapping[str, Any]]]:
    # TODO: self.authenticator should optionally pull from self._session.auth
    customers_stream = Customers(authenticator=self._session.auth)
    for customer in customers_stream.read_records(sync_mode=SyncMode.full_refresh):
        yield {"customer_id": customer["id"]}
57
source.py
Python
airbyte-integrations/connectors/source-orb/source_orb/source.py
1e0ac30ebdcfce55a5644bcd486044da45c93dd6
airbyte
2
293,855
160
17
156
1,276
62
0
355
1,697
test_delete_duplicates_non_identical
Simplify time zone setting in tests (#68330) * Simplify timezone setting in tests * Fix typo * Adjust caldav tests * Adjust input_datetime tests * Adjust time_date tests * Adjust tod tests * Adjust helper tests * Adjust recorder tests * Adjust risco tests * Adjust aemet tests * Adjust flux tests * Adjust forecast_solar tests * Revert unnecessary change in forecast_solar test * Adjust climacell tests * Adjust google tests * Adjust sensor tests * Adjust sonarr tests * Adjust template tests * Adjust zodiac tests Co-authored-by: Martin Hjelmare <[email protected]>
https://github.com/home-assistant/core.git
def test_delete_duplicates_non_identical(caplog, tmpdir):
    test_db_file = tmpdir.mkdir("sqlite").join("test_run_info.db")
    dburl = f"{SQLITE_URL_PREFIX}//{test_db_file}"

    module = "tests.components.recorder.models_schema_23"
    importlib.import_module(module)
    old_models = sys.modules[module]

    period1 = dt_util.as_utc(dt_util.parse_datetime("2021-09-01 00:00:00"))
    period2 = dt_util.as_utc(dt_util.parse_datetime("2021-09-30 23:00:00"))
    period3 = dt_util.as_utc(dt_util.parse_datetime("2021-10-01 00:00:00"))
    period4 = dt_util.as_utc(dt_util.parse_datetime("2021-10-31 23:00:00"))

    external_energy_statistics_1 = (
        {"start": period1, "last_reset": None, "state": 0, "sum": 2},
        {"start": period2, "last_reset": None, "state": 1, "sum": 3},
        {"start": period3, "last_reset": None, "state": 2, "sum": 4},
        {"start": period4, "last_reset": None, "state": 3, "sum": 5},
        {"start": period4, "last_reset": None, "state": 3, "sum": 6},
    )
    external_energy_metadata_1 = {
        "has_mean": False,
        "has_sum": True,
        "name": "Total imported energy",
        "source": "test",
        "statistic_id": "test:total_energy_import_tariff_1",
        "unit_of_measurement": "kWh",
    }
    external_energy_statistics_2 = (
        {"start": period1, "last_reset": None, "state": 0, "sum": 20},
        {"start": period2, "last_reset": None, "state": 1, "sum": 30},
        {"start": period3, "last_reset": None, "state": 2, "sum": 40},
        {"start": period4, "last_reset": None, "state": 3, "sum": 50},
        {"start": period4, "last_reset": None, "state": 3, "sum": 50},
    )
    external_energy_metadata_2 = {
        "has_mean": False,
        "has_sum": True,
        "name": "Total imported energy",
        "source": "test",
        "statistic_id": "test:total_energy_import_tariff_2",
        "unit_of_measurement": "kWh",
    }

    # Create some duplicated statistics with schema version 23
    with patch.object(recorder, "models", old_models), patch.object(
        recorder.migration, "SCHEMA_VERSION", old_models.SCHEMA_VERSION
    ), patch(
        "homeassistant.components.recorder.create_engine", new=_create_engine_test
    ):
        hass = get_test_home_assistant()
        setup_component(hass, "recorder", {"recorder": {"db_url": dburl}})
        wait_recording_done(hass)
        wait_recording_done(hass)

        with session_scope(hass=hass) as session:
            session.add(
                recorder.models.StatisticsMeta.from_meta(external_energy_metadata_1)
            )
            session.add(
                recorder.models.StatisticsMeta.from_meta(external_energy_metadata_2)
            )
        with session_scope(hass=hass) as session:
            for stat in external_energy_statistics_1:
                session.add(recorder.models.Statistics.from_stats(1, stat))
            for stat in external_energy_statistics_2:
                session.add(recorder.models.Statistics.from_stats(2, stat))

        hass.stop()
        dt_util.DEFAULT_TIME_ZONE = ORIG_TZ

    # Test that the duplicates are removed during migration from schema 23
    hass = get_test_home_assistant()
    hass.config.config_dir = tmpdir
    setup_component(hass, "recorder", {"recorder": {"db_url": dburl}})
    hass.start()
    wait_recording_done(hass)
    wait_recording_done(hass)
    hass.stop()
    dt_util.DEFAULT_TIME_ZONE = ORIG_TZ

    assert "Deleted 2 duplicated statistics rows" in caplog.text
    assert "Deleted 1 non identical" in caplog.text
    assert "Found duplicated" not in caplog.text

    isotime = dt_util.utcnow().isoformat()
    backup_file_name = f".storage/deleted_statistics.{isotime}.json"

    with open(hass.config.path(backup_file_name)) as backup_file:
        backup = json.load(backup_file)

    assert backup == [
        {
            "duplicate": {
                "created": "2021-08-01T00:00:00",
                "id": 4,
                "last_reset": None,
                "max": None,
                "mean": None,
                "metadata_id": 1,
                "min": None,
                "start": "2021-10-31T23:00:00",
                "state": 3.0,
                "sum": 5.0,
            },
            "original": {
                "created": "2021-08-01T00:00:00",
                "id": 5,
                "last_reset": None,
                "max": None,
                "mean": None,
                "metadata_id": 1,
                "min": None,
                "start": "2021-10-31T23:00:00",
                "state": 3.0,
                "sum": 6.0,
            },
        }
    ]
730
test_statistics.py
Python
tests/components/recorder/test_statistics.py
cf4033b1bc853fc70828c6128ac91cdfb1d5bdaf
core
3
308,983
11
12
7
48
8
0
11
65
on_value_update
Handle zwave_js metadata/value updates when the unit changes (#63579) * Handle zwave_js metadata updates when the unit changes * Use value updated event instead of metadata updated event so we don't get an invalid state value * update comments and formatting * simplify * Update tests/components/zwave_js/test_sensor.py Co-authored-by: Martin Hjelmare <[email protected]> * Update tests/components/zwave_js/test_sensor.py Co-authored-by: Martin Hjelmare <[email protected]> * Update tests/components/zwave_js/test_sensor.py Co-authored-by: Martin Hjelmare <[email protected]> * fix tests * add additional assertions * Add unit checks and simplify logic * Add unit assertion Co-authored-by: Martin Hjelmare <[email protected]>
https://github.com/home-assistant/core.git
def on_value_update(self) -> None:
    self._attr_native_unit_of_measurement = (
        NumericSensorDataTemplate()
        .resolve_data(self.info.primary_value)
        .unit_of_measurement
    )
28
sensor.py
Python
homeassistant/components/zwave_js/sensor.py
9e4e43cf77a210edd046a1476ece99854763e1e8
core
1
270,347
30
11
7
82
12
0
36
83
_copy_weights_to_distributed_model
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def _copy_weights_to_distributed_model(original_model, mode):
    strategy = original_model._distribution_strategy
    distributed_model = get_distributed_model(original_model, mode)
    if strategy:
        # Copy the weights from the original model to each of the replicated
        # models.
        orig_model_weights = original_model.get_weights()
        first_model = strategy.unwrap(distributed_model)[0]
        set_weights(strategy, first_model, orig_model_weights)
50
distributed_training_utils_v1.py
Python
keras/distribute/distributed_training_utils_v1.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
2