Dataset schema (column, dtype, min, max):

column          dtype           min      max
id              int64           20       338k
vocab_size      int64           2        671
ast_levels      int64           4        32
nloc            int64           1        451
n_ast_nodes     int64           12       5.6k
n_identifiers   int64           1        186
n_ast_errors    int64           0        10
n_words         int64           2        2.17k
n_whitespaces   int64           2        13.8k
fun_name        stringlengths   2        73
commit_message  stringlengths   51       15.3k
url             stringlengths   31       59
code            stringlengths   51       31k
ast_errors      stringlengths   0        1.46k
token_counts    int64           6        3.32k
file_name       stringlengths   5        56
language        stringclasses   1 value
path            stringlengths   7        134
commit_id       stringlengths   40       40
repo            stringlengths   3        28
complexity      int64           1        153
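The dtype summaries above (int64, stringlengths, stringclasses) follow the conventions of a Hugging Face dataset preview. As a minimal sketch of how a dataset with this schema could be loaded and filtered, assuming it is published as a Hugging Face `datasets` dataset; the identifier `user/python-functions` below is a placeholder, not the real dataset name:

```python
# Minimal sketch, assuming a Hugging Face `datasets`-style dataset with the schema above.
# "user/python-functions" is a placeholder identifier, not the actual dataset name.
from datasets import load_dataset

ds = load_dataset("user/python-functions", split="train")

# Each record pairs a Python function (`fun_name`, `code`) taken from a repository
# (`repo`, `path`, `commit_id`, `url`) with static metrics such as `nloc`,
# `n_ast_nodes`, `token_counts`, and cyclomatic `complexity`.
simple = ds.filter(lambda r: r["complexity"] <= 5 and r["n_ast_errors"] == 0)
print(len(simple), "functions with complexity <= 5 and no AST errors")
print(simple[0]["repo"], simple[0]["fun_name"], simple[0]["nloc"])
```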
id: 19,350 | vocab_size: 134 | ast_levels: 17 | nloc: 47 | n_ast_nodes: 721 | n_identifiers: 49 | n_ast_errors: 0 | n_words: 192 | n_whitespaces: 552
fun_name: astar_torus
docs: Fix a few typos (#695) There are small typos in: - ArmNavigation/arm_obstacle_navigation/arm_obstacle_navigation.py - ArmNavigation/arm_obstacle_navigation/arm_obstacle_navigation_2.py - docs/modules/slam/FastSLAM1/FastSLAM1_main.rst - docs/modules/slam/ekf_slam/ekf_slam_main.rst Fixes: - Should read `configuration` rather than `configuation`. - Should read `trajectory` rather than `tracjectory`. - Should read `prediction` rather than `prediciton`. Signed-off-by: Tim Gates <[email protected]>
https://github.com/AtsushiSakai/PythonRobotics.git
def astar_torus(grid, start_node, goal_node): colors = ['white', 'black', 'red', 'pink', 'yellow', 'green', 'orange'] levels = [0, 1, 2, 3, 4, 5, 6, 7] cmap, norm = from_levels_and_colors(levels, colors) grid[start_node] = 4 grid[goal_node] = 5 parent_map = [[() for _ in range(M)] for _ in range(M)] heuristic_map = calc_heuristic_map(M, goal_node) explored_heuristic_map = np.full((M, M), np.inf) distance_map = np.full((M, M), np.inf) explored_heuristic_map[start_node] = heuristic_map[start_node] distance_map[start_node] = 0 while True: grid[start_node] = 4 grid[goal_node] = 5 current_node = np.unravel_index( np.argmin(explored_heuristic_map, axis=None), explored_heuristic_map.shape) min_distance = np.min(explored_heuristic_map) if (current_node == goal_node) or np.isinf(min_distance): break grid[current_node] = 2 explored_heuristic_map[current_node] = np.inf i, j = current_node[0], current_node[1] neighbors = find_neighbors(i, j) for neighbor in neighbors: if grid[neighbor] == 0 or grid[neighbor] == 5: distance_map[neighbor] = distance_map[current_node] + 1 explored_heuristic_map[neighbor] = heuristic_map[neighbor] parent_map[neighbor[0]][neighbor[1]] = current_node grid[neighbor] = 3 if np.isinf(explored_heuristic_map[goal_node]): route = [] print("No route found.") else: route = [goal_node] while parent_map[route[0][0]][route[0][1]] != (): route.insert(0, parent_map[route[0][0]][route[0][1]]) print("The route found covers %d grid cells." % len(route)) for i in range(1, len(route)): grid[route[i]] = 6 plt.cla() # for stopping simulation with the esc key. plt.gcf().canvas.mpl_connect('key_release_event', lambda event: [exit(0) if event.key == 'escape' else None]) plt.imshow(grid, cmap=cmap, norm=norm, interpolation=None) plt.show() plt.pause(1e-2) return route
token_counts: 475 | file_name: arm_obstacle_navigation.py | language: Python
ArmNavigation/arm_obstacle_navigation/arm_obstacle_navigation.py
c6bdd48715adcbe17c4146b7cae3b0fc569f7bde
repo: PythonRobotics | complexity: 13
id: 10,180 | vocab_size: 3 | ast_levels: 6 | nloc: 13 | n_ast_nodes: 15 | n_identifiers: 3 | n_ast_errors: 0 | n_words: 3 | n_whitespaces: 6
fun_name: test_bad_flow_skip_handle_join
feat: star routing (#3900) * feat(proto): adjust proto for star routing (#3844) * feat(proto): adjust proto for star routing * feat(proto): generate proto files * feat(grpc): refactor grpclet interface (#3846) * feat: refactor connection pool for star routing (#3872) * feat(k8s): add more labels to k8s deployments * feat(network): refactor connection pool * feat(network): refactor k8s pool * feat: star routing graph gateway (#3877) * feat: star routing - refactor grpc data runtime (#3887) * feat(runtimes): refactor grpc dataruntime * fix(tests): adapt worker runtime tests * fix(import): fix import * feat(proto): enable sending multiple lists (#3891) * feat: star routing gateway (#3893) * feat: star routing gateway all protocols (#3897) * test: add streaming and prefetch tests (#3901) * feat(head): new head runtime for star routing (#3899) * feat(head): new head runtime * feat(head): new head runtime * style: fix overload and cli autocomplete * feat(network): improve proto comments Co-authored-by: Jina Dev Bot <[email protected]> * feat(worker): merge docs in worker runtime (#3905) * feat(worker): merge docs in worker runtime * feat(tests): assert after clean up * feat(tests): star routing runtime integration tests (#3908) * fix(tests): fix integration tests * test: test runtimes fast slow request (#3910) * feat(zmq): purge zmq, zed, routing_table (#3915) * feat(zmq): purge zmq, zed, routing_table * style: fix overload and cli autocomplete * feat(zmq): adapt comment in dependency list * style: fix overload and cli autocomplete * fix(tests): fix type tests Co-authored-by: Jina Dev Bot <[email protected]> * test: add test gateway to worker connection (#3921) * feat(pea): adapt peas for star routing (#3918) * feat(pea): adapt peas for star routing * style: fix overload and cli autocomplete * feat(pea): add tests * feat(tests): add failing head pea test Co-authored-by: Jina Dev Bot <[email protected]> * feat(tests): integration tests for peas (#3923) * feat(tests): integration tests for peas * feat(pea): remove _inner_pea function * feat: star routing container pea (#3922) * test: rescue tests (#3942) * fix: fix streaming tests (#3945) * refactor: move docker run to run (#3948) * feat: star routing pods (#3940) * feat(pod): adapt pods for star routing * feat(pods): adapt basepod to star routing * feat(pod): merge pod and compound pod * feat(tests): fix tests * style: fix overload and cli autocomplete * feat(test): add container pea int test * feat(ci): remove more unnecessary tests * fix(tests): remove jinad runtime * feat(ci): remove latency tracking * fix(ci): fix ci def * fix(runtime): enable runtime to be exited * fix(tests): wrap runtime test in process * fix(runtimes): remove unused runtimes * feat(runtimes): improve cancel wait * fix(ci): build test pip again in ci * fix(tests): fix a test * fix(test): run async in its own process * feat(pod): include shard in activate msg * fix(pea): dont join * feat(pod): more debug out * feat(grpc): manage channels properly * feat(pods): remove exitfifo * feat(network): add simple send retry mechanism * fix(network): await pool close * fix(test): always close grpc server in worker * fix(tests): remove container pea from tests * fix(tests): reorder tests * fix(ci): split tests * fix(ci): allow alias setting * fix(test): skip a test * feat(pods): address comments Co-authored-by: Jina Dev Bot <[email protected]> * test: unblock skipped test (#3957) * feat: jinad pea (#3949) * feat: jinad pea * feat: jinad pea * test: remote peas * test: toplogy tests 
with jinad * ci: parallel jobs * feat(tests): add pod integration tests (#3958) * feat(tests): add pod integration tests * fix(tests): make tests less flaky * fix(test): fix test * test(pea): remote pea topologies (#3961) * test(pea): remote pea simple topology * test: remote pea topologies * refactor: refactor streamer result handling (#3960) * feat(k8s): adapt K8s Pod for StarRouting (#3964) * test: optimize k8s test * test: increase timeout and use different namespace * test: optimize k8s test * test: build and load image when needed * test: refactor k8s test * test: fix image name error * test: fix k8s image load * test: fix typoe port expose * test: update tests in connection pool and handling * test: remove unused fixture * test: parameterize docker images * test: parameterize docker images * test: parameterize docker images * feat(k8s): adapt k8s pod for star routing * fix(k8s): dont overwrite add/remove function in pool * fix(k8s): some fixes * fix(k8s): some more fixes * fix(k8s): linting * fix(tests): fix tests * fix(tests): fix k8s unit tests * feat(k8s): complete k8s integration test * feat(k8s): finish k8s tests * feat(k8s): fix test * fix(tests): fix test with no name * feat(k8s): unify create/replace interface * feat(k8s): extract k8s port constants * fix(tests): fix tests * fix(tests): wait for runtime being ready in tests * feat(k8s): address comments Co-authored-by: bwanglzu <[email protected]> * feat(flow): adapt Flow for StarRouting (#3986) * feat(flow): add routes * feat(flow): adapt flow to star routing * style: fix overload and cli autocomplete * feat(flow): handle empty topologies * feat(k8s): allow k8s pool disabling * style: fix overload and cli autocomplete * fix(test): fix test with mock * fix(tests): fix more tests * feat(flow): clean up tests * style: fix overload and cli autocomplete * fix(tests): fix more tests * feat: add plot function (#3994) * fix(tests): avoid hanging tests * feat(flow): add type hinting * fix(test): fix duplicate exec name in test * fix(tests): fix more tests * fix(tests): enable jinad test again * fix(tests): random port fixture * fix(style): replace quotes Co-authored-by: Jina Dev Bot <[email protected]> Co-authored-by: Joan Fontanals <[email protected]> * feat(ci): bring back ci (#3997) * feat(ci): enable ci again * style: fix overload and cli autocomplete * feat(ci): add latency tracking * feat(ci): bring back some tests * fix(tests): remove invalid port test * feat(ci): disable daemon and distributed tests * fix(tests): fix entrypoint in hub test * fix(tests): wait for gateway to be ready * fix(test): fix more tests * feat(flow): do rolling update and scale sequentially * fix(tests): fix more tests * style: fix overload and cli autocomplete * feat: star routing hanging pods (#4011) * fix: try to handle hanging pods better * test: hanging pods test work * fix: fix topology graph problem * test: add unit test to graph * fix(tests): fix k8s tests * fix(test): fix k8s test * fix(test): fix k8s pool test * fix(test): fix k8s test * fix(test): fix k8s connection pool setting * fix(tests): make runtime test more reliable * fix(test): fix routes test * fix(tests): make rolling update test less flaky * feat(network): gurantee unique ports * feat(network): do round robin for shards * fix(ci): increase pytest timeout to 10 min Co-authored-by: Jina Dev Bot <[email protected]> Co-authored-by: Joan Fontanals <[email protected]> * fix(ci): fix ci file * feat(daemon): jinad pod for star routing * Revert "feat(daemon): jinad pod for star routing" 
This reverts commit ed9b37ac862af2e2e8d52df1ee51c0c331d76f92. * feat(daemon): remote jinad pod support (#4042) * feat(daemon): add pod tests for star routing * feat(daemon): add remote pod test * test(daemon): add remote pod arguments test * test(daemon): add async scale test * test(daemon): add rolling update test * test(daemon): fix host * feat(proto): remove message proto (#4051) * feat(proto): remove message proto * fix(tests): fix tests * fix(tests): fix some more tests * fix(tests): fix more tests * fix(tests): fix more tests * fix(tests): fix more tests * fix(tests): fix more tests * feat(proto): put docs back in data * fix(proto): clean up * feat(proto): clean up * fix(tests): skip latency tracking * fix(test): fix hub test * fix(tests): fix k8s test * fix(test): some test clean up * fix(style): clean up style issues * feat(proto): adjust for rebase * fix(tests): bring back latency tracking * fix(tests): fix merge accident * feat(proto): skip request serialization (#4074) * feat: add reduce to star routing (#4070) * feat: add reduce on shards to head runtime * test: add reduce integration tests with fixed order * feat: add reduce on needs * chore: get_docs_matrix_from_request becomes public * style: fix overload and cli autocomplete * docs: remove undeterministic results warning * fix: fix uses_after * test: assert correct num docs after reducing in test_external_pod * test: correct asserts after reduce in test_rolling_update * fix: no reduce if uses_after_address is set * fix: get_docs_from_request only if needed * fix: fix tests after merge * refactor: move reduce from data_request_handler to head * style: fix overload and cli autocomplete * chore: apply suggestions * fix: fix asserts * chore: minor test fix * chore: apply suggestions * test: remove flow tests with external executor (pea) * fix: fix test_expected_messages_routing * fix: fix test_func_joiner * test: adapt k8s test Co-authored-by: Jina Dev Bot <[email protected]> * fix(k8s): fix static pool config * fix: use custom protoc doc generator image (#4088) * fix: use custom protoc doc generator image * fix(docs): minor doc improvement * fix(docs): use custom image * fix(docs): copy docarray * fix: doc building local only * fix: timeout doc building * fix: use updated args when building ContainerPea * test: add container PeaFactory test * fix: force pea close on windows (#4098) * fix: dont reduce if uses exist (#4099) * fix: dont use reduce if uses exist * fix: adjust reduce tests * fix: adjust more reduce tests * fix: fix more tests * fix: adjust more tests * fix: ignore non jina resources (#4101) * feat(executor): enable async executors (#4102) * feat(daemon): daemon flow on star routing (#4096) * test(daemon): add remote flow test * feat(daemon): call scale in daemon * feat(daemon): remove tail args and identity * test(daemon): rename scalable executor * test(daemon): add a small delay in async test * feat(daemon): scale partial flow only * feat(daemon): call scale directly in partial flow store * test(daemon): use asyncio sleep * feat(daemon): enable flow level distributed tests * test(daemon): fix jinad env workspace config * test(daemon): fix pod test use new port rolling update * feat(daemon): enable distribuetd tests * test(daemon): remove duplicate tests and zed runtime test * test(daemon): fix stores unit test * feat(daemon): enable part of distributed tests * feat(daemon): enable part of distributed tests * test: correct test paths * test(daemon): add client test for remote flows * test(daemon): send a request 
with jina client * test(daemon): assert async generator * test(daemon): small interval between tests * test(daemon): add flow test for container runtime * test(daemon): add flow test for container runtime * test(daemon): fix executor name * test(daemon): fix executor name * test(daemon): use async client fetch result * test(daemon): finish container flow test * test(daemon): enable distributed in ci * test(daemon): enable distributed in ci * test(daemon): decare flows and pods * test(daemon): debug ci if else * test(daemon): debug ci if else * test(daemon): decare flows and pods * test(daemon): correct test paths * test(daemon): add small delay for async tests * fix: star routing fixes (#4100) * docs: update docs * fix: fix Request.__repr__ * docs: update flow remarks * docs: fix typo * test: add non_empty_fields test * chore: remove non_empty_fields test * feat: polling per endpoint (#4111) * feat(polling): polling per endpoint configurable * fix: adjust tests * feat(polling): extend documentation * style: fix overload and cli autocomplete * fix: clean up * fix: adjust more tests * fix: remove repeat from flaky test * fix: k8s test * feat(polling): address pr feedback * feat: improve docs Co-authored-by: Jina Dev Bot <[email protected]> * feat(grpc): support connect grpc server via ssl tunnel (#4092) * feat(grpc): support ssl grpc connect if port is 443 * fix(grpc): use https option instead of detect port automatically * chore: fix typo * fix: update jina/peapods/networking.py Co-authored-by: Joan Fontanals <[email protected]> * fix: update jina/peapods/networking.py Co-authored-by: Joan Fontanals <[email protected]> * fix: update jina/peapods/networking.py Co-authored-by: Joan Fontanals <[email protected]> * test(networking): add test for peapods networking * fix: address comments Co-authored-by: Joan Fontanals <[email protected]> * feat(polling): unify polling args (#4113) * fix: several issues for jinad pods (#4119) * fix: activate for jinad pods * fix: dont expose worker pod in partial daemon * fix: workspace setting * fix: containerized flows * fix: hub test * feat(daemon): remote peas on star routing (#4112) * test(daemon): fix request in peas * test(daemon): fix request in peas * test(daemon): fix sync async client test * test(daemon): enable remote peas test * test(daemon): replace send message to send request * test(daemon): declare pea tests in ci * test(daemon): use pea args fixture * test(daemon): head pea use default host * test(daemon): fix peas topologies * test(daemon): fix pseudo naming * test(daemon): use default host as host * test(daemon): fix executor path * test(daemon): add remote worker back * test(daemon): skip local remote remote topology * fix: jinad pea test setup * fix: jinad pea tests * fix: remove invalid assertion Co-authored-by: jacobowitz <[email protected]> * feat: enable daemon tests again (#4132) * feat: enable daemon tests again * fix: remove bogy empty script file * fix: more jinad test fixes * style: fix overload and cli autocomplete * fix: scale and ru in jinad * fix: fix more jinad tests Co-authored-by: Jina Dev Bot <[email protected]> * fix: fix flow test * fix: improve pea tests reliability (#4136) Co-authored-by: Joan Fontanals <[email protected]> Co-authored-by: Jina Dev Bot <[email protected]> Co-authored-by: Deepankar Mahapatro <[email protected]> Co-authored-by: bwanglzu <[email protected]> Co-authored-by: AlaeddineAbdessalem <[email protected]> Co-authored-by: Zhaofeng Miao <[email protected]>
https://github.com/jina-ai/jina.git
def test_bad_flow_skip_handle_join(mocker, protocol):
token_counts: 98 | file_name: test_flow_skip.py | language: Python
tests/unit/flow-construct/test_flow_skip.py
933415bfa1f9eb89f935037014dfed816eb9815d
repo: jina | complexity: 1
id: 308,747 | vocab_size: 55 | ast_levels: 16 | nloc: 26 | n_ast_nodes: 253 | n_identifiers: 30 | n_ast_errors: 0 | n_words: 68 | n_whitespaces: 269
fun_name: test_button_uid
Add unique id to flic buttons (#61496) * Use bluetooth address as unique id for flic buttons. * Always lower case address for uid and add tests. * Update test to set up component. * Use format_mac(addr) as unique id. * Only patch pyflic objects and use query entity registry for buttons. * Replace ExitStack with patch.multiple, remove assert_setup_component. * Test binary sensor is present in state machine.
https://github.com/home-assistant/core.git
async def test_button_uid(hass): address_to_name = { "80:e4:da:78:6e:11": "binary_sensor.flic_80e4da786e11", # Uppercase address should not change uid. "80:E4:DA:78:6E:12": "binary_sensor.flic_80e4da786e12", } flic_client = _MockFlicClient(tuple(address_to_name)) with mock.patch.multiple( "pyflic", FlicClient=lambda _, __: flic_client, ButtonConnectionChannel=mock.DEFAULT, ScanWizard=mock.DEFAULT, ): assert await async_setup_component( hass, "binary_sensor", {"binary_sensor": [{"platform": "flic"}]}, ) await hass.async_block_till_done() entity_registry = er.async_get(hass) for address, name in address_to_name.items(): state = hass.states.get(name) assert state assert state.attributes.get("address") == address entry = entity_registry.async_get(name) assert entry assert entry.unique_id == address.lower()
token_counts: 148 | file_name: test_binary_sensor.py | language: Python
tests/components/flic/test_binary_sensor.py
ce138dd30e7262fc71253ab5c2936f869b891fda
repo: core | complexity: 2
id: 212,929 | vocab_size: 59 | ast_levels: 14 | nloc: 13 | n_ast_nodes: 125 | n_identifiers: 10 | n_ast_errors: 0 | n_words: 76 | n_whitespaces: 256
fun_name: theme_global
Better error checking/reporting in theme_global. NEW THEME DarkGrey15
https://github.com/PySimpleGUI/PySimpleGUI.git
def theme_global(new_theme=None): if new_theme is not None: if new_theme not in theme_list(): popup_error_with_traceback('Cannot use custom themes with theme_global call', 'Your request to use theme {} cannot be performed.'.format(new_theme), 'The PySimpleGUI Global User Settings are meant for PySimpleGUI standard items, not user config items', 'You can use any of the many built-in themes instead or use your own UserSettings file to store your custom theme') return pysimplegui_user_settings.get('-theme-', CURRENT_LOOK_AND_FEEL) pysimplegui_user_settings.set('-theme-', new_theme) theme(new_theme) return new_theme else: return pysimplegui_user_settings.get('-theme-', CURRENT_LOOK_AND_FEEL)
token_counts: 71 | file_name: PySimpleGUI.py | language: Python
PySimpleGUI.py
dfad2e3b7671b7128895c8a0e29fff38d7efe6e9
repo: PySimpleGUI | complexity: 3
id: 130,602 | vocab_size: 32 | ast_levels: 12 | nloc: 13 | n_ast_nodes: 113 | n_identifiers: 18 | n_ast_errors: 0 | n_words: 39 | n_whitespaces: 128
fun_name: ensure_schema_for_first_block
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
https://github.com/ray-project/ray.git
def ensure_schema_for_first_block(self) -> Optional[Union["pyarrow.Schema", type]]: get_schema = cached_remote_fn(_get_schema) try: block = next(self.iter_blocks()) except (StopIteration, ValueError): # Dataset is empty (no blocks) or was manually cleared. return None schema = ray.get(get_schema.remote(block)) # Set the schema. self._metadata[0].schema = schema return schema
token_counts: 68 | file_name: block_list.py | language: Python
python/ray/data/impl/block_list.py
7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
repo: ray | complexity: 2
id: 167,478 | vocab_size: 41 | ast_levels: 11 | nloc: 19 | n_ast_nodes: 171 | n_identifiers: 23 | n_ast_errors: 0 | n_words: 49 | n_whitespaces: 181
fun_name: test_css_excel_cell_cache
PERF: Improve Styler `to_excel` Performance (#47371) * Move CSS expansion lookup to dictionary * Implement simple CSSToExcelConverter cache * Eliminate list -> str -> list in CSSResolver * Allow for resolution of duplicate properties * Add performance benchmark for styled Excel * CLN: Clean up PEP8 issues * DOC: Update PR documentation * CLN: Clean up PEP8 issues * Fixes from pre-commit [automated commit] * Make Excel CSS case-insensitive * Test for ordering and caching * Pre-commit fixes * Remove built-in filter * Increase maxsize of Excel cache Co-authored-by: Thomas Hunter <[email protected]>
https://github.com/pandas-dev/pandas.git
def test_css_excel_cell_cache(styles, cache_hits, cache_misses): # See GH 47371 converter = CSSToExcelConverter() converter.__call__.cache_clear() css_styles = {(0, i): _style for i, _style in enumerate(styles)} for css_row, css_col in css_styles: CssExcelCell( row=0, col=0, val="", style=None, css_styles=css_styles, css_row=css_row, css_col=css_col, css_converter=converter, ) cache_info = converter.__call__.cache_info() converter.__call__.cache_clear() assert cache_info.hits == cache_hits assert cache_info.misses == cache_misses
token_counts: 112 | file_name: test_to_excel.py | language: Python
pandas/tests/io/formats/test_to_excel.py
ad842d36bb62a6c7a5e8f93a9594129ff46cc5cb
repo: pandas | complexity: 3
id: 320,832 | vocab_size: 76 | ast_levels: 11 | nloc: 62 | n_ast_nodes: 479 | n_identifiers: 21 | n_ast_errors: 0 | n_words: 88 | n_whitespaces: 912
fun_name: migrate
Add setting to allow pasting from clipboard Closes #5256 Supersedes and closes #6315
https://github.com/qutebrowser/qutebrowser.git
def migrate(self) -> None: self._migrate_configdata() self._migrate_bindings_default() self._migrate_font_default_family() self._migrate_font_replacements() self._migrate_bool('tabs.favicons.show', 'always', 'never') self._migrate_bool('scrolling.bar', 'always', 'overlay') self._migrate_bool('qt.force_software_rendering', 'software-opengl', 'none') self._migrate_renamed_bool( old_name='content.webrtc_public_interfaces_only', new_name='content.webrtc_ip_handling_policy', true_value='default-public-interface-only', false_value='all-interfaces') self._migrate_renamed_bool( old_name='tabs.persist_mode_on_change', new_name='tabs.mode_on_change', true_value='persist', false_value='normal') self._migrate_renamed_bool( old_name='statusbar.hide', new_name='statusbar.show', true_value='never', false_value='always') self._migrate_renamed_bool( old_name='content.ssl_strict', new_name='content.tls.certificate_errors', true_value='block', false_value='load-insecurely', ask_value='ask', ) self._migrate_renamed_bool( old_name='content.javascript.can_access_clipboard', new_name='content.javascript.clipboard', true_value='access', false_value='none', ) for setting in ['colors.webpage.force_dark_color_scheme', 'colors.webpage.prefers_color_scheme_dark']: self._migrate_renamed_bool( old_name=setting, new_name='colors.webpage.preferred_color_scheme', true_value='dark', false_value='auto', ) for setting in ['tabs.title.format', 'tabs.title.format_pinned', 'window.title_format']: self._migrate_string_value(setting, r'(?<!{)\{title\}(?!})', r'{current_title}') self._migrate_to_multiple('fonts.tabs', ('fonts.tabs.selected', 'fonts.tabs.unselected')) self._migrate_to_multiple('content.media_capture', ('content.media.audio_capture', 'content.media.audio_video_capture', 'content.media.video_capture')) # content.headers.user_agent can't be empty to get the default anymore. setting = 'content.headers.user_agent' self._migrate_none(setting, configdata.DATA[setting].default) self._remove_empty_patterns()
token_counts: 266 | file_name: configfiles.py | language: Python
qutebrowser/config/configfiles.py
4a6df3a8e84780e9f58dbda31c3a9bfa1e35cebe
repo: qutebrowser | complexity: 3
id: 312,158 | vocab_size: 14 | ast_levels: 10 | nloc: 5 | n_ast_nodes: 53 | n_identifiers: 6 | n_ast_errors: 0 | n_words: 16 | n_whitespaces: 48
fun_name: is_closed
Enable strict typing for isy994 (#65439) Co-authored-by: Martin Hjelmare <[email protected]>
https://github.com/home-assistant/core.git
def is_closed(self) -> bool | None: if self._node.status == ISY_VALUE_UNKNOWN: return None return bool(self._node.status == 0)
token_counts: 32 | file_name: cover.py | language: Python
homeassistant/components/isy994/cover.py
6c38a6b5697bcf4587e00101771001bf596974f9
repo: core | complexity: 2
id: 262,777 | vocab_size: 44 | ast_levels: 13 | nloc: 17 | n_ast_nodes: 159 | n_identifiers: 17 | n_ast_errors: 0 | n_words: 52 | n_whitespaces: 153
fun_name: build_script
splash: add always_on_top option for behavior configuration Allow user to enable or disable the always-on-top behavior at build time via always_on_top boolean argument to Splash(). By default, the always-on-top behavior is enabled for the sake of consistency with previous releases.
https://github.com/pyinstaller/pyinstaller.git
def build_script(text_options=None, always_on_top=False): # Order is important! script = [ ipc_script, image_script, splash_canvas_setup, ] if text_options: # If the default font is used we need a different syntax if text_options['font'] == "TkDefaultFont": script.append(splash_canvas_default_font % text_options) else: script.append(splash_canvas_custom_font % text_options) script.append(splash_canvas_text % text_options) script.append(transparent_setup) script.append(pack_widgets) script.append(position_window_on_top if always_on_top else position_window) script.append(raise_window) return '\n'.join(script)
token_counts: 94 | file_name: splash_templates.py | language: Python
PyInstaller/building/splash_templates.py
bfd5c729919b16e5e9457fca28e931b0e897a60a
repo: pyinstaller | complexity: 4
id: 107,634 | vocab_size: 41 | ast_levels: 12 | nloc: 14 | n_ast_nodes: 203 | n_identifiers: 22 | n_ast_errors: 0 | n_words: 51 | n_whitespaces: 172
fun_name: set_rgrids
Clarify error message for bad keyword arguments. `plot([], [], foo=42)` previously emitted ``` 'Line2D' object has no property 'foo' ``` which refers to the Matplotlib-specific concept of "properties". It now instead emits ``` Line2D.set() got an unexpected keyword argument 'foo' ``` which is modeled after the standard error message for unknown keyword arguments. (To maximize backcompat, the implementation goes through a new _internal_update, which does *not* error when the same prop is passed under different aliases. This could be changed later, but is not the goal of this PR.)
https://github.com/matplotlib/matplotlib.git
def set_rgrids(self, radii, labels=None, angle=None, fmt=None, **kwargs): # Make sure we take into account unitized data radii = self.convert_xunits(radii) radii = np.asarray(radii) self.set_yticks(radii) if labels is not None: self.set_yticklabels(labels) elif fmt is not None: self.yaxis.set_major_formatter(mticker.FormatStrFormatter(fmt)) if angle is None: angle = self.get_rlabel_position() self.set_rlabel_position(angle) for t in self.yaxis.get_ticklabels(): t._internal_update(kwargs) return self.yaxis.get_gridlines(), self.yaxis.get_ticklabels()
token_counts: 127 | file_name: polar.py | language: Python
lib/matplotlib/projections/polar.py
d69be2554cf6d1ac711bf433b1d6f176e3290d4f
repo: matplotlib | complexity: 5
id: 47,864 | vocab_size: 28 | ast_levels: 11 | nloc: 20 | n_ast_nodes: 153 | n_identifiers: 18 | n_ast_errors: 1 | n_words: 29 | n_whitespaces: 134
fun_name: test_check_docker_version_unknown
Fix and improve consistency of checking command return code (#23189) This is an aftermath of #23104 after switchig to docs building by breeze, failure of build documentation did not trigger failure of the docs build (but it did trigger main failure of pushing the documentation). This change improves and simplifies the return code processing and propagation in the commands executed by breeze - thanks to common returncode, stdout, stderr available in both CompletedProcess and CalledProcessError and returning fake CompletedProcess in dry_run mode, we can also satisfy MyPy type check by returning non-optional Union of those two types which simplifies returncode processing. This change fixes the error in the docs (lack of empty lines before auto-generated extras). All commands have been reviewed to see if the returncode is correctly handled where needed.
https://github.com/apache/airflow.git
def test_check_docker_version_unknown(mock_console, mock_run_command, mock_check_docker_permission_denied): mock_check_docker_permission_denied.return_value = False check_docker_version(verbose=True) expected_run_command_calls = [ call( ['docker', 'version', '--format', '{{.Client.Version}}'], verbose=True, no_output_dump_on_exception=True, capture_output=True, text=True, check=False, ), ] mock_run_command.assert_has_calls(expected_run_command_calls) mock_console.print.assert_called_with( ) @mock.patch('airflow_breeze.utils.docker_command_utils.check_docker_permission_denied') @mock.patch('airflow_breeze.utils.docker_command_utils.run_command') @mock.patch('airflow_breeze.utils.docker_command_utils.console')
@mock.patch('airflow_breeze.utils.docker_command_utils.check_docker_permission_denied') @mock.patch('airflow_breeze.utils.docker_command_utils.run_command') @mock.patch('airflow_breeze.utils.docker_command_utils.console')
token_counts: 72 | file_name: test_docker_command_utils.py | language: Python
dev/breeze/tests/test_docker_command_utils.py
be51aece54ef98a8868845ad8033f08689dd7ad1
repo: airflow | complexity: 1
id: 266,443 | vocab_size: 10 | ast_levels: 7 | nloc: 2 | n_ast_nodes: 27 | n_identifiers: 2 | n_ast_errors: 0 | n_words: 11 | n_whitespaces: 17
fun_name: _ansible_finalize
Attach concat func to an environment class (#76282) * Attach concat func to an environment class ci_complete * clog and docstrings
https://github.com/ansible/ansible.git
def _ansible_finalize(thing): return thing if thing is not None else ''
token_counts: 15 | file_name: __init__.py | language: Python
lib/ansible/template/__init__.py
8febd37f325b049afe448af689064ee019d1099c
repo: ansible | complexity: 2
id: 269,496 | vocab_size: 6 | ast_levels: 8 | nloc: 2 | n_ast_nodes: 31 | n_identifiers: 5 | n_ast_errors: 0 | n_words: 6 | n_whitespaces: 12
fun_name: _constant_to_tensor
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def _constant_to_tensor(x, dtype): return tf.constant(x, dtype=dtype)
token_counts: 19 | file_name: backend.py | language: Python
keras/backend.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
repo: keras | complexity: 1
id: 272,025 | vocab_size: 8 | ast_levels: 9 | nloc: 3 | n_ast_nodes: 44 | n_identifiers: 6 | n_ast_errors: 0 | n_words: 8 | n_whitespaces: 17
fun_name: _sanitize_column_name_for_variable_scope
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def _sanitize_column_name_for_variable_scope(name): invalid_char = re.compile("[^A-Za-z0-9_.\\-]") return invalid_char.sub("_", name)
token_counts: 23 | file_name: base_feature_layer.py | language: Python
keras/feature_column/base_feature_layer.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
repo: keras | complexity: 1
id: 249,339 | vocab_size: 24 | ast_levels: 10 | nloc: 13 | n_ast_nodes: 115 | n_identifiers: 14 | n_ast_errors: 0 | n_words: 25 | n_whitespaces: 129
fun_name: test_create_token_invalid_chars
Use literals in place of `HTTPStatus` constants in tests (#13488) * Use literals in place of `HTTPStatus` constants in tests * newsfile * code style * code style
https://github.com/matrix-org/synapse.git
def test_create_token_invalid_chars(self) -> None: data = { "token": "abc/def", } channel = self.make_request( "POST", self.url + "/new", data, access_token=self.admin_user_tok, ) self.assertEqual(400, channel.code, msg=channel.json_body) self.assertEqual(channel.json_body["errcode"], Codes.INVALID_PARAM)
token_counts: 70 | file_name: test_registration_tokens.py | language: Python
tests/rest/admin/test_registration_tokens.py
2281427175e4c93a30c39607fb4ac23c2a1f399f
repo: synapse | complexity: 1
id: 45,940 | vocab_size: 31 | ast_levels: 13 | nloc: 25 | n_ast_nodes: 235 | n_identifiers: 33 | n_ast_errors: 0 | n_words: 41 | n_whitespaces: 158
fun_name: test_dagrun_callbacks_commited_before_sent
Store callbacks in database if standalone_dag_processor config is True. (#21731)
https://github.com/apache/airflow.git
def test_dagrun_callbacks_commited_before_sent(self, dag_maker): with dag_maker(dag_id='test_dagrun_callbacks_commited_before_sent'): DummyOperator(task_id='dummy') self.scheduler_job = SchedulerJob(subdir=os.devnull) self.scheduler_job.processor_agent = mock.Mock() self.scheduler_job._send_dag_callbacks_to_processor = mock.Mock() self.scheduler_job._schedule_dag_run = mock.Mock() dr = dag_maker.create_dagrun() session = settings.Session() ti = dr.get_task_instance('dummy') ti.set_state(State.SUCCESS, session) with mock.patch.object(settings, "USE_JOB_SCHEDULE", False), mock.patch( "airflow.jobs.scheduler_job.prohibit_commit" ) as mock_guard: mock_guard.return_value.__enter__.return_value.commit.side_effect = session.commit
token_counts: 188 | file_name: test_scheduler_job.py | language: Python
tests/jobs/test_scheduler_job.py
5ace37a16d1773adb71c684450838e4c8e69b581
repo: airflow | complexity: 1
id: 147,478 | vocab_size: 10 | ast_levels: 10 | nloc: 4 | n_ast_nodes: 52 | n_identifiers: 9 | n_ast_errors: 0 | n_words: 13 | n_whitespaces: 45
fun_name: is_ready
[tune] Use new Checkpoint interface internally (#22801) Follow up from #22741, also use the new checkpoint interface internally. This PR is low friction and just replaces some internal bookkeeping methods. With the new Checkpoint interface, there is no need to revamp the save/restore APIs completely. Instead, we will focus on the bookkeeping part, which takes place in the Ray Tune's and Ray Train's checkpoint managers. These will be consolidated in a future PR.
https://github.com/ray-project/ray.git
def is_ready(self): if self.storage == _TuneCheckpoint.PERSISTENT: return isinstance(self.value, str) return self.storage == _TuneCheckpoint.MEMORY
token_counts: 32 | file_name: checkpoint_manager.py | language: Python
python/ray/tune/checkpoint_manager.py
1465eaa30634c189fe3ebc9db8609f47d19a78cc
repo: ray | complexity: 2
id: 320,735 | vocab_size: 27 | ast_levels: 14 | nloc: 12 | n_ast_nodes: 156 | n_identifiers: 16 | n_ast_errors: 0 | n_words: 38 | n_whitespaces: 152
fun_name: sizeHint
mypy: Upgrade to PyQt5-stubs 5.15.6.0 For some unknown reason, those new stubs cause a *lot* of things now to be checked by mypy which formerly probably got skipped due to Any being implied somewhere. The stubs themselves mainly improved, with a couple of regressions too. In total, there were some 337 (!) new mypy errors. This commit fixes almost all of them, and the next commit improves a fix to get things down to 0 errors again. Overview of the changes: ==== qutebrowser/app.py - Drop type ignore due to improved stubs. ==== qutebrowser/browser/browsertab.py - Specify the type of _widget members more closely than just QWidget. This is debatable: I suppose the abstract stuff shouldn't need to know anything about the concrete backends at all. But it seems like we cut some corners when initially implementing things, and put some code in browsertab.py just because the APIs of both backends happened to be compatible. Perhaps something to reconsider once we drop QtWebKit and hopefully implement a dummy backend. - Add an additional assertion in AbstractAction.run_string. This is already covered by the isinstance(member, self.action_base) above it, but that's too dynamic for mypy to understand. - Fix the return type of AbstractScroller.pos_px, which is a QPoint (with x and y components), not a single int. - Fix the return type of AbstractScroller.pos_perc, which is a Tuple (with x and y components), not a single int. - Fix the argument types of AbstractScroller.to_perc, as it's possible to pass fractional percentages too. - Specify the type for AbstractHistoryPrivate._history. See above (_widget) re this being debatable. - Fix the return type of AbstractTabPrivate.event_target(), which can be None (see #3888). - Fix the return type of AbstractTabPrivate.run_js_sync, which is Any (the JS return value), not None. - Fix the argument type for AbstractTabPrivate.toggle_inspector: position can be None to use the last used position. - Declare the type of sub-objects of AbstractTab. - Fix the return value of AbstractTab.icon(), which is the QIcon, not None. ==== qutebrowser/browser/commands.py - Make sure the active window is a MainWindow (with a .win_id attribute). ==== qutebrowser/browser/downloadview.py - Add _model() which makes sure that self.model() is a DownloadModel, not None or any other model. This is needed because other methods access a variety of custom attributes on it, e.g. last_index(). ==== qutebrowser/browser/greasemonkey.py - Add an ignore for AbstractDownload.requested_url which we patch onto the downloads. Probably would be nicer to add it as a proper attribute which always gets set by the DownloadManager. ==== qutebrowser/browser/hints.py - Remove type ignores for QUrl.toString(). - Add a new type ignore for combining different URL flags (which works, but is not exactly type safe... still probably a regression in the stubs). - Make sure the things we get back from self._get_keyparser are what we actually expect. Probably should introduce a TypedDict (and/or overloads for _get_keyparser with typing.Literal) to teach mypy about the exact return value. See #7098. This is needed because we access Hint/NormalKeyParser-specific attributes such as .set_inhibited_timout() or .update_bindings(). ==== qutebrowser/browser/inspector.py - Similar changes than in browsertab.py to make some types where we share API (e.g. .setPage()) more concrete. Didn't work out unfortunately, see next commit. ==== qutebrowser/browser/network/pac.py - Remove now unneeded type ignore for signal. 
==== qutebrowser/browser/qtnetworkdownloads.py - Make sure that downloads is a qtnetworkdownloads.DownloadItem (rather than an AbstractDownload), so that we can call ._uses_nam() on it. ==== qutebrowser/browser/qutescheme.py - Remove now unneeded type ignore for QUrl flags. ==== qutebrowser/browser/urlmarks.py - Specify the type of UrlMarkManager._lineparser, as those only get initialized in _init_lineparser of subclasses, so mypy doesn't know it's supposed to exist. ==== qutebrowser/browser/webelem.py - New casts to turn single KeyboardModifier (enum) entries into KeyboardModifiers (flags). Might not be needed anymore with Qt 6. - With that, casting the final value is now unneeded. ==== qutebrowser/browser/webengine/notification.py - Remove now unneeded type ignore for signal. - Make sure the self.sender() we get in HerbeNotificationAdapter._on_finished() is a QProcess, not just any QObject. ==== qutebrowser/browser/webengine/webenginedownloads.py - Remove now unneeded type ignores for signals. ==== qutebrowser/browser/webengine/webengineelem.py - Specify the type of WebEngineElement._tab. - Remove now unneeded type ignore for mixed flags. ==== qutebrowser/browser/webengine/webengineinspector.py - See changes to inspector.py and next commit. - Remove now unneeded type ignore for signal. ==== qutebrowser/browser/webengine/webenginequtescheme.py - Remove now unneeded type ignore for mixed flags. ==== qutebrowser/browser/webengine/webenginesettings.py - Ignore access of .setter attribute which we patch onto QWebEngineProfile. Would be nice to have a subclass or wrapper-class instead. ==== qutebrowser/browser/webengine/webenginetab.py - Specified the type of _widget members more closely than just QWidget. See browsertab.py changes for details. - Remove some now-unneeded type ignores for creating FindFlags. - Specify more concrete types for WebEngineTab members where we actually need to access WebEngine-specific attributes. - Make sure the page we get is our custom WebEnginePage subclass, not just any QWebEnginePage. This is needed because we access custom attributes on it. ==== qutebrowser/browser/webengine/webview.py - Make sure the page we get is our custom WebEnginePage subclass, not just any QWebEnginePage. This is needed because we access custom attributes on it. ==== qutebrowser/browser/webkit/network/networkreply.py - Remove now unneeded type ignores for signals. ==== qutebrowser/browser/webkit/webkitinspector.py - See changes to inspector.py and next commit. ==== qutebrowser/browser/webkit/webkittab.py - Specify the type of _widget members more closely than just QWidget. See browsertab.py changes for details. - Add a type ignore for WebKitAction because our workaround needs to treat them as ints (which is allowed by PyQt, even if not type-safe). - Add new ignores for findText calls: The text is a QString and can be None; the flags are valid despite mypy thinking they aren't (stubs regression?). - Specify the type for WebKitHistoryPrivate._history, because we access WebKit-specific attributes. See above (_widget) re this being debatable. - Make mypy aware that .currentFrame() and .frameAt() can return None (stubs regression?). - Make sure the .page() and .page().networkAccessManager() are our subclasses rather than the more generic QtWebKit objects, as we use custom attributes. - Add new type ignores for signals (stubs regression!) 
==== qutebrowser/browser/webkit/webpage.py - Make sure the .networkAccessManager() is our subclass rather than the more generic QtWebKit object, as we use custom attributes. - Replace a cast by a type ignore. The cast didn't work anymore. ==== qutebrowser/browser/webkit/webview.py - Make sure the .page() is our subclass rather than the more generic QtWebKit object, as we use custom attributes. ==== qutebrowser/commands/userscripts.py - Remove now unneeded type ignore for signal. ==== qutebrowser/completion/completer.py - Add a new _completion() getter (which ensures it actually gets the completion view) rather than accessing the .parent() directly (which could be any QObject). ==== qutebrowser/completion/completiondelegate.py - Make sure self.parent() is a CompletionView (no helper method as there is only one instance). - Remove a now-unneeded type ignore for adding QSizes. ==== qutebrowser/completion/completionwidget.py - Add a ._model() getter which ensures that we get a CompletionModel (with custom attributes) rather than Qt's .model() which can be any QAbstractItemModel (or None). - Removed a now-unneeded type ignore for OR-ing flags. ==== qutebrowser/completion/models/completionmodel.py - Remove now unneeded type ignores for signals. - Ignore a complaint about .set_pattern() not being defined. Completion categories don't share any common parent class, so it would be good to introduce a typing.Protocol for this. See #7098. ==== qutebrowser/components/misccommands.py - Removed a now-unneeded type ignore for OR-ing flags. ==== qutebrowser/components/readlinecommands.py - Make sure QApplication.instance() is a QApplication (and not just a QCoreApplication). This includes the former "not None" check. ==== qutebrowser/components/scrollcommands.py - Add basic annotation for "funcs" dict. Could have a callable protocol to specify it needs a count kwarg, see #7098. ==== qutebrowser/config/stylesheet.py - Correctly specify that stylesheet apply to QWidgets, not any QObject. - Ignore an attr-defined for obj.STYLESHEET. Perhaps could somehow teach mypy about this with overloads and protocols (stylesheet for set_register being None => STYLESHEET needs to be defined, otherwise anything goes), but perhaps not worth the troble. See #7098. ==== qutebrowser/keyinput/keyutils.py - Remove some now-unneeded type ignores and add a cast for using a single enum value as flags. Might need to look at this again with Qt 6 support. ==== qutebrowser/keyinput/modeman.py - Add a FIXME for using a TypedDict, see comments for hints.py above. ==== qutebrowser/mainwindow/mainwindow.py - Remove now-unneeded type ignores for calling with OR-ed flags. - Improve where we cast from WindowType to WindowFlags, no int needed - Use new .tab_bar() getter, see below. ==== qutebrowser/mainwindow/prompt.py - Remove now-unneeded type ignores for calling with OR-ed flags. ==== qutebrowser/mainwindow/statusbar/bar.py - Adjust type ignores around @pyqtProperty. The fact one is still needed seems like a stub regression. ==== qutebrowser/mainwindow/statusbar/command.py - Fix type for setText() override (from QLineEdit): text can be None (QString in C++). ==== qutebrowser/mainwindow/statusbar/url.py - Adjust type ignores around @pyqtProperty. The fact one is still needed seems like a stub regression. ==== qutebrowser/mainwindow/tabbedbrowser.py - Specify that TabDeque manages browser tabs, not any QWidgets. It accesses AbstractTab-specific attributes. - Make sure that the .tabBar() we get is a tabwidget.TabBar, as we access .maybe_hide. 
- Fix the annotations for stored marks: Scroll positions are a QPoint, not int. - Add _current_tab() and _tab_by_idx() wrappers for .currentWidget() and .widget(), which ensures that the return values are valid AbstractTabs (or None for _tab_by_idx). This is needed because we access AbstractTab-specific attributes. - For some places, where the tab can be None, continue using .currentTab() but add asserts. - Remove some now-unneeded [unreachable] ignores, as mypy knows about the None possibility now. ==== qutebrowser/mainwindow/tabwidget.py - Add new tab_bar() and _tab_by_idx() helpers which check that the .tabBar() and .widget() are of type TabBar and AbstractTab, respectively. - Add additional assertions where we expect ._tab_by_idx() to never be None. - Remove dead code in get_tab_fields for handling a None y scroll position. I was unable to find any place in the code where this could be set to None. - Remove some now-unneeded type ignores and casts, as mypy now knows that _type_by_idx() could be None. - Work around a strange instance where mypy complains about not being able to find the type of TabBar.drag_in_progress from TabWidget._toggle_visibility, despite it clearly being shown as a bool *inside* that class without any annotation. - Add a ._tab_widget() getter in TabBar which ensures that the .parent() is in fact a TabWidget. ==== qutebrowser/misc/crashsignal.py - Remove now unneeded type ignores for signals. ==== qutebrowser/misc/editor.py - Remove now unneeded type ignores for signals. ==== qutebrowser/misc/ipc.py - Remove now unneeded type ignores for signals. - Add new type ignores for .error() which is both a signal and a getter (stub regression?). Won't be relevant for Qt 6 anymore, as the signal was renamed to errorOccurred in 5.15. ==== qutebrowser/misc/objects.py - Make sure mypy knows that objects.app is our custom Application (with custom attributes) rather than any QApplication. ==== qutebrowser/utils/objreg.py - Ignore attr-defined for .win_id attributes. Maybe could add a typing.Protocol, but ideally, the whole objreg stuff should die one day anyways. ==== tests/unit/completion/test_completer.py - Make CompletionWidgetStub inherit from CompletionView so that it passes the new isinstance() asserts in completer.py (see above).
https://github.com/qutebrowser/qutebrowser.git
def sizeHint(self): idx = self._model().last_index() bottom = self.visualRect(idx).bottom() if bottom != -1: margins = self.contentsMargins() height = (bottom + margins.top() + margins.bottom() + 2 * self.spacing()) size = QSize(0, height) else: size = QSize(0, 0) qtutils.ensure_valid(size) return size
token_counts: 93 | file_name: downloadview.py | language: Python
qutebrowser/browser/downloadview.py
a20bb67a878b2e68abf8268c1b0a27f018d01352
repo: qutebrowser | complexity: 2
id: 148,344 | vocab_size: 27 | ast_levels: 11 | nloc: 25 | n_ast_nodes: 66 | n_identifiers: 7 | n_ast_errors: 1 | n_words: 30 | n_whitespaces: 88
fun_name: get_replica_context
[serve] Introduce `context.py` and `client.py` (#24067) Serve stores context state, including the `_INTERNAL_REPLICA_CONTEXT` and the `_global_client` in `api.py`. However, these data structures are referenced throughout the codebase, causing circular dependencies. This change introduces two new files: * `context.py` * Intended to expose process-wide state to internal Serve code as well as `api.py` * Stores the `_INTERNAL_REPLICA_CONTEXT` and the `_global_client` global variables * `client.py` * Stores the definition for the Serve `Client` object, now called the `ServeControllerClient`
https://github.com/ray-project/ray.git
def get_replica_context() -> ReplicaContext: internal_replica_context = get_internal_replica_context() if internal_replica_context is None: raise RayServeException( "`serve.get_replica_context()` " "may only be called from within a " "Ray Serve deployment." ) return internal_replica_context @PublicAPI(stability="beta")
@PublicAPI(stability="beta")
token_counts: 26 | file_name: api.py | language: Python
python/ray/serve/api.py
b51d0aa8b12ceb7ce082b69db4d2707ea52c0b69
repo: ray | complexity: 2
id: 11,925 | vocab_size: 39 | ast_levels: 11 | nloc: 23 | n_ast_nodes: 160 | n_identifiers: 18 | n_ast_errors: 0 | n_words: 53 | n_whitespaces: 181
fun_name: mixin_client_gateway_parser
feat: improve client interface (#4510) * feat: add host unpacking * feat: add tests for host unpacking * feat: raise error when duplicate parameters def in client * fix: batter url host parsing * feat: rename https to tls * feat: add deprecation warning for https arg * feat: add docs * feat: update docs * fix: type hint for classes * fix: missing renaming https tls * style: fix overload and cli autocomplete * fix: fix grammar in the docs * fix: fix grammar in the docs * fix: update docs Co-authored-by: Tobias Jacobowitz <[email protected]> * fix: update docs Co-authored-by: Tobias Jacobowitz <[email protected]> Co-authored-by: Jina Dev Bot <[email protected]> Co-authored-by: Tobias Jacobowitz <[email protected]>
https://github.com/jina-ai/jina.git
def mixin_client_gateway_parser(parser): gp = add_arg_group(parser, title='ClientGateway') _add_host(gp) _add_proxy(gp) gp.add_argument( '--port', type=int, default=helper.random_port(), help='The port of the Gateway, which the client should connect to.', ) gp.add_argument( '--tls', action='store_true', default=False, help='If set, connect to gateway using tls encryption', ) gp.add_argument( '--https', action=get_deprecation_renamed_action('--tls', _StoreTrueAction), # action='store_true', default=False, help='If set, connect to gateway using https', dest='tls', )
token_counts: 94 | file_name: remote.py | language: Python
jina/parsers/orchestrate/runtimes/remote.py
b0f839b2030b1371518082be7bf79778d6e9f88d
repo: jina | complexity: 1
id: 87,079 | vocab_size: 51 | ast_levels: 18 | nloc: 36 | n_ast_nodes: 323 | n_identifiers: 14 | n_ast_errors: 0 | n_words: 76 | n_whitespaces: 656
fun_name: test_no_dynamic_sampling_returned_from_get_on_am2_plan
feat(ds): Handle GET and PUT in project details for v2 dynamic sampling [TET-475] (#40181) Ensures that when new AM2 plan flag is enabled GET request does not return `dynamicSampling` data in response, and for PUT request guards against storing `dynamicSampling` data. Also, handles popping `dynamicSampling` data from response if a PUT request is made to update some other project fields
https://github.com/getsentry/sentry.git
def test_no_dynamic_sampling_returned_from_get_on_am2_plan(self): dynamic_sampling_data = { "rules": [ { "sampleRate": 0.7, "type": "trace", "active": True, "condition": { "op": "and", "inner": [ {"op": "eq", "name": "field1", "value": ["val"]}, {"op": "glob", "name": "field1", "value": ["val"]}, ], }, "id": 1, }, { "sampleRate": 0.8, "type": "trace", "active": True, "condition": { "op": "and", "inner": [], }, "id": 2, }, ], "next_id": 3, } self.project.update_option("sentry:dynamic_sampling", dynamic_sampling_data) self.login_as(user=self.user) with Feature({"organizations:dynamic-sampling-basic": True}): response = self.get_success_response( self.organization.slug, self.project.slug, method="get" ) assert "dynamicSampling" not in response.data
token_counts: 180 | file_name: test_project_details.py | language: Python
tests/sentry/api/endpoints/test_project_details.py
8c51b98545d71ed7ef0b3b924db13461e924023a
repo: sentry | complexity: 1
id: 291,083 | vocab_size: 48 | ast_levels: 11 | nloc: 8 | n_ast_nodes: 92 | n_identifiers: 11 | n_ast_errors: 0 | n_words: 56 | n_whitespaces: 147
fun_name: _async_stop_scanner
Accept advertisements from alternate scanners when a scanner stops scanning (#82448)
https://github.com/home-assistant/core.git
async def _async_stop_scanner(self) -> None: self.scanning = False _LOGGER.debug("%s: Stopping bluetooth discovery", self.name) try: await self.scanner.stop() # type: ignore[no-untyped-call] except BleakError as ex: # This is not fatal, and they may want to reload # the config entry to restart the scanner if they # change the bluetooth dongle. _LOGGER.error("%s: Error stopping scanner: %s", self.name, ex)
token_counts: 50 | file_name: scanner.py | language: Python
homeassistant/components/bluetooth/scanner.py
a7caa038be2c0b05b53756ba6c9563854b2ca1ea
repo: core | complexity: 2
id: 249,561 | vocab_size: 65 | ast_levels: 14 | nloc: 56 | n_ast_nodes: 456 | n_identifiers: 37 | n_ast_errors: 0 | n_words: 116 | n_whitespaces: 777
fun_name: _generate_room
Persist CreateRoom events to DB in a batch (#13800)
https://github.com/matrix-org/synapse.git
def _generate_room(self) -> Tuple[str, List[Set[str]]]: room_id = self.helper.create_room_as(self.user_id, tok=self.token) # Mark the room as not having a chain cover index self.get_success( self.store.db_pool.simple_update( table="rooms", keyvalues={"room_id": room_id}, updatevalues={"has_auth_chain_index": False}, desc="test", ) ) # Create a fork in the DAG with different events. event_handler = self.hs.get_event_creation_handler() latest_event_ids = self.get_success( self.store.get_prev_events_for_room(room_id) ) event, context = self.get_success( event_handler.create_event( self.requester, { "type": "some_state_type", "state_key": "", "content": {}, "room_id": room_id, "sender": self.user_id, }, prev_event_ids=latest_event_ids, ) ) self.get_success( event_handler.handle_new_client_event( self.requester, events_and_context=[(event, context)] ) ) state1 = set(self.get_success(context.get_current_state_ids()).values()) event, context = self.get_success( event_handler.create_event( self.requester, { "type": "some_state_type", "state_key": "", "content": {}, "room_id": room_id, "sender": self.user_id, }, prev_event_ids=latest_event_ids, ) ) self.get_success( event_handler.handle_new_client_event( self.requester, events_and_context=[(event, context)] ) ) state2 = set(self.get_success(context.get_current_state_ids()).values()) # Delete the chain cover info.
token_counts: 306 | file_name: test_event_chain.py | language: Python
tests/storage/test_event_chain.py
8ab16a92edd675453c78cfd9974081e374b0f998
repo: synapse | complexity: 1
id: 249,741 | vocab_size: 5 | ast_levels: 6 | nloc: 9 | n_ast_nodes: 17 | n_identifiers: 2 | n_ast_errors: 0 | n_words: 5 | n_whitespaces: 12
fun_name: _delete_expired_login_tokens
Save login tokens in database (#13844) * Save login tokens in database Signed-off-by: Quentin Gliech <[email protected]> * Add upgrade notes * Track login token reuse in a Prometheus metric Signed-off-by: Quentin Gliech <[email protected]>
https://github.com/matrix-org/synapse.git
async def _delete_expired_login_tokens(self) -> None:
token_counts: 41 | file_name: registration.py | language: Python
synapse/storage/databases/main/registration.py
8756d5c87efc5637da55c9e21d2a4eb2369ba693
repo: synapse | complexity: 1
id: 43,119 | vocab_size: 39 | ast_levels: 14 | nloc: 21 | n_ast_nodes: 194 | n_identifiers: 26 | n_ast_errors: 0 | n_words: 47 | n_whitespaces: 151
fun_name: load_package_data
Fix links to sources for examples (#24386) The links to example sources in exampleinclude have been broken in a number of providers and they were additionally broken by AIP-47. This PR fixes it. Fixes: #23632 Fixes: https://github.com/apache/airflow-site/issues/536
https://github.com/apache/airflow.git
def load_package_data() -> List[Dict[str, Any]]: schema = _load_schema() result = [] for provider_yaml_path in get_provider_yaml_paths(): with open(provider_yaml_path) as yaml_file: provider = yaml.safe_load(yaml_file) try: jsonschema.validate(provider, schema=schema) except jsonschema.ValidationError: raise Exception(f"Unable to parse: {provider_yaml_path}.") provider_yaml_dir = os.path.dirname(provider_yaml_path) provider['python-module'] = _filepath_to_module(provider_yaml_dir) provider['package-dir'] = provider_yaml_dir provider['system-tests-dir'] = _filepath_to_system_tests(provider_yaml_dir) result.append(provider) return result
token_counts: 112 | file_name: provider_yaml_utils.py | language: Python
docs/exts/provider_yaml_utils.py
08b675cf6642171cb1c5ddfb09607b541db70b29
repo: airflow | complexity: 3
id: 310,837 | vocab_size: 7 | ast_levels: 10 | nloc: 2 | n_ast_nodes: 44 | n_identifiers: 8 | n_ast_errors: 0 | n_words: 7 | n_whitespaces: 21
fun_name: async_turn_on
Update method names reflecting changes in UniFi library (#64817) * Update method names * Bump dependency to v30
https://github.com/home-assistant/core.git
async def async_turn_on(self, **kwargs): await self.device.set_port_poe_mode(self.client.switch_port, self.poe_mode)
token_counts: 26 | file_name: switch.py | language: Python
homeassistant/components/unifi/switch.py
76bfbbafe1ef3761c75e397c285e8057db926fe4
repo: core | complexity: 1
id: 3,770 | vocab_size: 34 | ast_levels: 13 | nloc: 23 | n_ast_nodes: 134 | n_identifiers: 7 | n_ast_errors: 0 | n_words: 35 | n_whitespaces: 261
fun_name: _filter_all_statuses
🎉 🎉 Source FB Marketing: performance and reliability fixes (#9805) * Facebook Marketing performance improvement * add comments and little refactoring * fix integration tests with the new config * improve job status handling, limit concurrency to 10 * fix campaign jobs, refactor manager * big refactoring of async jobs, support random order of slices * update source _read_incremental to hook new state logic * fix issues with timeout * remove debugging and clean up, improve retry logic * merge changes from #8234 * fix call super _read_increment * generalize batch execution, add use_batch flag * improve coverage, do some refactoring of spec * update test, remove overrides of source * add split by AdSet * add smaller insights * fix end_date < start_date case * add account_id to PK * add notes * fix new streams * fix reversed incremental stream * update spec.json for SAT * upgrade CDK and bump version Co-authored-by: Dmytro Rezchykov <[email protected]> Co-authored-by: Eugene Kulak <[email protected]>
https://github.com/airbytehq/airbyte.git
def _filter_all_statuses(self) -> MutableMapping[str, Any]: filt_values = [ "active", "archived", "completed", "limited", "not_delivering", "deleted", "not_published", "pending_review", "permanently_deleted", "recently_completed", "recently_rejected", "rejected", "scheduled", "inactive", ] return { "filtering": [ {"field": f"{self.entity_prefix}.delivery_info", "operator": "IN", "value": filt_values}, ], }
token_counts: 68 | file_name: base_streams.py | language: Python
airbyte-integrations/connectors/source-facebook-marketing/source_facebook_marketing/streams/base_streams.py
a3aae8017a0a40ff2006e2567f71dccb04c997a5
repo: airbyte | complexity: 1
id: 106,796 | vocab_size: 17 | ast_levels: 12 | nloc: 3 | n_ast_nodes: 63 | n_identifiers: 9 | n_ast_errors: 0 | n_words: 17 | n_whitespaces: 37
fun_name: get
Splitting all handlers into socket and base handlers
https://github.com/fossasia/visdom.git
def get(self): new_sub = ClientSocketWrapper(self.app) self.write(json.dumps({'success': True, 'sid': new_sub.sid})) # TODO refactor socket wrappers to one class
token_counts: 35 | file_name: socket_handlers.py | language: Python
py/visdom/server/handlers/socket_handlers.py
76185b240badc5a4134aacfd114b159d2884c041
repo: visdom | complexity: 1
id: 146,292 | vocab_size: 2 | ast_levels: 6 | nloc: 6 | n_ast_nodes: 13 | n_identifiers: 2 | n_ast_errors: 0 | n_words: 2 | n_whitespaces: 5
fun_name: test_dag_to_workflow_options
[workflow] Convert DAG to workflow (#22925) * convert DAG to a workflow * deduplicate * check duplication of steps * add test for object refs
https://github.com/ray-project/ray.git
def test_dag_to_workflow_options(workflow_start_regular_shared):
51
test_dag_to_workflow.py
Python
python/ray/workflow/tests/test_dag_to_workflow.py
0a9f966e63a1a1d38a1e291338384b1e84b3a2a9
ray
1
176,291
85
11
93
248
17
0
147
272
average_shortest_path_length
DOC: Update documentation to include callables for weight argument (#5307) Update docs to include functions as valid input for weight argument.
https://github.com/networkx/networkx.git
def average_shortest_path_length(G, weight=None, method=None):
    r
    single_source_methods = ["unweighted", "dijkstra", "bellman-ford"]
    all_pairs_methods = ["floyd-warshall", "floyd-warshall-numpy"]
    supported_methods = single_source_methods + all_pairs_methods
    if method is None:
        method = "unweighted" if weight is None else "dijkstra"
    if method not in supported_methods:
        raise ValueError(f"method not supported: {method}")

    n = len(G)
    # For the special case of the null graph, raise an exception, since
    # there are no paths in the null graph.
    if n == 0:
        msg = (
            "the null graph has no paths, thus there is no average"
            "shortest path length"
        )
        raise nx.NetworkXPointlessConcept(msg)
    # For the special case of the trivial graph, return zero immediately.
    if n == 1:
        return 0
    # Shortest path length is undefined if the graph is disconnected.
    if G.is_directed() and not nx.is_weakly_connected(G):
        raise nx.NetworkXError("Graph is not weakly connected.")
    if not G.is_directed() and not nx.is_connected(G):
        raise nx.NetworkXError("Graph is not connected.")

    # Compute all-pairs shortest paths.
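A hedged usage sketch of the behaviour the commit documents (assumes networkx is installed; the graph and weighting function below are made up for illustration): the `weight` argument may be a callable taking `(u, v, edge_attrs)` and returning the edge weight.

import networkx as nx

G = nx.path_graph(4)  # 0-1-2-3, unweighted edges

# Default: every edge counts as length 1.
print(nx.average_shortest_path_length(G))  # ~1.667

# A callable weight receives (u, v, edge_attribute_dict) and returns a number.
def double_weight(u, v, d):
    return 2 * d.get("weight", 1)

print(nx.average_shortest_path_length(G, weight=double_weight, method="dijkstra"))  # ~3.333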
239
generic.py
Python
networkx/algorithms/shortest_paths/generic.py
b5d41847b8db0c82372faf69cd3a339d11da7ef0
networkx
16
20,351
89
16
31
318
37
0
144
677
_create_drawables
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def _create_drawables(self, tokensource):
    lineno = charno = maxcharno = 0
    maxlinelength = linelength = 0
    for ttype, value in tokensource:
        while ttype not in self.styles:
            ttype = ttype.parent
        style = self.styles[ttype]
        # TODO: make sure tab expansion happens earlier in the chain.  It
        # really ought to be done on the input, as to do it right here is
        # quite complex.
        value = value.expandtabs(4)
        lines = value.splitlines(True)
        # print lines
        for i, line in enumerate(lines):
            temp = line.rstrip('\n')
            if temp:
                self._draw_text(
                    self._get_text_pos(linelength, lineno),
                    temp,
                    font = self._get_style_font(style),
                    text_fg = self._get_text_color(style),
                    text_bg = self._get_text_bg_color(style),
                )
                temp_width, temp_hight = self.fonts.get_text_size(temp)
                linelength += temp_width
                maxlinelength = max(maxlinelength, linelength)
                charno += len(temp)
                maxcharno = max(maxcharno, charno)
            if line.endswith('\n'):
                # add a line for each extra line in the value
                linelength = 0
                charno = 0
                lineno += 1
    self.maxlinelength = maxlinelength
    self.maxcharno = maxcharno
    self.maxlineno = lineno
197
img.py
Python
pipenv/patched/notpip/_vendor/pygments/formatters/img.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
6
320,950
74
18
36
445
48
0
95
312
build_sdist
build-release: Modernize pathlib, type annotations, modern syntax, dataclasses
https://github.com/qutebrowser/qutebrowser.git
def build_sdist() -> List[Artifact]:
    utils.print_title("Building sdist")

    dist_path = pathlib.Path('dist')
    _maybe_remove(dist_path)

    subprocess.run([sys.executable, '-m', 'build'], check=True)

    dist_files = list(dist_path.glob('*.tar.gz'))
    filename = f'qutebrowser-{qutebrowser.__version__}.tar.gz'
    assert dist_files == [dist_path / filename], dist_files
    dist_file = dist_files[0]

    subprocess.run(['gpg', '--detach-sign', '-a', str(dist_file)], check=True)

    by_ext = collections.defaultdict(list)

    with tarfile.open(dist_file) as tar:
        for tarinfo in tar.getmembers():
            if not tarinfo.isfile():
                continue
            path = pathlib.Path(*pathlib.Path(tarinfo.name).parts[1:])
            by_ext[path.suffix].append(path)

    assert '.pyc' not in by_ext

    utils.print_title("sdist contents")
    for ext, paths in sorted(by_ext.items()):
        utils.print_subtitle(ext)
        print('\n'.join(str(p) for p in paths))

    artifacts = [
        Artifact(
            path=dist_file,
            mimetype='application/gzip',
            description='Source release',
        ),
        Artifact(
            path=dist_file.with_suffix(dist_file.suffix + '.asc'),
            mimetype='application/pgp-signature',
            description='Source release - PGP signature',
        ),
    ]
    return artifacts
261
build_release.py
Python
scripts/dev/build_release.py
611a6d5cb2f15182e14c2249d1b5cedc44385878
qutebrowser
5
81,074
40
12
12
167
18
0
52
167
pseudo_build_inventory
Merge pull request #5784 from ansible/runner_changes_42 (#12083)
https://github.com/ansible/awx.git
def pseudo_build_inventory(self, inventory_update, private_data_dir):
    src = inventory_update.source
    injector = None
    if inventory_update.source in InventorySource.injectors:
        injector = InventorySource.injectors[src]()

    if injector is not None:
        content = injector.inventory_contents(inventory_update, private_data_dir)
        # must be a statically named file
        self.write_private_data_file(private_data_dir, injector.filename, content, 'inventory', 0o700)
        rel_path = os.path.join('inventory', injector.filename)
    elif src == 'scm':
        rel_path = os.path.join('project', inventory_update.source_path)

    return rel_path
104
jobs.py
Python
awx/main/tasks/jobs.py
a0ccc8c92583db0a8cf8e36e06ca631c65fdaaec
awx
4
288,029
8
8
3
35
5
0
8
22
sw_version
Refactor apcupsd to use config flow (#64809) * Add Config Flow to APCUPSd integration and remove YAML support. * Hide the binary sensor if user does not select STATFLAG resource. * Add tests for config flows. * Simplify config flow code. * Spell fix. * Fix pylint warnings. * Simplify the code for config flow. * First attempt to implement import flows to suppport legacy YAML configurations. * Remove unnecessary log calls. * Wrap synchronous update call with `hass.async_add_executor_job`. * Import the YAML configurations when sensor platform is set up. * Move the logger call since the variables are not properly set up. * Add codeowner. * Fix name field of manifest.json. * Fix linting issue. * Fix incorrect dependency due to incorrect rebase. * Update codeowner and config flows via hassfest. * Postpone the deprecation warning to 2022.7. * Import future annotations for init file. * Add an newline at the end to make prettier happy. * Update github id. * Add type hints for return types of steps in config flow. * Move the deprecation date for YAML config to 2022.12. * Update according to reviews. * Use async_forward_entry_setups. * Add helper properties to `APCUPSdData` class. * Add device_info for binary sensor. * Simplify config flow. * Remove options flow strings. * update the tests according to the changes. * Add `entity_registry_enabled_default` to entities and use imported CONF_RESOURCES to disable entities instead of skipping them. * Update according to reviews. * Do not use model of the UPS as the title for the integration. Instead, simply use "APCUPSd" as the integration title and let the device info serve as title for each device instead. * Change schema to be a global variable. * Add more comments. * Rewrite the tests for config flows. * Fix enabled_by_default. * Show friendly titles in the integration. * Add import check in `async_setup_platform` to avoid importing in sensor platform setup. * Add import check in `async_setup_platform` to avoid importing in sensor platform setup. * Update comments in test files. * Use parametrize instead of manually iterating different test cases. * Swap the order of the platform constants. * Avoid using broad exceptions. * Set up device info via `_attr_device_info`. * Remove unrelated test in `test_config_flow`. * Use `DeviceInfo` instead of dict to assign to `_attr_device_info`. * Add english translation. * Add `async_create_issue` for deprecated YAML configuration. * Enable UPS status by default since it could show "online, charging, on battery etc" which is meaningful for all users. * Apply suggestions from code review * Apply suggestion * Apply suggestion Co-authored-by: Martin Hjelmare <[email protected]>
https://github.com/home-assistant/core.git
def sw_version(self) -> str | None:
    return self.status.get("VERSION")
19
__init__.py
Python
homeassistant/components/apcupsd/__init__.py
52307708c843b947a2d631f2fe7ddaa8bd9a90d7
core
1
26,039
27
10
9
126
19
0
35
71
test_query_pages_by_staff_no_perm
Allow fetching unpublished pages by app with manage pages permission (#9181) * Allow fetching unpublished pages by app with manage pages permission * Update changelog
https://github.com/saleor/saleor.git
def test_query_pages_by_staff_no_perm(staff_api_client, page_list, page):
    # given
    unpublished_page = page
    unpublished_page.is_published = False
    unpublished_page.save(update_fields=["is_published"])

    page_count = Page.objects.count()

    # when
    response = staff_api_client.post_graphql(PAGES_QUERY)

    # then
    content = get_graphql_content(response)
    data = content["data"]["pages"]["edges"]
    assert len(data) == page_count - 1
72
test_pages.py
Python
saleor/graphql/page/tests/queries/test_pages.py
098ff7b495ff9d37242ecdd38d9b08bfddb2cd19
saleor
1
165,784
23
11
8
80
10
0
27
95
mid
TYP: fix mid and length for Interval and Intervalarray (#46472)
https://github.com/pandas-dev/pandas.git
def mid(self) -> Index:
    try:
        return 0.5 * (self.left + self.right)
    except TypeError:
        # datetime safe version
        return self.left + 0.5 * self.length


_interval_shared_docs["overlaps"] = textwrap.dedent(
)
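A small usage sketch (assumes pandas is installed) of the behaviour the typing change covers: `mid` returns an Index of interval midpoints, with the `except TypeError` branch handling datetime-like intervals.

import pandas as pd

idx = pd.interval_range(start=0, end=4)  # (0, 1], (1, 2], (2, 3], (3, 4]
print(idx.mid)  # midpoints 0.5, 1.5, 2.5, 3.5

# Datetime-like intervals take the left + 0.5 * length path.
didx = pd.interval_range(start=pd.Timestamp("2022-01-01"), periods=2, freq="D")
print(didx.mid)  # midday of each daily interval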
39
interval.py
Python
pandas/core/arrays/interval.py
6d7e004b1fc69942390d953bf21098a786c12c92
pandas
2
44,105
27
13
10
113
13
0
36
142
setdefault
Remove `:type` lines now sphinx-autoapi supports typehints (#20951) * Remove `:type` lines now sphinx-autoapi supports typehints Since we have no updated sphinx-autoapi to a more recent version it supports showing type hints in the documentation, so we don't need to have the type hints _and_ the `:type` lines -- which is good, as the ones in the doc strings are easy to get out of date! The following settings have been set: `autodoc_typehints = 'description'` -- show types in description (where previous `:type` used to show up) `autodoc_typehints_description_target = 'documented'` -- only link to types that are documented. (Without this we have some missing return types that aren't documented, and aren't linked to in our current python API docs, so this caused a build failure) `autodoc_typehints_format = 'short'` -- Shorten type hints where possible, i.e. `StringIO` instead of `io.StringIO` * Add argument type names to local spelling dictionary Now that we are using the type hints in the docs, sphinxcontrib-spelling picks them up as words to be checked, so we have to ignore them. I've chosen to add the provider specific ones to local dictionary files rather than the global, as for example, `mgmt` is an error in most places, but not in some of the Azure provider.
https://github.com/apache/airflow.git
def setdefault(cls, key, default, description=None, deserialize_json=False):
    obj = Variable.get(key, default_var=None, deserialize_json=deserialize_json)
    if obj is None:
        if default is not None:
            Variable.set(key, default, description=description, serialize_json=deserialize_json)
            return default
        else:
            raise ValueError('Default Value must be set')
    else:
        return obj
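A hedged usage sketch (assumes a configured Airflow metadata database; the variable key below is made up for illustration): the default is only written on the first call, and later calls return whatever is already stored.

from airflow.models import Variable

# First call: the key does not exist, so the default is stored (serialized as JSON here) and returned.
flags = Variable.setdefault("feature_flags", {"beta": False}, deserialize_json=True)

# Subsequent calls ignore the default argument and return the stored value.
flags = Variable.setdefault("feature_flags", {"beta": True}, deserialize_json=True)
assert flags == {"beta": False}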
74
variable.py
Python
airflow/models/variable.py
602abe8394fafe7de54df7e73af56de848cdf617
airflow
3
100,826
6
6
3
24
3
0
6
20
state
Refactoring and TravisCI to Github Actions (#1239) * refactor training * travis to actions
https://github.com/deepfakes/faceswap.git
def state(self) -> "State":
    return self._state
12
model.py
Python
plugins/train/model/_base/model.py
ff6b0209dd5ad57b81b0aca570df7f39a7119bfb
faceswap
1
183,586
19
13
12
104
10
0
29
86
_get_scrollbar_thicknesses
[css] Add "scrollbar-size" CSS properties - first step
https://github.com/Textualize/textual.git
def _get_scrollbar_thicknesses(self) -> tuple[int, int]:
    vertical_scrollbar_size = horizontal_scrollbar_size = 1
    if self.styles.scrollbar_size_vertical is not None:
        vertical_scrollbar_size = int(self.styles.scrollbar_size_vertical.value)
    if self.styles.scrollbar_size_horizontal is not None:
        horizontal_scrollbar_size = int(self.styles.scrollbar_size_horizontal.value)
    return vertical_scrollbar_size, horizontal_scrollbar_size
66
widget.py
Python
src/textual/widget.py
ee30b54828fa5212e0a30437f777b045dec4a8cc
textual
3
299,429
43
17
20
196
31
0
54
214
test_change_relay_mode
Insteon Device Control Panel (#70834) Co-authored-by: Paulus Schoutsen <[email protected]>
https://github.com/home-assistant/core.git
async def test_change_relay_mode(hass, hass_ws_client, iolinc_properties_data):
    ws_client, devices = await _setup(
        hass, hass_ws_client, "44.44.44", iolinc_properties_data
    )
    device = devices["44.44.44"]
    relay_prop = device.configuration[RELAY_MODE]
    assert relay_prop.value == RelayMode.MOMENTARY_A
    with patch.object(insteon.api.properties, "devices", devices):
        await ws_client.send_json(
            {
                ID: 2,
                TYPE: "insteon/properties/change",
                DEVICE_ADDRESS: "44.44.44",
                PROPERTY_NAME: RELAY_MODE,
                PROPERTY_VALUE: str(RelayMode.LATCHING).lower(),
            }
        )
        msg = await ws_client.receive_json()
        assert msg["success"]
        assert relay_prop.new_value == RelayMode.LATCHING
121
test_api_properties.py
Python
tests/components/insteon/test_api_properties.py
a9ca774e7ed1d8fe502a53d5b765c1d9b393a524
core
1
266,098
4
6
2
16
3
0
4
18
list_buttons
4751 Enable plugins to inject content within object list views (#10901) * 4751 add plugin buttons to list templates * 4751 add plugin buttons to list templates * 4751 add documentation * 4751 fix object reference * 4751 update docs
https://github.com/netbox-community/netbox.git
def list_buttons(self):
    raise NotImplementedError
8
templates.py
Python
netbox/extras/plugins/templates.py
27bf7b4a9add27b4f3f8b0f4fd5dfc4cfe74a65b
netbox
1
292,688
13
11
6
88
12
0
16
51
poll_track_info
Refactor Sonos media metadata handling (#66840) Co-authored-by: Paulus Schoutsen <[email protected]>
https://github.com/home-assistant/core.git
def poll_track_info(self) -> dict[str, Any]:
    track_info = self.soco.get_current_track_info()
    track_info[DURATION_SECONDS] = _timespan_secs(track_info.get("duration"))
    track_info[POSITION_SECONDS] = _timespan_secs(track_info.get("position"))
    return track_info
52
media.py
Python
homeassistant/components/sonos/media.py
cfd763db40544c31077b46631bbdd9655581dfe9
core
1
269,503
27
12
7
110
14
0
35
71
_get_available_gpus
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def _get_available_gpus():
    if tf.compat.v1.executing_eagerly_outside_functions():
        # Returns names of devices directly.
        return [d.name for d in tf.config.list_logical_devices("GPU")]

    global _LOCAL_DEVICES
    if _LOCAL_DEVICES is None:
        _LOCAL_DEVICES = get_session().list_devices()
    return [x.name for x in _LOCAL_DEVICES if x.device_type == "GPU"]
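For context, a standalone sketch of the eager branch using only public TensorFlow APIs (assumes TensorFlow 2.x is installed; device names are machine dependent):

import tensorflow as tf

gpu_names = [d.name for d in tf.config.list_logical_devices("GPU")]
print(gpu_names)  # e.g. ['/device:GPU:0'], or [] on a CPU-only machine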
65
backend.py
Python
keras/backend.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
6
102,165
48
8
8
49
6
0
56
83
test_supported_invalid_op
Revert "Revert D32498569: allow external backend codegen to toggle whether to generate out= and inplace kernels" (#69950) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/69950 This reverts commit f6cad53443704dfe5a20cc62bee14d91e3bffcaa. Test Plan: Imported from OSS Reviewed By: albanD Differential Revision: D33113545 Pulled By: bdhirsh fbshipit-source-id: d6590294662588d36c09662dea65919ad4e1e288
https://github.com/pytorch/pytorch.git
def test_supported_invalid_op(self) -> None:
    yaml_str = 
    output_error = self.get_errors_from_gen_backend_stubs(yaml_str)
    self.assertExpectedInline(output_error, )

# The backend is valid, but doesn't have a valid autograd key. They can't override autograd kernels in that case.
# Only using Vulkan here because it has a valid backend key but not an autograd key- if this changes we can update the test.
26
test_gen_backend_stubs.py
Python
tools/test/test_gen_backend_stubs.py
bb5b4cceb6f737448eaaa6817cd773b6f4b0e77d
pytorch
1
288,149
6
6
3
26
5
0
6
20
characteristics
Add ESPHome BleakClient (#78911) Co-authored-by: Paulus Schoutsen <[email protected]>
https://github.com/home-assistant/core.git
def characteristics(self) -> list[BleakGATTCharacteristic]:
    return self.__characteristics
15
service.py
Python
homeassistant/components/esphome/bluetooth/service.py
7042d6d35be54865b1252c0b28a50cce1a92eabc
core
1
20,293
15
10
5
68
10
0
19
42
_load_formatters
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def _load_formatters(module_name):
    mod = __import__(module_name, None, None, ['__all__'])
    for formatter_name in mod.__all__:
        cls = getattr(mod, formatter_name)
        _formatter_cache[cls.name] = cls
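A quick illustration of the dynamic-import idiom used above (the module name below is only an example): passing a non-empty fromlist to `__import__` returns the named submodule itself rather than the top-level package.

mod = __import__("json.decoder", None, None, ["__all__"])
print(mod.__name__)  # 'json.decoder'
print(mod.__all__)   # the names that submodule exports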
43
__init__.py
Python
pipenv/patched/notpip/_vendor/pygments/formatters/__init__.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
2
43,853
27
13
16
165
11
0
42
146
env_vars
Test util `env_vars` to take arbitrary env vars (#20818) Currently it assumes that you will only use this for config settings (which you can already do with `conf_vars`). We should allow any kind of env var so that for example it could be used to patch an airflow conn or any other env var (which is sort of what is advertised in the function name anyway).
https://github.com/apache/airflow.git
def env_vars(overrides):
    orig_vars = {}
    new_vars = []
    for env, value in overrides.items():
        if env in os.environ:
            orig_vars[env] = os.environ.pop(env, '')
        else:
            new_vars.append(env)
        os.environ[env] = value
    try:
        yield
    finally:
        for env, value in orig_vars.items():
            os.environ[env] = value
        for env in new_vars:
            os.environ.pop(env)
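A hedged usage sketch, assuming the generator is wrapped with `contextlib.contextmanager` (the decorator is not shown in this record) and using a made-up variable name:

import os

with env_vars({"AIRFLOW_CONN_MY_HTTP": "http://example.com"}):
    assert os.environ["AIRFLOW_CONN_MY_HTTP"] == "http://example.com"

# On exit, pre-existing values are restored and newly added keys are removed.
assert "AIRFLOW_CONN_MY_HTTP" not in os.environ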
100
config.py
Python
tests/test_utils/config.py
17a594f8cbb7ff07dff4e7b6b3797d98ec5a9ac5
airflow
6
294,231
57
13
18
203
21
0
74
270
async_scheduled_update_request
Motion request update till stop (#68580) * update untill stop * fixes * fix spelling
https://github.com/home-assistant/core.git
async def async_scheduled_update_request(self, *_):
    # add the last position to the list and keep the list at max 2 items
    self._previous_positions.append(self.current_cover_position)
    if len(self._previous_positions) > 2:
        del self._previous_positions[: len(self._previous_positions) - 2]

    await self.hass.async_add_executor_job(self._blind.Update_trigger)
    self.async_write_ha_state()

    if len(self._previous_positions) < 2 or not all(
        self.current_cover_position == prev_position
        for prev_position in self._previous_positions
    ):
        # keep updating the position @UPDATE_INTERVAL_MOVING until the position does not change.
        async_track_point_in_time(
            self.hass,
            self.async_scheduled_update_request,
            dt_util.utcnow() + timedelta(seconds=UPDATE_INTERVAL_MOVING),
        )
    else:
        self._previous_positions = []
        self._requesting_position = False
125
cover.py
Python
homeassistant/components/motion_blinds/cover.py
83983bc875445d7147cb98e70f1214c6ed270da9
core
5
113,930
671
32
451
5,598
186
0
2,169
13,762
_parse_query
Better error messaging in mysql api (#1911) * Better error messaging in mysql api
https://github.com/mindsdb/mindsdb.git
def _parse_query(self, sql): mindsdb_sql_struct = parse_sql(sql, dialect='mindsdb') # is it query to 'predictors'? if ( isinstance(mindsdb_sql_struct.from_table, Identifier) and mindsdb_sql_struct.from_table.parts[-1].lower() == 'predictors' and ( self.database == 'mindsdb' or mindsdb_sql_struct.from_table.parts[0].lower() == 'mindsdb' ) ): dn = self.datahub.get(self.mindsdb_database_name) data, columns = dn.get_predictors(mindsdb_sql_struct) table_name = ('mindsdb', 'predictors', 'predictors') data = [{(key, key): value for key, value in row.items()} for row in data] data = [{table_name: x} for x in data] self.columns_list = [ (table_name + (column_name, column_name)) for column_name in columns ] columns = [(column_name, column_name) for column_name in columns] self.fetched_data = { 'values': data, 'columns': {table_name: columns}, 'tables': [table_name] } return # is it query to 'commands'? if ( isinstance(mindsdb_sql_struct.from_table, Identifier) and mindsdb_sql_struct.from_table.parts[-1].lower() == 'commands' and ( self.database == 'mindsdb' or mindsdb_sql_struct.from_table.parts[0].lower() == 'mindsdb' ) ): self.fetched_data = { 'values': [], 'columns': {('mindsdb', 'commands', 'commands'): [('command', 'command')]}, 'tables': [('mindsdb', 'commands', 'commands')] } self.columns_list = [('mindsdb', 'commands', 'commands', 'command', 'command')] return # is it query to 'datasources'? if ( isinstance(mindsdb_sql_struct.from_table, Identifier) and mindsdb_sql_struct.from_table.parts[-1].lower() == 'datasources' and ( self.database == 'mindsdb' or mindsdb_sql_struct.from_table.parts[0].lower() == 'mindsdb' ) ): dn = self.datahub.get(self.mindsdb_database_name) data, columns = dn.get_datasources(mindsdb_sql_struct) table_name = ('mindsdb', 'datasources', 'datasources') data = [{(key, key): value for key, value in row.items()} for row in data] data = [{table_name: x} for x in data] self.columns_list = [ (table_name + (column_name, column_name)) for column_name in columns ] columns = [(column_name, column_name) for column_name in columns] self.fetched_data = { 'values': data, 'columns': {table_name: columns}, 'tables': [table_name] } return integrations_names = self.datahub.get_datasources_names() integrations_names.append('information_schema') integrations_names.append('files') all_tables = get_all_tables(mindsdb_sql_struct) predictor_metadata = {} predictors = db.session.query(db.Predictor).filter_by(company_id=self.session.company_id) for model_name in set(all_tables): for p in predictors: if p.name == model_name: if isinstance(p.data, dict) and 'error' not in p.data: ts_settings = p.learn_args.get('timeseries_settings', {}) if ts_settings.get('is_timeseries') is True: window = ts_settings.get('window') order_by = ts_settings.get('order_by')[0] group_by = ts_settings.get('group_by') if isinstance(group_by, list) is False and group_by is not None: group_by = [group_by] predictor_metadata[model_name] = { 'timeseries': True, 'window': window, 'horizon': ts_settings.get('horizon'), 'order_by_column': order_by, 'group_by_columns': group_by } else: predictor_metadata[model_name] = { 'timeseries': False } self.model_types.update(p.data.get('dtypes', {})) plan = plan_query( mindsdb_sql_struct, integrations=integrations_names, predictor_namespace=self.mindsdb_database_name, predictor_metadata=predictor_metadata, default_namespace=self.database ) steps_data = [] for step in plan.steps: data = [] if type(step) == GetPredictorColumns: predictor_name = step.predictor.parts[-1] dn = 
self.datahub.get(self.mindsdb_database_name) columns = dn.get_table_columns(predictor_name) columns = [ (column_name, column_name) for column_name in columns ] data = { 'values': [], 'columns': { (self.mindsdb_database_name, predictor_name, predictor_name): columns }, 'tables': [(self.mindsdb_database_name, predictor_name, predictor_name)] } elif type(step) == FetchDataframeStep: data = self._fetch_dataframe_step(step) elif type(step) == UnionStep: raise ErNotSupportedYet('Union step is not implemented') # TODO add union support # left_data = steps_data[step.left.step_num] # right_data = steps_data[step.right.step_num] # data = left_data + right_data elif type(step) == MapReduceStep: try: if step.reduce != 'union': raise Exception(f'Unknown MapReduceStep type: {step.reduce}') step_data = steps_data[step.values.step_num] vars = {} step_data_values = step_data['values'] for row in step_data_values: for row_data in row.values(): for name, value in row_data.items(): if name[0] != '__mindsdb_row_id': vars[name[1] or name[0]] = value data = { 'values': [], 'columns': {}, 'tables': [] } substep = step.step if substep == FetchDataframeStep: query = substep.query markQueryVar(query.where) for name, value in vars.items(): replaceQueryVar(query.where, value, name) sub_data = self._fetch_dataframe_step(substep) if len(data['columns']) == 0: data['columns'] = sub_data['columns'] if len(data['tables']) == 0: data['tables'] = sub_data['tables'] data['values'].extend(sub_data['values']) elif substep == MultipleSteps: data = self._multiple_steps_reduce(substep, vars) else: raise Exception(f'Unknown step type: {step.step}') except Exception as e: raise SqlApiException("error in map reduce step") from e elif type(step) == ApplyPredictorRowStep: try: predictor = '.'.join(step.predictor.parts) dn = self.datahub.get(self.mindsdb_database_name) where_data = step.row_dict data = dn.select( table=predictor, columns=None, where_data=where_data, integration_name=self.session.integration, integration_type=self.session.integration_type ) data = [{(key, key): value for key, value in row.items()} for row in data] table_name = get_preditor_alias(step, self.database) values = [{table_name: x} for x in data] columns = {table_name: []} if len(data) > 0: row = data[0] columns[table_name] = list(row.keys()) # TODO else data = { 'values': values, 'columns': columns, 'tables': [table_name] } except Exception as e: raise SqlApiException("error in apply predictor row step.") from e elif type(step) in (ApplyPredictorStep, ApplyTimeseriesPredictorStep): try: dn = self.datahub.get(self.mindsdb_database_name) predictor = '.'.join(step.predictor.parts) where_data = [] for row in steps_data[step.dataframe.step_num]['values']: new_row = {} for table_name in row: keys_intersection = set(new_row) & set(row[table_name]) if len(keys_intersection) > 0: raise Exception( f'The predictor got two identical keys from different datasources: {keys_intersection}' ) new_row.update(row[table_name]) where_data.append(new_row) where_data = [{key[1]: value for key, value in row.items()} for row in where_data] is_timeseries = predictor_metadata[predictor]['timeseries'] _mdb_make_predictions = None if is_timeseries: if 'LATEST' in self.raw: _mdb_make_predictions = False else: _mdb_make_predictions = True for row in where_data: if '__mdb_make_predictions' not in row: row['__mdb_make_predictions'] = _mdb_make_predictions for row in where_data: for key in row: if isinstance(row[key], datetime.date): row[key] = str(row[key]) data = dn.select( table=predictor, 
columns=None, where_data=where_data, integration_name=self.session.integration, integration_type=self.session.integration_type ) # if is_timeseries: # if 'LATEST' not in self.raw: # # remove additional records from predictor results: # # first 'window_size' and last 'horizon' records # # otherwise there are many unxpected rows in prediciton result: # # ---------------------------------------------------------------------------------------- # # mysql> SELECT tb.time, tb.state, tb.pnew_case, tb.new_case from # # MYSQL_LOCAL.test_data.covid AS # # ta JOIN mindsdb.covid_hor3 AS tb # # WHERE ta.state = "CA" AND ta.time BETWEEN "2020-10-19" AND "2020-10-20"; # # ---------------------------------------------------------------------------------------- # # +------------+-------+-----------+----------+ # # | time | state | pnew_case | new_case | # # +------------+-------+-----------+----------+ # # | 2020-10-09 | CA | 0 | 2862 | # # | 2020-10-10 | CA | 0 | 2979 | # # | 2020-10-11 | CA | 0 | 3075 | # # | 2020-10-12 | CA | 0 | 3329 | # # | 2020-10-13 | CA | 0 | 2666 | # # | 2020-10-14 | CA | 0 | 2378 | # # | 2020-10-15 | CA | 0 | 3449 | # # | 2020-10-16 | CA | 0 | 3803 | # # | 2020-10-17 | CA | 0 | 4170 | # # | 2020-10-18 | CA | 0 | 3806 | # # | 2020-10-19 | CA | 0 | 3286 | # # | 2020-10-20 | CA | 0 | 3474 | # # | 2020-10-21 | CA | 0 | 3474 | # # | 2020-10-22 | CA | 0 | 3474 | # # +------------+-------+-----------+----------+ # # 14 rows in set (2.52 sec) # window_size = predictor_metadata[predictor]['window'] # horizon = predictor_metadata[predictor]['horizon'] # if len(data) >= (window_size + horizon): # data = data[window_size:] # if len(data) > horizon and horizon > 1: # data = data[:-horizon + 1] data = [{(key, key): value for key, value in row.items()} for row in data] table_name = get_preditor_alias(step, self.database) values = [{table_name: x} for x in data] columns = {table_name: []} if len(data) > 0: row = data[0] columns[table_name] = list(row.keys()) # TODO else data = { 'values': values, 'columns': columns, 'tables': [table_name] } except Exception as e: raise SqlApiException("error in apply predictor step") from e elif type(step) == JoinStep: try: left_data = steps_data[step.left.step_num] right_data = steps_data[step.right.step_num] # FIXME https://github.com/mindsdb/mindsdb_sql/issues/136 is_timeseries = False if True in [type(step) == ApplyTimeseriesPredictorStep for step in plan.steps]: right_data = steps_data[step.left.step_num] left_data = steps_data[step.right.step_num] is_timeseries = True if step.query.condition is not None: raise Exception('At this moment supported only JOIN without condition') if step.query.join_type.upper() not in ('LEFT JOIN', 'JOIN'): raise Exception('At this moment supported only JOIN and LEFT JOIN') if ( len(left_data['tables']) != 1 or len(right_data['tables']) != 1 or left_data['tables'][0] == right_data['tables'][0] ): raise Exception('At this moment supported only JOIN of two different tables') data = { 'values': [], 'columns': {}, 'tables': list(set(left_data['tables'] + right_data['tables'])) } for data_part in [left_data, right_data]: for table_name in data_part['columns']: if table_name not in data['columns']: data['columns'][table_name] = data_part['columns'][table_name] else: data['columns'][table_name].extend(data_part['columns'][table_name]) for table_name in data['columns']: data['columns'][table_name] = list(set(data['columns'][table_name])) left_key = left_data['tables'][0] right_key = right_data['tables'][0] left_columns_map = {} 
left_columns_map_reverse = {} for i, column_name in enumerate(left_data['columns'][left_key]): left_columns_map[f'a{i}'] = column_name left_columns_map_reverse[column_name] = f'a{i}' right_columns_map = {} right_columns_map_reverse = {} for i, column_name in enumerate(right_data['columns'][right_key]): right_columns_map[f'b{i}'] = column_name right_columns_map_reverse[column_name] = f'b{i}' left_df_data = [] for row in left_data['values']: row = row[left_key] left_df_data.append({left_columns_map_reverse[key]: value for key, value in row.items()}) right_df_data = [] for row in right_data['values']: row = row[right_key] right_df_data.append({right_columns_map_reverse[key]: value for key, value in row.items()}) df_a = pd.DataFrame(left_df_data) df_b = pd.DataFrame(right_df_data) a_name = f'a{round(time.time()*1000)}' b_name = f'b{round(time.time()*1000)}' con = duckdb.connect(database=':memory:') con.register(a_name, df_a) con.register(b_name, df_b) resp_df = con.execute(f).fetchdf() con.unregister(a_name) con.unregister(b_name) con.close() resp_df = resp_df.where(pd.notnull(resp_df), None) resp_dict = resp_df.to_dict(orient='records') for row in resp_dict: new_row = {left_key: {}, right_key: {}} for key, value in row.items(): if key.startswith('a'): new_row[left_key][left_columns_map[key]] = value else: new_row[right_key][right_columns_map[key]] = value data['values'].append(new_row) # remove all records with empty data from predictor from join result # otherwise there are emtpy records in the final result: # +------------+------------+-------+-----------+----------+ # | time | time | state | pnew_case | new_case | # +------------+------------+-------+-----------+----------+ # | 2020-10-21 | 2020-10-24 | CA | 0.0 | 5945.0 | # | 2020-10-22 | 2020-10-23 | CA | 0.0 | 6141.0 | # | 2020-10-23 | 2020-10-22 | CA | 0.0 | 2940.0 | # | 2020-10-24 | 2020-10-21 | CA | 0.0 | 3707.0 | # | NULL | 2020-10-20 | NULL | nan | nan | # | NULL | 2020-10-19 | NULL | nan | nan | # | NULL | 2020-10-18 | NULL | nan | nan | # | NULL | 2020-10-17 | NULL | nan | nan | # | NULL | 2020-10-16 | NULL | nan | nan | # +------------+------------+-------+-----------+----------+ # 9 rows in set (2.07 sec) # if is_timeseries: # data_values = [] # for row in data['values']: # for key in row: # if 'mindsdb' in key: # if not is_empty_prediction_row(row[key]): # data_values.append(row) # break # data['values'] = data_values except Exception as e: raise SqlApiException("error in join step") from e elif type(step) == FilterStep: raise ErNotSupportedYet('FilterStep is not implemented') # elif type(step) == ApplyTimeseriesPredictorStep: # raise Exception('ApplyTimeseriesPredictorStep is not implemented') elif type(step) == LimitOffsetStep: try: step_data = steps_data[step.dataframe.step_num] data = { 'values': step_data['values'].copy(), 'columns': step_data['columns'].copy(), 'tables': step_data['tables'].copy() } if isinstance(step.offset, Constant) and isinstance(step.offset.value, int): data['values'] = data['values'][step.offset.value:] if isinstance(step.limit, Constant) and isinstance(step.limit.value, int): data['values'] = data['values'][:step.limit.value] except Exception as e: raise SqlApiException("error in limit offset step") from e elif type(step) == ProjectStep: try: step_data = steps_data[step.dataframe.step_num] columns_list = [] for column_full_name in step.columns: table_name = None if type(column_full_name) == Star: for table_name, table_columns_list in step_data['columns'].items(): for column in table_columns_list: 
columns_list.append(table_name + column) elif type(column_full_name) == Identifier: column_name_parts = column_full_name.parts column_alias = None if column_full_name.alias is None else '.'.join(column_full_name.alias.parts) if len(column_name_parts) > 2: raise Exception(f'Column name must contain no more than 2 parts. Got name: {".".join(column_full_name)}') elif len(column_name_parts) == 1: column_name = column_name_parts[0] appropriate_table = None if len(step_data['tables']) == 1: appropriate_table = step_data['tables'][0] else: for table_name, table_columns in step_data['columns'].items(): if (column_name, column_name) in table_columns: if appropriate_table is not None: raise Exception('Found multiple appropriate tables for column {column_name}') else: appropriate_table = table_name if appropriate_table is None: # it is probably constaint # FIXME https://github.com/mindsdb/mindsdb_sql/issues/133 # column_name = column_name.strip("'") # name_or_alias = column_alias or column_name # column_alias = name_or_alias # for row in step_data['values']: # for table in row: # row[table][(column_name, name_or_alias)] = row[table][(column_name, column_name)] # appropriate_table = step_data['tables'][0] columns_list.append(appropriate_table + (column_alias, column_alias)) else: columns_list.append(appropriate_table + (column_name, column_alias or column_name)) # column_name elif len(column_name_parts) == 2: table_name_or_alias = column_name_parts[0] column_name = column_name_parts[1] appropriate_table = None for table_name, table_columns in step_data['columns'].items(): checkig_table_name_or_alias = table_name[2] or table_name[1] if table_name_or_alias == checkig_table_name_or_alias: for table_column_name in table_columns: if ( table_column_name[1] == column_name or table_column_name[1] is None and table_column_name[0] == column_name ): break else: raise Exception(f'Can not find column "{column_name}" in table "{table_name}"') appropriate_table = table_name break if appropriate_table is None: raise Exception(f'Can not find approproate table for column {column_name}') columns_to_copy = None for column in step_data['columns'][appropriate_table]: if column[0] == column_name and (column[1] is None or column[1] == column_name): columns_to_copy = column break else: raise Exception(f'Can not find approproate column in data: {(column_name, column_alias)}') for row in step_data['values']: row[appropriate_table][(column_name, column_alias)] = row[appropriate_table][columns_to_copy] columns_list.append(appropriate_table + (column_name, column_alias)) else: raise Exception('Undefined column name') else: raise Exception(f'Unexpected column name type: {column_full_name}') self.columns_list = columns_list data = step_data except Exception as e: raise SqlApiException("error on project step") from e else: raise SqlApiException(F'Unknown planner step: {step}') steps_data.append(data) try: if self.outer_query is not None: data = [] # +++ result = [] for row in steps_data[-1]: data_row = {} for column_record in self.columns_list: table_name = column_record[:3] column_name = column_record[3] data_row[column_record[4] or column_record[3]] = row[table_name][column_name] result.append(data_row) # --- data = self._make_list_result_view(result) df = pd.DataFrame(data) result = query_df(df, self.outer_query) try: self.columns_list = [ ('', '', '', x, x) for x in result.columns ] except Exception: self.columns_list = [ ('', '', '', result.name, result.name) ] # +++ make list result view new_result = [] for row in 
result.to_dict(orient='records'): data_row = [] for column_record in self.columns_list: column_name = column_record[4] or column_record[3] data_row.append(row.get(column_name)) new_result.append(data_row) result = new_result # --- self.fetched_data = result else: self.fetched_data = steps_data[-1] except Exception as e: raise SqlApiException("error in preparing result quiery step") from e try: if hasattr(self, 'columns_list') is False: self.columns_list = [] for row in self.fetched_data: for table_key in row: for column_name in row[table_key]: if (table_key + (column_name, column_name)) not in self.columns_list: self.columns_list.append((table_key + (column_name, column_name))) # if there was no 'ProjectStep', then get columns list from last step: if self.columns_list is None: self.columns_list = [] for table_name in self.fetched_data['columns']: self.columns_list.extend([ table_name + column for column in self.fetched_data['columns'][table_name] ]) self.columns_list = [x for x in self.columns_list if x[3] != '__mindsdb_row_id'] except Exception as e: raise SqlApiException("error in column list step") from e
3,320
sql_query.py
Python
mindsdb/api/mysql/mysql_proxy/classes/sql_query.py
10a5300838e4ae45d42495fdf53d76c702f66518
mindsdb
153
280,886
18
12
29
188
11
0
22
64
print_help
Mutual funds (#1106) * Add mutual fund menu based primarily on investpy (investment.com) * Add start/end date for loading + some bug fixes * hugo * poetry add investpy rich dnspython then poetry update * fix reset (need to load country) and dim yfinance commands when country not US * Fix search and add better error handling * Catch yfinance commands when country not US. Improve loading to change country if symbol found but not in matching country. Changed main menu printing. * Fix poetry / reqs merge conflict * review comments * Once again update poetry * Clean up poetry to match conda (numpy was culprit) * cvxpy down * Think i goofed on generating reqs full * update poetry metadata * cleanup after review 2 * move space * another deps try * bump pygmnets in yaml * try something different * rename console -> t_console (terminal_console) for mypy
https://github.com/OpenBB-finance/OpenBBTerminal.git
def print_help(self):
    fund_string = f"{self.fund_name or None}"
    fund_string2 = f" ({self.fund_symbol})" if self.fund_symbol else ""
    fund_string += fund_string2
    help_str = f
    t_console.print(help_str)
33
mutual_fund_controller.py
Python
gamestonk_terminal/mutual_funds/mutual_fund_controller.py
e437f29ae498124b34a401015b001cb6c27c1e93
OpenBBTerminal
2
186,460
47
12
15
170
23
0
52
210
test_ssl_config_files_hash_in_all_hashes
Improve assertions in certbot-apache tests. (#9131) * Improve assertions in certbot-apache tests. Replacements inspired by flake8-assertive. * Fix test failures * assertEqual is not for None :D * Pass all tests :)
https://github.com/certbot/certbot.git
def test_ssl_config_files_hash_in_all_hashes(self):
    from certbot_apache._internal.constants import ALL_SSL_OPTIONS_HASHES
    import pkg_resources

    tls_configs_dir = pkg_resources.resource_filename(
        "certbot_apache", os.path.join("_internal", "tls_configs"))
    all_files = [os.path.join(tls_configs_dir, name) for name in os.listdir(tls_configs_dir)
                 if name.endswith('options-ssl-apache.conf')]
    self.assertGreaterEqual(len(all_files), 1)
    for one_file in all_files:
        file_hash = crypto_util.sha256sum(one_file)
        self.assertIn(
            file_hash, ALL_SSL_OPTIONS_HASHES,
            f"Constants.ALL_SSL_OPTIONS_HASHES must be appended with the sha256 "
            f"hash of {one_file} when it is updated."
        )
102
configurator_test.py
Python
certbot-apache/tests/configurator_test.py
a0dbe1e85035f12e194d91148836d830871ec554
certbot
4
118,736
15
8
2
27
4
0
15
32
container
Replace static apps with live Cloud apps (#4317) Co-authored-by: kajarenc <[email protected]>
https://github.com/streamlit/streamlit.git
def container(self):
    return self.dg._block()

# TODO: Enforce that columns are not nested or in Sidebar
14
layouts.py
Python
lib/streamlit/elements/layouts.py
72703b38029f9358a0ec7ca5ed875a6b438ece19
streamlit
1
268,847
23
11
12
148
11
1
29
54
_name_scope_unnester
Fix tf.name_scope support for Keras nested layers. Infer nested layer structure inside `Layer.__call__` PiperOrigin-RevId: 423989336
https://github.com/keras-team/keras.git
def _name_scope_unnester(full_name_scope):
    if not getattr(_name_scope_unnester_stack, 'value', None):
        _name_scope_unnester_stack.value = ['']

    _name_scope_unnester_stack.value.append(full_name_scope)

    try:
        full_name_scope = _name_scope_unnester_stack.value[-1]
        outer_name_scope = _name_scope_unnester_stack.value[-2]
        relative_name_scope = full_name_scope.lstrip(outer_name_scope)
        relative_name_scope = relative_name_scope.lstrip('/')
        yield relative_name_scope
    finally:
        _name_scope_unnester_stack.value.pop()


@keras_export('keras.layers.Layer')
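A self-contained illustration of the push/try/finally/pop pattern above, assuming the generator is wrapped with `contextlib.contextmanager` (the decorator and the thread-local stack setup are not shown in this record); the helper below computes the relative scope by slicing rather than reproducing the Keras internals exactly.

import contextlib
import threading

_stack = threading.local()

@contextlib.contextmanager
def _unnest(full_name_scope):
    # Push the current full scope, yield the part relative to the enclosing
    # scope, and always pop on exit, even if the body raises.
    if not getattr(_stack, "value", None):
        _stack.value = [""]
    _stack.value.append(full_name_scope)
    try:
        outer = _stack.value[-2]
        yield full_name_scope[len(outer):].lstrip("/")
    finally:
        _stack.value.pop()

with _unnest("outer/inner"):
    with _unnest("outer/inner/layer") as rel:
        print(rel)  # "layer"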
@keras_export('keras.layers.Layer')
79
base_layer.py
Python
keras/engine/base_layer.py
9733982976f62c351ea3db2aeb1f28a67bb5d2fa
keras
3
108,748
4
7
3
31
4
0
4
25
clear
Add internal method to clear without update to simplify code
https://github.com/matplotlib/matplotlib.git
def clear(self):
    self._clear_without_update()
    self.update()
16
widgets.py
Python
lib/matplotlib/widgets.py
a2540ac2387223a00af88f295121c9e5aa51079d
matplotlib
1
262,665
22
16
12
143
14
0
25
59
logsumexp
Adding OverFlow (#2183) * Adding encoder * currently modifying hmm * Adding hmm * Adding overflow * Adding overflow setting up flat start * Removing runs * adding normalization parameters * Fixing models on same device * Training overflow and plotting evaluations * Adding inference * At the end of epoch the test sentences are coming on cpu instead of gpu * Adding figures from model during training to monitor * reverting tacotron2 training recipe * fixing inference on gpu for test sentences on config * moving helpers and texts within overflows source code * renaming to overflow * moving loss to the model file * Fixing the rename * Model training but not plotting the test config sentences's audios * Formatting logs * Changing model name to camelcase * Fixing test log * Fixing plotting bug * Adding some tests * Adding more tests to overflow * Adding all tests for overflow * making changes to camel case in config * Adding information about parameters and docstring * removing compute_mel_statistics moved statistic computation to the model instead * Added overflow in readme * Adding more test cases, now it doesn't saves transition_p like tensor and can be dumped as json
https://github.com/coqui-ai/TTS.git
def logsumexp(x, dim):
    r
    m, _ = x.max(dim=dim)
    mask = m == -float("inf")
    s = (x - m.masked_fill_(mask, 0).unsqueeze(dim=dim)).exp().sum(dim=dim)
    return s.masked_fill_(mask, 1).log() + m.masked_fill_(mask, -float("inf"))
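A minimal sanity check of the helper's intent (assumes PyTorch is installed): for finite inputs it should agree with the naive log-sum-exp, while the masking keeps all -inf rows at -inf instead of producing NaN.

import torch

x = torch.tensor([[0.0, 1.0, 2.0]])
stable = logsumexp(x, dim=1)                # helper defined above
naive = torch.log(torch.exp(x).sum(dim=1))  # fine here, overflows for large x
print(torch.allclose(stable, naive))        # True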
88
common_layers.py
Python
TTS/tts/layers/overflow/common_layers.py
3b8b105b0d6539ac12972de94e0b2a5077fa1ce2
TTS
1
299,859
6
6
54
21
6
0
6
9
record_meter_states
Break apart recorder into tasks and core modules (#71222)
https://github.com/home-assistant/core.git
def record_meter_states(hass, zero, entity_id, _attributes, seq):
447
test_recorder.py
Python
tests/components/sensor/test_recorder.py
29bda196b5e0a90a2bea7e1797742236114afc1c
core
3
272,667
5
9
2
32
4
0
5
11
maximum
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def maximum(inputs, **kwargs):
    return Maximum(**kwargs)(inputs)
18
maximum.py
Python
keras/layers/merging/maximum.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
1
101,377
30
12
18
180
16
0
33
193
_convert_images
Bugfix: convert - Gif Writer - Fix non-launch error on Gif Writer - convert plugins - linting - convert/fs_media/preview/queue_manager - typing - Change convert items from dict to Dataclass
https://github.com/deepfakes/faceswap.git
def _convert_images(self) -> None:
    logger.debug("Converting images")
    self._patch_threads.start()
    while True:
        self._check_thread_error()
        if self._disk_io.completion_event.is_set():
            logger.debug("DiskIO completion event set. Joining Pool")
            break
        if self._patch_threads.completed():
            logger.debug("All patch threads completed")
            break
        sleep(1)
    self._patch_threads.join()

    logger.debug("Putting EOF")
    queue_manager.get_queue("convert_out").put("EOF")
    logger.debug("Converted images")
97
convert.py
Python
scripts/convert.py
1022651eb8a7741014f5d2ec7cbfe882120dfa5f
faceswap
4
260,810
67
11
22
254
24
0
95
310
_predict_recursive
MAINT Remove `x_squared_norms` arg from `k_means_lloyd` signature (#24264) Co-authored-by: Thomas J. Fan <[email protected]>
https://github.com/scikit-learn/scikit-learn.git
def _predict_recursive(self, X, sample_weight, cluster_node):
    if cluster_node.left is None:
        # This cluster has no subcluster. Labels are just the label of the cluster.
        return np.full(X.shape[0], cluster_node.label, dtype=np.int32)

    # Determine if data points belong to the left or right subcluster
    centers = np.vstack((cluster_node.left.center, cluster_node.right.center))
    if hasattr(self, "_X_mean"):
        centers += self._X_mean

    cluster_labels = _labels_inertia_threadpool_limit(
        X,
        sample_weight,
        centers,
        self._n_threads,
        return_inertia=False,
    )
    mask = cluster_labels == 0

    # Compute the labels for each subset of the data points.
    labels = np.full(X.shape[0], -1, dtype=np.int32)

    labels[mask] = self._predict_recursive(
        X[mask], sample_weight[mask], cluster_node.left
    )

    labels[~mask] = self._predict_recursive(
        X[~mask], sample_weight[~mask], cluster_node.right
    )

    return labels
171
_bisect_k_means.py
Python
sklearn/cluster/_bisect_k_means.py
60f16feaadaca28f9a1cc68d2f406201860d27e8
scikit-learn
3
19,993
17
9
9
42
6
0
18
45
is_wheel_installed
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def is_wheel_installed() -> bool:
    try:
        import pipenv.vendor.wheel as wheel  # noqa: F401
    except ImportError:
        return False

    return True
24
misc.py
Python
pipenv/patched/notpip/_internal/utils/misc.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
2
156,830
45
14
16
205
23
0
63
146
test_set_index_interpolate_large_uint
Revise divisions logic in from_pandas (#9221) * rewrite/fix sorted_division_locations * format * mem cleanup * fix some tests - but there is still an existing bug in merge_asof * saving partial test fixes * get test_dataframe.py passing * testing revisions * perfectly satisfy npartitions * tweak docstring * use npartitions=1 in cudf test (since it was only passing with one partition anyway) * avoid breaking dask-cudf (in most places) * reformat * fix groupby tests * avoid np.unique when possible * trigger linting * use 'correct' typing fix * remove type-ignore comment * Update dask/dataframe/io/tests/test_io.py Co-authored-by: Ian Rose <[email protected]> * subtact out 'drift' to improve npartitions accuracy * remove none_chunksize * remove unnecessary assertions * tweak test changes * add new test for duplicates * simplify unique call * add gpu coverage * roll back unique change * avoid overstepping too many unique groups Co-authored-by: Ian Rose <[email protected]>
https://github.com/dask/dask.git
def test_set_index_interpolate_large_uint(engine):
    if engine == "cudf":
        # NOTE: engine == "cudf" requires cudf/dask_cudf,
        # will be skipped by non-GPU CI.
        cudf = pytest.importorskip("cudf")
        dask_cudf = pytest.importorskip("dask_cudf")

    df = pd.DataFrame(
        {"x": np.array([612509347682975743, 616762138058293247], dtype=np.uint64)}
    )

    if engine == "cudf":
        gdf = cudf.from_pandas(df)
        d = dask_cudf.from_cudf(gdf, npartitions=1)
    else:
        d = dd.from_pandas(df, 1)

    d1 = d.set_index("x", npartitions=1)
    assert d1.npartitions == 1
    assert set(d1.divisions) == {612509347682975743, 616762138058293247}
122
test_shuffle.py
Python
dask/dataframe/tests/test_shuffle.py
c97193156a8ba6d9bf18c4e9f5d68830471aec74
dask
3
281,535
9
11
11
54
9
0
9
30
print_help
Terminal Wide Rich (#1161) * My idea for how we handle Rich moving forward * remove independent consoles * FIxed pylint issues * add a few vars * Switched print to console * More transitions * Changed more prints * Replaced all prints * Fixing tabulate * Finished replace tabulate * Finished removing rich from Tabulate * add Panel around menu * add GST watermark under feature flag * Fixed 46 tests * Delete test_screener[False].yaml * Delete test_screener[True].yaml * Fixed the rest of the tests * add help and source color vars and use rgb * rich on stocks/options * update rich on disc, dps, sia * rich in gov, ins and scr menus * ba and ca menus with rich * Fixed import issue * Fixed some tests * removed termcolor * Removed prettytable * add rich to remaining stocks menus * FIxed linting issue * Added James' changes * Updated dependencies * Add rich to cryptocurrency menu * refactor economy and forex * refactor etf with rich * refactor mfunds * refactor rich rest * not specify style so default color works well on any background * Fixing mypy issues * Updated tests * More test fixes * James' test fixes * Updating tests : stocks/screener - fix cassettes using BR * Updating tests : crypto * Updating tests : disable DEBUG_MODE * Updating tests : stocks/fa/yfinance * minor fixes that escape * Improve the rich table function (that replaces tabulate :D ) * Fixed bad code * delete rogue file + dcf fix + NoConsole * sia mypy * fuck you linter * fuck you linter pt 2 * skip hehe * i hate the black linter * ubuntu mypy attempt * Update : rich_config + gtff * Updating tests : conftest * Updating tests : stocks * Update : rich_config * Updating : rich_config * make panel configurable for Theodore :b * colors update * Merged * Updating : rich_config + feature_flags * Updating : rich_config * Updating tests : stocks * Updating : feature_flags Co-authored-by: DidierRLopes <[email protected]> Co-authored-by: Chavithra PARANA <[email protected]> Co-authored-by: james <[email protected]> Co-authored-by: jose-donato <[email protected]>
https://github.com/OpenBB-finance/OpenBBTerminal.git
def print_help(self):
    help_text = f
    console.print(text=help_text, menu="Stocks - Backtesting")
22
bt_controller.py
Python
gamestonk_terminal/stocks/backtesting/bt_controller.py
82747072c511beb1b2672846ae2ee4aec53eb562
OpenBBTerminal
1
20,718
58
22
26
259
20
0
75
615
_check_buffer
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def _check_buffer(self) -> None:
    if self.quiet:
        del self._buffer[:]
        return
    with self._lock:
        if self._buffer_index == 0:
            if self.is_jupyter:  # pragma: no cover
                from .jupyter import display

                display(self._buffer, self._render_buffer(self._buffer[:]))
                del self._buffer[:]
            else:
                text = self._render_buffer(self._buffer[:])
                del self._buffer[:]
                if text:
                    try:
                        if WINDOWS:  # pragma: no cover
                            # https://bugs.python.org/issue37871
                            write = self.file.write
                            for line in text.splitlines(True):
                                write(line)
                        else:
                            self.file.write(text)
                        self.file.flush()
                    except UnicodeEncodeError as error:
                        error.reason = f"{error.reason}\n*** You may need to add PYTHONIOENCODING=utf-8 to your environment ***"
                        raise
148
console.py
Python
pipenv/patched/notpip/_vendor/rich/console.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
8
249,628
32
13
10
149
7
0
47
147
test_cache_age
Be more lenient in the oEmbed response parsing. (#14089) Attempt to parse any valid information from an oEmbed response (instead of bailing at the first unexpected data). This should allow for more partial oEmbed data to be returned, resulting in better / more URL previews, even if those URL previews are only partial.
https://github.com/matrix-org/synapse.git
def test_cache_age(self) -> None:
    # Correct-ish cache ages are allowed.
    for cache_age in ("1", 1.0, 1):
        result = self.parse_response({"cache_age": cache_age})
        self.assertEqual(result.cache_age, 1000)

    # Invalid cache ages are ignored.
    for cache_age in ("invalid", {}):
        result = self.parse_response({"cache_age": cache_age})
        self.assertIsNone(result.cache_age)

    # Cache age is optional.
    result = self.parse_response({})
    self.assertIsNone(result.cache_age)
90
test_oembed.py
Python
tests/rest/media/v1/test_oembed.py
00c93d2e7ef5642c9cf900f3fdcfa229e70f843d
synapse
3
134,317
54
17
31
187
23
0
69
375
_fetch_next_result
[AIR] Hard deprecate train.report, warn on air.session misuse (#29613) Signed-off-by: Antoni Baum [email protected] Hard deprecates `ray.train.report` and other session functions and ensures that the user is informed when using `ray.air.session` if they are not in session for consistency with the old functions.
https://github.com/ray-project/ray.git
def _fetch_next_result(self) -> Optional[List[Dict]]:
    while True:
        results = self._backend_executor.get_next_results()
        if results is None:
            return None
        first_result = results[0]
        result_type = first_result.type
        if result_type is TrainingResultType.REPORT:
            result_data = [self._backend.decode_data(r.data) for r in results]
            return result_data
        elif result_type is TrainingResultType.CHECKPOINT:
            self._checkpoint_manager._process_checkpoint(
                results, decode_checkpoint_fn=self._backend.decode_data
            )
            # Iterate until next REPORT call or training has finished.
        else:
            raise TrainBackendError(
                f"Unexpected result type: "
                f"{result_type}. "
                f"Expected one of "
                f"{[type in TrainingResultType]}"
            )
108
trainer.py
Python
python/ray/train/trainer.py
9b29fd6501ff0e3e69d0333bf214482b86f9e97f
ray
6
176,084
22
6
30
34
8
2
23
54
test_edgeql_for_in_computable_09
Add a `bag` type that tells assert_query_result to ignore order (#3314) assert_query_result currently supports using sets to ignore order, but that doesn't work for objects, which can't be hashed or sorted. There is a system for specifying a sort key for internal data, but it is way clunkier than just saying we don't care about the order. I converted some places that were using sort= to use this.
https://github.com/edgedb/edgedb.git
async def test_edgeql_for_in_computable_09(self):
    # This is basically test_edgeql_for_in_computable_01 but with
    # a WITH binding in front of the whole shape
    await self.assert_query_result(
        r
# This is basically test_edgeql_for_in_computable_01 but with # a WITH binding in front of the whole shape await self.assert_query_result( r''' WITH U := ( SELECT User { select_deck := ( FOR letter IN {'I', 'B'} UNION ( SELECT User.deck {User
48
test_edgeql_for.py
Python
tests/test_edgeql_for.py
26be7d28bdb4eb96c888e373e08f46e6b85711e3
edgedb
1
150,408
20
13
8
94
12
0
22
94
start_threaded_loop
initial concept for replicate, basic leader and follower logic
https://github.com/freqtrade/freqtrade.git
def start_threaded_loop(self): self._loop = asyncio.new_event_loop() if not self._thread: self._thread = Thread(target=self._loop.run_forever) self._thread.start() self._running = True else: raise RuntimeError("A loop is already running")
54
__init__.py
Python
freqtrade/rpc/replicate/__init__.py
9f6bba40af1a407f190a89f5c0c8b4e3f528ba46
freqtrade
2
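start_threaded_loop runs a fresh asyncio loop forever on a separate thread; the usual companion is asyncio.run_coroutine_threadsafe for handing coroutines to that loop from the main thread. A self-contained sketch of the pattern (class and method names are made up, not freqtrade's):

import asyncio
import threading

class ThreadedLoop:
    """Run an asyncio event loop on a background thread."""

    def __init__(self):
        self._loop = asyncio.new_event_loop()
        self._thread = threading.Thread(target=self._loop.run_forever, daemon=True)

    def start(self):
        if self._thread.is_alive():
            raise RuntimeError("A loop is already running")
        self._thread.start()

    def submit(self, coro):
        # Thread-safe hand-off of a coroutine to the background loop.
        return asyncio.run_coroutine_threadsafe(coro, self._loop)

    def stop(self):
        self._loop.call_soon_threadsafe(self._loop.stop)
        self._thread.join()
        self._loop.close()

async def _work(x):
    await asyncio.sleep(0.01)
    return x * 2

runner = ThreadedLoop()
runner.start()
print(runner.submit(_work(21)).result(timeout=1))  # -> 42
runner.stop()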
276,025
21
13
9
87
10
0
24
123
_search_for_child_node
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def _search_for_child_node(self, parent_id, path_to_child): if not path_to_child: return parent_id for child in self._proto.nodes[parent_id].children: if child.local_name == path_to_child[0]: return self._search_for_child_node( child.node_id, path_to_child[1:] ) return None
57
load.py
Python
keras/saving/saved_model/load.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
4
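_search_for_child_node recursively walks a path of local names down the SavedModel node graph. The same recursion works on any tree keyed by child name; here is a dict-based stand-in (the structure is invented for illustration, not the actual proto):

def search_for_child(node, path_to_child):
    """Follow path_to_child (a sequence of names) down a nested dict of children."""
    if not path_to_child:
        return node                       # empty path: we've arrived
    children = node.get("children", {})
    child = children.get(path_to_child[0])
    if child is None:
        return None                       # dead end: no child with that name
    return search_for_child(child, path_to_child[1:])

tree = {"children": {"layer": {"children": {"kernel": {"value": 7}}}}}
assert search_for_child(tree, ["layer", "kernel"]) == {"value": 7}
assert search_for_child(tree, ["layer", "bias"]) is None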
299,285
76
11
41
474
33
0
172
311
test_group_media_states
Add state buffering to media_player and use it in cast (#70802)
https://github.com/home-assistant/core.git
async def test_group_media_states(hass, mz_mock): entity_id = "media_player.speaker" reg = er.async_get(hass) info = get_fake_chromecast_info() chromecast, _ = await async_setup_media_player_cast(hass, info) _, conn_status_cb, media_status_cb, group_media_status_cb = get_status_callbacks( chromecast, mz_mock ) connection_status = MagicMock() connection_status.status = "CONNECTED" conn_status_cb(connection_status) await hass.async_block_till_done() state = hass.states.get(entity_id) assert state is not None assert state.name == "Speaker" assert state.state == "off" assert entity_id == reg.async_get_entity_id("media_player", "cast", str(info.uuid)) group_media_status = MagicMock(images=None) player_media_status = MagicMock(images=None) # Player has no state, group is buffering -> Should report 'buffering' group_media_status.player_state = "BUFFERING" group_media_status_cb(str(FakeGroupUUID), group_media_status) await hass.async_block_till_done() state = hass.states.get(entity_id) assert state.state == "buffering" # Player has no state, group is playing -> Should report 'playing' group_media_status.player_state = "PLAYING" group_media_status_cb(str(FakeGroupUUID), group_media_status) await hass.async_block_till_done() state = hass.states.get(entity_id) assert state.state == "playing" # Player is paused, group is playing -> Should report 'paused' player_media_status.player_state = None player_media_status.player_is_paused = True media_status_cb(player_media_status) await hass.async_block_till_done() await hass.async_block_till_done() state = hass.states.get(entity_id) assert state.state == "paused" # Player is in unknown state, group is playing -> Should report 'playing' player_media_status.player_state = "UNKNOWN" media_status_cb(player_media_status) await hass.async_block_till_done() state = hass.states.get(entity_id) assert state.state == "playing"
275
test_media_player.py
Python
tests/components/cast/test_media_player.py
66551e6fcbd063e53c13adc8a6462b8e00ce1450
core
1
259,113
9
8
3
44
5
0
9
30
get_feature_names_out
DOC Improve get_feature_names_out docstrings (#22718) Co-authored-by: Thomas J. Fan <[email protected]>
https://github.com/scikit-learn/scikit-learn.git
def get_feature_names_out(self, input_features=None): input_features = _check_feature_names_in(self, input_features) return input_features[self.get_support()]
27
_base.py
Python
sklearn/feature_selection/_base.py
279388d9ed2ea83194dd45a2d78161be30b43aa7
scikit-learn
1
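get_feature_names_out simply filters the (possibly auto-generated) input feature names through get_support(). A small usage sketch with SelectKBest, assuming a scikit-learn version recent enough to ship this method:

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

X = np.array([[1.0, 0.0, 3.0], [2.0, 0.1, 1.0], [3.0, 0.0, 2.0], [4.0, 0.1, 5.0]])
y = np.array([0, 0, 1, 1])

selector = SelectKBest(score_func=f_classif, k=2).fit(X, y)
# Without explicit names, inputs are auto-named x0, x1, ...; the selector
# returns only the names of the columns it kept.
print(selector.get_feature_names_out())
print(selector.get_feature_names_out(["age", "noise", "income"]))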
299,241
26
11
8
111
15
0
30
99
_async_update_data
Add config flow to steam_online integration (#67261) Co-authored-by: Paulus Schoutsen <[email protected]>
https://github.com/home-assistant/core.git
async def _async_update_data(self) -> dict[str, dict[str, str | int]]: try: return await self.hass.async_add_executor_job(self._update) except (steam.api.HTTPError, steam.api.HTTPTimeoutError) as ex: if "401" in str(ex): raise ConfigEntryAuthFailed from ex raise UpdateFailed(ex) from ex
70
coordinator.py
Python
homeassistant/components/steam_online/coordinator.py
b1a6521abd71c0086b24849acc44f02eaabccff6
core
3
269,554
37
13
15
232
31
1
44
114
ctc_batch_cost
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def ctc_batch_cost(y_true, y_pred, input_length, label_length): label_length = tf.cast(tf.squeeze(label_length, axis=-1), tf.int32) input_length = tf.cast(tf.squeeze(input_length, axis=-1), tf.int32) sparse_labels = tf.cast( ctc_label_dense_to_sparse(y_true, label_length), tf.int32 ) y_pred = tf.math.log( tf.compat.v1.transpose(y_pred, perm=[1, 0, 2]) + epsilon() ) return tf.expand_dims( tf.compat.v1.nn.ctc_loss( inputs=y_pred, labels=sparse_labels, sequence_length=input_length ), 1, ) @keras_export("keras.backend.ctc_decode") @tf.__internal__.dispatch.add_dispatch_support @doc_controls.do_not_generate_docs
@keras_export("keras.backend.ctc_decode") @tf.__internal__.dispatch.add_dispatch_support @doc_controls.do_not_generate_docs
137
backend.py
Python
keras/backend.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
1
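The flattened snippet above is the Keras backend implementation; calling it only requires four tensors with compatible shapes. A hedged usage sketch (batch size, sequence lengths and label values are arbitrary choices for the example):

import numpy as np
import tensorflow as tf

batch, timesteps, num_classes, max_label_len = 2, 10, 5, 3

# Softmax outputs over time (batch, timesteps, num_classes); the last class
# index conventionally acts as the CTC blank, so labels stay below it.
y_pred = tf.nn.softmax(tf.random.normal((batch, timesteps, num_classes)), axis=-1)
# Integer label sequences, padded to max_label_len.
y_true = np.array([[1, 2, 3], [2, 3, 0]], dtype=np.int32)
# Per-sample lengths, each with a trailing axis of size 1 as the API expects.
input_length = np.array([[timesteps], [timesteps]], dtype=np.int32)
label_length = np.array([[3], [2]], dtype=np.int32)

loss = tf.keras.backend.ctc_batch_cost(y_true, y_pred, input_length, label_length)
print(loss.shape)  # (batch, 1) -> one CTC loss value per sample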
43,922
10
12
15
42
8
0
10
24
test_written_task_map
Add TaskMap and TaskInstance.map_id (#20286) Co-authored-by: Ash Berlin-Taylor <[email protected]>
https://github.com/apache/airflow.git
def test_written_task_map(self, dag_maker, xcom_value, expected_length, expected_keys): with dag_maker(dag_id="test_written_task_map") as dag:
119
test_taskinstance.py
Python
tests/models/test_taskinstance.py
d48a3a357fd89ec805d086d5b6c1f1d4daf77b9a
airflow
3
266,111
70
12
18
332
22
0
112
302
post_save_handler
Closes #10851: New staging mechanism (#10890) * WIP * Convert checkout() context manager to a class * Misc cleanup * Drop unique constraint from Change model * Extend staging tests * Misc cleanup * Incorporate M2M changes * Don't wipe out creation records when an object is deleted * Rename Change to StagedChange * Add documentation for change staging
https://github.com/netbox-community/netbox.git
def post_save_handler(self, sender, instance, **kwargs): key = self.get_key_for_instance(instance) object_type = instance._meta.verbose_name # Creating a new object if kwargs.get('created'): logger.debug(f"[{self.branch}] Staging creation of {object_type} {instance} (PK: {instance.pk})") data = serialize_object(instance, resolve_tags=False) self.queue[key] = (ChangeActionChoices.ACTION_CREATE, data) return # Ignore pre_* many-to-many actions if 'action' in kwargs and kwargs['action'] not in ('post_add', 'post_remove', 'post_clear'): return # Object has already been created/updated in the queue; update its queued representation if key in self.queue: logger.debug(f"[{self.branch}] Updating staged value for {object_type} {instance} (PK: {instance.pk})") data = serialize_object(instance, resolve_tags=False) self.queue[key] = (self.queue[key][0], data) return # Modifying an existing object for the first time logger.debug(f"[{self.branch}] Staging changes to {object_type} {instance} (PK: {instance.pk})") data = serialize_object(instance, resolve_tags=False) self.queue[key] = (ChangeActionChoices.ACTION_UPDATE, data)
164
staging.py
Python
netbox/netbox/staging.py
a5308ea28e851a4ddb65a4e7ca2297b641e5891f
netbox
5
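post_save_handler keeps one queued (action, data) tuple per object key and only refreshes the data for entries that are already staged. That bookkeeping can be shown with plain dictionaries; the names below are hypothetical and not NetBox's API:

CREATE, UPDATE = "create", "update"

class StagingQueue:
    """Stage object changes in memory instead of writing them immediately."""

    def __init__(self):
        self.queue = {}  # key -> (action, serialized data)

    def stage(self, key, data, created=False):
        if created:
            # Brand-new object: always recorded as a creation.
            self.queue[key] = (CREATE, data)
        elif key in self.queue:
            # Already queued: keep the original action, refresh the data.
            action, _ = self.queue[key]
            self.queue[key] = (action, data)
        else:
            # First modification of an existing object.
            self.queue[key] = (UPDATE, data)

q = StagingQueue()
q.stage(("site", 1), {"name": "DC-1"}, created=True)
q.stage(("site", 1), {"name": "DC-1 renamed"})      # stays a CREATE
q.stage(("device", 7), {"status": "active"})        # plain UPDATE
assert q.queue[("site", 1)] == (CREATE, {"name": "DC-1 renamed"})
assert q.queue[("device", 7)][0] == UPDATE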
124,249
6
5
23
25
7
2
6
9
test_storage_isolation
[core][gcs] Add storage namespace to redis storage in GCS. (#25994) To enable one storage backend to be shared by multiple Ray clusters, a special prefix is added to isolate the data between clusters: "<EXTERNAL_STORAGE_NAMESPACE>@". The namespace is given by an OS environment variable, `RAY_external_storage_namespace`, when starting the head: `RAY_external_storage_namespace=1234 ray start --head`. This flag is very important in an HA GCS environment. For example, when the Ray Serve operator tries to bring up a new cluster, it's hard to just start a new DB, but it's relatively easy to generate a new cluster ID. Another example is that the user might only be able to maintain one HA Redis DB, and the namespace enables the user to start multiple Ray clusters which share the same DB. This config should be moved to the storage config in the future once we build that.
https://github.com/ray-project/ray.git
def test_storage_isolation(external_redis, call_ray_start, call_ray_start_2): script =
script = """ import ray ray.init("{address}", namespace="a")@ray.remote
75
test_advanced_9.py
Python
python/ray/tests/test_advanced_9.py
096c0cd66802f0eb86301343180eddb3eae0f03a
ray
1
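The isolation described in the commit message boils down to prefixing every key with "<namespace>@" before it reaches the shared store. A toy in-memory version of that idea (not the real GCS/Redis client):

class NamespacedStore:
    """Give each cluster its own slice of a shared key-value store."""

    def __init__(self, shared_store, namespace):
        self._store = shared_store
        self._prefix = f"{namespace}@"

    def put(self, key, value):
        self._store[self._prefix + key] = value

    def get(self, key):
        return self._store.get(self._prefix + key)

shared = {}  # stands in for the one Redis instance both clusters share
cluster_a = NamespacedStore(shared, "1234")
cluster_b = NamespacedStore(shared, "5678")

cluster_a.put("job:1", "running")
assert cluster_a.get("job:1") == "running"
assert cluster_b.get("job:1") is None      # invisible to the other cluster
assert "1234@job:1" in shared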
32,190
5
8
4
42
6
0
5
33
as_target_tokenizer
NLLB tokenizer (#18126) * NLLB tokenizer * Apply suggestions from code review - Thanks Stefan! Co-authored-by: Stefan Schweter <[email protected]> * Final touches * Style :) * Update docs/source/en/model_doc/nllb.mdx Co-authored-by: Stefan Schweter <[email protected]> * Apply suggestions from code review Co-authored-by: Sylvain Gugger <[email protected]> * PR reviews * Auto models Co-authored-by: Stefan Schweter <[email protected]> Co-authored-by: Sylvain Gugger <[email protected]>
https://github.com/huggingface/transformers.git
def as_target_tokenizer(self): self.set_tgt_lang_special_tokens(self.tgt_lang) yield self.set_src_lang_special_tokens(self.src_lang)
23
tokenization_nllb.py
Python
src/transformers/models/nllb/tokenization_nllb.py
c1c79b06550b587b2a975016ef9d18b53258025b
transformers
1
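as_target_tokenizer is a generator the tokenizer class exposes through contextlib.contextmanager: switch to the target-language special tokens, yield, then restore the source-language tokens. A generic, self-contained sketch of the same shape (a toy class rather than the NLLB implementation, with a try/finally added for safety):

from contextlib import contextmanager

class ToyTokenizer:
    def __init__(self, src_lang, tgt_lang):
        self.src_lang, self.tgt_lang = src_lang, tgt_lang
        self.current_lang = src_lang

    def _set_lang_special_tokens(self, lang):
        self.current_lang = lang   # real tokenizers swap BOS/EOS/lang codes here

    @contextmanager
    def as_target_tokenizer(self):
        self._set_lang_special_tokens(self.tgt_lang)
        try:
            yield self
        finally:
            self._set_lang_special_tokens(self.src_lang)

tok = ToyTokenizer("eng_Latn", "fra_Latn")
with tok.as_target_tokenizer():
    assert tok.current_lang == "fra_Latn"   # encode labels in this block
assert tok.current_lang == "eng_Latn"       # restored afterwards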
124,881
17
6
62
15
2
0
18
27
test_recover_start_from_replica_actor_names
[Serve][Part2] Migrate the tests to use deployment graph api (#26507)
https://github.com/ray-project/ray.git
def test_recover_start_from_replica_actor_names(serve_instance): # Test failed to deploy with total of 2 replicas, # but first constructor call fails.
343
test_controller_recovery.py
Python
python/ray/serve/tests/fault_tolerance_tests/test_controller_recovery.py
09a6e5336ad6ab3c41e4a16e906c778aee2450bc
ray
14
211,968
29
11
19
132
21
0
35
142
to_json
Redesign serialization protocol (#11960) * Redesign serialization in bokeh * Redesign deserialization in bokehjs * Resolve type issues and test failures * Make 'bytes' serialization work in bokeh * Partially update bokeh's serialization tests * Resolve issues with cyclic references * Don't limit StaticGraphProvider to tuples * Make FixedTicker.ticks' type more flexible * Use np.array instead of np.ndarray * Remove references to BokehJSONEncoder * Resolve sphinx warnings related to JSON * Implement hybrid serialization for map/dict * Use === or !== with unset symbol * Finalize deserialization of refs * Remove 'old' attribute from ModelChangedEvent * Make ButtonClick.__init__ less restrictive * Use Map<number, ...> in StaticLayoutProvider.graph_layout * Start using Map<K, V> for non-string keys * Fix plotting/file/line_on_off example * Don't denormalize specs in bokehjs * Hack around issues with resolving figure model * Remove special cases from defaults' tests * Temporarily update unit/bokeh/test_objects * Promote streaming/patching events and remove hints * Allow to stream/patch any property in bokehjs * Drop unneeded Property.serializable_value() * Set callback_invoker on hinted events * Simplify unit/bokeh/test_objects.py * Always preserve ndarrays even for dtype="object" * Refine and normalize naming conventions * Remove unused functions * Move Model.to_json() to sphinxext.bokeh_model * Include references in serialized values * Actually encode data when streaming/patching * Robustify differential serialization * Allow bokehjs to send binary buffers * Add dtype=object code path to ColorSpec * Simplify definitions of data specs * Remove meaningless code comments * Introduce Bytes and replace Base64String * Add support for serialization of slices * Remove obsolete comment from property/dataspec.py * Add a comment regarding ndarray.tobytes() * Try serializing pandas' types last * Standardize error reporting * Resturucture bokehjs serialization code * Redesign default model resolution * Refactor 'kind' in document events * Don't depend on Document in Deserializer * Make Deserializer.encode() re-entrant * Move *Buffer to serialization/buffer * Finalize differential serialization * Serialize vectorized values as structures * Rename Event.{decode_json->from_serializable} * Don't use has_ref() in Model.to_serializable() * Handle circular object references in bokehjs * Reorganize serialization unit tests * Redesign model registry and qualified names * Remove the need for StaticSerializer * Make 'attributes' optional in type reps * Allow to serialize typed arrays as binary * Finalize handling of binary buffers * Use memoryview to further defer encoding * Test dict serialization and ordering * Downcast ndarrays {u}int{64->32} if possible * Add preliminary release/migration notes * Robustify encoding of objects and object refs * Remove support for serialization of relativedelta * Import pandas only if really necessary * Statically type bokeh.core.serialization * Add preliminary serialization's documentation * Add Deserializer.deserialize() for symmetric APIs * Handle streaming/patching/data events in io.notebook * Update handling of buffers in io.notebook * Properly serialize MessageSent event * Add a regression test for issue #11694 * Preserve order of inherited properties * Add support for serialization of symbols * Update defaults' tests to use type="object" * Move DocJson.version to the first entry * Add a preliminary regression test for #11930 * Fix integration/glyphs/rect_log_axis.py * Fix 
value detection in dataspecs involving String * Remove an unnecessary type assertion
https://github.com/bokeh/bokeh.git
def to_json(self) -> DocJson: data_models = [ model for model in Model.model_class_reverse_map.values() if is_DataModel(model) ] serializer = Serializer() defs = serializer.encode(data_models) roots = serializer.encode(self._roots) doc_json = DocJson( version=__version__, title=self.title, defs=defs, roots=roots, ) self.models.flush() return doc_json
83
document.py
Python
bokeh/document/document.py
fca16442ae90afcd2ac61f4e554e538776730831
bokeh
3
261,652
68
11
27
313
39
0
98
199
test_learning_curve_display_default_usage
FEA add LearningCurveDisplay to plot the learning curve (#24084) Co-authored-by: jeremie du boisberranger <[email protected]> Co-authored-by: Arturo Amor <[email protected]>
https://github.com/scikit-learn/scikit-learn.git
def test_learning_curve_display_default_usage(pyplot, data): X, y = data estimator = DecisionTreeClassifier(random_state=0) train_sizes = [0.3, 0.6, 0.9] display = LearningCurveDisplay.from_estimator( estimator, X, y, train_sizes=train_sizes ) import matplotlib as mpl assert display.errorbar_ is None assert isinstance(display.lines_, list) for line in display.lines_: assert isinstance(line, mpl.lines.Line2D) assert isinstance(display.fill_between_, list) for fill in display.fill_between_: assert isinstance(fill, mpl.collections.PolyCollection) assert fill.get_alpha() == 0.5 assert display.score_name == "Score" assert display.ax_.get_xlabel() == "Number of samples in the training set" assert display.ax_.get_ylabel() == "Score" _, legend_labels = display.ax_.get_legend_handles_labels() assert legend_labels == ["Testing metric"] train_sizes_abs, train_scores, test_scores = learning_curve( estimator, X, y, train_sizes=train_sizes ) assert_array_equal(display.train_sizes, train_sizes_abs) assert_allclose(display.train_scores, train_scores) assert_allclose(display.test_scores, test_scores)
211
test_plot.py
Python
sklearn/model_selection/tests/test_plot.py
758fe0d9c72ba343097003e7992c9239e58bfc63
scikit-learn
3
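The display under test is driven entirely by LearningCurveDisplay.from_estimator. A minimal usage sketch, assuming a scikit-learn version that ships the class (1.2 or later) and a headless matplotlib backend:

import matplotlib
matplotlib.use("Agg")  # headless backend so the example runs anywhere

from sklearn.datasets import load_iris
from sklearn.model_selection import LearningCurveDisplay
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
display = LearningCurveDisplay.from_estimator(
    DecisionTreeClassifier(random_state=0),
    X,
    y,
    train_sizes=[0.3, 0.6, 0.9],
    cv=5,
)
# The fitted display exposes the axes it drew on.
display.ax_.figure.savefig("learning_curve.png")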
310,957
77
17
44
427
24
0
148
651
fetch_data
Cleanup GitHub sensor classes and descriptions (#64853)
https://github.com/home-assistant/core.git
async def fetch_data(self) -> IssuesPulls: pulls_count = 0 pull_last = None issues_count = 0 issue_last = None try: pull_response = await self._client.repos.pulls.list( self.repository, **{"params": {"per_page": 1}, "etag": self._pull_etag}, ) except GitHubNotModifiedException: # Return the last known data if the request result was not modified pulls_count = self.data.pulls_count pull_last = self.data.pull_last else: self._pull_etag = pull_response.etag pulls_count = pull_response.last_page_number or len(pull_response.data) pull_last = pull_response.data[0] if pull_response.data else None try: issue_response = await self._client.repos.issues.list( self.repository, **{"params": {"per_page": 1}, "etag": self._issue_etag}, ) except GitHubNotModifiedException: # Return the last known data if the request result was not modified issues_count = self.data.issues_count issue_last = self.data.issue_last else: self._issue_etag = issue_response.etag issues_count = ( issue_response.last_page_number or len(issue_response.data) ) - pulls_count issue_last = issue_response.data[0] if issue_response.data else None if issue_last is not None and issue_last.pull_request: issue_response = await self._client.repos.issues.list(self.repository) for issue in issue_response.data: if not issue.pull_request: issue_last = issue break return IssuesPulls( issues_count=issues_count, issue_last=issue_last, pulls_count=pulls_count, pull_last=pull_last, )
266
coordinator.py
Python
homeassistant/components/github/coordinator.py
b07f4ba398bf7f7fb65dd6f365454ed5ccfc8a58
core
11
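The try/except GitHubNotModifiedException blocks implement conditional requests: send the previously stored ETag and fall back to the last known data when the server replies 304. A generic sketch of that pattern with requests (the URL and cache layout are placeholders; the live call is commented out to avoid a network dependency):

import requests

def fetch_with_etag(url, cache):
    """Fetch url, reusing cache['data'] when the server answers 304 Not Modified."""
    headers = {}
    if cache.get("etag"):
        headers["If-None-Match"] = cache["etag"]

    response = requests.get(url, headers=headers, timeout=10)
    if response.status_code == 304:
        return cache["data"]               # unchanged: keep last known data

    response.raise_for_status()
    cache["etag"] = response.headers.get("ETag")
    cache["data"] = response.json()
    return cache["data"]

# cache = {}
# latest = fetch_with_etag("https://api.github.com/repos/octocat/hello-world", cache)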
26,596
38
15
31
212
27
0
53
171
test_homepage_events
Update orderNumber field in OrderEvent type (#9447)
https://github.com/saleor/saleor.git
def test_homepage_events(order_events, staff_api_client, permission_manage_orders): query = response = staff_api_client.post_graphql( query, permissions=[permission_manage_orders] ) content = get_graphql_content(response) edges = content["data"]["homepageEvents"]["edges"] only_types = {"PLACED", "PLACED_FROM_DRAFT", "ORDER_FULLY_PAID"} assert {edge["node"]["type"] for edge in edges} == only_types expected_numbers = set( OrderEvent.objects.filter( type__in=[ OrderEvents.PLACED, OrderEvents.PLACED_FROM_DRAFT, OrderEvents.ORDER_FULLY_PAID, ] ).values_list("order__number", flat=True) ) assert {int(edge["node"]["orderNumber"]) for edge in edges} == expected_numbers QUERY_ORDER_TOTAL =
125
test_homepage.py
Python
saleor/graphql/order/tests/test_homepage.py
bef5dc4fca6dbb4b67e3d00cabef66cbd0bb25a6
saleor
3
112,046
7
7
10
29
5
0
7
21
get_origin2wrapped_parameter_name_map
[Model Compression] Pruning Wrapper Refactor (#4488)
https://github.com/microsoft/nni.git
def get_origin2wrapped_parameter_name_map(self) -> Dict[str, str]: raise NotImplementedError()
17
compressor.py
Python
nni/algorithms/compression/v2/pytorch/base/compressor.py
2566badb06095b9e3ea16eb6f00fd58da65a95fd
nni
1
189,283
10
10
3
48
5
0
11
28
is_streaming_blob_type
Document streaming blob type Not having streaming blobs documented is causing users to pass in the wrong input for blob arguments. This commit resolves the issue by explicitly marking streaming blob types and auto-generating a usage note for streaming blobs.
https://github.com/aws/aws-cli.git
def is_streaming_blob_type(shape): return (shape and shape.type_name == 'blob' and shape.serialization.get('streaming', False))
27
utils.py
Python
awscli/utils.py
ff9b332d6ad9d3e7e845d08d40a9970f4d08ff18
aws-cli
3
44,419
10
8
3
47
8
0
10
24
key
Make `airflow dags test` be able to execute Mapped Tasks (#21210) * Make `airflow dags test` be able to execute Mapped Tasks In order to do this there were two steps required: - The BackfillJob needs to know about mapped tasks, both to expand them, and in order to update it's TI tracking - The DebugExecutor needed to "unmap" the mapped task to get the real operator back I was testing this with the following dag: ``` from airflow import DAG from airflow.decorators import task from airflow.operators.python import PythonOperator import pendulum @task def make_list(): return list(map(lambda a: f'echo "{a!r}"', [1, 2, {'a': 'b'}])) def consumer(*args): print(repr(args)) with DAG(dag_id='maptest', start_date=pendulum.DateTime(2022, 1, 18)) as dag: PythonOperator(task_id='consumer', python_callable=consumer).map(op_args=make_list()) ``` It can't "unmap" decorated operators successfully yet, so we're using old-school PythonOperator We also just pass the whole value to the operator, not just the current mapping value(s) * Always have a `task_group` property on DAGNodes And since TaskGroup is a DAGNode, we don't need to store parent group directly anymore -- it'll already be stored * Add "integation" tests for running mapped tasks via BackfillJob * Only show "Map Index" in Backfill report when relevant Co-authored-by: Tzu-ping Chung <[email protected]>
https://github.com/apache/airflow.git
def key(self) -> TaskInstanceKey: return TaskInstanceKey(self.dag_id, self.task_id, self.run_id, self.try_number, self.map_index)
31
taskinstance.py
Python
airflow/models/taskinstance.py
6fc6edf6af7f676bfa54ff3a2e6e6d2edb938f2e
airflow
1
64,295
57
20
23
404
26
0
85
62
get_incoming_rate
refactor: get incoming fifo/lifo rate functions Re-use same logic for computing incoming rate.
https://github.com/frappe/erpnext.git
def get_incoming_rate(args, raise_error_if_no_rate=True): from erpnext.stock.stock_ledger import get_previous_sle, get_valuation_rate if isinstance(args, str): args = json.loads(args) in_rate = 0 if (args.get("serial_no") or "").strip(): in_rate = get_avg_purchase_rate(args.get("serial_no")) else: valuation_method = get_valuation_method(args.get("item_code")) previous_sle = get_previous_sle(args) if valuation_method in ('FIFO', 'LIFO'): if previous_sle: previous_stock_queue = json.loads(previous_sle.get('stock_queue', '[]') or '[]') in_rate = _get_fifo_lifo_rate(previous_stock_queue, args.get("qty") or 0, valuation_method) if previous_stock_queue else 0 elif valuation_method == 'Moving Average': in_rate = previous_sle.get('valuation_rate') or 0 if not in_rate: voucher_no = args.get('voucher_no') or args.get('name') in_rate = get_valuation_rate(args.get('item_code'), args.get('warehouse'), args.get('voucher_type'), voucher_no, args.get('allow_zero_valuation'), currency=erpnext.get_company_currency(args.get('company')), company=args.get('company'), raise_error_if_no_rate=raise_error_if_no_rate) return flt(in_rate)
235
utils.py
Python
erpnext/stock/utils.py
61c5ad44d3fe282e453d77df8acd1fbf9642c44a
erpnext
13
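The FIFO branch of get_incoming_rate consumes a stock queue of [qty, rate] batches until the requested quantity is covered and returns the blended rate. A standalone sketch of that valuation, simplified to ignore LIFO and negative stock:

def fifo_incoming_rate(stock_queue, qty):
    """Average valuation rate for `qty` units drawn FIFO from [qty, rate] batches."""
    if qty <= 0 or not stock_queue:
        return 0.0
    remaining, total_value, total_qty = qty, 0.0, 0.0
    for batch_qty, batch_rate in stock_queue:
        take = min(batch_qty, remaining)
        total_value += take * batch_rate
        total_qty += take
        remaining -= take
        if remaining <= 0:
            break
    return total_value / total_qty if total_qty else 0.0

# Two batches: 10 units @ 100, then 10 units @ 120.
queue = [[10, 100.0], [10, 120.0]]
assert fifo_incoming_rate(queue, 5) == 100.0              # fits in the first batch
assert fifo_incoming_rate(queue, 15) == (10 * 100 + 5 * 120) / 15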
248,275
25
13
68
92
12
0
29
86
_expire_url_cache_data
URL preview cache expiry logs: INFO -> DEBUG, text clarifications (#12720)
https://github.com/matrix-org/synapse.git
async def _expire_url_cache_data(self) -> None: assert self._worker_run_media_background_jobs now = self.clock.time_msec() logger.debug("Running url preview cache expiry") if not (await self.store.db_pool.updates.has_completed_background_updates()): logger.debug("Still running DB updates; skipping url preview cache expiry") return
329
preview_url_resource.py
Python
synapse/rest/media/v1/preview_url_resource.py
57f6c496d0e26b1b455de936bd950e1899a5ae25
synapse
12
121,981
28
13
8
104
13
1
29
43
_use_python_method
Introduce class PyArray that contains the data members of the Python Array. A few key methods are implemented in C++ while the rest are still implemented in Python and added to the class later. A class decorator, @use_cpp_array, is added to add Python methods to xc.Array. PiperOrigin-RevId: 473075244
https://github.com/google/jax.git
def _use_python_method(f): # TODO(chky): remove 'type: ignore' on decorated property once mypy does a release if isinstance(f, property): _python_methods.add(cast(property, f).fget.__name__) elif isinstance(f, pxla.maybe_cached_property): _python_methods.add(f.func.__name__) else: _python_methods.add(f.__name__) return f @_use_cpp_array
@_use_cpp_array
61
array.py
Python
jax/experimental/array.py
0400db959be865b3ca312ca3355824f0706723c7
jax
3
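_use_python_method records which attributes should later be copied onto the C++-backed array class by the companion class decorator. A self-contained sketch of that registration pattern with plain Python classes (the registry here maps names to objects, a simplification of the original set-based bookkeeping):

_python_methods = {}

def use_python_method(f):
    """Mark a function or property so it gets attached to the target class later."""
    name = f.fget.__name__ if isinstance(f, property) else f.__name__
    _python_methods[name] = f
    return f

def use_python_methods(cls):
    """Class decorator: copy every marked attribute onto cls."""
    for name, attr in _python_methods.items():
        setattr(cls, name, attr)
    return cls

@use_python_method
def double(self):
    return self.value * 2

@use_python_method
@property
def is_even(self):
    return self.value % 2 == 0

@use_python_methods
class Number:                 # stands in for the C++-backed class
    def __init__(self, value):
        self.value = value

n = Number(3)
assert n.double() == 6 and n.is_even is False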
140,064
71
13
27
271
29
0
98
371
get_next
Ignore previous tasks before submitting ones via `map` and `map_unordered` (#23684)
https://github.com/ray-project/ray.git
def get_next(self, timeout=None, ignore_if_timedout=False): if not self.has_next(): raise StopIteration("No more results to get") if self._next_return_index >= self._next_task_index: raise ValueError( "It is not allowed to call get_next() after get_next_unordered()." ) future = self._index_to_future[self._next_return_index] timeout_msg = "Timed out waiting for result" raise_timeout_after_ignore = False if timeout is not None: res, _ = ray.wait([future], timeout=timeout) if not res: if not ignore_if_timedout: raise TimeoutError(timeout_msg) else: raise_timeout_after_ignore = True del self._index_to_future[self._next_return_index] self._next_return_index += 1 future_key = tuple(future) if isinstance(future, list) else future i, a = self._future_to_actor.pop(future_key) self._return_actor(a) if raise_timeout_after_ignore: raise TimeoutError( timeout_msg + ". The task {} has been ignored.".format(future) ) return ray.get(future)
166
actor_pool.py
Python
python/ray/util/actor_pool.py
5b9b4fa018af04089320b03394711c9916a61e23
ray
8
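get_next above backs ray.util.ActorPool; a hedged usage sketch follows, assuming a local Ray runtime is available. Results come back in submission order, and the ignore_if_timedout flag is the one introduced by the commit above:

import ray
from ray.util import ActorPool

@ray.remote
class Doubler:
    def double(self, x):
        return 2 * x

ray.init(ignore_reinit_error=True)
pool = ActorPool([Doubler.remote(), Doubler.remote()])

for v in (1, 2, 3):
    pool.submit(lambda actor, value: actor.double.remote(value), v)

# A timeout raises TimeoutError; with ignore_if_timedout=True the timed-out
# task is additionally skipped instead of being retried on the next call.
results = [pool.get_next(timeout=10) for _ in range(3)]
print(results)  # [2, 4, 6]
ray.shutdown()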
293,909
18
9
3
63
8
0
20
29
get_state
Avoid selecting attributes in the history api when `no_attributes` is passed (#68352)
https://github.com/home-assistant/core.git
def get_state(hass, utc_point_in_time, entity_id, run=None, no_attributes=False): states = get_states(hass, utc_point_in_time, (entity_id,), run, None, no_attributes) return states[0] if states else None
46
history.py
Python
homeassistant/components/recorder/history.py
816695cc96c19110ccda10431d92160ea6064d32
core
2
309,027
9
7
4
30
6
0
10
26
_get_distance_unit
Use SensorEntityDescription in Mazda integration (#63423) * Use SensorEntityDescription in Mazda integration * Change lambdas to functions * Minor fixes * Address review comments
https://github.com/home-assistant/core.git
def _get_distance_unit(unit_system): if unit_system.name == CONF_UNIT_SYSTEM_IMPERIAL: return LENGTH_MILES return LENGTH_KILOMETERS
17
sensor.py
Python
homeassistant/components/mazda/sensor.py
8915b73f724b58e93284a823c0d2e99fbfc13e84
core
2
124,544
39
13
23
149
12
0
51
215
get_state
[RLlib] Checkpoint and restore connectors. (#26253)
https://github.com/ray-project/ray.git
def get_state(self) -> PolicyState: state = { # All the policy's weights. "weights": self.get_weights(), # The current global timestep. "global_timestep": self.global_timestep, } if self.config.get("enable_connectors", False): # Checkpoint connectors state as well if enabled. connector_configs = {} if self.agent_connectors: connector_configs["agent"] = self.agent_connectors.to_config() if self.action_connectors: connector_configs["action"] = self.action_connectors.to_config() state["connector_configs"] = connector_configs return state
84
policy.py
Python
rllib/policy/policy.py
0c469e490e0ed5e6ca848c627f3b852382e2bf2a
ray
4
170,050
24
11
16
107
12
0
36
89
test_execute_fail
TST/CLN: Ensure sqlite memory connections are closed (#49154) * TST/CLN: Use fixtures for TestXSQLite * More sqlite closing
https://github.com/pandas-dev/pandas.git
def test_execute_fail(self, sqlite_buildin): create_sql = cur = sqlite_buildin.cursor() cur.execute(create_sql) sql.execute('INSERT INTO test VALUES("foo", "bar", 1.234)', sqlite_buildin) sql.execute('INSERT INTO test VALUES("foo", "baz", 2.567)', sqlite_buildin) with pytest.raises(sql.DatabaseError, match="Execution failed on sql"): sql.execute('INSERT INTO test VALUES("foo", "bar", 7)', sqlite_buildin)
61
test_sql.py
Python
pandas/tests/io/test_sql.py
5a4339f686225ed5eadc5c4b7d2508c0765ef577
pandas
1
39,809
31
14
28
164
18
0
41
175
_set_random_id
improve error message and logic for _set_random_id exclusions
https://github.com/plotly/dash.git
def _set_random_id(self): if hasattr(self, "id"): return getattr(self, "id") kind = f"`{self._namespace}.{self._type}`" # pylint: disable=no-member if getattr(self, "persistence", False): raise RuntimeError( f ) if "dash_snapshots" in sys.modules: raise RuntimeError( f ) v = str(uuid.UUID(int=rd.randint(0, 2 ** 128))) setattr(self, "id", v) return v
85
base_component.py
Python
dash/development/base_component.py
0ef2b4aac220def202aa240db84cc3976c5d796b
dash
4
162,632
154
19
209
792
44
0
266
991
process_info
Add pre-processor stage `after_filter` * Move `_match_entry` and `post_extract` to `process_video_result`. It is also left in `process_info` for API compat * `--list-...` options and `--force-write-archive` now obey filtering options * Move `SponsorBlockPP` to `after_filter`. Closes https://github.com/yt-dlp/yt-dlp/issues/2536 * Reverts 4ec82a72bbf7ff0066edb50dcad20aa77ac2fe09 since this commit addresses the issue it was solving
https://github.com/yt-dlp/yt-dlp.git
def process_info(self, info_dict): assert info_dict.get('_type', 'video') == 'video' original_infodict = info_dict if 'format' not in info_dict and 'ext' in info_dict: info_dict['format'] = info_dict['ext'] # This is mostly just for backward compatibility of process_info # As a side-effect, this allows for format-specific filters if self._match_entry(info_dict) is not None: info_dict['__write_download_archive'] = 'ignore' return # Does nothing under normal operation - for backward compatibility of process_info self.post_extract(info_dict) # info_dict['_filename'] needs to be set for backward compatibility info_dict['_filename'] = full_filename = self.prepare_filename(info_dict, warn=True) temp_filename = self.prepare_filename(info_dict, 'temp') files_to_move = {} self._num_downloads += 1 # Forced printings self.__forced_printings(info_dict, full_filename, incomplete=('format' not in info_dict)) if self.params.get('simulate'): info_dict['__write_download_archive'] = self.params.get('force_write_download_archive') return if full_filename is None: return if not self._ensure_dir_exists(encodeFilename(full_filename)): return if not self._ensure_dir_exists(encodeFilename(temp_filename)): return if self._write_description('video', info_dict, self.prepare_filename(info_dict, 'description')) is None: return sub_files = self._write_subtitles(info_dict, temp_filename) if sub_files is None: return files_to_move.update(dict(sub_files)) thumb_files = self._write_thumbnails( 'video', info_dict, temp_filename, self.prepare_filename(info_dict, 'thumbnail')) if thumb_files is None: return files_to_move.update(dict(thumb_files)) infofn = self.prepare_filename(info_dict, 'infojson') _infojson_written = self._write_info_json('video', info_dict, infofn) if _infojson_written: info_dict['infojson_filename'] = infofn # For backward compatibility, even though it was a private field info_dict['__infojson_filename'] = infofn elif _infojson_written is None: return # Note: Annotations are deprecated annofn = None if self.params.get('writeannotations', False): annofn = self.prepare_filename(info_dict, 'annotation') if annofn: if not self._ensure_dir_exists(encodeFilename(annofn)): return if not self.params.get('overwrites', True) and os.path.exists(encodeFilename(annofn)): self.to_screen('[info] Video annotations are already present') elif not info_dict.get('annotations'): self.report_warning('There are no annotations to write.') else: try: self.to_screen('[info] Writing video annotations to: ' + annofn) with io.open(encodeFilename(annofn), 'w', encoding='utf-8') as annofile: annofile.write(info_dict['annotations']) except (KeyError, TypeError): self.report_warning('There are no annotations to write.') except (OSError, IOError): self.report_error('Cannot write annotations file: ' + annofn) return # Write internet shortcut files
1,463
YoutubeDL.py
Python
yt_dlp/YoutubeDL.py
09b49e1f688831c3ad7181decf38c90f8451e6c4
yt-dlp
70
259,434
109
10
25
255
25
0
174
383
test_tweedie_log_identity_consistency
ENH migrate GLMs / TweedieRegressor to linear loss (#22548) Co-authored-by: Olivier Grisel <[email protected]> Co-authored-by: Thomas J. Fan <[email protected]>
https://github.com/scikit-learn/scikit-learn.git
def test_tweedie_log_identity_consistency(p): half_tweedie_log = HalfTweedieLoss(power=p) half_tweedie_identity = HalfTweedieLossIdentity(power=p) n_samples = 10 y_true, raw_prediction = random_y_true_raw_prediction( loss=half_tweedie_log, n_samples=n_samples, seed=42 ) y_pred = half_tweedie_log.link.inverse(raw_prediction) # exp(raw_prediction) # Let's compare the loss values, up to some constant term that is dropped # in HalfTweedieLoss but not in HalfTweedieLossIdentity. loss_log = half_tweedie_log.loss( y_true=y_true, raw_prediction=raw_prediction ) + half_tweedie_log.constant_to_optimal_zero(y_true) loss_identity = half_tweedie_identity.loss( y_true=y_true, raw_prediction=y_pred ) + half_tweedie_identity.constant_to_optimal_zero(y_true) # Note that HalfTweedieLoss ignores different constant terms than # HalfTweedieLossIdentity. Constant terms means terms not depending on # raw_prediction. By adding these terms, `constant_to_optimal_zero`, both losses # give the same values. assert_allclose(loss_log, loss_identity) # For gradients and hessians, the constant terms do not matter. We have, however, # to account for the chain rule, i.e. with x=raw_prediction # gradient_log(x) = d/dx loss_log(x) # = d/dx loss_identity(exp(x)) # = exp(x) * gradient_identity(exp(x)) # Similarly, # hessian_log(x) = exp(x) * gradient_identity(exp(x)) # + exp(x)**2 * hessian_identity(x) gradient_log, hessian_log = half_tweedie_log.gradient_hessian( y_true=y_true, raw_prediction=raw_prediction ) gradient_identity, hessian_identity = half_tweedie_identity.gradient_hessian( y_true=y_true, raw_prediction=y_pred ) assert_allclose(gradient_log, y_pred * gradient_identity) assert_allclose( hessian_log, y_pred * gradient_identity + y_pred**2 * hessian_identity )
155
test_loss.py
Python
sklearn/_loss/tests/test_loss.py
75a94f518f7bd7d0bf581ffb67d9f961e3c4efbc
scikit-learn
1
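The chain rule spelled out in the comments, gradient_log(x) = exp(x) * gradient_identity(exp(x)), holds for any loss expressed both in terms of the raw prediction x and of y_pred = exp(x). A tiny NumPy check using the half Poisson deviance with constant terms dropped (a Tweedie loss with power 1), written out by hand rather than via scikit-learn's internal loss classes:

import numpy as np

rng = np.random.default_rng(0)
y_true = rng.poisson(2.0, size=5).astype(float) + 0.1     # keep strictly positive
raw = rng.normal(size=5)                                   # raw_prediction x
y_pred = np.exp(raw)                                       # identity-space prediction mu

# Half Poisson deviance with constant terms dropped:
loss_identity = y_pred - y_true * np.log(y_pred)           # as a function of mu
loss_log = np.exp(raw) - y_true * raw                      # same loss as a function of x

grad_identity = 1.0 - y_true / y_pred                      # d loss / d mu
grad_log = np.exp(raw) - y_true                            # d loss / d x

np.testing.assert_allclose(loss_log, loss_identity)              # same loss values
np.testing.assert_allclose(grad_log, y_pred * grad_identity)     # chain rule for gradients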
153,340
4
11
2
42
8
0
4
10
display_time_updates
REFACTOR-#4251: define public interfaces in `modin.core.execution.ray` module (#3868) Signed-off-by: Anatoly Myachev <[email protected]>
https://github.com/modin-project/modin.git
def display_time_updates(bar): threading.Thread(target=_show_time_updates, args=(bar,)).start()
25
modin_aqp.py
Python
modin/core/execution/ray/generic/modin_aqp.py
e7cb2e82f8b9c7a68f82abdd3b6011d661230b7e
modin
1
274,800
18
10
7
105
11
0
23
76
test_sparse_categorical_accuracy_eager
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def test_sparse_categorical_accuracy_eager(self): metric = metrics.sparse_categorical_accuracy y_true = np.arange(6).reshape([6, 1]) y_pred = np.arange(36).reshape([6, 6]) self.assertAllEqual( metric(y_true, y_pred), [0.0, 0.0, 0.0, 0.0, 0.0, 1.0] )
82
metrics_functional_test.py
Python
keras/metrics/metrics_functional_test.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
1
224,002
46
15
10
184
15
0
62
155
get_theme_dir
Remove spaces at the ends of docstrings, normalize quotes
https://github.com/mkdocs/mkdocs.git
def get_theme_dir(self): entry_points = EntryPoint.parse_map(self.distribution.entry_points, self.distribution) if 'mkdocs.themes' not in entry_points: raise DistutilsOptionError("no mkdocs.themes are defined in entry_points") if self.theme is None and len(entry_points['mkdocs.themes']) == 1: # Default to the only theme defined in entry_points as none specified. self.theme = tuple(entry_points['mkdocs.themes'].keys())[0] if self.theme not in entry_points['mkdocs.themes']: raise DistutilsOptionError("you must specify a valid theme name to work on") theme = entry_points['mkdocs.themes'][self.theme] return path.dirname(theme.resolve().__file__)
108
babel.py
Python
mkdocs/commands/babel.py
e7f07cc82ab2be920ab426ba07456d8b2592714d
mkdocs
5
186,643
7
8
6
48
6
0
7
28
_autohsts_save_state
Add typing to certbot.apache (#9071) * Add typing to certbot.apache Co-authored-by: Adrien Ferrand <[email protected]>
https://github.com/certbot/certbot.git
def _autohsts_save_state(self) -> None: self.storage.put("autohsts", self._autohsts) self.storage.save()
27
configurator.py
Python
certbot-apache/certbot_apache/_internal/configurator.py
7d9e9a49005de7961e84d2a7c608db57dbab3046
certbot
1
285,199
88
13
12
230
31
0
103
151
get_engle_granger_two_step_cointegration_test
Here we merge all API Refactor related branches (#2236) * Update api.py * Updated forex menu * refactor ycrv command * refactor ycrv command black * refactor ecocal command * Minh changes * Adding space to test pushing * title fix ecocal df * get economic calendar annotation * fix investingcom tests * refactor index command * refactor overview command * give defaults to wsj view function args * rename date args investincom * refacto bigmac command * fix ecocal typo * refactor rtps command * alphavantage gdp * alphavantage gdp per capita * alphavantage cpi * alphavantage tyld * alphavantage inf * refactor macro command * refactor macro command w helpers * refactor treasury command * fix macro on terminal * treasury labels * refactor maturities * update treasury maturities doc strings * refactor get economic calendar finhub * refactor map command api * display map filter choices * route economy api to performance map * route economy api to performance map * display group choices on valuation command * refactor performance and valuation commands * refactor spectrum model and view * add choices to spectrum controller * delete image after view * fix model tests finviz * fix finciz view tests * refactor futures * fix some tests * fix more tests * fix controller test * refactor fred series notes * update fred notes docstring * refacto fred series ids * fix pred and qa when empty datasets * refactor fred * uncomment stuff * refacto get series data * fix some tests * set defaults on args * refactor fred yield curve * black * fix spell and remove ecocal names * fix linting * linting * pylint fix * change dangerous defaults * Working through crypto fixes (#2256) * Working through crypto fixes * Continued adding crypto stuff * Added crypto overview * Added test fixes * Added fixtures * Fixed tests * Fixed charting issue * Removed broken APIs * Final adjustments * Added test fixes * map get groups and get ycrv countries into old api * exposed econdb helper funcs * remove helpers * refactor search indices * linting * refactor arg currency * pylint from currency * Started switching crpyto ascending to ascend * Merging * Portfolio model arguements, params, and docstring * Refactored for etf commands (#2292) * Refactored for etf commands * Fixed tests * Added load command * Fixed menu * Portfolio logic fixes * Added econometrics (#2260) * Added econometrics * Fixed tests * Simplified API * Added test fixes * Added test csv * Allowed examples to be loaded * Fund refactor (#2291) * Fund refactor * Changed fund_name and fund to name * Changed ascending to ascend * Stock menu refactoring for easier API usage (#2194) * Stocks refactoring for easier API usage * Linting * Refactor newly added features * Linting * Fixing tests * Refactor common files used by stocks menu * Fixing flake8 * Fix linting and tests * Linting * Fix flake8 * refactor insider_data * refactor mentions * refactor watchlist * refactor sentiment * refactor sentiment * fix yahoofinance tests * refactor load and candle * refactor get_news and display_news * refactor stocks.ins.act * candle default matplotlib * fix yahoofinance_view tests * fix ark model tests * fix ark view tests * fix business insider model * fix business insider view * refactor csimarket model * fix tests csi market model * update dd controller * fix get suppliers tests * fix dd controller tests * fix finhub tests * fix finviz tests * fix fmp tests * fix marketwatch tests * corrected argument keywords in test_bt_model * corrected argument keywords in test_bt_view * refactor fa 
controller * refactor marketwatch view * refactor gov controller * fix tests fa av * fix tests elect * fix dcf tests * fix polygon tests * fix fmp tests * fix quiverquant tests * fix yahoofinance fa tests * fix more fa tests * fix insider tests * fix more tests * fix more tests * fix options tests * fix stock gov tests * fix tests test_ba_controller * fix tests for test_finviz_compare_model.py * fixed 2 tests * fixed tests * fixed tests * fixed tests * fixed tests * fixed tests * fixed tests * fixed tests * fixed tests * fixed tests * fixed tests * fix final tests * fixed tests * fixed tests * Fix tests * black * forgot to black tests * fixed tests * fixed tests * fixed tests * fixed tests * flakefix * Tests + code : Stocks / Discovery * fix tests * added recorder * fixed tests * fixed tests * black * black * remove unused imports * refactor display raw * sia dicts fix * pylint * linting * remove dangerous default * fix tests * fix beta model test * black * skip screener qa test * change sector path to sectors * update tests readme * fix metric defaults * black * substitute lost ticker * defaults cpic * another round on sia * refactor cramer * reduce default tweets on sentiment * refactor yf hist, corr, volume * arkorders default * refactor income, balance, cashflow * refacto scorr, screener, getfinnhub * refactor stockgrid * ibkr refactor * another round on stockgrid * add dividens end point * refactor discovery endpoints * update docstrings with similar input * refactor messages * refactor ba * refactor regioons * refactor twitter sentiment * refactor hist * refactor regions * give default to timeframe * refactor bunch of defaults and arg names * remove leftover imports * refactor vwap * let tests run * fix tests * fix stock tests * fix stockanalysis tests * flake * MYPY * Made important changes * added fixes * Fixed big issue * Added fixes to tests * fix qa tests * fix tests * fix 1 more test * last stocks failing * fix crypto test Co-authored-by: Chavithra PARANA <[email protected]> Co-authored-by: montezdesousa <[email protected]> Co-authored-by: hjoaquim <[email protected]> Co-authored-by: montezdesousa <[email protected]> Co-authored-by: colin99d <[email protected]> * fix portfolio tests * change period to window * update ca docstrings * refactor get_similar_companies func * Fixed * Update CI * Update CI 2 * Update CI 3 * Update dependencies Co-authored-by: colin99d <[email protected]> Co-authored-by: Colin Delahunty <[email protected]> Co-authored-by: montezdesousa <[email protected]> Co-authored-by: James Simmons <[email protected]> Co-authored-by: Theodore Aptekarev <[email protected]> Co-authored-by: minhhoang1023 <[email protected]> Co-authored-by: jose-donato <[email protected]> Co-authored-by: montezdesousa <[email protected]> Co-authored-by: northern-64bit <[email protected]> Co-authored-by: hjoaquim <[email protected]>
https://github.com/OpenBB-finance/OpenBBTerminal.git
def get_engle_granger_two_step_cointegration_test(dependent_series, independent_series): warnings.simplefilter(action="ignore", category=FutureWarning) long_run_ols = sm.OLS(dependent_series, sm.add_constant(independent_series)) warnings.simplefilter(action="default", category=FutureWarning) long_run_ols_fit = long_run_ols.fit() c, gamma = long_run_ols_fit.params z = long_run_ols_fit.resid short_run_ols = sm.OLS(dependent_series.diff().iloc[1:], (z.shift().iloc[1:])) short_run_ols_fit = short_run_ols.fit() alpha = short_run_ols_fit.params[0] # NOTE: The p-value returned by the adfuller function assumes we do not estimate z # first, but test stationarity of an unestimated series directly. This assumption # should have limited effect for high N, however. Critical values taking this into # account more accurately are provided in e.g. McKinnon (1990) and Engle & Yoo (1987). adfstat, pvalue, _, _, _ = adfuller(z, maxlag=1, autolag=None) return c, gamma, alpha, z, adfstat, pvalue
147
econometrics_model.py
Python
openbb_terminal/econometrics/econometrics_model.py
9e1a58e2dbedec4e4a9f9c2e32ddf091776c606b
OpenBBTerminal
1
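The two-step procedure above, a long-run OLS of the dependent series on the independent one followed by an ADF test on the residual z, can be reproduced on synthetic cointegrated data with statsmodels alone. As in the original comment, the reported p-value ignores that z is estimated, so it is only approximate:

import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(42)
n = 500
x = np.cumsum(rng.normal(size=n))                    # a random walk (non-stationary)
y = 2.0 + 0.5 * x + rng.normal(scale=0.5, size=n)    # cointegrated with x by construction

# Step 1: long-run relationship y = c + gamma * x + z
long_run = sm.OLS(y, sm.add_constant(x)).fit()
c, gamma = long_run.params
z = long_run.resid

# Step 2: ADF test on the residual; a small p-value supports cointegration
adf_stat, p_value, *_ = adfuller(z, maxlag=1, autolag=None)
print(f"c={c:.2f} gamma={gamma:.2f} ADF={adf_stat:.2f} p={p_value:.4f}")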