Dataset schema (one record per resolved GitHub issue; column statistics as reported by the dataset viewer):

| column | type |
|---|---|
| status | string (1 distinct value) |
| repo_name | string, 9–24 chars |
| repo_url | string, 28–43 chars |
| issue_id | int64, 1–104k |
| updated_files | string, 8–1.76k chars |
| title | string, 4–369 chars |
| body | string, 0–254k chars (nullable) |
| issue_url | string, 37–56 chars |
| pull_url | string, 37–54 chars |
| before_fix_sha | string, 40 chars |
| after_fix_sha | string, 40 chars |
| report_datetime | unknown |
| language | string (5 distinct values) |
| commit_datetime | unknown |

The pipe-delimited records follow, each `body` field containing the original GitHub issue markdown.
closed | apache/airflow | https://github.com/apache/airflow | 31,027 | ["airflow/config_templates/default_celery.py"] | Airflow doesn't recognize `rediss:...` url to point to a Redis broker | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Airflow 2.5.3
Redis is attached using a `rediss:...` URL. While deploying the instance, Airflow/Celery downgrades `rediss` to `redis` with the warning `[2023-05-02 18:38:30,377: WARNING/MainProcess] Secure redis scheme specified (rediss) with no ssl options, defaulting to insecure SSL behaviour.`
Adding `AIRFLOW__CELERY__SSL_ACTIVE=True` as an environment variable (the same as `ssl_active = true` in the `[celery]` section of the `airflow.cfg` file) fails with the error
`airflow.exceptions.AirflowException: The broker you configured does not support SSL_ACTIVE to be True. Please use RabbitMQ or Redis if you would like to use SSL for broker.`
<img width="1705" alt="Screenshot 2023-05-12 at 12 07 45 PM" src="https://github.com/apache/airflow/assets/94494788/b56cf054-d122-4baf-b6e9-75effe804731">
### What you think should happen instead
It seems that Airflow doesn't recognize a `rediss:...` URL as pointing to a Redis broker.
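For illustration only, the kind of scheme check that would produce this symptom, and how it could be relaxed. This is a hedged sketch, not the actual `airflow/config_templates/default_celery.py` code, and the variable names are mine:

```python
from airflow.exceptions import AirflowException

broker_url = "rediss://:password@my-redis-host:6380/0"  # hypothetical broker URL

# A check that only recognises "redis://" does not match "rediss://", so a
# secure Redis URL falls into the "unsupported broker" branch. Including the
# TLS scheme avoids that.
if broker_url.startswith(("amqp://", "redis://", "rediss://")):
    broker_use_ssl = {"ssl_cert_reqs": "CERT_REQUIRED"}  # placeholder SSL options
else:
    raise AirflowException(
        "The broker you configured does not support SSL_ACTIVE to be True. "
        "Please use RabbitMQ or Redis if you would like to use SSL for broker."
    )
```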
### How to reproduce
Airflow 2.5.3
Python 3.10.9
Redis 4.0.14 (url starts with `rediss:...`)
![Screenshot 2023-05-12 at 12 07 29 PM](https://github.com/apache/airflow/assets/94494788/04226516-cd29-4fe8-8ecc-aca2e1bb5045)
You need to add `AIRFLOW__CELERY__SSL_ACTIVE=True` as an environment variable, or `ssl_active = true` to the `[celery]` section of the `airflow.cfg` file, and then deploy the instance.
![Screenshot 2023-05-12 at 12 07 15 PM](https://github.com/apache/airflow/assets/94494788/214c7485-1718-4835-b921-a140e8e6311a)
### Operating System
Ubuntu 22.04.2 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
Heroku platform, heroku-22 stack, python 3.10.9
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31027 | https://github.com/apache/airflow/pull/31028 | d91861d3bdbde18c937978c878d137d6c758e2c6 | 471fdacd853a5bcb190e1ffc017a4e650097ed69 | "2023-05-02T20:10:11Z" | python | "2023-06-07T17:09:46Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,025 | ["airflow/www/static/js/dag/details/graph/Node.tsx", "airflow/www/static/js/dag/details/graph/utils.ts", "airflow/www/static/js/utils/graph.ts"] | New graph view renders incorrectly when prefix_group_id=false | ### Apache Airflow version
2.6.0
### What happened
If a task group in a DAG has `prefix_group_id=False` in its config, the new graph won't render correctly. When the group is collapsed, nothing is shown and there is an error in the console. When the group is expanded, the nodes render but their edges become disconnected. As reported in https://github.com/apache/airflow/issues/29852#issuecomment-1531766479
This is because we use the prefix to determine where an edge is supposed to be rendered. We shouldn't make that assumption and should instead iterate through the nodes to find where an edge belongs.
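The fix direction described above can be illustrated with a small sketch. The real code lives in the TypeScript graph utilities listed in `updated_files`; this is only a language-agnostic illustration written in Python, with made-up field names:

```python
def find_containing_group(node: dict, target_id: str):
    """Depth-first search for the group node that contains target_id,
    instead of inferring membership from a "group_id." prefix."""
    for child in node.get("children", []):
        if child.get("id") == target_id:
            return node
        found = find_containing_group(child, target_id)
        if found is not None:
            return found
    return None
```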
### What you think should happen instead
It renders like any other task group
### How to reproduce
Add `prefix_group_id=False` to a task group
### Operating System
any
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31025 | https://github.com/apache/airflow/pull/32764 | 53c6305bd0a914738074821d5f5f233e3ed5bee5 | 3e467ba510d29e912d89115769726111b8bce891 | "2023-05-02T18:15:05Z" | python | "2023-07-22T10:23:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,014 | ["airflow/www/static/js/trigger.js", "airflow/www/templates/airflow/trigger.html", "docs/apache-airflow/core-concepts/params.rst", "tests/www/views/test_views_trigger_dag.py"] | Exception when manually triggering dags via UI with `params` defined. | ### Apache Airflow version
2.6.0
### What happened
When clicking "Trigger DAG w/ config" in the DAG UI, I receive a 500 "Oops" page when `params` are defined for the DAG.
The Airflow webserver logs show this:
```
2023-05-02T13:02:50 - [2023-05-02T12:02:50.249+0000] {app.py:1744} ERROR - Exception on /trigger [GET]
2023-05-02T13:02:50 - Traceback (most recent call last):
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 2529, in wsgi_app
2023-05-02T13:02:50 - response = self.full_dispatch_request()
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 1825, in full_dispatch_request
2023-05-02T13:02:50 - rv = self.handle_user_exception(e)
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 1823, in full_dispatch_request
2023-05-02T13:02:50 - rv = self.dispatch_request()
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 1799, in dispatch_request
2023-05-02T13:02:50 - return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/airflow/www/auth.py", line 47, in decorated
2023-05-02T13:02:50 - return func(*args, **kwargs)
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/airflow/www/decorators.py", line 125, in wrapper
2023-05-02T13:02:50 - return f(*args, **kwargs)
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 76, in wrapper
2023-05-02T13:02:50 - return func(*args, session=session, **kwargs)
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/airflow/www/views.py", line 1967, in trigger
2023-05-02T13:02:50 - return self.render_template(
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/airflow/www/views.py", line 640, in render_template
2023-05-02T13:02:50 - return super().render_template(
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/flask_appbuilder/baseviews.py", line 339, in render_template
2023-05-02T13:02:50 - return render_template(
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/flask/templating.py", line 147, in render_template
2023-05-02T13:02:50 - return _render(app, template, context)
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/flask/templating.py", line 130, in _render
2023-05-02T13:02:50 - rv = template.render(context)
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/jinja2/environment.py", line 1301, in render
2023-05-02T13:02:50 - self.environment.handle_exception()
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/jinja2/environment.py", line 936, in handle_exception
2023-05-02T13:02:50 - raise rewrite_traceback_stack(source=source)
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/airflow/www/templates/airflow/trigger.html", line 106, in top-level template code
2023-05-02T13:02:50 - <span class="help-block">{{ form_details.description }}</span>
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/airflow/www/templates/airflow/main.html", line 21, in top-level template code
2023-05-02T13:02:50 - {% from 'airflow/_messages.html' import show_message %}
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/flask_appbuilder/templates/appbuilder/baselayout.html", line 2, in top-level template code
2023-05-02T13:02:50 - {% import 'appbuilder/baselib.html' as baselib %}
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/flask_appbuilder/templates/appbuilder/init.html", line 42, in top-level template code
2023-05-02T13:02:50 - {% block body %}
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/flask_appbuilder/templates/appbuilder/baselayout.html", line 19, in block 'body'
2023-05-02T13:02:50 - {% block content %}
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/airflow/www/templates/airflow/trigger.html", line 162, in block 'content'
2023-05-02T13:02:50 - {{ form_element(form_key, form_details) }}
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/jinja2/runtime.py", line 777, in _invoke
2023-05-02T13:02:50 - rv = self._func(*arguments)
2023-05-02T13:02:50 - File "/opt/airflow/.local/lib/python3.10/site-packages/airflow/www/templates/airflow/trigger.html", line 83, in template
2023-05-02T13:02:50 - {%- for txt in form_details.value -%}
2023-05-02T13:02:50 - TypeError: 'NoneType' object is not iterable
```
### What you think should happen instead
No error is shown (worked in 2.5.2)
### How to reproduce
Create a DAG with the following config defined for parameters:
```
params={
"delete_actions": Param(
False,
description="Whether to delete actions after execution.",
type="boolean",
),
"dates": Param(
None,
description="An explicit list of date strings to run on.",
type=["null", "array"],
minItems=1,
),
"start_date_inclusive": Param(
None,
description="An inclusive start-date used to generate a list of dates to run on.",
type=["null", "string"],
pattern="^[0-9]{4}[-/][0-9]{2}[-/][0-9]{2}$",
),
"end_date_exclusive": Param(
None,
description="An exclusive end-date used to generate a list of dates to run on.",
type=["null", "string"],
pattern="^[0-9]{4}[-/][0-9]{2}[-/][0-9]{2}$",
),
"actions_bucket_name": Param(
None,
description='An S3 bucket to read batch actions from. Set as "ACTIONS_BUCKET".',
type=["null", "string"],
),
"actions_path_prefix": Param(
None,
description='An S3 bucket to read batch actions from. Prefixes "ACTIONS_PATH".',
type=["null", "string"],
pattern="^.+/$",
),
"sns_output_topic_name": Param(
None,
description='An SNS output topic ARN to set as "DATA_READY_TO_INDEX_OUTPUT_TOPIC."',
type=["null", "string"],
),
},
# required to convert params to their correct types
render_template_as_native_obj=True,
```
Deploy the DAG, click the manual trigger button.
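As a possible workaround (a hedged guess based on the traceback, not a verified fix): giving array-typed params a non-None default appears to sidestep the template loop over `None`, since the form then has something iterable to render. Note the default must still satisfy the rest of the schema:

```python
from airflow.models.param import Param

# Hypothetical mitigation sketch: an empty-list default instead of None.
params = {
    "dates": Param(
        [],
        description="An explicit list of date strings to run on.",
        type=["null", "array"],
    ),
}
```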
### Operating System
Debian
### Versions of Apache Airflow Providers
N/A
### Deployment
Other Docker-based deployment
### Deployment details
Amazon ECS
Python version: 3.10.11
Airflow version: 2.6.0 (official docker image as base)
### Anything else
Occurs every time.
Does *NOT* occur when `params` are not defined on the DAG.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31014 | https://github.com/apache/airflow/pull/31078 | 49cc213919a7e2a5d4bdc9f952681fa4ef7bf923 | b8b18bd74b72edc4b40e91258fccc54cf3aff3c1 | "2023-05-02T12:08:03Z" | python | "2023-05-06T12:20:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,984 | ["airflow/models/dagrun.py", "airflow/models/taskinstance.py", "tests/models/test_dagrun.py", "tests/models/test_taskinstance.py"] | Unable to remove DagRun and TaskInstance with note | ### Apache Airflow version
2.6.0
### What happened
Hi, I'm unable to remove a DagRun or TaskInstance when it has a note attached.
### What you think should happen instead
Should be able to remove DagRuns or TaskInstances with or without notes.
Also, the note should be removed when its parent entity is removed.
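One direction this could take, sketched with a SQLAlchemy cascade so that deleting the parent also deletes its note. The relationship and attribute names here are assumptions for illustration, not the actual model code:

```python
from sqlalchemy.orm import relationship

# Hypothetical sketch on the DagRun model: cascade note deletion with the run,
# instead of trying to null out the note's primary-key foreign key.
dag_run_note = relationship(
    "DagRunNote",
    back_populates="dag_run",
    uselist=False,
    cascade="all, delete, delete-orphan",
)
```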
### How to reproduce
1. Create a note on a DagRun or TaskInstance.
2. Try to remove the row the note has been added to by clicking the delete record icon. This will display the alert `General Error <class 'AssertionError'>` in the UI.
3. Select the checkbox of a DagRun containing a note, click the `Actions` dropdown and select `Delete`. This won't display anything in the UI.
### Operating System
OSX 12.6
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes==5.2.2
apache-airflow-providers-common-sql==1.3.4
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-http==4.2.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-sqlite==3.3.1
### Deployment
Virtualenv installation
### Deployment details
Deployed using Postgresql 13.9 and sqlite 3
### Anything else
DagRun deletion Log
```
[2023-05-01T13:06:42.125+0700] {interface.py:790} ERROR - Delete record error: Dependency rule tried to blank-out primary key column 'dag_run_note.dag_run_id' on instance '<DagRunNote at 0x1125afa00>'
Traceback (most recent call last):
File "/opt/airflow/.venv/lib/python3.10/site-packages/flask_appbuilder/models/sqla/interface.py", line 775, in delete
self.session.commit()
File "<string>", line 2, in commit
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1451, in commit
self._transaction.commit(_to_root=self.future)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 829, in commit
self._prepare_impl()
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl
self.session.flush()
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3446, in flush
self._flush(objects)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3585, in _flush
with util.safe_reraise():
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3546, in _flush
flush_context.execute()
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
rec.execute(self)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 577, in execute
self.dependency_processor.process_deletes(uow, states)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/dependency.py", line 552, in process_deletes
self._synchronize(
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/dependency.py", line 610, in _synchronize
sync.clear(dest, self.mapper, self.prop.synchronize_pairs)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/sync.py", line 86, in clear
raise AssertionError(
AssertionError: Dependency rule tried to blank-out primary key column 'dag_run_note.dag_run_id' on instance '<DagRunNote at 0x1125afa00>'
```
TaskInstance deletion Log
```
[2023-05-01T13:06:42.125+0700] {interface.py:790} ERROR - Delete record error: Dependency rule tried to blank-out primary key column 'task_instance_note.task_id' on instance '<TaskInstanceNote at 0x1126ba770>'
Traceback (most recent call last):
File "/opt/airflow/.venv/lib/python3.10/site-packages/flask_appbuilder/models/sqla/interface.py", line 775, in delete
self.session.commit()
File "<string>", line 2, in commit
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1451, in commit
self._transaction.commit(_to_root=self.future)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 829, in commit
self._prepare_impl()
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl
self.session.flush()
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3446, in flush
self._flush(objects)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3585, in _flush
with util.safe_reraise():
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3546, in _flush
flush_context.execute()
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
rec.execute(self)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 577, in execute
self.dependency_processor.process_deletes(uow, states)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/dependency.py", line 552, in process_deletes
self._synchronize(
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/dependency.py", line 610, in _synchronize
sync.clear(dest, self.mapper, self.prop.synchronize_pairs)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/sync.py", line 86, in clear
raise AssertionError(
AssertionError: Dependency rule tried to blank-out primary key column 'task_instance_note.task_id' on instance '<TaskInstanceNote at 0x1126ba770>'
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30984 | https://github.com/apache/airflow/pull/30987 | ec7674f111177c41c02e5269ad336253ed9c28b4 | 0212b7c14c4ce6866d5da1ba9f25d3ecc5c2188f | "2023-05-01T06:29:36Z" | python | "2023-05-01T21:14:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,947 | ["BREEZE.rst"] | BREEZE: add troubleshooting section to cover ETIMEDOUT during start-airflow | ### What do you see as an issue?
The BREEZE troubleshooting section does not cover the ETIMEDOUT error or a potential fix for when it happens:
https://github.com/apache/airflow/blob/main/BREEZE.rst#troubleshooting
### Solving the problem
BREEZE.rst can be improved by adding ways to troubleshoot and fix the ETIMEDOUT error when running `start-airflow`, which seems to be one of the common problems when using BREEZE.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30947 | https://github.com/apache/airflow/pull/30949 | 783aa9cecbf47b4d0e5509d1919f644b9689b6b3 | bd542fdf51ad9550e5c4348f11e70b5a6c9adb48 | "2023-04-28T18:03:00Z" | python | "2023-04-28T20:37:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,932 | ["airflow/models/baseoperator.py", "tests/models/test_mappedoperator.py"] | Tasks created using "dynamic task mapping" ignore the Task Group passed as argument | ### Apache Airflow version
main (development)
### What happened
When creating a DAG with Task Groups and a Mapped Operator, if the Task Group is passed as an argument to the Mapped Operator's `partial` method, it is ignored and the operator is not added to the group.
### What you think should happen instead
The Mapped Operator should be added to the Task Group passed as an argument.
### How to reproduce
Create a DAG with a source code like the following one
```python
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.empty import EmptyOperator
from airflow.utils import timezone
from airflow.utils.task_group import TaskGroup
with DAG("dag", start_date=timezone.datetime(2016, 1, 1)) as dag:
start = EmptyOperator(task_id="start")
finish = EmptyOperator(task_id="finish")
group = TaskGroup("test-group")
commands = ["echo a", "echo b", "echo c"]
mapped = BashOperator.partial(task_id="task_2", task_group=group).expand(bash_command=commands)
start >> group >> finish
# assert mapped.task_group == group
```
### Operating System
macOS 13.2.1 (22D68)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30932 | https://github.com/apache/airflow/pull/30933 | 1d4b1410b027c667d4e2f51f488f98b166facf71 | 4ee2de1e38a85abb89f9f313a3424c7368e12d1a | "2023-04-27T23:34:38Z" | python | "2023-04-29T21:27:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,900 | ["airflow/api_connexion/endpoints/dag_endpoint.py", "tests/api_connexion/endpoints/test_dag_endpoint.py"] | REST API, order_by parameter in dags list is not taken into account | ### Apache Airflow version
2.5.3
### What happened
It seems that the `order_by` parameter is not used when calling the dags list endpoint of the REST API.
The following two commands return the same results, which should not be possible because one is ascending and the other descending:
curl -X 'GET' 'http://<server_name>:<port>/api/v1/dags?limit=100&order_by=dag_id&only_active=true' -H 'accept: application/json'
curl -X 'GET' 'http://<server_name>:<port>/api/v1/dags?limit=100&order_by=-dag_id&only_active=true' -H 'accept: application/json'
By the way, giving an incorrect field name doesn't throw an error.
### What you think should happen instead
_No response_
### How to reproduce
The following two commands return the same results:
curl -X 'GET' 'http://<server_name>:<port>/api/v1/dags?limit=100&order_by=-dag_id&only_active=true' -H 'accept: application/json'
curl -X 'GET' 'http://<server_name>:<port>/api/v1/dags?limit=100&order_by=dag_id&only_active=true' -H 'accept: application/json'
Same problem is visible with the swagger ui
### Operating System
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)" NAME="Debian GNU/Linux" VERSION_ID="11" VERSION="11 (bullseye)" VERSION_CODENAME=bullseye ID=debian HOME_URL="https://www.debian.org/" SUPPORT_URL="https://www.debian.org/support" BUG_REPORT_URL="https://bugs.debian.org/"
### Versions of Apache Airflow Providers
apache-airflow-providers-common-sql==1.3.4
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-http==4.2.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-microsoft-mssql==3.3.2
apache-airflow-providers-mysql==4.0.2
apache-airflow-providers-oracle==3.6.0
apache-airflow-providers-sqlite==3.3.1
apache-airflow-providers-vertica==3.3.1
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30900 | https://github.com/apache/airflow/pull/30926 | 36fe6d0377d37b5f6be8ea5659dcabb44b4fc233 | 1d4b1410b027c667d4e2f51f488f98b166facf71 | "2023-04-27T10:10:57Z" | python | "2023-04-29T16:07:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,884 | ["airflow/jobs/dag_processor_job_runner.py"] | DagProcessor Performance Regression | ### Apache Airflow version
2.5.3
### What happened
Upgrading from `2.4.3` to `2.5.3` caused a significant increase in DAG processing time on the standalone dag processor (from ~1-2s to ~60s):
```
/opt/airflow/dags/ecco_airflow/dags/image_processing/product_image_load.py 0 -1 56.68s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/known_consumers/known_consumers.py 0 -1 56.64s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/monitoring/row_counts.py 0 -1 56.67s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/omnichannel/base.py 0 -1 56.66s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/omnichannel/oc_data.py 0 -1 56.67s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/omnichannel/oc_stream.py 0 -1 56.52s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/reporting/reporting_data_foundation.py 0 -1 56.63s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/retail_analysis/retail_analysis_dbt.py 0 -1 56.66s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/rfm_segments/rfm_segments.py 0 -1 56.02s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/utils/airflow.py 0 -1 56.65s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/bronze/aad_users_listing.py 1 0 55.51s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/bronze/funnel_io.py 1 0 56.13s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/bronze/iar_param.py 1 0 56.50s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/bronze/sfmc_copy.py 1 0 56.59s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/bronze/us_legacy_datawarehouse.py 1 0 55.15s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/cdp/ecco_cdp_auditing.py 1 0 56.54s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/cdp/ecco_cdp_budget_daily_phasing.py 1 0 56.63s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/cdp/ecco_cdp_gold_rm_tests.py 1 0 55.00s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/consumer_entity_matching/graph_entity_matching.py 1 0 56.67s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/data_backup/data_backup.py 1 0 56.69s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/hive/adhoc_entity_publish.py 1 0 55.33s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/image_regression/train.py 1 0 56.63s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/maintenance/db_maintenance.py 1 0 56.58s 2023-04-26T12:56:15
```
Also seeing messages like these
```
[2023-04-26T12:56:15.322+0000] {manager.py:979} DEBUG - Processor for /opt/airflow/dags/ecco_airflow/dags/bronze/us_legacy_datawarehouse.py finished
[2023-04-26T12:56:15.323+0000] {processor.py:296} DEBUG - Waiting for <ForkProcess name='DagFileProcessor68-Process' pid=116 parent=7 stopped exitcode=0>
[2023-04-26T12:56:15.323+0000] {manager.py:979} DEBUG - Processor for /opt/airflow/dags/ecco_airflow/dags/cdp/ecco_cdp_gold_rm_tests.py finished
[2023-04-26T12:56:15.323+0000] {processor.py:296} DEBUG - Waiting for <ForkProcess name='DagFileProcessor69-Process' pid=122 parent=7 stopped exitcode=0>
[2023-04-26T12:56:15.324+0000] {manager.py:979} DEBUG - Processor for /opt/airflow/dags/ecco_airflow/dags/bronze/streaming/sap_inventory_feed.py finished
[2023-04-26T12:56:15.324+0000] {processor.py:314} DEBUG - Waiting for <ForkProcess name='DagFileProcessor70-Process' pid=128 parent=7 stopped exitcode=-SIGKILL>
[2023-04-26T12:56:15.324+0000] {manager.py:986} ERROR - Processor for /opt/airflow/dags/ecco_airflow/dags/bronze/streaming/sap_inventory_feed.py exited with return code -9.
```
In `2.4.3`:
```
/opt/airflow/dags/ecco_airflow/dags/image_regression/train.py 1 0 1.34s 2023-04-26T14:19:08
/opt/airflow/dags/ecco_airflow/dags/known_consumers/known_consumers.py 1 0 1.12s 2023-04-26T14:19:00
/opt/airflow/dags/ecco_airflow/dags/maintenance/db_maintenance.py 1 0 0.63s 2023-04-26T14:18:27
/opt/airflow/dags/ecco_airflow/dags/monitoring/row_counts.py 1 0 3.74s 2023-04-26T14:18:45
/opt/airflow/dags/ecco_airflow/dags/omnichannel/oc_data.py 1 0 1.21s 2023-04-26T14:18:47
/opt/airflow/dags/ecco_airflow/dags/omnichannel/oc_stream.py 1 0 1.22s 2023-04-26T14:18:30
/opt/airflow/dags/ecco_airflow/dags/reporting/reporting_data_foundation.py 1 0 1.39s 2023-04-26T14:19:08
/opt/airflow/dags/ecco_airflow/dags/retail_analysis/retail_analysis_dbt.py 1 0 1.32s 2023-04-26T14:18:51
/opt/airflow/dags/ecco_airflow/dags/rfm_segments/rfm_segments.py 1 0 1.20s 2023-04-26T14:18:34
```
### What you think should happen instead
Dag processing time remains unchanged
### How to reproduce
Provision Airflow with the following settings:
## Airflow 2.5.3
- K8s 1.25.6
- Kubernetes executor
- Postgres backend (Postgres 11.0)
- Deploy using Airflow Helm **v1.9.0** with image **2.5.3-python3.9**
- pgbouncer enabled
- standalone dag processor with 3500m cpu / 4000Mi memory, single replica
- dags and logs mounted from RWM volume (Azure files)
## Airflow 2.4.3
- K8s 1.25.6
- Kubernetes executor
- Postgres backend (Postgres 11.0)
- Deploy using Airflow Helm **v1.7.0** with image **2.4.3-python3.9**
- pgbouncer enabled
- standalone dag processor with 2500m cpu / 2000Mi memory, single replica
- dags and logs mounted from RWM volume (Azure files)
## Image modifications
We use an image built from `apache/airflow:2.4.3-python3.9`, with some dependencies added/reinstalled at different versions.
### Poetry dependency spec:
For `2.5.3`:
```
[tool.poetry.dependencies]
python = ">=3.9,<3.11"
authlib = "~1.0.1"
adapta = { version = "==2.2.3", extras = ["azure", "storage"] }
numpy = "==1.23.3"
db-dtypes = "~1.0.4"
gevent = "^21.12.0"
sqlalchemy = ">=1.4,<2.0"
snowflake-sqlalchemy = ">=1.4,<2.0"
esd-services-api-client = "~0.6.0"
apache-airflow-providers-common-sql = "~1.3.1"
apache-airflow-providers-databricks = "~3.1.0"
apache-airflow-providers-google = "==8.4.0"
apache-airflow-providers-microsoft-azure = "~5.2.1"
apache-airflow-providers-datadog = "~3.0.0"
apache-airflow-providers-snowflake = "~3.3.0"
apache-airflow = "==2.5.3"
dataclasses-json = ">=0.5.7,<0.6"
```
For `2.4.3`:
```
[tool.poetry.dependencies]
python = ">=3.9,<3.11"
authlib = "~1.0.1"
adapta = { version = "==2.2.3", extras = ["azure", "storage"] }
numpy = "==1.23.3"
db-dtypes = "~1.0.4"
gevent = "^21.12.0"
sqlalchemy = ">=1.4,<2.0"
snowflake-sqlalchemy = ">=1.4,<2.0"
esd-services-api-client = "~0.6.0"
apache-airflow-providers-common-sql = "~1.3.1"
apache-airflow-providers-databricks = "~3.1.0"
apache-airflow-providers-google = "==8.4.0"
apache-airflow-providers-microsoft-azure = "~5.2.1"
apache-airflow-providers-datadog = "~3.0.0"
apache-airflow-providers-snowflake = "~3.3.0"
apache-airflow = "==2.4.3"
dataclasses-json = ">=0.5.7,<0.6"
```
### Operating System
Container OS: Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==6.0.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.4.0
apache-airflow-providers-common-sql==1.3.4
apache-airflow-providers-databricks==3.1.0
apache-airflow-providers-datadog==3.0.0
apache-airflow-providers-docker==3.2.0
apache-airflow-providers-elasticsearch==4.2.1
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-google==8.4.0
apache-airflow-providers-grpc==3.0.0
apache-airflow-providers-hashicorp==3.1.0
apache-airflow-providers-http==4.3.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-microsoft-azure==5.2.1
apache-airflow-providers-mysql==3.2.1
apache-airflow-providers-odbc==3.1.2
apache-airflow-providers-postgres==5.2.2
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-sendgrid==3.0.0
apache-airflow-providers-sftp==4.1.0
apache-airflow-providers-slack==6.0.0
apache-airflow-providers-snowflake==3.3.0
apache-airflow-providers-sqlite==3.3.2
apache-airflow-providers-ssh==3.2.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
See How-to-reproduce section
### Anything else
Occurs when upgrading the Helm chart installation from 1.7.0/2.4.3 to 1.9.0/2.5.3.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30884 | https://github.com/apache/airflow/pull/30899 | 7ddad1a24b1664cef3827b06d9c71adbc558e9ef | 00ab45ffb7dee92030782f0d1496d95b593fd4a7 | "2023-04-26T14:47:31Z" | python | "2023-04-27T11:27:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,883 | ["airflow/models/skipmixin.py", "tests/models/test_skipmixin.py"] | BranchPythonOperator skips downstream tasks for all mapped instances in TaskGroup mapping | ### Apache Airflow version
2.5.1, 2.6.0
### What happened
Hello!
When using a branching operator in a mapped task group, the tasks it skips are skipped across all mapped instances of the task group.
Here is an example DAG exhibiting the issue.
![image](https://user-images.githubusercontent.com/11246353/234595433-6c1460b7-e808-4de1-9eb8-8b9fdb6f616c.png)
When the BranchOperator sets a downstream task as "skipped", it will also do so retroactively.
If branch_a is selected and has time to run before the first time branch_b is selected, it will run without issue. However, the status of that instance will still be set to skipped, and any subsequent choice of "branch_a" will be skipped.
Logs for such a case are below (obtained using the DAG below).
I am running Airflow v2.5.1.
### What you think should happen instead
branch_a selected:
```log
[2023-04-26, 13:58:09 UTC] {python.py:177} INFO - Done. Returned value was: showcase_branching_issues.branch_a
[2023-04-26, 13:58:09 UTC] {python.py:211} INFO - Branch callable return showcase_branching_issues.branch_a
[2023-04-26, 13:58:09 UTC] {skipmixin.py:155} INFO - Following branch showcase_branching_issues.branch_a
[2023-04-26, 13:58:09 UTC] {skipmixin.py:211} INFO - Skipping tasks ['showcase_branching_issues.branch_b']
[2023-04-26, 13:58:09 UTC] {taskinstance.py:1318} INFO - Marking task as SUCCESS. dag_id=branching_issue, task_id=showcase_branching_issues.branch_int, map_index=0, execution_date=20230426T135806, start_date=20230426T135809, end_date=20230426T135809
[2023-04-26, 13:58:09 UTC] {local_task_job.py:208} INFO - Task exited with return code 0
[2023-04-26, 13:58:09 UTC] {taskinstance.py:2578} INFO - 2 downstream tasks scheduled from follow-on schedule check
```
branch_a running:
```log
[2023-04-26, 13:58:10 UTC] {python.py:177} INFO - Done. Returned value was: None
[2023-04-26, 13:58:10 UTC] {taskinstance.py:1318} INFO - Marking task as SUCCESS. dag_id=branching_issue, task_id=showcase_branching_issues.branch_a, map_index=0, execution_date=20230426T135806, start_date=20230426T135810, end_date=20230426T135810
[2023-04-26, 13:58:10 UTC] {local_task_job.py:208} INFO - Task exited with return code 0
[2023-04-26, 13:58:10 UTC] {taskinstance.py:2578} INFO - 0 downstream tasks scheduled from follow-on schedule check
```
branch_b selected:
```log
[2023-04-26, 13:58:14 UTC] {python.py:177} INFO - Done. Returned value was: showcase_branching_issues.branch_b
[2023-04-26, 13:58:14 UTC] {python.py:211} INFO - Branch callable return showcase_branching_issues.branch_b
[2023-04-26, 13:58:14 UTC] {skipmixin.py:155} INFO - Following branch showcase_branching_issues.branch_b
[2023-04-26, 13:58:14 UTC] {skipmixin.py:211} INFO - Skipping tasks ['showcase_branching_issues.branch_a']
[2023-04-26, 13:58:14 UTC] {taskinstance.py:1318} INFO - Marking task as SUCCESS. dag_id=branching_issue, task_id=showcase_branching_issues.branch_int, map_index=1, execution_date=20230426T135806, start_date=20230426T135809, end_date=20230426T135814
[2023-04-26, 13:58:14 UTC] {local_task_job.py:208} INFO - Task exited with return code 0
[2023-04-26, 13:58:14 UTC] {taskinstance.py:2578} INFO - 0 downstream tasks scheduled from follow-on schedule check
```
All branch_a and branch_b instances were set to skipped, and no branch_b instance ran.
---
Branch selection and "skipped" status should be relative to a particular task_group instance.
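Conceptually, the skip would need to be scoped to the branching task's own `map_index`. The snippet below is only an illustration of that idea with assumed names; it is not the actual `SkipMixin` code:

```python
from airflow.models import TaskInstance
from airflow.utils.state import TaskInstanceState

def skip_within_mapped_instance(session, dag_run, task_ids, map_index):
    # Restrict the skip to task instances that share this group's map_index,
    # so the other expanded copies of the task group stay untouched.
    session.query(TaskInstance).filter(
        TaskInstance.dag_id == dag_run.dag_id,
        TaskInstance.run_id == dag_run.run_id,
        TaskInstance.task_id.in_(task_ids),
        TaskInstance.map_index == map_index,
    ).update({TaskInstance.state: TaskInstanceState.SKIPPED}, synchronize_session=False)
```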
### How to reproduce
Here is a minimal example DAG which showcases the issue:
```python
from datetime import datetime
from airflow.decorators import dag, task, task_group
@dag(
dag_id="branching_issue",
schedule=None,
start_date=datetime(2021, 1, 1),
)
def BranchingIssue():
@task
def branch_b():
pass
@task
def branch_a():
pass
@task
def initiate_dynamic_mapping():
import random
random_len = random.randint(1, 10)
return [i for i in range(random_len)]
@task.branch
def branch_int(k):
import time
branch = "showcase_branching_issues."
if k % 2 == 0:
branch += "branch_a"
else:
time.sleep(5)
branch += "branch_b"
return branch
@task_group
def showcase_branching_issues(k):
selected_branch = branch_int(k)
selected_branch >> [branch_a(), branch_b()]
list_k = initiate_dynamic_mapping()
showcase_branching_issues.expand(k=list_k)
dag = BranchingIssue()
```
### Operating System
Ubuntu 22.04.1 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
I tried searching for related issues or fixes in newer/upcoming releases but found nothing, please excuse me if I missed something.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30883 | https://github.com/apache/airflow/pull/31153 | ef75a3a6757a033586c933f7b62ab86f846af754 | 9985c3571175d054bfabef02979ecc934e6aae73 | "2023-04-26T14:19:04Z" | python | "2023-07-06T16:06:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,838 | ["airflow/www/templates/airflow/dags.html", "airflow/www/views.py"] | Sort Dag List by Last Run Date | ### Description
It would be helpful to me if I could see the most recently run DAGs and their health in the Airflow UI. Right now many fields are sortable, but not last run.
The solution here would likely build off the previous work from this issue: https://github.com/apache/airflow/issues/8459
### Use case/motivation
When my team updates a docker image we want to confirm our DAGs are still running healthy. One way to do that would be to pop open the Airflow UI, look at our team's DAGs (using the label tag), and confirm the most recently run jobs are still healthy.
### Related issues
I think it would build off of https://github.com/apache/airflow/issues/8459
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30838 | https://github.com/apache/airflow/pull/31234 | 7ebda3898db2eee72d043a9565a674dea72cd8fa | 3363004450355582712272924fac551dc1f7bd56 | "2023-04-24T13:41:07Z" | python | "2023-05-17T15:11:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,797 | ["airflow/serialization/serde.py", "tests/utils/test_json.py"] | Deserialization of nested dict failing | ### Apache Airflow version
2.6.0b1
### What happened
When returning nested dictionary data from Task A and passing the returned value to Task B, deserialization fails if the data is a nested dictionary with a non-primitive or iterable type.
```
Traceback (most recent call last):
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/models/abstractoperator.py", line 570, in _do_render_template_fields
rendered_content = self.render_template(
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/template/templater.py", line 162, in render_template
return tuple(self.render_template(element, context, jinja_env, oids) for element in value)
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/template/templater.py", line 162, in <genexpr>
return tuple(self.render_template(element, context, jinja_env, oids) for element in value)
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/template/templater.py", line 158, in render_template
return value.resolve(context)
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/utils/session.py", line 76, in wrapper
return func(*args, session=session, **kwargs)
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/models/xcom_arg.py", line 342, in resolve
result = ti.xcom_pull(
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/utils/session.py", line 73, in wrapper
return func(*args, **kwargs)
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/models/taskinstance.py", line 2454, in xcom_pull
return XCom.deserialize_value(first)
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/models/xcom.py", line 666, in deserialize_value
return BaseXCom._deserialize_value(result, False)
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/models/xcom.py", line 659, in _deserialize_value
return json.loads(result.value.decode("UTF-8"), cls=XComDecoder, object_hook=object_hook)
File "/opt/homebrew/Cellar/[email protected]/3.10.10_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/__init__.py", line 359, in loads
return cls(**kw).decode(s)
File "/opt/homebrew/Cellar/[email protected]/3.10.10_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/opt/homebrew/Cellar/[email protected]/3.10.10_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/utils/json.py", line 122, in object_hook
val = deserialize(dct)
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/serialization/serde.py", line 212, in deserialize
return {str(k): deserialize(v, full) for k, v in o.items()}
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/serialization/serde.py", line 212, in <dictcomp>
return {str(k): deserialize(v, full) for k, v in o.items()}
File "/Users/utkarsharma/sandbox/astronomer/apache-airflow-provider-transfers/.nox/dev/lib/python3.10/site-packages/airflow/serialization/serde.py", line 206, in deserialize
raise TypeError()
```
The way we deserialize is by adding a [custom decoder](https://docs.python.org/3/library/json.html#encoders-and-decoders) for JSON and overriding the `object_hook` as shown below.
https://github.com/apache/airflow/blob/ebe2f2f626ffee4b9d0f038fe5b89c322125a49b/airflow/utils/json.py#L107-L126
But if you try to run the code below:
```
import json
def object_hook(dct: dict) -> dict:
print("dct : ", dct)
return dct
if __name__ == "__main__":
val = json.dumps({"a": {"a-1": 1, "a-2": {"a-2-1": 1, "a-2-2": 2}}, "b": {"b-1": 1, "b-2": 2}, "c": {"c-1": 1, "c-2": 2}})
print("val : ", val, "\n\n")
return_val = json.loads(val, object_hook=object_hook)
```
Output:
```
val : {"a": {"a-1": 1, "a-2": {"a-2-1": 1, "a-2-2": 2}}, "b": {"b-1": 1, "b-2": 2}, "c": {"c-1": 1, "c-2": 2}}
dct : {'a-2-1': 1, 'a-2-2': 2}
dct : {'a-1': 1, 'a-2': {'a-2-1': 1, 'a-2-2': 2}}
dct : {'b-1': 1, 'b-2': 2}
dct : {'c-1': 1, 'c-2': 2}
dct : {'a': {'a-1': 1, 'a-2': {'a-2-1': 1, 'a-2-2': 2}}, 'b': {'b-1': 1, 'b-2': 2}, 'c': {'c-1': 1, 'c-2': 2}}
```
`object_hook` is called with every decoded value. Because of this, `deserialize` gets called even on already-deserialized data, causing this issue.
deserialize function code:
https://github.com/apache/airflow/blob/ebe2f2f626ffee4b9d0f038fe5b89c322125a49b/airflow/serialization/serde.py#L174
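One way to guard against this, sketched below. The marker key names are my assumption about what `serde.py` uses to tag serialized payloads; treat them as placeholders:

```python
from airflow.serialization.serde import deserialize

CLASSNAME = "__classname__"  # assumed serde marker key
VERSION = "__version__"      # assumed serde marker key

def object_hook(dct: dict) -> dict:
    # Only hand the dict to serde if it looks like a serialized payload;
    # plain nested dicts (already plain data) pass through untouched.
    if CLASSNAME in dct and VERSION in dct:
        return deserialize(dct)
    return dct
```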
### What you think should happen instead
Airflow should be able to serialize and deserialize this data without any issue.
### How to reproduce
Refer to https://github.com/apache/airflow/pull/30798.
Run the code below:
```
import pandas as pd

from airflow import DAG
from airflow.decorators import task
from airflow.utils import timezone

with DAG("random-string", start_date=timezone.datetime(2016, 1, 1), catchup=False):

    @task
    def taskA():
        return {"foo": 1, "bar": 2, "baz": pd.DataFrame({"numbers": [1, 2, 3], "Colors": ["red", "white", "blue"]})}

    @task
    def taskB(x):
        print(x)

    v = taskA()
    taskB(v)
```
### Operating System
Mac - ventura
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30797 | https://github.com/apache/airflow/pull/30819 | cbaea573b3658dd941335e21c5f29118b31cb6d8 | 58e26d9df42f10e4e2b46cd26c6832547945789b | "2023-04-21T16:47:31Z" | python | "2023-04-23T10:38:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,796 | ["docs/apache-airflow/authoring-and-scheduling/plugins.rst"] | Tasks forked by the Local Executor are loading stale modules when the modules are also referenced by plugins | ### Apache Airflow version
2.5.3
### What happened
After upgrading from Airflow 2.4.3 to 2.5.3, tasks forked by the `Local Executor` can run with outdated module imports if those modules are also imported by plugins. It seems as though tasks will reuse imports that were first loaded when the scheduler boots, and any subsequent updates to those shared modules do not get reflected in new tasks.
I verified this issue occurs for all patch versions of 2.5.
### What you think should happen instead
Given that the plugin documentation states:
> if you make any changes to plugins and you want the webserver or scheduler to use that new code you will need to restart those processes.
this behavior may be intended. But it's not clear that this also affects the code imported by forked tasks. So if this is not actually a bug, then perhaps the documentation can be updated.
### How to reproduce
Given a plugin file like:
```python
from airflow.models.baseoperator import BaseOperatorLink
from src.custom_operator import CustomOperator
class CustomerOperatorLink(BaseOperatorLink):
operators = [CustomOperator]
```
And a dag file like
```
from src.custom_operator import CustomOperator
...
```
Any updates to the `CustomOperator` will not be reflected in new running tasks after the scheduler boots.
### Operating System
Debian bullseye
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
Workarounds
- Set `execute_tasks_new_python_interpreter` to `False`
- In my case of using Operator Links, I can alternatively set the Operator Link in my custom operator using `operator_extra_links`, which wouldn't require importing the operator from the plugin file.
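A sketch of that second workaround, assuming a standard `BaseOperatorLink` subclass; the class names and URL are placeholders:

```python
from airflow.models.baseoperator import BaseOperator, BaseOperatorLink

class CustomOperatorLink(BaseOperatorLink):
    name = "Custom Link"

    def get_link(self, operator, *, ti_key):
        return "https://example.com"  # placeholder URL

class CustomOperator(BaseOperator):
    # Attach the link on the operator itself, so the plugin file no longer
    # needs to import (and pin a stale copy of) the operator module.
    operator_extra_links = (CustomOperatorLink(),)
```

Depending on the Airflow version, the link class itself may still need to be registered in a plugin's `operator_extra_links` list for serialized DAGs, but that registration does not require importing the operator.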
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30796 | https://github.com/apache/airflow/pull/31781 | ab8c9ec2545caefb232d8e979b18b4c8c8ad3563 | 18f2b35c8fe09aaa8d2b28065846d7cf1e85cae2 | "2023-04-21T15:35:10Z" | python | "2023-06-08T18:50:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,689 | ["airflow/sensors/external_task.py", "tests/sensors/test_external_task_sensor.py"] | ExternalTaskSensor waits forever for TaskGroup containing mapped tasks | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
If you have an `ExternalTaskSensor` that uses `external_task_group_id` to wait on a `TaskGroup`, and if that `TaskGroup` contains any [mapped tasks](https://airflow.apache.org/docs/apache-airflow/2.3.0/concepts/dynamic-task-mapping.html), the sensor will be stuck waiting forever even after the task group is successful.
### What you think should happen instead
`ExternalTaskSensor` should be able to wait on `TaskGroup`s, regardless of whether or not that group contains mapped tasks.
### How to reproduce
```
#!/usr/bin/env python3
import datetime
import logging
from airflow.decorators import dag, task
from airflow.operators.empty import EmptyOperator
from airflow.sensors.external_task import ExternalTaskSensor
from airflow.utils.task_group import TaskGroup
logger = logging.getLogger(__name__)
@dag(
schedule_interval='@daily',
start_date=datetime.datetime(2023, 4, 17),
)
def task_groups():
with TaskGroup(group_id='group'):
EmptyOperator(task_id='operator1') >> EmptyOperator(task_id='operator2')
with TaskGroup(group_id='mapped_tasks'):
@task
def get_tasks():
return [1, 2, 3]
@task
def process(x):
print(x)
process.expand(x=get_tasks())
ExternalTaskSensor(
task_id='wait_for_normal_task_group',
external_dag_id='task_groups',
external_task_group_id='group',
poke_interval=3,
check_existence=True,
)
ExternalTaskSensor(
task_id='wait_for_mapped_task_group',
external_dag_id='task_groups',
external_task_group_id='mapped_tasks',
poke_interval=3,
check_existence=True,
)
task_groups()
```
### Operating System
CentOS Stream 8
### Versions of Apache Airflow Providers
N/A
### Deployment
Other
### Deployment details
Standalone
### Anything else
I think the bug is [here](https://github.com/apache/airflow/blob/731ef3d692fc7472e245f39f3e3e42c2360cb769/airflow/sensors/external_task.py#L364):
```
elif self.external_task_group_id:
external_task_group_task_ids = self.get_external_task_group_task_ids(session)
count = (
self._count_query(TI, session, states, dttm_filter)
.filter(TI.task_id.in_(external_task_group_task_ids))
.scalar()
) / len(external_task_group_task_ids)
```
If the group contains mapped tasks, `external_task_group_task_ids` only contains a list of task names (not expanded to include mapped task indices), but the `count` will count all mapped instances. This returns a larger value than the calling function expects when it checks for `count_allowed == len(dttm_filter)`, so `poke` always returns `False`.
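One hedged illustration of a possible correction, using the same names as the snippet above: count distinct (task_id, execution_date) pairs so that mapped expansion does not inflate the numerator. This is my sketch, not the shipped fix, and it deliberately ignores the question of partially-finished mapped tasks:

```python
# Illustration only: each task in the group counts once per date, no matter
# how many mapped instances it expanded into.
done_pairs = (
    session.query(TI.task_id, TI.execution_date)
    .filter(
        TI.dag_id == self.external_dag_id,
        TI.state.in_(states),
        TI.execution_date.in_(dttm_filter),
        TI.task_id.in_(external_task_group_task_ids),
    )
    .distinct()
    .count()
)
count = done_pairs / len(external_task_group_task_ids)
```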
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30689 | https://github.com/apache/airflow/pull/30742 | ae3a61775a79a3000df0a8bdf50807033f4e3cdc | 3c30e54de3b8a6fe793b0ff1ed8225562779d96c | "2023-04-17T21:23:39Z" | python | "2023-05-18T07:38:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,673 | ["airflow/providers/openlineage/utils/utils.py"] | Open-Lineage type-ignore in OpenLineageRedactor likely hides some problem | ### Body
The new `attrs` package 23.1, released on the 16th of April (11 hours ago), added typing information to the `attrs.asdict` method, and the mypy tests started to fail with
```
airflow/providers/openlineage/utils/utils.py:345: error: Argument 1 to "asdict"
has incompatible type "Type[AttrsInstance]"; expected "AttrsInstance"
[arg-type]
... for dict_key, subval in attrs.asdict(item, recurse=False)....
^
airflow/providers/openlineage/utils/utils.py:345: note: ClassVar protocol member AttrsInstance.__attrs_attrs__ can never be matched by a class object
```
The nature of this error (receiving a Type where an instance is expected) indicates that there is a somewhat serious issue here.
Especially since there is already a `type: ignore` one line above, which would indicate that something is quite wrong here (when we ignore a typing issue, we usually comment why and ignore a very specific error, e.g. `type: ignore[attr-undefined]`, and only when we have a good reason to ignore it).
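For reference, a tiny self-contained example of the distinction mypy is complaining about (nothing OpenLineage-specific):

```python
import attrs

@attrs.define
class ExampleFacet:
    name: str

attrs.asdict(ExampleFacet(name="x"))  # fine: an attrs *instance*
# attrs.asdict(ExampleFacet)          # what mypy flags: a class object, not an instance
```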
Since open-lineage is not yet released/functional and partially in progress, this is not an issue to be solved immediately, but soon (cc: @mobuchowski).
For now I am working around this by adding another `type: ignore` to stop the static checks from failing (they fail only for PRs that are updating dependencies) and to allow upgrading to attrs 23.1.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/30673 | https://github.com/apache/airflow/pull/30677 | 2557c07aa5852d415a647679180d4dbf81a5d670 | 6a6455ad1c2d76eaf9c60814c2b0a0141ad29da0 | "2023-04-16T21:32:52Z" | python | "2023-04-17T13:56:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,635 | ["airflow/providers/google/cloud/operators/bigquery.py"] | `BigQueryGetDataOperator` does not respect project_id parameter | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==8.11.0
google-cloud-bigquery==2.34.4
### Apache Airflow version
2.5.2+astro.2
### Operating System
OSX
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
When setting the `project_id` parameter for `BigQueryGetDataOperator`, the default project from the environment is not overridden. Maybe something broke after the parameter was added in https://github.com/apache/airflow/pull/25782?
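For illustration, a hedged sketch of the kind of change that would make the parameter win. It assumes `BigQueryHook.get_schema` accepts a `project_id` keyword (the traceback further down suggests the hook falls back to a default project when none is passed), so treat the exact call as an assumption:

```python
# Sketch only: forward the operator's project_id to the hook call instead of
# letting the hook fall back to the connection/environment default.
schema = hook.get_schema(
    dataset_id=self.dataset_id,
    table_id=self.table_id,
    project_id=self.project_id,
)
```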
### What you think should happen instead
Passing `project_id` in as a parameter should take precedence over reading it from the environment.
### How to reproduce
Part1
```py
from airflow.providers.google.cloud.operators.bigquery import BigQueryGetDataOperator
bq = BigQueryGetDataOperator(
task_id=f"my_test_query_task_id",
gcp_conn_id="bigquery",
table_id="mytable",
dataset_id="mydataset",
project_id="my_non_default_project",
)
f2 = bq.execute(None)
```
In the environment I have set:
```py
AIRFLOW_CONN_BIGQUERY=gcpbigquery://
GOOGLE_CLOUD_PROJECT=my_primary_project
GOOGLE_APPLICATION_CREDENTIALS=/usr/local/airflow/gcloud/application_default_credentials.json
```
The credentials JSON file doesn't have a project set.
Part2
Unsetting GOOGLE_CLOUD_PROJECT and rerunning results in
```sh
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.9/site-packages/airflow/providers/google/cloud/operators/bigquery.py", line 886, in execute
schema: dict[str, list] = hook.get_schema(
File "/usr/local/lib/python3.9/site-packages/airflow/providers/google/common/hooks/base_google.py", line 463, in inner_wrapper
raise AirflowException(
airflow.exceptions.AirflowException: The project id must be passed either as keyword project_id parameter or as project_id extra in Google Cloud connection definition. Both are not set!
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30635 | https://github.com/apache/airflow/pull/30651 | d3aeb4db0c539f2151ef395300cb2b5efc6dce08 | 4eab616e9f0a89c1a6268d5b5eaba526bfa9be6d | "2023-04-14T01:00:24Z" | python | "2023-04-15T00:39:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,613 | ["airflow/providers/amazon/aws/hooks/base_aws.py", "tests/providers/amazon/aws/hooks/test_dynamodb.py"] | DynamoDBHook - not able to registering a custom waiter | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon=7.4.1
### Apache Airflow version
airflow=2.5.3
### Operating System
Mac
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
We can register a custom waiter by adding a JSON file to the path `airflow/airflow/providers/amazon/aws/waiters/`. The file should be named `<client_type>.json`, in this case `dynamodb.json`. Once registered, we can use the custom waiter.
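For context, once registered the waiter would be used roughly like this, assuming the provider's generic `get_waiter` helper (a sketch; the keyword arguments must match whatever API operation the waiter polls, and the ARN below is a made-up placeholder):

```python
from airflow.providers.amazon.aws.hooks.dynamodb import DynamoDBHook

hook = DynamoDBHook(aws_conn_id=None)
waiter = hook.get_waiter("export_table")
# Placeholder kwargs; they must match the polled operation's parameters.
waiter.wait(ExportArn="arn:aws:dynamodb:eu-west-3:000000000000:table/my-table/export/0001")
```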
Content of the file `airflow/airflow/providers/amazon/aws/waiters/dynamodb.json`:
```
{
    "version": 2,
    "waiters": {
        "export_table": {
            "operation": "ExportTableToPointInTime",
            "delay": 30,
            "maxAttempts": 60,
            "acceptors": [
                {
                    "matcher": "path",
                    "expected": "COMPLETED",
                    "argument": "ExportDescription.ExportStatus",
                    "state": "success"
                },
                {
                    "matcher": "path",
                    "expected": "FAILED",
                    "argument": "ExportDescription.ExportStatus",
                    "state": "failure"
                },
                {
                    "matcher": "path",
                    "expected": "IN_PROGRESS",
                    "argument": "ExportDescription.ExportStatus",
                    "state": "retry"
                }
            ]
        }
    }
}
```
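For context, once a waiter like this is registered, the intent is to consume it through the hook's generic waiter lookup. The snippet below is only an illustrative sketch (the connection settings are whatever your environment uses) and is not part of the failing test:

```python
from airflow.providers.amazon.aws.hooks.dynamodb import DynamoDBHook

hook = DynamoDBHook(aws_conn_id=None)
print(hook.list_waiters())  # should include "export_table" once registration works

waiter = hook.get_waiter("export_table")
# waiter.wait(...) would then poll the operation configured in the JSON above,
# with keyword arguments matching that API call.
```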
I get the error below after running this test case:
```
class TestCustomDynamoDBServiceWaiters:
    """Test waiters from ``amazon/aws/waiters/dynamodb.json``."""

    STATUS_COMPLETED = "COMPLETED"
    STATUS_FAILED = "FAILED"
    STATUS_IN_PROGRESS = "IN_PROGRESS"

    @pytest.fixture(autouse=True)
    def setup_test_cases(self, monkeypatch):
        self.client = boto3.client("dynamodb", region_name="eu-west-3")
        monkeypatch.setattr(DynamoDBHook, "conn", self.client)

    @pytest.fixture
    def mock_export_table_to_point_in_time(self):
        """Mock ``DynamoDBHook.Client.export_table_to_point_in_time`` method."""
        with mock.patch.object(self.client, "export_table_to_point_in_time") as m:
            yield m

    def test_service_waiters(self):
        assert os.path.exists('/Users/utkarsharma/sandbox/airflow-sandbox/airflow/airflow/providers/amazon/aws/waiters/dynamodb.json')
        hook_waiters = DynamoDBHook(aws_conn_id=None).list_waiters()
        assert "export_table" in hook_waiters
```
## Error
```
tests/providers/amazon/aws/waiters/test_custom_waiters.py:273 (TestCustomDynamoDBServiceWaiters.test_service_waiters)
'export_table' != ['table_exists', 'table_not_exists']

Expected :['table_exists', 'table_not_exists']
Actual   :'export_table'
<Click to see difference>

self = <tests.providers.amazon.aws.waiters.test_custom_waiters.TestCustomDynamoDBServiceWaiters object at 0x117f085e0>

    def test_service_waiters(self):
        assert os.path.exists('/Users/utkarsharma/sandbox/airflow-sandbox/airflow/airflow/providers/amazon/aws/waiters/dynamodb.json')
        hook_waiters = DynamoDBHook(aws_conn_id=None).list_waiters()
>       assert "export_table" in hook_waiters
E       AssertionError: assert 'export_table' in ['table_exists', 'table_not_exists']

test_custom_waiters.py:277: AssertionError
```
### What you think should happen instead
It should register the custom waiter and the test case should pass.
### How to reproduce
Add the file mentioned above to Airflow's code base and try running the test case provided.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30613 | https://github.com/apache/airflow/pull/30595 | cb5a2c56b99685305eecdd3222b982a1ef668019 | 7c2d3617bf1be0781e828d3758ee6d9c6490d0f0 | "2023-04-13T04:27:21Z" | python | "2023-04-14T16:43:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,600 | ["airflow/dag_processing/manager.py", "airflow/models/dag.py", "tests/dag_processing/test_job_runner.py"] | DAGs deleted from zips aren't deactivated | ### Apache Airflow version
2.5.3
### What happened
When a DAG is removed from a zip in the DAGs directory but the zip file remains, the DAG is not correctly marked as inactive. It is still visible in the UI, and attempting to open it results in a `DAG "mydag" seems to be missing from DagBag.` error in the UI.
The DAG is removed from the SerializedDag table, resulting in the scheduler repeatedly erroring with `[2023-04-12T12:43:51.165+0000] {scheduler_job.py:1063} ERROR - DAG 'mydag' not found in serialized_dag table`.
I have done some minor investigating and it appears that [this piece of code](https://github.com/apache/airflow/blob/2.5.3/airflow/dag_processing/manager.py#L748-L772) may be the cause.
`dag_filelocs` provides the path to a specific python file within a zip, so `SerializedDagModel.remove_deleted_dags` is able to remove the missing DAG.
However, `self._file_paths` only contains the top-level zip name, so `DagModel.deactivate_deleted_dags` will only deactivate DAGs where the zip they are contained in is deleted, regardless of whether the DAG is still inside the zip.
I can see there are [other methods that handle DAG deactivation](https://github.com/apache/airflow/blob/2.5.3/airflow/models/dag.py#L2945-L2968) and I'm not sure how these all interact but this does seem to cause this specific issue.
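To make the mismatch concrete, here is a purely illustrative example (the paths are invented for illustration) of the two shapes of path involved:

```python
# Per-DAG file locations point *inside* the zip...
dag_filelocs = [
    "/opt/airflow/dags/my_dags.zip/dag_a.py",
    "/opt/airflow/dags/my_dags.zip/dag_b.py",
]
# ...while the processor's file paths stop at the zip itself.
file_paths = ["/opt/airflow/dags/my_dags.zip"]

# A deactivation check keyed only on file_paths cannot tell that dag_b.py was
# removed from inside the archive, because the zip path itself still exists.
```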
### What you think should happen instead
DAGS that are no longer in the DagBag are marked as inactive
### How to reproduce
Running airflow locally with docker-compose:
- Create a zipfile with 2 DAG py files in in ./dags
- Wait for the DAGs to be parsed by the scheduler and appear in the UI
- Overwrite the existing DAG zip, with a new zip containing only 1 of the original DAG py files
- Wait for scheduler loop to parse the new zip
- Attempt to open the removed DAG in the UI, you will see an error
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
If I replace the docker image in the docker compose with an image built from this Dockerfile:
```
FROM apache/airflow:2.5.3
RUN sed -i '772s/self._file_paths/dag_filelocs/' /home/airflow/.local/lib/python3.7/site-packages/airflow/dag_processing/manager.py
RUN sed -i '3351s/correct_maybe_zipped(dag_model.fileloc)/dag_model.fileloc/' /home/airflow/.local/lib/python3.7/site-packages/airflow/models/dag.py
```
The DAG is deactivated as expected
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30600 | https://github.com/apache/airflow/pull/30608 | 0f3b6579cb67d3cf8bd9fa8f9abd502fc774201a | 7609021ce93d61f2101f5e8cdc126bb8369d334b | "2023-04-12T14:05:38Z" | python | "2023-04-13T04:10:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,593 | ["airflow/jobs/dag_processor_job_runner.py"] | After upgrading to 2.5.3, Dag Processing time increased dramatically. | ### Apache Airflow version
2.5.3
### What happened
I upgraded my airflow cluster from 2.5.2 to 2.5.3 , after which strange things started happening.
I'm currently using a standalone dagProcessor, and the parsing time that used to take about 2 seconds has suddenly increased to about 10 seconds.
It seems odd because I haven't made any changes other than the version upgrade, but is there something I can look into? Thanks in advance! 🙇🏼
![image](https://user-images.githubusercontent.com/16011260/231323427-e0d95506-c752-4a2b-93fc-9880b18814f3.png)
### What you think should happen instead
I believe that the time it takes to parse a Dag should be constant, or at least have some variability, but shouldn't take as long as it does now.
### How to reproduce
If you cherrypick [this commit](https://github.com/apache/airflow/pull/30079) into 2.5.2 stable code, the issue will recur.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
- Kubernetes 1.21 Cluster
- 1.7.0 helm chart
- standalone dag processor
- using kubernetes executor
- using mysql database
### Anything else
This issue still persists, and restarting the Dag Processor has not resolved the issue.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30593 | https://github.com/apache/airflow/pull/30899 | 7ddad1a24b1664cef3827b06d9c71adbc558e9ef | 00ab45ffb7dee92030782f0d1496d95b593fd4a7 | "2023-04-12T01:28:37Z" | python | "2023-04-27T11:27:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,562 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/utils/db.py", "tests/utils/test_db.py"] | alembic Logging | ### Apache Airflow version
2.5.3
### What happened
When I call the airflow initdb function, it outputs these lines to the log
```
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
```
### What you think should happen instead
There should be a mechanism to disable these logs, or they should just be set to WARN by default
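As a possible interim workaround (just standard Python logging, not an official Airflow setting), the alembic logger can be raised to WARNING before calling `initdb`:

```python
import logging

from airflow.utils.db import initdb

logging.getLogger("alembic").setLevel(logging.WARNING)
initdb()
```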
### How to reproduce
Set up a new postgres connection and call:
```python
from airflow.utils.db import initdb

initdb()
```
### Operating System
MacOS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30562 | https://github.com/apache/airflow/pull/31415 | c5597d1fabe5d8f3a170885f6640344d93bf64bf | e470d784627502f171819fab072e0bbab4a05492 | "2023-04-10T11:25:58Z" | python | "2023-05-23T01:33:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,504 | ["airflow/providers/microsoft/azure/operators/data_factory.py"] | Pipeline run URL is empty for AzureDataFactoryRunPipelineOperator | ### Apache Airflow Provider(s)
microsoft-azure
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==7.4.0
apache-airflow-providers-common-sql==1.4.0
apache-airflow-providers-elasticsearch==4.4.0
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-http==4.3.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-microsoft-azure==5.3.0
apache-airflow-providers-microsoft-mssql==3.3.2
apache-airflow-providers-microsoft-winrm==3.1.1
apache-airflow-providers-odbc==3.2.1
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-salesforce==5.3.0
apache-airflow-providers-sftp==4.2.4
apache-airflow-providers-sqlite==3.3.1
apache-airflow-providers-ssh==3.6.0
```
### Apache Airflow version
2.5.3
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Virtualenv installation
### Deployment details
- Using PostgreSQL 14.7
### What happened
Starting with `apache-airflow-providers-microsoft-azure==5.0.0`, the `get_link()` function doesn't return a URL value for the `AzureDataFactoryRunPipelineOperator` operator, due to a class instance check introduced in commit [78b8ea2f22](https://github.com/apache/airflow/commit/78b8ea2f22239db3ef9976301234a66e50b47a94).
**Web server log :**
```
{{data_factory.py:52}} INFO - The <class 'airflow.serialization.serialized_objects.SerializedBaseOperator'> is not <class 'airflow.providers.microsoft.azure.operators.data_factory.AzureDataFactoryRunPipelineOperator'> class.
```
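Rough illustration of the failure mode (this is not the provider's exact code): when the webserver resolves extra links, the task it gets back is a `SerializedBaseOperator`, so a strict `isinstance` check against the concrete operator class never passes and the link ends up empty:

```python
from airflow.providers.microsoft.azure.operators.data_factory import (
    AzureDataFactoryRunPipelineOperator,
)


def get_link(operator) -> str:
    # Sketch only: a serialized operator fails this check even though it represents
    # an AzureDataFactoryRunPipelineOperator task.
    if not isinstance(operator, AzureDataFactoryRunPipelineOperator):
        return ""
    return "https://adf.azure.com/..."  # placeholder for the real monitoring URL
```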
### What you think should happen instead
A URL link should be generated during the run of the `AzureDataFactoryRunPipelineOperator`.
### How to reproduce
**DAG used to reproduce the problem :**
```
from airflow import DAG
from datetime import datetime, timedelta
from airflow.providers.microsoft.azure.operators.data_factory import AzureDataFactoryRunPipelineOperator
from airflow.models import Variable
import os

os.environ["HTTP_PROXY"] = "xxxx"
os.environ["HTTPS_PROXY"] = "xxxx"

with DAG(
    dag_id='azure_data_factory',
    default_args={
        'owner': 'airflow',
        'depends_on_past': False,
        'email_on_failure': False,
        'email_on_retry': False,
        'retries': 0,
        'retry_delay': timedelta(minutes=5),
    },
    start_date=datetime(2023, 1, 1),
    schedule=None,
    max_active_runs=1,
    catchup=False,
) as dag:

    run_test_pipeline = AzureDataFactoryRunPipelineOperator(
        task_id="run_test_pipeline",
        azure_data_factory_conn_id=Variable.get("ADF_CONNECTION_NAME"),
        pipeline_name=Variable.get("ADF_PIPELINE_NAME_DEMO"),
        wait_for_termination=True,
        check_interval=30
    )

    run_test_pipeline
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30504 | https://github.com/apache/airflow/pull/30514 | 12cafbe5c31f953641d1b406cbf99551aff6412c | a09fd0d121476964f1c9d7f12960c24517500d2c | "2023-04-06T12:12:03Z" | python | "2023-04-08T15:39:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,465 | ["dev/breeze/src/airflow_breeze/commands/main_command.py"] | Error running `breeze setup regenerate-command-images` | after running `pre-commit run` , it asked me to run `breeze setup regenerate-command-images` but while running it I got below errors
```
PS C:\pycharm\codes\airflow-repo-clone\airflow> breeze setup regenerate-command-images
Traceback (most recent call last):
File "C:\pycharm\codes\airflow-repo-clone\airflow\dev\breeze\src\airflow_breeze\commands\main_command.py", line 122, in check_for_python_emulation
system_machine = subprocess.check_output(["uname", "-m"], text=True).strip()
File "C:\python3.10\setup\lib\subprocess.py", line 420, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "C:\python3.10\setup\lib\subprocess.py", line 501, in run
with Popen(*popenargs, **kwargs) as process:
File "C:\python3.10\setup\lib\subprocess.py", line 966, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\python3.10\setup\lib\subprocess.py", line 1435, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] The system cannot find the file specified
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\python3.10\setup\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\python3.10\setup\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "c:\users\anand\.local\bin\breeze.exe\__main__.py", line 7, in <module>
File "C:\Users\anand\.local\pipx\venvs\apache-airflow-breeze\lib\site-packages\click\core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "C:\Users\anand\.local\pipx\venvs\apache-airflow-breeze\lib\site-packages\rich_click\rich_group.py", line 21, in main
rv = super().main(*args, standalone_mode=False, **kwargs)
File "C:\Users\anand\.local\pipx\venvs\apache-airflow-breeze\lib\site-packages\click\core.py", line 1055, in main
rv = self.invoke(ctx)
File "C:\Users\anand\.local\pipx\venvs\apache-airflow-breeze\lib\site-packages\click\core.py", line 1654, in invoke
super().invoke(ctx)
File "C:\Users\anand\.local\pipx\venvs\apache-airflow-breeze\lib\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "C:\Users\anand\.local\pipx\venvs\apache-airflow-breeze\lib\site-packages\click\core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "C:\Users\anand\.local\pipx\venvs\apache-airflow-breeze\lib\site-packages\click\decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "C:\pycharm\codes\airflow-repo-clone\airflow\dev\breeze\src\airflow_breeze\commands\main_command.py", line 114, in main
check_for_python_emulation()
File "C:\pycharm\codes\airflow-repo-clone\airflow\dev\breeze\src\airflow_breeze\commands\main_command.py", line 146, in check_for_python_emulation
except TimeoutOccurred:
UnboundLocalError: local variable 'TimeoutOccurred' referenced before assignment
```
_Originally posted by @rohan472000 in https://github.com/apache/airflow/issues/30405#issuecomment-1496414377_
| https://github.com/apache/airflow/issues/30465 | https://github.com/apache/airflow/pull/30464 | 112d4d663e89343a4669f6001131581313e7c82b | 56ff116ab3a005a07d62adbb7a0bdc0443cf2b85 | "2023-04-04T18:52:44Z" | python | "2023-04-04T20:03:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,414 | ["airflow/www/views.py", "tests/www/views/test_views_tasks.py"] | Cannot clear tasking instances on "List Task Instance" page with User role | ### Apache Airflow version
main (development)
### What happened
Only users with the role `Admin` are allowed to use the action clear on the TaskInstance list view.
### What you think should happen instead
Users with the `User` role should be able to clear task instances on the Task Instance page.
### How to reproduce
Try to clear a task instance while logged in as a user with the `User` role.
### Operating System
Fedora 37
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30414 | https://github.com/apache/airflow/pull/30415 | 22bef613678e003dde9128ac05e6c45ce934a50c | b140c4473335e4e157ff2db85148dd120c0ed893 | "2023-04-01T11:20:33Z" | python | "2023-04-22T17:10:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,407 | [".github/workflows/ci.yml", "BREEZE.rst", "dev/breeze/src/airflow_breeze/commands/testing_commands.py", "dev/breeze/src/airflow_breeze/commands/testing_commands_config.py", "dev/breeze/src/airflow_breeze/utils/selective_checks.py", "images/breeze/output-commands-hash.txt", "images/breeze/output_testing_tests.svg"] | merge breeze's --test-type and --test-types options | ### Description
Using `breeze testing tests` recently, I noticed that the way to specify which tests to run is very confusing:
* `--test-type` supports specifying one type only (or `All`), allows specifying which provider tests to run in detail, and is ignored if `--run-in-parallel` is provided (from what I saw)
* `--test-types` (note the `s` at the end) supports a list of types, does not allow selecting specific provider tests, and is ignored if `--run-in-parallel` is NOT specified.
I _think_ that the two are mutually exclusive (i.e. there is no situation where one is taken into account and the other isn’t ignored), so it’d make sense to merge them.
Definition of Done:
- --test-type or --test-types can be used interchangeably, whether the tests are running in parallel or not (it'd be a bit like how `kubectl` allows using singular or plural for some actions, like `k get pod` == `k get pods`)
- When using the type `Providers`, specific provider tests can be selected between square brackets using the current syntax (e.g. `Providers[airbyte,http]`)
- several types can be specified, separated by a space (e.g. `"WWW CLI"`)
- the two bullet points above can be combined (e.g. `--test-type "Always Providers[airbyte,http] WWW"`)
### Use case/motivation
having a different behavior for a very similar option depending on whether we are running in parallel or not is confusing, and from a user perspective, there is no benefit to having those as separate options.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30407 | https://github.com/apache/airflow/pull/30424 | 90ba6fe070d903bca327b52b2f61468408d0d96a | 20606438c27337c20aa9aff8397dfa6f286f03d3 | "2023-03-31T22:12:56Z" | python | "2023-04-04T11:30:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,400 | ["airflow/executors/kubernetes_executor.py"] | ERROR - Unknown error in KubernetesJobWatcher | ### Official Helm Chart version
1.7.0
### Apache Airflow version
2.4.0
### Kubernetes Version
K3s Kubernetes Version: v1.24.2+k3s2
### Helm Chart configuration
_No response_
### Docker Image customizations
_No response_
### What happened
Same as https://github.com/apache/airflow/issues/12229
```
[2023-03-31T17:47:08.304+0000] {kubernetes_executor.py:112} ERROR - Unknown error in KubernetesJobWatcher. Failing
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/executors/kubernetes_executor.py", line 103, in run
self.resource_version = self._run(
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/executors/kubernetes_executor.py", line 148, in _run
for event in list_worker_pods():
File "/home/airflow/.local/lib/python3.10/site-packages/kubernetes/watch/watch.py", line 182, in stream
raise client.rest.ApiException(
kubernetes.client.exceptions.ApiException: (410)
Reason: Expired: too old resource version: 202541353 (202544371)
Process KubernetesJobWatcher-3:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/executors/kubernetes_executor.py", line 103, in run
self.resource_version = self._run(
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/executors/kubernetes_executor.py", line 148, in _run
for event in list_worker_pods():
File "/home/airflow/.local/lib/python3.10/site-packages/kubernetes/watch/watch.py", line 182, in stream
raise client.rest.ApiException(
kubernetes.client.exceptions.ApiException: (410)
Reason: Expired: too old resource version: 202541353 (202544371)
[2023-03-31T17:47:08.832+0000] {kubernetes_executor.py:291} ERROR - Error while health checking kube watcher process. Process died for unknown reasons
```
### What you think should happen instead
no errors in the logs?
### How to reproduce
appears soon after AF-scheduler pod restart
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30400 | https://github.com/apache/airflow/pull/30425 | cce9b2217b86a88daaea25766d0724862577cc6c | 9e5fabecb05e83700688d940d31a0fbb49000d64 | "2023-03-31T18:13:24Z" | python | "2023-04-13T13:56:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,382 | ["airflow/providers/amazon/aws/transfers/sql_to_s3.py", "docs/apache-airflow-providers-amazon/transfer/sql_to_s3.rst", "tests/providers/amazon/aws/transfers/test_sql_to_s3.py", "tests/system/providers/amazon/aws/example_sql_to_s3.py"] | SqlToS3Operator not able to write data with partition_cols provided. | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
I am using the standard operator version which comes with apache/airflow:2.5.2.
### Apache Airflow version
2.5.2
### Operating System
Ubuntu 22.04.2 LTS
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
I have used a simple docker compose setup and am using the same locally.
### What happened
I am using SqlToS3Operator in my DAG and need to store the data using partition columns. The operator writes the data to a temporary file, but in my case it needs to be a folder. I am getting the error below.
```
[2023-03-31, 03:47:57 UTC] {sql_to_s3.py:175} INFO - Writing data to temp file
[2023-03-31, 03:47:57 UTC] {taskinstance.py:1775} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/transfers/sql_to_s3.py", line 176, in execute
getattr(data_df, file_options.function)(tmp_file.name, **self.pd_kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/pandas/util/_decorators.py", line 207, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/pandas/core/frame.py", line 2685, in to_parquet
**kwargs,
File "/home/airflow/.local/lib/python3.7/site-packages/pandas/io/parquet.py", line 423, in to_parquet
**kwargs,
File "/home/airflow/.local/lib/python3.7/site-packages/pandas/io/parquet.py", line 190, in write
**kwargs,
File "/home/airflow/.local/lib/python3.7/site-packages/pyarrow/parquet/__init__.py", line 3244, in write_to_dataset
max_rows_per_group=row_group_size)
File "/home/airflow/.local/lib/python3.7/site-packages/pyarrow/dataset.py", line 989, in write_dataset
min_rows_per_group, max_rows_per_group, create_dir
File "pyarrow/_dataset.pyx", line 2775, in pyarrow._dataset._filesystemdataset_write
File "pyarrow/error.pxi", line 113, in pyarrow.lib.check_status
NotADirectoryError: [Errno 20] Cannot create directory '/tmp/tmp3z4dpv_p.parquet/application_createdAt=2020-06-05 11:47:44.000000000'. Detail: [errno 20] Not a directory
```
### What you think should happen instead
The operator should support partition columns as well.
### How to reproduce
I am using the code snippet below:
```
sql_to_s3_task = SqlToS3Operator(
    task_id="sql_to_s3_task",
    sql_conn_id="mysql_con",
    query=sql,
    s3_bucket=Variable.get("AWS_S3_BUCKET"),
    aws_conn_id="aws_con",
    file_format="parquet",
    s3_key="Fact_applications",
    pd_kwargs={
        "partition_cols": ['application_createdAt']
    },
    replace=True,
)
```
This can be used to reproduce the issue.
### Anything else
I believe [this](https://github.com/apache/airflow/blob/6e751812d2e48b743ae1bc375e3bebb8414b4a0e/airflow/providers/amazon/aws/transfers/sql_to_s3.py#L173) logic should be updated to handle partitioned output.
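One possible direction for that change, sketched outside the provider (`upload_file` below stands in for whatever S3 upload callable is available; the names are illustrative, not the operator's actual code):

```python
import os
import tempfile

import pandas as pd


def upload_partitioned(df: pd.DataFrame, upload_file, s3_prefix: str, pd_kwargs: dict) -> None:
    """Write partitioned parquet to a temporary *directory* and upload every file it produces."""
    with tempfile.TemporaryDirectory() as tmp_dir:
        # With partition_cols in pd_kwargs, pandas/pyarrow create one folder per partition value.
        df.to_parquet(tmp_dir, **pd_kwargs)
        for root, _, files in os.walk(tmp_dir):
            for name in files:
                local_path = os.path.join(root, name)
                rel_key = os.path.relpath(local_path, tmp_dir)
                upload_file(local_path, f"{s3_prefix}/{rel_key}")
```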
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30382 | https://github.com/apache/airflow/pull/30460 | 372a0881d9591f6d69105b1ab6709f5f42560fb6 | d7cef588d6f6a749bd5e8fbf3153a275f4120ee8 | "2023-03-31T04:14:10Z" | python | "2023-04-18T23:19:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,365 | ["airflow/cli/cli_config.py", "airflow/cli/commands/dag_command.py", "tests/cli/commands/test_dag_command.py"] | Need an REST API or/and Airflow CLI to fetch last parsed time of a given DAG | ### Description
We need to access the time at which a given DAG was parsed last.
Airflow Version : 2.2.2 and above.
### Use case/motivation
End users want to run a given DAG after applying their changes to it. This means the DAG should be re-parsed after the edits are made. Right now the last parsed time is only available by querying the Airflow database, and querying the database directly is not the best solution to the problem. Ideally, Airflow should expose an API that end users can consume to get the last parsed time for a given DAG.
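For what it's worth, newer Airflow releases expose a `last_parsed_time` field on the DAG resource of the stable REST API; whether it is available on 2.2.x would need checking, so treat the endpoint and field below as assumptions rather than guarantees (host, credentials, and dag_id are placeholders):

```python
import requests

resp = requests.get(
    "http://localhost:8080/api/v1/dags/my_dag",  # hypothetical host and dag_id
    auth=("admin", "admin"),
)
resp.raise_for_status()
print(resp.json().get("last_parsed_time"))
```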
### Related issues
Not aware of any.
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30365 | https://github.com/apache/airflow/pull/30432 | c5b685e88dd6ecf56d96ef4fefa6c409f28e2b22 | 7074167d71c93b69361d24c1121adc7419367f2a | "2023-03-30T08:34:47Z" | python | "2023-04-14T17:14:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,341 | ["airflow/providers/amazon/aws/transfers/s3_to_redshift.py", "tests/providers/amazon/aws/transfers/test_s3_to_redshift.py"] | S3ToRedshiftOperator does not support default values on upsert | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
7.2.1
### Apache Airflow version
2.5.1
### Operating System
Ubuntu
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
I am trying to use the `S3ToRedshiftOperator` to copy data into an existing table which has a column defined as non-null with default.
The copy fails with the following error:
```
redshift_connector.error.ProgrammingError: {'S': 'ERROR', 'C': '42601', 'M': 'NOT NULL column without DEFAULT must be included in column list', 'F': '../src/pg/src/backend/commands/commands_copy.c', 'L': '2727', 'R': 'DoTheCopy'}
```
This is happening because when using the `UPSERT` method, the operator first creates a temporary table with this statement ([here](https://github.com/apache/airflow/blob/47cf233ccd612a68bea1ad3898f06b91c63c1964/airflow/providers/amazon/aws/transfers/s3_to_redshift.py#L173)):
```
CREATE TABLE #bar (LIKE foo.bar);
```
And then attempts to copy data into this temporary table. By default, `CREATE TABLE ... LIKE` does not include default values: _The default behavior is to exclude default expressions, so that all columns of the new table have null defaults._ (from the [docs](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_TABLE_NEW.html)).
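For reference, Redshift's `CREATE TABLE ... LIKE` accepts an `INCLUDING DEFAULTS` modifier, so one way the staging statement could carry the defaults over looks like this (an illustrative sketch, not the operator's current code):

```python
def build_staging_table_sql(schema: str, table: str) -> str:
    # INCLUDING DEFAULTS copies the column default expressions into the temp table,
    # so NOT NULL DEFAULT columns no longer break the COPY into it.
    return f"CREATE TABLE #{table} (LIKE {schema}.{table} INCLUDING DEFAULTS);"


print(build_staging_table_sql("foo", "bar"))
# CREATE TABLE #bar (LIKE foo.bar INCLUDING DEFAULTS);
```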
### What you think should happen instead
We should be able to include default values when creating the temporary table.
### How to reproduce
* Create a table with a column defined as non-null with default value
* Use the operator to copy data into it using the `UPSERT` method
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30341 | https://github.com/apache/airflow/pull/32558 | bf68e1060b0214ee195c61f9d7be992161e25589 | 145b16caaa43f0c42bffd97344df916c602cddde | "2023-03-28T06:30:34Z" | python | "2023-07-13T06:29:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,335 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/config_templates/default_celery.py", "tests/executors/test_celery_executor.py"] | Reccomend (or set as default) to enable pool_recycle for celery workers (especially if using MySQL) | ### What do you see as an issue?
Similar to how `sql_alchemy_pool_recycle` defaults to 1800 seconds for the Airflow metastore: https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#config-database-sql-alchemy-pool-recycle
If users are using Celery as their backend, setting `pool_recycle` provides extra stability. This problem is particularly acute for users who are using MySQL as the Celery backend, because MySQL disconnects connections after 8 hours of being idle. While Airflow can usually force Celery to retry connecting, it does not always work and tasks can fail.
This is specifically recommended by the SQLAlchemy docs:
* https://docs.sqlalchemy.org/en/14/core/pooling.html#setting-pool-recycle
* https://docs.sqlalchemy.org/en/14/core/engines.html#sqlalchemy.create_engine.params.pool_recycle
* https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_wait_timeout
### Solving the problem
We currently have a file which looks like this:
```python
from airflow.config_templates.default_celery import DEFAULT_CELERY_CONFIG
database_engine_options = DEFAULT_CELERY_CONFIG.get(
    "database_engine_options", {}
)
# Use pool_pre_ping to detect stale db connections
# https://github.com/apache/airflow/discussions/22113
database_engine_options["pool_pre_ping"] = True
# Use pool recyle due to MySQL disconnecting sessions after 8 hours
# https://docs.sqlalchemy.org/en/14/core/pooling.html#setting-pool-recycle
# https://docs.sqlalchemy.org/en/14/core/engines.html#sqlalchemy.create_engine.params.pool_recycle
# https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_wait_timeout
database_engine_options["pool_recycle"] = 1800
DEFAULT_CELERY_CONFIG["database_engine_options"] = database_engine_options
```
And we point the env var `AIRFLOW__CELERY__CELERY_CONFIG_OPTIONS` at this object. Not sure if this is best practice?
### Anything else
Maybe just change the default options to include this?
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30335 | https://github.com/apache/airflow/pull/30426 | cb18d923f8253ac257c1b47e9276c39bae967666 | bc1d68a6eb01919415c399d678f491e013eb9238 | "2023-03-27T16:31:21Z" | python | "2023-06-02T14:16:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,324 | ["airflow/providers/cncf/kubernetes/CHANGELOG.rst", "airflow/providers/cncf/kubernetes/operators/pod.py", "airflow/providers/cncf/kubernetes/provider.yaml", "airflow/providers/cncf/kubernetes/utils/pod_manager.py", "kubernetes_tests/test_kubernetes_pod_operator.py", "tests/providers/cncf/kubernetes/decorators/test_kubernetes.py", "tests/providers/cncf/kubernetes/operators/test_pod.py", "tests/providers/cncf/kubernetes/utils/test_pod_manager.py"] | KPO deferrable needs kubernetes_conn_id while non deferrable does not | ### Apache Airflow version
2.5.2
### What happened
Not sure if this is a feature and not a bug, but I can use KubernetesPodOperator fine without setting a kubernetes_conn_id.
For example:
```
start = KubernetesPodOperator(
    namespace="mynamespace",
    cluster_context="mycontext",
    security_context={ 'runAsUser': 1000 },
    name="hello",
    image="busybox",
    image_pull_secrets=[k8s.V1LocalObjectReference('prodregistry')],
    cmds=["sh", "-cx"],
    arguments=["echo Start"],
    task_id="Start",
    in_cluster=False,
    is_delete_operator_pod=True,
    config_file="/home/airflow/.kube/config",
)
```
But if I add `deferrable=True` to this, it won't work. It seems to require an explicit `kubernetes_conn_id` (which we don't configure).
Is it not possible for the deferrable version to work like the non-deferrable one?
### What you think should happen instead
I hoped that the deferrable KPO would work the same as the non-deferrable one.
### How to reproduce
Use KPO with `deferrable=True` but no `kubernetes_conn_id` setting, as sketched below.
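A minimal sketch of that reproduce step (same settings as the working task above, with only the deferrable flag added; nothing else is configured):

```python
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

start = KubernetesPodOperator(
    task_id="Start",
    name="hello",
    image="busybox",
    cmds=["sh", "-cx"],
    arguments=["echo Start"],
    namespace="mynamespace",
    in_cluster=False,
    config_file="/home/airflow/.kube/config",
    is_delete_operator_pod=True,
    deferrable=True,  # the only change compared to the non-deferrable task
)
```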
### Operating System
Debian 11
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30324 | https://github.com/apache/airflow/pull/28848 | a09fd0d121476964f1c9d7f12960c24517500d2c | 85b9135722c330dfe1a15e50f5f77f3d58109a52 | "2023-03-27T09:59:56Z" | python | "2023-04-08T16:26:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,309 | ["airflow/providers/docker/hooks/docker.py", "airflow/providers/docker/operators/docker.py", "tests/providers/docker/operators/test_docker.py"] | in DockerOperator, adding an attribute `tls_verify` to choose whether to validate the provided certificate. | ### Description
The current version of docker operator always performs TLS certificate validation. I think it would be nice to add an option to choose whether or not to validate the provided certificate.
### Use case/motivation
My work environment has several docker hosts with expired self-signed certificates. Since it is difficult to renew all certificates immediately, we are using a custom docker operator to disable certificate validation.
It would be nice if it was provided as an official feature, so I registered an issue.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30309 | https://github.com/apache/airflow/pull/30310 | 51f9910ecbf1186aff164e09d118bdf04d21dfcb | c1a685f752703eeb01f9369612af8c88c24cca09 | "2023-03-26T15:14:46Z" | python | "2023-04-14T10:17:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,289 | ["airflow/sensors/base.py", "tests/sensors/test_base.py"] | If the first poke of a sensor throws an exception, `timeout` does not work | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Airflow Version: 2.2.5
In `reschedule` mode, if the first poke of a sensor throws an exception, `timeout` does not work. There can be any combination of the poke returning `False` or raising an exception after that. My guess is that something is initialized in some database incorrectly, because this returns an empty list every time if the first poke raises an exception:
```
TaskReschedule.find_for_task_instance(
    context["ti"], try_number=first_try_number
)
```
This happens here in the main branch: https://github.com/apache/airflow/blob/main/airflow/sensors/base.py#L174-L181
If the first poke returns `False`, I don't see this issue.
### What you think should happen instead
The timeout should be respected whether `poke` returns successfully or not.
A related issue is that if every poke raises an uncaught exception, the timeout will never be respected, since the timeout is checked only after a successful poke. Maybe both issues can be fixed at once?
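One possible direction, sketched here without touching the real `BaseSensorOperator` internals (the names and the exception type are placeholders): check the elapsed time even when `poke` raises, so a sensor whose pokes keep failing still times out.

```python
import time


def poke_with_timeout(poke, started_at: float, timeout: float):
    try:
        return poke()
    except Exception:
        # Also enforce the timeout on the failure path, not only after a successful poke.
        if time.monotonic() - started_at > timeout:
            raise TimeoutError("sensor timed out while its pokes kept failing")
        raise
```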
### How to reproduce
Use this code. Run the dag several times, and see if the total duration including all retries is greater than the timeout.
```
import datetime
import random

from airflow import DAG
from airflow.models import TaskReschedule
from airflow.sensors.base import BaseSensorOperator
from airflow.utils.context import Context


class RandomlyFailSensor(BaseSensorOperator):
    def poke(self, context: Context) -> bool:
        first_try_number = context["ti"].max_tries - self.retries + 1
        task_reschedules = TaskReschedule.find_for_task_instance(
            context["ti"], try_number=first_try_number
        )
        self.log.error(f"\n\nIf this is the first attempt, or first attempt failed, "
                       f"this will be empty: \n\t{task_reschedules}\n\n")
        if random.random() < .5:
            self.log.error("\n\nIf this was the very first poke, the timeout *will not* work.\n\n")
            raise Exception('Failed!')
        else:
            self.log.error("\n\nIf this was the very first poke, the timeout *will* work.\n\n")
            return False


dag = DAG(
    'sensors_test',
    schedule_interval=None,
    max_active_runs=1,
    catchup=False,
    default_args={
        "owner": "me",
        "depends_on_past": False,
        "start_date": datetime.datetime(2018, 1, 1),
        "email_on_failure": False,
        "email_on_retry": False,
        "execution_timeout": datetime.timedelta(minutes=10),
    }
)

t_always_fail_sensor = RandomlyFailSensor(
    task_id='random_fail_sensor',
    mode="reschedule",
    poke_interval=1,
    retry_delay=datetime.timedelta(seconds=1),
    timeout=15,
    retries=50,
    dag=dag
)
```
### Operating System
Debian 11? This Docker image: https://hub.docker.com/layers/library/python/3.8.12/images/sha256-60d1cda1542582095795c25bff869b0c615e2a913c4026ed0313ede156b60468?context=explore
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
I use an internal tool that hides the details of the deployment. If there is more info that would be helpful for debugging, let me know.
### Anything else
- This happens every time, based on the conditions I describe above.
- I'd be happy to submit a PR, but that depends on what my manager says.
- @yuqian90 might know more about this issue, since they contributed related code in [this commit](https://github.com/apache/airflow/commit/a0e6a847aa72ddb15bdc147695273fb3aec8839d#diff-62f7d8a52fefdb8e05d4f040c6d3459b4a56fe46976c24f68843dbaeb5a98487R1164).
- Impact: if the first poke throws an exception and the rest return False, the task will continue indefinitely.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30289 | https://github.com/apache/airflow/pull/30293 | 41c8e58deec2895b0a04879fcde5444b170e679e | 24887091b807527b7f32a58e85775f4daec3aa84 | "2023-03-24T20:21:45Z" | python | "2023-04-05T11:17:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,287 | ["airflow/providers/amazon/aws/transfers/redshift_to_s3.py", "tests/providers/amazon/aws/transfers/test_redshift_to_s3.py"] | RedshiftToS3 Operator Wrapping Query in Quotes Instead of $$ | ### Apache Airflow version
2.5.2
### What happened
When passing a select_query into the RedshiftToS3 Operator, the query will error out if it contains any single quotes because the body of the UNLOAD statement is being wrapped in single quotes.
### What you think should happen instead
Instead, it's better practice to use the double dollar sign or dollar quoting to signify the start and end of the statement to run. This removes the need to escape any special characters and avoids the statement throwing an error in the common case of using single quotes to wrap string literals.
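An illustrative sketch of what that would look like for the reported query (the bucket and IAM role are placeholders, and this is not the operator's current code):

```python
select_query = "SELECT 'Single Quotes Break this Operator'"

unload_query = f"""
    UNLOAD ($${select_query}$$)
    TO 's3://my-bucket/my-prefix/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
    PARQUET;
"""
print(unload_query)
```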
### How to reproduce
Running the RedshiftToS3 Operator with the sql_query: `SELECT 'Single Quotes Break this Operator'` will throw the error
### Operating System
NAME="Amazon Linux" VERSION="2" ID="amzn" ID_LIKE="centos rhel fedora" VERSION_ID="2" PRETTY_NAME="Amazon Linux 2" ANSI_COLOR="0;33" CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2" HOME_URL="https://amazonlinux.com//"
### Versions of Apache Airflow Providers
apache-airflow[package-extra]==2.4.3
apache-airflow-providers-amazon
### Deployment
Amazon (AWS) MWAA
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30287 | https://github.com/apache/airflow/pull/35986 | e0df7441fa607645d0a379c2066ca4ab16f5cb95 | 04a781666be2955ed518780ea03bc13a1e3bd473 | "2023-03-24T18:31:54Z" | python | "2023-12-04T19:19:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,280 | ["airflow/www/static/css/dags.css", "airflow/www/templates/airflow/dags.html", "airflow/www/views.py", "docs/apache-airflow/core-concepts/dag-run.rst", "tests/www/views/test_views_home.py"] | Feature request - filter for dags with running status in the main page | ### Description
This is a feature request to filter by running DAGs (or by other statuses too). We have over 100 DAGs and were having some performance problems. We wanted to see all the running DAGs from the main page and found that we couldn't. We can see the light green circle in the runs (and that involves a lot of scrolling), but there is no way to filter for it.
We use SQL Server, and its job scheduling tool (SQL Agent) has this feature. The implementation for Airflow shouldn't necessarily be like this; I'm just presenting it as an example of a helpful feature implemented in other tools.
<img width="231" alt="image" src="https://user-images.githubusercontent.com/286903/227529646-97ac2e8e-52de-421a-8328-072f35ccdff2.png">
I'll leave implementation details for someone else.
on v2.2.5
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30280 | https://github.com/apache/airflow/pull/30429 | c25251cde620481592392e5f82f9aa8a259a2f06 | dbe14c31d52a345aa82e050cc0a91ee60d9ee567 | "2023-03-24T13:11:24Z" | python | "2023-05-22T16:05:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,251 | ["airflow/cli/cli_config.py", "airflow/dag_processing/manager.py", "airflow/jobs/dag_processor_job.py", "airflow/models/__init__.py"] | DagProcessor restart constantly when it running as standalone process | ### Apache Airflow version
2.5.2
### What happened
I'm running Airflow locally in my minikube cluster. For the deployment I use the official Apache Airflow Helm chart (1.8.0) with the following values.yaml (`helm install airflow-release -f values.yaml apache-airflow/airflow`):
```yaml
defaultAirflowTag: "2.5.2"
airflowVersion: "2.5.2"
dagProcessor:
  enabled: true
  replicas: 1
env:
  - name: "AIRFLOW__CORE__LOAD_EXAMPLES"
    value: "True"
```
All components are deployed correctly, but the dag processor pod restarts every 5 minutes. When I inspected the pod I found that the liveness probe failed due to a timeout. The command executed by the pod is "sh -c CONNECTION_CHECK_MAX_COUNT=0 AIRFLOW__LOGGING__LOGGING_LEVEL=ERROR exec /entrypoint \\\nairflow jobs check --hostname $(hostname)\n".
The following error message is reported:
```
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/loading.py", line 1241, in configure_subclass_mapper
sub_mapper = mapper.polymorphic_map[discriminator]
KeyError: 'DagProcessorJob'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/__main__.py", line 48, in main
args.func(args)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 52, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/commands/jobs_command.py", line 47, in check
jobs: list[BaseJob] = query.all()
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 2773, in all
return self._iter().all()
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/result.py", line 1476, in all
return self._allrows()
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/result.py", line 401, in _allrows
rows = self._fetchall_impl()
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/result.py", line 1389, in _fetchall_impl
return self._real_result._fetchall_impl()
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/result.py", line 1813, in _fetchall_impl
return list(self.iterator)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/loading.py", line 151, in chunks
rows = [proc(row) for row in fetch]
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/loading.py", line 151, in <listcomp>
rows = [proc(row) for row in fetch]
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/loading.py", line 1269, in polymorphic_instance
_instance = polymorphic_instances[discriminator]
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/_collections.py", line 746, in __missing__
self[key] = val = self.creator(key)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/loading.py", line 1244, in configure_subclass_mapper
"No such polymorphic_identity %r is defined" % discriminator
AssertionError: No such polymorphic_identity 'DagProcessorJob' is defined
```
### What you think should happen instead
I think there is an error in the file `airflow/cli/cli_parser.py` (at the 2.5.2 tag). On line 919 I found this:
```python
ARG_JOB_TYPE_FILTER = Arg(
    ("--job-type",),
    choices=("BackfillJob", "LocalTaskJob", "SchedulerJob", "TriggererJob"),
    action="store",
    help="The type of job(s) that will be checked.",
)
```
As we can see, `DagProcessorJob` does not appear in the choices. I think this could be related to the problem.
PS: In recent versions of the code, `cli_parser.py` has been split into `cli_config.py`, which is where this code now lives.
### How to reproduce
Deploy Airflow with the official Helm chart (1.8.0) on a minikube cluster with the configuration indicated in "What happened".
### Operating System
Ubuntu 20.04.6 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30251 | https://github.com/apache/airflow/pull/30278 | df49ad179bddcdb098b3eccbf9bb6361cfbafc36 | c858509d186929965219f0d6dce6621dd8edf154 | "2023-03-23T11:09:38Z" | python | "2023-03-24T17:31:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,247 | ["chart/values.schema.json", "chart/values.yaml", "tests/charts/airflow_core/test_pdb_scheduler.py", "tests/charts/other/test_pdb_pgbouncer.py", "tests/charts/webserver/test_pdb_webserver.py"] | Pod Disruption Budget doesn't allow additional properties | ### Official Helm Chart version
1.8.0 (latest released)
### Apache Airflow version
2
### Kubernetes Version
>1.21
### Helm Chart configuration
```yaml
webserver:
  podDisruptionBudget:
    enabled: true
    config:
      minAvailable: 1
```
### Docker Image customizations
_No response_
### What happened
If you use the values shown above, you will not be able to install the chart. The problem is in the schema, on this line:
https://github.com/apache/airflow/blob/main/chart/values.schema.json#L3320
### What you think should happen instead
_No response_
### How to reproduce
Use these values:
```yaml
webserver:
  podDisruptionBudget:
    enabled: true
    config:
      minAvailable: 1
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30247 | https://github.com/apache/airflow/pull/30603 | 3df0be0f6fe9786a5fcb85151fb83167649ee163 | 75f5f53ed0aa8df516c9d861153cab4f73318317 | "2023-03-23T04:48:42Z" | python | "2023-05-08T08:16:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,242 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/models/connection.py", "airflow/secrets/metastore.py", "airflow/serialization/enums.py", "airflow/serialization/serialized_objects.py", "tests/api_internal/endpoints/test_rpc_api_endpoint.py"] | AIP-44 Migrate MetastoreBackend to Internal API | Used by Variable/Connection.
https://github.com/apache/airflow/blob/894741e311ffd642e036b80d3b1b5d53c3747cad/airflow/secrets/metastore.py#L32
| https://github.com/apache/airflow/issues/30242 | https://github.com/apache/airflow/pull/33829 | 0e4d3001397ba2005b2172ad401f9938d5d6aaf8 | 0cb875b7ec1cebb101866581166cd7b97047f941 | "2023-03-22T15:56:40Z" | python | "2023-08-29T10:24:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,240 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/api_internal/internal_api_call.py", "airflow/serialization/enums.py", "airflow/serialization/serialized_objects.py", "tests/api_internal/endpoints/test_rpc_api_endpoint.py", "tests/api_internal/test_internal_api_call.py", "tests/serialization/test_serialized_objects.py"] | AIP-44 Implement conversion to Pydantic-ORM objects in Internal API | null | https://github.com/apache/airflow/issues/30240 | https://github.com/apache/airflow/pull/30282 | 7aca81ceaa6cb640dff9c5d7212adc4aeb078a2f | 41c8e58deec2895b0a04879fcde5444b170e679e | "2023-03-22T15:26:50Z" | python | "2023-04-05T08:54:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,229 | ["docs/apache-airflow/howto/operator/python.rst"] | Update Python operator how-to with @task.sensor example | ### Body
The current [how-to documentation for the `PythonSensor`](https://airflow.apache.org/docs/apache-airflow/stable/howto/operator/python.html#pythonsensor) does not include any references to the existing `@task.sensor` TaskFlow decorator. It would be nice to see how uses together in this doc.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/30229 | https://github.com/apache/airflow/pull/30344 | 4e4e563d3fc68d1becdc1fc5ec1d1f41f6c24dd3 | 2a2ccfc27c3d40caa217ad8f6f0ba0d394ac2806 | "2023-03-22T01:19:01Z" | python | "2023-04-11T09:12:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,225 | ["airflow/decorators/base.py", "airflow/decorators/setup_teardown.py", "airflow/models/baseoperator.py", "airflow/utils/setup_teardown.py", "airflow/utils/task_group.py", "tests/decorators/test_setup_teardown.py", "tests/serialization/test_dag_serialization.py", "tests/utils/test_setup_teardown.py"] | Ensure setup/teardown tasks can be reused/works with task.override | Ensure that this works:
```python
@setup
def mytask():
print("I am a setup task")
with dag_maker() as dag:
mytask.override(task_id='newtask')
assert len(dag.task_group.children) == 1
setup_task = dag.task_group.children["newtask"]
assert setup_task._is_setup
```
and teardown also works | https://github.com/apache/airflow/issues/30225 | https://github.com/apache/airflow/pull/30342 | 28f73e42721bba5c5ad40bb547be9c057ca81030 | c76555930aee9692d2a839b9c7b9e2220717b8a0 | "2023-03-21T21:01:26Z" | python | "2023-03-28T18:15:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,220 | ["airflow/models/dag.py", "airflow/www/static/js/api/useMarkFailedTask.ts", "airflow/www/static/js/api/useMarkSuccessTask.ts", "airflow/www/static/js/api/useMarkTaskDryRun.ts", "airflow/www/static/js/dag/details/index.tsx", "airflow/www/static/js/dag/details/taskInstance/taskActions/MarkInstanceAs.tsx", "airflow/www/views.py", "tests/models/test_dag.py", "tests/www/views/test_views.py"] | set tasks as successful/failed at their task-group level. | ### Description
Ability to clear or mark task groups as success/failure and have that propagate to the tasks within that task group. Sometimes there is a need to adjust the status of tasks within a task group, which can get unwieldy depending on the number of tasks in that task group. A great quality of life upgrade, and something that seems like an intuitive feature, would be the ability to clear or change the status of all tasks at their taskgroup level through the UI.
### Use case/motivation
In the event a large number of tasks, or a whole task group in this case, need to be cleared or their status set to success/failure this would be a great improvement. For example, a manual DAG run triggered through the UI or the API that has a number of task sensors or tasks that otherwise don't matter for that DAG run - instead of setting each one as success by hand, doing so for each task group would be great.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30220 | https://github.com/apache/airflow/pull/30478 | decaaa3df2b3ef0124366033346dc21d62cff057 | 1132da19e5a7d38bef98be0b1f6c61e2c0634bf9 | "2023-03-21T18:06:34Z" | python | "2023-04-27T16:10:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,200 | ["chart/templates/_helpers.yaml", "chart/values.schema.json", "chart/values.yaml", "tests/charts/test_airflow_common.py"] | Support for providing SHA digest for the image in Helm chart | ### Description
This is my configuration:
```yaml
images:
airflow:
repository: <REPO>
tag: <TAG>
```
I'd like to be able to do the following:
```yaml
images:
airflow:
repository: <REPO>
digest: <SHA_DIGEST>
```
Additionally, I've tried supplying only the repository, or placing the digest as the tag, but both don't work because of [this](https://github.com/apache/airflow/blob/c44c7e1b481b7c1a0d475265835a23b0f507506c/chart/templates/_helpers.yaml#L252). The formatting is done by `repo:tag` while I need `repo@digest`.
### Use case/motivation
I'm using Terraform to deploy Airflow.
I'm using the data source of [`aws_ecr_image`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ecr_image) in order to pick the `latest` image.
I want to supply to the [`helm_release`](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) of Airflow the image's digest rather than `latest` as according to the docs, it's bad practice.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30200 | https://github.com/apache/airflow/pull/30214 | 78ab400d7749c683c5c122bcec0a023ded7a9603 | e95f83ef374367d7ac8e75162ebe4ae1abae487f | "2023-03-20T17:07:22Z" | python | "2023-04-10T16:37:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,196 | ["airflow/www/utils.py", "airflow/www/views.py"] | delete dag run times out | ### Apache Airflow version
2.5.2
### What happened
when trying to delete a dag run with many tasks (>1000) the operation times out and the dag run is not deleted.
### What you think should happen instead
_No response_
### How to reproduce
Attempting to delete a DAG run that contains >1000 tasks (in my case 10k) using the `dagrun/list/` page results in a timeout:
![image](https://user-images.githubusercontent.com/7373236/226325567-5a87efa1-4744-417e-9995-b97dd1791401.png)
code for dag (however it fails on any dag with > 1000 tasks):
```
import json
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
from datetime import datetime, timedelta
from airflow.decorators import dag, task

default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'retries': 0,
    'retry_delay': timedelta(minutes=1),
    'start_date': datetime(2023, 2, 26),
    'is_delete_operator_pod': True,
    'get_logs': True
}


@dag('system_test', schedule=None, default_args=default_args, catchup=False, tags=['maintenance'])
def run_test_airflow():
    stress_image = 'dockerhub.prod.evogene.host/progrium/stress'

    @task
    def create_cmds():
        commands = []
        for i in range(10000):
            commands.append(["stress --cpu 4 --io 1 --vm 2 --vm-bytes 6000M --timeout 60s"])
        return commands

    KubernetesPodOperator.partial(
        image=stress_image,
        task_id=f'test_airflow',
        name=f'test_airflow',
        cmds=["/bin/sh", "-c"],
        log_events_on_failure=True,
        pod_template_file=f'/opt/airflow/dags/repo/templates/cpb_cpu_4_mem_16'
    ).expand(arguments=create_cmds())


run_test_airflow()
```
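As a stopgap while this is investigated, one way to remove such a run without going through the web form is the stable REST API's delete endpoint. This is a suggestion of mine and not part of the original report; the base URL, credentials and run id below are placeholders, and the underlying delete may still take a while, but it avoids the browser form timing out:
```python
import requests

AIRFLOW_API = "http://localhost:8080/api/v1"      # placeholder base URL
DAG_ID = "system_test"
RUN_ID = "manual__2023-03-20T10:00:00+00:00"      # placeholder run id

# DELETE /dags/{dag_id}/dagRuns/{dag_run_id} returns 204 on success;
# this assumes the stable REST API and basic auth are enabled.
response = requests.delete(
    f"{AIRFLOW_API}/dags/{DAG_ID}/dagRuns/{RUN_ID}",
    auth=("admin", "admin"),
)
response.raise_for_status()
```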
### Operating System
kubernetes deployment
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30196 | https://github.com/apache/airflow/pull/30330 | a1b99fe5364977739b7d8f22a880eeb9d781958b | 4e4e563d3fc68d1becdc1fc5ec1d1f41f6c24dd3 | "2023-03-20T11:27:46Z" | python | "2023-04-11T07:58:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,169 | ["airflow/providers/google/cloud/hooks/looker.py", "tests/providers/google/cloud/hooks/test_looker.py"] | Potential issue with use of serialize in Looker SDK | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-common-sql==1.3.4
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-google==8.11.0
apache-airflow-providers-http==4.2.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-sqlite==3.3.1
### Apache Airflow version
2
### Operating System
OS X (same issue on AWS)
### Deployment
Amazon (AWS) MWAA
### Deployment details
_No response_
### What happened
I wrote a mod on top of LookerHook to access the `scheduled_plan_run_once` endpoint. The result was the following error.
```
Traceback (most recent call last):
File "/usr/local/airflow/dags/utils/looker_operators_mod.py", line 125, in execute
resp = self.hook.run_scheduled_plan_once(
File "/usr/local/airflow/dags/utils/looker_hook_mod.py", line 136, in run_scheduled_plan_once
resp = sdk.scheduled_plan_run_once(plan_to_send)
File "/usr/local/lib/python3.9/site-packages/looker_sdk/sdk/api40/methods.py", line 10273, in scheduled_plan_run_once
self.post(
File "/usr/local/lib/python3.9/site-packages/looker_sdk/rtl/api_methods.py", line 171, in post
serialized = self._get_serialized(body)
File "/usr/local/lib/python3.9/site-packages/looker_sdk/rtl/api_methods.py", line 156, in _get_serialized
serialized = self.serialize(api_model=body) # type: ignore
TypeError: serialize() missing 1 required keyword-only argument: 'converter'
```
I was able to get past the error by rewriting the `get_looker_sdk` function in LookerHook to initialize with `looker_sdk.init40` instead, which resolved the serialize() issue.
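For illustration, a minimal sketch of that workaround (a sketch only, not the provider's actual code; the config-file path and the `plan_to_send` placeholder are assumptions):
```python
import looker_sdk

# init40() builds the client together with its default serializer/deserializer
# pair, so serialize() receives the `converter` argument it expects.
sdk = looker_sdk.init40("/path/to/looker.ini")  # assumed path to an API credentials file

plan_to_send = ...  # placeholder for the scheduled-plan body used in the hook mod above
resp = sdk.scheduled_plan_run_once(plan_to_send)
```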
### What you think should happen instead
I don't know why the serialization piece is part of the SDK initialization - would love some further context!
### How to reproduce
As far as I can tell, any call to `sdk.scheduled_plan_run_once()` causes this issue. I tried it with a variety of different dict plans. I only resolved it by changing how I initialized the SDK.
### Anything else
n/a
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30169 | https://github.com/apache/airflow/pull/34678 | 3623b77d22077b4f78863952928560833bfba2f4 | 562b98a6222912d3a3d859ca3881af3f768ba7b5 | "2023-03-17T18:50:15Z" | python | "2023-10-02T20:31:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,167 | ["airflow/providers/ssh/hooks/ssh.py", "airflow/providers/ssh/operators/ssh.py", "tests/providers/ssh/hooks/test_ssh.py", "tests/providers/ssh/operators/test_ssh.py"] | SSHOperator - Allow specific command timeout | ### Description
Following #29282, the command timeout is set at the `SSHHook` level, while it used to be settable at the `SSHOperator` level.
I will work on a PR as soon as I can.
### Use case/motivation
Ideally, I think we could have a default value set on `SSHHook`, but with the possibility of overriding it at the `SSHOperator` level, as in the sketch below.
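Hypothetical usage once an operator-level override exists; the `cmd_timeout` parameter name on the operator is my assumption about how the override could look, not an existing API guarantee:
```python
from airflow.providers.ssh.operators.ssh import SSHOperator

run_batch = SSHOperator(
    task_id="run_batch",
    ssh_conn_id="ssh_default",   # the hook/connection would carry the default timeout
    command="./run_batch.sh",
    cmd_timeout=600,             # hypothetical per-task override proposed here
)
```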
### Related issues
#29282
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30167 | https://github.com/apache/airflow/pull/30190 | 2a42cb46af66c7d6a95a718726cb9206258a0c14 | fe727f985b1053b838433b817458517c0c0f2480 | "2023-03-17T15:56:30Z" | python | "2023-03-21T20:32:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,153 | ["airflow/providers/neo4j/hooks/neo4j.py", "tests/providers/neo4j/hooks/test_neo4j.py"] | Issue with Neo4j provider using some schemes | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Hi,
I've run into some issues when using the neo4j operator.
I've tried running a simple query and got an exception from the driver itself.
**Using: Airflow 2.2.2**
### What you think should happen instead
The exception stated that with the bolt+ssc URI scheme it is not allowed to pass the `encrypted` parameter, yet the hook always passes it (it is actually not mandatory when using the driver standalone).
The exception:
```
neo4j.exceptions.ConfigurationError: The config settings "encrypted", "trust", "trusted_certificates", and "ssl_context" can only be used with the URI schemes ['bolt', 'neo4j']. Use the other URI schemes ['bolt+ssc', 'bolt+s', 'neo4j+ssc', 'neo4j+s'] for setting encryption settings.
```
In my opinion, if the URI scheme is bolt+ssc and GraphDatabase.driver was chosen in the connection settings, the hook should not pass the `encrypted` parameter.
I edited the hook myself and tried this; it worked great for me (see the sketch below).
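A minimal sketch of that idea (my own paraphrase, not the provider's actual hook code):
```python
from neo4j import GraphDatabase

def build_driver(uri: str, user: str, password: str, encrypted: bool):
    # The +ssc/+s schemes already carry their encryption settings, so only the
    # plain bolt:// and neo4j:// schemes should receive the `encrypted` flag.
    if uri.startswith(("bolt+ssc://", "bolt+s://", "neo4j+ssc://", "neo4j+s://")):
        return GraphDatabase.driver(uri, auth=(user, password))
    return GraphDatabase.driver(uri, auth=(user, password), encrypted=encrypted)
```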
### How to reproduce
Install the neo4j provider (I used v3.1.0).
Create a neo4j connection in the UI.
Add your host, user/login, password and extras.
In the extras:
```json
{
  "encrypted": false,
  "neo4j_scheme": false,
  "certs_self_signed": true
}
```
### Operating System
Linux
### Versions of Apache Airflow Providers
pyairtable==1.0.0
tableauserverclient==0.17.0
apache-airflow-providers-mysql==2.1.1
apache-airflow-providers-salesforce==3.3.0
apache-airflow-providers-slack==4.1.0
apache-airflow-providers-tableau==2.1.2
apache-airflow-providers-postgres==2.3.0
apache-airflow-providers-jdbc==2.0.1
apache-airflow-providers-neo4j==3.1.0
mysql-connector-python==8.0.27
slackclient>=1.0.0,<2.0.0
boto3==1.20.26
cached-property==1.5.2
### Deployment
Amazon (AWS) MWAA
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30153 | https://github.com/apache/airflow/pull/30418 | 93a5422c5677a42b3329c329d65ff2b38b1348c2 | cd458426c66aca201e43506c950ee68c2f6c3a0a | "2023-03-16T19:47:42Z" | python | "2023-04-21T22:01:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,146 | ["airflow/exceptions.py", "airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | on_failure_callback call multiple times and when task has retries | ### Apache Airflow version
2.5.2
### What happened
The changes in https://github.com/apache/airflow/pull/29743 add new places where the `on_failure_callback` is called. This leads to two incorrect behaviors:
1. The `on_failure_callback` is incorrectly called when a task has retries and goes into `UP_FOR_RETRY`.
2. The `on_failure_callback` is sometimes called twice.
### What you think should happen instead
The on_failure_callback should only be called once when the task goes into a failed state.
### How to reproduce
These two patches (https://github.com/eejbyfeldt/airflow/commit/b0e7a0ae3b2c494bb75772866466110c6b3b7e8f, https://github.com/eejbyfeldt/airflow/commit/c48ca448ac3419d7b2d840405ed0b4699b8ccc02): the first modifies an existing test case to show that the callback is now incorrectly called, and the second adds a test case showing that it now gets called more than once.
```
From b0e7a0ae3b2c494bb75772866466110c6b3b7e8f Mon Sep 17 00:00:00 2001
From: Emil Ejbyfeldt <[email protected]>
Date: Thu, 16 Mar 2023 14:46:04 +0100
Subject: [PATCH 1/2] Modify spec to show that callback is now incorrectly
called
---
tests/models/test_taskinstance.py | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/tests/models/test_taskinstance.py b/tests/models/test_taskinstance.py
index 50cac05296..d4a0328513 100644
--- a/tests/models/test_taskinstance.py
+++ b/tests/models/test_taskinstance.py
@@ -448,7 +448,7 @@ class TestTaskInstance:
ti.run()
assert State.SKIPPED == ti.state
- def test_task_sigterm_works_with_retries(self, dag_maker):
+ def test_task_sigterm_works_with_retries(self, dag_maker, caplog):
"""
Test that ensures that tasks are retried when they receive sigterm
"""
@@ -462,6 +462,7 @@ class TestTaskInstance:
python_callable=task_function,
retries=1,
retry_delay=datetime.timedelta(seconds=2),
+ on_failure_callback=lambda context: context["ti"].log.info("on_failure_callback called"),
)
dr = dag_maker.create_dagrun()
@@ -471,6 +472,7 @@ class TestTaskInstance:
ti.run()
ti.refresh_from_db()
assert ti.state == State.UP_FOR_RETRY
+ assert "on_failure_callback called" not in caplog.text
def test_task_sigterm_calls_on_failure_callack(self, dag_maker, caplog):
"""
--
2.39.2
```
```
From c48ca448ac3419d7b2d840405ed0b4699b8ccc02 Mon Sep 17 00:00:00 2001
From: Emil Ejbyfeldt <[email protected]>
Date: Thu, 16 Mar 2023 16:10:53 +0100
Subject: [PATCH 2/2] Add test case for on_failure_callback only being called
once
---
tests/models/test_taskinstance.py | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
diff --git a/tests/models/test_taskinstance.py b/tests/models/test_taskinstance.py
index d4a0328513..8d7d8f4861 100644
--- a/tests/models/test_taskinstance.py
+++ b/tests/models/test_taskinstance.py
@@ -474,6 +474,32 @@ class TestTaskInstance:
assert ti.state == State.UP_FOR_RETRY
assert "on_failure_callback called" not in caplog.text
+ def test_task_sigterm_call_on_failure_callback_only_once(self, dag_maker, caplog):
+ """
+ Test that ensures on_failure_callback is called once on sigterm
+ """
+
+ def task_function(ti):
+ os.kill(ti.pid, signal.SIGTERM)
+
+ with dag_maker("test_mark_failure_2"):
+ task = PythonOperator(
+ task_id="test_on_failure",
+ python_callable=task_function,
+ retries=0,
+ retry_delay=datetime.timedelta(seconds=2),
+ on_failure_callback=lambda context: context["ti"].log.info("on_failure_callback called"),
+ )
+
+ dr = dag_maker.create_dagrun()
+ ti = dr.task_instances[0]
+ ti.task = task
+ with pytest.raises(AirflowException):
+ ti.run()
+ ti.refresh_from_db()
+ assert ti.state == State.FAILED
+ assert caplog.text.count("on_failure_callback called") == 1
+
def test_task_sigterm_calls_on_failure_callack(self, dag_maker, caplog):
"""
Test that ensures that tasks call on_failure_callback when they receive sigterm
--
2.39.2
```
After reverting the code changes from https://github.com/apache/airflow/pull/29743, both of these specs pass, and the new spec added in that PR also succeeds without the code changes in it. So it is not clear that it solves the bug it intended to solve.
### Operating System
Fedora 37
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30146 | https://github.com/apache/airflow/pull/30165 | a6581937dd6c8ad45a23f3fef6d5ab9202de586d | 869c1e3581fa163bbaad11a2d5ddaf8cf433296d | "2023-03-16T15:42:35Z" | python | "2023-03-17T10:52:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,124 | ["airflow/models/taskinstance.py", "airflow/utils/state.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py", "tests/models/test_cleartasks.py", "tests/models/test_dagrun.py"] | DagRun's start_date updated when user clears task of the running Dagrun | ### Apache Airflow version
2.5.1
### What happened
DagRun state and start_date are reset if somebody clears a task of a running DagRun.
### What you think should happen instead
I think we should not reset the DagRun `state` and `start_date` while it is in the running or queued state, because that doesn't make any sense to me. The `state` and `start_date` of the DagRun should remain the same when somebody clears a task of the running DagRun.
### How to reproduce
Let's say we have a Dag with 2 tasks in it, a short one and a long one:
```
dag = DAG(
    'dummy-dag',
    schedule_interval='@once',
    catchup=False,
)
DagContext.push_context_managed_dag(dag)
bash_success = BashOperator(
    task_id='bash-success',
    bash_command='echo "Start and finish"; exit 0',
    retries=0,
)
date_ind_success = BashOperator(
    task_id='bash-long-success',
    bash_command='echo "Start and finish"; sleep 300; exit 0',
)
```
Let's say we have a running DagRun of this DAG. The first task finishes in a second and the long one is still running. We have a start_date and duration set and the DagRun is still running. It runs, for example, for 30 secs (pics 1 and 2).
<img width="486" alt="image" src="https://user-images.githubusercontent.com/23456894/225335210-c2223ad1-771b-459d-b8ed-8f0aacb9b890.png">
<img width="492" alt="image" src="https://user-images.githubusercontent.com/23456894/225335272-ad737aef-2051-4e27-ae36-38c76d720c95.png">
Then we clear the short task. This resets the DagRun state (to `queued`) and clears `start_date` as if we had a new DagRun (pics 3 and 4).
<img width="407" alt="image" src="https://user-images.githubusercontent.com/23456894/225335397-6c7e0df7-a26a-46ed-8eaa-56ff928fc01a.png">
<img width="498" alt="image" src="https://user-images.githubusercontent.com/23456894/225335491-4d6a860a-e923-4878-b212-a6ccb4b590a3.png">
### Operating System
Unix/MacOS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30124 | https://github.com/apache/airflow/pull/30125 | 0133f6806dbfb60b84b5bea4ce0daf073c246d52 | 070ecbd87c5ac067418b2814f554555da0a4f30c | "2023-03-15T14:26:30Z" | python | "2023-04-26T15:27:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,097 | ["airflow/jobs/triggerer_job.py", "tests/jobs/test_triggerer_job.py"] | KPO (async) log full config_dict in triggerer | ### Apache Airflow Provider(s)
cncf-kubernetes
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes==5.2.2
### Apache Airflow version
2.5.2rc2
### Operating System
ubuntu 22.04
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
The log of the triggerer process shows the full config_dict, including the K8S credentials.
I replaced the credentials with XXXXXX in this example:
```log
2023-03-14 12:43:32,213] {triggerer_job.py:359} INFO - Trigger <airflow.providers.cncf.kubernetes.triggers.kubernetes_pod.KubernetesPodTrigger pod_name=airflow-test-pod-7uscirwh, pod_namespace=default, base_container_name=base, kubernetes_conn_id=kubernetes_default, poll_interval=2, cluster_context=None, config_dict={'apiVersion': 'v1', 'clusters': [{'cluster': {'certificate-authority-data': 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX', 'server': 'https://kind-control-plane:6443'}, 'name': 'kind-kind'}], 'contexts': [{'context': {'cluster': 'kind-kind', 'user': 'kind-kind'}, 'name': 'kind-kind'}], 'current-context': 'kind-kind', 'kind': 'Config', 'preferences': {}, 'users': [{'name': 'kind-kind', 'user': {'client-certificate-data': 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX', 'client-key-data': 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'}}]}, in_cluster=None, should_delete_pod=True, get_logs=True, startup_timeout=120, trigger_start_time=2023-03-14T12:43:31.189834+00:00> (ID 4) starting
2023-03-14T12:43:32.214686035Z [2023-03-14 12:43:32,214] {kubernetes_pod.py:122} INFO - Checking pod 'airflow-test-pod-7uscirwh' in namespace 'default'.
2023-03-14T12:43:32.218581616Z [2023-03-14 12:43:32,218] {base.py:73} INFO - Using connection ID 'kubernetes_default' for task execution.
2023-03-14T12:43:32.249594574Z [2023-03-14 12:43:32,249] {kubernetes_pod.py:147} INFO - Container is not completed and still working.
```
probably related to https://github.com/apache/airflow/pull/29498
### What you think should happen instead
_No response_
### How to reproduce
```python
from pendulum import today
from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

dag = DAG(
    dag_id="kubernetes_dag",
    schedule_interval="0 0 * * *",
    start_date=today("UTC").add(days=-1)
)

with dag:
    KubernetesPodOperator(
        task_id="task-one",
        namespace="default",
        kubernetes_conn_id="kubernetes_default",
        config_file="/opt/airflow/include/.kube/config",  # bug of deferrable -> https://github.com/apache/airflow/pull/29498
        name="airflow-test-pod",
        image="alpine:3.16.2",
        cmds=["sh", "-c", "echo toto"],
        is_delete_operator_pod=True,
        deferrable=True,
        get_logs=True,
    )
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30097 | https://github.com/apache/airflow/pull/30110 | 89579a4ef7970879290e01611ed558e6540e56b6 | 274d9c3508179ae8b0f705d9787e8200be7718e1 | "2023-03-14T12:48:59Z" | python | "2023-04-06T11:59:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,089 | ["airflow/www/views.py", "tests/www/views/test_views_rendered.py"] | Connection password values appearing unmasked in the "Task Instance Details" -> "Environment" field | ### Apache Airflow version
Airflow 2.5.1
### What happened
Connection password values appear in the "Task Instance Details" -> "Task Attributes" -> environment field.
We are setting environment variables for the DockerOperator with values from the password field of a connection.
The values from the password field are masked in the "Rendered Template" section and in the logs, but they show up in plain text in the "environment" field under Task Instance Details.
### What you think should happen instead
These password values should be masked like they are in the "Rendered Template" and logs.
### How to reproduce
Via this DAG; it can run off any image.
Create a connection called "DATABASE_CONFIG" with a password in the password field.
Run this DAG and then check its Task Instance Details.
DAG Code:
```
from airflow import DAG
from docker.types import Mount
from airflow.providers.docker.operators.docker import DockerOperator
from datetime import timedelta
from airflow.models import Variable
from airflow.hooks.base_hook import BaseHook
import pendulum
import json

# Amount of times to retry job on failure
retries = 0

environment_config = {
    "DB_WRITE_PASSWORD": BaseHook.get_connection("DATABASE_CONFIG").password,
}

# Setup default args for the job
default_args = {
    "owner": "airflow",
    "start_date": pendulum.datetime(2023, 1, 1, tz="Australia/Sydney"),
    "retries": retries,
}

# Create the DAG
dag = DAG(
    "test_dag",  # DAG ID
    default_args=default_args,
    schedule_interval="* * * * *",
    catchup=False,
)

# Create the DAG object
with dag as dag:
    docker_task = DockerOperator(
        task_id="task",
        image="<image>",
        execution_timeout=timedelta(minutes=2),
        environment=environment_config,
        command="<command>",
        api_version="auto",
        docker_url="tcp://docker.for.mac.localhost:2375",
    )
```
Rendered Template is good:
![image](https://user-images.githubusercontent.com/41356007/224928676-4c1de3d9-90dc-40dc-bb27-aa10661537ba.png)
In "Task Instance Details"
![image](https://user-images.githubusercontent.com/41356007/224928510-0dc4fc40-f675-49fd-a299-2c2f42feef5b.png)
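A possible interim mitigation (my suggestion, not something from the original report): keep the raw secret out of the operator attribute by resolving it through a template at runtime. As far as I know, `environment` is a templated field of `DockerOperator` and rendered values are masked, so only the template string would appear in Task Instance Details:
```python
environment_config = {
    # resolved only at render time via the `conn` template accessor
    "DB_WRITE_PASSWORD": "{{ conn.DATABASE_CONFIG.password }}",
}
```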
### Operating System
centOS Linux and MAC
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
Running on a docker via the airflow docker-compose
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30089 | https://github.com/apache/airflow/pull/31125 | db359ee2375dd7208583aee09b9eae00f1eed1f1 | ffe3a68f9ada2d9d35333d6a32eac2b6ac9c70d6 | "2023-03-14T04:35:49Z" | python | "2023-05-08T14:59:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,075 | ["airflow/api_connexion/openapi/v1.yaml"] | Unable to set DagRun state in create Dagrun endpoint ("Property is read-only - 'state'") | ### Apache Airflow version
main (development)
### What happened
While working on another change I noticed that the example [POST from the API docs](https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/post_dag_run) actually leads to a request Error:
```
curl -X POST -H "Cookie: session=xxxx" localhost:8080/api/v1/dags/data_warehouse_dag_5by1a2rogu/dagRuns -d '{"dag_run_id":"string2","logical_date":"2019-08-24T14:15:24Z","execution_date":"2019-08-24T14:15:24Z","conf":{},"state":"queued","note":"strings"}' -H 'Content-Type: application/json'
{
  "detail": "Property is read-only - 'state'",
  "status": 400,
  "title": "Bad Request",
  "type": "http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/stable-rest-api-ref.html#section/Errors/BadRequest"
}
```
I believe that this comes from the DagRunSchema marking this field as dump_only:
https://github.com/apache/airflow/blob/478fd826522b6192af6b86105cfa0686583e34c2/airflow/api_connexion/schemas/dag_run_schema.py#L69
So either -
1) The documentation / API spec is incorrect and this field cannot be set in the request
2) The marshmallow schema is incorrect and this field is incorrectly marked as `dump_only`
I think that it's the former, as there's [even a test to ensure that this field can't be set in a request](https://github.com/apache/airflow/blob/751a995df55419068f11ebabe483dba3302916ed/tests/api_connexion/endpoints/test_dag_run_endpoint.py#L1247-L1257) - I can look into this and fix it soon.
### What you think should happen instead
The API should accept requests which follow the examples from the documentation.
### How to reproduce
Spin up breeze and POST a create dagrun request which attempts to set the DagRun state.
### Operating System
Breeze
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30075 | https://github.com/apache/airflow/pull/30149 | f01140141f1fe51b6ee1eba5b02ab7516a67c9c7 | e01c14661a4ec4bee3a2066ac1323fbd8a4386f1 | "2023-03-13T17:28:20Z" | python | "2023-03-21T18:26:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,073 | ["airflow/models/taskinstance.py", "tests/ti_deps/deps/test_trigger_rule_dep.py"] | Task group expand fails on empty list at get_relevant_upstream_map_indexes | ### Apache Airflow version
2.5.1
### What happened
Expanding a task group fails when the list is empty and a task inside the group references the mapped index in an xcom_pull.
![image](https://user-images.githubusercontent.com/114723574/224769499-4a094b0c-8bbe-455f-9034-70c1cbfe2e3a.png)
It throws the error below:
```
Traceback (most recent call last):
File "/opt/bitnami/airflow/venv/bin/airflow", line 8, in <module>
sys.exit(main())
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/__main__.py", line 39, in main
args.func(args)
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 52, in command
return func(*args, **kwargs)
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/utils/cli.py", line 108, in wrapper
return f(*args, **kwargs)
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 73, in scheduler
_run_scheduler_job(args=args)
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 43, in _run_scheduler_job
job.run()
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 258, in run
self._execute()
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 759, in _execute
self._run_scheduler_loop()
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 885, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 964, in _do_scheduling
callback_tuples = self._schedule_all_dag_runs(guard, dag_runs, session)
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/utils/retries.py", line 78, in wrapped_function
for attempt in run_with_db_retries(max_retries=retries, logger=logger, **retry_kwargs):
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 384, in __iter__
do = self.iter(retry_state=retry_state)
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 351, in iter
return fut.result()
File "/opt/bitnami/python/lib/python3.9/concurrent/futures/_base.py", line 439, in result
return self.__get_result()
File "/opt/bitnami/python/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/utils/retries.py", line 87, in wrapped_function
return func(*args, **kwargs)
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 1253, in _schedule_all_dag_runs
callback_to_run = self._schedule_dag_run(dag_run, session)
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 1322, in _schedule_dag_run
schedulable_tis, callback_to_run = dag_run.update_state(session=session, execute_callbacks=False)
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/utils/session.py", line 72, in wrapper
return func(*args, **kwargs)
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/models/dagrun.py", line 563, in update_state
info = self.task_instance_scheduling_decisions(session)
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/utils/session.py", line 72, in wrapper
return func(*args, **kwargs)
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/models/dagrun.py", line 710, in task_instance_scheduling_decisions
schedulable_tis, changed_tis, expansion_happened = self._get_ready_tis(
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/models/dagrun.py", line 793, in _get_ready_tis
if not schedulable.are_dependencies_met(session=session, dep_context=dep_context):
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/utils/session.py", line 72, in wrapper
return func(*args, **kwargs)
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1070, in are_dependencies_met
for dep_status in self.get_failed_dep_statuses(dep_context=dep_context, session=session):
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1091, in get_failed_dep_statuses
for dep_status in dep.get_dep_statuses(self, session, dep_context):
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/ti_deps/deps/base_ti_dep.py", line 107, in get_dep_statuses
yield from self._get_dep_statuses(ti, session, cxt)
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/ti_deps/deps/trigger_rule_dep.py", line 93, in _get_dep_statuses
yield from self._evaluate_trigger_rule(ti=ti, dep_context=dep_context, session=session)
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/ti_deps/deps/trigger_rule_dep.py", line 219, in _evaluate_trigger_rule
.filter(or_(*_iter_upstream_conditions()))
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/ti_deps/deps/trigger_rule_dep.py", line 191, in _iter_upstream_conditions
map_indexes = _get_relevant_upstream_map_indexes(upstream_id)
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/ti_deps/deps/trigger_rule_dep.py", line 138, in _get_relevant_upstream_map_indexes
return ti.get_relevant_upstream_map_indexes(
File "/opt/bitnami/airflow/venv/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2652, in get_relevant_upstream_map_indexes
    ancestor_map_index = self.map_index * ancestor_ti_count // ti_count
```
### What you think should happen instead
In the case of an empty list, the whole task group should be skipped.
### How to reproduce
```python
from airflow.operators.bash import BashOperator
from airflow.operators.python import get_current_context
import pendulum
from airflow.decorators import dag, task, task_group
from airflow.operators.empty import EmptyOperator


@dag(dag_id="test", start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
     schedule=None, catchup=False,
     render_template_as_native_obj=True
     )
def testdag():
    task1 = EmptyOperator(task_id="get_attribute_can_json_mapping")

    @task
    def lkp_schema_output_mapping(**context):
        return 1

    @task
    def task2(**context):
        return 2

    @task
    def task3(table_list, **context):
        return []

    [task2(), task1,
     group2.expand(file_name=task3(table_list=task2()))]


@task_group(
    group_id="group2"
)
def group2(file_name):
    @task
    def get_table_name(name):
        return "testing"

    table_name = get_table_name(file_name)
    run_this = BashOperator(
        task_id="run_this",
        bash_command="echo {{task_instance.xcom_pull(task_ids='copy_to_staging.get_table_name',"
                     "map_indexes=task_instance.map_index)}}",
    )
    table_name >> run_this


dag = testdag()

if __name__ == "__main__":
    dag.test()
```
### Operating System
Debian GNU/Linux 11
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==7.1.0
apache-airflow-providers-apache-cassandra==3.1.0
apache-airflow-providers-apache-drill==2.3.1
apache-airflow-providers-apache-druid==3.3.1
apache-airflow-providers-apache-hdfs==3.2.0
apache-airflow-providers-apache-hive==5.1.1
apache-airflow-providers-apache-pinot==4.0.1
apache-airflow-providers-arangodb==2.1.0
apache-airflow-providers-celery==3.1.0
apache-airflow-providers-cloudant==3.1.0
apache-airflow-providers-cncf-kubernetes==5.1.1
apache-airflow-providers-common-sql==1.3.3
apache-airflow-providers-databricks==4.0.0
apache-airflow-providers-docker==3.4.0
apache-airflow-providers-elasticsearch==4.3.3
apache-airflow-providers-exasol==4.1.3
apache-airflow-providers-ftp==3.3.0
apache-airflow-providers-google==8.8.0
apache-airflow-providers-grpc==3.1.0
apache-airflow-providers-hashicorp==3.2.0
apache-airflow-providers-http==4.1.1
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-influxdb==2.1.0
apache-airflow-providers-microsoft-azure==5.1.0
apache-airflow-providers-microsoft-mssql==3.3.2
apache-airflow-providers-mongo==3.1.1
apache-airflow-providers-mysql==4.0.0
apache-airflow-providers-neo4j==3.2.1
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-presto==4.2.1
apache-airflow-providers-redis==3.1.0
apache-airflow-providers-sendgrid==3.1.0
apache-airflow-providers-sftp==4.2.1
apache-airflow-providers-slack==7.2.0
apache-airflow-providers-sqlite==3.3.1
apache-airflow-providers-ssh==3.4.0
apache-airflow-providers-trino==4.3.1
apache-airflow-providers-vertica==3.3.1
### Deployment
Other
### Deployment details
_No response_
### Anything else
I have manually changed the check below in `taskinstance.py` (`get_relevant_upstream_map_indexes` method) and it ran fine. Please check if you can implement the same:
```python
if ti_count is None or ti_count == 0:
    return None
```
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30073 | https://github.com/apache/airflow/pull/30084 | 66b5f90f4536329ba1fe0e54e3f15ec98c1e2730 | 8d22828e2519a356e9e38c78c3efee1d13b45675 | "2023-03-13T16:55:34Z" | python | "2023-03-15T22:58:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,071 | ["chart/templates/cleanup/cleanup-cronjob.yaml", "chart/templates/statsd/statsd-deployment.yaml", "chart/values.schema.json", "chart/values.yaml", "tests/charts/test_cleanup_pods.py", "tests/charts/test_statsd.py"] | Helm Chart: allow setting annotations for resource controllers (CronJob, Deployment) | ### Description
The Helm Chart allows setting annotations for the pods created by the `CronJob` [but not the CronJob controller itself](https://github.com/apache/airflow/blob/helm-chart/1.8.0/chart/templates/cleanup/cleanup-cronjob.yaml).
The values file should offer an option to provide custom annotations for the `CronJob` controller, similarly to how the DB migrations job exposes `.Values.migrateDatabaseJob.jobAnnotations`
In the same fashion, other `Deployment` templates expose custom annotations, but [statsd deployment doesn't](https://github.com/apache/airflow/blob/helm-chart/1.8.0/chart/templates/statsd/statsd-deployment.yaml).
### Use case/motivation
Other tools e.g. ArgoCD may require the use of annotations, for example:
* [ArgoCD Sync Options](https://argo-cd.readthedocs.io/en/stable/user-guide/sync-options)
* [ArgoCD Sync Phases and Waves](https://argo-cd.readthedocs.io/en/stable/user-guide/sync-waves/)
Example use case:
_Set the cleanup CronJob to be synced after the webserver and scheduler deployments have been synced with ArgoCD_
### Related issues
https://github.com/apache/airflow/issues/25446 originally mentioned the issue regarding the StatsD deployment, but the accepted fix was https://github.com/apache/airflow/pull/25732 which allows setting annotations for the pod template, not the `Deployment` itself
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30071 | https://github.com/apache/airflow/pull/30126 | c1aa4b9500f417e6669a79fbf59c11ae6e6993a2 | 8b634ffa6aa5a83e1f87f1a62bfa07e78147f5c5 | "2023-03-13T12:13:38Z" | python | "2023-03-16T19:09:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,042 | ["airflow/www/utils.py", "airflow/www/views.py"] | Search/filter by note in List Dag Run | ### Description
Going to the Airflow web UI, Browse > DAG Runs displays the list of runs, but there is no way to search or filter based on the text in the "Note" column.
### Use case/motivation
It is possible to do a free-text search on the "Run Id" field. The Note field may contain pieces of information that are relevant to search for, or to filter on.
### Related issues
Sorting by Note in List Dag Run fails:
https://github.com/apache/airflow/issues/30041
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30042 | https://github.com/apache/airflow/pull/31455 | f00c131cbf5b2c19c817d1a1945326b80f8c79e7 | 5794393c95156097095e6fbf76d7faeb6ec08072 | "2023-03-11T14:16:02Z" | python | "2023-05-25T18:17:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,041 | ["airflow/www/views.py"] | Sorting by Note in List Dag Run fails | ### Apache Airflow version
2.5.1
### What happened
Going to Airflow web UI, Browse>DAG Run displays the list of runs:
http://0.0.0.0:8084/dagrun/list/
Clicking on the column headers allows sorting the data, except for the 'Note' field. This opens
http://0.0.0.0:8084/dagrun/list/?_oc_DagRunModelView=note&_od_DagRunModelView=asc
and displays an error page:
" Ooops!
Something bad has happened.
...
Python version: 3.7.16
Airflow version: 2.5.1
Node: 80277a0dd39e
-------------------------------------------------------------------------------
Error! Please contact server admin."
### What you think should happen instead
It should sort the data by Note.
### How to reproduce
Run the docker stack. Click Browse>DAG Run to display the list of runs. Then click on the "Note" column header.
### Operating System
Docker image "FROM apache/airflow:2.5.1-python3.7" (with Ubuntu host)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
Following https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html
The Dockerfile is:
```Dockerfile
#!/usr/bin/env -S docker build . --tag=airflow_python_r_v1 --network=host --file
ARG AIRFLOW_VERSION=2.5.1
ARG PYTHON_RUNTIME_VERSION=3.7
FROM apache/airflow:${AIRFLOW_VERSION}-python${PYTHON_RUNTIME_VERSION}
SHELL ["/bin/bash", "-o", "pipefail", "-e", "-u", "-x", "-c"]
USER root
ENV TZ=Europe/Paris
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
USER airflow
```
### Anything else
Logs from the airflow-webserver container:
```
127.0.0.1 - - [11/Mar/2023:14:58:11 +0100] "GET /health HTTP/1.1" 200 141 "-" "curl/7.74.0"
[2023-03-11 14:58:17,322] {app.py:1742} ERROR - Exception on /dagrun/list/ [GET]
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/home/airflow/.local/lib/python3.7/site-packages/flask_appbuilder/security/decorators.py", line 133, in wraps
return f(self, *args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/flask_appbuilder/views.py", line 554, in list
widgets = self._list()
File "/home/airflow/.local/lib/python3.7/site-packages/flask_appbuilder/baseviews.py", line 1169, in _list
page_size=page_size,
File "/home/airflow/.local/lib/python3.7/site-packages/flask_appbuilder/baseviews.py", line 1068, in _get_list_widget
page_size=page_size,
File "/home/airflow/.local/lib/python3.7/site-packages/flask_appbuilder/models/sqla/interface.py", line 469, in query
select_columns,
File "/home/airflow/.local/lib/python3.7/site-packages/flask_appbuilder/models/sqla/interface.py", line 424, in apply_all
aliases_mapping=aliases_mapping,
File "/home/airflow/.local/lib/python3.7/site-packages/flask_appbuilder/models/sqla/interface.py", line 371, in _apply_inner_all
query, order_column, order_direction, aliases_mapping=aliases_mapping
File "/home/airflow/.local/lib/python3.7/site-packages/flask_appbuilder/models/sqla/interface.py", line 207, in apply_order_by
query = query.order_by(asc(_order_column))
File "<string>", line 2, in asc
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/sql/elements.py", line 3599, in _create_asc
coercions.expect(roles.ByOfRole, column),
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/sql/coercions.py", line 177, in expect
element = element.__clause_element__()
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/ext/associationproxy.py", line 442, in __clause_element__
"The association proxy can't be used as a plain column "
NotImplementedError: The association proxy can't be used as a plain column expression; it only works inside of a comparison expression
172.31.0.1 - - [11/Mar/2023:14:58:17 +0100] "GET /dagrun/list/?_oc_DagRunModelView=note&_od_DagRunModelView=asc HTTP/1.1" 500 1544 "http://0.0.0.0:8084/dagrun/list/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/110.0"
127.0.0.1 - - [11/Mar/2023:14:58:42 +0100] "GET /health HTTP/1.1" 200 141 "-" "curl/7.74.0"
```
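The traceback above hints at the root cause: `DagRun.note` is exposed through a SQLAlchemy association proxy to the note table's content column, and an association proxy cannot be handed to `order_by()` as a plain column. Roughly, the sort would have to join the note model explicitly. This is only a sketch, with model and attribute names to the best of my knowledge rather than taken from this report:
```python
from sqlalchemy import asc
from airflow.models.dagrun import DagRun, DagRunNote

# `session` is assumed to be an existing SQLAlchemy ORM session.
runs = (
    session.query(DagRun)
    .outerjoin(DagRunNote, DagRunNote.dag_run_id == DagRun.id)
    .order_by(asc(DagRunNote.content))
)
```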
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30041 | https://github.com/apache/airflow/pull/30043 | ac0e666fa74ff3bacaae912862558dd704a7ebbf | 12b88ccf3fa486d8ba0d72e75090f76aed53b733 | "2023-03-11T14:02:37Z" | python | "2023-03-14T21:01:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,023 | ["docs/apache-airflow/best-practices.rst"] | Variable with template is ambiguous, especially for new users | ### What do you see as an issue?
In the doc below, it states `Make sure to use variable with template in operator, not in the top level code.`
https://github.com/apache/airflow/blob/main/docs/apache-airflow/best-practices.rst
It then gives this example as a Good Example.
**Good Example**
```
bash_use_variable_good = BashOperator(
    task_id="bash_use_variable_good",
    bash_command="echo variable foo=${foo_env}",
    env={"foo_env": "{{ var.value.get('foo') }}"},
)
```
This "good" example is still top-level code in the same sense as the "bad" example below, since `{{ var.value.get('foo') }}` appears in the top-level code (the operator's `__init__` method is run every time the dag file is parsed).
This can be ambiguous for users, especially new users, to understand the true difference between templated and non-templated variables.
The difference between the two examples below isn't that one of them is using top-level code and the other isn't, it's that one is jinja templated and the other isn't. There is a great opportunity here to showcase the utility of jinja templating.
```
bash_use_variable_bad_3 = BashOperator(
    task_id="bash_use_variable_bad_3",
    bash_command="echo variable foo=${foo_env}",
    env={"foo_env": Variable.get("foo")},  # DON'T DO THAT
)
```
and
```
bash_use_variable_good = BashOperator(
    task_id="bash_use_variable_good",
    bash_command="echo variable foo=${foo_env}",
    env={"foo_env": "{{ var.value.get('foo') }}"},
)
```
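To make that distinction concrete, here is a small illustration (my own, not proposed documentation wording):
```python
from airflow.models import Variable

# Evaluated on every DAG-file parse: the scheduler hits the metadata DB here.
password_eager = Variable.get("foo")

# A plain string at parse time: the DB is only queried when the task runs
# and the template is rendered.
password_lazy = "{{ var.value.get('foo') }}"
```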
### Solving the problem
Replacing `Make sure to use variable with template in operator, not in the top level code.` with a sentence that is more in line with the examples following it will not only show alignment but also highlight the benefits of jinja templating in top level code.
Perhaps:
```
In top-level code, variables using jinja templates do not produce a request until runtime, whereas `Variable.get()` produces a request every time the dag file is parsed by the scheduler. This will lead to suboptimal performance for the scheduler and can cause the dag file to time out before it is fully parsed.
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30023 | https://github.com/apache/airflow/pull/30040 | 8d22828e2519a356e9e38c78c3efee1d13b45675 | f1e40cf799c5ae73ec6f7991efe604f2088d8622 | "2023-03-10T14:13:32Z" | python | "2023-03-16T00:06:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,010 | ["airflow/providers/snowflake/CHANGELOG.rst", "airflow/providers/snowflake/operators/snowflake.py"] | SnowflakeOperator default autocommit flipped to False | ### Apache Airflow Provider(s)
snowflake
### Versions of Apache Airflow Providers
This started with apache-airflow-providers-snowflake==4.0.0 and is still an issue with 4.0.4
### Apache Airflow version
2.5.1
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Astronomer
### Deployment details
This is affecting both local and hosted deployments
### What happened
We are testing out several updated packages, and one thing that broke was the SnowflakeOperator when it was executing a stored procedure. The specific error points to autocommit being set to False:
`Stored procedure execution error: Scoped transaction started in stored procedure is incomplete and it was rolled back.`
Whereas this used to work in version 3.2.0:
```
copy_data_snowflake = SnowflakeOperator(
    task_id=f'copy_{table_name}_snowflake',
    sql=query,
)
```
In order for it to work now, we have to specify autocommit=True:
```
copy_data_snowflake = SnowflakeOperator(
    task_id=f'copy_{table_name}_snowflake',
    sql=query,
    autocommit=True,
)
```
[The code](https://github.com/apache/airflow/blob/599c587e26d5e0b8fa0a0967f3dc4fa92d257ed0/airflow/providers/snowflake/operators/snowflake.py#L45) still indicates that the default is True, but I believe [this commit](https://github.com/apache/airflow/commit/ecd4d6654ff8e0da4a7b8f29fd23c37c9c219076#diff-e9f45fcabfaa0f3ed0c604e3bf2215fed1c9d3746e9c684b89717f9cd75f1754L98) broke it.
### What you think should happen instead
The default for autocommit should revert to the previous behavior, matching the documentation.
### How to reproduce
In Snowflake:
```
CREATE OR REPLACE TABLE PUBLIC.FOO (BAR VARCHAR);
CREATE OR REPLACE PROCEDURE PUBLIC.FOO()
RETURNS VARCHAR
LANGUAGE SQL
AS $$
INSERT INTO PUBLIC.FOO VALUES('bar');
$$
;
```
In Airflow, this fails:
```
copy_data_snowflake = SnowflakeOperator(
    task_id='call_foo',
    sql="call public.foo()",
)
```
But this succeeds:
```
copy_data_snowflake = SnowflakeOperator(
    task_id='call_foo',
    sql="call public.foo()",
    autocommit=True,
)
```
### Anything else
It looks like this may be an issue with stored procedures specifically. If I instead do this:
```
copy_data_snowflake = SnowflakeOperator(
    task_id='call_foo',
    sql="INSERT INTO PUBLIC.FOO VALUES('bar');",
)
```
The logs show that although autocommit is confusingly set to False, a `COMMIT` statement is executed:
```
[2023-03-09, 18:43:09 CST] {cursor.py:727} INFO - query: [ALTER SESSION SET autocommit=False]
[2023-03-09, 18:43:09 CST] {cursor.py:740} INFO - query execution done
[2023-03-09, 18:43:09 CST] {cursor.py:878} INFO - Number of results in first chunk: 1
[2023-03-09, 18:43:09 CST] {sql.py:375} INFO - Running statement: INSERT INTO PUBLIC.FOO VALUES('bar');, parameters: None
[2023-03-09, 18:43:09 CST] {cursor.py:727} INFO - query: [INSERT INTO PUBLIC.FOO VALUES('bar');]
[2023-03-09, 18:43:09 CST] {cursor.py:740} INFO - query execution done
[2023-03-09, 18:43:09 CST] {sql.py:384} INFO - Rows affected: 1
[2023-03-09, 18:43:09 CST] {snowflake.py:380} INFO - Rows affected: 1
[2023-03-09, 18:43:09 CST] {snowflake.py:381} INFO - Snowflake query id: 01aad76b-0606-feb5-0000-26b511d0ba02
[2023-03-09, 18:43:09 CST] {cursor.py:727} INFO - query: [COMMIT]
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30010 | https://github.com/apache/airflow/pull/30020 | 26c6a1c11bcd463d1923bbd9622cbe0682bc9e8a | b9c231ceb0f3053a27744b80e95f08ac0684fe38 | "2023-03-10T01:05:10Z" | python | "2023-03-10T17:47:46Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,980 | ["airflow/providers/microsoft/azure/hooks/data_lake.py"] | ADLS Gen2 Hook incorrectly forms account URL when using Active Directory authentication method (Azure Data Lake Storage V2) | ### Apache Airflow Provider(s)
microsoft-azure
### Versions of Apache Airflow Providers
apache-airflow-providers-microsoft-azure 5.2.1
### Apache Airflow version
2.5.1
### Operating System
Ubuntu 18.04
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
When attempting to use an Azure Active Directory application to connect through the Azure Data Lake Storage Gen2 hook, the generated account URL sent to the DataLakeServiceClient is incorrect.
It substitutes in the Client ID (`login` field) where the storage account name should be.
### What you think should happen instead
The `host` field on the connection form should be used to store the storage account name and should be used to fill the account URL for both Active Directory and Key-based authentication.
### How to reproduce
1. Create an "Azure Data Lake Storage V2" connection (adls) and put the AAD application Client ID into `login` field, Client secret into `password` field and Tenant ID into `tenant_id` field.
2. Attempt to perform any operations with the `AzureDataLakeStorageV2Hook` hook.
3. Notice how it fails, and that the URL in the logs is incorrectly `https://{client_id}.dfs.core.windows.net/...`, when it should be `https://{storage_account}.dfs.core.windows.net/...`
This can be fixed by:
1. Making your own copy of the hook.
2. Entering the storage account name into the `host` field (currently labelled "Account Name (Active Directory Auth)").
3. Editing the `get_conn` method to substitute `conn.host` into the `account_url` (instead of `conn.login`); a minimal sketch follows below.
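A minimal sketch of step 3 (my own paraphrase of the fix, not the provider's actual code; the exact extra-field key for the tenant id is an assumption based on the connection form described earlier):
```python
from azure.identity import ClientSecretCredential
from azure.storage.filedatalake import DataLakeServiceClient

def get_conn(conn):
    credential = ClientSecretCredential(
        tenant_id=conn.extra_dejson["tenant_id"],  # key name assumed
        client_id=conn.login,
        client_secret=conn.password,
    )
    # Use the storage account name from `host`, not the AAD client id from `login`.
    return DataLakeServiceClient(
        account_url=f"https://{conn.host}.dfs.core.windows.net",
        credential=credential,
    )
```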
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29980 | https://github.com/apache/airflow/pull/29981 | def1f89e702d401f67a94f34a01f6a4806ea92e6 | 008f52444a84ceaa2de7c2166b8f253f55ca8c21 | "2023-03-08T15:42:36Z" | python | "2023-03-10T12:11:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,974 | ["airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | Inconsistent behavior of EmptyOperator between start and end tasks | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We are using Airflow 2.4.3.
When looking at the documentation for the EmptyOperator, it says explicitly that it is never processed by the executor.
However, what I notice is that in our case it differs between start and end EmptyOperators. The start tasks are not processed by the executor, but for some reason the end tasks are.
This results in unexpected behavior and is inefficient as it creates a pod on kubernetes in our case for no reason. Additionally, it causes some weird behavior in our lineage graphs.
For the start task we see no logs:
```
*** Log file does not exist: /opt/airflow/logs/dag_id=dbt-datahub/run_id=scheduled__2023-03-07T00:00:00+00:00/task_id=initial_task_start/attempt=1.log
*** Fetching from: http://:8793/log/dag_id=dbt-datahub/run_id=scheduled__2023-03-07T00:00:00+00:00/task_id=initial_task_start/attempt=1.log
*** Failed to fetch log file from worker. Request URL is missing an 'http://' or 'https://' protocol.
```
```
dbtdatahubend-dc6d51700abc41e0974b46caafd857ac
*** Reading local file: /opt/airflow/logs/dag_id=dbt-datahub/run_id=manual__2023-03-07T16:56:07.937548+00:00/task_id=end/attempt=1.log
[2023-03-07, 16:56:31 UTC] {taskinstance.py:1165} INFO - Dependencies all met for <TaskInstance: dbt-datahub.end manual__2023-03-07T16:56:07.937548+00:00 [queued]>
[2023-03-07, 16:56:31 UTC] {taskinstance.py:1165} INFO - Dependencies all met for <TaskInstance: dbt-datahub.end manual__2023-03-07T16:56:07.937548+00:00 [queued]>
[2023-03-07, 16:56:31 UTC] {taskinstance.py:1362} INFO -
--------------------------------------------------------------------------------
[2023-03-07, 16:56:31 UTC] {taskinstance.py:1363} INFO - Starting attempt 1 of 1
[2023-03-07, 16:56:31 UTC] {taskinstance.py:1364} INFO -
--------------------------------------------------------------------------------
[2023-03-07, 16:56:31 UTC] {taskinstance.py:1383} INFO - Executing <Task(EmptyOperator): end> on 2023-03-07 16:56:07.937548+00:00
[2023-03-07, 16:56:31 UTC] {standard_task_runner.py:55} INFO - Started process 19 to run task
[2023-03-07, 16:56:31 UTC] {standard_task_runner.py:82} INFO - Running: ['airflow', 'tasks', 'run', 'dbt-datahub', 'end', 'manual__2023-03-07T16:56:07.937548+00:00', '--job-id', '24', '--raw', '--subdir', 'DAGS_FOLDER/dbt-datahub/dbt-datahub.py', '--cfg-path', '/tmp/tmpdr42kl3k']
[2023-03-07, 16:56:31 UTC] {standard_task_runner.py:83} INFO - Job 24: Subtask end
[2023-03-07, 16:56:31 UTC] {task_command.py:376} INFO - Running <TaskInstance: dbt-datahub.end manual__2023-03-07T16:56:07.937548+00:00 [running]> on host dbtdatahubend-dc6d51700abc41e0974b46caafd857ac
[2023-03-07, 16:56:31 UTC] {taskinstance.py:1590} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=Conveyor
AIRFLOW_CTX_DAG_ID=dbt-datahub
AIRFLOW_CTX_TASK_ID=end
AIRFLOW_CTX_EXECUTION_DATE=2023-03-07T16:56:07.937548+00:00
AIRFLOW_CTX_TRY_NUMBER=1
AIRFLOW_CTX_DAG_RUN_ID=manual__2023-03-07T16:56:07.937548+00:00
[2023-03-07, 16:56:31 UTC] {taskinstance.py:1401} INFO - Marking task as SUCCESS. dag_id=dbt-datahub, task_id=end, execution_date=20230307T165607, start_date=20230307T165631, end_date=20230307T165631
[2023-03-07, 16:56:31 UTC] {base.py:71} INFO - Using connection ID 'datahub_rest_default' for task execution.
[2023-03-07, 16:56:31 UTC] {base.py:71} INFO - Using connection ID 'datahub_rest_default' for task execution.
[2023-03-07, 16:56:31 UTC] {_plugin.py:147} INFO - Emitting Datahub Dataflow: DataFlow(urn=<datahub.utilities.urns.data_flow_urn.DataFlowUrn object at 0x7fb9ced397c0>, id='dbt-datahub', orchestrator='airflow', cluster='prod', name=None, description='None\n\n', properties={'_access_control': 'None', '_default_view': "'grid'", 'catchup': 'True', 'fileloc': "'/opt/airflow/dags/dbt-datahub/dbt-datahub.py'", 'is_paused_upon_creation': 'None', 'start_date': 'None', 'tags': '[]', 'timezone': "Timezone('UTC')"}, url='https://app.dev.datafy.cloud/environments/datahubtest/airflow/tree?dag_id=dbt-datahub', tags=set(), owners={'Conveyor'})
[2023-03-07, 16:56:31 UTC] {_plugin.py:165} INFO - Emitting Datahub Datajob: DataJob(id='end', urn=<datahub.utilities.urns.data_job_urn.DataJobUrn object at 0x7fb9cecbbfa0>, flow_urn=<datahub.utilities.urns.data_flow_urn.DataFlowUrn object at 0x7fb9cecbf910>, name=None, description=None, properties={'depends_on_past': 'False', 'email': '[]', 'label': "'end'", 'execution_timeout': 'None', 'sla': 'None', 'task_id': "'end'", 'trigger_rule': "<TriggerRule.ALL_SUCCESS: 'all_success'>", 'wait_for_downstream': 'False', 'downstream_task_ids': 'set()', 'inlets': '[]', 'outlets': '[]'}, url='https://app.dev.datafy.cloud/environments/datahubtest/airflow/taskinstance/list/?flt1_dag_id_equals=dbt-datahub&_flt_3_task_id=end', tags=set(), owners={'Conveyor'}, group_owners=set(), inlets=[], outlets=[], upstream_urns=[<datahub.utilities.urns.data_job_urn.DataJobUrn object at 0x7fb9cecbbc10>])
[2023-03-07, 16:56:31 UTC] {_plugin.py:179} INFO - Emitted Start Datahub Dataprocess Instance: DataProcessInstance(id='dbt-datahub_end_manual__2023-03-07T16:56:07.937548+00:00', urn=<datahub.utilities.urns.data_process_instance_urn.DataProcessInstanceUrn object at 0x7fb9cecbb040>, orchestrator='airflow', cluster='prod', type='BATCH_AD_HOC', template_urn=<datahub.utilities.urns.data_job_urn.DataJobUrn object at 0x7fb9cecbbfa0>, parent_instance=None, properties={'run_id': 'manual__2023-03-07T16:56:07.937548+00:00', 'duration': '0.163779', 'start_date': '2023-03-07 16:56:31.157871+00:00', 'end_date': '2023-03-07 16:56:31.321650+00:00', 'execution_date': '2023-03-07 16:56:07.937548+00:00', 'try_number': '1', 'hostname': 'dbtdatahubend-dc6d51700abc41e0974b46caafd857ac', 'max_tries': '0', 'external_executor_id': 'None', 'pid': '19', 'state': 'success', 'operator': 'EmptyOperator', 'priority_weight': '1', 'unixname': 'airflow', 'log_url': 'https://app.dev.datafy.cloud/environments/datahubtest/airflow/log?execution_date=2023-03-07T16%3A56%3A07.937548%2B00%3A00&task_id=end&dag_id=dbt-datahub&map_index=-1'}, url='https://app.dev.datafy.cloud/environments/datahubtest/airflow/log?execution_date=2023-03-07T16%3A56%3A07.937548%2B00%3A00&task_id=end&dag_id=dbt-datahub&map_index=-1', inlets=[], outlets=[], upstream_urns=[])
[2023-03-07, 16:56:31 UTC] {_plugin.py:191} INFO - Emitted Completed Data Process Instance: DataProcessInstance(id='dbt-datahub_end_manual__2023-03-07T16:56:07.937548+00:00', urn=<datahub.utilities.urns.data_process_instance_urn.DataProcessInstanceUrn object at 0x7fb9ced39700>, orchestrator='airflow', cluster='prod', type='BATCH_SCHEDULED', template_urn=<datahub.utilities.urns.data_job_urn.DataJobUrn object at 0x7fb9cecbbfa0>, parent_instance=None, properties={}, url=None, inlets=[], outlets=[], upstream_urns=[])
[2023-03-07, 16:56:31 UTC] {local_task_job.py:159} INFO - Task exited with return code 0
[2023-03-07, 16:56:31 UTC] {taskinstance.py:2623} INFO - 0 downstream tasks scheduled from follow-on schedule check
```
Airflow scheduler logs for the dag:
```
[2023-03-08 13:25:28,870] {scheduler_job.py:346} INFO - 1 tasks up for execution:
<TaskInstance: dbt-datahub3.dbt-run manual__2023-03-08T13:25:26.874182+00:00 [scheduled]>
[2023-03-08 13:25:28,870] {scheduler_job.py:411} INFO - DAG dbt-datahub3 has 0/32 running and queued tasks
[2023-03-08 13:25:28,870] {scheduler_job.py:497} INFO - Setting the following tasks to queued state:
<TaskInstance: dbt-datahub3.dbt-run manual__2023-03-08T13:25:26.874182+00:00 [scheduled]>
[2023-03-08 13:25:28,873] {scheduler_job.py:536} INFO - Sending TaskInstanceKey(dag_id='dbt-datahub3', task_id='dbt-run', run_id='manual__2023-03-08T13:25:26.874182+00:00', try_number=1, map_index=-1) to executor with priority 2 and queue default
[2023-03-08 13:25:28,873] {base_executor.py:95} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'dbt-datahub3', 'dbt-run', 'manual__2023-03-08T13:25:26.874182+00:00', '--local', '--subdir', 'DAGS_FOLDER/dbt-datahub/dbt-datahub.py']
[2023-03-08 13:25:28,875] {kubernetes_executor.py:551} INFO - Add task TaskInstanceKey(dag_id='dbt-datahub3', task_id='dbt-run', run_id='manual__2023-03-08T13:25:26.874182+00:00', try_number=1, map_index=-1) with command ['airflow', 'tasks', 'run', 'dbt-datahub3', 'dbt-run', 'manual__2023-03-08T13:25:26.874182+00:00', '--local', '--subdir', 'DAGS_FOLDER/dbt-datahub/dbt-datahub.py'] with executor_config {}
[2023-03-08 13:25:28,876] {kubernetes_executor.py:305} INFO - Kubernetes job is TaskInstanceKey(dag_id='dbt-datahub3', task_id='dbt-run', run_id='manual__2023-03-08T13:25:26.874182+00:00', try_number=1, map_index=-1)
[2023-03-08 13:25:28,972] {kubernetes_executor.py:150} INFO - Event: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1 had an event of type ADDED
[2023-03-08 13:25:28,972] {kubernetes_executor.py:207} INFO - Event: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1 Pending
[2023-03-08 13:25:28,976] {scheduler_job.py:588} INFO - Executor reports execution of dbt-datahub3.dbt-run run_id=manual__2023-03-08T13:25:26.874182+00:00 exited with status queued for try_number 1
[2023-03-08 13:25:28,981] {kubernetes_executor.py:150} INFO - Event: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1 had an event of type MODIFIED
[2023-03-08 13:25:28,981] {kubernetes_executor.py:207} INFO - Event: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1 Pending
[2023-03-08 13:25:28,985] {scheduler_job.py:621} INFO - Setting external_id for <TaskInstance: dbt-datahub3.dbt-run manual__2023-03-08T13:25:26.874182+00:00 [queued]> to 42
[2023-03-08 13:25:29,002] {kubernetes_executor.py:150} INFO - Event: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1 had an event of type MODIFIED
[2023-03-08 13:25:29,002] {kubernetes_executor.py:207} INFO - Event: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1 Pending
[2023-03-08 13:25:29,707] {kubernetes_executor.py:150} INFO - Event: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1 had an event of type MODIFIED
[2023-03-08 13:25:29,707] {kubernetes_executor.py:207} INFO - Event: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1 Pending
[2023-03-08 13:25:30,721] {kubernetes_executor.py:150} INFO - Event: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1 had an event of type MODIFIED
[2023-03-08 13:25:30,721] {kubernetes_executor.py:207} INFO - Event: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1 Pending
[2023-03-08 13:25:31,721] {kubernetes_executor.py:150} INFO - Event: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1 had an event of type MODIFIED
[2023-03-08 13:25:31,722] {kubernetes_executor.py:219} INFO - Event: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1 is Running
[2023-03-08 13:25:44,671] {scheduler_job.py:346} INFO - 1 tasks up for execution:
<TaskInstance: dbt-datahub3.end_task manual__2023-03-08T13:25:26.874182+00:00 [scheduled]>
[2023-03-08 13:25:44,671] {scheduler_job.py:411} INFO - DAG dbt-datahub3 has 0/32 running and queued tasks
[2023-03-08 13:25:44,671] {scheduler_job.py:497} INFO - Setting the following tasks to queued state:
<TaskInstance: dbt-datahub3.end_task manual__2023-03-08T13:25:26.874182+00:00 [scheduled]>
[2023-03-08 13:25:44,673] {scheduler_job.py:536} INFO - Sending TaskInstanceKey(dag_id='dbt-datahub3', task_id='end_task', run_id='manual__2023-03-08T13:25:26.874182+00:00', try_number=1, map_index=-1) to executor with priority 1 and queue default
[2023-03-08 13:25:44,674] {base_executor.py:95} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'dbt-datahub3', 'end_task', 'manual__2023-03-08T13:25:26.874182+00:00', '--local', '--subdir', 'DAGS_FOLDER/dbt-datahub/dbt-datahub.py']
[2023-03-08 13:25:44,676] {kubernetes_executor.py:551} INFO - Add task TaskInstanceKey(dag_id='dbt-datahub3', task_id='end_task', run_id='manual__2023-03-08T13:25:26.874182+00:00', try_number=1, map_index=-1) with command ['airflow', 'tasks', 'run', 'dbt-datahub3', 'end_task', 'manual__2023-03-08T13:25:26.874182+00:00', '--local', '--subdir', 'DAGS_FOLDER/dbt-datahub/dbt-datahub.py'] with executor_config {}
[2023-03-08 13:25:44,676] {kubernetes_executor.py:305} INFO - Kubernetes job is TaskInstanceKey(dag_id='dbt-datahub3', task_id='end_task', run_id='manual__2023-03-08T13:25:26.874182+00:00', try_number=1, map_index=-1)
[2023-03-08 13:25:44,749] {kubernetes_executor.py:150} INFO - Event: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1 had an event of type MODIFIED
[2023-03-08 13:25:44,749] {kubernetes_executor.py:219} INFO - Event: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1 is Running
[2023-03-08 13:25:44,756] {scheduler_job.py:588} INFO - Executor reports execution of dbt-datahub3.end_task run_id=manual__2023-03-08T13:25:26.874182+00:00 exited with status queued for try_number 1
[2023-03-08 13:25:44,759] {kubernetes_executor.py:150} INFO - Event: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07 had an event of type ADDED
[2023-03-08 13:25:44,759] {kubernetes_executor.py:207} INFO - Event: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07 Pending
[2023-03-08 13:25:44,763] {scheduler_job.py:621} INFO - Setting external_id for <TaskInstance: dbt-datahub3.end_task manual__2023-03-08T13:25:26.874182+00:00 [queued]> to 42
[2023-03-08 13:25:44,765] {kubernetes_executor.py:150} INFO - Event: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07 had an event of type MODIFIED
[2023-03-08 13:25:44,765] {kubernetes_executor.py:207} INFO - Event: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07 Pending
[2023-03-08 13:25:44,774] {kubernetes_executor.py:150} INFO - Event: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07 had an event of type MODIFIED
[2023-03-08 13:25:44,774] {kubernetes_executor.py:207} INFO - Event: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07 Pending
[2023-03-08 13:25:45,748] {kubernetes_executor.py:150} INFO - Event: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07 had an event of type MODIFIED
[2023-03-08 13:25:45,748] {kubernetes_executor.py:207} INFO - Event: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07 Pending
[2023-03-08 13:25:46,763] {kubernetes_executor.py:150} INFO - Event: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07 had an event of type MODIFIED
[2023-03-08 13:25:46,763] {kubernetes_executor.py:207} INFO - Event: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07 Pending
[2023-03-08 13:25:46,775] {kubernetes_executor.py:150} INFO - Event: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1 had an event of type MODIFIED
[2023-03-08 13:25:46,775] {kubernetes_executor.py:212} INFO - Event: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1 Succeeded
[2023-03-08 13:25:46,962] {kubernetes_executor.py:383} INFO - Attempting to finish pod; pod_id: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1; state: None; annotations: {'dag_id': 'dbt-datahub3', 'task_id': 'dbt-run', 'execution_date': None, 'run_id': 'manual__2023-03-08T13:25:26.874182+00:00', 'try_number': '1'}
[2023-03-08 13:25:46,963] {kubernetes_executor.py:598} INFO - Changing state of (TaskInstanceKey(dag_id='dbt-datahub3', task_id='dbt-run', run_id='manual__2023-03-08T13:25:26.874182+00:00', try_number=1, map_index=-1), None, 'dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1', 'datahubtest', '184454484') to None
[2023-03-08 13:25:46,988] {kubernetes_executor.py:150} INFO - Event: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1 had an event of type MODIFIED
[2023-03-08 13:25:46,988] {kubernetes_executor.py:212} INFO - Event: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1 Succeeded
[2023-03-08 13:25:46,997] {kubernetes_executor.py:150} INFO - Event: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1 had an event of type DELETED
[2023-03-08 13:25:46,997] {kubernetes_executor.py:212} INFO - Event: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1 Succeeded
[2023-03-08 13:25:47,001] {kubernetes_executor.py:696} INFO - Deleted pod: TaskInstanceKey(dag_id='dbt-datahub3', task_id='dbt-run', run_id='manual__2023-03-08T13:25:26.874182+00:00', try_number=1, map_index=-1) in namespace datahubtest
[2023-03-08 13:25:47,001] {scheduler_job.py:588} INFO - Executor reports execution of dbt-datahub3.dbt-run run_id=manual__2023-03-08T13:25:26.874182+00:00 exited with status None for try_number 1
[2023-03-08 13:25:47,078] {kubernetes_executor.py:383} INFO - Attempting to finish pod; pod_id: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1; state: None; annotations: {'dag_id': 'dbt-datahub3', 'task_id': 'dbt-run', 'execution_date': None, 'run_id': 'manual__2023-03-08T13:25:26.874182+00:00', 'try_number': '1'}
[2023-03-08 13:25:47,079] {kubernetes_executor.py:383} INFO - Attempting to finish pod; pod_id: dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1; state: None; annotations: {'dag_id': 'dbt-datahub3', 'task_id': 'dbt-run', 'execution_date': None, 'run_id': 'manual__2023-03-08T13:25:26.874182+00:00', 'try_number': '1'}
[2023-03-08 13:25:47,079] {kubernetes_executor.py:598} INFO - Changing state of (TaskInstanceKey(dag_id='dbt-datahub3', task_id='dbt-run', run_id='manual__2023-03-08T13:25:26.874182+00:00', try_number=1, map_index=-1), None, 'dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1', 'datahubtest', '184454492') to None
[2023-03-08 13:25:47,085] {kubernetes_executor.py:696} INFO - Deleted pod: TaskInstanceKey(dag_id='dbt-datahub3', task_id='dbt-run', run_id='manual__2023-03-08T13:25:26.874182+00:00', try_number=1, map_index=-1) in namespace datahubtest
[2023-03-08 13:25:47,085] {kubernetes_executor.py:598} INFO - Changing state of (TaskInstanceKey(dag_id='dbt-datahub3', task_id='dbt-run', run_id='manual__2023-03-08T13:25:26.874182+00:00', try_number=1, map_index=-1), None, 'dbtdatahub3dbtrun-43aa890b165342d09555ed1555b5f7c1', 'datahubtest', '184454493') to None
[2023-03-08 13:25:47,090] {kubernetes_executor.py:696} INFO - Deleted pod: TaskInstanceKey(dag_id='dbt-datahub3', task_id='dbt-run', run_id='manual__2023-03-08T13:25:26.874182+00:00', try_number=1, map_index=-1) in namespace datahubtest
[2023-03-08 13:25:47,090] {scheduler_job.py:588} INFO - Executor reports execution of dbt-datahub3.dbt-run run_id=manual__2023-03-08T13:25:26.874182+00:00 exited with status None for try_number 1
[2023-03-08 13:25:47,757] {kubernetes_executor.py:150} INFO - Event: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07 had an event of type MODIFIED
[2023-03-08 13:25:47,757] {kubernetes_executor.py:219} INFO - Event: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07 is Running
[2023-03-08 13:25:52,768] {kubernetes_executor.py:150} INFO - Event: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07 had an event of type MODIFIED
[2023-03-08 13:25:52,768] {kubernetes_executor.py:219} INFO - Event: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07 is Running
[2023-03-08 13:25:53,077] {dagrun.py:597} INFO - Marking run <DagRun dbt-datahub3 @ 2023-03-08 13:25:26.874182+00:00: manual__2023-03-08T13:25:26.874182+00:00, state:running, queued_at: 2023-03-08 13:25:26.882341+00:00. externally triggered: True> successful
[2023-03-08 13:25:53,078] {dagrun.py:644} INFO - DagRun Finished: dag_id=dbt-datahub3, execution_date=2023-03-08 13:25:26.874182+00:00, run_id=manual__2023-03-08T13:25:26.874182+00:00, run_start_date=2023-03-08 13:25:27.768180+00:00, run_end_date=2023-03-08 13:25:53.078112+00:00, run_duration=25.309932, state=success, external_trigger=True, run_type=manual, data_interval_start=2023-03-07 00:00:00+00:00, data_interval_end=2023-03-08 00:00:00+00:00, dag_hash=2e078fcb467b387d8c788854319f9b3a
[2023-03-08 13:25:53,083] {dag.py:3336} INFO - Setting next_dagrun for dbt-datahub3 to 2023-03-08T00:00:00+00:00, run_after=2023-03-09T00:00:00+00:00
[2023-03-08 13:25:54,777] {kubernetes_executor.py:150} INFO - Event: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07 had an event of type MODIFIED
[2023-03-08 13:25:54,777] {kubernetes_executor.py:212} INFO - Event: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07 Succeeded
[2023-03-08 13:25:54,824] {kubernetes_executor.py:383} INFO - Attempting to finish pod; pod_id: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07; state: None; annotations: {'dag_id': 'dbt-datahub3', 'task_id': 'end_task', 'execution_date': None, 'run_id': 'manual__2023-03-08T13:25:26.874182+00:00', 'try_number': '1'}
[2023-03-08 13:25:54,824] {kubernetes_executor.py:598} INFO - Changing state of (TaskInstanceKey(dag_id='dbt-datahub3', task_id='end_task', run_id='manual__2023-03-08T13:25:26.874182+00:00', try_number=1, map_index=-1), None, 'dbtdatahub3endtask-da871afe935944a8b6f344d991242e07', 'datahubtest', '184454541') to None
[2023-03-08 13:25:54,846] {kubernetes_executor.py:150} INFO - Event: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07 had an event of type MODIFIED
[2023-03-08 13:25:54,846] {kubernetes_executor.py:212} INFO - Event: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07 Succeeded
[2023-03-08 13:25:54,853] {kubernetes_executor.py:696} INFO - Deleted pod: TaskInstanceKey(dag_id='dbt-datahub3', task_id='end_task', run_id='manual__2023-03-08T13:25:26.874182+00:00', try_number=1, map_index=-1) in namespace datahubtest
[2023-03-08 13:25:54,854] {scheduler_job.py:588} INFO - Executor reports execution of dbt-datahub3.end_task run_id=manual__2023-03-08T13:25:26.874182+00:00 exited with status None for try_number 1
[2023-03-08 13:25:54,855] {kubernetes_executor.py:150} INFO - Event: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07 had an event of type DELETED
[2023-03-08 13:25:54,855] {kubernetes_executor.py:212} INFO - Event: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07 Succeeded
[2023-03-08 13:25:54,905] {kubernetes_executor.py:383} INFO - Attempting to finish pod; pod_id: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07; state: None; annotations: {'dag_id': 'dbt-datahub3', 'task_id': 'end_task', 'execution_date': None, 'run_id': 'manual__2023-03-08T13:25:26.874182+00:00', 'try_number': '1'}
[2023-03-08 13:25:54,905] {kubernetes_executor.py:383} INFO - Attempting to finish pod; pod_id: dbtdatahub3endtask-da871afe935944a8b6f344d991242e07; state: None; annotations: {'dag_id': 'dbt-datahub3', 'task_id': 'end_task', 'execution_date': None, 'run_id': 'manual__2023-03-08T13:25:26.874182+00:00', 'try_number': '1'}
[2023-03-08 13:25:54,906] {kubernetes_executor.py:598} INFO - Changing state of (TaskInstanceKey(dag_id='dbt-datahub3', task_id='end_task', run_id='manual__2023-03-08T13:25:26.874182+00:00', try_number=1, map_index=-1), None, 'dbtdatahub3endtask-da871afe935944a8b6f344d991242e07', 'datahubtest', '184454542') to None
[2023-03-08 13:25:54,910] {kubernetes_executor.py:696} INFO - Deleted pod: TaskInstanceKey(dag_id='dbt-datahub3', task_id='end_task', run_id='manual__2023-03-08T13:25:26.874182+00:00', try_number=1, map_index=-1) in namespace datahubtest
[2023-03-08 13:25:54,910] {kubernetes_executor.py:598} INFO - Changing state of (TaskInstanceKey(dag_id='dbt-datahub3', task_id='end_task', run_id='manual__2023-03-08T13:25:26.874182+00:00', try_number=1, map_index=-1), None, 'dbtdatahub3endtask-da871afe935944a8b6f344d991242e07', 'datahubtest', '184454543') to None
[2023-03-08 13:25:54,915] {kubernetes_executor.py:696} INFO - Deleted pod: TaskInstanceKey(dag_id='dbt-datahub3', task_id='end_task', run_id='manual__2023-03-08T13:25:26.874182+00:00', try_number=1, map_index=-1) in namespace datahubtest
[2023-03-08 13:25:54,915] {scheduler_job.py:588} INFO - Executor reports execution of dbt-datahub3.end_task run_id=manual__2023-03-08T13:25:26.874182+00:00 exited with status None for try_number 1
```
Dag code used:
```
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.dummy import DummyOperator
# ConveyorContainerOperatorV2 is a third-party (Conveyor) operator; its import was
# not included in the original report, so it is omitted here as well.

default_args = {
"owner": "someone",
"depends_on_past": False,
"start_date": datetime(year=2023, month=3, day=6),
"email": [],
"email_on_failure": False,
"email_on_retry": False,
"retries": 0,
"retry_delay": timedelta(minutes=5),
}
dag = DAG(
"dbt-datahub3", default_args=default_args, schedule_interval="@daily", max_active_runs=1
)
dummyStart = DummyOperator(
dag=dag,
task_id="start_task",
)
job = ConveyorContainerOperatorV2(
dag=dag,
task_id="dbt-run",
arguments=["build", "--target", "datahubtest"],
)
dummyEnd = DummyOperator(
dag=dag,
task_id="end_task",
)
dummyStart >> job >> dummyEnd
```
### What you think should happen instead
I expect it to be consistent: no matter whether the EmptyOperator is in your dag, the same behavior should be observed (it is never processed by the executor).
### How to reproduce
Create 1 dag containing:
- a start emptyOperator task
- a random task (in our case a simple containerTask)
- an end emptyOperator task
### Operating System
kubernetes
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==6.0.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.0.2
apache-airflow-providers-common-sql==1.3.3
apache-airflow-providers-docker==3.2.0
apache-airflow-providers-elasticsearch==4.2.1
apache-airflow-providers-ftp==3.3.0
apache-airflow-providers-google==8.4.0
apache-airflow-providers-grpc==3.0.0
apache-airflow-providers-hashicorp==3.1.0
apache-airflow-providers-http==4.1.1
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-microsoft-azure==4.3.0
apache-airflow-providers-mysql==3.2.1
apache-airflow-providers-odbc==3.1.2
apache-airflow-providers-opsgenie==3.1.0
apache-airflow-providers-postgres==5.2.2
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-sendgrid==3.0.0
apache-airflow-providers-sftp==4.1.0
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-sqlite==3.3.1
apache-airflow-providers-ssh==3.2.0
### Deployment
Other Docker-based deployment
### Deployment details
/
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29974 | https://github.com/apache/airflow/pull/29979 | 0d3d0e2e746dd07ab04752800f0cb7f860f6ac46 | a15792dd4216a1ae8c83c8c18ab255d2c558636c | "2023-03-08T11:02:38Z" | python | "2023-03-08T22:28:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,967 | ["chart/dockerfiles/pgbouncer-exporter/build_and_push.sh", "chart/dockerfiles/pgbouncer/build_and_push.sh", "chart/newsfragments/30054.significant.rst"] | Build our supporting images for chart in multi-platform versions | ### Body
Our supporting images are currently built for a single platform only, but they could be built as multi-platform images.
The scripts to build those should be updated to support multi-platform builds.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/29967 | https://github.com/apache/airflow/pull/30054 | 5a3be7256b2a848524d3635d7907b6829a583101 | 39cfc67cad56afa3b2434bc8e60bcd0676d41fc1 | "2023-03-08T00:22:45Z" | python | "2023-03-15T22:19:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,960 | ["airflow/providers/amazon/aws/hooks/glue.py", "airflow/providers/amazon/aws/operators/glue.py", "tests/providers/amazon/aws/hooks/test_glue.py"] | GlueJobOperator failing with Invalid type for parameter RoleName after updating provider version. | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon = "7.3.0"
### Apache Airflow version
2.5.1
### Operating System
Debian GNU/Linux
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
After updating the provider version to 7.3.0 from 6.0.0, our glue jobs started failing. We currently use the GlueJobOperator to run existing Glue jobs that we manage in Terraform. The full traceback is below:
```
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/amazon/aws/operators/glue.py", line 150, in execute
glue_job_run = glue_job.initialize_job(self.script_args, self.run_job_kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/amazon/aws/hooks/glue.py", line 165, in initialize_job
job_name = self.create_or_update_glue_job()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/amazon/aws/hooks/glue.py", line 325, in create_or_update_glue_job
config = self.create_glue_job_config()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/amazon/aws/hooks/glue.py", line 108, in create_glue_job_config
execution_role = self.get_iam_execution_role()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/amazon/aws/hooks/glue.py", line 143, in get_iam_execution_role
glue_execution_role = iam_client.get_role(RoleName=self.role_name)
File "/home/airflow/.local/lib/python3.9/site-packages/botocore/client.py", line 530, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/botocore/client.py", line 919, in _make_api_call
request_dict = self._convert_to_request_dict(
File "/home/airflow/.local/lib/python3.9/site-packages/botocore/client.py", line 990, in _convert_to_request_dict
request_dict = self._serializer.serialize_to_request(
File "/home/airflow/.local/lib/python3.9/site-packages/botocore/validate.py", line 381, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
botocore.exceptions.ParamValidationError: Parameter validation failed:
Invalid type for parameter RoleName, value: None, type: <class 'NoneType'>, valid types: <class 'str'>
```
### What you think should happen instead
The operator creates a new job run for a glue job without additional configuration.
### How to reproduce
Create a DAG with a GlueJobOperator without using `iam_role_name`. Example:
```python
task = GlueJobOperator(task_id="glue-task", job_name="<glue-job-name>")
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29960 | https://github.com/apache/airflow/pull/30162 | fe727f985b1053b838433b817458517c0c0f2480 | 46d9a0c294ea72574a79f0fb567eb9dc97cf96c1 | "2023-03-07T16:44:40Z" | python | "2023-03-21T20:50:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,959 | ["airflow/jobs/local_task_job_runner.py", "airflow/jobs/scheduler_job_runner.py", "airflow/models/dagrun.py", "airflow/models/taskinstance.py", "airflow/serialization/pydantic/job.py"] | expand dynamic mapped tasks in batches | ### Description
Expand tasks in batches, to allow mapped tasks to spawn more than 1024 processes.
### Use case/motivation
Maximum length of a list is limited to 1024 by `max_map_length (AIRFLOW__CORE__MAX_MAP_LENGTH)`.
During scheduling of the new tasks, an UPDATE query is run that tries to set all the new tasks at once. Increasing `max_map_length` beyond roughly 4K makes the Airflow scheduler completely unresponsive.
Also, Postgres throws a `stack depth limit exceeded` error, which can be worked around by updating to a newer version and setting `max_stack_depth` higher; but that doesn't really matter, because the Airflow scheduler freezes up anyway.
As a workaround, I split the DAG runs into sub-DAG runs, which works, but it would be much nicer if we didn't have to worry about exceeding `max_map_length` (a rough sketch of the batching idea follows below).
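Roughly the kind of batching I mean — this is my own illustration, not an existing Airflow feature, assuming Airflow 2.4+ and a chunk size matching the default `max_map_length`:
```python
from datetime import datetime

from airflow.decorators import dag, task

BATCH_SIZE = 1024  # assumed to line up with the default max_map_length


@dag(start_date=datetime(2023, 1, 1), schedule=None, catchup=False)
def batched_mapping():
    @task
    def list_items():
        # stand-in for the real workload that produces far more than 1024 items
        return list(range(5000))

    @task
    def chunk(items):
        # split the full list into chunks no larger than max_map_length
        return [items[i : i + BATCH_SIZE] for i in range(0, len(items), BATCH_SIZE)]

    @task
    def process(batch):
        # each mapped task instance now handles a whole batch instead of a single item
        return len(batch)

    process.expand(batch=chunk(list_items()))


batched_mapping()
```
This keeps each individual `expand()` call under the limit, at the cost of coarser-grained parallelism.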
### Related issues
It was discussed here:
[Increasing 'max_map_length' leads to SQL 'max_stack_depth' error with 5000 dags to be spawned #28478](https://github.com/apache/airflow/discussions/28478)
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29959 | https://github.com/apache/airflow/pull/30372 | 5f2628d36cb8481ee21bd79ac184fd8fdce3e47d | ed39b6fab7a241e2bddc49044c272c5f225d6692 | "2023-03-07T16:12:04Z" | python | "2023-04-22T19:10:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,958 | ["airflow/providers/google/cloud/transfers/bigquery_to_gcs.py", "airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"] | GCSToBigQueryOperator does not respect the destination project ID | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==8.10.0
### Apache Airflow version
2.3.4
### Operating System
Ubuntu 18.04.6 LTS
### Deployment
Google Cloud Composer
### Deployment details
Google Cloud Composer 2.1.2
### What happened
[`GCSToBigQueryOperator`](https://github.com/apache/airflow/blob/3374fdfcbddb630b4fc70ceedd5aed673e6c0a0d/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py#L58) does not respect the BigQuery project ID specified in [`destination_project_dataset_table`](https://github.com/apache/airflow/blob/3374fdfcbddb630b4fc70ceedd5aed673e6c0a0d/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py#L74-L77) argument. Instead, it prioritizes the project ID defined in the [Airflow connection](https://i.imgur.com/1tTIlQF.png).
### What you think should happen instead
The project ID specified via [`destination_project_dataset_table`](https://github.com/apache/airflow/blob/3374fdfcbddb630b4fc70ceedd5aed673e6c0a0d/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py#L74-L77) should be respected.
**Use case:** Suppose our Composer environment and service account (SA) live in `project-A`, and we want to transfer data into foreign projects `B`, `C`, and `D`. We don't have credentials (and thus don't have Airflow connections defined) for projects `B`, `C`, and `D`. Instead, all transfers are executed by our singular SA in `project-A`. (Assume this SA has cross-project IAM policies). Thus, we want to use a _single_ SA and _single_ [Airflow connection](https://i.imgur.com/1tTIlQF.png) (i.e. `gcp_conn_id=google_cloud_default`) to send data into 3+ destination projects. I imagine this is a fairly common setup for sending data across GCP projects.
**Root cause:** I've been studying the source code, and I believe the bug is caused by [line 309](https://github.com/apache/airflow/blob/3374fdfcbddb630b4fc70ceedd5aed673e6c0a0d/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py#L309). Experimentally, I have verified that `hook.project_id` traces back to the [Airflow connection's project ID](https://i.imgur.com/1tTIlQF.png). If no destination project ID is explicitly specified, then it makes sense to _fall back_ on the connection's project. However, if the destination project is explicitly provided, surely the operator should honor that. I think this bug can be fixed by amending line 309 as follows:
```python
project=passed_in_project or hook.project_id
```
This pattern is used successfully in many other areas of the repo: [example](https://github.com/apache/airflow/blob/3374fdfcbddb630b4fc70ceedd5aed673e6c0a0d/airflow/providers/google/cloud/operators/gcs.py#L154).
### How to reproduce
Admittedly, this bug is difficult to reproduce, because it requires two GCP projects, i.e. a service account in `project-A`, and inbound GCS files and a destination BigQuery table in `project-B`. Also, you need an Airflow server with a `google_cloud_default` connection that points to `project-A` like [this](https://i.imgur.com/1tTIlQF.png). Assuming all that exists, the bug can be reproduced via the following Airflow DAG:
```python
from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from datetime import datetime
GCS_BUCKET='my_bucket'
GCS_PREFIX='path/to/*.json'
BQ_PROJECT='project-B'
BQ_DATASET='my_dataset'
BQ_TABLE='my_table'
SERVICE_ACCOUNT='[email protected]'
with DAG(
dag_id='my_dag',
start_date=datetime(2023, 1, 1),
schedule_interval=None,
) as dag:
task = GCSToBigQueryOperator(
task_id='gcs_to_bigquery',
bucket=GCS_BUCKET,
source_objects=GCS_PREFIX,
source_format='NEWLINE_DELIMITED_JSON',
destination_project_dataset_table='{}.{}.{}'.format(BQ_PROJECT, BQ_DATASET, BQ_TABLE),
impersonation_chain=SERVICE_ACCOUNT,
)
```
Stack trace:
```
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/executors/debug_executor.py", line 79, in _run_task
ti.run(job_id=ti.job_id, **params)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1797, in run
self._run_raw_task(
File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1464, in _run_raw_task
self._execute_task_with_callbacks(context, test_mode)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1612, in _execute_task_with_callbacks
result = self._execute_task(context, task_orig)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1673, in _execute_task
result = execute_callable(context=context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", line 387, in execute
job = self._submit_job(self.hook, job_id)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", line 307, in _submit_job
return hook.insert_job(
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/common/hooks/base_google.py", line 468, in inner_wrapper
return func(self, *args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/bigquery.py", line 1549, in insert_job
job._begin()
File "/opt/python3.8/lib/python3.8/site-packages/google/cloud/bigquery/job/base.py", line 510, in _begin
api_response = client._call_api(
File "/opt/python3.8/lib/python3.8/site-packages/google/cloud/bigquery/client.py", line 782, in _call_api
return call()
File "/opt/python3.8/lib/python3.8/site-packages/google/api_core/retry.py", line 283, in retry_wrapped_func
return retry_target(
File "/opt/python3.8/lib/python3.8/site-packages/google/api_core/retry.py", line 190, in retry_target
return target()
File "/opt/python3.8/lib/python3.8/site-packages/google/cloud/_http/__init__.py", line 494, in api_request
raise exceptions.from_http_response(response)
google.api_core.exceptions.Forbidden: 403 POST https://bigquery.googleapis.com/bigquery/v2/projects/{project-A}/jobs?prettyPrint=false: Access Denied: Project {project-A}: User does not have bigquery.jobs.create permission in project {project-A}.
```
From the stack trace, notice the operator is (incorrectly) attempting to insert into `project-A` rather than `project-B`.
### Anything else
Perhaps out-of-scope, but the inverse direction also suffers from this same problem, i.e. [BigQueryToGcsOperator](https://github.com/apache/airflow/blob/3374fdfcbddb630b4fc70ceedd5aed673e6c0a0d/airflow/providers/google/cloud/transfers/bigquery_to_gcs.py#L38) and [line 192](https://github.com/apache/airflow/blob/3374fdfcbddb630b4fc70ceedd5aed673e6c0a0d/airflow/providers/google/cloud/transfers/bigquery_to_gcs.py#L192).
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29958 | https://github.com/apache/airflow/pull/30053 | 732fcd789ddecd5251d391a8d9b72f130bafb046 | af4627fec988995537de7fa172875497608ef710 | "2023-03-07T16:07:36Z" | python | "2023-03-20T08:34:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,957 | ["chart/templates/scheduler/scheduler-deployment.yaml", "chart/templates/webserver/webserver-deployment.yaml", "chart/values.schema.json", "chart/values.yaml", "tests/charts/test_scheduler.py", "tests/charts/test_webserver.py"] | hostAliases for scheduler and webserver | ### Description
I am not sure why this PR was not merged (https://github.com/apache/airflow/pull/23558), but I think it would be great to add hostAliases not just to the workers, but also to the scheduler and webserver.
### Use case/motivation
Being able to modify /etc/hosts in the webserver and scheduler.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29957 | https://github.com/apache/airflow/pull/30051 | 5c15b23023be59a87355c41ab23a46315cca21a5 | f07d300c4c78fa1b2becb4653db8d25b011ea273 | "2023-03-07T15:25:15Z" | python | "2023-03-12T14:22:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,939 | ["airflow/providers/amazon/aws/links/emr.py", "airflow/providers/amazon/aws/operators/emr.py", "airflow/providers/amazon/aws/sensors/emr.py", "tests/providers/amazon/aws/operators/test_emr_add_steps.py", "tests/providers/amazon/aws/operators/test_emr_create_job_flow.py", "tests/providers/amazon/aws/operators/test_emr_modify_cluster.py", "tests/providers/amazon/aws/operators/test_emr_terminate_job_flow.py", "tests/providers/amazon/aws/sensors/test_emr_job_flow.py", "tests/providers/amazon/aws/sensors/test_emr_step.py"] | AWS EMR Operators: Add Log URI in task logs to speed up debugging | ### Description
Airflow is widely used to launch, submit, and interact with jobs on AWS EMR clusters. The existing EMR operators do not provide links to the EMR logs (job flow / step logs), so in case of failure users need to switch to the EMR console or the AWS S3 console and locate the logs for EMR jobs and steps using the job_flow_id available in the EMR operators and in XCom.
It would be really convenient, and would help with debugging, if the EMR log links were present in the operator task logs; that would obviate the need to switch from Airflow to the AWS S3 or EMR consoles and look the logs up by job_flow_id. It would be a nice improvement for the developer experience.
LogUri for Cluster is available in [DescribeCluster](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/emr/client/describe_cluster.html)
LogFile path for Steps in case of failure is available in [ListSteps](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/emr/client/list_steps.html)
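For illustration, a rough boto3 sketch of where that information lives (the cluster id is a made-up placeholder):
```python
import boto3

emr = boto3.client("emr")
cluster_id = "j-XXXXXXXXXXXXX"  # placeholder cluster id

# Cluster-level log location from DescribeCluster
log_uri = emr.describe_cluster(ClusterId=cluster_id)["Cluster"].get("LogUri")
print("Cluster LogUri:", log_uri)

# Step-level failure log file from ListSteps (FailureDetails is present only for failed steps)
for step in emr.list_steps(ClusterId=cluster_id)["Steps"]:
    failure = step.get("Status", {}).get("FailureDetails", {})
    print(step["Name"], failure.get("LogFile"))
```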
### Use case/motivation
Ability to go to EMR logs directly from Airflow EMR Task logs.
### Related issues
N/A
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29939 | https://github.com/apache/airflow/pull/31032 | 6c92efbe8b99e172fe3b585114e1924c0bb2f26b | 2d5166f9829835bdfd6479aa789c8a27147288d6 | "2023-03-06T18:03:55Z" | python | "2023-05-03T23:18:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,912 | ["airflow/providers/google/cloud/transfers/bigquery_to_gcs.py", "tests/providers/google/cloud/transfers/test_bigquery_to_gcs.py"] | BigQueryToGCSOperator does not wait for completion | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==7.0.0
### Apache Airflow version
2.3.2
### Operating System
Debian GNU/Linux
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
[Deferrable mode for BigQueryToGCSOperator #27683](https://github.com/apache/airflow/pull/27683) changed the functionality of the `BigQueryToGCSOperator` so that it no longer waits for the completion of the operation. This is because the `nowait=True` parameter is now [being set](https://github.com/apache/airflow/pull/27683/files#diff-23c5b2e773487f9c28b75b511dbf7269eda1366f16dec84a349d95fa033ffb3eR191).
### What you think should happen instead
This is unexpected behavior. Any downstream tasks of the `BigQueryToGCSOperator` that expect the CSVs to have been written by the time they are called may result in errors (and have done so in our own operations).
The property should at least be configurable.
### How to reproduce
1. Leverage the `BigQueryToGcsOperator` in your DAG.
2. Have it write a large table to a CSV somewhere in GCS
3. Notice that the task completes almost immediately, but the CSVs may not exist in GCS until later (a minimal reproduction sketch follows below).
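A minimal reproduction sketch — the table, bucket, and DAG names are hypothetical:
```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.bigquery_to_gcs import BigQueryToGCSOperator

with DAG(
    dag_id="bq_export_example",
    start_date=datetime(2023, 1, 1),
    schedule_interval=None,
    catchup=False,
):
    export = BigQueryToGCSOperator(
        task_id="export_large_table",
        source_project_dataset_table="my-project.my_dataset.large_table",
        destination_cloud_storage_uris=["gs://my-bucket/exports/large_table-*.csv"],
        export_format="CSV",
    )
    # With provider 7.0.0 this task returns almost immediately, while the extract job
    # may still be running, so a downstream task that reads the CSVs can fail.
```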
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29912 | https://github.com/apache/airflow/pull/29925 | 30b2e6c185305a56f9fd43683f1176f01fe4e3f6 | 464ab1b7caa78637975008fcbb049d5b52a8b005 | "2023-03-03T23:29:15Z" | python | "2023-03-05T10:40:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,903 | ["airflow/models/baseoperator.py", "tests/models/test_mappedoperator.py"] | Task-level retries overrides from the DAG-level default args are not respected when using `partial` | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
When running a DAG that is structured like:
```
@dag(dag_id="my_dag", default_args={"retries": 0})
def dag():
    op = MyOperator.partial(task_id="my_task", retries=3).expand(...)
```
The following test fails:
```
def test_retries(self) -> None:
dag_bag = DagBag(dag_folder=DAG_FOLDER, include_examples=False)
dag = dag_bag.dags["my_dag"]
for task in dag.tasks:
if "my_task" in task.task_id:
self.assertEqual(3, task.retries) # fails - this is 0
```
When printing out `task.partial_kwargs` and looking at how the default args and the partial args are merged, it seems the default args always take precedence. Even though the `partial` global function does set `retries` later on with the task-level parameter value, that value doesn't seem to be respected.
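For completeness, a fuller self-contained reproduction sketch, using `BashOperator` in place of the placeholder `MyOperator`:
```python
from datetime import datetime

from airflow.decorators import dag
from airflow.operators.bash import BashOperator


@dag(dag_id="my_dag", start_date=datetime(2023, 1, 1), schedule=None, default_args={"retries": 0})
def my_dag():
    # retries=3 on the task should win over the DAG-level default of 0
    BashOperator.partial(task_id="my_task", retries=3).expand(bash_command=["echo 1", "echo 2"])


my_dag()
```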
### What you think should happen instead
_No response_
### How to reproduce
If you run my above unit test for a test DAG, on version 2.4.3, it should show up as a test failure.
### Operating System
OS Ventura
### Versions of Apache Airflow Providers
_No response_
### Deployment
Google Cloud Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29903 | https://github.com/apache/airflow/pull/29913 | 57c09e59ee9273ff64cd4a85b020a4df9b1d9eca | f01051a75e217d5f20394b8c890425915383101f | "2023-03-03T19:22:23Z" | python | "2023-04-14T12:16:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,900 | ["airflow/models/dag.py", "airflow/timetables/base.py", "airflow/timetables/simple.py", "docs/apache-airflow/core-concepts/dag-run.rst", "tests/models/test_dag.py", "tests/timetables/test_continuous_timetable.py"] | Add continues scheduling option | ### Body
There are some use cases where users want to trigger a new DAG run as soon as the previous one has finished. This is a request I've seen several times with some variations (for example in this [Stackoverflow question](https://stackoverflow.com/q/75623153/14624409)), but the basic request is the same.
The workaround users apply to get such functionality is to place a `TriggerDagRunOperator` as the last task of their DAG, invoking the same DAG:
```
from datetime import datetime
from airflow import DAG
from airflow.operators.empty import EmptyOperator
from airflow.operators.trigger_dagrun import TriggerDagRunOperator
with DAG(
dag_id="example",
start_date=datetime(2023, 1, 1,),
catchup=False,
schedule=None,
) as dag:
task = EmptyOperator(task_id="first")
trigger = TriggerDagRunOperator(
task_id="trigger",
trigger_dag_id="example",
)
task >> trigger
```
As you can see this works nicely:
![Screenshot 2023-03-03 at 14 20 51](https://user-images.githubusercontent.com/45845474/222718862-dfde8a41-24b6-4991-b318-b7f9784514f6.png)
My suggestion is to add first class support for this use case, so the above example will be changed to:
```
from datetime import datetime
from airflow import DAG
from airflow.operators.empty import EmptyOperator
from airflow.operators.trigger_dagrun import TriggerDagRunOperator
with DAG(
dag_id="example",
start_date=datetime(2023, 1, 1,),
catchup=False,
schedule="@continues",
) as dag:
task = EmptyOperator(task_id="first")
```
I guess it won't be exactly `"@continues"` but more likely a new [ScheduleArg](https://github.com/apache/airflow/blob/8b8552f5c4111fe0732067d7af06aa5285498a79/airflow/models/dag.py#L127) type, but I show it like that just to keep the idea simple.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/29900 | https://github.com/apache/airflow/pull/29909 | 70680ded7a4056882008b019f5d1a8f559a301cd | c1aa4b9500f417e6669a79fbf59c11ae6e6993a2 | "2023-03-03T12:30:33Z" | python | "2023-03-16T19:08:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,875 | ["airflow/cli/cli_parser.py", "airflow/cli/commands/connection_command.py", "docs/apache-airflow/howto/connection.rst", "tests/cli/commands/test_connection_command.py"] | Airflow Connection Testing Using Airflow CLI | ### Description
Connection testing using the Airflow CLI would be very useful: users could quickly test a connection from the command line in their applications. It would help CLI users create and test new connections right from the instance and reduce the time spent troubleshooting connection issues.
### Use case/motivation
Test an Airflow connection using the Airflow CLI, similar to the connection-test functionality Airflow already provides.
Example: `airflow connection test "hello_id"`
### Related issues
N/A
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29875 | https://github.com/apache/airflow/pull/29892 | a3d59c8c759582c27f5a234ffd4c33a9daeb22a9 | d2e5b097e6251e31fb4c9bb5bf16dc9c77b56f75 | "2023-03-02T14:13:55Z" | python | "2023-03-09T09:26:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,858 | ["airflow/www/package.json", "airflow/www/static/js/api/index.ts", "airflow/www/static/js/api/useDag.ts", "airflow/www/static/js/api/useDagCode.ts", "airflow/www/static/js/dag/details/dagCode/CodeBlock.tsx", "airflow/www/static/js/dag/details/dagCode/index.tsx", "airflow/www/static/js/dag/details/index.tsx", "airflow/www/templates/airflow/dag.html", "airflow/www/yarn.lock"] | Migrate DAG Code page to Grid Details | - [ ] Use REST API to render DAG Code in the grid view as a tab when a user has no runs/tasks selected
- [ ] Redirect all urls to new code
- [ ] delete the old code view | https://github.com/apache/airflow/issues/29858 | https://github.com/apache/airflow/pull/31113 | 3363004450355582712272924fac551dc1f7bd56 | 4beb89965c4ee05498734aa86af2df7ee27e9a51 | "2023-03-02T00:38:49Z" | python | "2023-05-17T16:27:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,843 | ["airflow/models/taskinstance.py", "tests/www/views/test_views.py"] | The "Try Number" filter under task instances search is comparing integer with non-integer object | ### Apache Airflow version
2.5.1
### What happened
The `Try Number` filter is comparing the given integer with an instance of a "property" object
* screenshots
![2023-03-01_11-30](https://user-images.githubusercontent.com/14293802/222210209-fc17c634-4005-4f3d-bee1-30ed23403e71.png)
![2023-03-01_11-31](https://user-images.githubusercontent.com/14293802/222210227-53ef42b7-0b43-4ee1-ad76-cf31b504b4a3.png)
* text version
```
Something bad has happened.
Airflow is used by many users, and it is very likely that others had similar problems and you can easily find
a solution to your problem.
Consider following these steps:
* gather the relevant information (detailed logs with errors, reproduction steps, details of your deployment)
* find similar issues using:
* [GitHub Discussions](https://github.com/apache/airflow/discussions)
* [GitHub Issues](https://github.com/apache/airflow/issues)
* [Stack Overflow](https://stackoverflow.com/questions/tagged/airflow)
* the usual search engine you use on a daily basis
* if you run Airflow on a Managed Service, consider opening an issue using the service support channels
* if you tried and have difficulty with diagnosing and fixing the problem yourself, consider creating a [bug report](https://github.com/apache/airflow/issues/new/choose).
Make sure however, to include all relevant details and results of your investigation so far.
Python version: 3.8.16
Airflow version: 2.5.1
Node: kip-airflow-8b665fdd7-lcg6q
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/security/decorators.py", line 133, in wraps
return f(self, *args, **kwargs)
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/views.py", line 554, in list
widgets = self._list()
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/baseviews.py", line 1164, in _list
widgets = self._get_list_widget(
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/baseviews.py", line 1063, in _get_list_widget
count, lst = self.datamodel.query(
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/models/sqla/interface.py", line 461, in query
count = self.query_count(query, filters, select_columns)
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/models/sqla/interface.py", line 382, in query_count
return self._apply_inner_all(
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/models/sqla/interface.py", line 368, in _apply_inner_all
query = self.apply_filters(query, inner_filters)
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/models/sqla/interface.py", line 223, in apply_filters
return filters.apply_all(query)
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/models/filters.py", line 300, in apply_all
query = flt.apply(query, value)
File "/usr/local/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/models/sqla/filters.py", line 169, in apply
return query.filter(field > value)
TypeError: '>' not supported between instances of 'property' and 'int'
```
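For illustration, a stand-alone sketch of the failure mode — `TaskInstanceLike` is a made-up stand-in, not Airflow's model; my assumption is that the FAB filter ends up comparing the class-level `property` object with the integer typed into the search form:
```python
class TaskInstanceLike:
    @property
    def try_number(self) -> int:
        return 1


# Accessed on the class (not an instance), try_number is a plain ``property`` object,
# so comparing it with an int raises the same TypeError seen in the traceback.
TaskInstanceLike.try_number > 3  # TypeError: '>' not supported between instances of 'property' and 'int'
```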
### What you think should happen instead
The "Try Number" search should compare integer with integer
### How to reproduce
1. Go to "Browse" -> "Task Instances"
2. "Search" -> "Add Filter" -> choose "Dag Id" and "Try Number"
3. Choose "Greater than" in the drop-down and enter an integer
4. Click "Search"
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29843 | https://github.com/apache/airflow/pull/29850 | 00a2c793c7985f8165c2bef9106fc81ee66e07bb | a3c9902bc606f0c067a45f09e9d3d152058918e9 | "2023-03-01T17:45:26Z" | python | "2023-03-10T12:01:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,841 | ["setup.cfg"] | high memory leak, cannot start even webserver | ### Apache Airflow version
2.5.1
### What happened
I'd used airflow 2.3.1 and everything was fine.
Then I decided to move to airflow 2.5.1.
I can't even start the webserver: Airflow consumes my laptop's entire memory (32 GB) and the OOM killer kills it.
I investigated a bit. The problem starts with Airflow 2.3.4, only with the official Docker image (apache/airflow:2.3.4), and only on my Linux laptop; macOS is fine.
The memory leak starts when the source code tries to import, for example, the `airflow.cli.commands.webserver_command` module using `airflow.utils.module_loading.import_string`.
I dug deeper and found that it happens when "import daemon" is performed.
You can reproduce it with this command: `docker run --rm --entrypoint="" apache/airflow:2.3.4 /bin/bash -c "python -c 'import daemon'"`. Once again, it is reproduced only on Linux (my kernel is 6.1.12).
That's weird considering `daemon` hasn't been changed since 2018.
### What you think should happen instead
_No response_
### How to reproduce
docker run --rm --entrypoint="" apache/airflow:2.3.4 /bin/bash -c "python -c 'import daemon'"
### Operating System
Arch Linux (kernel 6.1.12)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29841 | https://github.com/apache/airflow/pull/29916 | 864ff2e3ce185dfa3df0509a4bd3c6b5169e907f | c8cc49af2d011f048ebea8a6559ddd5fca00f378 | "2023-03-01T15:36:01Z" | python | "2023-03-04T15:27:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,839 | ["airflow/api_connexion/endpoints/dag_run_endpoint.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py"] | Calling endpoint dags/{dag_id}/dagRuns for removed DAG returns "500 Internal Server Error" instead of "404 Not Found" | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Apache Airflow version: 2.4.0
I removed the DAG from storage and then triggered it:
curl -X POST 'http://localhost:8080/api/dags/<DAG_ID>/dag_runs' --header 'Content-Type: application/json' --data '{"dag_run_id":"my_id"}'
it returns:
```
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/home/airflow/.local/lib/python3.8/site-packages/connexion/decorators/decorator.py", line 68, in wrapper
response = function(request)
File "/home/airflow/.local/lib/python3.8/site-packages/connexion/decorators/uri_parsing.py", line 149, in wrapper
response = function(request)
File "/home/airflow/.local/lib/python3.8/site-packages/connexion/decorators/validation.py", line 196, in wrapper
response = function(request)
File "/home/airflow/.local/lib/python3.8/site-packages/connexion/decorators/validation.py", line 399, in wrapper
return function(request)
File "/home/airflow/.local/lib/python3.8/site-packages/connexion/decorators/response.py", line 112, in wrapper
response = function(request)
File "/home/airflow/.local/lib/python3.8/site-packages/connexion/decorators/parameter.py", line 120, in wrapper
return function(**kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/api_connexion/security.py", line 51, in decorated
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/api_connexion/endpoints/dag_run_endpoint.py", line 310, in post_dag_run
dag_run = dag.create_dagrun(
AttributeError: 'NoneType' object has no attribute 'create_dagrun'
```
### What you think should happen instead
It should respond with 404 ("A specified resource is not found").
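For illustration, a hedged sketch of the kind of guard that would turn this into a 404 — the helper names here are assumptions, not the actual endpoint code:
```python
from airflow.api_connexion.exceptions import NotFound


def post_dag_run(dag_id: str, dag_bag) -> None:
    """Guard sketch: reject runs for DAGs that no longer exist instead of crashing."""
    dag = dag_bag.get_dag(dag_id)  # returns None when the DAG file was removed
    if dag is None:
        raise NotFound(title="DAG not found", detail=f"DAG with dag_id: '{dag_id}' not found")
    # ...only now continue with dag.create_dagrun(...) as the endpoint does today
```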
### How to reproduce
- remove existing DAG file from storage
- create a new DAG run using API endpoint /api/dags/<DAG_ID>/dag_runs for that deleted DAG
### Operating System
18.04.1 Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29839 | https://github.com/apache/airflow/pull/29860 | fcd3c0149f17b364dfb94c0523d23e3145976bbe | 751a995df55419068f11ebabe483dba3302916ed | "2023-03-01T13:51:58Z" | python | "2023-03-03T14:40:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,836 | ["airflow/www/forms.py", "airflow/www/validators.py", "tests/www/test_validators.py", "tests/www/views/test_views_connection.py"] | Restrict allowed characters in connection ids | ### Description
I bumped into a bug where a connection id was suffixed with a whitespace e.g. "myconn ". When referencing the connection id "myconn" (without whitespace), you get a connection not found error.
To avoid such human errors, I suggest restricting the characters allowed for connection ids.
Some suggestions:
- There's an `airflow.utils.helpers.validate_key` function for validating the DAG id. It is probably a good idea to reuse it here (see the sketch below).
- I believe variable ids are also not validated, would be good to check those too.
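A minimal sketch of what such validation could look like if it were applied when saving a connection; the allowed character set and the function name are assumptions, not existing Airflow code:
```python
import re

from airflow.exceptions import AirflowException

# Assumed allowed charset: word characters, dots and dashes, with no surrounding whitespace.
CONN_ID_PATTERN = re.compile(r"^[\w.-]+$")


def validate_conn_id(conn_id: str) -> None:
    """Reject connection ids containing whitespace or other surprising characters."""
    if conn_id != conn_id.strip() or not CONN_ID_PATTERN.match(conn_id):
        raise AirflowException(f"Invalid connection id: {conn_id!r}")
```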
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29836 | https://github.com/apache/airflow/pull/31140 | 85482e86f5f93015487938acfb0cca368059e7e3 | 5cb8ef80a0bd84651fb660c552563766d8ec0ea1 | "2023-03-01T11:58:40Z" | python | "2023-05-12T10:25:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,819 | ["airflow/serialization/serialized_objects.py", "tests/serialization/test_dag_serialization.py"] | DAG fails serialization if template_field contains execution_timeout | ### Apache Airflow version
2.5.1
### What happened
If an Operator specifies a template_field with `execution_timeout` then the DAG will serialize correctly but throw an error during deserialization. This causes the entire scheduler to crash and breaks the application.
### What you think should happen instead
The scheduler should never go down because of code in a user's DAG; this should probably raise an error during serialization instead.
### How to reproduce
Define an operator like this
```
class ExecutionTimeoutOperator(BaseOperator):
template_fields = ("execution_timeout", )
def __init__(self, execution_timeout: timedelta, **kwargs):
super().__init__(**kwargs)
self.execution_timeout = execution_timeout
```
then make a dag like this
```
dag = DAG(
"serialize_with_default",
schedule_interval="0 12 * * *",
start_date=datetime(2023, 2, 28),
catchup=False,
default_args={
"execution_timeout": timedelta(days=4),
},
)
with dag:
execution = ExecutionTimeoutOperator(task_id="execution", execution_timeout=timedelta(hours=1))
```
That will break the scheduler; you can force the stack trace by doing this:
```
from airflow.models import DagBag
db = DagBag('dags/', read_dags_from_db=True)
db.get_dag('serialize_with_default')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.9/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 190, in get_dag
self._add_dag_from_db(dag_id=dag_id, session=session)
File "/usr/local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 265, in _add_dag_from_db
dag = row.dag
File "/usr/local/lib/python3.9/site-packages/airflow/models/serialized_dag.py", line 218, in dag
dag = SerializedDAG.from_dict(self.data)
File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 1287, in from_dict
return cls.deserialize_dag(serialized_obj["dag"])
File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 1194, in deserialize_dag
v = {task["task_id"]: SerializedBaseOperator.deserialize_operator(task) for task in v}
File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 1194, in <dictcomp>
v = {task["task_id"]: SerializedBaseOperator.deserialize_operator(task) for task in v}
File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 955, in deserialize_operator
cls.populate_operator(op, encoded_op)
File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 864, in populate_operator
v = cls._deserialize_timedelta(v)
File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 513, in _deserialize_timedelta
return datetime.timedelta(seconds=seconds)
TypeError: unsupported type for timedelta seconds component: str
```
### Operating System
Mac 13.1 (22C65)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==5.1.0
apache-airflow-providers-apache-hdfs==3.2.0
apache-airflow-providers-apache-hive==5.1.1
apache-airflow-providers-apache-spark==4.0.0
apache-airflow-providers-celery==3.1.0
apache-airflow-providers-cncf-kubernetes==5.1.1
apache-airflow-providers-common-sql==1.3.3
apache-airflow-providers-datadog==3.1.0
apache-airflow-providers-ftp==3.3.0
apache-airflow-providers-http==4.1.1
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-jdbc==3.3.0
apache-airflow-providers-jenkins==3.2.0
apache-airflow-providers-mysql==4.0.0
apache-airflow-providers-pagerduty==3.1.0
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-presto==4.2.1
apache-airflow-providers-slack==7.2.0
apache-airflow-providers-sqlite==3.3.1
apache-airflow-providers-ssh==3.4.0
### Deployment
Docker-Compose
### Deployment details
I could repro this with docker-compose and in a helm backed deployment so I don't think it's really related to the deployment details
### Anything else
In the serialization code there are two pieces of logic that are in
direct conflict with each other. The first dictates how template fields
are serialized, from the code
```
# Store all template_fields as they are if there are JSON Serializable
# If not, store them as strings
```
and the second special cases a few names of arguments that need to be
deserialized in a specific way
```
elif k in {"retry_delay", "execution_timeout", "sla", "max_retry_delay"}:
v = cls._deserialize_timedelta(v)
```
so during serialization Airflow sees that `execution_timeout` is a template field and serializes it
as a string, but during deserialization it is one of the special-cased names that forces
deserialization as a timedelta, and BOOM!
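One possible shape of a fix, shown only as a sketch (the real resolution may differ): leave string values alone during deserialization, since a string here means the field was stored as a template expression.
```python
from __future__ import annotations

import datetime
from typing import Any


def deserialize_timedelta_field(v: Any) -> Any:
    """Hypothetical helper: only rebuild a timedelta when the stored value is numeric.

    A string value means the field was serialized as a template expression and should
    be passed through untouched instead of being fed to timedelta(seconds=...).
    """
    if isinstance(v, str):
        return v
    return datetime.timedelta(seconds=v)
```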
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29819 | https://github.com/apache/airflow/pull/29821 | 6d2face107f24b7e7dce4b98ae3def1178e1fc4c | 7963360b8d43a15791a6b7d4335f482fce1d82d2 | "2023-02-28T18:48:13Z" | python | "2023-03-04T18:19:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,817 | ["chart/values.yaml"] | Chart config section doesn't add kubernetes_executor to airflow.cfg | ### Official Helm Chart version
1.8.0 (latest released)
### Apache Airflow version
2.5.0
### Kubernetes Version
1.25.5
### Helm Chart configuration
```yaml
# In our overrides.yml (to solve the issue for now). Essentially a copy of what is under the config kubernetes section in the values.yaml.
config:
kubernetes_executor:
namespace: '{{ .Release.Namespace }}'
airflow_configmap: '{{ include "airflow_config" . }}'
airflow_local_settings_configmap: '{{ include "airflow_config" . }}'
pod_template_file: '{{ include "airflow_pod_template_file" . }}/pod_template_file.yaml'
worker_container_repository: '{{ .Values.images.airflow.repository | default .Values.defaultAirflowRepository }}'
worker_container_tag: '{{ .Values.images.airflow.tag | default .Values.defaultAirflowTag }}'
multi_namespace_mode: '{{ ternary "True" "False" .Values.multiNamespaceMode }}'
```
### Docker Image customizations
None
### What happened
The 1.8.0 chart adds a [kubernetes] section by default from the chart's values.yaml, which then gets added to the airflow.cfg file.
### What you think should happen instead
The 1.8.0 chart should add a [kubernetes_executor] section by default from the chart's values.yaml, which then gets added to the airflow.cfg file.
### How to reproduce
Deploy the chart normally and the scheduler health check fails.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29817 | https://github.com/apache/airflow/pull/29818 | 698773b6477166694263750a0d9283b49f60d9a8 | 4f3751aab677904f043d3c0657eb8283d93a9bbd | "2023-02-28T18:26:46Z" | python | "2023-03-17T22:25:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,803 | ["airflow/utils/db.py"] | Run DAG in isolated session | ### Apache Airflow version
2.5.1
### What happened
While trying the new `airflow.models.DAG.test` function to run e2e tests on a DAG in a `pytest` fashion, I found there is no way to force it to write to a database other than the configured one.
This should create an alchemy session for an inmemory db, initialise the db and then use it for the test
```python
@fixture(scope="session")
def airflow_db():
# in-memory database
engine = create_engine(f"sqlite://")
with Session(engine) as db_session:
initdb(session=db_session, load_connections=False)
yield db_session
def test_dag_runs_default(airflow_db):
dag.test(session=airflow_db)
```
However, `initdb` ignores the engine bound to the passed session. It uses the engine **from `settings` instead of the engine from the session**.
https://github.com/apache/airflow/blob/main/airflow/utils/db.py#L694-L695
```python
with create_global_lock(session=session, lock=DBLocks.MIGRATIONS):
Base.metadata.create_all(settings.engine)
Model.metadata.create_all(settings.engine)
```
Then `_create_flask_session_tbl()` reads the database from the config again (which may or may not be the same as when `settings` was initialised) and once more creates all Airflow tables in a database different from the one provided in the session.
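A sketch of the direction I would expect, assuming `initdb` can derive the engine from the session it was given (this is not the actual implementation):
```python
from sqlalchemy.orm import Session


def create_airflow_tables(session: Session, metadata) -> None:
    """Hypothetical helper: create tables on the engine bound to the session."""
    engine = session.get_bind()  # use the session's engine, not settings.engine
    metadata.create_all(engine)
```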
### What you think should happen instead
The SQLAlchemy base, models and Airflow tables should be created in the database bound to the provided session.
When the session is injected (the default), this matches the config; but when a session is explicitly provided, that session's engine should be used instead.
### How to reproduce
This inits the db specified in the config (defaults to `${HOME}/airflow/airflow.db`), then the test tries to use the in-memory one and breaks
```python
@fixture(scope="session")
def airflow_db():
# in-memory database
engine = create_engine(f"sqlite://")
with Session(engine) as db_session:
initdb(session=db_session, load_connections=False)
yield db_session
def test_dag_runs_default(airflow_db):
dag.test(session=airflow_db)
```
### Operating System
MacOs
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29803 | https://github.com/apache/airflow/pull/29804 | 7ce3b66237fbdb1605cf1f7cec06f0b823c455a1 | 0975560dfa48f43b340c4db9c03658a11ae7c666 | "2023-02-28T13:56:11Z" | python | "2023-04-10T08:06:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,781 | ["airflow/providers/sftp/hooks/sftp.py", "airflow/providers/sftp/sensors/sftp.py", "tests/providers/sftp/hooks/test_sftp.py", "tests/providers/sftp/sensors/test_sftp.py"] | newer_than and file_pattern don't work well together in SFTPSensor | ### Apache Airflow Provider(s)
sftp
### Versions of Apache Airflow Providers
4.2.3
### Apache Airflow version
2.5.1
### Operating System
macOS Ventura 13.2.1
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
I wanted to use `file_pattern` and `newer_than` in `SFTPSensor` to find only the files that landed in SFTP after the data interval of the prior successful DAG run (`{{ prev_data_interval_end_success }}`).
I have four text files (`file.txt`, `file1.txt`, `file2.txt` and `file3.txt`) but only `file3.txt` has the last modification date after the data interval of the prior successful DAG run. I use the following file pattern: `"*.txt"`.
The moment the first file (`file.txt`) was matched and the modification date did not meet the requirement, the task changed the status to `up_for_reschedule`.
### What you think should happen instead
The other files matching the pattern should be checked as well.
### How to reproduce
```python
import pendulum
from airflow import DAG
from airflow.providers.sftp.sensors.sftp import SFTPSensor
with DAG(
dag_id="sftp_test",
start_date=pendulum.datetime(2023, 2, 1, tz="UTC"),
schedule="@once",
render_template_as_native_obj=True,
):
wait_for_file = SFTPSensor(
task_id="wait_for_file",
sftp_conn_id="sftp_default",
path="/upload/",
file_pattern="*.txt",
newer_than="{{ prev_data_interval_end_success }}",
)
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29781 | https://github.com/apache/airflow/pull/29794 | 60d98a1bc2d54787fcaad5edac36ecfa484fb42b | 9357c81828626754c990c3e8192880511a510544 | "2023-02-27T12:25:27Z" | python | "2023-02-28T05:45:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,759 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py"] | Improve code in `KubernetesPodOperator._render_nested_template_fields` | ### Apache Airflow Provider(s)
cncf-kubernetes
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes==5.2.1
### Apache Airflow version
2.5.1
### Operating System
Arch Linux
### Deployment
Other
### Deployment details
_No response_
### What happened
Not really showing a failure in operation, but the code in the [`KubernetesPodOperator._render_nested_template_fields`](https://github.com/apache/airflow/blob/d26dc223915c50ff58252a709bb7b33f5417dfce/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py#L373-L403) function could be improved.
The current code consists of 6 conditionals checking the type of the `content` variable. Even when the 1st of them succeeds, the other 5 conditionals are still checked, which is inefficient because the function could end right there, saving time and resources.
### What you think should happen instead
The conditional flow could be replaced with a simple map, using a dictionary to immediately get the value or fall back to the default one.
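A minimal sketch of the dictionary-based dispatch; the key types below are stand-ins (the real method switches on kubernetes client models such as `V1EnvVar`), so treat the mapping as illustrative only:
```python
from __future__ import annotations

# Illustrative mapping from the type of the rendered object to the attributes to template.
TEMPLATE_FIELDS_BY_TYPE: dict[type, tuple[str, ...]] = {
    dict: ("value", "name"),     # stand-in for a k8s model such as V1EnvVar
    list: ("image", "command"),  # stand-in for a k8s model such as V1Container
}


def lookup_template_fields(content) -> tuple[str, ...] | None:
    # A single lookup replaces the chain of isinstance checks; None means "no special handling".
    return TEMPLATE_FIELDS_BY_TYPE.get(type(content))
```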
### How to reproduce
There is no bug _per se_ to reproduce. It's just about making the code cleaner and more efficient, avoiding further type checks once the match has been resolved.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29759 | https://github.com/apache/airflow/pull/29760 | 9357c81828626754c990c3e8192880511a510544 | 1e536eb43de4408612bf7bb7d9d2114470c6f43a | "2023-02-25T09:33:25Z" | python | "2023-02-28T05:46:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,754 | ["airflow/example_dags/example_dynamic_task_mapping_with_no_taskflow_operators.py", "docs/apache-airflow/authoring-and-scheduling/dynamic-task-mapping.rst", "tests/serialization/test_dag_serialization.py", "tests/www/views/test_views_acl.py"] | Add classic operator example for dynamic task mapping "reduce" task | ### What do you see as an issue?
The [documentation for Dynamic Task Mapping](https://airflow.apache.org/docs/apache-airflow/stable/authoring-and-scheduling/dynamic-task-mapping.html#simple-mapping
) does not include an example of a "reduce" task (e.g. `sum_it` in the [examples](https://airflow.apache.org/docs/apache-airflow/stable/authoring-and-scheduling/dynamic-task-mapping.html#simple-mapping)) using the classic (or non-TaskFlow) operators. It only includes an example that uses the TaskFlow operators.
When I attempted to write a "reduce" task using classic operators for my DAG, I found that there wasn't an obvious approach.
### Solving the problem
We should add an example of a "reduce" task that uses the classic (non-TaskFlow) operators.
For example, for the given `sum_it` example:
```
"""Example DAG demonstrating the usage of dynamic task mapping reduce using classic
operators.
"""
from __future__ import annotations
from datetime import datetime
from airflow import DAG
from airflow.decorators import task
from airflow.operators.python import PythonOperator
def add_one(x: int):
return x + 1
def sum_it(values):
total = sum(values)
print(f"Total was {total}")
with DAG(dag_id="example_dynamic_task_mapping_reduce", start_date=datetime(2022, 3, 4)):
add_one_task = PythonOperator.partial(
task_id="add_one",
python_callable=add_one,
).expand(
op_kwargs=[
{"x": 1},
{"x": 2},
{"x": 3},
]
)
sum_it_task = PythonOperator(
task_id="sum_it",
python_callable=sum_it,
op_kwargs={"values": add_one_task.output},
)
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29754 | https://github.com/apache/airflow/pull/29762 | c9607d44de5a3c9674a923a601fc444ff957ac7e | 4d4c2b9d8b5de4bf03524acf01a298c162e1d9e4 | "2023-02-24T23:35:25Z" | python | "2023-05-31T05:47:46Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,746 | ["airflow/providers/databricks/operators/databricks.py", "tests/providers/databricks/operators/test_databricks.py"] | DatabricksSubmitRunOperator does not support passing output of another task to `base_parameters` | ### Apache Airflow Provider(s)
databricks
### Versions of Apache Airflow Providers
apache-airflow-providers-databricks==4.0.0
### Apache Airflow version
2.4.3
### Operating System
MAC OS
### Deployment
Virtualenv installation
### Deployment details
The issue is consistent across multiple Airflow deployments (locally on Docker Compose, remotely on MWAA in AWS, locally using virualenv)
### What happened
Passing `base_parameters` key into `notebook_task` parameter for `DatabricksSubmitRunOperator` as output of a previous task (TaskFlow paradigm) does not work.
After inspecting `DatabricksSubmitRunOperator.__init__`, it seems that the problem lies in the fact that it uses `utils.databricks.normalise_json_content` to validate input parameters and, given that the input parameter is of type `PlainXComArg`, the parsing fails.
The workaround I found is to call it using `partial` and `expand`, which is a bit hacky and much less legible
### What you think should happen instead
`DatabricksSubmitRunOperator` should accept `PlainXComArg` arguments in `__init__` and only validate them in `execute`, prior to submitting the job run.
### How to reproduce
This DAG fails to parse:
```python3
with DAG(
"dag_erroring",
start_date=days_ago(1),
params={"param_1": "", "param_2": ""},
) as dag:
@task
def from_dag_params_to_notebook_params(**context):
# Transform/Validate DAG input parameters to sth expected by Notebook
notebook_param_1 = context["dag_run"].conf["param_1"] + "abcd"
notebook_param_2 = context["dag_run"].conf["param_2"] + "efgh"
return {"some_param": notebook_param_1, "some_other_param": notebook_param_2}
DatabricksSubmitRunOperator(
task_id="my_notebook_task",
new_cluster={
"cluster_name": "single-node-cluster",
"spark_version": "7.6.x-scala2.12",
"node_type_id": "i3.xlarge",
"num_workers": 0,
"spark_conf": {
"spark.databricks.cluster.profile": "singleNode",
"spark.master": "[*, 4]",
},
"custom_tags": {"ResourceClass": "SingleNode"},
},
notebook_task={
"notebook_path": "some/path/to/a/notebook",
"base_parameters": from_dag_params_to_notebook_params(),
},
libraries=[],
databricks_retry_limit=3,
timeout_seconds=86400,
polling_period_seconds=20,
)
```
This one does not:
```python3
with DAG(
"dag_parsing_fine",
start_date=days_ago(1),
params={"param_1": "", "param_2": ""},
) as dag:
@task
def from_dag_params_to_notebook_params(**context):
# Transform/Validate DAG input parameters to sth expected by Notebook
notebook_param_1 = context["dag_run"].conf["param_1"] + "abcd"
notebook_param_2 = context["dag_run"].conf["param_2"] + "efgh"
return [{"notebook_path": "some/path/to/a/notebook", "base_parameters":{"some_param": notebook_param_1, "some_other_param": notebook_param_2}}]
DatabricksSubmitRunOperator.partial(
task_id="my_notebook_task",
new_cluster={
"cluster_name": "single-node-cluster",
"spark_version": "7.6.x-scala2.12",
"node_type_id": "i3.xlarge",
"num_workers": 0,
"spark_conf": {
"spark.databricks.cluster.profile": "singleNode",
"spark.master": "[*, 4]",
},
"custom_tags": {"ResourceClass": "SingleNode"},
},
libraries=[],
databricks_retry_limit=3,
timeout_seconds=86400,
polling_period_seconds=20,
).expand(notebook_task=from_dag_params_to_notebook_params())
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29746 | https://github.com/apache/airflow/pull/29840 | c95184e8bc0f974ea8d2d51cbe3ca67e5f4516ac | c405ecb63e352c7a29dd39f6f249ba121bae7413 | "2023-02-24T15:50:14Z" | python | "2023-03-07T15:03:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,733 | ["airflow/providers/databricks/hooks/databricks.py", "airflow/providers/databricks/operators/databricks.py", "airflow/providers/databricks/provider.yaml", "docs/apache-airflow-providers-databricks/operators/jobs_create.rst", "tests/providers/databricks/hooks/test_databricks.py", "tests/providers/databricks/operators/test_databricks.py", "tests/system/providers/databricks/example_databricks.py"] | Databricks create/reset then run-now | ### Description
Allow an Airflow DAG to define a Databricks job with the `api/2.1/jobs/create` (or `api/2.1/jobs/reset`) endpoint then run that same job with the `api/2.1/jobs/run-now` endpoint. This would give similar capabilities as the DatabricksSubmitRun operator, but the `api/2.1/jobs/create` endpoint supports additional parameters that the `api/2.1/jobs/runs/submit` doesn't (e.g. `job_clusters`, `email_notifications`, etc.).
### Use case/motivation
Create and run a Databricks job all in the Airflow DAG. Currently, DatabricksSubmitRun operator uses the `api/2.1/jobs/runs/submit` endpoint which doesn't support all features and creates runs that aren't tied to a job in the Databricks UI. Also, DatabricksRunNow operator requires you to define the job either directly in the Databricks UI or through a separate CI/CD pipeline causing the headache of having to change code in multiple places.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29733 | https://github.com/apache/airflow/pull/35156 | da2fdbb7609f7c0e8dd1d1fd9efaec31bb937fe8 | a8784e3c352aafec697d3778eafcbbd455b7ba1d | "2023-02-23T21:01:27Z" | python | "2023-10-27T18:52:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,712 | ["airflow/providers/amazon/aws/hooks/emr.py", "tests/providers/amazon/aws/hooks/test_emr.py"] | EMRHook.get_cluster_id_by_name() doesn't use pagination | ### Apache Airflow version
2.5.1
### What happened
When using `EMRHook.get_cluster_id_by_name` or any operator that depends on it (e.g. `EmrAddStepsOperator`), if the results of the ListClusters API call are paginated (e.g. if your account has more than 50 clusters in the current region) and the desired cluster is in the 2nd page of results, None will be returned instead of the cluster ID.
### What you think should happen instead
Boto's pagination API should be used and the cluster ID should be returned.
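Roughly what I mean, as a sketch built on boto3's paginator (the `ClusterStates` filter mirrors what the hook already accepts, but treat the function as illustrative):
```python
from __future__ import annotations

import boto3


def get_cluster_id_by_name(cluster_name: str, cluster_states: list[str]) -> str | None:
    """Walk every page of ListClusters instead of only the first one."""
    emr = boto3.client("emr")
    paginator = emr.get_paginator("list_clusters")
    for page in paginator.paginate(ClusterStates=cluster_states):
        for cluster in page["Clusters"]:
            if cluster["Name"] == cluster_name:
                return cluster["Id"]
    return None
```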
### How to reproduce
Use `EmrAddStepsOperator` with the `job_flow_name` parameter on an `aws_conn_id` with more than 50 EMR clusters in the current region.
### Operating System
Linux
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==7.2.1
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29712 | https://github.com/apache/airflow/pull/29732 | 607068f4f0d259b638743db5b101660da1b43d11 | 9662fd8cc05f69f51ca94b495b14f907aed0d936 | "2023-02-23T00:39:37Z" | python | "2023-05-01T18:45:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,702 | ["airflow/api_connexion/endpoints/connection_endpoint.py", "airflow/api_connexion/endpoints/update_mask.py", "airflow/api_connexion/endpoints/variable_endpoint.py", "tests/api_connexion/endpoints/test_update_mask.py", "tests/api_connexion/endpoints/test_variable_endpoint.py"] | Updating Variables Description via PATCH in Airflow API is Clearing the existing description field of the variable and unable to update description field | ### Apache Airflow version
2.5.1
### What happened
When I made these PATCH requests to update the description of the variable via axios:
### 1) Trying to modify new value and description
```javascript
let payload ={ key : "example_variable", value : "new_value", description: "new_Description" }
axios.patch("https://localhost:8080/api/v1/variables/example_variable" , payload ,
{
auth : {
username : "username",
password : "password"
},
headers: {
"Content-Type" : "application/json",
}
});
```
The following response was received, and in Airflow the existing variable's `description` was cleared and set to `None`:
```html
response body : {
"description" : "new_Description",
"key": "example_variable",
"value" : "new_value"
}
```
### 2) Trying to update Description with update_mask
```javascript
let payload ={ key : "example_variable", value : "value", description: "new_Description" }
axios.patch("https://localhost:8080/api/v1/variables/example_variable?update_mask=description" , payload ,
{
auth : {
username : "username",
password : "password"
},
headers: {
"Content-Type" : "application/json",
}
});
```
the following response was received:
```html
response body : {
"detail" : null,
"status": 400,
"detail" : "No field to update",
"type" : "https://airflow.apache.org/docs/apache-airflow/2.5.0/stable-rest-api-ref.html#section/Errors/BadRequest"
}
```
### What you think should happen instead
The field `description` is ignored both while setting the Variable (L113) and in the `update_mask` handling (L107-111).
https://github.com/apache/airflow/blob/1768872a0085ba423d0a34fe6cc4e1e109f3adeb/airflow/api_connexion/endpoints/variable_endpoint.py#L97-L115
Also, in the Variable setter the description is set to `None` if the input doesn't contain a description field.
https://github.com/apache/airflow/blob/1768872a0085ba423d0a34fe6cc4e1e109f3adeb/airflow/models/variable.py#L156-L165
### How to reproduce
## PATCH in Airflow REST API
### API call
"https://localhost:8080/api/v1/variables/example_variable?update_mask=description"
### payload
{ key : "example_variable", value : "value", description: "new_Description" }
### headers
"Content-Type" : "application/json"
OR
### API call
"https://localhost:8080/api/v1/variables/example_variable
### payload
{ key : "example_variable", value : "new_value", description: "new_Description" }
### headers
"Content-Type" : "application/json"
### Operating System
Ubuntu 22.04.1 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
A possible solution for updating the description field could follow the connection endpoint's approach:
https://github.com/apache/airflow/blob/1768872a0085ba423d0a34fe6cc4e1e109f3adeb/airflow/api_connexion/endpoints/connection_endpoint.py#L134-L145
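Adapted to the variable endpoint it could look roughly like this; the helper and the set of updatable fields are assumptions used only to sketch the idea:
```python
from __future__ import annotations


def apply_update_mask(body: dict, update_mask: list[str] | None) -> dict:
    """Keep only the fields named in update_mask (when given), so 'description' survives."""
    updatable = {"value", "description"}
    if update_mask:
        requested = {field.strip() for field in update_mask}
        unknown = requested - updatable
        if unknown:
            raise ValueError(f"Fields not allowed in update_mask: {sorted(unknown)}")
        return {k: v for k, v in body.items() if k in requested}
    return {k: v for k, v in body.items() if k in updatable}
```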
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29702 | https://github.com/apache/airflow/pull/29711 | 3f6b5574c61ef9765d077bdd08ccdaba14013e4a | de8e07dc6fea620541e0daa67131e8fe21dbd5fe | "2023-02-22T19:21:40Z" | python | "2023-03-18T21:03:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,687 | ["airflow/models/renderedtifields.py"] | Deadlock when airflow try to update 'k8s_pod_yaml' in 'rendered_task_instance_fields' table | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
**Airflow 2.4.2**
We ran into a problem where HttpSensor fails with an error because of a deadlock. We are running 3 different dags with 12 max_active_runs that call an API and check the response to decide whether to reschedule or go to the next task. All these sensors have a 1-minute poke interval, so 36 of them are running at the same time. Sometimes (roughly once in 20 runs) we get the following deadlock error:
`Task failed with exception Traceback (most recent call last): File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1803, in _execute_context cursor, statement, parameters, context File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute cursor.execute(statement, parameters) File "/home/airflow/.local/lib/python3.7/site-packages/MySQLdb/cursors.py", line 206, in execute res = self._query(query) File "/home/airflow/.local/lib/python3.7/site-packages/MySQLdb/cursors.py", line 319, in _query db.query(q) File "/home/airflow/.local/lib/python3.7/site-packages/MySQLdb/connections.py", line 254, in query _mysql.connection.query(self, query) MySQLdb.OperationalError: (1213, 'Deadlock found when trying to get lock; try restarting transaction') The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1457, in _run_raw_task self._execute_task_with_callbacks(context, test_mode) File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1579, in _execute_task_with_callbacks RenderedTaskInstanceFields.write(rtif) File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 75, in wrapper return func(*args, session=session, **kwargs) File "/usr/local/lib/python3.7/contextlib.py", line 119, in __exit__ next(self.gen) File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 36, in create_session session.commit() File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 1428, in commit self._transaction.commit(_to_root=self.future) File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 829, in commit self._prepare_impl() File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl self.session.flush() File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 3345, in flush self._flush(objects) File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 3485, in _flush transaction.rollback(_capture_exception=True) File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 72, in __exit__ with_traceback=exc_tb, File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 207, in raise_ raise exception File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 3445, in _flush flush_context.execute() File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute rec.execute(self) File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/unitofwork.py", line 633, in execute uow, File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/persistence.py", line 241, in save_obj update, File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/persistence.py", line 1001, in _emit_update_statements statement, multiparams, execution_options=execution_options File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1614, in _execute_20 return meth(self, args_10style, kwargs_10style, execution_options) File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/sql/elements.py", line 326, in _execute_on_connection self, multiparams, 
params, execution_options File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1491, in _execute_clauseelement cache_hit=cache_hit, File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1846, in _execute_context e, statement, parameters, cursor, context File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2027, in _handle_dbapi_exception sqlalchemy_exception, with_traceback=exc_info[2], from_=e File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 207, in raise_ raise exception File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1803, in _execute_context cursor, statement, parameters, context File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 719, in do_execute cursor.execute(statement, parameters) File "/home/airflow/.local/lib/python3.7/site-packages/MySQLdb/cursors.py", line 206, in execute res = self._query(query) File "/home/airflow/.local/lib/python3.7/site-packages/MySQLdb/cursors.py", line 319, in _query db.query(q) File "/home/airflow/.local/lib/python3.7/site-packages/MySQLdb/connections.py", line 254, in query _mysql.connection.query(self, query) sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction') [SQL: UPDATE rendered_task_instance_fields SET k8s_pod_yaml=%s WHERE rendered_task_instance_fields.dag_id = %s AND rendered_task_instance_fields.task_id = %s AND rendered_task_instance_fields.run_id = %s AND rendered_task_instance_fields.map_index = %s] [parameters: ('{"metadata": {"annotations": {"dag_id": "bidder-joiner", "task_id": "capitest", "try_number": "1", "run_id": "scheduled__2023-02-15T14:15:00+00:00"}, ... (511 characters truncated) ... e": "AIRFLOW_IS_K8S_EXECUTOR_POD", "value": "True"}], "image": "artifactorymaster.outbrain.com:5005/datainfra/airflow:8cbd2a3d8c", "name": "base"}]}}', 'bidder-joiner', 'capitest', 'scheduled__2023-02-15T14:15:00+00:00', -1)] (Background on this error at: https://sqlalche.me/e/14/e3q8)
`
`Failed to execute job 3966 for task capitest ((MySQLdb.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction') [SQL: UPDATE rendered_task_instance_fields SET k8s_pod_yaml=%s WHERE rendered_task_instance_fields.dag_id = %s AND rendered_task_instance_fields.task_id = %s AND rendered_task_instance_fields.run_id = %s AND rendered_task_instance_fields.map_index = %s] [parameters: ('{"metadata": {"annotations": {"dag_id": "bidder-joiner", "task_id": "capitest", "try_number": "1", "run_id": "scheduled__2023-02-15T14:15:00+00:00"}, ... (511 characters truncated) ... e": "AIRFLOW_IS_K8S_EXECUTOR_POD", "value": "True"}], "image": "artifactorymaster.outbrain.com:5005/datainfra/airflow:8cbd2a3d8c", "name": "base"}]}}', 'bidder-joiner', 'capitest', 'scheduled__2023-02-15T14:15:00+00:00', -1)] (Background on this error at: https://sqlalche.me/e/14/e3q8); 68)
`
I checked the MySQL logs and the deadlock is caused by this query:
```
DELETE FROM rendered_task_instance_fields WHERE rendered_task_instance_fields.dag_id = 'bidder-joiner-raw_data_2nd_pass_delay' AND rendered_task_instance_fields.task_id = 'is_data_ready' AND ((rendered_task_instance_fields.dag_id, rendered_task_instance_fields.task_id, rendered_task_instance_fields.run_id) NOT IN (SELECT subq2.dag_id, subq2.task_id, subq2.run_id
FROM (SELECT subq1.dag_id AS dag_id, subq1.task_id AS task_id, subq1.run_id AS run_id
FROM (SELECT DISTINCT rendered_task_instance_fields.dag_id AS dag_id, rendered_task_instance_fields.task_id AS task_id, rendered_task_instance_fields.run_id AS run_id, dag_run.execution_date AS execution_date
FROM rendered_task_instance_fields INNER JOIN dag_run ON rendered_task_instance_fields.dag_id = dag_run.dag_id AND rendered_task_instance_fields.run_id = dag_run.run_id
WHERE rendered_task_instance_fields.dag_id = 'bidder-joiner-raw_data
```
### What you think should happen instead
I found a similar issue open on GitHub (https://github.com/apache/airflow/issues/25765), so I think it should be resolved in the same way, by adding the `@retry_db_transaction` decorator to the function that executes this query.
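For illustration, the shape of the change I have in mind, assuming the DELETE is issued from a helper that can be wrapped with Airflow's `retry_db_transaction` utility (the helper itself is hypothetical and the statement is simplified):
```python
from sqlalchemy import text

from airflow.utils.retries import retry_db_transaction


@retry_db_transaction
def delete_stale_rendered_fields(session, dag_id: str, task_id: str) -> None:
    """Retried automatically when the database reports a deadlock."""
    session.execute(
        text(
            "DELETE FROM rendered_task_instance_fields "
            "WHERE dag_id = :dag_id AND task_id = :task_id"
        ),
        {"dag_id": dag_id, "task_id": task_id},
    )
```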
### How to reproduce
Create 3 dags with 12 max_active_runs that use HttpSensor at the same time, same poke interval and mode reschedule.
### Operating System
Ubuntu 20
### Versions of Apache Airflow Providers
apache-airflow-providers-common-sql>=1.2.0
mysql-connector-python>=8.0.11
mysqlclient>=1.3.6
apache-airflow-providers-mysql==3.2.1
apache-airflow-providers-http==4.0.0
apache-airflow-providers-slack==6.0.0
apache-airflow-providers-apache-spark==3.0.0
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29687 | https://github.com/apache/airflow/pull/32341 | e53320d62030a53c6ffe896434bcf0fc85803f31 | c8a3c112a7bae345d37bb8b90d68c8d6ff2ef8fc | "2023-02-22T09:00:28Z" | python | "2023-07-05T11:28:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,679 | ["tests/cli/commands/test_internal_api_command.py"] | Fix Quarantined `test_cli_internal_api_background` | ### Body
Recently, [this test](https://github.com/apache/airflow/blob/9de301da2a44385f57be5407e80e16ee376f3d39/tests/cli/commands/test_internal_api_command.py#L134-L137) began to fail with a timeout error, and it affected all tests in a single CI run. As a temporary solution this test was marked as `quarantined`.
We should figure out why this happens and try to resolve it.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/29679 | https://github.com/apache/airflow/pull/29688 | f99d27e5bde8e76fdb504fa213b9eb898c4bc903 | 946bded31af480d03cb2d45a3f8cdd0a9c32838d | "2023-02-21T21:47:56Z" | python | "2023-02-23T07:08:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,677 | ["airflow/providers/amazon/aws/operators/lambda_function.py", "docs/apache-airflow-providers-amazon/operators/lambda.rst", "tests/always/test_project_structure.py", "tests/providers/amazon/aws/operators/test_lambda_function.py", "tests/system/providers/amazon/aws/example_lambda.py"] | Rename AWS lambda related resources | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.5.0
### Operating System
MacOS
### Deployment
Virtualenv installation
### Deployment details
The AWS Lambda resources in the Amazon provider package do not follow the naming convention from #20296. The hook, operators and sensors related to AWS Lambda need to be renamed to follow this convention. Here are the proposed changes to fix it:
- Rename `airflow/providers/amazon/aws/operators/lambda_function.py` to `airflow/providers/amazon/aws/operators/lambda.py`
- Rename `airflow/providers/amazon/aws/sensors/lambda_function.py` to `airflow/providers/amazon/aws/sensors/lambda.py`
- Rename `airflow/providers/amazon/aws/hooks/lambda_function.py` to `airflow/providers/amazon/aws/hooks/lambda.py`
- Rename `AwsLambdaInvokeFunctionOperator` to `LambdaInvokeFunctionOperator`
Since all these changes are breaking changes, it will have to be done following the deprecation pattern:
- Copy/paste the files with the new name
- Update the existing hook, operators and sensors to inherit from these new classes
- Deprecate these classes by sending deprecation warnings. See an example [here](airflow/providers/amazon/aws/operators/aws_lambda.py)
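The deprecation step could follow the usual shim pattern, sketched below with a placeholder base class (the real classes and modules are the ones listed above):
```python
import warnings


class LambdaInvokeFunctionOperator:  # stand-in for the newly named operator
    def __init__(self, *args, **kwargs):
        pass


class AwsLambdaInvokeFunctionOperator(LambdaInvokeFunctionOperator):
    """Deprecated alias kept only for backwards compatibility."""

    def __init__(self, *args, **kwargs):
        warnings.warn(
            "AwsLambdaInvokeFunctionOperator is deprecated; "
            "use LambdaInvokeFunctionOperator instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        super().__init__(*args, **kwargs)
```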
### What happened
_No response_
### What you think should happen instead
_No response_
### How to reproduce
N/A
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29677 | https://github.com/apache/airflow/pull/29749 | b2ecaf9d2c6ccb94ae97728a2d54d31bd351f11e | 38b901ec3f07e6e65880b11cc432fb8ad6243629 | "2023-02-21T19:36:46Z" | python | "2023-02-24T21:40:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,671 | ["tests/providers/openlineage/extractors/test_default_extractor.py"] | Adapt OpenLineage default extractor to properly accept all OL implementation | ### Body
Adapt default extractor to accept any valid type returned from Operators `get_openlineage_facets_*` method.
This needs to ensure compatibility with operators made with external extractors for current openlineage-airflow integration.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/29671 | https://github.com/apache/airflow/pull/31381 | 89bed231db4807826441930661d79520250f3075 | 4e73e47d546bf3fd230f93056d01e12f92274433 | "2023-02-21T18:43:14Z" | python | "2023-06-13T19:09:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,666 | ["airflow/providers/hashicorp/_internal_client/vault_client.py", "airflow/providers/hashicorp/secrets/vault.py", "tests/providers/hashicorp/_internal_client/test_vault_client.py", "tests/providers/hashicorp/secrets/test_vault.py"] | Multiple Mount Points for Hashicorp Vault Back-end | ### Description
Support mounting to multiple namespaces with the Hashicorp Vault Secrets Back-end
### Use case/motivation
As a data engineer I wish to utilize secrets stored in multiple mount paths (to support connecting to multiple namespaces) without having to mount to a higher up namespace.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29666 | https://github.com/apache/airflow/pull/29734 | d0783744fcae40b0b6b2e208a555ea5fd9124dfb | dff425bc3d92697bb447010aa9f3b56519a59f1e | "2023-02-21T16:44:08Z" | python | "2023-02-24T09:48:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,663 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/stats.py", "tests/core/test_stats.py"] | Option to Disable High Cardinality Metrics on Statsd | ### Description
With recent PRs enabling tag support on StatsD metrics, we gained a deeper understanding of the issue of publishing high cardinality metrics. Through this issue, I hope to facilitate the discussion on categorizing the metric cardinality of Airflow-specific events and tags, and on finding a way to disable high cardinality metrics and include it in the 2.6.0 release.
In the world of Observability & Metrics, cardinality is broadly defined as the following:
`number of unique metric names * number of unique application tag pairs`
This means that events with _unbounded_ number of tag-pairs (key value pair of tags) as well as events with _unbounded_ number of unique metric names will incur expensive storage requirements on the metrics backend.
Let's take a look at the following metric:
`local_task_job.task_exit.<job_id>.<dag_id>.<task_id>.<return_code>`
Here, we have 4 different variable/tag-like attributes embedded into the metric name that I think we can categorize into 3 levels of cardinality.
1. High cardinality / Unbounded metric
2. Medium cardinality / semi-bounded metric
3. Low cardinality / categorically-bounded metric
### High Cardinality / Unbounded Metric
Example tag: <job_id>
This category of metrics is strictly unbounded and incorporates a monotonically increasing attribute like <job_id> or <run_id>. To demonstrate just how explosive the growth of these metrics can be, let's take an example. In an Airflow instance with 1000 daily jobs and a metric retention period of 10 days, we are increasing the cardinality of our metrics by 10,000 on just one metric by adding this tag alone. If we add this tag to a few other metrics, that could easily result in an explosion of metric cardinality. As a benchmark, [DataDog's Enterprise level pricing plan only has 200 custom metrics per host included](https://www.datadoghq.com/pricing/), and anything beyond that needs to be added at a premium. These metrics should be avoided at all costs.
### Medium Cardinality / semi-bounded metric
Example tag: <dag_id>, <task_id>
This category of metrics is semi-bounded. They are not bounded by a pre-defined set of enums, but they are bounded by the number of dags or tasks within an Airflow installation. This means that although these metrics can lead to increasing levels of cardinality in an Airflow cluster with a growing number of dags, cardinality is still bounded at any given time, i.e. a given cluster will maintain its level of cardinality over time.
### Low Cardinality / categorically-bounded metric
Example tag: <return_code>
This category of metrics is strictly bounded by a category of enums. <return_code> and <task_state> are good examples of attributes with low cardinality. Ideally, we would only want to publish metrics with this level of cardinality.
Using the above definition of high cardinality, I've identified the following metrics as examples that fall under this criterion.
https://github.com/apache/airflow/blob/main/airflow/jobs/local_task_job.py#L292
https://github.com/apache/airflow/blob/main/airflow/dag_processing/processor.py#L444
https://github.com/apache/airflow/blob/main/airflow/jobs/scheduler_job.py#L691
https://github.com/apache/airflow/blob/main/airflow/jobs/scheduler_job.py#L1584
https://github.com/apache/airflow/blob/main/airflow/models/dag.py#L1331
https://github.com/apache/airflow/blob/main/airflow/models/taskinstance.py#L1258
https://github.com/apache/airflow/blob/main/airflow/models/taskinstance.py#L1577
https://github.com/apache/airflow/blob/main/airflow/models/taskinstance.py#L1847
I would like to propose that we provide an option to disable unbounded metrics with the 2.6.0 release. In order to ensure backward compatibility, we could leave the default behavior as publishing all metrics, but implement a single Boolean flag to disable these high cardinality metrics.
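As a sketch of the opt-out I have in mind; the flag name, the prefix list and the wrapper below are hypothetical, not existing Airflow settings:
```python
# Hypothetical guard around gauge/incr calls for metric names known to be unbounded.
HIGH_CARDINALITY_METRIC_PREFIXES = ("local_task_job.task_exit.",)


def should_emit(metric_name: str, allow_high_cardinality: bool) -> bool:
    if allow_high_cardinality:  # default: keep today's behaviour
        return True
    return not metric_name.startswith(HIGH_CARDINALITY_METRIC_PREFIXES)
```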
### Use case/motivation
_No response_
### Related issues
https://github.com/apache/airflow/pull/28961
https://github.com/apache/airflow/pull/29093
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29663 | https://github.com/apache/airflow/pull/29881 | 464ab1b7caa78637975008fcbb049d5b52a8b005 | 86cd79ffa76d4e4d4abe3fe829d7797852a713a5 | "2023-02-21T16:12:58Z" | python | "2023-03-06T06:20:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,662 | ["airflow/www/decorators.py"] | Audit Log is unclear when using Azure AD login | ### Apache Airflow version
2.5.1
### What happened
We're using an Azure OAUTH based login in our Airflow implementation, and everything works great. This is more of a visual problem than an actual bug.
In the Audit logs, the `owner` key is mapped to the username, which in most cases is airflow. But, in situations where we manually pause a DAG or enable it, it is mapped to our generated username, which doesn't really tell one who it is unless they were to look up that string in the users list. Example:
![image](https://user-images.githubusercontent.com/102953522/220382349-102f897b-52c4-4a92-a3e1-5b8a1b1082ff.png)
It would be nice if it were possible to include the user's first and last name alongside the username. I could probably give this one a go myself, if I could get a hint on where to look.
I've found the dag_audit_log.html template, but not sure where to change log.owner.
### What you think should happen instead
It would be good to get a representation such as username (FirstName LastName).
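For example, something along these lines wherever the audit log owner is rendered (purely illustrative; where that change belongs is exactly what I'm asking about):
```python
def format_log_owner(user) -> str:
    """Render 'username (First Last)' when names are available, else just the username."""
    parts = (getattr(user, "first_name", ""), getattr(user, "last_name", ""))
    full_name = " ".join(part for part in parts if part)
    return f"{user.username} ({full_name})" if full_name else user.username
```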
### How to reproduce
N/A
### Operating System
Debian GNU/Linux 11
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Deployed with Helm chart v1.7.0, and Azure OAUTH for login.
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29662 | https://github.com/apache/airflow/pull/30185 | 0b3b6704cb12a3b8f22da79d80b3db85528418b7 | a03f6ccb153f9b95f624d5bc3346f315ca3f0211 | "2023-02-21T15:10:30Z" | python | "2023-05-17T20:15:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,621 | ["chart/templates/dags-persistent-volume-claim.yaml", "chart/values.yaml"] | Fix adding annotations for dag persistence PVC | ### Official Helm Chart version
1.8.0 (latest released)
### Apache Airflow version
2.5.0
### Kubernetes Version
v1.25.4
### Helm Chart configuration
The dags persistence section doesn't have a default value for annotations and the usage looks like:
```
annotations:
{{- if .Values.dags.persistence.annotations}}
{{- toYaml .Values.dags.persistence.annotations | nindent 4 }}
{{- end }}
```
### Docker Image customizations
_No response_
### What happened
As per the review comments here: https://github.com/apache/airflow/pull/29270#pullrequestreview-1304890651, this design can make upgrades suffer. Fix it to be Helm-upgrade friendly.
### What you think should happen instead
The design should be written in a Helm-upgrade-friendly way; refer to this suggestion: https://github.com/apache/airflow/pull/29270#pullrequestreview-1304890651
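For example, something in this direction: ship an explicit empty default in `values.yaml` and render the block unconditionally, so `helm upgrade` does not flip between shapes (a sketch only, not the final chart change):
```yaml
# values.yaml (assumed default)
dags:
  persistence:
    annotations: {}

# dags-persistent-volume-claim.yaml (rendered unconditionally)
#   annotations: {{- toYaml .Values.dags.persistence.annotations | nindent 4 }}
```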
### How to reproduce
-
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29621 | https://github.com/apache/airflow/pull/29622 | 5835b08e8bc3e11f4f98745266d10bbae510b258 | 901774718c5d7ff7f5ddc6f916701d281bb60a4b | "2023-02-20T03:20:25Z" | python | "2023-02-20T22:58:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,593 | ["airflow/providers/common/sql/operators/sql.py", "tests/providers/common/sql/operators/test_sql.py"] | Cannot disable XCom push in SnowflakeOperator | ### Apache Airflow Provider(s)
snowflake
### Versions of Apache Airflow Providers
4.0.3
### Apache Airflow version
2.5.0
### Operating System
docker/linux
### Deployment
Astronomer
### Deployment details
Normal Astro CLI
### What happened
```
>>> SnowflakeOperator(
...,
do_xcom_push=False
).execute()
ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/providers/common/sql/operators/sql.py", line 272, in execute
return self._process_output([output], hook.descriptions)[-1]
File "/usr/local/lib/python3.9/site-packages/airflow/providers/snowflake/operators/snowflake.py", line 118, in _process_output
for row in result_list:
TypeError: 'NoneType' object is not iterable
```
### What you think should happen instead
XComs should be able to be turned off.
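The traceback suggests a guard along these lines in `_process_output` (a sketch of the idea only, not the provider's actual code):
```python
def process_output(results, descriptions, do_xcom_push: bool):
    """Hypothetical: skip building an XCom payload when pushing is disabled or nothing came back."""
    if not do_xcom_push or results is None:
        return None
    return [
        [dict(zip([col[0] for col in desc], row)) for row in result]
        for result, desc in zip(results, descriptions)
    ]
```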
### How to reproduce
1)
```
astro dev init
```
2) `dags/snowflake_test.py`
```
import os
from datetime import datetime
from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator
os.environ["AIRFLOW_CONN_SNOWFLAKE"] = "snowflake://......."
with DAG('snowflake_test', schedule=None, start_date=datetime(2023, 1, 1)):
SnowflakeOperator(
task_id='snowflake_test',
snowflake_conn_id="snowflake",
sql="select 1;",
do_xcom_push=False
)
```
3)
```
astro run -d dags/snowflake_test.py snowflake_test
```
```
Loading DAGs...
Running snowflake_test... [2023-02-17 18:45:33,537] {connection.py:280} INFO - Snowflake Connector for Python Version: 2.9.0, Python Version: 3.9.16, Platform: Linux-5.15.49-linuxkit-aarch64-with-glibc2.31
...
[2023-02-17 18:45:34,608] {cursor.py:727} INFO - query: [select 1]
[2023-02-17 18:45:34,698] {cursor.py:740} INFO - query execution done
...
[2023-02-17 18:45:34,785] {connection.py:581} INFO - closed
[2023-02-17 18:45:34,841] {connection.py:584} INFO - No async queries seem to be running, deleting session
FAILED
'NoneType' object is not iterable
```
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29593 | https://github.com/apache/airflow/pull/29599 | 2bc1081ea6ca569b4e7fc538bfc827d74e8493ae | 19f1e7c27b85e297497842c73f13533767ebd6ba | "2023-02-17T16:27:19Z" | python | "2023-02-22T09:33:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,585 | ["airflow/providers/docker/decorators/docker.py", "tests/providers/docker/decorators/test_docker.py"] | template_fields not working in the decorator `task.docker` | ### Apache Airflow Provider(s)
docker
### Versions of Apache Airflow Providers
apache-airflow-providers-docker 3.4.0
### Apache Airflow version
2.5.1
### Operating System
Linux
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
The templated fields are not working under `task.docker`
```python
@task.docker(image="python:3.9-slim-bullseye", container_name='python_{{macros.datetime.now() | ts_nodash}}', multiple_outputs=True)
def transform(order_data_dict: dict):
"""
#### Transform task
A simple Transform task which takes in the collection of order data and
computes the total order value.
"""
total_order_value = 0
for value in order_data_dict.values():
total_order_value += value
return {"total_order_value": total_order_value}
```
Will throws error with un-templated `container_name`
`Bad Request ("Invalid container name (python_{macros.datetime.now() | ts_nodash}), only [a-zA-Z0-9][a-zA-Z0-9_.-] are allowed")`
### What you think should happen instead
All these fields should work with docker operator:
https://airflow.apache.org/docs/apache-airflow-providers-docker/stable/_api/airflow/providers/docker/operators/docker/index.html
```
template_fields: Sequence[str]= ('image', 'command', 'environment', 'env_file', 'container_name')
```
### How to reproduce
with the example above
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29585 | https://github.com/apache/airflow/pull/29586 | 792416d4ad495f1e5562e6170f73f4d8f1fa2eff | 7bd87e75def1855d8f5b91e9ab1ffbbf416709ec | "2023-02-17T09:32:11Z" | python | "2023-02-17T17:51:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,578 | ["airflow/jobs/scheduler_job_runner.py", "docs/apache-airflow/administration-and-deployment/logging-monitoring/metrics.rst", "newsfragments/30374.significant.rst"] | scheduler.tasks.running metric is always 0 | ### Apache Airflow version
2.5.1
### What happened
I'd expect the `scheduler.tasks.running` metric to represent the number of running tasks, but it is always zero. It appears that #10956 broke this when it removed [the line that increments `num_tasks_in_executor`](https://github.com/apache/airflow/pull/10956/files#diff-bde85feb359b12bdd358aed4106ef4fccbd8fa9915e16b9abb7502912a1c1ab3L1363). Right now that variable is set to 0, never incremented, and then emitted as a gauge.
### What you think should happen instead
`scheduler.tasks.running` should either represent the number of tasks running or be removed altogether.
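For illustration, one way to make the gauge meaningful again could look roughly like this (a sketch only, assuming the scheduler loop can read the executor's `running` set; removing the gauge entirely is the other option):
```python
from airflow.stats import Stats

# Sketch: emit the gauge from the executor's own bookkeeping instead of a dead local variable.
Stats.gauge("scheduler.tasks.running", len(self.executor.running))
```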
### How to reproduce
_No response_
### Operating System
Ubuntu 18.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29578 | https://github.com/apache/airflow/pull/30374 | d8af20f064b8d8abc9da1f560b2d7e1ac7dd1cc1 | cce9b2217b86a88daaea25766d0724862577cc6c | "2023-02-16T17:59:47Z" | python | "2023-04-13T11:04:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,576 | ["airflow/triggers/temporal.py", "tests/triggers/test_temporal.py"] | DateTimeSensorAsync breaks if target_time is timezone-aware | ### Apache Airflow version
2.5.1
### What happened
`DateTimeSensorAsync` fails with the following error if `target_time` is aware:
```
[2022-06-29, 05:09:11 CDT] {taskinstance.py:1889} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/sensors/time_sensor.py", line 60, in execute
trigger=DateTimeTrigger(moment=self.target_datetime),
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/triggers/temporal.py", line 42, in __init__
raise ValueError(f"The passed datetime must be using Pendulum's UTC, not {moment.tzinfo!r}")
ValueError: The passed datetime must be using Pendulum's UTC, not Timezone('America/Chicago')
```
### What you think should happen instead
Given the fact that `DateTimeSensor` correctly handles timezones, this seems like a bug. `DateTimeSensorAsync` should be a drop-in replacement for `DateTimeSensor`, and therefore should have the same timezone behavior.
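As a rough sketch of the expected behaviour (assuming the trigger is the right place to normalize, and using the existing `airflow.utils.timezone` helpers):
```python
from airflow.utils import timezone


def _normalize_target(moment):
    # Sketch: accept any aware datetime and convert it to pendulum UTC,
    # mirroring what DateTimeSensor already tolerates.
    if timezone.is_naive(moment):
        raise ValueError("A timezone-aware datetime is required")
    return timezone.convert_to_utc(moment)  # e.g. America/Chicago -> UTC
```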
### How to reproduce
```
#!/usr/bin/env python3
import datetime
from airflow.decorators import dag
from airflow.sensors.date_time import DateTimeSensor, DateTimeSensorAsync
import pendulum
@dag(
start_date=datetime.datetime(2022, 6, 29),
schedule='@daily',
)
def datetime_sensor_dag():
naive_time1 = datetime.datetime(2023, 2, 16, 0, 1)
aware_time1 = datetime.datetime(2023, 2, 16, 0, 1).replace(tzinfo=pendulum.local_timezone())
naive_time2 = pendulum.datetime(2023, 2, 16, 23, 59)
aware_time2 = pendulum.datetime(2023, 2, 16, 23, 59).replace(tzinfo=pendulum.local_timezone())
DateTimeSensor(task_id='naive_time1', target_time=naive_time1, mode='reschedule')
DateTimeSensor(task_id='naive_time2', target_time=naive_time2, mode='reschedule')
DateTimeSensor(task_id='aware_time1', target_time=aware_time1, mode='reschedule')
DateTimeSensor(task_id='aware_time2', target_time=aware_time2, mode='reschedule')
DateTimeSensorAsync(task_id='async_naive_time1', target_time=naive_time1)
DateTimeSensorAsync(task_id='async_naive_time2', target_time=naive_time2)
DateTimeSensorAsync(task_id='async_aware_time1', target_time=aware_time1) # fails
DateTimeSensorAsync(task_id='async_aware_time2', target_time=aware_time2) # fails
datetime_sensor_dag()
```
This can also happen if the `target_time` is naive and `core.default_timezone = system`.
### Operating System
CentOS Stream 8
### Versions of Apache Airflow Providers
N/A
### Deployment
Other
### Deployment details
Standalone
### Anything else
This appears to be nearly identical to #24736. Probably worth checking other time-related sensors as well.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29576 | https://github.com/apache/airflow/pull/29606 | fd000684d05a993ade3fef38b683ef3cdfdfc2b6 | 79c07e3fc5d580aea271ff3f0887291ae9e4473f | "2023-02-16T16:03:25Z" | python | "2023-02-19T20:27:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,556 | ["airflow/providers/amazon/aws/hooks/ecs.py", "airflow/providers/amazon/aws/operators/ecs.py", "airflow/providers/amazon/aws/waiters/ecs.json", "tests/providers/amazon/aws/operators/test_ecs.py", "tests/providers/amazon/aws/waiters/test_custom_waiters.py"] | Different AWS ECS Operators use inner Sensor and do not propagate connection arguments | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
main/develop
### Apache Airflow version
main/develop
### Operating System
Not relevant
### Deployment
Other
### Deployment details
Not relevant
### What happened
The following operators use an inner sensor in their `execute` methods and do not propagate the connection arguments (`aws_conn_id`, `region`) to it: [`EcsCreateClusterOperator`](https://github.com/apache/airflow/blob/2a34dc9e8470285b0ed2db71109ef4265e29688b/airflow/providers/amazon/aws/operators/ecs.py#L112-L118), [`EcsDeleteClusterOperator`](https://github.com/apache/airflow/blob/2a34dc9e8470285b0ed2db71109ef4265e29688b/airflow/providers/amazon/aws/operators/ecs.py#L153-L161), [`EcsDeregisterTaskDefinitionOperator`](https://github.com/apache/airflow/blob/2a34dc9e8470285b0ed2db71109ef4265e29688b/airflow/providers/amazon/aws/operators/ecs.py#L191-L198), [`EcsRegisterTaskDefinitionOperator`](https://github.com/apache/airflow/blob/2a34dc9e8470285b0ed2db71109ef4265e29688b/airflow/providers/amazon/aws/operators/ecs.py#L255-L260)
### What you think should happen instead
We should use boto3 waiters / hook methods instead of using other operators inside of `execute` methods; a rough sketch of what I mean is below.
I am also not sure whether it is safe to propagate the context from one operator to another.
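The polling loop below is only an illustration; `self.hook` / `self.cluster_name` refer to what I believe the operator already exposes today:
```python
import time

# Sketch: poll through the operator's own boto3 ECS client so the configured
# aws_conn_id / region are honoured, instead of instantiating a sensor inside execute().
client = self.hook.conn
while True:
    cluster = client.describe_clusters(clusters=[self.cluster_name])["clusters"][0]
    if cluster["status"] == "ACTIVE":
        break
    time.sleep(15)
```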
### How to reproduce
This is a follow-up to this discussion: https://github.com/apache/airflow/discussions/29504#discussioncomment-4982216
1. Create an AWS Connection with a name different from `aws_default`
2. Make sure that the `aws_default` connection does not exist
3. Make sure that the Airflow environment can't obtain AWS credentials any other way (i.e. exclude environment variables, the shared credentials file, instance profiles, the ECS Task Execution Role, etc.)
4. Try to execute one of the selected ECS operators with `wait_for_completion` set to `True`
+ EcsCreateClusterOperator
+ EcsDeleteClusterOperator
+ EcsDeregisterTaskDefinitionOperator
+ EcsRegisterTaskDefinitionOperator
```python
from airflow import DAG
from airflow.utils.timezone import datetime
from airflow.providers.amazon.aws.operators.ecs import EcsCreateClusterOperator
CUSTOM_AWS_CONN_ID = "aws-custom"
REGION_NAME="eu-west-1"
assert CUSTOM_AWS_CONN_ID != "aws_default", "CUSTOM_AWS_CONN_ID should not be defined as 'aws_default'"
assert CUSTOM_AWS_CONN_ID
with DAG(
"discussion_29504",
start_date=datetime(2023, 2, 14),
schedule_interval=None,
catchup=False,
tags=["amazon", "ecs", "discussion-29504"]
) as dag:
EcsCreateClusterOperator(
task_id="whooooops",
aws_conn_id=CUSTOM_AWS_CONN_ID,
region=REGION_NAME,
cluster_name="discussion-29504",
wait_for_completion=True,
)
```
### Anything else
This happens every time that:
- `wait_for_completion` is set to `True` for one of the ECS operators listed above
- `aws_default` does not exist or does not have permission for ECS operations
- the fallback to the default `boto3` credential strategy cannot provide valid credentials with access to ECS operations
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29556 | https://github.com/apache/airflow/pull/29761 | 5de47910f3ebd803453b8fb5ca6e4f26ad611375 | 181a8252597e314e5675e2b9655cb44da412eeb2 | "2023-02-15T17:27:11Z" | python | "2023-03-01T19:50:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,552 | ["airflow/providers/google/suite/hooks/drive.py", "airflow/providers/google/suite/transfers/local_to_drive.py", "tests/providers/google/suite/hooks/test_drive.py", "tests/providers/google/suite/transfers/test_local_to_drive.py"] | Google provider doesn't let uploading file to a shared drive | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
```
apache-airflow-providers-google: version 8.9.0
```
### Apache Airflow version
2.5.1
### Operating System
Linux - official airflow image from docker hub apache/airflow:slim-2.5.1
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
Not sure if it's a bug or a feature.
Originally I used `LocalFilesystemToGoogleDriveOperator` to try uploading a file into a shared Google Drive, without success.
The provider didn't find a directory with the given name (it does not browse shared drives), so it created a new one. The method call that prevents the upload is here:
https://github.com/apache/airflow/blob/main/airflow/providers/google/suite/hooks/drive.py#L223
### What you think should happen instead
It would be nice if there were an optional parameter to provide the `drive_id` of the shared drive into which the user would like to upload the file, keeping the directory-check behaviour that already exists but extending it to shared drives.
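For illustration, the folder lookup could roughly be extended like this (the `drive_id` parameter and the surrounding hook code are assumptions on my side; the request options are standard Drive v3 API parameters):
```python
# Sketch: let the folder lookup linked above also search a shared drive when drive_id is given.
query = f"mimeType = 'application/vnd.google-apps.folder' and name = '{folder_name}' and trashed = false"
list_kwargs = {"q": query, "spaces": "drive", "fields": "files(id, name)"}
if drive_id:
    list_kwargs.update(
        corpora="drive",
        driveId=drive_id,
        includeItemsFromAllDrives=True,
        supportsAllDrives=True,
    )
result = service.files().list(**list_kwargs).execute(num_retries=2)
```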
### How to reproduce
1. Create a directory on the shared google drive
2. Fill the `LocalFilesystemToGoogleDriveOperator` constructor with the arguments (a sketch is below)
3. Execute the operator
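A minimal sketch of step 2, with placeholder paths, folder name and connection id:
```python
from airflow.providers.google.suite.transfers.local_to_drive import (
    LocalFilesystemToGoogleDriveOperator,
)

# Placeholders only; "reports" is a folder that lives on a shared drive.
upload = LocalFilesystemToGoogleDriveOperator(
    task_id="upload_to_shared_drive",
    local_paths=["/tmp/report.csv"],
    drive_folder="reports",
    gcp_conn_id="google_cloud_default",
)
```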
### Anything else
I am willing to submit a PR, but I would need to know more details about your thoughts and expectations for the implementation, to keep the number of iterations on it as small as possible.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29552 | https://github.com/apache/airflow/pull/29477 | 0222f7d91cee80cc1a464f277f99e69e845c52db | f37772adfdfdee8763147e0563897e4d5d5657c8 | "2023-02-15T08:52:31Z" | python | "2023-02-18T19:29:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,538 | ["airflow/providers/google/CHANGELOG.rst", "airflow/providers/google/marketing_platform/hooks/campaign_manager.py", "airflow/providers/google/marketing_platform/operators/campaign_manager.py", "airflow/providers/google/marketing_platform/sensors/campaign_manager.py", "airflow/providers/google/provider.yaml", "docs/apache-airflow-providers-google/operators/marketing_platform/campaign_manager.rst", "tests/providers/google/marketing_platform/hooks/test_campaign_manager.py", "tests/providers/google/marketing_platform/operators/test_campaign_manager.py", "tests/providers/google/marketing_platform/sensors/test_campaign_manager.py", "tests/system/providers/google/marketing_platform/example_campaign_manager.py"] | GoogleCampaignManagerReportSensor not working correctly on API Version V4 | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Hello,
My organization has been running Airflow 2.3.4 and we have run into a problem with the Google Campaign Manager Report Sensor. The purpose of this sensor is to check whether a report has finished processing and is ready to be downloaded. If we use API version v3.5 it works flawlessly. Unfortunately, if we use API version v4, the sensor malfunctions: it always succeeds regardless of whether the report is ready to download or not. This causes the job to fail downstream, because it tries to download a file that is not ready.
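For reference, the sensor is set up roughly like this on our side (IDs are placeholders):
```python
from airflow.providers.google.marketing_platform.sensors.campaign_manager import (
    GoogleCampaignManagerReportSensor,
)

wait_for_report = GoogleCampaignManagerReportSensor(
    task_id="wait_for_report",
    profile_id="PROFILE_ID",
    report_id="REPORT_ID",
    file_id="FILE_ID",
    api_version="v4",  # with v3.5 the sensor waits correctly; with v4 it succeeds immediately
)
```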
At first this doesn't seem like a big problem, since we could simply keep using v3.5. However, Google has announced that only API version v4 will be allowed starting in a week. Is there a way we can get this resolved 😭?
Thanks!
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
linux?
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29538 | https://github.com/apache/airflow/pull/30598 | 5b42aa3b8d0ec069683e22c2cb3b8e8e6e5fee1c | da2749cae56d6e0da322695b3286acd9393052c8 | "2023-02-14T15:13:33Z" | python | "2023-04-15T13:34:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,537 | ["airflow/cli/commands/config_command.py", "tests/cli/commands/test_config_command.py"] | Docker image fails to start if celery config section is not defined | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Using Airflow `2.3.4`
We removed any config values we did not explicitly set from `airflow.cfg`. This was to make future upgrades less involved, as we would then only need to compare configuration values we explicitly set, rather than all permutations of versions. This has been [recommended in slack](https://apache-airflow.slack.com/archives/CCQB40SQJ/p1668441275427859?thread_ts=1668394200.637899&cid=CCQB40SQJ) as an approach.
e.g. we set `AIRFLOW__CELERY__BROKER_URL` as an environment variable - we do not set this in `airflow.cfg`, so we removed the `[celery]` section from the Airflow configuration.
We set `AIRFLOW__CORE__EXECUTOR=CeleryExecutor`, so we are using the Celery executor.
Upon starting the Airflow scheduler, it exited with code `1`, and this message:
```
The section [celery] is not found in config.
```
Upon adding back in an empty
```
[celery]
```
section to `airflow.cfg`, this error went away. I have verified that it still picks up `AIRFLOW__CELERY__BROKER_URL` correctly.
### What you think should happen instead
I'd expect Airflow to use the defaults as listed [here](https://airflow.apache.org/docs/apache-airflow/2.3.4/howto/set-config.html); I wouldn't expect the presence or absence of a configuration section to cause errors.
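If I understand the config precedence correctly, the config layer itself already behaves that way; a quick sketch of the expectation (assuming `AIRFLOW__CELERY__BROKER_URL` is exported):
```python
from airflow.configuration import conf

# Resolves from the environment variable even when airflow.cfg has no [celery] section,
# since env vars take precedence over the config file.
print(conf.get("celery", "broker_url"))
```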
### How to reproduce
1. Set up a docker image for the Airflow `scheduler` with `apache/airflow:slim-2.3.4-python3.10` and the following configuration in `airflow.cfg` - with no `[celery]` section:
```
[core]
# The executor class that airflow should use. Choices include
# ``SequentialExecutor``, ``LocalExecutor``, ``CeleryExecutor``, ``DaskExecutor``,
# ``KubernetesExecutor``, ``CeleryKubernetesExecutor`` or the
# full import path to the class when using a custom executor.
executor = CeleryExecutor
[logging]
[metrics]
[secrets]
[cli]
[debug]
[api]
[lineage]
[atlas]
[operators]
[hive]
[webserver]
[email]
[smtp]
[sentry]
[celery_kubernetes_executor]
[celery_broker_transport_options]
[dask]
[scheduler]
[triggerer]
[kerberos]
[github_enterprise]
[elasticsearch]
[elasticsearch_configs]
[kubernetes]
[smart_sensor]
```
2. Run the `scheduler` command, also setting `AIRFLOW__CELERY__BROKER_URL` to point to a Celery redis broker.
3. Observe that the scheduler exits.
### Operating System
Ubuntu 20.04.5 LTS (Focal Fossa)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
AWS ECS
Docker `apache/airflow:slim-2.3.4-python3.10`
Separate:
- Webserver
- Triggerer
- Scheduler
- Celery worker
- Celery flower
services
### Anything else
This seems to occur due to this `get-value` check in the Airflow image entrypoint: https://github.com/apache/airflow/blob/28126c12fbdd2cac84e0fbcf2212154085aa5ed9/scripts/docker/entrypoint_prod.sh#L203-L212
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29537 | https://github.com/apache/airflow/pull/29541 | 84b13e067f7b0c71086a42957bb5cf1d6dc86d1d | 06d45f0f2c8a71c211e22cf3792cc873f770e692 | "2023-02-14T14:58:55Z" | python | "2023-02-15T01:41:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,532 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/models/dag.py", "airflow/models/dagwarning.py"] | AIP-44 Migrate DagWarning.purge_inactive_dag_warnings to Internal API | Used in https://github.com/mhenc/airflow/blob/master/airflow/dag_processing/manager.py#L613
should be straightforward | https://github.com/apache/airflow/issues/29532 | https://github.com/apache/airflow/pull/29534 | 289ae47f43674ae10b6a9948665a59274826e2a5 | 50b30e5b92808e91ad9b6b05189f560d58dd8152 | "2023-02-14T13:13:04Z" | python | "2023-02-15T00:13:44Z" |