status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/airflow | https://github.com/apache/airflow | 23,550 | ["airflow/models/dagrun.py", "tests/models/test_dagrun.py"] | Dynamic Task Mapping is Immutable within a Run | ### Apache Airflow version
2.3.0 (latest released)
### What happened
Looks like mapped tasks are immutable, even when the source XCom that created them changes.
This is a problem for things like late-arriving data and data reprocessing.
### What you think should happen instead
Mapped tasks should change in response to a change of input
### How to reproduce
Here is a writeup and MVP DAG demonstrating the issue
https://gist.github.com/fritz-astronomer/d159d0e29d57458af5b95c0f253a3361
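For readers without access to the gist, here is a minimal sketch of the pattern involved (illustrative only; the DAG id, task names, and values are made up and are not the gist's actual code):
```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(start_date=datetime(2022, 1, 1), schedule_interval="@daily", catchup=False)
def mapped_from_xcom():
    @task
    def get_items():
        # Imagine this list changes for an already-created run (late arriving
        # data / reprocessing); the mapped tasks below are not re-expanded.
        return ["a", "b", "c"]

    @task
    def process(item):
        print(item)

    process.expand(item=get_items())


mapped_from_xcom()
```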
### Operating System
docker/debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
Can look into a fix - but may not be able to submit a full PR
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23550 | https://github.com/apache/airflow/pull/23667 | ad297c91777277e2b76dd7b7f0e3e3fc5c32e07c | b692517ce3aafb276e9d23570e9734c30a5f3d1f | "2022-05-06T21:42:12Z" | python | "2022-06-18T07:32:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,546 | ["airflow/www/views.py", "tests/www/views/test_views_graph_gantt.py"] | Gantt Chart Broken After Deleting a Task | ### Apache Airflow version
2.2.5
### What happened
After a task was deleted from a DAG we received the following message when visiting the gantt view for the DAG in the webserver.
```
{
"detail": null,
"status": 404,
"title": "Task delete-me not found",
"type": "https://airflow.apache.org/docs/apache-airflow/2.2.5/stable-rest-api-ref.html#section/Errors/NotFound"
}
```
This was only corrected by manually deleting the offending task instances from the `task_instance` and `task_fail` tables.
### What you think should happen instead
I would expect the Gantt chart to load, either excluding the non-existent task or flagging that the task associated with the task instance no longer exists.
### How to reproduce
* Create a DAG with multiple tasks.
* Run the DAG.
* Delete one of the tasks.
* Attempt to open the gantt view for the DAG.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
Custom docker container hosted on Amazon ECS.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23546 | https://github.com/apache/airflow/pull/23627 | e09e4635b0dc50cbd3a18f8be02ce9b2e2f3d742 | 4b731f440734b7a0da1bbc8595702aaa1110ad8d | "2022-05-06T20:07:01Z" | python | "2022-05-20T19:24:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,532 | ["airflow/utils/file.py", "tests/utils/test_file.py"] | Airflow .airflowignore not handling soft link properly. | ### Apache Airflow version
2.3.0 (latest released)
### What happened
A soft link and a folder under the same root folder are handled as the same relative path. Say I have a dags folder which looks like this:
```
-dags:
-- .airflowignore
-- folder
-- soft-links-to-folder -> folder
```
and .airflowignore:
```
folder/
```
both folder and soft-links-to-folder will be ignored.
### What you think should happen instead
Only the folder should be ignored. This is the expected behavior in Airflow 2.2.4, before I upgraded. ~~The root cause is that both _RegexpIgnoreRule and _GlobIgnoreRule are calling the `relative_to` method to get the search path.~~
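For illustration, a standalone sketch (not Airflow code, and not necessarily the exact code path inside Airflow) of how resolving symlinks collapses the link onto the real folder, so that a `folder/` rule ends up matching both entries:
```python
import os
from pathlib import Path

root = Path("dags")
(root / "folder").mkdir(parents=True, exist_ok=True)
if not (root / "soft-links-to-folder").exists():
    os.symlink("folder", root / "soft-links-to-folder")

for entry in ("folder", "soft-links-to-folder"):
    # If matching is done on resolved paths, both entries reduce to "folder"
    # and are caught by the "folder/" ignore rule.
    print(entry, "->", (root / entry).resolve().relative_to(root.resolve()))
```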
### How to reproduce
Check @tirkarthi's comment for the test case.
### Operating System
ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23532 | https://github.com/apache/airflow/pull/23535 | 7ab5ea7853df9d99f6da3ab804ffe085378fbd8a | 8494fc7036c33683af06a0e57474b8a6157fda05 | "2022-05-06T13:57:32Z" | python | "2022-05-20T06:35:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,529 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "tests/providers/cncf/kubernetes/operators/test_kubernetes_pod.py"] | Provide resources attribute in KubernetesPodOperator to be templated | ### Description
Make `resources` in KubernetesPodOperator templated. We need to modify it across several runs, and today each change requires a code change.
### Use case/motivation
For running CPU and memory intensive workloads, we want to continuously optimise the "limit_cpu" and "limit_memory" parameters. Hence, we want to provide these parameters as a part of the pipeline definition.
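A rough sketch of what this could look like (an assumption, not the provider's current API; the exact attribute name, `resources` vs. `container_resources`, differs between provider versions):
```python
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator


class TemplatedResourcesPodOperator(KubernetesPodOperator):
    # Add the resources argument to the templated fields so values such as
    # "{{ dag_run.conf['limit_memory'] }}" are rendered at runtime.
    template_fields = tuple(KubernetesPodOperator.template_fields) + ("resources",)
```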
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23529 | https://github.com/apache/airflow/pull/27457 | aefadb8c5b9272613d5806b054a1b46edf29d82e | 47a2b9ee7f1ff2cc1cc1aa1c3d1b523c88ba29fb | "2022-05-06T13:35:16Z" | python | "2022-11-09T08:47:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,523 | ["scripts/ci/docker-compose/integration-cassandra.yml"] | Cassandra container 3.0.26 fails to start on CI | ### Apache Airflow version
main (development)
### What happened
Cassandra released a new image (3.0.26) on 05.05.2022 and it broke our builds, for example:
* https://github.com/apache/airflow/runs/6320170343?check_suite_focus=true#step:10:6651
* https://github.com/apache/airflow/runs/6319805534?check_suite_focus=true#step:10:12629
* https://github.com/apache/airflow/runs/6319710486?check_suite_focus=true#step:10:6759
The problem was that the Cassandra container did not start cleanly:
```
ERROR: for airflow Container "3bd115315ba7" is unhealthy.
Encountered errors while bringing up the project.
3bd115315ba7 cassandra:3.0 "docker-entrypoint.s…" 5 minutes ago Up 5 minutes (unhealthy) 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp airflow-integration-postgres_cassandra_1
```
The logs of the Cassandra container do not show anything suspicious; Cassandra seems to start OK, but the container's health checks fail:
```
INFO 08:45:22 Using Netty Version: [netty-buffer=netty-buffer-4.0.44.Final.452812a, netty-codec=netty-codec-4.0.44.Final.452812a, netty-codec-haproxy=netty-codec-haproxy-4.0.44.Final.452812a, netty-codec-http=netty-codec-http-4.0.44.Final.452812a, netty-codec-socks=netty-codec-socks-4.0.44.Final.452812a, netty-common=netty-common-4.0.44.Final.452812a, netty-handler=netty-handler-4.0.44.Final.452812a, netty-tcnative=netty-tcnative-1.1.33.Fork26.142ecbb, netty-transport=netty-transport-4.0.44.Final.452812a, netty-transport-native-epoll=netty-transport-native-epoll-4.0.44.Final.452812a, netty-transport-rxtx=netty-transport-rxtx-4.0.44.Final.452812a, netty-transport-sctp=netty-transport-sctp-4.0.44.Final.452812a, netty-transport-udt=netty-transport-udt-4.0.44.Final.452812a]
INFO 08:45:22 Starting listening for CQL clients on /0.0.0.0:9042 (unencrypted)...
INFO 08:45:23 Not starting RPC server as requested. Use JMX (StorageService->startRPCServer()) or nodetool (enablethrift) to start it
INFO 08:45:23 Startup complete
INFO 08:45:24 Created default superuser role 'cassandra'
```
We mitigated it in #23522 by pinning Cassandra to version 3.0.25, but more investigation/outreach is needed.
### What you think should happen instead
Cassandra should start properly.
### How to reproduce
Revert #23522 and make a PR. The builds will start to fail with "cassandra unhealthy".
### Operating System
Github Actions
### Versions of Apache Airflow Providers
not relevant
### Deployment
Other
### Deployment details
CI
### Anything else
Always.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23523 | https://github.com/apache/airflow/pull/23537 | 953b85d8a911301c040a3467ab2a1ba2b6d37cd7 | 22a564296be1aee62d738105859bd94003ad9afc | "2022-05-06T10:40:06Z" | python | "2022-05-07T13:36:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,514 | ["airflow/providers/amazon/aws/hooks/s3.py", "tests/providers/amazon/aws/hooks/test_s3.py"] | Json files from S3 downloading as text files | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.3.0 (latest released)
### Operating System
Mac OS Mojave 10.14.6
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
When I download a json file from S3 using the S3Hook:
`filename = s3_hook.download_file(bucket_name=self.source_s3_bucket, key=key, local_path="./data")`
The file is being downloaded as a text file starting with `airflow_temp_`.
### What you think should happen instead
It would be nice to have it download as a JSON file, or keep the same filename as in S3, since it currently requires additional code to go back and read the file as a dictionary (e.g. `ast.literal_eval`), and there is no guarantee that the JSON structure is maintained.
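As a possible interim workaround (a sketch; the key name below is made up, the connection id and bucket are the ones from the code further down), `S3Hook.read_key` returns the object body as a string, which sidesteps the temp file entirely:
```python
import json

from airflow.hooks.S3_hook import S3Hook

s3_hook = S3Hook("Syntax_S3")
# read_key returns the object's content as a string instead of writing a temp file.
filing = json.loads(s3_hook.read_key(key="citations/example.json", bucket_name="processed-filings"))
```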
### How to reproduce
Where s3_conn_id is the Airflow connection and s3_bucket is a bucket on AWS S3.
This is the custom operator class:
```
from airflow.models.baseoperator import BaseOperator
from airflow.utils.decorators import apply_defaults
from airflow.hooks.S3_hook import S3Hook
import logging
import pickle  # needed for pickle.loads below


class S3SearchFilingsOperator(BaseOperator):
    """
    Queries the Datastore API and uploads the processed info as a csv to the S3 bucket.

    :param source_s3_bucket: Choose source s3 bucket
    :param source_s3_directory: Source s3 directory
    :param s3_conn_id: S3 Connection ID
    :param destination_s3_bucket: S3 Bucket Destination
    """

    @apply_defaults
    def __init__(
            self,
            source_s3_bucket=None,
            source_s3_directory=True,
            s3_conn_id=True,
            destination_s3_bucket=None,
            destination_s3_directory=None,
            search_terms=[],
            *args,
            **kwargs) -> None:
        super().__init__(*args, **kwargs)
        self.source_s3_bucket = source_s3_bucket
        self.source_s3_directory = source_s3_directory
        self.s3_conn_id = s3_conn_id
        self.destination_s3_bucket = destination_s3_bucket
        self.destination_s3_directory = destination_s3_directory

    def execute(self, context):
        """
        Executes the operator.
        """
        s3_hook = S3Hook(self.s3_conn_id)
        keys = s3_hook.list_keys(bucket_name=self.source_s3_bucket)
        for key in keys:
            # download file
            filename = s3_hook.download_file(bucket_name=self.source_s3_bucket, key=key, local_path="./data")
            logging.info(filename)
            with open(filename, 'rb') as handle:
                filing = handle.read()
                filing = pickle.loads(filing)
                logging.info(filing.keys())
```
And this is the dag file:
```
from keywordSearch.operators.s3_search_filings_operator import S3SearchFilingsOperator
from airflow import DAG
from airflow.utils.dates import days_ago
from datetime import timedelta
# from aws_pull import aws_pull

default_args = {
    "owner": "airflow",
    "depends_on_past": False,
    "start_date": days_ago(2),
    "email": ["airflow@example.com"],
    "email_on_failure": False,
    "email_on_retry": False,
    "retries": 1,
    "retry_delay": timedelta(seconds=30)
}

with DAG("keyword-search-full-load",
         default_args=default_args,
         description="Syntax Keyword Search",
         max_active_runs=1,
         schedule_interval=None) as dag:

    op3 = S3SearchFilingsOperator(
        task_id="s3_search_filings",
        source_s3_bucket="processed-filings",
        source_s3_directory="citations",
        s3_conn_id="Syntax_S3",
        destination_s3_bucket="keywordsearch",
        destination_s3_directory="results",
        dag=dag
    )

    op3
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23514 | https://github.com/apache/airflow/pull/26886 | d544e8fbeb362e76e14d7615d354a299445e5b5a | 777b57f0c6a8ca16df2b96fd17c26eab56b3f268 | "2022-05-05T21:59:08Z" | python | "2022-10-26T11:01:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,512 | ["airflow/cli/commands/webserver_command.py", "tests/cli/commands/test_webserver_command.py"] | Random "duplicate key value violates unique constraint" errors when initializing the postgres database | ### Apache Airflow version
2.3.0 (latest released)
### What happened
While testing Airflow 2.3.0 locally (using PostgreSQL 12.4), the webserver container shows random errors:
```
webserver_1 | + airflow db init
...
webserver_1 | + exec airflow webserver
...
webserver_1 | [2022-05-04 18:58:46,011] {{manager.py:568}} INFO - Added Permission menu access on Permissions to role Admin
postgres_1 | 2022-05-04 18:58:46.013 UTC [41] ERROR: duplicate key value violates unique constraint "ab_permission_view_role_permission_view_id_role_id_key"
postgres_1 | 2022-05-04 18:58:46.013 UTC [41] DETAIL: Key (permission_view_id, role_id)=(204, 1) already exists.
postgres_1 | 2022-05-04 18:58:46.013 UTC [41] STATEMENT: INSERT INTO ab_permission_view_role (id, permission_view_id, role_id) VALUES (nextval('ab_permission_view_role_id_seq'), 204, 1) RETURNING ab_permission_view_role.id
webserver_1 | [2022-05-04 18:58:46,015] {{manager.py:570}} ERROR - Add Permission to Role Error: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "ab_permission_view_role_permission_view_id_role_id_key"
webserver_1 | DETAIL: Key (permission_view_id, role_id)=(204, 1) already exists.
webserver_1 |
webserver_1 | [SQL: INSERT INTO ab_permission_view_role (id, permission_view_id, role_id) VALUES (nextval('ab_permission_view_role_id_seq'), %(permission_view_id)s, %(role_id)s) RETURNING ab_permission_view_role.id]
webserver_1 | [parameters: {'permission_view_id': 204, 'role_id': 1}]
```
Notes:
1. When the DB is first initialized, I have ~40 errors like this (with ~40 different `permission_view_id` values but always the same `'role_id': 1`)
2. When it's not the first time initializing the DB, I always have 1 error like this, but it shows a different `permission_view_id` each time
3. All these errors don't seem to have any real negative effects; the webserver is still running and Airflow is still running and scheduling tasks
4. "Occasionally" I do get real exceptions which render the webserver workers all dead:
```
postgres_1 | 2022-05-05 20:03:30.580 UTC [44] ERROR: duplicate key value violates unique constraint "ab_permission_view_role_permission_view_id_role_id_key"
postgres_1 | 2022-05-05 20:03:30.580 UTC [44] DETAIL: Key (permission_view_id, role_id)=(214, 1) already exists.
postgres_1 | 2022-05-05 20:03:30.580 UTC [44] STATEMENT: INSERT INTO ab_permission_view_role (id, permission_view_id, role_id) VALUES (nextval('ab_permission_view_role_id_seq'), 214, 1) RETURNING ab_permission_view_role.id
webserver_1 | [2022-05-05 20:03:30 +0000] [121] [ERROR] Exception in worker process
webserver_1 | Traceback (most recent call last):
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
webserver_1 | self.dialect.do_execute(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
webserver_1 | cursor.execute(statement, parameters)
webserver_1 | psycopg2.errors.UniqueViolation: duplicate key value violates unique constraint "ab_permission_view_role_permission_view_id_role_id_key"
webserver_1 | DETAIL: Key (permission_view_id, role_id)=(214, 1) already exists.
webserver_1 |
webserver_1 |
webserver_1 | The above exception was the direct cause of the following exception:
webserver_1 |
webserver_1 | Traceback (most recent call last):
webserver_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 589, in spawn_worker
webserver_1 | worker.init_process()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/base.py", line 134, in init_process
webserver_1 | self.load_wsgi()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/base.py", line 146, in load_wsgi
webserver_1 | self.wsgi = self.app.wsgi()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/app/base.py", line 67, in wsgi
webserver_1 | self.callable = self.load()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 58, in load
webserver_1 | return self.load_wsgiapp()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
webserver_1 | return util.import_app(self.app_uri)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/util.py", line 412, in import_app
webserver_1 | app = app(*args, **kwargs)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/airflow/www/app.py", line 158, in cached_app
webserver_1 | app = create_app(config=config, testing=testing)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/airflow/www/app.py", line 146, in create_app
webserver_1 | sync_appbuilder_roles(flask_app)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/airflow/www/app.py", line 68, in sync_appbuilder_roles
webserver_1 | flask_app.appbuilder.sm.sync_roles()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/airflow/www/security.py", line 580, in sync_roles
webserver_1 | self.update_admin_permission()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/airflow/www/security.py", line 562, in update_admin_permission
webserver_1 | self.get_session.commit()
webserver_1 | File "<string>", line 2, in commit
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1423, in commit
webserver_1 | self._transaction.commit(_to_root=self.future)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 829, in commit
webserver_1 | self._prepare_impl()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl
webserver_1 | self.session.flush()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3255, in flush
webserver_1 | self._flush(objects)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3395, in _flush
webserver_1 | transaction.rollback(_capture_exception=True)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
webserver_1 | compat.raise_(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
webserver_1 | raise exception
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3355, in _flush
webserver_1 | flush_context.execute()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py", line 453, in execute
webserver_1 | rec.execute(self)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py", line 576, in execute
webserver_1 | self.dependency_processor.process_saves(uow, states)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/dependency.py", line 1182, in process_saves
webserver_1 | self._run_crud(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/dependency.py", line 1245, in _run_crud
webserver_1 | connection.execute(statement, secondary_insert)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1200, in execute
webserver_1 | return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 313, in _execute_on_connection
webserver_1 | return connection._execute_clauseelement(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1389, in _execute_clauseelement
webserver_1 | ret = self._execute_context(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1748, in _execute_context
webserver_1 | self._handle_dbapi_exception(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1929, in _handle_dbapi_exception
webserver_1 | util.raise_(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
webserver_1 | raise exception
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
webserver_1 | self.dialect.do_execute(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
webserver_1 | cursor.execute(statement, parameters)
webserver_1 | sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "ab_permission_view_role_permission_view_id_role_id_key"
webserver_1 | DETAIL: Key (permission_view_id, role_id)=(214, 1) already exists.
webserver_1 |
webserver_1 | [SQL: INSERT INTO ab_permission_view_role (id, permission_view_id, role_id) VALUES (nextval('ab_permission_view_role_id_seq'), %(permission_view_id)s, %(role_id)s) RETURNING ab_permission_view_role.id]
webserver_1 | [parameters: {'permission_view_id': 214, 'role_id': 1}]
webserver_1 | (Background on this error at: http://sqlalche.me/e/14/gkpj)
webserver_1 | [2022-05-05 20:03:30 +0000] [121] [INFO] Worker exiting (pid: 121)
flower_1 | + exec airflow celery flower
scheduler_1 | + exec airflow scheduler
webserver_1 | [2022-05-05 20:03:31 +0000] [118] [INFO] Worker exiting (pid: 118)
webserver_1 | [2022-05-05 20:03:31 +0000] [119] [INFO] Worker exiting (pid: 119)
webserver_1 | [2022-05-05 20:03:31 +0000] [120] [INFO] Worker exiting (pid: 120)
worker_1 | + exec airflow celery worker
```
However, such exceptions are rare and purely random; I can't find a way to reproduce them consistently.
### What you think should happen instead
Prior to 2.3.0 there were no such errors.
### How to reproduce
_No response_
### Operating System
Linux Mint 20.3
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23512 | https://github.com/apache/airflow/pull/27297 | 9ab1a6a3e70b32a3cddddf0adede5d2f3f7e29ea | 8f99c793ec4289f7fc28d890b6c2887f0951e09b | "2022-05-05T20:00:11Z" | python | "2022-10-27T04:25:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,497 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "airflow/providers/cncf/kubernetes/utils/pod_manager.py", "tests/providers/cncf/kubernetes/utils/test_pod_manager.py"] | Tasks stuck indefinitely when following container logs | ### Apache Airflow version
2.2.4
### What happened
I observed that some workers hung randomly while running. Also, logs were not being reported. After some time, the pod status was "Completed" when inspecting via the k8s API, but not in Airflow, which still showed "status: running" for the pod.
After some investigation, the issue is in the new KubernetesPodOperator and depends on a current issue in the Kubernetes API.
When a log rotation event occurs in Kubernetes, the stream we consume in fetch_container_logs(follow=True, ...) is no longer being fed.
Therefore, the KubernetesPodOperator hangs indefinitely in the middle of the log. Only a SIGTERM can terminate it, as log consumption blocks execute() from finishing.
Ref to the issue in kubernetes: https://github.com/kubernetes/kubernetes/issues/59902
Linking to https://github.com/apache/airflow/issues/12103 for reference, as the result is more or less the same for end user (although the root cause is different)
### What you think should happen instead
Pod operator should not hang.
Pod operator could follow the new logs from the container - this is out of scope of airflow as ideally the k8s api does it automatically.
### Solution proposal
I think there are many possible ways to work around this on the Airflow side so it does not hang indefinitely (like making `fetch_container_logs` non-blocking for `execute` and instead always blocking until status.phase is completed, as is currently done when get_logs is not true).
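As a rough illustration of that idea (a sketch using the plain kubernetes client, not the provider's actual implementation):
```python
import time

from kubernetes import client


def follow_logs_until_done(core_v1: client.CoreV1Api, name: str, namespace: str, poll: float = 5.0) -> str:
    """Poll the pod phase and fetch whatever logs are currently available,
    instead of blocking forever on a log stream that can go stale after rotation."""
    while True:
        pod = core_v1.read_namespaced_pod(name=name, namespace=namespace)
        print(core_v1.read_namespaced_pod_log(name=name, namespace=namespace, tail_lines=100))
        if pod.status.phase in ("Succeeded", "Failed"):
            return pod.status.phase
        time.sleep(poll)
```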
### How to reproduce
Running multiple tasks will trigger this sooner or later. Also, one can configure more aggressive log rotation in k8s so this race is triggered more often.
#### Operating System
Debian GNU/Linux 11 (bullseye)
#### Versions of Apache Airflow Providers
```
apache-airflow==2.2.4
apache-airflow-providers-google==6.4.0
apache-airflow-providers-cncf-kubernetes==3.0.2
```
However, this should be reproducible with master.
#### Deployment
Official Apache Airflow Helm Chart
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23497 | https://github.com/apache/airflow/pull/28336 | 97006910a384579c9f0601a72410223f9b6a0830 | 6d2face107f24b7e7dce4b98ae3def1178e1fc4c | "2022-05-05T09:06:19Z" | python | "2023-03-04T18:08:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,476 | ["airflow/www/static/js/grid/TaskName.jsx"] | Grid View - Multilevel taskgroup shows white text on the UI | ### Apache Airflow version
2.3.0 (latest released)
### What happened
Blank text if there are nested Task Groups.
Nested TaskGroup - Graph view:

Nested TaskGroup - Grid view:

### What you think should happen instead
We should see the text, as at the top task group level.
### How to reproduce
### Deploy the DAG below:
```
from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.utils.dates import datetime
from airflow.utils.task_group import TaskGroup

with DAG(dag_id="grid_view_dag", start_date=datetime(2022, 5, 3, 0, 00), schedule_interval=None, concurrency=2,
         max_active_runs=2) as dag:
    parent_task_group = None
    for i in range(0, 10):
        with TaskGroup(group_id=f"tg_level_{i}", parent_group=parent_task_group) as tg:
            t = DummyOperator(task_id=f"task_level_{i}")
        parent_task_group = tg
```
### Go to the grid view and expand the nodes:

#### You can see the text after selecting it:

### Operating System
N/A
### Versions of Apache Airflow Providers
N/A
### Deployment
Docker-Compose
### Deployment details
reproducible using the following docker-compose file: https://airflow.apache.org/docs/apache-airflow/2.3.0/docker-compose.yaml
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23476 | https://github.com/apache/airflow/pull/23482 | d9902958448b9d6e013f90f14d2d066f3121dcd5 | 14befe3ad6a03f27e20357e9d4e69f99d19a06d1 | "2022-05-04T13:01:20Z" | python | "2022-05-04T15:30:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,473 | ["airflow/models/dagbag.py", "airflow/security/permissions.py", "airflow/www/security.py", "tests/www/test_security.py"] | Could not get DAG access permission after upgrade to 2.3.0 | ### Apache Airflow version
2.3.0 (latest released)
### What happened
I upgraded my Airflow instance from version 2.1.3 to 2.3.0, but ran into an issue where there are no permissions for new DAGs.
**The issue only happens for DAGs whose dag_id contains a dot.**
### What you think should happen instead
There should be 3 new permissions for a DAG.
### How to reproduce
+ Create a new DAG with an id, let's say `dag.id_1`
+ Go to the UI -> Security -> List Role
+ Edit any Role
+ Try to insert permissions of new DAG above to chosen role.
-> No permissions can be found for the DAG created above.
There are 3 DAG permissions named `can_read_DAG:dag`, `can_edit_DAG:dag`, `can_delete_DAG:dag`.
There should instead be 3 new permissions: `can_read_DAG:dag.id_1`, `can_edit_DAG:dag.id_1`, `can_delete_DAG:dag.id_1`.
### Operating System
Kubernetes
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23473 | https://github.com/apache/airflow/pull/23510 | ae3e68af3c42a53214e8264ecc5121049c3beaf3 | cc35fcaf89eeff3d89e18088c2e68f01f8baad56 | "2022-05-04T09:37:57Z" | python | "2022-06-08T07:47:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,460 | ["README.md", "breeze-complete", "dev/breeze/src/airflow_breeze/global_constants.py", "images/breeze/output-commands-hash.txt", "images/breeze/output-commands.svg", "images/breeze/output-config.svg", "images/breeze/output-shell.svg", "images/breeze/output-start-airflow.svg", "scripts/ci/libraries/_initialization.sh"] | Add Postgres 14 support | ### Description
_No response_
### Use case/motivation
Using Postgres 14 as backend
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23460 | https://github.com/apache/airflow/pull/23506 | 9ab9cd47cff5292c3ad602762ae3e371c992ea92 | 6169e0a69875fb5080e8d70cfd9d5e650a9d13ba | "2022-05-03T18:15:31Z" | python | "2022-05-11T16:26:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,425 | ["airflow/models/mappedoperator.py", "tests/models/test_taskinstance.py"] | Mapping over multiple parameters results in 1 task fewer than expected | ### Apache Airflow version
2.3.0 (latest released)
### What happened
While testing the [example](https://airflow.apache.org/docs/apache-airflow/2.3.0/concepts/dynamic-task-mapping.html#mapping-over-multiple-parameters) given for `Mapping over multiple parameters` I noticed only 5 tasks are being mapped rather than the expected 6.
task example from the doc:
```
@task
def add(x: int, y: int):
    return x + y
added_values = add.expand(x=[2, 4, 8], y=[5, 10])
```
The doc mentions:
```
# This results in the add function being called with
# add(x=2, y=5)
# add(x=2, y=10)
# add(x=4, y=5)
# add(x=4, y=10)
# add(x=8, y=5)
# add(x=8, y=10)
```
But when I create a DAG with the example, only 5 tasks are mapped instead of 6:

### What you think should happen instead
A task should be mapped for all 6 possible outcomes, rather than only 5
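For reference, a self-contained version of the documented example (a sketch; the DAG id and settings are illustrative) that should produce six mapped instances:
```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(start_date=datetime(2022, 1, 1), schedule_interval=None, catchup=False)
def multi_param_mapping():
    @task
    def add(x: int, y: int):
        return x + y

    # 3 values of x times 2 values of y should expand into 6 mapped task instances.
    add.expand(x=[2, 4, 8], y=[5, 10])


multi_param_mapping()
```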
### How to reproduce
Create a DAG using the example provided [here](https://airflow.apache.org/docs/apache-airflow/2.3.0/concepts/dynamic-task-mapping.html#mapping-over-multiple-parameters) and check the number of mapped instances:

### Operating System
macOS 11.5.2
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.3.0
apache-airflow-providers-celery==2.1.4
apache-airflow-providers-cncf-kubernetes==4.0.1
apache-airflow-providers-databricks==2.6.0
apache-airflow-providers-elasticsearch==3.0.3
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-google==6.8.0
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-microsoft-azure==3.8.0
apache-airflow-providers-postgres==4.1.0
apache-airflow-providers-redis==2.0.4
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-snowflake==2.6.0
apache-airflow-providers-sqlite==2.1.3
### Deployment
Astronomer
### Deployment details
Localhost instance of Astronomer Runtime 5.0.0
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23425 | https://github.com/apache/airflow/pull/23434 | 0fde90d92ae306f37041831f5514e9421eee676b | 3fb8e0b0b4e8810bedece873949871a94dd7387a | "2022-05-02T18:17:23Z" | python | "2022-05-04T19:02:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,420 | ["airflow/api_connexion/endpoints/dag_run_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/dag_run_schema.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py"] | Add a queue DAG run endpoint to REST API | ### Description
Add a POST endpoint to queue a dag run like we currently do [here](https://github.com/apache/airflow/issues/23419).
URL format: `api/v1/dags/{dag_id}/dagRuns/{dag_run_id}/queue`
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23420 | https://github.com/apache/airflow/pull/23481 | 1220c1a7a9698cdb15289d7066b29c209aaba6aa | 4485393562ea4151a42f1be47bea11638b236001 | "2022-05-02T17:42:15Z" | python | "2022-05-09T12:25:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,419 | ["airflow/api_connexion/endpoints/dag_run_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/dag_run_schema.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py"] | Add a DAG Run clear endpoint to REST API | ### Description
Add a POST endpoint to clear a dag run like we currently do [here](https://github.com/apache/airflow/blob/main/airflow/www/views.py#L2087).
URL format: `api/v1/dags/{dag_id}/dagRuns/{dag_run_id}/clear`
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23419 | https://github.com/apache/airflow/pull/23451 | f352ee63a5d09546a7997ba8f2f8702a1ddb4af7 | b83cc9b5e2c7e2516b0881861bbc0f8589cb531d | "2022-05-02T17:40:44Z" | python | "2022-05-24T03:30:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,415 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/dag_run_schema.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py", "tests/api_connexion/schemas/test_dag_run_schema.py"] | Add more fields to DAG Run API endpoints | ### Description
There are a few fields that would be useful to include in the REST API for getting a DAG run or list of DAG runs:
`data_interval_start`
`data_interval_end`
`last_scheduling_decision`
`run_type` as (backfill, manual and scheduled)
### Use case/motivation
We use this information in the Grid view as part of `tree_data`. If we added these extra fields to the REST API, we could remove all DAG run info from `tree_data`.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23415 | https://github.com/apache/airflow/pull/23440 | 22b49d334ef0008be7bd3d8481b55b8ab5d71c80 | 6178491a117924155963586b246d2bf54be5320f | "2022-05-02T17:26:24Z" | python | "2022-05-03T12:27:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,414 | ["airflow/migrations/utils.py", "airflow/migrations/versions/0110_2_3_2_add_cascade_to_dag_tag_foreignkey.py", "airflow/models/dag.py", "docs/apache-airflow/migrations-ref.rst"] | airflow db clean - Dag cleanup won't run if dag is tagged | ### Apache Airflow version
2.3.0 (latest released)
### What happened
When running `airflow db clean`, if a to-be-cleaned dag is also tagged, a foreign key constraint in dag_tag is violated. Full error:
```
sqlalchemy.exc.IntegrityError: (psycopg2.errors.ForeignKeyViolation) update or delete on table "dag" violates foreign key constraint "dag_tag_dag_id_fkey" on table "dag_tag"
DETAIL: Key (dag_id)=(some-dag-id-here) is still referenced from table "dag_tag".
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.3.0
apache-airflow-providers-cncf-kubernetes==4.0.1
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-microsoft-mssql==2.1.3
apache-airflow-providers-oracle==2.2.3
apache-airflow-providers-postgres==4.1.0
apache-airflow-providers-samba==3.0.4
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-sqlite==2.1.3
apache-airflow-providers-ssh==2.4.3
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23414 | https://github.com/apache/airflow/pull/23444 | e2401329345dcc5effa933b92ca969b8779755e4 | 8ccff9244a6d1a936d8732721373b967e95ec404 | "2022-05-02T17:23:19Z" | python | "2022-05-27T14:28:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,411 | ["airflow/sensors/base.py", "tests/serialization/test_dag_serialization.py", "tests/ti_deps/deps/test_ready_to_reschedule_dep.py"] | PythonSensor is not considering mode='reschedule', instead marking task UP_FOR_RETRY | ### Apache Airflow version
2.3.0 (latest released)
### What happened
A PythonSensor that works on versions <2.3.0 in mode reschedule is now marking the task as `UP_FOR_RETRY` instead.
Log says:
```
[2022-05-02, 15:48:23 UTC] {python.py:66} INFO - Poking callable: <function test at 0x7fd56286bc10>
[2022-05-02, 15:48:23 UTC] {taskinstance.py:1853} INFO - Rescheduling task, marking task as UP_FOR_RESCHEDULE
[2022-05-02, 15:48:23 UTC] {local_task_job.py:156} INFO - Task exited with return code 0
[2022-05-02, 15:48:23 UTC] {local_task_job.py:273} INFO - 0 downstream tasks scheduled from follow-on schedule check
```
But it directly marks it as `UP_FOR_RETRY` and then follows `retry_delay` and `retries`
### What you think should happen instead
It should mark the task as `UP_FOR_RESCHEDULE` and reschedule it according to the `poke_interval`
### How to reproduce
```
from datetime import datetime, timedelta

from airflow import DAG
from airflow.sensors.python import PythonSensor


def test():
    return False


default_args = {
    "owner": "airflow",
    "depends_on_past": False,
    "start_date": datetime(2022, 5, 2),
    "email_on_failure": False,
    "email_on_retry": False,
    "retries": 1,
    "retry_delay": timedelta(minutes=1),
}

dag = DAG("dag_csdepkrr_development_v001",
          default_args=default_args,
          catchup=False,
          max_active_runs=1,
          schedule_interval=None)

t1 = PythonSensor(task_id="PythonSensor",
                  python_callable=test,
                  poke_interval=30,
                  mode='reschedule',
                  dag=dag)
```
### Operating System
Latest Docker image
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==3.3.0
apache-airflow-providers-celery==2.1.4
apache-airflow-providers-cncf-kubernetes==4.0.1
apache-airflow-providers-docker==2.6.0
apache-airflow-providers-elasticsearch==3.0.3
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-google==6.8.0
apache-airflow-providers-grpc==2.0.4
apache-airflow-providers-hashicorp==2.2.0
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-microsoft-azure==3.8.0
apache-airflow-providers-mysql==2.2.3
apache-airflow-providers-odbc==2.0.4
apache-airflow-providers-oracle==2.2.3
apache-airflow-providers-postgres==4.1.0
apache-airflow-providers-redis==2.0.4
apache-airflow-providers-sendgrid==2.0.4
apache-airflow-providers-sftp==2.5.2
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-sqlite==2.1.3
apache-airflow-providers-ssh==2.4.3
```
### Deployment
Docker-Compose
### Deployment details
Latest Docker compose from the documentation
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23411 | https://github.com/apache/airflow/pull/23674 | d3b08802861b006fc902f895802f460a72d504b0 | f9e2a3051cd3a5b6fcf33bca4c929d220cf5661e | "2022-05-02T16:07:22Z" | python | "2022-05-17T12:18:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,396 | ["airflow/providers/cncf/kubernetes/utils/pod_manager.py"] | Airflow kubernetes pod operator fetch xcom fails | ### Apache Airflow version
2.3.0 (latest released)
### What happened
The Airflow KubernetesPodOperator fails when loading the XCom result. The relevant code is `_exec_pod_command`:
```python
def _exec_pod_command(self, resp, command: str) -> Optional[str]:
    if resp.is_open():
        self.log.info('Running command... %s\n', command)
        resp.write_stdin(command + '\n')
        while resp.is_open():
            resp.update(timeout=1)
            if resp.peek_stdout():
                return resp.read_stdout()
            if resp.peek_stderr():
                self.log.info("stderr from command: %s", resp.read_stderr())
                break
    return None
```
`_exec_pod_command` only reads the first available stdout chunk and does not read the full response. That partial content is then loaded with `json.loads`, which breaks with an "unterminated string" error.
### What you think should happen instead
It should not read partial content
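One possible shape for that (a sketch only, not necessarily the fix that was merged) is to accumulate stdout until the stream closes instead of returning the first available chunk:
```python
def _exec_pod_command(self, resp, command):
    # Sketch: collect every stdout chunk until the stream closes.
    chunks = []
    if resp.is_open():
        resp.write_stdin(command + '\n')
        while resp.is_open():
            resp.update(timeout=1)
            if resp.peek_stdout():
                chunks.append(resp.read_stdout())
            if resp.peek_stderr():
                self.log.info("stderr from command: %s", resp.read_stderr())
                break
    return ''.join(chunks) or None
```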
### How to reproduce
When the JSON returned via XCom is large.
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23396 | https://github.com/apache/airflow/pull/23490 | b0406f58f0c51db46d2da7c7c84a0b5c3d4f09ae | faae9faae396610086d5ea18d61c356a78a3d365 | "2022-05-02T00:42:02Z" | python | "2022-05-10T15:46:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,361 | ["airflow/models/taskinstance.py", "tests/jobs/test_scheduler_job.py"] | Scheduler crashes with psycopg2.errors.DeadlockDetected exception | ### Apache Airflow version
2.2.5 (latest released)
### What happened
Customer has a dag that generates around 2500 tasks dynamically using a task group. While running the dag, a subset of the tasks (~1000) run successfully with no issue and (~1500) of the tasks are getting "skipped", and the dag fails. The same DAG runs successfully in Airflow v2.1.3 with same Airflow configuration.
While investigating the Airflow processes, we found that both schedulers got restarted with the error below during the DAG execution.
```
[2022-04-27 20:42:44,347] {scheduler_job.py:742} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1256, in _execute_context
self.dialect.do_executemany(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/dialects/postgresql/psycopg2.py", line 912, in do_executemany
cursor.executemany(statement, parameters)
psycopg2.errors.DeadlockDetected: deadlock detected
DETAIL: Process 1646244 waits for ShareLock on transaction 3915993452; blocked by process 1640692.
Process 1640692 waits for ShareLock on transaction 3915992745; blocked by process 1646244.
HINT: See server log for query details.
CONTEXT: while updating tuple (189873,4) in relation "task_instance"
```
This issue seems to be related to #19957
### What you think should happen instead
This issue was observed while running a huge number of concurrent tasks created dynamically by a DAG. Some of the tasks are getting skipped due to the scheduler restarting with the deadlock exception.
### How to reproduce
DAG file:
```
from propmix_listings_details import BUCKET, ZIPS_FOLDER, CITIES_ZIP_COL_NAME, DETAILS_DEV_LIMIT, DETAILS_RETRY, DETAILS_CONCURRENCY, get_api_token, get_values, process_listing_ids_based_zip
from airflow.utils.task_group import TaskGroup
from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator
from airflow.operators.python_operator import PythonOperator
from datetime import datetime, timedelta

default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 0,
}

date = '{{ execution_date }}'
email_to = ['example@airflow.com']

# Using a DAG context manager, you don't have to specify the dag property of each task
state = 'Maha'
with DAG('listings_details_generator_{0}'.format(state),
         start_date=datetime(2021, 11, 18),
         schedule_interval=None,
         max_active_runs=1,
         concurrency=DETAILS_CONCURRENCY,
         dagrun_timeout=timedelta(minutes=10),
         catchup=False  # enable if you don't want historical dag runs to run
         ) as dag:

    t0 = DummyOperator(task_id='start')

    with TaskGroup(group_id='group_1') as tg1:
        token = get_api_token()
        zip_list = get_values(BUCKET, ZIPS_FOLDER + state, CITIES_ZIP_COL_NAME)
        for zip in zip_list[0:DETAILS_DEV_LIMIT]:
            details_operator = PythonOperator(
                task_id='details_{0}_{1}'.format(state, zip),  # task id is generated dynamically
                pool='pm_details_pool',
                python_callable=process_listing_ids_based_zip,
                task_concurrency=40,
                retries=3,
                retry_delay=timedelta(seconds=10),
                op_kwargs={'zip': zip, 'date': date, 'token': token, 'state': state}
            )

    t0 >> tg1
```
### Operating System
kubernetes cluster running on GCP linux (amd64)
### Versions of Apache Airflow Providers
pip freeze | grep apache-airflow-providers
apache-airflow-providers-amazon==1!3.2.0
apache-airflow-providers-cncf-kubernetes==1!3.0.0
apache-airflow-providers-elasticsearch==1!2.2.0
apache-airflow-providers-ftp==1!2.1.2
apache-airflow-providers-google==1!6.7.0
apache-airflow-providers-http==1!2.1.2
apache-airflow-providers-imap==1!2.2.3
apache-airflow-providers-microsoft-azure==1!3.7.2
apache-airflow-providers-mysql==1!2.2.3
apache-airflow-providers-postgres==1!4.1.0
apache-airflow-providers-redis==1!2.0.4
apache-airflow-providers-slack==1!4.2.3
apache-airflow-providers-snowflake==2.6.0
apache-airflow-providers-sqlite==1!2.1.3
apache-airflow-providers-ssh==1!2.4.3
### Deployment
Astronomer
### Deployment details
Airflow v2.2.5-2
Scheduler count: 2
Scheduler resources: 20AU (2CPU and 7.5GB)
Executor used: Celery
Worker count : 2
Worker resources: 24AU (2.4 CPU and 9GB)
Termination grace period : 2mins
### Anything else
This issue happens in all the DAG runs. Some of the tasks are getting skipped, some are succeeding, and the scheduler fails with the deadlock exception error.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23361 | https://github.com/apache/airflow/pull/25312 | 741c20770230c83a95f74fe7ad7cc9f95329f2cc | be2b53eaaf6fc136db8f3fa3edd797a6c529409a | "2022-04-29T13:05:15Z" | python | "2022-08-09T14:17:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,343 | ["tests/cluster_policies/__init__.py", "tests/dags_corrupted/test_nonstring_owner.py", "tests/models/test_dagbag.py"] | Silent DAG import error by making owner a list | ### Apache Airflow version
2.2.5 (latest released)
### What happened
If the argument `owner` is unhashable, such as a list, the DAG will fail to be imported, but will also not report as an import error. If the DAG is new, it will simply be missing. If this is an update to the existing DAG, the webserver will continue to show the old version.
### What you think should happen instead
A DAG import error should be raised.
### How to reproduce
Set the `owner` argument for a task to be a list. See this minimal reproduction DAG.
```
from datetime import datetime

from airflow.decorators import dag, task


@dag(
    schedule_interval="@daily",
    start_date=datetime(2021, 1, 1),
    catchup=False,
    default_args={"owner": ["person"]},
    tags=['example'])
def demo_bad_owner():
    @task()
    def say_hello():
        print("hello")


demo_bad_owner()
```
### Operating System
Debian Bullseye
### Versions of Apache Airflow Providers
None needed.
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
The worker appears to still be able to execute the tasks when updating an existing DAG. Not sure how that's possible.
Also reproduced on 2.3.0rc2.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23343 | https://github.com/apache/airflow/pull/23359 | 9a0080c20bb2c4a9c0f6ccf1ece79bde895688ac | c4887bcb162aab9f381e49cecc2f212600c493de | "2022-04-28T22:09:14Z" | python | "2022-05-02T10:58:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,327 | ["airflow/providers/google/cloud/operators/gcs.py"] | GCSTransformOperator: provide Jinja templating in source and destination object names | ### Description
Provide an option to receive the source_object and destination_object via Jinja params.
### Use case/motivation
Use case: need to execute a DAG that fetches a video from a GCS bucket based on a parameter, then transforms it and stores it back.
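A sketch of the intended usage once these fields are templated (the bucket names, script path, and conf key below are made up; the relevant operator in the google provider is `GCSFileTransformOperator`):
```python
from airflow.providers.google.cloud.operators.gcs import GCSFileTransformOperator

transform_video = GCSFileTransformOperator(
    task_id="transform_video",
    source_bucket="my-source-bucket",
    source_object="videos/{{ dag_run.conf['video_name'] }}",
    destination_bucket="my-destination-bucket",
    destination_object="transformed/{{ dag_run.conf['video_name'] }}",
    transform_script=["python", "/opt/scripts/transform.py"],
)
```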
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23327 | https://github.com/apache/airflow/pull/23328 | 505af06303d8160c71f6a7abe4792746f640083d | c82b3b94660a38360f61d47676ed180a0d32c189 | "2022-04-28T12:27:11Z" | python | "2022-04-28T17:07:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,306 | ["docs/helm-chart/production-guide.rst"] | Helm chart production guide fails to inform resultBackendSecretName parameter should be used | ### What do you see as an issue?
The [production guide](https://airflow.apache.org/docs/helm-chart/stable/production-guide.html) indicates that the code below is what is necessary for deploying with secrets. But `resultBackendSecretName` should also be filled, or Airflow won't start.
```
data:
  metadataSecretName: mydatabase
```
In addition to that, the expected URL is different in both variables.
`resultBackendSecretName` expects a URL that starts with `db+postgresql://`, while `metadataSecretName` expects `postgresql://` or `postgres://` and won't work with `db+postgresql://`. To solve this, it might be necessary to create multiple secrets.
Just in case this is relevant, I'm using CeleryKubernetesExecutor.
### Solving the problem
Docs should warn about the issue above.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23306 | https://github.com/apache/airflow/pull/23307 | 3977e1798d8294ba628b5f330f43702c1a5c79fc | 48915bd149bd8b58853880d63b8c6415688479ec | "2022-04-27T20:34:07Z" | python | "2022-05-04T21:28:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,292 | ["airflow/providers/google/cloud/hooks/cloud_sql.py"] | GCP Composer v1.18.6 and 2.0.10 incompatible with CloudSqlProxyRunner | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
6.6.0 or above
### Apache Airflow version
2.2.3
### Operating System
n/a
### Deployment
Composer
### Deployment details
_No response_
### What happened
Hi! A [user on StackOverflow](https://stackoverflow.com/questions/71975635/gcp-composer-v1-18-6-and-2-0-10-incompatible-with-cloudsqlproxyrunner
) and some Cloud SQL engineers at Google noticed that the CloudSQLProxyRunner was broken by [this commit](https://github.com/apache/airflow/pull/22127/files#diff-5992ce7fff93c23c57833df9ef892e11a023494341b80a9fefa8401f91988942L454)
### What you think should happen instead
Ideally DAGs should continue to work as they did before
### How to reproduce
Make a DAG that connects to Cloud SQL using the CloudSQLProxyRunner in Composer 1.18.6 or above using the google providers 6.6.0 or above and see a 404
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23292 | https://github.com/apache/airflow/pull/23299 | 0c9c1cf94acc6fb315a9bc6f5bf1fbd4e4b4c923 | 1f3260354988b304cf31d5e1d945ce91798bed48 | "2022-04-27T17:34:37Z" | python | "2022-04-28T13:42:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,285 | ["airflow/models/taskmixin.py", "airflow/utils/edgemodifier.py", "airflow/utils/task_group.py", "tests/utils/test_edgemodifier.py"] | Cycle incorrectly detected in DAGs when using Labels within Task Groups | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
When attempting to create a DAG containing Task Groups that have Labels between the nodes inside them, the DAG fails to import due to cycle detection.
Consider this DAG:
```python
from pendulum import datetime
from airflow.decorators import dag, task, task_group
from airflow.utils.edgemodifier import Label
@task
def begin():
    ...


@task
def end():
    ...


@dag(start_date=datetime(2022, 1, 1), schedule_interval=None)
def task_groups_with_edge_labels():
    @task_group
    def group():
        begin() >> Label("label") >> end()

    group()
_ = task_groups_with_edge_labels()
```
When attempting to import the DAG, this error message is displayed:
<img width="1395" alt="image" src="https://user-images.githubusercontent.com/48934154/165566299-3dd65cff-5e36-47d3-a243-7bc33d4344d6.png">
This also occurs on the `main` branch.
### What you think should happen instead
Users should be able to specify Labels between tasks within a Task Group.
### How to reproduce
- Use the DAG mentioned above and try to import into an Airflow environment
- Or, create a simple unit test of the following and execute said test.
```python
def test_cycle_task_group_with_edge_labels(self):
    from airflow.models.baseoperator import chain
    from airflow.utils.task_group import TaskGroup
    from airflow.utils.edgemodifier import Label

    dag = DAG('dag', start_date=DEFAULT_DATE, default_args={'owner': 'owner1'})
    with dag:
        with TaskGroup(group_id="task_group") as task_group:
            op1 = EmptyOperator(task_id='A')
            op2 = EmptyOperator(task_id='B')
            op1 >> Label("label") >> op2

    assert not check_cycle(dag)
```
A `AirflowDagCycleException` should be thrown:
```
tests/utils/test_dag_cycle.py::TestCycleTester::test_cycle_task_group_with_edge_labels FAILED [100%]
=============================================================================================== FAILURES ===============================================================================================
________________________________________________________________________ TestCycleTester.test_cycle_task_group_with_edge_labels ________________________________________________________________________
self = <tests.utils.test_dag_cycle.TestCycleTester testMethod=test_cycle_task_group_with_edge_labels>
def test_cycle_task_group_with_edge_labels(self):
from airflow.models.baseoperator import chain
from airflow.utils.task_group import TaskGroup
from airflow.utils.edgemodifier import Label
dag = DAG('dag', start_date=DEFAULT_DATE, default_args={'owner': 'owner1'})
with dag:
with TaskGroup(group_id="task_group") as task_group:
op1 = EmptyOperator(task_id='A')
op2 = EmptyOperator(task_id='B')
op1 >> Label("label") >> op2
> assert not check_cycle(dag)
tests/utils/test_dag_cycle.py:168:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
airflow/utils/dag_cycle_tester.py:76: in check_cycle
child_to_check = _check_adjacent_tasks(current_task_id, task)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
task_id = 'task_group.B', current_task = <Task(EmptyOperator): task_group.B>
def _check_adjacent_tasks(task_id, current_task):
"""Returns first untraversed child task, else None if all tasks traversed."""
for adjacent_task in current_task.get_direct_relative_ids():
if visited[adjacent_task] == CYCLE_IN_PROGRESS:
msg = f"Cycle detected in DAG. Faulty task: {task_id}"
> raise AirflowDagCycleException(msg)
E airflow.exceptions.AirflowDagCycleException: Cycle detected in DAG. Faulty task: task_group.B
airflow/utils/dag_cycle_tester.py:62: AirflowDagCycleException
---------------------------------------------------------------------------------------- Captured stdout setup -----------------------------------------------------------------------------------------
========================= AIRFLOW ==========================
Home of the user: /root
Airflow home /root/airflow
Skipping initializing of the DB as it was initialized already.
You can re-initialize the database by adding --with-db-init flag when running tests.
======================================================================================= short test summary info ========================================================================================
FAILED tests/utils/test_dag_cycle.py::TestCycleTester::test_cycle_task_group_with_edge_labels - airflow.exceptions.AirflowDagCycleException: Cycle detected in DAG. Faulty task: task_group.B
==================================================================================== 1 failed, 2 warnings in 1.08s =====================================================================================
```
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
N/A
### Deployment
Astronomer
### Deployment details
This issue also occurs on the `main` branch using Breeze.
### Anything else
Possibly related to #21404
When the Label is removed, no cycle is detected.
```python
from pendulum import datetime
from airflow.decorators import dag, task, task_group
from airflow.utils.edgemodifier import Label
@task
def begin():
...
@task
def end():
...
@dag(start_date=datetime(2022, 1, 1), schedule_interval=None)
def task_groups_with_edge_labels():
@task_group
def group():
begin() >> end()
group()
_ = task_groups_with_edge_labels()
```
<img width="1437" alt="image" src="https://user-images.githubusercontent.com/48934154/165566908-a521d685-a032-482e-9e6b-ef85f0743e64.png">
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23285 | https://github.com/apache/airflow/pull/23291 | 726b27f86cf964924e5ee7b29a30aefe24dac45a | 3182303ce50bda6d5d27a6ef4e19450fb4e47eea | "2022-04-27T16:28:04Z" | python | "2022-04-27T18:12:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,284 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/task_schema.py", "tests/api_connexion/endpoints/test_task_endpoint.py", "tests/api_connexion/schemas/test_task_schema.py"] | Get DAG tasks in REST API does not include is_mapped | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
The rest API endpoint for get [/dags/{dag_id}/tasks](https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/get_tasks) does not include `is_mapped`.
Example: `consumer` is mapped but I have no way to tell that from the API response:
<img width="306" alt="Screen Shot 2022-04-27 at 11 35 54 AM" src="https://user-images.githubusercontent.com/4600967/165556420-f8ade6e6-e904-4be0-a759-5281ddc04cba.png">
<img width="672" alt="Screen Shot 2022-04-27 at 11 35 25 AM" src="https://user-images.githubusercontent.com/4600967/165556310-742ec23d-f5a8-4cae-bea1-d00fd6c6916f.png">
### What you think should happen instead
A caller of `GET /dags/{dag_id}/tasks` should be able to tell whether or not a task is mapped.
### How to reproduce
Call `GET /dags/{dag_id}/tasks` on a DAG with mapped tasks and note that there is no way to determine from the response body whether a task is mapped. A minimal sketch of such a check is below.
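For reference, a minimal illustrative sketch of the check that comes up empty today (host, DAG id, and credentials are placeholder assumptions; the response shape follows the stable REST API's TaskCollection):
```python
# Illustrative sketch only: host, DAG id and credentials are placeholders.
import requests

resp = requests.get(
    "http://localhost:8080/api/v1/dags/my_mapped_dag/tasks",
    auth=("admin", "admin"),
)
resp.raise_for_status()
for task in resp.json()["tasks"]:
    # There is currently no "is_mapped" key here to distinguish mapped tasks.
    print(task["task_id"], sorted(task.keys()))
```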
### Operating System
Mac OSX
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23284 | https://github.com/apache/airflow/pull/23319 | 98ec8c6990347fda60cbad33db915dc21497b1f0 | f3d80c2a0dce93b908d7c9de30c9cba673eb20d5 | "2022-04-27T15:37:09Z" | python | "2022-04-28T12:54:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,249 | ["airflow/cli/commands/task_command.py", "tests/cli/commands/test_task_command.py"] | Pool option does not work in backfill command | ### Apache Airflow version
2.2.4
### What happened
Discussion Ref: https://github.com/apache/airflow/discussions/22201
I added the `--pool` option to the backfill command, but it only uses `default_pool`.
The log below shows the pool being passed to the task, but if you check the Task Instance Details / Pools list in the UI, `default_pool` is used.
```
--------------------------------------------------------------------------------
[2022-03-12, 20:03:44 KST] {taskinstance.py:1244} INFO - Starting attempt 1 of 1
[2022-03-12, 20:03:44 KST] {taskinstance.py:1245} INFO -
--------------------------------------------------------------------------------
[2022-03-12, 20:03:44 KST] {taskinstance.py:1264} INFO - Executing <Task(BashOperator): runme_0> on 2022-03-05 00:00:00+00:00
[2022-03-12, 20:03:44 KST] {standard_task_runner.py:52} INFO - Started process 555 to run task
[2022-03-12, 20:03:45 KST] {standard_task_runner.py:76} INFO - Running: ['***', 'tasks', 'run', 'example_bash_operator', 'runme_0', 'backfill__2022-03-05T00:00:00+00:00', '--job-id', '127', '--pool', 'backfill_pool', '--raw', '--subdir', '/home/***/.local/lib/python3.8/site-packages/***/example_dags/example_bash_operator.py', '--cfg-path', '/tmp/tmprhjr0bc_', '--error-file', '/tmp/tmpkew9ufim']
[2022-03-12, 20:03:45 KST] {standard_task_runner.py:77} INFO - Job 127: Subtask runme_0
[2022-03-12, 20:03:45 KST] {logging_mixin.py:109} INFO - Running <TaskInstance: example_bash_operator.runme_0 backfill__2022-03-05T00:00:00+00:00 [running]> on host 56d55382c860
[2022-03-12, 20:03:45 KST] {taskinstance.py:1429} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=***
AIRFLOW_CTX_DAG_ID=example_bash_operator
AIRFLOW_CTX_TASK_ID=runme_0
AIRFLOW_CTX_EXECUTION_DATE=2022-03-05T00:00:00+00:00
AIRFLOW_CTX_DAG_RUN_ID=backfill__2022-03-05T00:00:00+00:00
[2022-03-12, 20:03:45 KST] {subprocess.py:62} INFO - Tmp dir root location:
/tmp
[2022-03-12, 20:03:45 KST] {subprocess.py:74} INFO - Running command: ['bash', '-c', 'echo "example_bash_operator__runme_0__20220305" && sleep 1']
[2022-03-12, 20:03:45 KST] {subprocess.py:85} INFO - Output:
[2022-03-12, 20:03:46 KST] {subprocess.py:89} INFO - example_bash_operator__runme_0__20220305
[2022-03-12, 20:03:47 KST] {subprocess.py:93} INFO - Command exited with return code 0
[2022-03-12, 20:03:47 KST] {taskinstance.py:1272} INFO - Marking task as SUCCESS. dag_id=example_bash_operator, task_id=runme_0, execution_date=20220305T000000, start_date=20220312T110344, end_date=20220312T110347
[2022-03-12, 20:03:47 KST] {local_task_job.py:154} INFO - Task exited with return code 0
[2022-03-12, 20:03:47 KST] {local_task_job.py:264} INFO - 0 downstream tasks scheduled from follow-on schedule check
```
### What you think should happen instead
The backfill task instance should use a slot in the backfill_pool.
### How to reproduce
1. Create a backfill_pool in UI.
2. Run the backfill command on the example dag.
```
$ docker exec -it airflow_airflow-scheduler_1 /bin/bash
$ airflow dags backfill example_bash_operator -s 2022-03-05 -e 2022-03-06 \
--pool backfill_pool --reset-dagruns -y
[2022-03-12 11:03:52,720] {backfill_job.py:386} INFO - [backfill progress] | finished run 0 of 2 | tasks waiting: 2 | succeeded: 8 | running: 2 | failed: 0 | skipped: 2 | deadlocked: 0 | not ready: 2
[2022-03-12 11:03:57,574] {dagrun.py:545} INFO - Marking run <DagRun example_bash_operator @ 2022-03-05T00:00:00+00:00: backfill__2022-03-05T00:00:00+00:00, externally triggered: False> successful
[2022-03-12 11:03:57,575] {dagrun.py:590} INFO - DagRun Finished: dag_id=example_bash_operator, execution_date=2022-03-05T00:00:00+00:00, run_id=backfill__2022-03-05T00:00:00+00:00, run_start_date=2022-03-12 11:03:37.530158+00:00, run_end_date=2022-03-12 11:03:57.575869+00:00, run_duration=20.045711, state=success, external_trigger=False, run_type=backfill, data_interval_start=2022-03-05T00:00:00+00:00, data_interval_end=2022-03-06 00:00:00+00:00, dag_hash=None
[2022-03-12 11:03:57,582] {dagrun.py:545} INFO - Marking run <DagRun example_bash_operator @ 2022-03-06T00:00:00+00:00: backfill__2022-03-06T00:00:00+00:00, externally triggered: False> successful
[2022-03-12 11:03:57,583] {dagrun.py:590} INFO - DagRun Finished: dag_id=example_bash_operator, execution_date=2022-03-06T00:00:00+00:00, run_id=backfill__2022-03-06T00:00:00+00:00, run_start_date=2022-03-12 11:03:37.598927+00:00, run_end_date=2022-03-12 11:03:57.583295+00:00, run_duration=19.984368, state=success, external_trigger=False, run_type=backfill, data_interval_start=2022-03-06 00:00:00+00:00, data_interval_end=2022-03-07 00:00:00+00:00, dag_hash=None
[2022-03-12 11:03:57,584] {backfill_job.py:386} INFO - [backfill progress] | finished run 2 of 2 | tasks waiting: 0 | succeeded: 10 | running: 0 | failed: 0 | skipped: 4 | deadlocked: 0 | not ready: 0
[2022-03-12 11:03:57,589] {backfill_job.py:851} INFO - Backfill done. Exiting.
```
### Operating System
macOS Big Sur, docker-compose
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
Follow the guide [Running Airflow in Docker](https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html). Use CeleryExecutor.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23249 | https://github.com/apache/airflow/pull/23258 | 511d0ee256b819690ccf0f6b30d12340b1dd7f0a | 3970ea386d5e0a371143ad1e69b897fd1262842d | "2022-04-26T10:48:39Z" | python | "2022-04-30T19:11:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,246 | ["airflow/api_connexion/endpoints/task_instance_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/task_instance_schema.py", "airflow/www/static/js/types/api-generated.ts", "tests/api_connexion/endpoints/test_task_instance_endpoint.py"] | Add api call for changing task instance status | ### Description
In the UI you can change the status of a task instance, but there is no API call available for the same feature.
It would be nice to have an api call for this as well.
### Use case/motivation
I found a solution on Stack Overflow, [How to add manual tasks in an Apache Airflow Dag]. It suggests letting the task fail and then manually marking it as succeeded once the manual work is done.
Our project has many manual tasks. This suggestion seems like a good option, but there is no API call yet to use instead of changing all the statuses manually in the UI. I would like to use an API call for this instead.
You can already change the status of a DAG run, so it also seems natural to have something similar for task instances; a hypothetical sketch of such a call follows.
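A hypothetical sketch of the kind of call I have in mind (this endpoint does not exist yet; the path, payload, and auth below are illustrative assumptions only):
```python
# Hypothetical only: no such endpoint exists at the time of writing.
import requests

resp = requests.patch(
    "http://localhost:8080/api/v1/dags/my_dag/dagRuns/my_run/taskInstances/manual_step",
    auth=("admin", "admin"),
    json={"state": "success"},  # mark the manual task as done
)
print(resp.status_code)
```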
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23246 | https://github.com/apache/airflow/pull/26165 | 5c37b503f118b8ad2585dff9949dd8fdb96689ed | 1e6f1d54c54e5dc50078216e23ba01560ebb133c | "2022-04-26T09:17:52Z" | python | "2022-10-31T05:31:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,227 | ["airflow/api_connexion/endpoints/task_instance_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/task_instance_schema.py", "airflow/www/static/js/types/api-generated.ts", "tests/api_connexion/schemas/test_task_instance_schema.py"] | Ability to clear a specific DAG Run's task instances via REST APIs | ### Discussed in https://github.com/apache/airflow/discussions/23220
<div type='discussions-op-text'>
<sup>Originally posted by **yashk97** April 25, 2022</sup>
Hi,
My use case: when multiple DAG Runs fail on some task (not necessarily the same task in all of them), I want to individually re-trigger each of these DAG Runs. Currently, I have to rely on the Airflow UI (attached screenshots), where I select the failed task and clear its state (along with the downstream tasks) to re-run from that point. While this works, it becomes tedious if the number of failed DAG Runs is huge.
I checked the REST API Documentation and came across the clear Task Instances API with the following URL: /api/v1/dags/{dag_id}/clearTaskInstances
However, it filters task instances of the specified DAG in a given date range.
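For reference, a minimal sketch of how the existing endpoint is called today (host, DAG id, dates, and auth are placeholders; the field names follow the stable REST API reference):
```python
# Illustrative sketch of the current date-range based clearTaskInstances call.
import requests

resp = requests.post(
    "http://localhost:8080/api/v1/dags/my_dag/clearTaskInstances",
    auth=("admin", "admin"),
    json={
        "dry_run": True,
        "start_date": "2022-04-01T00:00:00Z",
        "end_date": "2022-04-02T00:00:00Z",
        "only_failed": True,
    },
)
print(resp.json())
```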
I was wondering if, for a specified DAG Run, we can clear a task along with its downstream tasks irrespective of the states of the tasks or the DAG run through REST API.
This will give us more granular control over re-running DAGs from the point of failure.


</div> | https://github.com/apache/airflow/issues/23227 | https://github.com/apache/airflow/pull/23516 | 3221ed5968423ea7a0dc7e1a4b51084351c2d56b | eceb4cc5888a7cf86a9250fff001fede2d6aba0f | "2022-04-25T18:40:24Z" | python | "2022-08-05T17:27:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,171 | ["airflow/api/common/mark_tasks.py", "airflow/models/dag.py", "tests/models/test_dag.py", "tests/test_utils/mapping.py"] | Mark Success on a mapped task, reruns other failing mapped tasks | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
Have a DAG with mapped tasks. Mark at least two mapped tasks as failed. Mark one of the failures as success. See the other task(s) switch to `no_status` and rerun.

### What you think should happen instead
Marking a single mapped task as a success probably shouldn't affect other failed mapped tasks.
### How to reproduce
_No response_
### Operating System
OSX
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23171 | https://github.com/apache/airflow/pull/23177 | d262a72ca7ab75df336b93cefa338e7ba3f90ebb | 26a9ec65816e3ec7542d63ab4a2a494931a06c9b | "2022-04-22T14:25:54Z" | python | "2022-04-25T09:03:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,168 | ["airflow/api_connexion/schemas/connection_schema.py", "tests/api_connexion/endpoints/test_connection_endpoint.py"] | Getting error "Extra Field may not be null" while hitting create connection api with extra=null | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
Getting error "Extra Field may not be null" while hitting create connection api with extra=null
```
{
"detail": "{'extra': ['Field may not be null.']}",
"status": 400,
"title": "Bad Request",
"type": "http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/stable-rest-api-ref.html#section/Errors/BadRequest"
}
```
### What you think should happen instead
I should be able to create the connection through the API even when `extra` is null.
### How to reproduce
Steps to reproduce:
1. Hit the connections endpoint with the JSON body below (a requests sketch follows the body).
API endpoint - `api/v1/connections`
HTTP method - POST
JSON body -
```
{
"connection_id": "string6",
"conn_type": "string",
"host": "string",
"login": null,
"schema": null,
"port": null,
"password": "pa$$word",
"extra":null
}
```
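A minimal sketch of the failing request (host and auth are placeholder assumptions; the 400 response is the one quoted above):
```python
# Illustrative sketch: posting the body above with "extra": null returns a 400.
import requests

resp = requests.post(
    "http://localhost:8080/api/v1/connections",
    auth=("admin", "admin"),
    json={
        "connection_id": "string6",
        "conn_type": "string",
        "host": "string",
        "login": None,
        "schema": None,
        "port": None,
        "password": "pa$$word",
        "extra": None,
    },
)
print(resp.status_code, resp.json())  # 400: "Field may not be null."
```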
### Operating System
debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
Astro dev start
### Anything else
As per the OpenAPI schema below, I am assuming it may be null.
```
Connection:
description: Full representation of the connection.
allOf:
- $ref: '#/components/schemas/ConnectionCollectionItem'
- type: object
properties:
password:
type: string
format: password
writeOnly: true
description: Password of the connection.
extra:
type: string
nullable: true
description: Other values that cannot be put into another field, e.g. RSA keys.
```
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23168 | https://github.com/apache/airflow/pull/23183 | b33cd10941dd10d461023df5c2d3014f5dcbb7ac | b45240ad21ca750106931ba2b882b3238ef2b37d | "2022-04-22T10:48:23Z" | python | "2022-04-25T14:55:36Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,145 | ["airflow/executors/kubernetes_executor.py", "tests/executors/test_kubernetes_executor.py"] | Task stuck in "scheduled" when running in backfill job | ### Apache Airflow version
2.2.4
### What happened
We are running Airflow 2.2.4 with KubernetesExecutor. I have created a DAG that runs the `airflow dags backfill` command with SubprocessHook (a sketch of that task is at the end of this section). What I observed is that when I started to backfill a few days' worth of dag runs, the backfill would get stuck, with some dag runs having tasks stuck in the "scheduled" state and never starting to run.
We are using the default pool, and the pool is totally free when the tasks get stuck.
I could find some logs saying:
`TaskInstance: <TaskInstance: test_dag_2.task_1 backfill__2022-03-29T00:00:00+00:00 [queued]> found in queued state but was not launched, rescheduling` and nothing else in the log.
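For context, a minimal sketch of the kind of SubprocessHook-based backfill task described above (the command, dag id, and dates are placeholders, not my exact code):
```python
# Illustrative sketch only; not the exact task used in production.
from airflow.hooks.subprocess import SubprocessHook

def run_backfill():
    result = SubprocessHook().run_command(
        command=[
            "airflow", "dags", "backfill", "test_dag_2",
            "-s", "2022-03-01", "-e", "2022-03-10", "--rerun-failed-tasks",
        ],
    )
    # result.exit_code / result.output hold the subprocess outcome
    print(result.exit_code)
```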
### What you think should happen instead
The tasks stuck in "scheduled" should start running when there is a free slot in the pool.
### How to reproduce
Airflow 2.2.4 with python 3.8.13, KubernetesExecutor running in AWS EKS.
One backfill command example is: `airflow dags backfill test_dag_2 -s 2022-03-01 -e 2022-03-10 --rerun-failed-tasks`
The test_dag_2 dag is like:
```
import time
from datetime import timedelta
import pendulum
from airflow import DAG
from airflow.decorators import task
from airflow.models.dag import dag
from airflow.operators.bash import BashOperator
from airflow.operators.dummy import DummyOperator
from airflow.operators.python import PythonOperator
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'email': ['airflow@example.com'],
'email_on_failure': True,
'email_on_retry': False,
'retries': 1,
'retry_delay': timedelta(minutes=5),
}
def get_execution_date(**kwargs):
ds = kwargs['ds']
print(ds)
with DAG(
'test_dag_2',
default_args=default_args,
description='Testing dag',
start_date=pendulum.datetime(2022, 4, 2, tz='UTC'),
schedule_interval="@daily", catchup=True, max_active_runs=1,
) as dag:
t1 = BashOperator(
task_id='task_1',
depends_on_past=False,
bash_command='sleep 30'
)
t2 = PythonOperator(
task_id='get_execution_date',
python_callable=get_execution_date
)
t1 >> t2
```
### Operating System
Debian GNU/Linux
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.0.0
apache-airflow-providers-celery==2.1.0
apache-airflow-providers-cncf-kubernetes==3.0.2
apache-airflow-providers-docker==2.4.1
apache-airflow-providers-elasticsearch==2.2.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-google==6.4.0
apache-airflow-providers-grpc==2.0.1
apache-airflow-providers-hashicorp==2.1.1
apache-airflow-providers-http==2.0.3
apache-airflow-providers-imap==2.2.0
apache-airflow-providers-microsoft-azure==3.6.0
apache-airflow-providers-microsoft-mssql==2.1.0
apache-airflow-providers-odbc==2.0.1
apache-airflow-providers-postgres==3.0.0
apache-airflow-providers-redis==2.0.1
apache-airflow-providers-sendgrid==2.0.1
apache-airflow-providers-sftp==2.4.1
apache-airflow-providers-slack==4.2.0
apache-airflow-providers-snowflake==2.5.0
apache-airflow-providers-sqlite==2.1.0
apache-airflow-providers-ssh==2.4.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23145 | https://github.com/apache/airflow/pull/23720 | 49cfb6498eed0acfc336a24fd827b69156d5e5bb | 640d4f9636d3867d66af2478bca15272811329da | "2022-04-21T12:29:32Z" | python | "2022-11-18T01:09:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,092 | ["airflow/www/static/css/bootstrap-theme.css"] | UI: Transparent border causes dropshadow to render 1px away from Action dropdown menu in Task Instance list | ### Apache Airflow version
2.2.5 (latest released)
### What happened
Airflow:
> Astronomer Certified: v2.2.5.post1 based on Apache Airflow v2.2.5
> Git Version: .release:2.2.5+astro.1+90fc013e6e4139e2d4bfe438ad46c3af1d523668
Due to this CSS in `airflowDefaultTheme.ce329611a683ab0c05fd.css`:
```css
.dropdown-menu {
background-clip: padding-box;
background-color: #fff;
border: 1px solid transparent; /* <-- transparent border */
}
```
the dropdown border and dropshadow renders...weirdly:

Zoomed in - take a close look at the border and how the contents underneath the dropdown bleed through the border, making the dropshadow render 1px away from the dropdown menu:

### What you think should happen instead
When I remove the aberrant line of CSS above, it cascades to this in `bootstrap.min.css`:
```css
.dropdown-menu {
...
border: 1px solid rgba(0,0,0,.15);
...
}
```
which renders the border as gray:

So I think we should not use a transparent border, or we should remove the explicit border from the dropdown and let Bootstrap control it.
### How to reproduce
Spin up an instance of Airflow with `astro dev start`, trigger a DAG, inspect the DAG details, and list all task instances of a DAG run. Then click the Actions dropdown menu.
### Operating System
macOS 11.6.4 Big Sur (Intel)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
Astro installed via Homebrew:
> Astro CLI Version: 0.28.1, Git Commit: 980c0d7bd06b818a2cb0e948bb101d0b27e3a90a
> Astro Server Version: 0.28.4-rc9
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23092 | https://github.com/apache/airflow/pull/27789 | 8b1ebdacd8ddbe841a74830f750ed8f5e6f38f0a | d233c12c30f9a7f3da63348f3f028104cb14c76b | "2022-04-19T17:56:36Z" | python | "2022-11-19T23:57:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,083 | ["BREEZE.rst", "TESTING.rst", "dev/breeze/src/airflow_breeze/commands/testing.py", "dev/breeze/src/airflow_breeze/shell/enter_shell.py", "dev/breeze/src/airflow_breeze/utils/docker_command_utils.py", "images/breeze/output-commands.svg", "images/breeze/output-tests.svg"] | Breeze: Running integration tests in Breeze | We should be able to run integration tests with Breeze - this is extension of `test` unit tests command that should allow to enable --integrations (same as in Shell) and run the tests with only the integration tests selected. | https://github.com/apache/airflow/issues/23083 | https://github.com/apache/airflow/pull/23445 | 83784d9e7b79d2400307454ccafdacddaee16769 | 7ba4e35a9d1b65b4c1a318ba4abdf521f98421a2 | "2022-04-19T14:17:28Z" | python | "2022-05-06T09:03:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,082 | ["BREEZE.rst", "TESTING.rst", "dev/breeze/src/airflow_breeze/commands/testing.py", "dev/breeze/src/airflow_breeze/shell/enter_shell.py", "dev/breeze/src/airflow_breeze/utils/docker_command_utils.py", "images/breeze/output-commands.svg", "images/breeze/output-tests.svg"] | Breeze: Add running unit tests with Breeze | We should be able to run unit tests automatically from breeze (`test` command in legacy-breeze) | https://github.com/apache/airflow/issues/23082 | https://github.com/apache/airflow/pull/23445 | 83784d9e7b79d2400307454ccafdacddaee16769 | 7ba4e35a9d1b65b4c1a318ba4abdf521f98421a2 | "2022-04-19T14:15:49Z" | python | "2022-05-06T09:03:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,068 | ["airflow/www/static/js/tree/InstanceTooltip.jsx", "airflow/www/static/js/tree/details/content/dagRun/index.jsx", "airflow/www/static/js/tree/details/content/taskInstance/Details.jsx", "airflow/www/static/js/tree/details/content/taskInstance/MappedInstances.jsx", "airflow/www/utils.py"] | Grid view: "duration" shows 00:00:00 | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
Run [a dag with an expanded TimedeltaSensor and a normal TimedeltaSensor](https://gist.github.com/MatrixManAtYrService/051fdc7164d187ab215ff8087e4db043), and navigate to the corresponding entries in the grid view.
While the dag runs:
- The unmapped task shows its "duration" to be increasing
- The mapped task shows a blank entry for the duration
Once the dag has finished:
- both show `00:00:00` for the duration
### What you think should happen instead
I'm not sure what it should show, probably time spent running? Or maybe queued + running? Whatever it should be, 00:00:00 doesn't seem right if it spent 90 seconds waiting around (e.g. in the "running" state)
Also, if we're going to update duration continuously while the normal task is running, we should do the same for the expanded task.
### How to reproduce
run a dag with expanded sensors, notice 00:00:00 duration
### Operating System
debian (docker)
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
`astrocloud dev start`
Dockerfile:
```
FROM quay.io/astronomer/ap-airflow-dev:main
```
image at airflow version 6d6ac2b2bcbb0547a488a1a13fea3cb1a69d24e8
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23068 | https://github.com/apache/airflow/pull/23259 | 511ea702d5f732582d018dad79754b54d5e53f9d | 9e2531fa4d9890f002d184121e018e3face5586b | "2022-04-19T03:11:17Z" | python | "2022-04-26T15:42:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,042 | ["airflow/www/static/css/graph.css", "airflow/www/static/js/graph.js"] | Graph view: Nodes arrows are cut | ### Body
<img width="709" alt="Screen Shot 2022-04-15 at 17 37 37" src="https://user-images.githubusercontent.com/45845474/163584251-f1ea5bc7-e132-41c4-a20c-cc247b81b899.png">
Reproduce example using [example_emr_job_flow_manual_steps](https://github.com/apache/airflow/blob/b3cae77218788671a72411a344aab42a3c58e89c/airflow/providers/amazon/aws/example_dags/example_emr_job_flow_manual_steps.py) in the AWS provider.
As already discussed with @bbovenzi, this issue will be fixed after 2.3.0 as it requires quite a few changes... also this is not a regression and it's just a "cosmetic" issue in very specific DAGs.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/23042 | https://github.com/apache/airflow/pull/23044 | 749e53def43055225a2e5d09596af7821d91b4ac | 028087b5a6e94fd98542d0e681d947979eb1011f | "2022-04-15T14:45:05Z" | python | "2022-05-12T19:47:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,028 | ["airflow/cli/commands/task_command.py"] | `airflow tasks states-for-dag-run` has no `map_index` column | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
I ran:
```
$ airflow tasks states-for-dag-run taskmap_xcom_pull 'manual__2022-04-14T13:27:04.958420+00:00'
dag_id | execution_date | task_id | state | start_date | end_date
==================+==================================+===========+=========+==================================+=================================
taskmap_xcom_pull | 2022-04-14T13:27:04.958420+00:00 | foo | success | 2022-04-14T13:27:05.343134+00:00 | 2022-04-14T13:27:05.598641+00:00
taskmap_xcom_pull | 2022-04-14T13:27:04.958420+00:00 | bar | success | 2022-04-14T13:27:06.256684+00:00 | 2022-04-14T13:27:06.462664+00:00
taskmap_xcom_pull | 2022-04-14T13:27:04.958420+00:00 | identity | success | 2022-04-14T13:27:07.480364+00:00 | 2022-04-14T13:27:07.713226+00:00
taskmap_xcom_pull | 2022-04-14T13:27:04.958420+00:00 | identity | success | 2022-04-14T13:27:07.512084+00:00 | 2022-04-14T13:27:07.768716+00:00
taskmap_xcom_pull | 2022-04-14T13:27:04.958420+00:00 | identity | success | 2022-04-14T13:27:07.546097+00:00 | 2022-04-14T13:27:07.782719+00:00
```
...targeting a dagrun for which `identity` had three expanded tasks. All three showed up, but the output didn't show me enough to know which one was which.
### What you think should happen instead
There should be a `map_index` column so that I know which one is which.
### How to reproduce
Run a dag with expanded tasks, then try to view their states via the cli
### Operating System
debian (docker)
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23028 | https://github.com/apache/airflow/pull/23030 | 10c9cb5318fd8a9e41a7b4338e5052c8feece7ae | b24650c0cc156ceb5ef5791f1647d4d37a529920 | "2022-04-14T23:35:08Z" | python | "2022-04-19T02:23:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,947 | ["airflow/hooks/dbapi.py"] | closing connection chunks in DbApiHook.get_pandas_df | ### Apache Airflow version
2.2.5 (latest released)
### What happened
Hi all,
Please be patient with me, it's my first Bugreport in git at all :)
**Affected function:** DbApiHook.get_pandas_df
**Short description**: If I use DbApiHook.get_pandas_df with parameter "chunksize" the connection is lost
**Error description**
I tried using the DbApiHook.get_pandas_df function instead of pandas.read_sql. Without the parameter "chunksize" both functions work the same. But as soon as I add the parameter chunksize to get_pandas_df, I lose the connection in the first iteration. This happens both when querying Oracle and Mysql (Mariadb) databases.
During my research I found a comment on a closed issue that describes the same -> [#8468](https://github.com/apache/airflow/issues/8468)
My Airflow version: 2.2.5
I think it has something to do with the `with closing(...)` context manager, because when I remove it, the chunksize argument works.
```
def get_pandas_df(self, sql, parameters=None, **kwargs):
"""
Executes the sql and returns a pandas dataframe
:param sql: the sql statement to be executed (str) or a list of
sql statements to execute
:param parameters: The parameters to render the SQL query with.
:param kwargs: (optional) passed into pandas.io.sql.read_sql method
"""
try:
from pandas.io import sql as psql
except ImportError:
raise Exception("pandas library not installed, run: pip install 'apache-airflow[pandas]'.")
# Not working
with closing(self.get_conn()) as conn:
return psql.read_sql(sql, con=conn, params=parameters, **kwargs)
        # would work:
        # return psql.read_sql(sql, con=conn, params=parameters, **kwargs)
```
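A minimal sketch of one way the hook could keep the connection open while the chunks are consumed (an assumption on my part, not necessarily the fix that should be merged):
```python
# Illustrative sketch: with chunksize, read_sql returns a lazy iterator, so the
# connection must stay open until the caller has finished iterating.
def get_pandas_df(self, sql, parameters=None, **kwargs):
    from contextlib import closing

    from pandas.io import sql as psql

    conn = self.get_conn()
    if "chunksize" not in kwargs:
        with closing(conn):
            return psql.read_sql(sql, con=conn, params=parameters, **kwargs)

    def _chunks():
        # close the connection only after the last chunk has been yielded
        with closing(conn):
            yield from psql.read_sql(sql, con=conn, params=parameters, **kwargs)

    return _chunks()
```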
### What you think should happen instead
It should give me a chunk of DataFrame
### How to reproduce
**not working**
```
src_hook = OracleHook(oracle_conn_id='oracle_source_conn_id')
query = "select * from example_table"
for chunk in src_hook.get_pandas_df(query,chunksize=2):
print(chunk.head())
```
**works**
```
for chunk in src_hook.get_pandas_df(query):
print(chunk.head())
```
**works**
```
for chunk in pandas.read_sql(query,src_hook.get_conn(),chunksize=2):
print(chunk.head())
```
### Operating System
macOS Monterey
### Versions of Apache Airflow Providers
apache-airflow 2.2.5
apache-airflow-providers-ftp 2.1.2
apache-airflow-providers-http 2.1.2
apache-airflow-providers-imap 2.2.3
apache-airflow-providers-microsoft-mssql 2.1.3
apache-airflow-providers-mongo 2.3.3
apache-airflow-providers-mysql 2.2.3
apache-airflow-providers-oracle 2.2.3
apache-airflow-providers-salesforce 3.4.3
apache-airflow-providers-sftp 2.5.2
apache-airflow-providers-sqlite 2.1.3
apache-airflow-providers-ssh 2.4.3
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22947 | https://github.com/apache/airflow/pull/23452 | 41e94b475e06f63db39b0943c9d9a7476367083c | ab1f637e463011a34d950c306583400b7a2fceb3 | "2022-04-12T11:41:24Z" | python | "2022-05-31T10:39:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,942 | ["airflow/models/taskinstance.py", "tests/models/test_trigger.py"] | Deferrable operator trigger event payload is not persisted in db and not passed to completion method | ### Apache Airflow version
2.2.5 (latest released)
### What happened
When a trigger fires, its event payload is added to `next_kwargs` under the 'event' key.
This gets persisted in the DB when the operator did not provide `next_kwargs`, but when `next_kwargs` is already present, the payload is added by modifying the existing dict in place and is not persisted in the DB.
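A minimal sketch of the distinction I mean (illustrative only, not the actual patch; `next_kwargs` is the TaskInstance attribute mentioned above):
```python
# Illustrative only: in-place mutation vs. reassignment of next_kwargs.
def attach_event(ti, event_payload):
    # In-place mutation of the existing dict -- this is what happens today and
    # the change is not persisted:
    #     ti.next_kwargs["event"] = event_payload
    # Assigning a new dict instead would be detected and persisted:
    ti.next_kwargs = {**(ti.next_kwargs or {}), "event": event_payload}
```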
### What you think should happen instead
It should persist the trigger event payload in the DB even when `next_kwargs` are provided by the operator.
### How to reproduce
_No response_
### Operating System
any
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22942 | https://github.com/apache/airflow/pull/22944 | a801ea3927b8bf3ca154fea3774ebf2d90e74e50 | bab740c0a49b828401a8baf04eb297d083605ae8 | "2022-04-12T10:00:48Z" | python | "2022-04-13T18:26:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,931 | ["airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | XCom is cleared when a task resumes from deferral. | ### Apache Airflow version
2.2.5 (latest released)
### What happened
A task's XCom value is cleared when a task is rescheduled after being deferred.
### What you think should happen instead
XCom should not be cleared in this case, as it is still the same task run.
### How to reproduce
```
from datetime import datetime, timedelta
from airflow import DAG
from airflow.models import BaseOperator
from airflow.triggers.temporal import TimeDeltaTrigger
class XComPushDeferOperator(BaseOperator):
def execute(self, context):
context["ti"].xcom_push("test", "test_value")
self.defer(
trigger=TimeDeltaTrigger(delta=timedelta(seconds=10)),
method_name="next",
)
def next(self, context, event=None):
pass
with DAG(
"xcom_clear", schedule_interval=None, start_date=datetime(2022, 4, 11),
) as dag:
XComPushDeferOperator(task_id="xcom_push")
```
### Operating System
macOS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22931 | https://github.com/apache/airflow/pull/22932 | 4291de218e0738f32f516afe0f9d6adce7f3220d | 8b687ec82a7047fc35410f5c5bb0726de434e749 | "2022-04-12T00:34:38Z" | python | "2022-04-12T06:12:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,912 | ["airflow/www/static/css/main.css"] | Text wrap for task group tooltips | ### Description
Improve the readability of task group tooltips by wrapping the text after a certain number of characters.
### Use case/motivation
When tooltips have a lot of words in them, and your computer monitor is fairly large, Airflow will display the task group tooltip on one very long line. This can be difficult to read. It would be nice if after, say, 60 characters, additional tooltip text would be displayed on a new line.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22912 | https://github.com/apache/airflow/pull/22978 | 0cd8833df74f4b0498026c4103bab130e1fc1068 | 2f051e303fd433e64619f931eab2180db44bba23 | "2022-04-11T15:46:34Z" | python | "2022-04-13T13:57:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,897 | ["airflow/www/views.py", "tests/www/views/test_views_log.py"] | Invalid JSON metadata in get_logs_with_metadata causes server error. | ### Apache Airflow version
2.2.5 (latest released)
### What happened
Invalid JSON metadata in get_logs_with_metadata causes server error. The `json.loads` exception is not handled like validation in other endpoints.
http://127.0.0.1:8080/get_logs_with_metadata?execution_date=2015-11-16T14:34:15+00:00&metadata=invalid
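A minimal sketch of the kind of guard that would turn this into a 400 (names are assumptions for illustration, not the actual patch):
```python
# Illustrative only: validate the metadata query parameter before using it.
import json

from flask import request
from werkzeug.exceptions import BadRequest

def _parse_metadata():
    raw = request.args.get("metadata", "null")
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        raise BadRequest("metadata must be valid JSON")
```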
### What you think should happen instead
A proper error response (HTTP 400 with a clear message) should be returned instead of a server error.
### How to reproduce
Access the endpoint below with an invalid (non-JSON) `metadata` payload:
http://127.0.0.1:8080/get_logs_with_metadata?execution_date=2015-11-16T14:34:15+00:00&metadata=invalid
### Operating System
Ubuntu 20.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22897 | https://github.com/apache/airflow/pull/22898 | 8af77127f1aa332c6e976c14c8b98b28c8a4cd26 | a3dd8473e4c5bbea214ebc8d5545b75281166428 | "2022-04-11T08:03:51Z" | python | "2022-04-11T10:48:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,810 | ["airflow/providers/jira/sensors/jira.py"] | JiraTicketSensor duplicates TaskId | ### Apache Airflow Provider(s)
jira
### Versions of Apache Airflow Providers
apache-airflow-providers-jira==2.0.1
### Apache Airflow version
2.2.2
### Operating System
Amazon Linux 2
### Deployment
MWAA
### Deployment details
_No response_
### What happened
I've been trying to use the Jira Operator to create a Ticket from Airflow and use the JiraTicketSensor to check if the ticket was resolved. Creating the task works fine, but I can't get the Sensor to work.
If I don't provide the `method_name`, I get an error that it is required; if I provide it as None, I get an error saying the Task id has already been added to the DAG.
```text
Broken DAG: [/usr/local/airflow/dags/jira_ticket_sensor.py] Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/models/baseoperator.py", line 553, in __init__
task_group.add(self)
File "/usr/local/lib/python3.7/site-packages/airflow/utils/task_group.py", line 175, in add
raise DuplicateTaskIdFound(f"Task id '{key}' has already been added to the DAG")
airflow.exceptions.DuplicateTaskIdFound: Task id 'jira_sensor' has already been added to the DAG
```
### What you think should happen instead
_No response_
### How to reproduce
use this dag
```python
from datetime import datetime
from airflow import DAG
from airflow.providers.jira.sensors.jira import JiraTicketSensor
with DAG(
dag_id='jira_ticket_sensor',
schedule_interval=None,
start_date=datetime(2021, 1, 1),
catchup=False
) as dag:
jira_sensor = JiraTicketSensor(
task_id='jira_sensor',
jira_conn_id='jira_default',
ticket_id='TEST-1',
field='status',
expected_value='Completed',
method_name='issue',
poke_interval=600
)
```
### Anything else
This error occurs every time
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22810 | https://github.com/apache/airflow/pull/23046 | e82a2fdf841dd571f3b8f456c4d054cf3a94fc03 | bf10545d8358bcdb9ca5dacba101482296251cab | "2022-04-07T10:43:06Z" | python | "2022-04-25T11:16:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,790 | ["chart/templates/secrets/metadata-connection-secret.yaml", "tests/charts/test_basic_helm_chart.py"] | Helm deployment fails when postgresql.nameOverride is used | ### Apache Airflow version
2.2.5 (latest released)
### What happened
Helm installation fails with the following config:
```
postgresql:
enabled: true
nameOverride: overridename
```
The problem is manifested in the `-airflow-metadata` secret where the connection string will be generated without respect to the `nameOverride`
With the example config the generated string should be:
`postgresql://postgres:postgres@myrelease-overridename:5432/postgres?sslmode=disable`
but the actual string generated is:
`postgresql://postgres:postgres@myrelease-overridename.namespace:5432/postgres?sslmode=disable`
### What you think should happen instead
Installation should succeed with correctly generated metadata connection string
### How to reproduce
To reproduce just set the following in values.yaml and attempt `helm install`
```
postgresql:
enabled: true
nameOverride: overridename
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
using helm with kind cluster
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22790 | https://github.com/apache/airflow/pull/29214 | 338a633fc9faab54e72c408e8a47eeadb3ad55f5 | 56175e4afae00bf7ccea4116ecc09d987a6213c3 | "2022-04-06T16:28:38Z" | python | "2023-02-02T17:00:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,738 | ["airflow/models/taskinstance.py", "airflow/utils/log/secrets_masker.py", "tests/utils/log/test_secrets_masker.py"] | Webserver doesn't mask rendered fields for pending tasks | ### Apache Airflow version
2.2.5 (latest released)
### What happened
When triggering a new dagrun the webserver will not mask secrets in the rendered fields for that dagrun's tasks which didn't start yet.
Tasks which have completed or are in state running are not affected by this.
### What you think should happen instead
The webserver should mask all secrets for tasks which have started or not started.
<img width="628" alt="Screenshot 2022-04-04 at 15 36 29" src="https://user-images.githubusercontent.com/7921017/161628806-c2c579e2-faea-40cc-835c-ac6802d15dc1.png">
.
### How to reproduce
Create a variable `my_secret` and run this DAG
```python
from datetime import timedelta
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.sensors.time_delta import TimeDeltaSensor
from airflow.utils.dates import days_ago
with DAG(
"secrets",
start_date=days_ago(1),
schedule_interval=None,
) as dag:
wait = TimeDeltaSensor(
task_id="wait",
delta=timedelta(minutes=1),
)
task = wait >> BashOperator(
task_id="secret_task",
bash_command="echo '{{ var.value.my_secret }}'",
)
```
While the first task `wait` is running, displaying rendered fields for the second task `secret_task` will show the unmasked secret variable.
<img width="1221" alt="Screenshot 2022-04-04 at 15 33 43" src="https://user-images.githubusercontent.com/7921017/161628734-b7b13190-a3fe-4898-8fa9-ff7537245c1c.png">
### Operating System
Debian (Astronomer Airflow Docker image)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==1!3.2.0
apache-airflow-providers-cncf-kubernetes==1!3.0.0
apache-airflow-providers-elasticsearch==1!3.0.2
apache-airflow-providers-ftp==1!2.1.2
apache-airflow-providers-google==1!6.7.0
apache-airflow-providers-http==1!2.1.2
apache-airflow-providers-imap==1!2.2.3
apache-airflow-providers-microsoft-azure==1!3.7.2
apache-airflow-providers-mysql==1!2.2.3
apache-airflow-providers-postgres==1!4.1.0
apache-airflow-providers-redis==1!2.0.4
apache-airflow-providers-slack==1!4.2.3
apache-airflow-providers-sqlite==1!2.1.3
apache-airflow-providers-ssh==1!2.4.3
```
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
We have seen this issue also in Airflow 2.2.3.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22738 | https://github.com/apache/airflow/pull/23807 | 10a0d8e7085f018b7328533030de76b48de747e2 | 2dc806367c3dc27df5db4b955d151e789fbc78b0 | "2022-04-04T20:47:44Z" | python | "2022-05-21T15:36:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,705 | ["airflow/providers/google/cloud/transfers/local_to_gcs.py", "tests/providers/google/cloud/transfers/test_local_to_gcs.py"] | LocalFileSystemToGCSOperator give false positive while copying file from src to dest, even when src has no file | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==6.4.0
### Apache Airflow version
2.1.4
### Operating System
Debian GNU/Linux 10 (buster)
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
When you run LocalFilesystemToGCSOperator with the params for src and dest, the operator reports a false positive (success) when there are no files present under the specified src directory. I expected it to fail, stating that the specified directory doesn't have any files. The task log below shows the run being marked successful without copying anything:
```
[2022-03-15 14:26:15,475] {taskinstance.py:1107} INFO - Executing <Task(LocalFilesystemToGCSOperator): upload_files_to_GCS> on 2022-03-15T14:25:59.554459+00:00
[2022-03-15 14:26:15,484] {standard_task_runner.py:52} INFO - Started process 709 to run task
[2022-03-15 14:26:15,492] {standard_task_runner.py:76} INFO - Running: ['***', 'tasks', 'run', 'dag', 'upload_files_to_GCS', '2022-03-15T14:25:59.554459+00:00', '--job-id', '1562', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/dag.py', '--cfg-path', '/tmp/tmp_e9t7pl9', '--error-file', '/tmp/tmpyij6m4er']
[2022-03-15 14:26:15,493] {standard_task_runner.py:77} INFO - Job 1562: Subtask upload_files_to_GCS
[2022-03-15 14:26:15,590] {logging_mixin.py:104} INFO - Running <TaskInstance: dag.upload_files_to_GCS 2022-03-15T14:25:59.554459+00:00 [running]> on host 653e566fd372
[2022-03-15 14:26:15,752] {taskinstance.py:1300} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=jet2
AIRFLOW_CTX_DAG_ID=dag
AIRFLOW_CTX_TASK_ID=upload_files_to_GCS
AIRFLOW_CTX_EXECUTION_DATE=2022-03-15T14:25:59.554459+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2022-03-15T14:25:59.554459+00:00
[2022-03-15 14:26:19,357] {taskinstance.py:1204} INFO - Marking task as SUCCESS. gag, task_id=upload_files_to_GCS, execution_date=20220315T142559, start_date=20220315T142615, end_date=20220315T142619
[2022-03-15 14:26:19,422] {taskinstance.py:1265} INFO - 1 downstream tasks scheduled from follow-on schedule check
[2022-03-15 14:26:19,458] {local_task_job.py:149} INFO - Task exited with return code 0
```
### What you think should happen instead
The operator should at least log that no files were copied rather than just marking the task successful.
### How to reproduce
- create a DAG with LocalFilesystemToGCSOperator (a minimal sketch is below)
- specify an empty directory as src and a GCP bucket via the bucket param (dst can be blank)
- run the dag
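A minimal sketch of such a task (bucket name and paths are placeholder assumptions):
```python
# Illustrative reproduction sketch; the source directory contains no files.
from airflow.providers.google.cloud.transfers.local_to_gcs import (
    LocalFilesystemToGCSOperator,
)

upload_files_to_GCS = LocalFilesystemToGCSOperator(
    task_id="upload_files_to_GCS",
    src="/tmp/empty_dir/*",   # no files match this pattern
    dst="landing/",
    bucket="my-example-bucket",
)
```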
### Anything else
No
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22705 | https://github.com/apache/airflow/pull/22772 | 921ccedf7f90f15e8d18c27a77b29d232be3c8cb | 838cf401b9a424ad0fbccd5fb8d3040a8f4a7f44 | "2022-04-02T11:30:11Z" | python | "2022-04-06T19:22:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,693 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "airflow/providers/cncf/kubernetes/utils/pod_manager.py", "tests/providers/cncf/kubernetes/operators/test_kubernetes_pod.py"] | KubernetesPodOperator failure email alert with actual error log from command executed | ### Description
When a command executed using KubernetesPodOperator fails, the alert email only says:
`Exception: Pod Launching failed: Pod pod_name_xyz returned a failure`
along with other parameters supplied to the operator, but it doesn't contain the actual error message thrown by the command.
~~I am thinking similar to how xcom works with KubernetesPodOperator, if the command could write the error log in sidecar container in /airflow/log/error.log and airflow picks that up, then it could be included in the alert email (probably at the top). It can use same sidecar as for xcom (if that is easier to maintain) but write in different folder.~~
Looks like kubernetes has a way to send termination message.
https://kubernetes.io/docs/tasks/debug-application-cluster/determine-reason-pod-failure/
Just need to pull that from the container status message and include it in the failure message at the top; a sketch is below.
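A minimal sketch of pulling that message with the Kubernetes Python client (illustrative only, not existing operator code):
```python
# Illustrative only: read the termination message from the pod's container statuses.
def get_termination_message(pod):
    for status in pod.status.container_statuses or []:
        terminated = status.state.terminated
        if terminated and terminated.message:
            return terminated.message
    return None
```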
### Use case/motivation
Similar to how the email alert for most other operators includes the key error message right there, without having to log in to Airflow to see the logs, I am expecting similar functionality from KubernetesPodOperator too.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22693 | https://github.com/apache/airflow/pull/22871 | ddb5d9b4a2b4e6605f66f82a6bec30393f096c05 | d81703c5778e13470fcd267578697158776b8318 | "2022-04-01T17:07:52Z" | python | "2022-04-14T00:16:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,689 | ["docs/apache-airflow-providers-apache-hdfs/index.rst"] | HDFS provider causes TypeError: __init__() got an unexpected keyword argument 'encoding' | ### Discussed in https://github.com/apache/airflow/discussions/22301
<div type='discussions-op-text'>
<sup>Originally posted by **frankie1211** March 16, 2022</sup>
I built a custom container image; below is my Dockerfile.
```dockerfile
FROM apache/airflow:2.2.4-python3.9
USER root
RUN apt-get update \
&& apt-get install -y gcc g++ vim libkrb5-dev build-essential libsasl2-dev \
&& apt-get autoremove -yqq --purge \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
USER airflow
RUN pip install --upgrade pip
RUN pip install apache-airflow-providers-apache-spark --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.2.4/constraints-3.9.txt"
RUN pip install apache-airflow-providers-apache-hdfs --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.2.4/constraints-3.9.txt"
```
But I got the error below when I ran the container:
```
airflow-init_1 | The container is run as root user. For security, consider using a regular user account.
airflow-init_1 | ....................
airflow-init_1 | ERROR! Maximum number of retries (20) reached.
airflow-init_1 |
airflow-init_1 | Last check result:
airflow-init_1 | $ airflow db check
airflow-init_1 | Traceback (most recent call last):
airflow-init_1 | File "/home/airflow/.local/bin/airflow", line 5, in <module>
airflow-init_1 | from airflow.__main__ import main
airflow-init_1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/__main__.py", line 28, in <module>
airflow-init_1 | from airflow.cli import cli_parser
airflow-init_1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 621, in <module>
airflow-init_1 | type=argparse.FileType('w', encoding='UTF-8'),
airflow-init_1 | TypeError: __init__() got an unexpected keyword argument 'encoding'
airflow-init_1 |
airflow_airflow-init_1 exited with code 1
```
</div> | https://github.com/apache/airflow/issues/22689 | https://github.com/apache/airflow/pull/29614 | 79c07e3fc5d580aea271ff3f0887291ae9e4473f | 0a4184e34c1d83ad25c61adc23b838e994fc43f1 | "2022-04-01T14:05:22Z" | python | "2023-02-19T20:37:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,675 | ["airflow/providers/google/cloud/transfers/gcs_to_gcs.py", "tests/providers/google/cloud/transfers/test_gcs_to_gcs.py"] | GCSToGCSOperator cannot copy a single file/folder without copying other files/folders with that prefix | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.2.4 (latest released)
### Operating System
MacOS 12.2.1
### Deployment
Composer
### Deployment details
_No response_
### What happened
I have file "hourse.jpeg" and "hourse.jpeg.copy" and a folder "hourse.jpeg.folder" in source bucket.
I use the following code to try to copy only "hourse.jpeg" to another bucket.
```python
gcs_to_gcs_op = GCSToGCSOperator(
    task_id="gcs_to_gcs",
    source_bucket=my_source_bucket,
    source_object="hourse.jpeg",
    destination_bucket=my_destination_bucket
)
```
The result is the two files and one folder mentioned above are copied.
From the source code, it seems there is no way to do what I want.
### What you think should happen instead
Only the file specified should be copied; that means we should treat source_object as an exact match instead of a prefix.
To get the current prefix behavior, the user can/should use a wildcard instead:
source_object="hourse.jpeg*"
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22675 | https://github.com/apache/airflow/pull/24039 | 5e6997ed45be0972bf5ea7dc06e4e1cef73b735a | ec84ffe71cfa8246155b9b4cb10bf2167e75adcf | "2022-04-01T06:25:57Z" | python | "2022-06-06T12:17:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,657 | ["chart/templates/flower/flower-ingress.yaml", "chart/templates/webserver/webserver-ingress.yaml"] | Wrong apiVersion Detected During Ingress Creation | ### Official Helm Chart version
1.5.0 (latest released)
### Apache Airflow version
2.2.4 (latest released)
### Kubernetes Version
microk8s 1.23/stable
### Helm Chart configuration
```
executor: KubernetesExecutor
ingress:
enabled: true
## airflow webserver ingress configs
web:
annotations:
kubernetes.io/ingress.class: public
hosts:
      - name: "example.com"
        path: "/airflow"
## Disabled due to using KubernetesExecutor as recommended in the documentation
flower:
enabled: false
## Disabled due to using KubernetesExecutor as recommended in the documentation
redis:
enabled: false
```
### Docker Image customisations
No customization required to recreate, the default image has the same behavior.
### What happened
Installation notes are below. As displayed, the install fails because the web ingress chart attempts a semverCompare to check that the kube version is greater than 1.19 and, if it's not, defaults back to the v1beta networking version. The microk8s install exceeds this version, so I would expect the webserver Ingress to use "networking.k8s.io/v1" instead of the beta version.
Airflow installation
```
$: helm install airflow apache-airflow/airflow --namespace airflow --values ./custom-values.yaml
Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"
```
microk8s installation
```
$: kubectl version
Client Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.5-2+c812603a312d2b", GitCommit:"c812603a312d2b0c59687a1be1ae17c0878104cc", GitTreeState:"clean", BuildDate:"2022-03-17T16:14:08Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.5-2+c812603a312d2b", GitCommit:"c812603a312d2b0c59687a1be1ae17c0878104cc", GitTreeState:"clean", BuildDate:"2022-03-17T16:11:06Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
```
### What you think should happen instead
The Webserver Ingress chart should detect that the kube version is greater than 1.19 and utilize the version ```networking.k8s.io/v1```.
### How to reproduce
On Ubuntu 18.04, run:
1. ```sudo snap install microk8s --classic```
2. ```microk8s status --wait-ready```
3. ```microk8s enable dns ha-cluster helm3 ingress metrics-server storage```
4. ```microk8s helm3 repo add apache-airflow https://airflow.apache.org```
5. ```microk8s kubectl create namespace airflow```
6. ```touch ./custom-values.yaml```
7. ```vi ./custom-values.yaml``` and insert the values.yaml contents from above
8. ```microk8s helm3 install airflow apache-airflow/airflow --namespace airflow --values ./custom-values.yaml```
### Anything else
This problem can be reproduced consistently.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22657 | https://github.com/apache/airflow/pull/28461 | e377e869da9f0e42ac1e0a615347cf7cd6565d54 | 5c94ef0a77358dbee8ad8735a132b42d78843df7 | "2022-03-31T16:19:33Z" | python | "2022-12-19T15:03:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,647 | ["airflow/utils/sqlalchemy.py"] | SAWarning: TypeDecorator UtcDateTime(timezone=True) will not produce a cache key because the ``cache_ok`` attribute is not set to True | ### Apache Airflow version
2.2.4 (latest released)
### What happened
Error
```
[2022-03-31, 11:47:06 UTC] {warnings.py:110} WARNING - /home/ec2-user/.local/lib/python3.7/site-packages/airflow/models/xcom.py:437: SAWarning: TypeDecorator UtcDateTime(timezone=True) will not produce a cache key because the ``cache_ok`` attribute is not set to True. This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions. Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
return query.delete()
[2022-03-31, 11:47:06 UTC] {warnings.py:110} WARNING - /home/ec2-user/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py:2214: SAWarning: TypeDecorator UtcDateTime(timezone=True) will not produce a cache key because the ``cache_ok`` attribute is not set to True. This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions. Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
for result in query.with_entities(XCom.task_id, XCom.value)
[2022-03-31, 11:47:06 UTC] {warnings.py:110} WARNING - /home/ec2-user/.local/lib/python3.7/site-packages/airflow/models/renderedtifields.py:126: SAWarning: TypeDecorator UtcDateTime(timezone=True) will not produce a cache key because the ``cache_ok`` attribute is not set to True. This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions. Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
session.merge(self)
[2022-03-31, 11:47:06 UTC] {warnings.py:110} WARNING - /home/ec2-user/.local/lib/python3.7/site-packages/airflow/models/renderedtifields.py:162: SAWarning: Coercing Subquery object into a select() for use in IN(); please pass a select() construct explicitly
tuple_(cls.dag_id, cls.task_id, cls.execution_date).notin_(subq1),
[2022-03-31, 11:47:06 UTC] {warnings.py:110} WARNING - /home/ec2-user/.local/lib/python3.7/site-packages/airflow/models/renderedtifields.py:163: SAWarning: TypeDecorator UtcDateTime(timezone=True) will not produce a cache key because the ``cache_ok`` attribute is not set to True. This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions. Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
).delete(synchronize_session=False)
```
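For reference, a minimal sketch (assuming SQLAlchemy 1.4+) of how a custom `TypeDecorator` silences this warning; the real `UtcDateTime` in Airflow does more than this:
```python
from sqlalchemy.types import DateTime, TypeDecorator

class UtcDateTime(TypeDecorator):
    impl = DateTime(timezone=True)
    cache_ok = True  # declares the type safe to use in SQLAlchemy's statement cache key
```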
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
NAME="Amazon Linux" VERSION="2" ID="amzn" ID_LIKE="centos rhel fedora" VERSION_ID="2" PRETTY_NAME="Amazon Linux 2" ANSI_COLOR="0;33" CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2" HOME_URL="https://amazonlinux.com/"
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.0.0
apache-airflow-providers-celery==2.1.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-http==2.0.3
apache-airflow-providers-imap==2.2.0
apache-airflow-providers-postgres==3.0.0
apache-airflow-providers-redis==2.0.1
apache-airflow-providers-sqlite==2.1.0
### Deployment
Other
### Deployment details
Pip package
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22647 | https://github.com/apache/airflow/pull/24499 | cc6a44bdc396a305fd53c7236427c578e9d4d0b7 | d9694733cafd9a3d637eb37d5154f0e1e92aadd4 | "2022-03-31T12:23:17Z" | python | "2022-07-05T12:50:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,576 | ["airflow/providers/ssh/hooks/ssh.py", "tests/providers/ssh/hooks/test_ssh.py"] | SFTP connection hook not working when using inline Ed25519 key from Airflow connection | ### Apache Airflow version
2.2.4 (latest released)
### What happened
I am trying to create an SFTP connection whose extra params include `private_key`, containing the text output of my private key, i.e.: `{"look_for_keys": "false", "no_host_key_check": "true", "private_key": "-----BEGIN OPENSSH PRIVATE KEY-----
keygoeshere==\n----END OPENSSH PRIVATE KEY-----"}`
When I test the connection I get the error `expected str, bytes or os.PathLike object, not Ed25519Key`
When I try and use this connection I get the following error:
```
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/sftp/hooks/sftp.py", line 208, in list_directory
conn = self.get_conn()
File "/home/airflow/.local/lib/python3.7/site-packages/tenacity/__init__.py", line 324, in wrapped_f
return self(f, *args, **kw)
File "/home/airflow/.local/lib/python3.7/site-packages/tenacity/__init__.py", line 404, in __call__
do = self.iter(retry_state=retry_state)
File "/home/airflow/.local/lib/python3.7/site-packages/tenacity/__init__.py", line 349, in iter
return fut.result()
File "/usr/local/lib/python3.7/concurrent/futures/_base.py", line 428, in result
return self.__get_result()
File "/usr/local/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/home/airflow/.local/lib/python3.7/site-packages/tenacity/__init__.py", line 407, in __call__
result = fn(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/sftp/hooks/sftp.py", line 172, in get_conn
self.conn = pysftp.Connection(**conn_params)
File "/home/airflow/.local/lib/python3.7/site-packages/pysftp/__init__.py", line 142, in __init__
self._set_authentication(password, private_key, private_key_pass)
File "/home/airflow/.local/lib/python3.7/site-packages/pysftp/__init__.py", line 164, in _set_authentication
private_key_file = os.path.expanduser(private_key)
File "/usr/local/lib/python3.7/posixpath.py", line 235, in expanduser
path = os.fspath(path)
TypeError: expected str, bytes or os.PathLike object, not Ed25519Key
```
This only seems to happen for Ed25519 keys. RSA worked fine!
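My reading of the traceback, sketched out below (an assumption, not verified against the provider source): the hook turns the `private_key` extra into a paramiko key object, while pysftp only passes certain key classes straight through and treats everything else as a file path:
```python
import io

import paramiko

def parse_private_key(key_text: str) -> paramiko.Ed25519Key:
    # roughly what gets built from the "private_key" extra stored in the connection
    return paramiko.Ed25519Key.from_private_key(io.StringIO(key_text))

# pysftp.Connection(private_key=<Ed25519Key>) then falls through to
# os.path.expanduser(private_key), which is the call raising
# "expected str, bytes or os.PathLike object, not Ed25519Key" above.
```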
### What you think should happen instead
It should work. I don't specify this as an `Ed25519Key`; I think the connection manager code is saving it as a paramiko key, but when testing / using it in a DAG a plain string is expected!
I don't see why you can't save it as a paramiko key and use it in the connection.
Also, it seems to work fine when using RSA keys, but super short keys are cooler!
### How to reproduce
Create a new Ed25519 ssh key and a new SFTP connection and copy the following into the extra field:
{"look_for_keys": "false", "no_host_key_check": "true", "private_key": "-----BEGIN RSA PRIVATE KEY----- Ed25519_key_goes_here -----END RSA PRIVATE KEY-----"}
Test should yield the failure `TypeError: expected str, bytes or os.PathLike object, not Ed25519Key`
### Operating System
RHEL 7.9 on host OS and Docker image for the rest.
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.0.0
apache-airflow-providers-celery==2.1.0
apache-airflow-providers-cncf-kubernetes==3.0.2
apache-airflow-providers-docker==2.4.1
apache-airflow-providers-elasticsearch==2.2.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-google==6.4.0
apache-airflow-providers-grpc==2.0.1
apache-airflow-providers-hashicorp==2.1.1
apache-airflow-providers-http==2.0.3
apache-airflow-providers-imap==2.2.0
apache-airflow-providers-microsoft-azure==3.6.0
apache-airflow-providers-mysql==2.2.0
apache-airflow-providers-odbc==2.0.1
apache-airflow-providers-postgres==3.0.0
apache-airflow-providers-redis==2.0.1
apache-airflow-providers-sendgrid==2.0.1
apache-airflow-providers-sftp==2.4.1
apache-airflow-providers-slack==4.2.0
apache-airflow-providers-sqlite==2.1.0
apache-airflow-providers-ssh==2.4.0
### Deployment
Other Docker-based deployment
### Deployment details
Docker image of 2.2.4 release with VERY minimal changes. (wget, curl, etc added)
### Anything else
RSA seems to work fine... only after a few hours of troubleshooting and writing this ticket did I learn that. 😿
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22576 | https://github.com/apache/airflow/pull/23043 | d7b85d9a0a09fd7b287ec928d3b68c38481b0225 | e63dbdc431c2fa973e9a4c0b48ec6230731c38d1 | "2022-03-28T20:06:31Z" | python | "2022-05-09T22:49:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,551 | ["docker_tests/test_prod_image.py", "docs/apache-airflow-providers-microsoft-azure/index.rst", "setup.py"] | Consider depending on `azure-keyvault-secrets` instead of `azure-keyvault` metapackage | ### Description
It appears that the `microsoft-azure` provider only depends on `azure-keyvault-secrets`:
https://github.com/apache/airflow/blob/388723950de9ca519108e0a8f6818f0fc0dd91d4/airflow/providers/microsoft/azure/secrets/key_vault.py#L24
and not the other 2 packages in the `azure-keyvault` metapackage.
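For reference, the quoted line appears to boil down to a single import, which only `azure-keyvault-secrets` provides (my reading of that one line, not an exhaustive dependency audit):
```python
from azure.keyvault.secrets import SecretClient  # no azure-keyvault-certificates / -keys needed for this
```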
### Use case/motivation
I am the maintainer of the `apache-airflow-providers-*` packages on `conda-forge` and I'm running into small issues with the way `azure-keyvault` is maintained as a metapackage on `conda-forge`. I think depending on `azure-keyvault-secrets` explicitly would solve my problem and also provide better clarity for the `microsoft-azure` provider in general.
### Related issues
https://github.com/conda-forge/azure-keyvault-feedstock/issues/6
https://github.com/conda-forge/apache-airflow-providers-microsoft-azure-feedstock/pull/13
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22551 | https://github.com/apache/airflow/pull/22557 | a6609d5268ebe55bcb150a828d249153582aa936 | 77d4e725c639efa68748e0ae51ddf1e11b2fd163 | "2022-03-27T12:24:12Z" | python | "2022-03-29T13:44:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,487 | ["airflow/cli/commands/task_command.py"] | "Running <TaskInstance: *.* * [queued]> on host *" written with WARNING level | ### Apache Airflow version
2.2.3
### What happened
"Running <TaskInstance: *.* * [queued]> on host *" written with WARNING level
### What you think should happen instead
This message should be written at INFO level.
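A small sketch of the requested behaviour (logger and variable names are placeholders, not the actual Airflow code):
```python
import logging

log = logging.getLogger(__name__)

def announce_start(ti, hostname):
    log.info("Running %s on host %s", ti, hostname)  # INFO instead of WARNING
```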
### How to reproduce
_No response_
### Operating System
Composer
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22487 | https://github.com/apache/airflow/pull/22488 | 388f4e8b032fe71ccc9a16d84d7c2064c80575b3 | acb1a100bbf889dddef1702c95bd7262a578dfcc | "2022-03-23T13:28:26Z" | python | "2022-03-25T09:40:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,474 | ["airflow/cli/commands/dag_command.py", "tests/cli/commands/test_dag_command.py"] | CLI command "airflow dags next-execution" give unexpected results with paused DAG and catchup=False | ### Apache Airflow version
2.2.2
### What happened
Current time 16:54 UTC
Execution Schedule: * * * * *
Last Run: 16:19 UTC
DAG Paused
Catchup=False
`airflow dags next-execution sample_dag`
returns
```
[INFO] Please be reminded this DAG is PAUSED now.
2022-03-22T16:20:00+00:00
```
### What you think should happen instead
I would expect
```
[INFO] Please be reminded this DAG is PAUSED now.
2022-03-22T16:53:00+00:00
```
To be returned, since when you unpause the DAG that is the next run that will actually execute.
### How to reproduce
Create a simple sample DAG with a schedule of * * * * *, pause it with catchup=False, wait a few minutes, then run
`airflow dags next-execution sample_dag`
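A minimal DAG matching the reproduction steps (dag_id, dates and operator choice are assumptions on my part):
```python
from datetime import datetime

from airflow import DAG
from airflow.operators.dummy import DummyOperator

with DAG(
    dag_id="sample_dag",
    schedule_interval="* * * * *",
    start_date=datetime(2022, 3, 1),
    catchup=False,
    is_paused_upon_creation=True,  # keep it paused, as in the report
) as dag:
    DummyOperator(task_id="noop")
```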
### Operating System
Debian
### Versions of Apache Airflow Providers
Airflow 2.2.2
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22474 | https://github.com/apache/airflow/pull/30117 | 1f2b0c21d5ebefc404d12c123674e6ac45873646 | c63836ccb763fd078e0665c7ef3353146b1afe96 | "2022-03-22T17:06:41Z" | python | "2023-03-22T14:22:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,473 | ["airflow/secrets/local_filesystem.py", "tests/cli/commands/test_connection_command.py", "tests/secrets/test_local_filesystem.py"] | Connections import and export should also support ".yml" file extensions | ### Apache Airflow version
2.2.4 (latest released)
### What happened
Trying to export or import a yaml formatted connections file with ".yml" extension fails.
### What you think should happen instead
While the "official recommended extension" for YAML files is .yaml, many pipeline are built around using the .yml file extension. Importing and exporting of .yml files should also be supported.
### How to reproduce
Running airflow connections import or export with a file having a .yml file extension errors with:
`Unsupported file format. The file must have the extension .env or .json or .yaml`
### Operating System
debian 10 buster
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22473 | https://github.com/apache/airflow/pull/22872 | 1eab1ec74c426197af627c09817b76081c5c4416 | 3c0ad4af310483cd051e94550a7d857653dcee6d | "2022-03-22T15:36:21Z" | python | "2022-04-13T16:52:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,434 | ["airflow/providers/snowflake/example_dags/__init__.py", "docs/apache-airflow-providers-snowflake/index.rst", "docs/apache-airflow-providers-snowflake/operators/s3_to_snowflake.rst", "docs/apache-airflow-providers-snowflake/operators/snowflake.rst", "docs/apache-airflow-providers-snowflake/operators/snowflake_to_slack.rst", "tests/system/providers/snowflake/example_snowflake.py"] | Migrate Snowflake system tests to new design | There is a new design of system tests that was introduced by the [AIP-47](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-47+New+design+of+Airflow+System+Tests).
All current system tests need to be migrated, so they can be run in the CI process automatically before releases.
This is an aggregated issue for all system tests related to `Snowflake` provider.
It is created to track progress of their migration.
List of paths to test files with corresponding number of tests inside:
- [x] tests/providers/snowflake/operators/test_snowflake_system.py (1)
For anyone working on this issue: please make sure to also check whether all example DAGs are migrated. The issue for them is tracked separately; search for `Migrate Snowflake example DAGs to new design`
| https://github.com/apache/airflow/issues/22434 | https://github.com/apache/airflow/pull/24151 | c60bb9edc0c9b55a2824eae879af8a4a90ccdd2d | c2f10a4ee9c2404e545d78281bf742a199895817 | "2022-03-22T13:48:43Z" | python | "2022-06-03T16:09:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,418 | ["airflow/www/static/css/main.css", "airflow/www/static/js/dags.js", "airflow/www/templates/airflow/dags.html", "airflow/www/views.py"] | auto refresh Dags home page | ### Description
Similar to the auto refresh on the DAG page, it would be nice to have this option on the home page as well.

### Use case/motivation
Having an auto refresh on the home page would let users have a live view of running DAGs and tasks.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22418 | https://github.com/apache/airflow/pull/22900 | d6141c6594da86653b15d67eaa99511e8fe37a26 | cd70afdad92ee72d96edcc0448f2eb9b44c8597e | "2022-03-22T08:50:02Z" | python | "2022-05-01T10:59:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,417 | ["airflow/providers/jenkins/hooks/jenkins.py", "airflow/providers/jenkins/provider.yaml", "airflow/providers/jenkins/sensors/__init__.py", "airflow/providers/jenkins/sensors/jenkins.py", "tests/providers/jenkins/hooks/test_jenkins.py", "tests/providers/jenkins/sensors/__init__.py", "tests/providers/jenkins/sensors/test_jenkins.py"] | Jenkins Sensor to monitor a jenkins job finish | ### Description
A sensor for Jenkins jobs in Airflow. There are cases in which we need to monitor the state of a build in Jenkins and pause the DAG until the build finishes.
### Use case/motivation
I am trying to achieve a way of pausing the DAG until a build, or the last build, in a Jenkins job finishes.
This could be done in different ways, but the cleanest is to have a dedicated Jenkins sensor in Airflow that uses the Jenkins hook and connection.
There are two cases to monitor a job in jenkins
1. Specify the build number to monitor
2. Get the last build automatically and check whether it is still running or not.
Technically, the only important thing from a sensor perspective is to check whether the build is ongoing or finished. Monitoring for a specific status or result doesn't make sense in this use case; it only concerns whether there is an ongoing build in a job or not. If a build is currently ongoing, wait for it to finish.
If the build number is not specified, the sensor should query for the latest build number and check whether it is running or not.
If the build number is specified, it should check the run state of that specific build.
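A rough sketch of what such a sensor could look like, reusing the existing `JenkinsHook`; the class name, parameters and return logic below are assumptions, not a final design:
```python
from typing import Optional

from airflow.providers.jenkins.hooks.jenkins import JenkinsHook
from airflow.sensors.base import BaseSensorOperator


class JenkinsBuildSensor(BaseSensorOperator):
    def __init__(self, *, jenkins_connection_id: str, job_name: str,
                 build_number: Optional[int] = None, **kwargs):
        super().__init__(**kwargs)
        self.jenkins_connection_id = jenkins_connection_id
        self.job_name = job_name
        self.build_number = build_number

    def poke(self, context) -> bool:
        server = JenkinsHook(self.jenkins_connection_id).get_jenkins_server()
        build_number = self.build_number or server.get_job_info(self.job_name)["lastBuild"]["number"]
        # python-jenkins reports an in-progress build with building == True
        return not server.get_build_info(self.job_name, build_number)["building"]
```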
### Related issues
There are no related issues.
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22417 | https://github.com/apache/airflow/pull/22421 | ac400ebdf3edc1e08debf3b834ade3809519b819 | 4e24b22379e131fe1007e911b93f52e1b6afcf3f | "2022-03-22T07:57:54Z" | python | "2022-03-24T08:01:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,413 | ["chart/templates/flower/flower-deployment.yaml", "chart/values.schema.json", "chart/values.yaml", "tests/charts/test_flower.py"] | Flower is missing extraVolumeMounts | ### Official Helm Chart version
1.5.0 (latest released)
### Apache Airflow version
2.2.4 (latest released)
### Kubernetes Version
1.19
### Helm Chart configuration
```
flower:
extraContainers:
- image: foo
imagePullPolicy: IfNotPresent
name: foo
volumeMounts:
- mountPath: /var/log/foo
name: foo
readOnly: false
extraVolumeMounts:
- mountPath: /var/log/foo
name: foo
extraVolumes:
- emptyDir: {}
name: foo
```
### Docker Image customisations
_No response_
### What happened
```
Error: values don't meet the specifications of the schema(s) in the following chart(s):
airflow:
- flower: Additional property extraVolumeMounts is not allowed
```
### What you think should happen instead
The flower pod should support the same extraVolumeMounts that other pods support.
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22413 | https://github.com/apache/airflow/pull/22414 | 7667d94091b663f9d9caecf7afe1b018bcad7eda | f3bd2a35e6f7b9676a79047877dfc61e5294aff8 | "2022-03-21T22:58:02Z" | python | "2022-03-22T11:17:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,404 | ["airflow/task/task_runner/standard_task_runner.py"] | tempfile.TemporaryDirectory does not get deleted after task failure | ### Discussed in https://github.com/apache/airflow/discussions/22403
Originally posted by **m1racoli**, March 18, 2022.
### Apache Airflow version
2.2.4 (latest released)
### What happened
When creating a temporary directory with `tempfile.TemporaryDirectory()` and then failing a task, the corresponding directory does not get deleted.
This happens in Airflow on Astronomer as well as locally in `astro dev` setups, for both LocalExecutor and CeleryExecutor.
### What you think should happen instead
As in normal Python environments, the directory should get cleaned up, even in the case of a raised exception.
### How to reproduce
Running this DAG will leave a temporary directory in the corresponding location:
```python
import os
import tempfile
from airflow.decorators import dag, task
from airflow.utils.dates import days_ago
class MyException(Exception):
pass
@task
def run():
tmpdir = tempfile.TemporaryDirectory()
print(f"directory {tmpdir.name} created")
assert os.path.exists(tmpdir.name)
raise MyException("error!")
@dag(start_date=days_ago(1))
def tempfile_test():
run()
_ = tempfile_test()
```
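For comparison, a variation of the task body using the context-manager form (my suggestion, not something tested in this report) removes the directory in `__exit__` even when the exception propagates:
```python
import os
import tempfile

def run_with_context_manager():
    with tempfile.TemporaryDirectory() as tmpdir:
        print(f"directory {tmpdir} created")
        assert os.path.exists(tmpdir)
        raise RuntimeError("error!")  # cleanup happens before this propagates
```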
### Operating System
Debian (Astronomer Airflow Docker image)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==1!3.0.0
apache-airflow-providers-cncf-kubernetes==1!3.0.2
apache-airflow-providers-elasticsearch==1!2.2.0
apache-airflow-providers-ftp==1!2.0.1
apache-airflow-providers-google==1!6.4.0
apache-airflow-providers-http==1!2.0.3
apache-airflow-providers-imap==1!2.2.0
apache-airflow-providers-microsoft-azure==1!3.6.0
apache-airflow-providers-mysql==1!2.2.0
apache-airflow-providers-postgres==1!3.0.0
apache-airflow-providers-redis==1!2.0.1
apache-airflow-providers-slack==1!4.2.0
apache-airflow-providers-sqlite==1!2.1.0
apache-airflow-providers-ssh==1!2.4.0
```
### Deployment
Astronomer
### Deployment details
GKE, vanilla `astro dev`, LocalExecutor and CeleryExecutor.
### Anything else
Always
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22404 | https://github.com/apache/airflow/pull/22475 | 202a3a10e553a8a725a0edb6408de605cb79e842 | b0604160cf95f76ed75b4c4ab42b9c7902c945ed | "2022-03-21T16:16:30Z" | python | "2022-03-24T21:23:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,392 | ["airflow/cli/commands/connection_command.py", "tests/cli/commands/test_connection_command.py"] | Unknown connection types fail in cryptic ways | ### Apache Airflow version
2.2.4 (latest released)
### What happened
I created a connection like:
```
airflow connections add fsconn --conn-host /tmp --conn-type File
```
When I really should have created it like:
```
airflow connections add fsconn --conn-host /tmp --conn-type fs
```
While using this connection, I found that FileSensor would only work if I provided absolute paths. Relative paths would cause the sensor to time out because it couldn't find the file. Using `fs` instead of `File` caused the FileSensor to start working like I expected.
### What you think should happen instead
Ideally I'd have gotten an error when I tried to create the connection with an invalid type.
Or if that's not practical, then I should have gotten an error in the task logs when I tried to use the FileSensor with a connection of the wrong type.
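A hypothetical sketch of the kind of validation this asks for (not the actual CLI code):
```python
from airflow.providers_manager import ProvidersManager

def warn_on_unknown_conn_type(conn_type: str) -> None:
    known_types = set(ProvidersManager().hooks)  # connection types registered by installed providers
    if conn_type not in known_types:
        print(f"Warning: connection type {conn_type!r} is not registered by any installed provider")
```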
### How to reproduce
_No response_
### Operating System
debian (in docker)
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
`astro dev start`
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22392 | https://github.com/apache/airflow/pull/22688 | 9a623e94cb3e4f02cbe566e02f75f4a894edc60d | d7993dca2f182c1d0f281f06ac04b47935016cf1 | "2022-03-21T04:36:56Z" | python | "2022-04-13T19:45:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,381 | ["airflow/providers/amazon/aws/hooks/athena.py", "airflow/providers/amazon/aws/hooks/emr.py", "airflow/providers/amazon/aws/operators/athena.py", "airflow/providers/amazon/aws/operators/emr.py"] | AthenaOperator retries max_tries mix-up | ### Apache Airflow version
2.2.4 (latest released)
### What happened
After a recent upgrade from 1.10.9 to 2.2.4, an odd behavior is observed where the aforementioned attributes (`retries` and `max_tries`) are wrongly coupled.
An example to showcase the issue:
```
AthenaOperator(
...
retries=3,
max_tries=30,
...)
```
Related Documentation states:
* retries: Number of retries that should be performed before failing the task
* max_tries: Number of times to poll for query state before function exits
Regardless of the above specification of `max_tries=30`, inspection of the related _Task Instance Details_ shows that the value of both attributes is **3**.
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu 20.04.3 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.0.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
Imagine a Query, executed on an hourly basis, with a varying scope, causing it to 'organically' execute for anywhere between 5 - 10 minutes. This Query Task should Fail after 3 execution attempts.
In such cases, we would like to poll the state of the Query frequently (every 15 seconds), in order to avoid redundant idle time for downstream Tasks.
A configuration matching the above description:
```
AthenaOperator(
...
retry_delay=15,
retries=3,
max_tries=40, # 40 polls * 15 seconds delay between polls = 10 minutes
...)
```
When deployed, `retries == max_tries == 3`, thus causing the Task to terminate after 45 seconds
In order to quickly avert this situation where our ETL breaks, we are using the following configuration:
```
AthenaOperator(
...
retry_delay=15,
retries=40,
max_tries=40,
...)
```
With the last configuration, our task does not terminate prematurely but will retry **40 times** before failing, which causes an issue with downstream tasks' SLA at the very least (that is, before weighing in the waste of time and operational costs).
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22381 | https://github.com/apache/airflow/pull/25971 | 18386026c28939fa6d91d198c5489c295a05dcd2 | d5820a77e896a1a3ceb671eddddb9c8e3bcfb649 | "2022-03-20T08:50:45Z" | python | "2022-09-11T23:25:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,380 | ["dev/provider_packages/SETUP_TEMPLATE.cfg.jinja2"] | Newest providers incorrectly include `gitpython` and `wheel` in `install_requires` | ### Apache Airflow Provider(s)
ftp, openfaas, sqlite
### Versions of Apache Airflow Providers
I am the maintainer of the Airflow Providers on conda-forge. The providers I listed above are the first 3 I have looked at but I believe all are affected. These are the new releases (as of yesterday) of all providers.
### Apache Airflow version
2.2.4 (latest released)
### Operating System
Linux (Azure CI)
### Deployment
Other Docker-based deployment
### Deployment details
This is on conda-forge Azure CI.
### What happened
All providers I have looked at (and I suspect all providers) now have `gitpython` and `wheel` in their `install_requires`:
From `apache-airflow-providers-ftp-2.1.1.tar.gz`:
```
install_requires =
gitpython
wheel
```
I believe these requirements are incorrect (neither should be needed at install time) and this will make maintaining these packages on conda-forge an absolute nightmare! (It's already a serious challenge because I get a PR to update each time each provider gets updated.)
### What you think should happen instead
These install requirements should be removed.
### How to reproduce
Open any of the newly released providers from pypi and look at `setup.cfg`.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22380 | https://github.com/apache/airflow/pull/22382 | 172df9ee247af62e9417cebb2e2a3bc2c261a204 | ab4ba6f1b770a95bf56965f3396f62fa8130f9e9 | "2022-03-20T07:48:57Z" | python | "2022-03-20T12:15:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,358 | ["airflow/api_connexion/openapi/v1.yaml"] | ScheduleInterval schema in OpenAPI specs should have "nullable: true" otherwise generated OpenAPI client will throw an error in case of nullable "schedule_interval" | ### Apache Airflow version
2.2.4 (latest released)
### What happened
Currently we have this schema definition in the OpenAPI specs:
```
ScheduleInterval:
description: |
Schedule interval. Defines how often DAG runs, this object gets added to your latest task instance's
execution_date to figure out the next schedule.
readOnly: true
oneOf:
- $ref: '#/components/schemas/TimeDelta'
- $ref: '#/components/schemas/RelativeDelta'
- $ref: '#/components/schemas/CronExpression'
discriminator:
propertyName: __type
```
The issue with the above is that, when using an OpenAPI generator for Java, for example (I think it is the same for other languages as well), it will treat `ScheduleInterval` as a **non-nullable** property, although what is returned under `/dags/{dag_id}` or `/dags/{dag_id}/details` in the case of a `None` `schedule_interval` is `null` for `schedule_interval`.
### What you think should happen instead
We should have `nullable: true` in the `ScheduleInterval` schema, which will allow `schedule_interval` to be parsed as `null`.
### How to reproduce
_No response_
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
If the maintainers think this is a valid bug, I will be more than happy to submit a PR :)
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22358 | https://github.com/apache/airflow/pull/24253 | b88ce951881914e51058ad71858874fdc00a3cbe | 7e56bf662915cd58849626d7a029a4ba70cdda4d | "2022-03-18T09:13:24Z" | python | "2022-06-07T11:21:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,325 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/dag_schema.py", "airflow/models/dag.py", "tests/api_connexion/endpoints/test_dag_endpoint.py", "tests/api_connexion/schemas/test_dag_schema.py"] | ReST API : get_dag should return more than a simplified view of the dag | ### Description
The current response payload from https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/get_dag is a useful but simple view of the state of a given DAG. However, it is missing some additional attributes that I feel would be useful for individuals/groups who are choosing to interact with Airflow primarily through the ReST interface.
### Use case/motivation
As part of a testing workflow we upload DAGs to a running airflow instance and want to trigger an execution of the DAG after we know that the scheduler has updated it. We're currently automating this process through the ReST API, but the `last_updated` is not exposed.
This should be implemented from the dag_source endpoint.
https://github.com/apache/airflow/blob/main/airflow/api_connexion/endpoints/dag_source_endpoint.py
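A minimal sketch of the polling this workflow implies; the base URL, credentials and dag_id are placeholders, and today's payload has no field exposing when the scheduler last picked up the file:
```python
import requests

BASE = "http://localhost:8080/api/v1"  # placeholder
AUTH = ("user", "pass")                # placeholder

dag = requests.get(f"{BASE}/dags/my_dag/details", auth=AUTH).json()
print(dag)  # nothing here tells us whether the scheduler has re-parsed the uploaded file yet
```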
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22325 | https://github.com/apache/airflow/pull/22637 | 55ee62e28a0209349bf3e49a25565e7719324500 | 9798c8cad1c2fe7e674f8518cbe5151e91f1ca7e | "2022-03-16T20:49:07Z" | python | "2022-03-31T10:40:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,320 | ["airflow/www/templates/airflow/dag.html"] | Copying DAG ID from UI and pasting in Slack includes schedule | ### Apache Airflow version
2.2.3
### What happened
(Yes, I know the title says Slack and it might not seem like an Airflow issue, but so far this is the only application I noticed this on. There might be others.)
PR https://github.com/apache/airflow/pull/11503 was a fix to issue https://github.com/apache/airflow/issues/11500 to prevent text-selection of scheduler interval when selecting DAG ID. However it does not fix pasting the text into certain applications (such as Slack), at least on a Mac.
@ryanahamilton thanks for the fix, but this is fixed in the visible sense (double clicking the DAG ID to select it will now not show the schedule interval and next run as selected in the UI), however if you copy what is selected for some reason it still includes schedule interval and next run when pasted into certain applications.
I can't be sure why this is happening, but certain places such as pasting into Google Chrome, TextEdit, or Visual Studio Code it will only include the DAG ID and a new line. But other applications such as Slack (so far only one I can tell) it includes the schedule interval and next run, as you can see below:
- Schedule interval and next run **not shown as selected** on the DAG page:

- Schedule interval and next run **not pasted** in Google Chrome and TextEdit:


- Schedule interval and next run **_pasted and visible_** in Slack:

### What you think should happen instead
When you select the DAG ID on the DAG page, copy what is selected, and then paste into a Slack message, only the DAG ID should be pasted.
### How to reproduce
Select the DAG ID on the DAG page (such as double-clicking the DAG ID), copy what is selected, and then paste into a Slack message.
### Operating System
macOS 10.15.7 (Catalina)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
This is something that possibly could be a Slack bug (one could say that Slack should strip out anything that is `user-select: none`), however it should be possible to fix the HTML layout so `user-select: none` is not even needed to prevent selection. It is sort of a band-aid fix.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22320 | https://github.com/apache/airflow/pull/28643 | 1da17be37627385fed7fc06584d72e0abda6a1b5 | 9aea857343c231319df4c5f47e8b4d9c8c3975e6 | "2022-03-16T18:31:26Z" | python | "2023-01-04T21:19:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,248 | ["airflow/utils/docs.py", "docs/apache-airflow-providers/index.rst"] | Allow custom redirect for provider information in /provider | ### Description
`/provider` enables users to get amazing information via the UI, however if you've written a custom provider the documentation redirect defaults to `https://airflow.apache.org/docs/airflow-provider-{rest_of_name}/{version}/`, which isn't useful for custom operators. (If this feature exists then I must've missed the documentation on it, sorry!)
### Use case/motivation
As an airflow developer I've written a custom provider package and would like to link to my internal documentation as well as my private github repo via the `/provider` entry for my custom provider.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
As this is a UI change + more, I am willing to submit a PR, but would likely need help.
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22248 | https://github.com/apache/airflow/pull/23012 | 3b2ef88f877fc5e4fcbe8038f0a9251263eaafbc | 7064a95a648286a4190a452425626c159e467d6e | "2022-03-14T15:27:23Z" | python | "2022-04-22T13:21:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,220 | ["airflow/providers/databricks/provider.yaml", "docs/apache-airflow-providers-databricks/index.rst", "setup.py", "tests/providers/databricks/operators/test_databricks_sql.py"] | Databricks SQL fails on Python 3.10 | ### Apache Airflow Provider(s)
databricks
### Versions of Apache Airflow Providers
The databricks SQL does not work on Python 3.10 due to "from collections import Iterable" in the `databricks-sql-connector`
* https://pypi.org/project/databricks-sql-connector/
Details of this issue dicussed in https://github.com/apache/airflow/pull/22050
For now we will likely just exclude the tests (and mark databricks provider as non-python 3.10 compatible). But once this is fixed (in either 1.0.2 or upcoming 2.0.0 version of the library, we wil restore it back).
### Apache Airflow version
main (development)
### Operating System
All
### Deployment
Other
### Deployment details
Just Breeze with Python 3.10
### What happened
The tests are failing:
```
self = <databricks.sql.common.ParamEscaper object at 0x7fe81c6dd6c0>
item = ['file1', 'file2', 'file3']
def escape_item(self, item):
if item is None:
return 'NULL'
elif isinstance(item, (int, float)):
return self.escape_number(item)
elif isinstance(item, basestring):
return self.escape_string(item)
> elif isinstance(item, collections.Iterable):
E AttributeError: module 'collections' has no attribute 'Iterable'
```
https://github.com/apache/airflow/runs/5523057543?check_suite_focus=true#step:8:16781
### What you expected to happen
Test succeed :)
### How to reproduce
Run `TestDatabricksSqlCopyIntoOperator` in Python 3.10 environment.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22220 | https://github.com/apache/airflow/pull/22886 | aa8c08db383ebfabf30a7c2b2debb64c0968df48 | 7be57eb2566651de89048798766f0ad5f267cdc2 | "2022-03-13T14:55:30Z" | python | "2022-04-10T18:32:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,174 | ["airflow/www/static/js/ti_log.js", "airflow/www/templates/airflow/ti_log.html"] | Support log download in task log view | ### Description
Support log downloading from the task log view by adding a download button in the UI.
### Use case/motivation
In the current version of Airflow, when we want to download a task try's log, we can click on the task node in Tree View or Graph view, and use the "Download" button in the task action modal, as in this screenshot:
<img width="752" alt="Screen Shot 2022-03-10 at 5 59 23 PM" src="https://user-images.githubusercontent.com/815701/157787811-feb7bdd4-4e32-4b85-b822-2d68662482e9.png">
It would make log downloading more convenient if we add a Download button in the task log view. This is the screenshot of the task log view, we can add a button on the right side of the "Toggle Wrap" button.
<img width="1214" alt="Screen Shot 2022-03-10 at 5 55 53 PM" src="https://user-images.githubusercontent.com/815701/157788262-a4cb8ff7-b813-4140-b8a1-41a5d0630e1f.png">
I work on Airflow at Pinterest, internally we get such a feature request from our users. I'd like to get your thoughts about adding this feature before I create a PR for it. Thanks.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22174 | https://github.com/apache/airflow/pull/22804 | 6aa65a38e0be3fee18ae9c1541e6091a47ab1f76 | b29cbbdc1bbc290d67e64aa3a531caf2b9f6846b | "2022-03-11T02:08:11Z" | python | "2022-04-08T14:55:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,141 | ["airflow/cli/commands/scheduler_command.py", "airflow/utils/cli.py", "docs/apache-airflow/howto/set-config.rst", "tests/cli/commands/test_scheduler_command.py"] | Dump configurations in airflow scheduler logs based on the config it reads | ### Description
We don't have any way to cross-verify the configs that the airflow scheduler uses. It would be good to have it logged somewhere so that users can cross-verify it.
### Use case/motivation
How do you know for sure that the configs in airflow.cfg are being correctly parsed by airflow scheduler?
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22141 | https://github.com/apache/airflow/pull/22588 | c30ab6945ea0715889d32e38e943c899a32d5862 | 78586b45a0f6007ab6b94c35b33790a944856e5e | "2022-03-10T09:56:08Z" | python | "2022-04-04T12:05:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,111 | ["Dockerfile", "Dockerfile.ci", "airflow/providers/google/CHANGELOG.rst", "airflow/providers/google/ads/hooks/ads.py", "docs/apache-airflow-providers-google/index.rst", "setup.cfg", "setup.py", "tests/providers/google/ads/operators/test_ads.py"] | apache-airflow-providers-google uses deprecated Google Ads API V8 | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google = 6.4.0
### Apache Airflow version
2.1.3
### Operating System
Debian Buster
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### What happened
`apache-airflow-providers-google 6.4.0` has the requirement `google-ads >=12.0.0,<14.0.1`
The latest version of the Google Ads API supported by this is V8 - this was deprecated in November 2021, and is due to be disabled in April / May (see https://developers.google.com/google-ads/api/docs/sunset-dates)
### What you expected to happen
Update the requirements so that the provider uses a version of the Google Ads API which hasn't been deprecated.
At the moment, the only non-deprecated version is V10 - support for this was added in `google-ads=15.0.0`
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22111 | https://github.com/apache/airflow/pull/22965 | c92954418a21dcafcb0b87864ffcb77a67a707bb | c36bcc4c06c93dce11e2306a4aff66432bffd5a5 | "2022-03-09T10:05:51Z" | python | "2022-04-15T10:20:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,065 | ["airflow/providers/google/cloud/transfers/sql_to_gcs.py", "tests/providers/google/cloud/transfers/test_mssql_to_gcs.py", "tests/providers/google/cloud/transfers/test_mysql_to_gcs.py", "tests/providers/google/cloud/transfers/test_oracle_to_gcs.py", "tests/providers/google/cloud/transfers/test_postgres_to_gcs.py", "tests/providers/google/cloud/transfers/test_presto_to_gcs.py", "tests/providers/google/cloud/transfers/test_sql_to_gcs.py", "tests/providers/google/cloud/transfers/test_trino_to_gcs.py"] | DB To GCS Operations Should Return/Save Row Count | ### Description
All DB to GCS Operators should track the per file and total row count written for metadata and validation purposes.
- Optionally, based on param, include the row count metadata as GCS file upload metadata.
- Always return row count data through XCom. Currently this operator has no return value.
### Use case/motivation
Currently, there is no way to check the uploaded files' row count without opening the file. Downstream operations should have access to this information, and allowing it to be saved as GCS metadata and returning it through XCom makes it readily available for other uses.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22065 | https://github.com/apache/airflow/pull/24382 | 8e0bddaea69db4d175f03fa99951f6d82acee84d | 94257f48f4a3f123918b0d55c34753c7c413eb74 | "2022-03-07T23:36:35Z" | python | "2022-06-13T06:55:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,034 | ["airflow/providers/google/cloud/hooks/bigquery.py", "airflow/providers/google/cloud/transfers/bigquery_to_gcs.py", "tests/providers/google/cloud/transfers/test_bigquery_to_gcs.py"] | BigQueryToGCSOperator: Invalid dataset ID error | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
`apache-airflow-providers-google==6.3.0`
### Apache Airflow version
2.2.3
### Operating System
Linux
### Deployment
Composer
### Deployment details
- Composer Environment version: `composer-2.0.3-airflow-2.2.3`
### What happened
When I use BigQueryToGCSOperator, I get the following error.
```
Invalid dataset ID "MY_PROJECT:MY_DATASET". Dataset IDs must be alphanumeric (plus underscores and dashes) and must be at most 1024 characters long.
```
### What you expected to happen
I guess that it is due to my using a colon (`:`) as the separator between project_id and dataset_id in `source_project_dataset_table`.
I tried using a dot (`.`) as the separator and it worked.
However, the [documentation of BigQueryToGCSOperator](https://airflow.apache.org/docs/apache-airflow-providers-google/stable/_api/airflow/providers/google/cloud/transfers/bigquery_to_gcs/index.html) states that it is possible to use a colon as the separator between project_id and dataset_id. In fact, at least until Airflow 1.10.15, it also worked with the colon separator.
In Airflow 1.10.*, the BigQuery hook separated out the project_id and dataset_id at the colon, but `apache-airflow-providers-google==6.3.0` doesn't have this logic.
https://github.com/apache/airflow/blob/d3b066931191b82880d216af103517ea941c74ba/airflow/contrib/hooks/bigquery_hook.py#L2186-L2247
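A hedged sketch of the 1.10-era normalisation those lines point at: accept either separator between project and dataset (illustrative only, not the hook's actual code).
```python
def split_tablename(source: str):
    # colon form: PROJECT:DATASET.TABLE ; dot form: PROJECT.DATASET.TABLE
    if ":" in source:
        project_id, rest = source.split(":", 1)
    else:
        project_id, rest = source.split(".", 1)
    dataset_id, table_id = rest.split(".", 1)
    return project_id, dataset_id, table_id

print(split_tablename("PROJECT_ID:DATASET_ID.TABLE_ID"))  # ('PROJECT_ID', 'DATASET_ID', 'TABLE_ID')
```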
### How to reproduce
You can reproduce it with the following steps.
- Create a test DAG to execute BigQueryToGCSOperator in Composer environment(`composer-2.0.3-airflow-2.2.3`).
- And give the `source_project_dataset_table` arg a source BigQuery table path in the following format.
- Trigger DAG.
```
source_project_dataset_table = 'PROJECT_ID:DATASET_ID.TABLE_ID'
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22034 | https://github.com/apache/airflow/pull/22506 | 02526b3f64d090e812ebaf3c37a23da2a3e3084e | 02976bef885a5da29a8be59b32af51edbf94466c | "2022-03-07T05:00:21Z" | python | "2022-03-27T20:21:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,015 | ["airflow/models/dag.py", "tests/models/test_dag.py"] | Allow showing more than 25 last DAG runs in the task duration view | ### Apache Airflow version
2.1.2
### What happened
The task duration view for triggered DAGs shows all DAG runs instead of the last n. Changing the number of runs in the `Runs` drop-down menu doesn't change the view. Additionally, the chart loads slowly because it shows all DAG runs.

### What you expected to happen
The number of shown dag runs is 25 (like for scheduled dags), and the last runs are shown. The number of runs button should allow to increase / decrease the number of shown dag runs (respectively the task times of the dag runs).
### How to reproduce
Trigger a dag multiple (> 25) times. Look at the "Task Duration" chart.
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22015 | https://github.com/apache/airflow/pull/29195 | de2889c2e9779177363d6b87dc9020bf210fdd72 | 8b8552f5c4111fe0732067d7af06aa5285498a79 | "2022-03-05T16:16:48Z" | python | "2023-02-25T21:50:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,007 | ["airflow/api_connexion/endpoints/variable_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/variable_schema.py", "airflow/www/static/js/types/api-generated.ts", "tests/api_connexion/endpoints/test_variable_endpoint.py"] | Add Variable, Connection "description" fields available in the API | ### Description
I'd like to get the "description" field from the variable and connection tables available through the REST API for the calls:
1. /variables/{key}
2. /connections/{conn_id}
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22007 | https://github.com/apache/airflow/pull/25370 | faf3c4fe474733965ab301465f695e3cc311169c | 98f16aa7f3b577022791494e13b6aa7057afde9d | "2022-03-04T22:55:04Z" | python | "2022-08-02T21:05:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,996 | ["airflow/providers/ftp/hooks/ftp.py", "airflow/providers/sftp/hooks/sftp.py", "tests/providers/ftp/hooks/test_ftp.py"] | Add test_connection to FTP Hook | ### Description
I would like to test if FTP connections are properly set up.
### Use case/motivation
To test FTP connections via the Airflow UI similar to https://github.com/apache/airflow/pull/19609
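A rough sketch of what such a check could look like on top of the existing hook (`get_conn()` is real; the rest is an assumption, not the shipped implementation):
```python
from airflow.providers.ftp.hooks.ftp import FTPHook

def test_ftp_connection(ftp_conn_id: str = "ftp_default"):
    try:
        conn = FTPHook(ftp_conn_id=ftp_conn_id).get_conn()
        conn.pwd()  # cheap round-trip proving that login works
        return True, "Connection successfully tested"
    except Exception as exc:
        return False, str(exc)
```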
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21996 | https://github.com/apache/airflow/pull/21997 | a9b7dd69008710f1e5b188e4f8bc2d09a5136776 | 26e8d6d7664bbaae717438bdb41766550ff57e4f | "2022-03-04T15:09:39Z" | python | "2022-03-06T10:16:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,987 | ["airflow/providers/amazon/aws/hooks/s3.py"] | Airflow S3 connection name | ### Description
Hi,
I took a look at some issues and PRs and noticed that the `Elastic MapReduce` connection name was changed to `Amazon Elastic MapReduce` lately ([#20746](https://github.com/apache/airflow/issues/20746)).
I think it would be more intuitive if the connection name `S3` were changed to `Amazon S3`; it would also look better in the connection list in the web UI (and it is the official name of S3).
Finally, AWS connections would be the followings:
```
Amazon Web Services
Amazon Redshift
Amazon Elastic MapReduce
Amazon S3
```
Would you mind assigning me to a PR to change it to `Amazon S3`?
It would be a great start to my Airflow contribution journey.
Thank you!
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21987 | https://github.com/apache/airflow/pull/21988 | 2b4d14696b3f32bc5ab71884a6e434887755e5a3 | 9ce45ff756fa825bd363a5a00c2333d91c60c012 | "2022-03-04T07:38:44Z" | python | "2022-03-04T17:25:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,978 | ["airflow/providers/google/cloud/hooks/gcs.py", "tests/providers/google/cloud/hooks/test_gcs.py"] | Add Metadata Upload Support to GCSHook Upload Method | ### Description
When uploading a file using the GCSHook Upload method, allow for optional metadata to be uploaded with the file. This metadata would then be visible on the blob properties in GCS.
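A sketch of the desired call, assuming the new argument would be named `metadata` (the name and the example values are assumptions):
```python
from airflow.providers.google.cloud.hooks.gcs import GCSHook

hook = GCSHook(gcp_conn_id="google_cloud_default")
hook.upload(
    bucket_name="my-bucket",
    object_name="exports/report.csv",
    filename="/tmp/report.csv",
    metadata={"source": "daily-export", "owner": "analytics"},  # proposed argument
)
```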
### Use case/motivation
Being able to associate metadata with a GCS blob is very useful for tracking information about the data stored in the GCS blob.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21978 | https://github.com/apache/airflow/pull/22058 | c1faaf3745dd631d4491202ed245cf8190f35697 | a11d831e3f978826d75e62bd70304c5277a8a1ea | "2022-03-03T21:08:50Z" | python | "2022-03-07T22:28:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,970 | ["docs/helm-chart/manage-dags-files.rst"] | helm chart - mounting-dags-from-a-private-github-repo-using-git-sync-sidecar | ### Describe the issue with documentation
doc link: https://airflow.apache.org/docs/helm-chart/stable/manage-dags-files.html#mounting-dags-from-a-private-github-repo-using-git-sync-sidecar
doc location:
"""
[...]
repo: ssh://git@github.com/<username>/<private-repo-name>.git
[...]
"""
I literally spent one working day making the helm deployment work with the git sync feature.
I prefixed my ssh git repo url with "ssh://" as written in the doc. This resulted in the git-sync container being stuck in a CrashLoopBackOff.
### How to solve the problem
It worked correctly only after I removed the prefix.
### Anything else
chart version: 1.4.0
git-sync image tag: v3.1.6 (default v3.3.0)
Maybe the reason for the issue is the changed image tag. However, I want to share my experience: the doc may be misleading, and for me it was.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21970 | https://github.com/apache/airflow/pull/26632 | 5560a46bfe8a14205c5e8a14f0b5c2ae74ee100c | 05d351b9694c3e25843a8b0e548b07a70a673288 | "2022-03-03T16:10:41Z" | python | "2022-09-27T13:05:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,941 | ["airflow/providers/amazon/aws/hooks/sagemaker.py", "airflow/providers/amazon/aws/operators/sagemaker.py", "tests/providers/amazon/aws/operators/test_sagemaker_transform.py"] | Sagemaker Transform Job fails if there are job with Same name | ### Description
A SageMaker transform job fails if a job with the same name already exists. Say I create a job named 'transform-2021-01-01T00-30-00'. If I clear the Airflow task instance so that the operator re-triggers, the SageMaker job creation fails because a job with the same name already exists. Can we add an `action_if_job_exists` flag controlling the behaviour if the job name already exists, with possible options "increment" (default) and "fail"?
### Use case/motivation
In a production environment failures are inevitable, and with SageMaker jobs we have to ensure a unique name for each run of the job. The SageMaker processing and training operators already offer an option to make the job name unique by appending a count: if I run the same job twice, the second job name becomes 'transform-2021-01-01T00-30-00-1', with the 1 appended thanks to `action_if_job_exists` ([str](https://docs.python.org/3/library/stdtypes.html#str)) -- behaviour if the job name already exists, with possible options "increment" (default) and "fail".
I have faced this issue personally on one of the tasks I am working on. This would save time and cost: instead of re-running the entire workflow just to get unique job names when there are other dependent tasks, one could simply clear the failed task instance after fixing the failure (SageMaker code, Docker image input, etc.) and the DAG would continue from where it failed.
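A usage sketch of the requested behaviour (the flag name mirrors the existing training/processing operators; the config below is a placeholder, not a complete CreateTransformJob payload):
```python
from airflow.providers.amazon.aws.operators.sagemaker import SageMakerTransformOperator

transform = SageMakerTransformOperator(
    task_id="batch_transform",
    config={
        "TransformJobName": "transform-2021-01-01T00-30-00",
        # remaining CreateTransformJob fields elided
    },
    action_if_job_exists="increment",  # proposed flag: "increment" (default) or "fail"
)
```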
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21941 | https://github.com/apache/airflow/pull/25263 | 1fd702e5e55cabb40fe7e480bc47e70d9a036944 | 007b1920ddcee1d78f871d039a6ed8f4d0d4089d | "2022-03-02T13:52:31Z" | python | "2022-08-02T18:20:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,929 | ["airflow/providers/elasticsearch/hooks/elasticsearch.py", "docs/apache-airflow-providers-elasticsearch/hooks/elasticsearch_python_hook.rst", "docs/apache-airflow-providers-elasticsearch/hooks/elasticsearch_sql_hook.rst", "docs/apache-airflow-providers-elasticsearch/hooks/index.rst", "docs/apache-airflow-providers-elasticsearch/index.rst", "tests/always/test_project_structure.py", "tests/providers/elasticsearch/hooks/test_elasticsearch.py", "tests/system/providers/elasticsearch/example_elasticsearch_query.py"] | Elasticsearch hook support DSL | ### Description
The current Elasticsearch provider hook does not support querying with the DSL. Can we implement methods that accept a user-supplied JSON body and return the query results?
By the way, why is the current `ElasticsearchHook`'s parent class `DbApiHook`? I thought `DbApiHook` was meant for relational databases that support `sqlalchemy`.
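Either way, a sketch of the kind of DSL helper being proposed, built directly on the low-level `elasticsearch` client (the class and method names are assumptions, not existing provider code):
```python
from airflow.hooks.base import BaseHook
from elasticsearch import Elasticsearch


class ElasticsearchDslHook(BaseHook):
    """Illustration only: run a raw DSL query and return the response."""

    def __init__(self, conn_id: str = "elasticsearch_default"):
        super().__init__()
        self.conn = self.get_connection(conn_id)

    def search(self, index: str, body: dict) -> dict:
        client = Elasticsearch([f"http://{self.conn.host}:{self.conn.port}"])
        return client.search(index=index, body=body)
```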
### Use case/motivation
I think the Elasticsearch provider hook should be like `MongoHook`: inherit from `BaseHook` and provide more useful methods that work out of the box.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21929 | https://github.com/apache/airflow/pull/24895 | 33b2cd8784dcbc626f79e2df432ad979727c9a08 | 2ddc1004050464c112c18fee81b03f87a7a11610 | "2022-03-02T08:38:07Z" | python | "2022-07-08T21:51:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,923 | ["airflow/api/common/trigger_dag.py", "airflow/jobs/scheduler_job.py", "airflow/models/dag.py", "airflow/models/dagrun.py", "airflow/timetables/base.py", "airflow/utils/types.py", "docs/apache-airflow/howto/timetable.rst", "tests/models/test_dag.py"] | Programmatic customization of run_id for scheduled DagRuns | ### Description
Allow DAG authors to control how `run_id`'s are generated for created DagRuns. Currently the only way to specify a DagRun's `run_id` is through the manual trigger workflow either through the CLI or API and passing in `run_id`. It would be great if DAG authors are able to write a custom logic to generate `run_id`'s from scheduled `DagRunInterval`'s.
### Use case/motivation
In Airflow 1.x, the semantics of `execution_date` were burdensome enough for users that DAG authors would subclass DAG and override `create_dagrun` so that new DagRuns were created with `run_id`'s carrying meaningful context about the DagRun. For example,
```
def create_dagrun(self, **kwargs):
    # Label the run with the date the DAG actually runs (the end of the interval)
    # rather than the logical execution_date.
    run_date = self.following_schedule(kwargs['execution_date'])
    kwargs['run_id'] = run_date.strftime('%A %B %d, %Y')
    return super().create_dagrun(**kwargs)
```
would make the UI DagRun dropdown display the weekday of when the Dag actually ran.
<img width="528" alt="image001" src="https://user-images.githubusercontent.com/9851473/156280393-e261d7fa-dfe0-41db-9887-941510f4070f.png">
After upgrading to Airflow 2.0, with DAG serialization in the scheduler, overridden methods are no longer present on the SerializedDAG, so we are back to `scheduled__<execution_date>` values in the UI dropdown. It would be great if some functionality could be exposed, either through the DAG or just in the UI, to display meaningful values in the DagRun dropdown.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21923 | https://github.com/apache/airflow/pull/25795 | 5c48ed19bd3b554f9c3e881a4d9eb61eeba4295b | 0254f30a5a90f0c3104782525fabdcfdc6d3b7df | "2022-03-02T02:02:30Z" | python | "2022-08-19T13:15:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,897 | ["docs/apache-airflow/logging-monitoring/metrics.rst"] | Metrics documentation incorrectly lists dag_processing.processor_timeouts as a gauge | ### Describe the issue with documentation
According to the [documentation](https://airflow.apache.org/docs/apache-airflow/2.2.4/logging-monitoring/metrics.html), `dag_processing.processor_timeouts` is a gauge.
However, checking the code, it appears to be a counter: https://github.com/apache/airflow/blob/3035d3ab1629d56f3c1084283bed5a9c43258e90/airflow/dag_processing/manager.py#L1004
### How to solve the problem
Move the metric to the counter section.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21897 | https://github.com/apache/airflow/pull/23393 | 82c244f9c7f24735ee952951bcb5add45422d186 | fcfaa8307ac410283f1270a0df9e557570e5ffd3 | "2022-03-01T13:40:37Z" | python | "2022-05-08T21:11:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,891 | ["airflow/providers/apache/hive/provider.yaml", "setup.py", "tests/providers/apache/hive/hooks/test_hive.py", "tests/providers/apache/hive/transfers/test_hive_to_mysql.py", "tests/providers/apache/hive/transfers/test_hive_to_samba.py", "tests/providers/apache/hive/transfers/test_mssql_to_hive.py", "tests/providers/apache/hive/transfers/test_mysql_to_hive.py"] | hive provider support for python 3.9 | ### Apache Airflow Provider(s)
apache-hive
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.2.4 (latest released)
### Operating System
Debian “buster”
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
The Hive provider cannot be used in the Python 3.9 Airflow image without also manually installing the `PyHive`, `sasl`, and `thrift-sasl` Python libraries.
### What you expected to happen
The Hive provider can be used in the Python 3.9 Airflow image after installing only the Hive provider.
### How to reproduce
_No response_
### Anything else
It looks like Hive provider support for Python 3.9 was removed in https://github.com/apache/airflow/pull/15515#issuecomment-860264240 because the `sasl` library did not support Python 3.9. However, Python 3.9 is now supported in `sasl`: https://github.com/cloudera/python-sasl/issues/21#issuecomment-865914647
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21891 | https://github.com/apache/airflow/pull/21893 | 76899696fa00c9f267316f27e088852556ebcccf | 563ecfa0539f5cbd42a715de0e25e563bd62c179 | "2022-03-01T10:27:55Z" | python | "2022-03-01T22:16:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,808 | ["airflow/providers/amazon/aws/operators/sagemaker.py", "tests/providers/amazon/aws/operators/test_sagemaker_base.py"] | Add default 'aws_conn_id' to SageMaker Operators | The SageMaker Operators not having a default value for `aws_conn_id` is a pain, we should fix that. See EKS operators for an example: https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/operators/eks.py
_Originally posted by @ferruzzi in https://github.com/apache/airflow/pull/21673#discussion_r813414043_ | https://github.com/apache/airflow/issues/21808 | https://github.com/apache/airflow/pull/23515 | 828016747ac06f6fb2564c07bb8be92246f42567 | 5d1e6ff19ab4a63259a2c5aed02b601ca055a289 | "2022-02-24T22:58:10Z" | python | "2022-05-09T17:36:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,768 | ["airflow/models/baseoperator.py", "airflow/models/dag.py"] | raise TypeError when default_args not a dictionary | ### Apache Airflow version
2.2.4 (latest released)
### What happened
When triggering the dag below, it runs when it should fail. A set is being passed to default_args instead of a dictionary, yet the dag still succeeds.
### What you expected to happen
I expected the dag to fail as the default_args parameter should only be a dictionary.
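For reference, a minimal sketch of the kind of guard being asked for (the function name and where it would live are assumptions):
```python
def validate_default_args(default_args):
    """Reject anything that is not a dict (or None) before the DAG accepts it."""
    if default_args is not None and not isinstance(default_args, dict):
        raise TypeError(
            f"default_args must be a dict, got {type(default_args).__name__}"
        )
    return default_args or {}
```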
### How to reproduce
```
from airflow.models import DAG
from airflow.operators.python import PythonVirtualenvOperator, PythonOperator
from airflow.utils.dates import days_ago
def callable1():
pass
with DAG(
dag_id="virtualenv_python_operator",
default_args={"owner: airflow"},
schedule_interval=None,
start_date=days_ago(2),
tags=["core"],
) as dag:
task = PythonOperator(
task_id="check_errors",
python_callable=callable1,
)
```
### Operating System
Docker (debian:buster)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
Astro CLI with images:
- quay.io/astronomer/ap-airflow-dev:2.2.4-1-onbuild
- quay.io/astronomer/ap-airflow-dev:2.2.3-2
- quay.io/astronomer/ap-airflow-dev:2.2.0-5-buster-onbuild
### Anything else
Bug happens every time.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21768 | https://github.com/apache/airflow/pull/21809 | 7724a5a2ec9531f03497a259c4cd7823cdea5e0c | 7be204190d6079e49281247d3e2c644535932925 | "2022-02-23T18:50:03Z" | python | "2022-03-07T00:18:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,671 | ["airflow/providers/amazon/aws/utils/emailer.py", "docs/apache-airflow/howto/email-config.rst", "tests/providers/amazon/aws/utils/test_emailer.py"] | Amazon Airflow Provider | Broken AWS SES as backend for Email | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
```
apache-airflow==2.2.2
apache-airflow-providers-amazon==2.4.0
```
### Apache Airflow version
2.2.2
### Operating System
Amazon Linux 2
### Deployment
MWAA
### Deployment details
_No response_
### What happened
As part of this PR https://github.com/apache/airflow/pull/18042, the signature of the function `airflow.providers.amazon.aws.utils.emailer.send_email` is no longer compatible with how `airflow.utils.email.send_email` invokes the backend. Essentially, the functionality of using SES as an email backend is broken.
### What you expected to happen
This behavior is erroneous because the signature of `airflow.providers.amazon.aws.utils.emailer.send_email` should be compatible with how we call the backend function in `airflow.utils.email.send_email`:
```
return backend(
to_comma_separated,
subject,
html_content,
files=files,
dryrun=dryrun,
cc=cc,
bcc=bcc,
mime_subtype=mime_subtype,
mime_charset=mime_charset,
conn_id=backend_conn_id,
**kwargs,
)
```
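A sketch of a backend signature that would line up with that call site (the defaults are assumptions and the body is elided):
```python
def send_email(
    to,
    subject,
    html_content,
    files=None,
    dryrun=False,
    cc=None,
    bcc=None,
    mime_subtype="mixed",
    mime_charset="utf-8",
    conn_id="aws_default",
    **kwargs,
):
    """Accept the same positional/keyword arguments that airflow.utils.email passes."""
    ...
```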
### How to reproduce
## Use AWS SES as Email Backend
```
[email]
email_backend = airflow.providers.amazon.aws.utils.emailer.send_email
email_conn_id = aws_default
```
## Try sending an Email
```
from airflow.utils.email import send_email
def email_callback(**kwargs):
send_email(to=['test@hello.io'], subject='test', html_content='content')
email_task = PythonOperator(
task_id='email_task',
python_callable=email_callback,
)
```
## The bug shows up
```
File "/usr/local/airflow/dags/environment_check.py", line 46, in email_callback
send_email(to=['test@hello.io'], subject='test', html_content='content')
File "/usr/local/lib/python3.7/site-packages/airflow/utils/email.py", line 66, in send_email
**kwargs,
TypeError: send_email() missing 1 required positional argument: 'html_content'
```
### Anything else
Every time.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21671 | https://github.com/apache/airflow/pull/21681 | b48dc4da9ec529745e689d101441a05a5579ef46 | b28f4c578c0b598f98731350a93ee87956d866ae | "2022-02-18T18:16:17Z" | python | "2022-02-19T09:34:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,656 | ["airflow/models/baseoperator.py"] | Airflow >= 2.2.0 execution date change is failing TaskInstance get_task_instances method and possibly others | ### Apache Airflow version
2.2.3 (latest released)
### What happened
This is my first time reporting or posting on this forum. Please let me know if I need to provide any more information. Thanks for looking at this!
I have a Python Operator that uses the BaseOperator get_task_instances method and during the execution of this method, I encounter the following error:
<img width="1069" alt="Screen Shot 2022-02-17 at 2 28 48 PM" src="https://user-images.githubusercontent.com/18559784/154581673-718bc199-8390-49cf-a3fe-8972b6f39f81.png">
This error is from doing an upgrade from airflow 1.10.15 -> 2.2.3.
I am using SQLAlchemy version 1.2.24, but I also tried version 1.2.23 and encountered the same error. However, I do not think this is a SQLAlchemy issue.
The issue seems to have been introduced with Airflow 2.2.0 (PR: https://github.com/apache/airflow/pull/17719/files), where TaskInstance.execution_date changed from being a column to this association_proxy. I do not have deep knowledge of SQLAlchemy, so I am not sure why this change was made, but it results in the error I'm getting.
2.2.0+
<img width="536" alt="Screen Shot 2022-02-17 at 2 41 00 PM" src="https://user-images.githubusercontent.com/18559784/154583252-4729b44d-40e2-4a89-9018-95b09ef4da76.png">
1.10.15
<img width="428" alt="Screen Shot 2022-02-17 at 2 56 15 PM" src="https://user-images.githubusercontent.com/18559784/154585325-4546309c-66b6-4e69-aba2-9b6979762a1b.png">
If you follow the stack trace you will get to this chunk of code, which leads to the error: the association_proxy has a '__clause_element__' attribute, but that attribute raises the exception shown above when called.
<img width="465" alt="Screen Shot 2022-02-17 at 2 43 51 PM" src="https://user-images.githubusercontent.com/18559784/154583639-a7957209-b19e-4134-a5c2-88d53176709c.png">
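For context, a minimal sketch of the kind of call that hits this code path (the callable and dates are illustrative assumptions, not my actual DAG):
```python
from datetime import datetime, timedelta


def check_previous_runs(task, **context):
    # get_task_instances() filters on TaskInstance.execution_date internally,
    # which is where the association proxy raises in 2.2.x
    return task.get_task_instances(
        start_date=datetime(2022, 2, 1),
        end_date=datetime(2022, 2, 1) + timedelta(days=1),
    )
```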
### What you expected to happen
_No response_
### How to reproduce
_No response_
### Operating System
Linux from the official airflow helm chart docker image python version 3.7
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon 2.4.0
apache-airflow-providers-celery 2.1.0
apache-airflow-providers-cncf-kubernetes 2.2.0
apache-airflow-providers-databricks 2.2.0
apache-airflow-providers-docker 2.3.0
apache-airflow-providers-elasticsearch 2.1.0
apache-airflow-providers-ftp 2.0.1
apache-airflow-providers-google 6.2.0
apache-airflow-providers-grpc 2.0.1
apache-airflow-providers-hashicorp 2.1.1
apache-airflow-providers-http 2.0.1
apache-airflow-providers-imap 2.0.1
apache-airflow-providers-microsoft-azure 3.4.0
apache-airflow-providers-mysql 2.1.1
apache-airflow-providers-odbc 2.0.1
apache-airflow-providers-postgres 2.4.0
apache-airflow-providers-redis 2.0.1
apache-airflow-providers-sendgrid 2.0.1
apache-airflow-providers-sftp 2.3.0
apache-airflow-providers-slack 4.1.0
apache-airflow-providers-sqlite 2.0.1
apache-airflow-providers-ssh 2.3.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
The only extra dependency I am using is awscli==1.20.65. I have changed very little in the deployment besides a few environment variables and some pod annotations.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21656 | https://github.com/apache/airflow/pull/21705 | b2c0a921c155e82d1140029e6495594061945025 | bb577a98494369b22ae252ac8d23fb8e95508a1c | "2022-02-17T22:53:28Z" | python | "2022-02-22T20:12:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,647 | ["docs/apache-airflow-providers-jenkins/connections.rst", "docs/apache-airflow-providers-jenkins/index.rst"] | Jenkins Connection Example | ### Describe the issue with documentation
I need to configure a connection to our Jenkins and I can't find an example anywhere.
I suppose that I need to define an HTTP connection with the format:
`http://username:password@jenkins_url`
However, I have no idea whether I should add `/api`, so that the URL would be:
`http://username:password@jenkins_url/api`
### How to solve the problem
Is it possible to include at least one Jenkins connection example in the documentation?
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21647 | https://github.com/apache/airflow/pull/22682 | 3849b4e709acfc9e85496aa2dededb2dae117fc7 | cb41d5c02e3c53a24f9dc148e45e696891c347c2 | "2022-02-17T16:40:43Z" | python | "2022-04-02T20:04:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,638 | ["airflow/models/connection.py", "tests/models/test_connection.py"] | Spark Connection with k8s in URL not mapped correctly | ### Official Helm Chart version
1.2.0
### Apache Airflow version
2.1.4
### Kubernetes Version
v1.21.6+bb8d50a
### Helm Chart configuration
I defined a new connection string for AIRFLOW_CONN_SPARK_DEFAULT in values.yaml as in the following section (base64 encoded; the decoded string is `spark://k8s://100.68.0.1:443?deploy-mode=cluster`):
```
extraSecrets:
'{{ .Release.Name }}-airflow-connections':
data: |
AIRFLOW_CONN_SPARK_DEFAULT: 'c3Bhcms6Ly9rOHM6Ly8xMDAuNjguMC4xOjQ0Mz9kZXBsb3ktbW9kZT1jbHVzdGVy'
```
In the extraEnvFrom section I defined the following:
```
extraEnvFrom: |
- secretRef:
name: '{{ .Release.Name }}-airflow-connections'
```
### Docker Image customisations
added apache-airflow-providers-apache-spark to base Image
### What happened
The Airflow connection is mapped incorrectly because of the k8s:// within the URL. If I ask for the connection with the command "airflow connections get spark_default", then host=k8s and schema=/100.60.0.1:443, which is wrong.
### What you expected to happen
The Spark connection based on k8s (spark://k8s://100.68.0.1:443?deploy-mode=cluster) should be parsed correctly.
### How to reproduce
define in values.yaml
```
extraSecrets:
'{{ .Release.Name }}-airflow-connections':
data: |
AIRFLOW_CONN_SPARK_DEFAULT: 'c3Bhcms6Ly9rOHM6Ly8xMDAuNjguMC4xOjQ0Mz9kZXBsb3ktbW9kZT1jbHVzdGVy'
extraEnvFrom: |
- secretRef:
name: '{{ .Release.Name }}-airflow-connections'
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21638 | https://github.com/apache/airflow/pull/31465 | 232771869030d708c57f840aea735b18bd4bffb2 | 0560881f0eaef9c583b11e937bf1f79d13e5ac7c | "2022-02-17T09:39:46Z" | python | "2023-06-19T09:32:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,615 | ["chart/tests/test_create_user_job.py"] | ArgoCD deployment: Cannot synchronize after updating values | ### Official Helm Chart version
1.4.0 (latest released)
### Apache Airflow version
2.2.3 (latest released)
### Kubernetes Version
v1.20.12-gke.1500
### Helm Chart configuration
defaultAirflowTag: "2.2.3-python3.9"
airflowVersion: "2.2.3"
createUserJob:
useHelmHooks: false
migrateDatabaseJob:
useHelmHooks: false
images:
migrationsWaitTimeout: 300
executor: "KubernetesExecutor"
### Docker Image customisations
_No response_
### What happened
I was able to configure the synchronization properly when I added the application to _ArgoCD_ the first time, but after updating an environment value, it is set properly (the scheduler is restarted and works fine), but _ArgoCD_ cannot synchronize the jobs (_airflow-run-airflow-migrations_ and _airflow-create-user_), so it shows that the application is not synchronized.
Since I deploy _Airflow_ with _ArgoCD_ and disable the _Helm_ hooks, these jobs are not deleted when finished and remain as completed.
The workaround I am using is to delete these jobs manually, but I have to repeat this after every update.
Should the attribute `ttlSecondsAfterFinished: 0` be included below this line when the _Helm_ hooks are disabled in the job templates?
https://github.com/apache/airflow/blob/af2c047320c5f0742f466943c171ec761d275bab/chart/templates/jobs/migrate-database-job.yaml#L48
PS: I created a custom chart in order to synchronize my values files with _ArgoCD_. This chart only includes a dependency on the _Airflow_ chart plus my values files (I use one per environment), and the _Helm_ configuration shown in the _Helm Chart configuration_ section above sits under an _airflow_ block in my values files.
This is my _Chart.yaml_:
```yaml
apiVersion: v2
name: my-airflow
version: 1.0.0
description: Airflow Chart with my values
appVersion: "2.2.3"
dependencies:
- name: airflow
version: 1.4.0
repository: https://airflow.apache.org
```
### What you expected to happen
I expect that _ArgoCD_ synchronizes after changing an environment variable in my values file.
### How to reproduce
- Deploy the chart as an _ArgoCD_ application.
- Change an environment variable in the values file.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21615 | https://github.com/apache/airflow/pull/21776 | dade6e075f5229f15b8b0898393c529e0e9851bc | 608b8c4879c188881e057e6318a0a15f54c55c7b | "2022-02-16T13:19:19Z" | python | "2022-02-25T01:46:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,566 | ["setup.cfg"] | typing_extensions package isn't installed with apache-airflow-providers-amazon causing an issue for SqlToS3Operator | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
3.0.0rc2
### Apache Airflow version
2.2.3 (latest released)
### Python version
Python 3.9.7 (default, Oct 12 2021, 02:43:43)
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
I was working on adding this operator to a DAG and the import failed because a required dependency was missing.
### What you expected to happen
_No response_
### How to reproduce
Add
```
from airflow.providers.amazon.aws.transfers.sql_to_s3 import SqlToS3Operator
```
to a dag
### Anything else
This can be resolved by adding `typing-extensions==4.1.1` to `requirements.txt` when building the project (locally)
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21566 | https://github.com/apache/airflow/pull/21567 | 9407f11c814413064afe09c650a79edc45807965 | e4ead2b10dccdbe446f137f5624255aa2ff2a99a | "2022-02-14T20:21:15Z" | python | "2022-02-25T21:26:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,559 | ["airflow/providers/databricks/hooks/databricks.py", "airflow/providers/databricks/hooks/databricks_base.py", "airflow/providers/databricks/operators/databricks.py", "docs/apache-airflow-providers-databricks/operators/run_now.rst", "docs/apache-airflow-providers-databricks/operators/submit_run.rst", "tests/providers/databricks/hooks/test_databricks.py", "tests/providers/databricks/operators/test_databricks.py"] | Databricks hook: Retry also on HTTP Status 429 - rate limit exceeded | ### Apache Airflow Provider(s)
databricks
### Versions of Apache Airflow Providers
2.2.0
### Apache Airflow version
2.2.3 (latest released)
### Operating System
Any
### Deployment
Other
### Deployment details
_No response_
### What happened
Operations aren't retried when Databricks API returns HTTP Status 429 - rate limit exceeded
### What you expected to happen
The operation should be retried.
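A sketch of the kind of retry predicate change being requested (the function name and structure are assumptions, not the provider's current code):
```python
import requests


def is_retryable(exc: requests.exceptions.RequestException) -> bool:
    """Retry on connection problems, 429 rate limiting, and 5xx responses."""
    if isinstance(exc, (requests.exceptions.ConnectionError, requests.exceptions.Timeout)):
        return True
    response = getattr(exc, "response", None)
    return response is not None and (
        response.status_code == 429 or response.status_code >= 500
    )
```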
### How to reproduce
This happens when there are many calls to the API, especially when some of them happen outside of Airflow.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21559 | https://github.com/apache/airflow/pull/21852 | c108f264abde68e8f458a401296a53ccbe7a47f6 | 12e9e2c695f9ebb9d3dde9c0f7dfaa112654f0d6 | "2022-02-14T10:08:01Z" | python | "2022-03-13T23:19:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,545 | ["airflow/providers/apache/beam/hooks/beam.py", "docs/docker-stack/docker-images-recipes/go-beam.Dockerfile", "docs/docker-stack/recipes.rst", "tests/providers/apache/beam/hooks/test_beam.py"] | Add Go to docker images | ### Description
Following https://github.com/apache/airflow/pull/20386, we now support execution of Beam pipelines written in Go.
We might want to add Go to the images.
The Beam Go SDK's first stable release is `v2.33.0`, and it requires at least `Go v1.16`.
### Use case/motivation
This way, people running Airflow from Docker can build/run their Go pipelines.
### Related issues
Issue:
https://github.com/apache/airflow/issues/20283
PR:
https://github.com/apache/airflow/pull/20386
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21545 | https://github.com/apache/airflow/pull/22296 | 7bd165fbe2cbbfa8208803ec352c5d16ca2bd3ec | 4a1503b39b0aaf50940c29ac886c6eeda35a79ff | "2022-02-13T11:38:59Z" | python | "2022-03-17T03:57:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,537 | ["airflow/providers/google/cloud/transfers/sql_to_gcs.py", "tests/providers/google/cloud/transfers/test_sql_to_gcs.py"] | add partition option for parquet files by columns in BaseSQLToGCSOperator | ### Description
Add the ability to partition parquet files by columns. Right now you can partition files only by size.
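A usage sketch on one of the concrete subclasses (the `partition_columns` parameter name is an assumption about how the option could be exposed):
```python
from airflow.providers.google.cloud.transfers.mysql_to_gcs import MySQLToGCSOperator

export_orders = MySQLToGCSOperator(
    task_id="export_orders",
    sql="SELECT * FROM orders",
    bucket="my-bucket",
    filename="orders/{}.parquet",
    export_format="parquet",
    partition_columns=["country", "order_date"],  # proposed parameter
)
```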
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21537 | https://github.com/apache/airflow/pull/28677 | 07a17bafa1c3de86a993ee035f91b3bbd284e83b | 35a8ffc55af220b16ea345d770f80f698dcae3fb | "2022-02-12T10:56:36Z" | python | "2023-01-10T05:55:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,486 | ["airflow/providers/postgres/example_dags/example_postgres.py", "airflow/providers/postgres/operators/postgres.py", "docs/apache-airflow-providers-postgres/operators/postgres_operator_howto_guide.rst", "tests/providers/postgres/operators/test_postgres.py"] | Allow to set statement behavior for PostgresOperator | ### Body
Add the ability to pass parameters like `statement_timeout` from PostgresOperator.
https://www.postgresql.org/docs/14/runtime-config-client.html#GUC-STATEMENT-TIMEOUT
The goal is to allow control over a specific query rather than setting the parameters at the connection level.
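A usage sketch of what this could look like (the `runtime_parameters` argument name is an assumption about how the per-query settings might be passed):
```python
from airflow.providers.postgres.operators.postgres import PostgresOperator

long_query = PostgresOperator(
    task_id="long_query",
    postgres_conn_id="postgres_default",
    sql="SELECT * FROM big_table",
    runtime_parameters={"statement_timeout": "3000ms"},  # proposed argument
)
```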
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/21486 | https://github.com/apache/airflow/pull/21551 | ecc5b74528ed7e4ecf05c526feb2c0c85f463429 | 0ec56775df66063cab807d886e412ebf88c572bf | "2022-02-10T10:08:32Z" | python | "2022-03-18T15:09:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,412 | ["airflow/providers/microsoft/azure/hooks/cosmos.py", "tests/providers/microsoft/azure/hooks/test_azure_cosmos.py", "tests/providers/microsoft/azure/operators/test_azure_cosmos.py"] | v3.5.0 airflow.providers.microsoft.azure.operators.cosmos not running | ### Apache Airflow version
2.2.3 (latest released)
### What happened
Submitting this on advice from the community Slack: Attempting to use the v3.5.0 `AzureCosmosInsertDocumentOperator` operator fails with an attribute error: `AttributeError: 'CosmosClient' object has no attribute 'QueryDatabases'`
### What you expected to happen
Expected behaviour is that the document is upserted correctly. I've traced through the source: `does_database_exist()` calls `QueryDatabases()` on the result of `self.get_conn()`. The thing is, `get_conn()` (AFAICT) returns an actual Microsoft Azure `CosmosClient`, which definitely does not have a `QueryDatabases()` method (it's `query_databases()`).
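For reference, a sketch of how the existence check could be written against the v4 SDK's snake_case API (the query shape here is an assumption):
```python
from azure.cosmos import CosmosClient


def database_exists(client: CosmosClient, database_name: str) -> bool:
    """v4 SDK check: query_databases() replaces the removed QueryDatabases()."""
    existing = list(
        client.query_databases(
            query="SELECT * FROM r WHERE r.id=@id",
            parameters=[{"name": "@id", "value": database_name}],
        )
    )
    return len(existing) > 0
```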
### How to reproduce
From what I can see, any attempt to use this operator on airflow 2.2.3 will fail in this way
### Operating System
Ubuntu 18.04.5 LTS
### Versions of Apache Airflow Providers
azure-batch==12.0.0
azure-common==1.1.28
azure-core==1.22.0
azure-cosmos==4.2.0
azure-datalake-store==0.0.52
azure-identity==1.7.1
azure-keyvault==4.1.0
azure-keyvault-certificates==4.3.0
azure-keyvault-keys==4.4.0
azure-keyvault-secrets==4.3.0
azure-kusto-data==0.0.45
azure-mgmt-containerinstance==1.5.0
azure-mgmt-core==1.3.0
azure-mgmt-datafactory==1.1.0
azure-mgmt-datalake-nspkg==3.0.1
azure-mgmt-datalake-store==0.5.0
azure-mgmt-nspkg==3.0.2
azure-mgmt-resource==20.1.0
azure-nspkg==3.0.2
azure-storage-blob==12.8.1
azure-storage-common==2.1.0
azure-storage-file==2.1.0
msrestazure==0.6.4
### Deployment
Virtualenv installation
### Deployment details
Clean standalone install I am using for evaluating airflow for our environment
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21412 | https://github.com/apache/airflow/pull/21514 | de41ccc922b3d1f407719744168bb6822bde9a58 | 3c4524b4ec2b42a8af0a8c7b9d8f1d065b2bfc83 | "2022-02-08T05:53:54Z" | python | "2022-02-23T16:39:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,388 | ["airflow/providers/google/cloud/transfers/gcs_to_gcs.py", "tests/providers/google/cloud/transfers/test_gcs_to_gcs.py"] | Optionally raise an error if source file does not exist in GCSToGCSOperator | ### Description
Right now when using GCSToGCSOperator to copy a file from one bucket to another, if the source file does not exist, nothing happens and the task is considered successful. This could be good for some use cases, for example, when you want to copy all the files from a directory or that match a specific pattern.
But for some other cases, like when you only want to copy one specific blob, it might be useful to raise an exception if the source file can't be found. Otherwise, the task would be failing silently.
My proposal is to add a new flag to GCSToGCSOperator to enable this feature. By default, for backward compatibility, the behavior would be the current one. But it would be possible to force the source file to be required and mark the task as failed if it doesn't exist.
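A usage sketch with the proposed flag (the name `source_object_required` and its default are assumptions):
```python
from airflow.providers.google.cloud.transfers.gcs_to_gcs import GCSToGCSOperator

copy_blob = GCSToGCSOperator(
    task_id="copy_single_blob",
    source_bucket="src-bucket",
    source_object="exports/2022-02-07/data.csv",
    destination_bucket="dst-bucket",
    destination_object="backups/data.csv",
    source_object_required=True,  # proposed flag; default False keeps current behavior
)
```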
### Use case/motivation
The task would fail if the source file to copy does not exist, but only if you enable it.
### Related issues
If you want to be sure that the source file exists and is copied on every execution, the operator currently does not allow you to make the task fail. If the status is successful but nothing is written to the destination, it fails silently.
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21388 | https://github.com/apache/airflow/pull/21391 | a2abf663157aea14525e1a55eb9735ba659ae8d6 | 51aff276ca4a33ee70326dd9eea6fba59f1463a3 | "2022-02-07T12:15:28Z" | python | "2022-02-10T19:30:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 21,380 | ["airflow/providers/databricks/hooks/databricks.py", "airflow/providers/databricks/operators/databricks.py", "tests/providers/databricks/hooks/test_databricks.py", "tests/providers/databricks/operators/test_databricks.py"] | Databricks: support for triggering jobs by name | ### Description
The DatabricksRunNowOperator supports triggering job runs by job ID. We would like to extend the operator to also support triggering jobs by name. This will likely require first making an API call to list jobs in order to find the appropriate job id.
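A usage sketch of the requested option (the `job_name` parameter is the proposed addition; resolution would go through the jobs list API):
```python
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

run_nightly = DatabricksRunNowOperator(
    task_id="run_nightly_etl",
    databricks_conn_id="databricks_default",
    job_name="nightly-etl",  # proposed alternative to job_id
)
```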
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/21380 | https://github.com/apache/airflow/pull/21663 | 537c24433014d3d991713202df9c907e0f114d5d | a1845c68f9a04e61dd99ccc0a23d17a277babf57 | "2022-02-07T10:23:18Z" | python | "2022-02-26T21:55:30Z" |