status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/airflow | https://github.com/apache/airflow | 28,452 | ["airflow/providers/docker/operators/docker_swarm.py", "tests/providers/docker/operators/test_docker_swarm.py"] | TaskInstances do not succeed when using enable_logging=True option in DockerSwarmOperator | ### Apache Airflow Provider(s)
docker
### Versions of Apache Airflow Providers
apache-airflow-providers-celery==3.1.0
apache-airflow-providers-docker==3.3.0
### Apache Airflow version
2.5.0
### Operating System
centos 7
### Deployment
Other Docker-based deployment
### Deployment details
Running on a docker-swarm cluster deployed locally.
### What happened
Same issue as https://github.com/apache/airflow/issues/13675
With logging_enabled=True the DAG never completes and stays in running.
When using DockerSwarmOperator together with the default enable_logging=True option, tasks do not succeed and stay in state running. When checking the docker service logs I can clearly see that the container ran and ended successfully. Airflow however does not recognize that the container finished and keeps the tasks in state running.
### What you think should happen instead
DAG should complete.
### How to reproduce
Docker-compose deployment:
```console
curl -LfO 'https://airflow.apache.org/docs/apache-airflow/2.5.0/docker-compose.yaml'
docker compose up airflow-init
docker compose up -d
```
DAG code:
```python
from airflow import DAG
from docker.types import Mount, SecretReference
from airflow.providers.docker.operators.docker_swarm import DockerSwarmOperator
from datetime import timedelta
from airflow.utils.dates import days_ago
from airflow.models import Variable
# Setup default args for the job
default_args = {
    'owner': 'airflow',
    'start_date': days_ago(2),
    'retries': 0
}

# Create the DAG
dag = DAG(
    'test_dag',  # DAG ID
    default_args=default_args,
    schedule_interval='0 0 * * *',
    catchup=False
)

# Create the DAG object
with dag as dag:
    docker_swarm_task = DockerSwarmOperator(
        task_id="job_run",
        image="<any image>",
        execution_timeout=timedelta(minutes=5),
        command="<specific code>",
        api_version='auto',
        tty=True,
        enable_logging=True
    )
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28452 | https://github.com/apache/airflow/pull/35677 | 3bb5978e63f3be21a5bb7ae89e7e3ce9d06a4ab8 | 882108862dcaf08e7f5da519b3d186048d4ec7f9 | "2022-12-19T03:51:53Z" | python | "2023-12-06T22:07:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,381 | ["Dockerfile.ci", "airflow/www/extensions/init_views.py", "airflow/www/package.json", "airflow/www/templates/swagger-ui/index.j2", "airflow/www/webpack.config.js", "airflow/www/yarn.lock", "setup.cfg"] | CVE-2019-17495 for swagger-ui | ### Apache Airflow version
2.5.0
### What happened
This issue https://github.com/apache/airflow/issues/18383 still isn't closed. It seems like the underlying swagger-ui bundle has been abandoned by its maintainer, and we should instead point the swagger UI bundle to this fork, which is kept up to date:
https://github.com/bartsanchez/swagger_ui_bundle
Edit: it seems like this might not be coming from swagger_ui_bundle any more but instead perhaps from connexion. I'm not familiar with Python dependencies, so forgive me if I'm mis-reporting this.
There are CVE scanner tools that report https://github.com/advisories/GHSA-c427-hjc3-wrfw against the apache/airflow:2.1.4 image.
The Python deps include swagger-ui-2.2.10 and swagger-ui-3.30.0 as part of the bundle, already shipped at ~/.local/lib/python3.6/site-packages/swagger_ui_bundle (containing swagger-ui-2.2.10 and swagger-ui-3.30.0).
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
any
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28381 | https://github.com/apache/airflow/pull/28788 | 35a8ffc55af220b16ea345d770f80f698dcae3fb | 35ad16dc0f6b764322b1eb289709e493fbbb0ae0 | "2022-12-15T13:50:45Z" | python | "2023-01-10T10:24:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,356 | ["airflow/config_templates/default_webserver_config.py"] | CSRF token should be expire with session | ### Apache Airflow version
2.5.0
### What happened
In the default configuration, the CSRF token [expires in one hour](https://pythonhosted.org/Flask-WTF/config.html#forms-and-csrf). This setting leads to frequent errors in the UI – for no good reason.
### What you think should happen instead
A short expiration date for the CSRF token is not the right value in my view and I [agree with this answer](https://security.stackexchange.com/a/56520/22108) that the CSRF token should basically never expire, instead pegging itself to the current session.
That is, the CSRF token should last as long as the current session. The easiest way to accomplish this is by generating the CSRF token from the session id.
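A minimal sketch of that behavior with the stock Flask-WTF setup used by the default Airflow webserver config (treat the exact file and keys as an assumption):
```python
# webserver_config.py -- sketch only
# Flask-WTF reads WTF_CSRF_TIME_LIMIT; None means the CSRF token stays valid
# for the life of the session instead of expiring after a fixed hour.
WTF_CSRF_ENABLED = True
WTF_CSRF_TIME_LIMIT = None
```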
### How to reproduce
_No response_
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28356 | https://github.com/apache/airflow/pull/28730 | 04306f18b0643dfed3ed97863bbcf24dc50a8973 | 543e9a592e6b9dc81467c55169725e192fe95e89 | "2022-12-14T10:21:12Z" | python | "2023-01-10T23:25:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,296 | ["airflow/ti_deps/deps/prev_dagrun_dep.py", "tests/models/test_dagrun.py"] | Dynamic task mapping does not correctly handle depends_on_past | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Using Airflow 2.4.2.
I've got a task that retrieves some filenames, which then creates dynamically mapped tasks to move the files, one per task.
I'm using a similar task across multiple DAGs. However, task mapping fails on some DAG runs: it inconsistently happens per DAG run, and some DAGs do not seem to be affected at all. These seem to be the DAGs where no task was ever mapped, so that the mapped task instance ended up in a Skipped state.
What happens is that multiple files will be found, but only a single dynamically mapped task will be created. This task never starts and has map_index of -1. It can be found under the "List instances, all runs" menu, but says "No Data found." under the "Mapped Tasks" tab.
When I press the "Run" button when the mapped task is selected, the following error appears:
```
Could not queue task instance for execution, dependencies not met: Previous Dagrun State: depends_on_past is true for this task's DAG, but the previous task instance has not run yet., Task has been mapped: The task has yet to be mapped!
```
The previous task *has* run however. No errors appeared in my Airflow logs.
### What you think should happen instead
The appropriate number of task instances should be created; they should correctly resolve the `depends_on_past` check and then proceed to run correctly.
### How to reproduce
This DAG reliably reproduces the error for me. The first set of mapped tasks succeeds, the subsequent ones do not.
```python
from airflow import DAG
from airflow.decorators import task
import datetime as dt
from airflow.operators.python import PythonOperator
@task
def get_filenames_kwargs():
    return [
        {"file_name": i}
        for i in range(10)
    ]


def print_filename(file_name):
    print(file_name)


with DAG(
    dag_id="dtm_test",
    start_date=dt.datetime(2022, 12, 10),
    default_args={
        "owner": "airflow",
        "depends_on_past": True,
    },
    schedule="@daily",
) as dag:
    get_filenames_task = get_filenames_kwargs.override(task_id="get_filenames_task")()

    print_filename_task = PythonOperator.partial(
        task_id="print_filename_task",
        python_callable=print_filename,
    ).expand(op_kwargs=get_filenames_task)

    # Perhaps redundant
    get_filenames_task >> print_filename_task
```
### Operating System
Amazon Linux 2
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28296 | https://github.com/apache/airflow/pull/28379 | a62840806c37ef87e4112c0138d2cdfd980f1681 | 8aac56656d29009dbca24a5948c2a2097043f4f3 | "2022-12-12T07:36:52Z" | python | "2022-12-15T16:43:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,270 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/dag_processing/manager.py", "tests/api_internal/endpoints/test_rpc_api_endpoint.py", "tests/api_internal/test_internal_api_call.py", "tests/dag_processing/test_manager.py"] | AIP-44 Migrate DagFileProcessorManager._deactivate_stale_dags to Internal API | null | https://github.com/apache/airflow/issues/28270 | https://github.com/apache/airflow/pull/28476 | c18dbe963ad87c03d49e95dfe189b765cc18fbec | 29a26a810ee8250c30f8ba0d6a72bc796872359c | "2022-12-09T19:55:02Z" | python | "2023-01-25T21:26:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,268 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/dag_processing/processor.py", "airflow/utils/log/logging_mixin.py", "tests/dag_processing/test_processor.py"] | AIP-44 Migrate DagFileProcessor.manage_slas to Internal API | null | https://github.com/apache/airflow/issues/28268 | https://github.com/apache/airflow/pull/28502 | 7e2493e3c8b2dbeb378dba4e40110ab1e4ad24da | 0359a42a3975d0d7891a39abe4395bdd6f210718 | "2022-12-09T19:54:41Z" | python | "2023-01-23T20:54:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,267 | ["airflow/api_internal/internal_api_call.py", "airflow/cli/commands/internal_api_command.py", "airflow/cli/commands/scheduler_command.py", "airflow/www/app.py", "tests/api_internal/test_internal_api_call.py"] | AIP-44 Provide information to internal_api_call decorator about the running component | Scheduler/Webserver should never use Internal API, so calling any method decorated with internal_api_call should still execute them locally | https://github.com/apache/airflow/issues/28267 | https://github.com/apache/airflow/pull/28783 | 50b30e5b92808e91ad9b6b05189f560d58dd8152 | 6046aef56b12331b2bb39221d1935b2932f44e93 | "2022-12-09T19:53:23Z" | python | "2023-02-15T01:37:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,266 | [".pre-commit-config.yaml", "airflow/cli/cli_parser.py", "airflow/cli/commands/internal_api_command.py", "airflow/www/extensions/init_views.py", "tests/cli/commands/test_internal_api_command.py"] | AIP-44 Implement standalone internal-api component | https://github.com/apache/airflow/pull/27892 added Internal API as part of Webserver.
We need to introduce `airlfow internal-api` CLI command that starts Internal API as a independent component. | https://github.com/apache/airflow/issues/28266 | https://github.com/apache/airflow/pull/28425 | 760c52949ac41ffa7a2357aa1af0cdca163ddac8 | 367e8f135c2354310b67b3469317f15cec68dafa | "2022-12-09T19:51:08Z" | python | "2023-01-20T18:19:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,242 | ["airflow/cli/commands/role_command.py", "airflow/www/extensions/init_appbuilder.py"] | Airflow CLI to list roles is slow | ### Apache Airflow version
2.5.0
### What happened
We're currently running a suboptimal setup where database connectivity is laggy, 125ms roundtrip.
This has interesting consequences. For example, `airflow roles list` is really slow. Turns out that it's doing a lot of individual queries.
### What you think should happen instead
Ideally, listing roles should be a single (perhaps complex) query.
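For illustration only, a sketch of how the listing could be collapsed into a single round trip with SQLAlchemy eager loading; the model and session wiring are assumptions based on Flask-AppBuilder's security models, not Airflow's actual CLI code:
```python
from sqlalchemy.orm import joinedload

def list_roles_in_one_query(session, role_model):
    # Fetch roles together with their permissions in one JOINed query instead
    # of issuing an extra SELECT per role over a high-latency connection.
    return (
        session.query(role_model)
        .options(joinedload(role_model.permissions))
        .order_by(role_model.name)
        .all()
    )
```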
### How to reproduce
We're using py-spy to sample program execution:
```bash
$ py-spy record -o spy.svg -i --rate 250 --nonblocking airflow roles list
```
Now, to see the bad behavior, the database should incur significant latency.
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28242 | https://github.com/apache/airflow/pull/28244 | 2f5c77b0baa0ab26d2c51fa010850653ded80a46 | e24733662e95ad082e786d4855066cd4d36015c9 | "2022-12-08T22:18:08Z" | python | "2022-12-09T12:47:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,227 | ["airflow/utils/sqlalchemy.py", "tests/utils/test_sqlalchemy.py"] | Scheduler error: 'V1PodSpec' object has no attribute '_ephemeral_containers' | ### Apache Airflow version
2.5.0
### What happened
After upgrading from 2.2.5 to 2.5.0 the scheduler fails with this error:
```
AttributeError: 'V1PodSpec' object has no attribute '_ephemeral_containers'
```
tried with no luck:
```
airflow dags reserialize
```
Full Traceback:
```verilog
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/__main__.py", line 39, in main
args.func(args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 52, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/cli.py", line 108, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 73, in scheduler
_run_scheduler_job(args=args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 43, in _run_scheduler_job
job.run()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 247, in run
self._execute()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 759, in _execute
self._run_scheduler_loop()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 889, in _run_scheduler_loop
num_finished_events = self._process_executor_events(session=session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 705, in _process_executor_events
self.executor.send_callback(request)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/executors/celery_kubernetes_executor.py", line 213, in send_callback
self.callback_sink.send(request)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/callbacks/database_callback_sink.py", line 34, in send
db_callback = DbCallbackRequest(callback=callback, priority_weight=10)
File "<string>", line 4, in __init__
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/state.py", line 480, in _initialize_instance
manager.dispatch.init_failure(self, args, kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/state.py", line 477, in _initialize_instance
return manager.original_init(*mixed[1:], **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/db_callback_request.py", line 46, in __init__
self.callback_data = callback.to_json()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/callbacks/callback_requests.py", line 91, in to_json
val = BaseSerialization.serialize(self.__dict__, strict=True)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 407, in serialize
{str(k): cls.serialize(v, strict=strict) for k, v in var.items()}, type_=DAT.DICT
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 407, in <dictcomp>
{str(k): cls.serialize(v, strict=strict) for k, v in var.items()}, type_=DAT.DICT
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 450, in serialize
return cls._encode(cls.serialize(var.__dict__, strict=strict), type_=DAT.SIMPLE_TASK_INSTANCE)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 407, in serialize
{str(k): cls.serialize(v, strict=strict) for k, v in var.items()}, type_=DAT.DICT
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 407, in <dictcomp>
{str(k): cls.serialize(v, strict=strict) for k, v in var.items()}, type_=DAT.DICT
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 407, in serialize
{str(k): cls.serialize(v, strict=strict) for k, v in var.items()}, type_=DAT.DICT
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 407, in <dictcomp>
{str(k): cls.serialize(v, strict=strict) for k, v in var.items()}, type_=DAT.DICT
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 412, in serialize
json_pod = PodGenerator.serialize_pod(var)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/kubernetes/pod_generator.py", line 411, in serialize_pod
return api_client.sanitize_for_serialization(pod)
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 241, in sanitize_for_serialization
return {key: self.sanitize_for_serialization(val)
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 241, in <dictcomp>
return {key: self.sanitize_for_serialization(val)
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 237, in sanitize_for_serialization
obj_dict = {obj.attribute_map[attr]: getattr(obj, attr)
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 239, in <dictcomp>
if getattr(obj, attr) is not None}
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod_spec.py", line 397, in ephemeral_containers
return self._ephemeral_containers
AttributeError: 'V1PodSpec' object has no attribute '_ephemeral_containers'
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Debian 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
AWS EKS
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28227 | https://github.com/apache/airflow/pull/28454 | dc06bb0e26a0af7f861187e84ce27dbe973b731c | 27f07b0bf5ed088c4186296668a36dc89da25617 | "2022-12-08T15:44:30Z" | python | "2022-12-26T07:56:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,167 | ["airflow/www/.babelrc", "airflow/www/babel.config.js", "airflow/www/jest.config.js", "airflow/www/package.json", "airflow/www/static/js/components/ReactMarkdown.tsx", "airflow/www/static/js/dag/details/NotesAccordion.tsx", "airflow/www/yarn.lock"] | Allow Markdown in Task comments | ### Description
Implement the support for Markdown in Task notes inside Airflow.
### Use case/motivation
It would be helpful to use markdown syntax in Task notes/comments for the following use cases:
- Formatting headers, lists, and tables to allow more complex note-taking.
- Parsing a URL to reference a ticket in an Issue ticketing system (Jira, Pagerduty, etc.)
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28167 | https://github.com/apache/airflow/pull/28245 | 78b72f4fa07cac009ddd6d43d54627381e3e9c21 | 74e82af7eefe1d0d5aa6ea1637d096e4728dea1f | "2022-12-06T16:57:16Z" | python | "2022-12-19T15:32:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,155 | ["airflow/www/views.py"] | Links to dag graph some times display incorrect dagrun | ### Apache Airflow version
2.5.0
### What happened
Open url `dags/gate/graph?dag_run_id=8256-8-1670328803&execution_date=2022-12-06T12%3A13%3A23.174592+00%3A00`
The graph is displaying a completely different dagrun.

If you are not careful to review all the content, you might continue looking at the wrong results, or worse, cancel a run with Mark Failed.
I got the link from one of our users, so I am not 100% sure it was the original URL; I believe there could be something wrong with the URL-encoding of the last `+` character. In any case, if there are any inconsistencies between the URL parameters and the found dag runs, it should not display another dag run, but rather redirect to the grid view or show an error message.
### What you think should happen instead
* dag_run_id should be only required parameter, or have precedence over execution_date
* Provided dag_run_id should always be the same run-id that is displayed in graph
* Inconsistencies in any parameters should display error or redirect to grid view.
### How to reproduce
_No response_
### Operating System
Ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28155 | https://github.com/apache/airflow/pull/29066 | 48cab7cfebf2c7510d9fdbffad5bd06d8f4751e2 | 9dedf81fa18e57755aa7d317f08f0ea8b6c7b287 | "2022-12-06T12:53:33Z" | python | "2023-01-21T03:13:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,146 | ["airflow/models/xcom.py", "tests/models/test_taskinstance.py"] | Dynamic task context fails to be pickled | ### Apache Airflow version
2.5.0
### What happened
After upgrading to 2.5.0, running a dynamic task test failed.
```py
from airflow.decorators import task, dag
import pendulum as pl
@dag(
    dag_id='test-dynamic-tasks',
    schedule=None,
    start_date=pl.today().add(days=-3),
    tags=['example'])
def test_dynamic_tasks():
    @task.virtualenv(requirements=[])
    def sum_it(values):
        print(values)

    @task.virtualenv(requirements=[])
    def add_one(value):
        return value + 1

    added_values = add_one.expand(value=[1, 2])
    sum_it(added_values)


dag = test_dynamic_tasks()
```
```log
*** Reading local file: /home/andi/airflow/logs/dag_id=test-dynamic-tasks/run_id=manual__2022-12-06T10:07:41.355423+00:00/task_id=sum_it/attempt=1.log
[2022-12-06, 18:07:53 CST] {taskinstance.py:1087} INFO - Dependencies all met for <TaskInstance: test-dynamic-tasks.sum_it manual__2022-12-06T10:07:41.355423+00:00 [queued]>
[2022-12-06, 18:07:53 CST] {taskinstance.py:1087} INFO - Dependencies all met for <TaskInstance: test-dynamic-tasks.sum_it manual__2022-12-06T10:07:41.355423+00:00 [queued]>
[2022-12-06, 18:07:53 CST] {taskinstance.py:1283} INFO -
--------------------------------------------------------------------------------
[2022-12-06, 18:07:53 CST] {taskinstance.py:1284} INFO - Starting attempt 1 of 1
[2022-12-06, 18:07:53 CST] {taskinstance.py:1285} INFO -
--------------------------------------------------------------------------------
[2022-12-06, 18:07:53 CST] {taskinstance.py:1304} INFO - Executing <Task(_PythonVirtualenvDecoratedOperator): sum_it> on 2022-12-06 10:07:41.355423+00:00
[2022-12-06, 18:07:53 CST] {standard_task_runner.py:55} INFO - Started process 25873 to run task
[2022-12-06, 18:07:53 CST] {standard_task_runner.py:82} INFO - Running: ['airflow', 'tasks', 'run', 'test-dynamic-tasks', 'sum_it', 'manual__2022-12-06T10:07:41.355423+00:00', '--job-id', '41164', '--raw', '--subdir', 'DAGS_FOLDER/andi/test-dynamic-task.py', '--cfg-path', '/tmp/tmphudvake2']
[2022-12-06, 18:07:53 CST] {standard_task_runner.py:83} INFO - Job 41164: Subtask sum_it
[2022-12-06, 18:07:53 CST] {task_command.py:389} INFO - Running <TaskInstance: test-dynamic-tasks.sum_it manual__2022-12-06T10:07:41.355423+00:00 [running]> on host sh-dataops-airflow.jinde.local
[2022-12-06, 18:07:53 CST] {taskinstance.py:1511} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_EMAIL=andi@google.com
AIRFLOW_CTX_DAG_OWNER=andi
AIRFLOW_CTX_DAG_ID=test-dynamic-tasks
AIRFLOW_CTX_TASK_ID=sum_it
AIRFLOW_CTX_EXECUTION_DATE=2022-12-06T10:07:41.355423+00:00
AIRFLOW_CTX_TRY_NUMBER=1
AIRFLOW_CTX_DAG_RUN_ID=manual__2022-12-06T10:07:41.355423+00:00
[2022-12-06, 18:07:53 CST] {process_utils.py:179} INFO - Executing cmd: /home/andi/airflow/venv38/bin/python -m virtualenv /tmp/venv7lc4m6na --system-site-packages
[2022-12-06, 18:07:53 CST] {process_utils.py:183} INFO - Output:
[2022-12-06, 18:07:54 CST] {process_utils.py:187} INFO - created virtual environment CPython3.8.0.final.0-64 in 220ms
[2022-12-06, 18:07:54 CST] {process_utils.py:187} INFO - creator CPython3Posix(dest=/tmp/venv7lc4m6na, clear=False, no_vcs_ignore=False, global=True)
[2022-12-06, 18:07:54 CST] {process_utils.py:187} INFO - seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/andi/.local/share/virtualenv)
[2022-12-06, 18:07:54 CST] {process_utils.py:187} INFO - added seed packages: pip==22.2.1, setuptools==63.2.0, wheel==0.37.1
[2022-12-06, 18:07:54 CST] {process_utils.py:187} INFO - activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator
[2022-12-06, 18:07:54 CST] {process_utils.py:179} INFO - Executing cmd: /tmp/venv7lc4m6na/bin/pip install -r /tmp/venv7lc4m6na/requirements.txt
[2022-12-06, 18:07:54 CST] {process_utils.py:183} INFO - Output:
[2022-12-06, 18:07:55 CST] {process_utils.py:187} INFO - Looking in indexes: http://pypi:8081
[2022-12-06, 18:08:00 CST] {process_utils.py:187} INFO -
[2022-12-06, 18:08:00 CST] {process_utils.py:187} INFO - [notice] A new release of pip available: 22.2.1 -> 22.3.1
[2022-12-06, 18:08:00 CST] {process_utils.py:187} INFO - [notice] To update, run: python -m pip install --upgrade pip
[2022-12-06, 18:08:00 CST] {taskinstance.py:1772} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/andi/airflow/venv38/lib/python3.8/site-packages/airflow/decorators/base.py", line 217, in execute
return_value = super().execute(context)
File "/home/andi/airflow/venv38/lib/python3.8/site-packages/airflow/operators/python.py", line 356, in execute
return super().execute(context=serializable_context)
File "/home/andi/airflow/venv38/lib/python3.8/site-packages/airflow/operators/python.py", line 175, in execute
return_value = self.execute_callable()
File "/home/andi/airflow/venv38/lib/python3.8/site-packages/airflow/operators/python.py", line 553, in execute_callable
return self._execute_python_callable_in_subprocess(python_path, tmp_path)
File "/home/andi/airflow/venv38/lib/python3.8/site-packages/airflow/operators/python.py", line 397, in _execute_python_callable_in_subprocess
self._write_args(input_path)
File "/home/andi/airflow/venv38/lib/python3.8/site-packages/airflow/operators/python.py", line 367, in _write_args
file.write_bytes(self.pickling_library.dumps({"args": self.op_args, "kwargs": self.op_kwargs}))
_pickle.PicklingError: Can't pickle <class 'sqlalchemy.orm.session.Session'>: it's not the same object as sqlalchemy.orm.session.Session
[2022-12-06, 18:08:00 CST] {taskinstance.py:1322} INFO - Marking task as FAILED. dag_id=test-dynamic-tasks, task_id=sum_it, execution_date=20221206T100741, start_date=20221206T100753, end_date=20221206T100800
[2022-12-06, 18:08:00 CST] {warnings.py:109} WARNING - /home/andi/airflow/venv38/lib/python3.8/site-packages/airflow/utils/email.py:120: RemovedInAirflow3Warning: Fetching SMTP credentials from configuration variables will be deprecated in a future release. Please set credentials using a connection instead.
send_mime_email(e_from=mail_from, e_to=recipients, mime_msg=msg, conn_id=conn_id, dryrun=dryrun)
[2022-12-06, 18:08:00 CST] {configuration.py:635} WARNING - section/key [smtp/smtp_user] not found in config
[2022-12-06, 18:08:00 CST] {email.py:229} INFO - Email alerting: attempt 1
[2022-12-06, 18:08:01 CST] {email.py:241} INFO - Sent an alert email to ['andi@google.com']
[2022-12-06, 18:08:01 CST] {standard_task_runner.py:100} ERROR - Failed to execute job 41164 for task sum_it (Can't pickle <class 'sqlalchemy.orm.session.Session'>: it's not the same object as sqlalchemy.orm.session.Session; 25873)
[2022-12-06, 18:08:01 CST] {local_task_job.py:159} INFO - Task exited with return code 1
[2022-12-06, 18:08:01 CST] {taskinstance.py:2582} INFO - 0 downstream tasks scheduled from follow-on schedule check
```
### What you think should happen instead
I expect this sample run to pass.
### How to reproduce
_No response_
### Operating System
centos 7.9 3.10.0-1160.el7.x86_64
### Versions of Apache Airflow Providers
```
airflow-code-editor==5.2.2
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-microsoft-mssql==3.1.0
apache-airflow-providers-microsoft-psrp==2.0.0
apache-airflow-providers-microsoft-winrm==3.0.0
apache-airflow-providers-mysql==3.0.0
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-samba==4.0.0
apache-airflow-providers-sftp==3.0.0
autopep8==1.6.0
brotlipy==0.7.0
chardet==3.0.4
pip-chill==1.0.1
pyopenssl==19.1.0
pysocks==1.7.1
python-ldap==3.4.2
requests-credssp==2.0.0
swagger-ui-bundle==0.0.9
tqdm==4.51.0
virtualenv==20.16.2
yapf==0.32.0
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28146 | https://github.com/apache/airflow/pull/28191 | 84a5faff0de2a56f898b8a02aca578b235cb12ba | e981dfab4e0f4faf1fb932ac6993c3ecbd5318b2 | "2022-12-06T10:40:01Z" | python | "2022-12-15T09:20:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,143 | ["airflow/www/static/js/api/useTaskLog.ts", "airflow/www/static/js/dag/details/taskInstance/Logs/LogBlock.tsx", "airflow/www/static/js/dag/details/taskInstance/Logs/index.tsx"] | Logs tab is automatically scrolling to the bottom while user is reading logs | ### Apache Airflow version
2.5.0
### What happened
Open the logs tab for a task that is currently running.
Scroll up to read things further up the log.
Every 30 seconds or so the log automatically scrolls down to the bottom again.
### What you think should happen instead
If the user has scrolled away from the bottom in the logs-panel, the live tailing of new logs should not scroll the view back to the bottom automatically.
### How to reproduce
_No response_
### Operating System
Ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28143 | https://github.com/apache/airflow/pull/28386 | 5b54e8d21b1801d5e0ccd103592057f0b5a980b1 | 5c80d985a3102a46f198aec1c57a255e00784c51 | "2022-12-06T07:35:40Z" | python | "2022-12-19T01:00:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,121 | ["airflow/providers/sftp/sensors/sftp.py", "tests/providers/sftp/sensors/test_sftp.py"] | SFTP Sensor fails to locate file | ### Apache Airflow version
2.5.0
### What happened
While creating an SFTP sensor I tried to find a file under a directory, but I kept getting a timeout / file-not-found error.
After debugging the code I found that there is an issue with the [poke function](https://airflow.apache.org/docs/apache-airflow-providers-sftp/stable/_modules/airflow/providers/sftp/sensors/sftp.html#SFTPSensor.poke).
After the matching file is found, the sensor tries to look up its last modified time with [self.hook.get_mod_time](https://airflow.apache.org/docs/apache-airflow-providers-sftp/stable/_modules/airflow/providers/sftp/hooks/sftp.html#SFTPHook.get_mod_time), which takes a full path (path + filename), but only the filename is passed as the argument.
### What you think should happen instead
I solved the issue by prepending the path to the filename before calling the [self.hook.get_mod_time](https://airflow.apache.org/docs/apache-airflow-providers-sftp/stable/_modules/airflow/providers/sftp/hooks/sftp.html#SFTPHook.get_mod_time) function.
Here is the modified code:
```
def poke(self, context: Context) -> bool:
    self.hook = SFTPHook(self.sftp_conn_id)
    self.log.info("Poking for %s, with pattern %s", self.path, self.file_pattern)

    if self.file_pattern:
        file_from_pattern = self.hook.get_file_by_pattern(self.path, self.file_pattern)
        if file_from_pattern:
            # actual_file_to_check = file_from_pattern
            actual_file_to_check = self.path + file_from_pattern
        else:
            return False
    else:
        actual_file_to_check = self.path

    try:
        mod_time = self.hook.get_mod_time(actual_file_to_check)
        self.log.info("Found File %s last modified: %s", str(actual_file_to_check), str(mod_time))
    except OSError as e:
        if e.errno != SFTP_NO_SUCH_FILE:
            raise e
        return False
    self.hook.close_conn()

    if self.newer_than:
        _mod_time = convert_to_utc(datetime.strptime(mod_time, "%Y%m%d%H%M%S"))
        _newer_than = convert_to_utc(self.newer_than)
        return _newer_than <= _mod_time
    else:
        return True
```
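A small variation on the same fix (my suggestion, not part of the provider) that does not depend on `path` ending with a slash:
```python
import os

# Joining avoids "Weekly/11report.pdf" when the configured path has no trailing slash.
actual_file_to_check = os.path.join(self.path, file_from_pattern)
```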
### How to reproduce
You can reproduce the same issue by creating a DAG like the one below:
```
with DAG(
    dag_id='sftp_sensor_dag',
    max_active_runs=1,
    default_args=default_args,
) as dag:
    file_sensing_task = SFTPSensor(
        task_id='sensor_for_file',
        path="Weekly/11/",
        file_pattern="*pdf*",
        sftp_conn_id='sftp_hook_conn',
        poke_interval=30
    )
```
### Operating System
Microsoft Windows [Version 10.0.19044.2251]
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28121 | https://github.com/apache/airflow/pull/29467 | 72c3817a44eea5005761ae3b621e8c39fde136ad | 8e24387d6db177c662342245bb183bfd73fb9ee8 | "2022-12-05T15:15:46Z" | python | "2023-02-13T23:12:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,071 | ["airflow/executors/kubernetes_executor.py", "tests/executors/test_kubernetes_executor.py"] | Kubernetes logging errors - attempting to adopt taskinstance which was not specified by database | ### Apache Airflow version
2.4.3
### What happened
Using following config
```
executor = CeleryKubernetesExecutor
delete_worker_pods = False
```
1. Start a few dags running in kubernetes, wait for them to complete.
2. Restart Scheduler.
3. Logs are flooded with hundreds of errors like` ERROR - attempting to adopt taskinstance which was not specified by database: TaskInstanceKey(dag_id='xxx', task_id='yyy', run_id='zzz', try_number=1, map_index=-1)`
This is problematic because:
* Our installation has thousands of dags and pods, so this becomes very noisy, and the adoption process adds excessive startup time to the scheduler, sometimes up to a minute.
* It hides actual errors with resetting orphaned tasks, something that also happens for inexplicable reasons on scheduler restart with the following log: `Reset the following 6 orphaned TaskInstances`, making them much harder to debug. Their cause cannot be easily correlated with the task instances that were not specified by the database.
The cause of these logs is that the Kubernetes executor on startup loads all pods (`try_adopt_task_instances`) and then cross-references them with all RUNNING TaskInstances loaded via `scheduler_job.adopt_or_reset_orphaned_tasks`.
For all pods where a running TI cannot be found, it logs the error above - but for TIs that were already completed this is not an error, and the pods should not have to be loaded at all.
I have an idea of adding some code in the kubernetes_executor that patches in something like a `completion-acknowledged` label whenever a pod completes (unless `delete_worker_pods` is set). Then on startup, all pods having this label can be excluded. Is this a good idea, or do you see other potential solutions?
Another potential solution is, inside `try_adopt_task_instances`, to fetch only the exact pod id specified in each task instance, instead of listing all pods and cross-referencing them later.
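To make the label idea concrete, a rough sketch with the Kubernetes Python client; the label name and selector are assumptions tied to the proposal above, not existing executor behavior:
```python
from kubernetes import client

DONE_LABEL = "completion-acknowledged"  # hypothetical label from the proposal above

def acknowledge_finished_pod(v1: client.CoreV1Api, name: str, namespace: str) -> None:
    # Tag the finished pod so later scheduler startups can skip it during adoption.
    v1.patch_namespaced_pod(
        name=name,
        namespace=namespace,
        body={"metadata": {"labels": {DONE_LABEL: "true"}}},
    )

def pods_needing_adoption(v1: client.CoreV1Api, namespace: str):
    # Only list pods that were never acknowledged as complete.
    return v1.list_namespaced_pod(
        namespace=namespace,
        label_selector=f"{DONE_LABEL}!=true",
    ).items
```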
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28071 | https://github.com/apache/airflow/pull/28899 | f2bedcbd6722cd43772007eecf7f55333009dc1d | f64ac5978fb3dfa9e40a0e5190ef88e9f9615824 | "2022-12-02T17:46:41Z" | python | "2023-01-18T20:05:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,070 | ["airflow/www/static/js/dag/InstanceTooltip.test.tsx", "airflow/www/static/js/dag/InstanceTooltip.tsx", "airflow/www/static/js/dag/details/dagRun/index.tsx", "airflow/www/static/js/dag/details/taskInstance/Details.tsx", "airflow/www/yarn.lock"] | task duration in grid view is different when viewed at different times. | ### Apache Airflow version
2.4.3
### What happened
I wrote this dag to test the celery executor's ability to tolerate OOMkills:
```python3
import numpy as np
from airflow import DAG
from airflow.decorators import task
from datetime import datetime, timedelta
from airflow.models.variable import Variable
import subprocess
import random


def boom():
    np.ones((1_000_000_000_000))


def maybe_boom(boom_hostname, boom_count, boom_modulus):
    """
    call boom(), but only under certain conditions
    """
    try:
        proc = subprocess.Popen("hostname", shell=True, stdout=subprocess.PIPE)
        hostname = proc.stdout.readline().decode().strip()

        # keep track of which hosts parsed the dag
        parsed = Variable.get("parsed", {}, deserialize_json=True)
        parsed.setdefault(hostname, 0)
        parsed[hostname] = parsed[hostname] + 1
        Variable.set("parsed", parsed, serialize_json=True)

        # only blow up when the caller's condition is met
        print(parsed)
        try:
            count = parsed[boom_hostname]
            if hostname == boom_hostname and count % boom_modulus == boom_count:
                print("boom")
                boom()
        except (KeyError, TypeError):
            pass
        print("no boom")
    except:
        # key errors show up because of so much traffic on the variable
        # don't hold up parsing in those cases
        pass


@task
def do_stuff():
    # tasks randomly OOMkill also
    if random.randint(1, 256) == 13:
        boom()


run_size = 100

with DAG(
    dag_id="oom_on_parse",
    schedule=timedelta(seconds=30),
    start_date=datetime(1970, 1, 1),
    catchup=False,
):
    # OOM part-way through the second run
    # and every 3th run after that
    maybe_boom(
        boom_hostname="airflow-worker-0",
        boom_count=run_size + 50,
        boom_modulus=run_size * 3,
    )
    [do_stuff() for _ in range(run_size)]
```
I'm not surprised that tasks are failing. The dag occasionally tries to allocate 1Tb of memory. That's a good reason to fail. What surprises me is that occasionally, the run durations are reported as 23:59:30 when I've only been running the test for 5 minutes. Also, this number changes if I view it later, behold:

23:55:09 -> 23:55:03 -> 23:55:09, they're decreasing.
### What you think should happen instead
The duration should never be longer than I've had the deployment up, and whatever is reported, it should not change when viewed later on.
### How to reproduce
Using the celery executor, unpause the dag above. Wait for failures to show up. View their duration in the grid view.
This gist includes a script which shows all of the parameters I'm using (e.g. to helm and such): https://gist.github.com/MatrixManAtYrService/6e90a3b8c7c65b8d8b1deaccc8b6f042
### Operating System
k8s / helm / docker / macos
### Versions of Apache Airflow Providers
n/a
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
See script in this gist? https://gist.github.com/MatrixManAtYrService/6e90a3b8c7c65b8d8b1deaccc8b6f042
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28070 | https://github.com/apache/airflow/pull/28395 | 4d0fa01f72ac4a947db2352e18f4721c2e2ec7a3 | 11f30a887c77f9636e88e31dffd969056132ae8c | "2022-12-02T17:10:57Z" | python | "2022-12-16T18:04:36Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,065 | ["airflow/www/views.py", "tests/www/views/test_views_dagrun.py"] | Queue up new tasks always returns an empty list | ### Apache Airflow version
main (development)
### What happened
Currently, when a new task is added to a dag and, in the grid view, a user selects the top level of a dag run and then clicks on "Queue up new tasks", the list returned by the confirmation box is always empty.
It appears that where the list of tasks is expected to be set, [here](https://github.com/apache/airflow/blob/ada91b686508218752fee176d29d63334364a7f2/airflow/api/common/mark_tasks.py#L516), `res` will always be an empty list.
### What you think should happen instead
The UI should return a list of tasks that will be queued up once the confirmation button is pressed.
### How to reproduce
Create a dag, trigger the dag, allow it to complete.
Add a new task to the dag, click on "Queue up new tasks", the list will be empty.
### Operating System
n/a
### Versions of Apache Airflow Providers
2.3.3 and upwards including main. I've not looked at earlier releases.
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
I have a PR prepared for this issue.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28065 | https://github.com/apache/airflow/pull/28066 | e29d33b89f7deea6eafb03006c37b60692781e61 | af29ff0a8aa133f0476bf6662e6c06c67de21dd5 | "2022-12-02T11:45:05Z" | python | "2022-12-05T18:51:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,000 | ["airflow/providers/amazon/aws/hooks/redshift_sql.py", "docs/apache-airflow-providers-amazon/connections/redshift.rst", "tests/providers/amazon/aws/hooks/test_redshift_sql.py"] | Add IAM authentication to Amazon Redshift Connection by AWS Connection | ### Description
Allow authenticating to Redshift Cluster in `airflow.providers.amazon.aws.hooks.redshift_sql.RedshiftSQLHook` with temporary IAM Credentials.
This might be implemented the same way it is already implemented in the PostgreSQL Hook - manually obtaining credentials by calling [GetClusterCredentials](https://docs.aws.amazon.com/redshift/latest/APIReference/API_GetClusterCredentials.html) through the Redshift API:
https://github.com/apache/airflow/blob/56b5f3f4eed6a48180e9d15ba9bb9664656077b1/airflow/providers/postgres/hooks/postgres.py#L221-L235
Or by passing the obtained temporary credentials into [redshift-connector](https://github.com/aws/amazon-redshift-python-driver#example-using-iam-credentials).
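For illustration, a rough sketch of the manual approach combining boto3's `GetClusterCredentials` with the `redshift_connector` driver; the endpoint, region and identifiers are placeholders, and none of this is the provider's API:
```python
import boto3
import redshift_connector

def connect_with_temporary_credentials(cluster_identifier: str, database: str, db_user: str):
    redshift_api = boto3.client("redshift", region_name="us-east-1")  # region is a placeholder
    creds = redshift_api.get_cluster_credentials(
        DbUser=db_user,
        DbName=database,
        ClusterIdentifier=cluster_identifier,
        AutoCreate=False,
    )
    # creds["DbUser"] comes back prefixed with "IAM:" and the password is short-lived.
    return redshift_connector.connect(
        host="my-cluster.abc123xyz789.us-east-1.redshift.amazonaws.com",  # placeholder endpoint
        database=database,
        user=creds["DbUser"],
        password=creds["DbPassword"],
    )
```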
### Use case/motivation
This allows users to connect to a Redshift Cluster by re-using an already existing [Amazon Web Services Connection](https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/connections/aws.html).
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28000 | https://github.com/apache/airflow/pull/28187 | b7e5b47e2794fa0eb9ac2b22f2150d2fdd9ef2b1 | 2f247a2ba2fb7c9f1fe71567a80f0063e21a5f55 | "2022-11-30T05:09:08Z" | python | "2023-05-02T13:58:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,978 | ["airflow/providers/snowflake/CHANGELOG.rst", "airflow/providers/snowflake/hooks/snowflake.py", "airflow/providers/snowflake/operators/snowflake.py", "tests/providers/snowflake/hooks/test_sql.py", "tests/providers/snowflake/operators/test_snowflake_sql.py"] | KeyError: 0 error with common-sql version 1.3.0 | ### Apache Airflow Provider(s)
common-sql
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==6.0.0
apache-airflow-providers-apache-hive==4.0.1
apache-airflow-providers-apache-livy==3.1.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.4.0
apache-airflow-providers-common-sql==1.3.0
apache-airflow-providers-databricks==3.3.0
apache-airflow-providers-dbt-cloud==2.2.0
apache-airflow-providers-elasticsearch==4.2.1
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-google==8.4.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-microsoft-azure==4.3.0
apache-airflow-providers-postgres==5.2.2
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-sftp==4.1.0
apache-airflow-providers-snowflake==3.3.0
apache-airflow-providers-sqlite==3.2.1
apache-airflow-providers-ssh==3.2.0
```
### Apache Airflow version
2.4.3
### Operating System
Debian Bullseye
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
With the latest version of the common-sql provider, the `get_records` result from the hook is now an ordinary dictionary, causing this KeyError with SqlSensor:
```
[2022-11-29, 00:39:18 UTC] {taskinstance.py:1851} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/sensors/base.py", line 189, in execute
poke_return = self.poke(context)
File "/usr/local/lib/python3.9/site-packages/airflow/providers/common/sql/sensors/sql.py", line 98, in poke
first_cell = records[0][0]
KeyError: 0
```
I have only tested with Snowflake, I haven't tested it with other databases. Reverting back to 1.2.0 solves the issue.
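For clarity, the shapes below illustrate why `records[0][0]` breaks; they are assumed shapes for this Snowflake setup, not verified hook output:
```python
# Old behavior: rows as tuples, so positional indexing works.
records_old = [(0,)]
assert records_old[0][0] == 0

# New behavior with this setup: rows as dicts keyed by column name,
# so indexing the row positionally fails.
records_new = [{"0": 0}]
try:
    records_new[0][0]
except KeyError as exc:
    print(f"KeyError: {exc}")  # prints: KeyError: 0
```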
### What you think should happen instead
It should return an iterable list as usual with the query.
### How to reproduce
```
from datetime import datetime
from airflow import DAG
from airflow.providers.common.sql.sensors.sql import SqlSensor
with DAG(
    dag_id="sql_provider_snowflake_test",
    schedule=None,
    start_date=datetime(2022, 1, 1),
    catchup=False,
):
    t1 = SqlSensor(
        task_id="snowflake_test",
        conn_id="snowflake",
        sql="select 0",
        fail_on_empty=False,
        poke_interval=20,
        mode="poke",
        timeout=60 * 5,
    )
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27978 | https://github.com/apache/airflow/pull/28006 | 6c62985055e7f9a715c3ae47f6ff584ad8378e2a | d9cefcd0c50a1cce1c3c8e9ecb99cfacde5eafbf | "2022-11-29T00:52:53Z" | python | "2022-12-01T13:53:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,976 | ["airflow/providers/snowflake/CHANGELOG.rst", "airflow/providers/snowflake/hooks/snowflake.py", "airflow/providers/snowflake/operators/snowflake.py", "tests/providers/snowflake/hooks/test_sql.py", "tests/providers/snowflake/operators/test_snowflake_sql.py"] | `SQLColumnCheckOperator` failures after upgrading to `common-sql==1.3.0` | ### Apache Airflow Provider(s)
common-sql
### Versions of Apache Airflow Providers
apache-airflow-providers-google==8.2.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-salesforce==5.0.0
apache-airflow-providers-slack==5.1.0
apache-airflow-providers-snowflake==3.2.0
Issue:
apache-airflow-providers-common-sql==1.3.0
### Apache Airflow version
2.4.3
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
Problem occurred when upgrading from common-sql=1.2.0 to common-sql=1.3.0
Getting a `KEY_ERROR` when running a unique_check and null_check on a column.
1.3.0 log:
<img width="1609" alt="Screen Shot 2022-11-28 at 2 01 20 PM" src="https://user-images.githubusercontent.com/15257610/204390144-97ae35b7-1a2c-4ee1-9c12-4f3940047cde.png">
1.2.0 log:
<img width="1501" alt="Screen Shot 2022-11-28 at 2 00 15 PM" src="https://user-images.githubusercontent.com/15257610/204389994-7e8eae17-a346-41ac-84c4-9de4be71af20.png">
### What you think should happen instead
Potential causes:
- seems to be indexing based on the test query column `COL_NAME` instead of the table column `STRIPE_ID`
- the `record` from the test changed type, going from a tuple to a list of dictionaries.
- no `tolerance` is specified for these tests, so `.get('tolerance')` looks like it will cause an error without a default specified like `.get('tolerance', None)`
Expected behavior:
- these tests continue to pass with the upgrade
- `tolerance` is not a required key.
### How to reproduce
```
from datetime import datetime
from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator
from airflow.providers.common.sql.operators.sql import SQLColumnCheckOperator
my_conn_id = "snowflake_default"
default_args={"conn_id": my_conn_id}
with DAG(
    dag_id="airflow_providers_example",
    schedule=None,
    start_date=datetime(2022, 11, 27),
    default_args=default_args,
) as dag:
    create_table = SnowflakeOperator(
        task_id="create_table",
        sql=""" CREATE OR REPLACE TABLE testing AS (
            SELECT
                1 AS row_num,
                'not null' AS field
            UNION ALL
            SELECT
                2 AS row_num,
                'test' AS field
            UNION ALL
            SELECT
                3 AS row_num,
                'test 2' AS field
        )""",
    )

    column_checks = SQLColumnCheckOperator(
        task_id="column_checks",
        table="testing",
        column_mapping={
            "field": {"unique_check": {"equal_to": 0}, "null_check": {"equal_to": 0}}
        },
    )

    create_table >> column_checks
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27976 | https://github.com/apache/airflow/pull/28006 | 6c62985055e7f9a715c3ae47f6ff584ad8378e2a | d9cefcd0c50a1cce1c3c8e9ecb99cfacde5eafbf | "2022-11-28T23:03:13Z" | python | "2022-12-01T13:53:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,936 | ["airflow/www/static/js/components/Table/Cells.tsx"] | Datasets triggered run modal is not scrollable | ### Apache Airflow version
main (development)
### What happened
The datasets modal that displays triggered runs is not scrollable even when there are records.

### What you think should happen instead
It should be scrollable if there are records to display
### How to reproduce
1. trigger a datasets dag with multiple triggered runs
2. click on datasets
3. click on uri which have multiple triggered runs
DAG-
```
from airflow import Dataset, DAG
from airflow.operators.python import PythonOperator
from datetime import datetime
fan_out = Dataset("fan_out")
fan_in = Dataset("fan_in")
# the leader
with DAG(
    dag_id="momma_duck", start_date=datetime(1970, 1, 1), schedule_interval=None
) as leader:
    PythonOperator(
        task_id="has_outlet", python_callable=lambda: None, outlets=[fan_out]
    )

# the many
for i in range(1, 40):
    with DAG(
        dag_id=f"duckling_{i}", start_date=datetime(1970, 1, 1), schedule=[fan_out]
    ) as duck:
        PythonOperator(
            task_id="has_outlet", python_callable=lambda: None, outlets=[fan_in]
        )
    globals()[f"duck_{i}"] = duck

# the straggler
with DAG(
    dag_id="straggler_duck", start_date=datetime(1970, 1, 1), schedule=[fan_in]
) as straggler:
    PythonOperator(task_id="has_outlet", python_callable=lambda: None)
```
### Operating System
mac os
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27936 | https://github.com/apache/airflow/pull/27965 | a158fbb6bde07cd20003680a4cf5e7811b9eda98 | 5e4f4a3556db5111c2ae36af1716719a8494efc7 | "2022-11-26T07:18:43Z" | python | "2022-11-29T01:16:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,932 | ["airflow/executors/base_executor.py", "airflow/providers/celery/executors/celery_executor.py", "airflow/providers/cncf/kubernetes/executors/kubernetes_executor.py", "docs/apache-airflow-providers-celery/cli-ref.rst", "docs/apache-airflow-providers-celery/index.rst", "docs/apache-airflow-providers-cncf-kubernetes/cli-ref.rst", "docs/apache-airflow-providers-cncf-kubernetes/index.rst"] | AIP-51 - Executor Specific CLI Commands | ### Overview
Some Executors have their own first class CLI commands (now that’s hardcoding/coupling!) which setup or modify various components related to that Executor.
### Examples
- **5a**) Celery Executor commands: https://github.com/apache/airflow/blob/27e2101f6ee5567b2843cbccf1dca0b0e7c96186/airflow/cli/cli_parser.py#L1689-L1734
- **5b**) Kubernetes Executor commands: https://github.com/apache/airflow/blob/27e2101f6ee5567b2843cbccf1dca0b0e7c96186/airflow/cli/cli_parser.py#L1754-L1771
- **5c**) Default CLI parser has hardcoded logic for Celery and Kubernetes Executors specifically: https://github.com/apache/airflow/blob/27e2101f6ee5567b2843cbccf1dca0b0e7c96186/airflow/cli/cli_parser.py#L63-L99
### Proposal
Update the BaseExecutor interface with a pluggable mechanism to vend CLI `GroupCommands` and parsers. Executor subclasses would then implement these methods, if applicable, which would then be called to fetch commands and parsers from within Airflow Core cli parser code. We would then migrate the existing Executor CLI code from cli_parser to the respective Executor class.
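A companion sketch of the executor-side half of the proposal; the method name matches the pseudo-code below, and `GroupCommand`/`CELERY_COMMANDS` follow the same conventions - none of this is existing Airflow API:
```python
# base_executor.py (proposal sketch)
class BaseExecutor:
    @classmethod
    def get_cli_group_commands(cls) -> list:
        # Vend executor-specific CLI GroupCommands; executors without a CLI return nothing.
        return []


# celery_executor.py (proposal sketch)
class CeleryExecutor(BaseExecutor):
    @classmethod
    def get_cli_group_commands(cls) -> list:
        return [
            GroupCommand(
                name="celery",
                help="Celery components",
                subcommands=CELERY_COMMANDS,  # the commands currently hardcoded in cli_parser.py
            )
        ]
```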
Pseudo-code example for vending `GroupCommand`s:
```python
# Existing code in cli_parser.py
...
airflow_commands: List[CLICommand] = [
    GroupCommand(
        name='dags',
        help='Manage DAGs',
        subcommands=DAGS_COMMANDS,
    ),
    ...
]

# New code to add groups vended by executor classes
executor_cls, _ = ExecutorLoader.import_executor_cls(conf.get('core', 'EXECUTOR'))
airflow_commands.append(executor_cls.get_cli_group_commands())
...
``` | https://github.com/apache/airflow/issues/27932 | https://github.com/apache/airflow/pull/33081 | bbc096890512ba2212f318558ca1e954ab399657 | 879fd34e97a5343e6d2bbf3d5373831b9641b5ad | "2022-11-25T23:28:44Z" | python | "2023-08-04T17:26:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,909 | ["airflow/providers/google/cloud/transfers/bigquery_to_gcs.py"] | Add export_format to template_fields of BigQueryToGCSOperator | ### Description
There might be a use case where the export_format needs to be based on dynamic values, so adding export_format to template_fields will help developers in the future.
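For example (a sketch; operator arguments other than `export_format` are illustrative placeholders), the format could then be driven by the run configuration:
```python
from airflow.providers.google.cloud.transfers.bigquery_to_gcs import BigQueryToGCSOperator

export = BigQueryToGCSOperator(
    task_id="bq_to_gcs",
    source_project_dataset_table="my-project.my_dataset.my_table",
    destination_cloud_storage_uris=["gs://my-bucket/export/part-*.csv"],
    # Only rendered if export_format is part of template_fields
    export_format="{{ dag_run.conf.get('export_format', 'CSV') }}",
)
```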
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27909 | https://github.com/apache/airflow/pull/27910 | 3fef6a47834b89b99523db6d97d6aa530657a008 | f0820e8d9e8a36325987278bcda2bd69bd53f3a5 | "2022-11-25T10:10:10Z" | python | "2022-11-25T20:26:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,907 | ["airflow/www/decorators.py"] | Password is not masked in audit logs for connections/variables | ### Apache Airflow version
main (development)
### What happened
Passwords for connections, and the values of variables with "secret" in the name, are not masked in audit logs.
<img width="1337" alt="Screenshot 2022-11-25 at 12 58 59 PM" src="https://user-images.githubusercontent.com/88504849/203932123-c47fd66f-8e63-4bc6-9bf1-b9395cb26675.png">
<img width="1352" alt="Screenshot 2022-11-25 at 12 56 32 PM" src="https://user-images.githubusercontent.com/88504849/203932220-3f02984c-94b5-4773-8767-6f19cb0ceff0.png">
<img width="1328" alt="Screenshot 2022-11-25 at 1 43 40 PM" src="https://user-images.githubusercontent.com/88504849/203933183-e97b2358-9414-45c8-ab8f-d2f913117301.png">
### What you think should happen instead
Password/value should be masked
### How to reproduce
1. Create a connection or variable(with secret in the name i.e. test_secret)
2. Open audit logs
3. Observe the password
### Operating System
mac os
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27907 | https://github.com/apache/airflow/pull/27923 | 5e45cb019995e8b80104b33da1c93eefae12d161 | 1e73b1cea2d507d6d09f5eac6a16b649f8b52522 | "2022-11-25T08:14:51Z" | python | "2022-11-25T21:23:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,842 | ["airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"] | GCSToBigQueryOperator no longer uses field_delimiter or time_partitioning | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
google=8.5.0
### Apache Airflow version
2.4.3
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### What happened
The newest version of the Google provider no longer passes the `field_delimiter` or `time_partitioning` fields to the BigQuery job configuration for GCS-to-BigQuery transfers. Looking at the code, it seems this behavior was removed during the change to add deferrable operation support.
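For reference, a minimal sketch of an operator call whose `field_delimiter`/`time_partitioning` settings appear to be ignored on the affected version (argument values are illustrative):
```python
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

load = GCSToBigQueryOperator(
    task_id="gcs_to_bq",
    bucket="my-bucket",
    source_objects=["data/part-*.csv"],
    destination_project_dataset_table="my-project.my_dataset.my_table",
    field_delimiter="|",  # no longer passed to the BigQuery job config
    time_partitioning={"type": "DAY", "field": "event_date"},  # no longer passed either
)
```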
### What you think should happen instead
These fields should continue to be provided
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27842 | https://github.com/apache/airflow/pull/27961 | 5cdff505574822ad3d2a226056246500e4adea2f | 2d663df0552542efcef6e59bc2bc1586f8d1c7f3 | "2022-11-22T17:31:55Z" | python | "2022-12-04T19:02:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,837 | ["airflow/providers/databricks/operators/databricks.py", "tests/providers/databricks/operators/test_databricks.py"] | Databricks - Run job by job name not working with DatabricksRunNowDeferrableOperator | ### Apache Airflow Provider(s)
databricks
### Versions of Apache Airflow Providers
apache-airflow-providers-databricks==3.3.0
### Apache Airflow version
2.4.2
### Operating System
Mac OS 13.0
### Deployment
Virtualenv installation
### Deployment details
Virtualenv deployment with Python 3.10
### What happened
Submitting a Databricks job run by name (`job_name`) with the deferrable version (`DatabricksRunNowDeferrableOperator`) does not actually fill in the `job_id`, and the Databricks API responds with an HTTP 400 Bad Request when attempting to run the job (POST `https://<databricks-instance>/api/2.1/jobs/run-now`) without an ID specified.
Sample errors from the Airflow logs:
```
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://[subdomain].azuredatabricks.net/api/2.1/jobs/run-now
During handling of the above exception, another exception occurred:
[...truncated message...]
airflow.exceptions.AirflowException: Response: b'{"error_code":"INVALID_PARAMETER_VALUE","message":"Job 0 does not exist."}', Status Code: 400
```
### What you think should happen instead
The deferrable version (`DatabricksRunNowDeferrableOperator`) should maintain the behavior of the parent class (`DatabricksRunNowOperator`) and use the `job_name` to find the `job_id`.
The following logic is missing in the deferrable version:
```
# Sample from the DatabricksRunNowOperator#execute
hook = self._hook
if "job_name" in self.json:
job_id = hook.find_job_id_by_name(self.json["job_name"])
if job_id is None:
raise AirflowException(f"Job ID for job name {self.json['job_name']} can not be found")
self.json["job_id"] = job_id
del self.json["job_name"]
```
### How to reproduce
To reproduce, use a deferrable run now operator with the job name as an argument in an airflow task:
```
from airflow.providers.databricks.operators.databricks import DatabricksRunNowDeferrableOperator
DatabricksRunNowDeferrableOperator(
job_name='some-name',
# Other args
)
```
### Anything else
The problem occurs at every call.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27837 | https://github.com/apache/airflow/pull/32806 | c4b6f06f6e2897b3f1ee06440fc66f191acee9a8 | 58e21c66fdcc8a416a697b4efa852473ad8bd6fc | "2022-11-22T13:54:22Z" | python | "2023-07-25T03:21:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,824 | ["airflow/models/dagrun.py", "tests/models/test_dagrun.py"] | DAG Run fails when chaining multiple empty mapped tasks | ### Apache Airflow version
2.4.3
### What happened
A significant fraction of the DAG runs of a DAG that has 2+ consecutive mapped tasks being passed an empty list are marked as failed even though all tasks either succeed or are skipped. This was supposedly fixed with issue #25200, but the problem still persists.

### What you think should happen instead
The DAG Run should be marked success.
### How to reproduce
The real world version of this DAG has several mapped tasks that all point to the same list, and that list is frequently empty. I have made a minimal reproducible example.
```
from datetime import datetime
from airflow import DAG
from airflow.decorators import task
with DAG(dag_id="break_mapping", start_date=datetime(2022, 3, 4)) as dag:
@task
def add_one(x: int):
return x + 1
@task
def say_hi():
print("Hi")
@task
def say_bye():
print("Bye")
added_values = add_one.expand(x=[])
added_more_values = add_one.expand(x=[])
added_more_more_values = add_one.expand(x=[])
say_hi() >> say_bye() >> added_values
added_values >> added_more_values >> added_more_more_values
```
### Operating System
Debian Bullseye
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27824 | https://github.com/apache/airflow/pull/27964 | b60006ae26c41e887ec0102bce8b726fce54007d | f89ca94c3e60bfae888dfac60c7472d207f60f22 | "2022-11-22T01:31:41Z" | python | "2022-11-29T07:34:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,715 | [".pre-commit-config.yaml", "STATIC_CODE_CHECKS.rst", "dev/breeze/src/airflow_breeze/pre_commit_ids.py", "images/breeze/output-commands-hash.txt", "images/breeze/output_static-checks.svg"] | Add pre-commit rule to validate using `urlsplit` rather than `urlparse` | ### Body
Originally suggested in https://github.com/apache/airflow/pull/27389#issuecomment-1297252026
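For context, a sketch of the usage such a rule would enforce (plain standard-library code, not Airflow's):
```python
from urllib.parse import urlsplit  # preferred over urlparse

# urlsplit() returns a 5-tuple without the rarely-needed ";parameters"
# component that urlparse() also extracts, and is slightly cheaper.
parts = urlsplit("https://example.com/some/path?x=1")
print(parts.scheme, parts.netloc, parts.path, parts.query)
```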
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/27715 | https://github.com/apache/airflow/pull/27841 | cd01650192b74573b49a20803e4437e611a4cf33 | a99254ffd36f9de06feda6fe45773495632e3255 | "2022-11-16T14:49:46Z" | python | "2023-02-20T01:06:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,714 | ["airflow/www/static/js/trigger.js", "airflow/www/templates/airflow/trigger.html", "airflow/www/utils.py", "airflow/www/views.py"] | Re-use recent DagRun JSON-configurations | ### Description
Allow users to re-use recent DagRun configurations upon running a DAG.
This can be achieved by adding a dropdown that contains some information about recent configurations. When user selects an item, the relevant JSON configuration can be pasted to the "Configuration JSON" textbox.
<img width="692" alt="Screen Shot 2022-11-16 at 16 22 30" src="https://user-images.githubusercontent.com/39705397/202209536-c709ec75-c768-48ab-97d4-82b02af60569.png">
<img width="627" alt="Screen Shot 2022-11-16 at 16 22 38" src="https://user-images.githubusercontent.com/39705397/202209553-08828521-dba2-4e83-8e2a-6dec850086de.png">
<img width="612" alt="Screen Shot 2022-11-16 at 16 38 40" src="https://user-images.githubusercontent.com/39705397/202209755-0946521a-e1a5-44cb-ae74-d43ca3735f31.png">
### Use case/motivation
Commonly, DAGs are triggered using repetitive configurations. Sometimes the same configuration is used for triggering a DAG, and sometimes, the configuration differs by just a few parameters.
This interaction forces a user to store the templates he uses somewhere on his machine or to start searching for the configuration he needs in `dagrun/list/`, which does take extra time.
It will be handy to offer a user an option to select one of the recent configurations upon running a DAG.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27714 | https://github.com/apache/airflow/pull/27805 | 7f0332de2d1e57cde2e031f4bb7b4e6844c4b7c1 | e2455d870056391eed13e32e2d0ed571cc7089b4 | "2022-11-16T14:39:23Z" | python | "2022-12-01T22:03:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,698 | ["airflow/kubernetes/pod_template_file_examples/git_sync_template.yaml", "chart/values.schema.json", "chart/values.yaml", "newsfragments/27698.significant.rst"] | Update git-sync with newer version | ### Official Helm Chart version
1.7.0 (latest released)
### What happened
The git-sync image currently in use is coming up on one year old. It also uses the deprecated `--wait` arg.
### What you think should happen instead
In order to stay current, we should update git-sync from 3.4.0 to 3.6.1.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27698 | https://github.com/apache/airflow/pull/27848 | af9143eacdff62738f6064ae7556dd8f4ca8d96d | 98221da0d96b102b009d422870faf7c5d3d931f4 | "2022-11-15T23:01:42Z" | python | "2023-01-21T18:00:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,695 | ["airflow/providers/apache/hive/hooks/hive.py", "tests/providers/apache/hive/hooks/test_hive.py"] | Improve filtering for invalid schemas in Hive hook | ### Description
#27647 has introduced filtering for invalid schemas in Hive hook based on the characters `;` and `!`. I'm wondering if a more generic filtering could be introduced, e.g. one that adheres to the regex `[^a-z0-9_]`, since Hive schemas (and table names) can only contain alphanumeric characters and the character `_`.
Note: since the Hive metastore [stores schemas and tables in lowercase](https://stackoverflow.com/questions/57181316/how-to-keep-column-names-in-camel-case-in-hive/57183048#57183048), checking against `[^a-z0-9_]` is probably better than `[^a-zA-Z0-9_]`.
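A minimal sketch of the kind of check being proposed (the function and constant names are made up for illustration, not the hook's actual API):
```python
import re

_INVALID_SCHEMA_CHARS = re.compile(r"[^a-z0-9_]")


def validate_hive_schema(schema: str) -> str:
    """Reject any schema name containing characters Hive does not allow."""
    if _INVALID_SCHEMA_CHARS.search(schema):
        raise ValueError(f"Invalid Hive schema name: {schema!r}")
    return schema
```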
### Use case/motivation
Ensure that Hive schemas used in `apache-airflow-providers-apache-hive` hooks contain no invalid characters.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27695 | https://github.com/apache/airflow/pull/27808 | 017ed9ac662d50b6e2767f297f36cb01bf79d825 | 2d45f9d6c30aabebce3449eae9f152ba6d2306e2 | "2022-11-15T17:04:45Z" | python | "2022-11-27T13:31:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,645 | ["airflow/www/views.py"] | Calendar view does not load when using CronTriggerTimeTable | ### Apache Airflow version
2.4.2
### What happened
Create a DAG and set the schedule parameter using a CronTriggerTimeTable instance. Enable the DAG so that there is DAG run data. Try to access the Calendar View for the DAG. An ERR_EMPTY_RESPONSE error is displayed instead of the page.
The Calendar View is accessible for other DAGs that are using the schedule_interval set to a cron string instead.
### What you think should happen instead
The Calendar View should have been displayed.
### How to reproduce
Create a DAG and set the schedule parameter to a CronTriggerTimeTable instance. Enable the DAG and allow some DAG runs to occur. Try to access the Calendar View for the DAG.
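A minimal DAG of the kind described (a sketch; the import path matches Airflow 2.4 and the task itself is arbitrary):
```python
from datetime import datetime

from airflow import DAG
from airflow.operators.empty import EmptyOperator
from airflow.timetables.trigger import CronTriggerTimetable

with DAG(
    dag_id="cron_trigger_calendar_repro",
    start_date=datetime(2022, 1, 1),
    schedule=CronTriggerTimetable("0 * * * *", timezone="UTC"),
    catchup=False,
):
    EmptyOperator(task_id="noop")
```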
### Operating System
Red Hat Enterprise Linux 8.6
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
Airflow 2.4.2 installed via pip with Python3.9 to venv using constraints.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27645 | https://github.com/apache/airflow/pull/28411 | 4b3eb77e65748b1a6a31116b0dd55f8295fe8a20 | 467a5e3ab287013db2a5381ef4a642e912f8b45b | "2022-11-13T19:53:24Z" | python | "2022-12-28T05:52:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,592 | ["airflow/providers/amazon/aws/hooks/glue.py", "tests/providers/amazon/aws/hooks/test_glue.py"] | AWS GlueJobOperator is not updating job config if job exists | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==6.0.0
### Apache Airflow version
2.2.5
### Operating System
Linux Ubuntu
### Deployment
Virtualenv installation
### Deployment details
Airflow deployed on ec2 instance
### What happened
`GlueJobOperator` from the airflow-amazon provider does not update the job configuration (for example its arguments or number of workers) if the job already exists, even if the configuration has changed, for example:
```python
def get_or_create_glue_job(self) -> str:
"""
Creates(or just returns) and returns the Job name
:return:Name of the Job
"""
glue_client = self.get_conn()
try:
get_job_response = glue_client.get_job(JobName=self.job_name)
self.log.info("Job Already exist. Returning Name of the job")
return get_job_response['Job']['Name']
except glue_client.exceptions.EntityNotFoundException:
self.log.info("Job doesn't exist. Now creating and running AWS Glue Job")
...
```
Is there a particular reason for not doing it, or was it just not done when the operator was implemented?
### What you think should happen instead
_No response_
### How to reproduce
Create a `GlueJobOperator` with a simple configuration:
```python
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator
submit_glue_job = GlueJobOperator(
task_id='submit_glue_job',
job_name='test_glue_job',
job_desc='test glue job',
script_location='s3://bucket/path/to/the/script/file',
script_args={},
s3_bucket='bucket',
concurrent_run_limit=1,
retry_limit=0,
num_of_dpus=5,
wait_for_completion=False
)
```
Then update one of the initial configuration like `num_of_dpus=10` and validate that the operator is not updating glue job configuration on AWS when it is run again.
### Anything else
There is `GlueCrawlerOperator` which is similar to GlueJobOperator and is doing it:
```python
def execute(self, context: Context):
"""
Executes AWS Glue Crawler from Airflow
:return: the name of the current glue crawler.
"""
crawler_name = self.config['Name']
if self.hook.has_crawler(crawler_name):
self.hook.update_crawler(**self.config)
else:
self.hook.create_crawler(**self.config)
...
```
This behavior could be reproduced in the AWSGlueJobOperator if we agree to do it.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27592 | https://github.com/apache/airflow/pull/27893 | 4fdfef909e3b9a22461c95e4ee123a84c47186fd | b609ab9001102b67a047b3078dc0b67fbafcc1e1 | "2022-11-10T16:00:05Z" | python | "2022-12-06T14:29:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,512 | ["airflow/www/static/js/dag/Main.tsx", "airflow/www/static/js/dag/details/Dag.tsx", "airflow/www/static/js/dag/details/dagRun/index.tsx", "airflow/www/static/js/dag/details/index.tsx", "airflow/www/static/js/dag/details/taskInstance/Logs/index.tsx", "airflow/www/static/js/dag/details/taskInstance/Nav.tsx", "airflow/www/static/js/dag/details/taskInstance/index.tsx", "airflow/www/static/js/dag/grid/index.tsx", "airflow/www/static/js/datasets/index.tsx", "airflow/www/static/js/utils/useOffsetHeight.tsx"] | Resizable grid view components | ### Description
~1. Ability to change the split ratio of the grid section and the task details section.~ - already done in #27273

2. Ability for the log window to be resized.

3. Would love if the choices stuck between reloads as well.
### Use case/motivation
I love the new grid view and use it day to day to check logs quickly. It would be easier to do so without having to scroll within the text box if you could resize the grid view to accommodate a larger view of the logs.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27512 | https://github.com/apache/airflow/pull/27560 | 7ea8475128009b348a82d122747ca1df2823e006 | 65bfea2a20830baa10d2e1e8328c07a7a11bbb0c | "2022-11-04T21:09:12Z" | python | "2022-11-17T20:10:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,509 | ["airflow/models/dataset.py", "tests/models/test_taskinstance.py"] | Removing DAG dataset dependency when it is already ready results in SQLAlchemy cascading delete error | ### Apache Airflow version
2.4.2
### What happened
I have a DAG that is triggered by three datasets. When I remove one or more of these datasets, the web server fails to update the DAG, and `airflow dags reserialize` fails with an `AssertionError` within SQLAlchemy. Full stack trace below:
```
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 75, in wrapper
docker-airflow-scheduler-1 | return func(*args, session=session, **kwargs)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/dag_processing/processor.py", line 781, in process_file
docker-airflow-scheduler-1 | dagbag.sync_to_db(processor_subdir=self._dag_directory, session=session)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 72, in wrapper
docker-airflow-scheduler-1 | return func(*args, **kwargs)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 644, in sync_to_db
docker-airflow-scheduler-1 | for attempt in run_with_db_retries(logger=self.log):
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/tenacity/__init__.py", line 382, in __iter__
docker-airflow-scheduler-1 | do = self.iter(retry_state=retry_state)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/tenacity/__init__.py", line 349, in iter
docker-airflow-scheduler-1 | return fut.result()
docker-airflow-scheduler-1 | File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 439, in result
docker-airflow-scheduler-1 | return self.__get_result()
docker-airflow-scheduler-1 | File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
docker-airflow-scheduler-1 | raise self._exception
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 658, in sync_to_db
docker-airflow-scheduler-1 | DAG.bulk_write_to_db(
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 72, in wrapper
docker-airflow-scheduler-1 | return func(*args, **kwargs)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dag.py", line 2781, in bulk_write_to_db
docker-airflow-scheduler-1 | session.flush()
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3345, in flush
docker-airflow-scheduler-1 | self._flush(objects)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3485, in _flush
docker-airflow-scheduler-1 | transaction.rollback(_capture_exception=True)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
docker-airflow-scheduler-1 | compat.raise_(
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
docker-airflow-scheduler-1 | raise exception
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3445, in _flush
docker-airflow-scheduler-1 | flush_context.execute()
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
docker-airflow-scheduler-1 | rec.execute(self)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 577, in execute
docker-airflow-scheduler-1 | self.dependency_processor.process_deletes(uow, states)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/dependency.py", line 552, in process_deletes
docker-airflow-scheduler-1 | self._synchronize(
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/dependency.py", line 610, in _synchronize
docker-airflow-scheduler-1 | sync.clear(dest, self.mapper, self.prop.synchronize_pairs)
docker-airflow-scheduler-1 | File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/sync.py", line 86, in clear
docker-airflow-scheduler-1 | raise AssertionError(
docker-airflow-scheduler-1 | AssertionError: Dependency rule tried to blank-out primary key column 'dataset_dag_run_queue.dataset_id' on instance '<DatasetDagRunQueue at 0xffff5d213d00>'
```
### What you think should happen instead
Currently the DAG does not load properly in the UI, and no error is displayed. Instead, the removed datasets should be dropped as dependencies and the DAG should be updated with the new dataset dependencies.
### How to reproduce
Initial DAG:
```python
def foo():
pass
@dag(
dag_id="test",
start_date=pendulum.datetime(2022, 1, 1),
catchup=False,
schedule=[
Dataset('test/1'),
Dataset('test/2'),
Dataset('test/3'),
]
)
def test_dag():
@task
def test_task():
foo()
test_task()
test_dag()
```
At least one of the datasets should be 'ready'. Now `dataset_dag_run_queue` will look something like below:
```
airflow=# SELECT * FROM dataset_dag_run_queue ;
dataset_id | target_dag_id | created_at
------------+-------------------------------------+-------------------------------
16 | test | 2022-11-02 19:47:53.938748+00
(1 row)
```
Then, update the DAG with new datasets:
```python
def foo():
pass
@dag(
dag_id="test",
start_date=pendulum.datetime(2022, 1, 1),
catchup=False,
schedule=[
Dataset('test/new/1'), # <--- updated
Dataset('test/new/2'),
Dataset('test/new/3'),
]
)
def test_dag():
@task
def test_task():
foo()
test_task()
test_dag()
```
Now you will observe the error in the web server logs or when running `airflow dags reserialize`.
I suspect this issue is related to handling of cascading deletes on the `dataset_id` foreign key for the run queue table. Dataset `id = 16` is one of the datasets that has been renamed.
### Operating System
docker image - apache/airflow:2.4.2-python3.9
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==6.0.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.4.0
apache-airflow-providers-common-sql==1.2.0
apache-airflow-providers-docker==3.2.0
apache-airflow-providers-elasticsearch==4.2.1
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-google==8.4.0
apache-airflow-providers-grpc==3.0.0
apache-airflow-providers-hashicorp==3.1.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-microsoft-azure==4.3.0
apache-airflow-providers-mysql==3.2.1
apache-airflow-providers-odbc==3.1.2
apache-airflow-providers-postgres==5.2.2
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-sendgrid==3.0.0
apache-airflow-providers-sftp==4.1.0
apache-airflow-providers-slack==6.0.0
apache-airflow-providers-sqlite==3.2.1
apache-airflow-providers-ssh==3.2.0
```
### Deployment
Docker-Compose
### Deployment details
Running using docker-compose locally.
### Anything else
To trigger this problem the dataset to be removed must be in the "ready" state so that there is an entry in `dataset_dag_run_queue`.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27509 | https://github.com/apache/airflow/pull/27538 | 7297892558e94c8cc869b175e904ca96e0752afe | fc59b02cfac7fd691602edc92a7abac38ed51531 | "2022-11-04T16:21:02Z" | python | "2022-11-07T13:03:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,507 | ["airflow/providers/http/hooks/http.py"] | Making logging for HttpHook optional | ### Description
In tasks that perform multiple requests, the log file gets cluttered by the logging call in `run`, line 129.
I propose that we add a kwarg `log_request` with default value True to control this behavior
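For illustration, how the proposed kwarg might be used by callers (the kwarg does not exist yet, and the connection id is a placeholder):
```python
from airflow.providers.http.hooks.http import HttpHook

hook = HttpHook(method="GET", http_conn_id="my_http_conn")

# Today every call emits a "Sending ... to url: ..." INFO line;
# with the proposed kwarg, callers could opt out per request:
response = hook.run(endpoint="/health", log_request=False)
```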
### Use case/motivation
reduce unnecessary entries in log files
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27507 | https://github.com/apache/airflow/pull/28911 | 185faab2112c4d3f736f8d40350401d8c1cac35b | a9d5471c66c788d8469ca65556e5820f1e96afc1 | "2022-11-04T16:04:07Z" | python | "2023-01-13T21:09:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,483 | ["airflow/www/views.py"] | DAG loading very slow in Graph view when using Dynamic Tasks | ### Apache Airflow version
2.4.2
### What happened
The web UI is very slow when loading the Graph view on DAGs that have a large number of expansions in the mapped tasks.
The problem is very similar to the one described in #23786 (resolved), but for the Graph view instead of the grid view.
It takes around 2-3 minutes to load DAGs that have ~1k expansions; with the default Airflow settings, the web server worker will time out. One can configure [web_server_worker_timeout](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#web-server-worker-timeout) to increase the timeout.
### What you think should happen instead
The Web UI takes a reasonable amount of time to load the Graph view after the dag run is finished.
### How to reproduce
Same way as in #23786, you can create a mapped task that spans a large number of expansions then when you run it, the Graph view will take a very long amount of time to load and eventually time out.
You can use this code to generate multiple dags with `2^x` expansions. After running the DAGs you should notice how slow it is when attempting to open the Graph view of the DAGs with the largest number of expansions.
```python
from datetime import datetime
from airflow.models import DAG
from airflow.operators.empty import EmptyOperator
from airflow.operators.python import PythonOperator
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'email_on_failure': False,
'email_on_retry': False,
}
initial_scale = 7
max_scale = 12
scaling_factor = 2
for scale in range(initial_scale, max_scale + 1):
dag_id = f"dynamic_task_mapping_{scale}"
with DAG(
dag_id=dag_id,
default_args=default_args,
catchup=False,
schedule_interval=None,
start_date=datetime(1970, 1, 1),
render_template_as_native_obj=True,
) as dag:
start = EmptyOperator(task_id="start")
mapped = PythonOperator.partial(
task_id="mapped",
python_callable=lambda m: print(m),
).expand(
op_args=[[x] for x in list(range(2**scale))]
)
end = EmptyOperator(task_id="end")
start >> mapped >> end
globals()[dag_id] = dag
```
### Operating System
MacOS Version 12.6 (Apple M1)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==4.0.0
apache-airflow-providers-common-sql==1.2.0
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-sqlite==3.2.1
```
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27483 | https://github.com/apache/airflow/pull/29791 | 0db38ad1a2cf403eb546f027f2e5673610626f47 | 60d98a1bc2d54787fcaad5edac36ecfa484fb42b | "2022-11-03T08:46:08Z" | python | "2023-02-28T05:15:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,478 | ["airflow/models/dagrun.py", "airflow/models/taskinstance.py", "tests/models/test_dagrun.py", "tests/models/test_taskinstance.py"] | Scheduler crash when clear a previous run of a normal task that is now a mapped task | ### Apache Airflow version
2.4.2
### What happened
I have clear a task A that was a normal task but that is now a mapped task
```log
[2022-11-02 23:33:20 +0000] [17] [INFO] Worker exiting (pid: 17)
2022-11-02T23:33:20.390911528Z Traceback (most recent call last):
2022-11-02T23:33:20.390935788Z File "/usr/local/bin/airflow", line 8, in <module>
2022-11-02T23:33:20.390939798Z sys.exit(main())
2022-11-02T23:33:20.390942302Z File "/usr/local/lib/python3.10/site-packages/airflow/__main__.py", line 39, in main
2022-11-02T23:33:20.390944924Z args.func(args)
2022-11-02T23:33:20.390947345Z File "/usr/local/lib/python3.10/site-packages/airflow/cli/cli_parser.py", line 52, in command
2022-11-02T23:33:20.390949893Z return func(*args, **kwargs)
2022-11-02T23:33:20.390952237Z File "/usr/local/lib/python3.10/site-packages/airflow/utils/cli.py", line 103, in wrapper
2022-11-02T23:33:20.390954862Z return f(*args, **kwargs)
2022-11-02T23:33:20.390957163Z File "/usr/local/lib/python3.10/site-packages/airflow/cli/commands/scheduler_command.py", line 85, in scheduler
2022-11-02T23:33:20.390959672Z _run_scheduler_job(args=args)
2022-11-02T23:33:20.390961979Z File "/usr/local/lib/python3.10/site-packages/airflow/cli/commands/scheduler_command.py", line 50, in _run_scheduler_job
2022-11-02T23:33:20.390964496Z job.run()
2022-11-02T23:33:20.390966931Z File "/usr/local/lib/python3.10/site-packages/airflow/jobs/base_job.py", line 247, in run
2022-11-02T23:33:20.390969441Z self._execute()
2022-11-02T23:33:20.390971778Z File "/usr/local/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 746, in _execute
2022-11-02T23:33:20.390974368Z self._run_scheduler_loop()
2022-11-02T23:33:20.390976612Z File "/usr/local/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 866, in _run_scheduler_loop
2022-11-02T23:33:20.390979125Z num_queued_tis = self._do_scheduling(session)
2022-11-02T23:33:20.390981458Z File "/usr/local/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 946, in _do_scheduling
2022-11-02T23:33:20.390984819Z callback_tuples = self._schedule_all_dag_runs(guard, dag_runs, session)
2022-11-02T23:33:20.390988440Z File "/usr/local/lib/python3.10/site-packages/airflow/utils/retries.py", line 78, in wrapped_function
2022-11-02T23:33:20.390991893Z for attempt in run_with_db_retries(max_retries=retries, logger=logger, **retry_kwargs):
2022-11-02T23:33:20.391008515Z File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 384, in __iter__
2022-11-02T23:33:20.391012668Z do = self.iter(retry_state=retry_state)
2022-11-02T23:33:20.391016220Z File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 351, in iter
2022-11-02T23:33:20.391019633Z return fut.result()
2022-11-02T23:33:20.391022534Z File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 451, in result
2022-11-02T23:33:20.391025820Z return self.__get_result()
2022-11-02T23:33:20.391029555Z File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
2022-11-02T23:33:20.391033787Z raise self._exception
2022-11-02T23:33:20.391037611Z File "/usr/local/lib/python3.10/site-packages/airflow/utils/retries.py", line 87, in wrapped_function
2022-11-02T23:33:20.391040339Z return func(*args, **kwargs)
2022-11-02T23:33:20.391042660Z File "/usr/local/lib/python3.10/site-packages/airflow/jobs/scheduler_job.py", line 1234, in _schedule_all_dag_runs
2022-11-02T23:33:20.391045166Z for dag_run in dag_runs:
2022-11-02T23:33:20.391047413Z File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2887, in __iter__
2022-11-02T23:33:20.391049815Z return self._iter().__iter__()
2022-11-02T23:33:20.391052252Z File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2894, in _iter
2022-11-02T23:33:20.391054786Z result = self.session.execute(
2022-11-02T23:33:20.391057119Z File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1688, in execute
2022-11-02T23:33:20.391059741Z conn = self._connection_for_bind(bind)
2022-11-02T23:33:20.391062247Z File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1529, in _connection_for_bind
2022-11-02T23:33:20.391065901Z return self._transaction._connection_for_bind(
2022-11-02T23:33:20.391069140Z File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 721, in _connection_for_bind
2022-11-02T23:33:20.391078064Z self._assert_active()
2022-11-02T23:33:20.391081939Z File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 601, in _assert_active
2022-11-02T23:33:20.391085250Z raise sa_exc.PendingRollbackError(
2022-11-02T23:33:20.391087747Z sqlalchemy.exc.PendingRollbackError: This Session's transaction has been rolled back due to a previous exception during flush. To begin a new transaction with this Session, first issue Session.rollback(). Original exception was: (psycopg2.errors.ForeignKeyViolation) update or delete on table "task_instance" violates foreign key constraint "task_fail_ti_fkey" on table "task_fail"
2022-11-02T23:33:20.391091226Z DETAIL: Key (dag_id, task_id, run_id, map_index)=(kubernetes_dag, task-one, scheduled__2022-11-01T00:00:00+00:00, -1) is still referenced from table "task_fail".
2022-11-02T23:33:20.391093987Z
2022-11-02T23:33:20.391102116Z [SQL: UPDATE task_instance SET map_index=%(map_index)s WHERE task_instance.dag_id = %(task_instance_dag_id)s AND task_instance.task_id = %(task_instance_task_id)s AND task_instance.run_id = %(task_instance_run_id)s AND task_instance.map_index = %(task_instance_map_index)s]
2022-11-02T23:33:20.391105554Z [parameters: {'map_index': 0, 'task_instance_dag_id': 'kubernetes_dag', 'task_instance_task_id': 'task-one', 'task_instance_run_id': 'scheduled__2022-11-01T00:00:00+00:00', 'task_instance_map_index': -1}]
2022-11-02T23:33:20.391108241Z (Background on this error at: https://sqlalche.me/e/14/gkpj) (Background on this error at: https://sqlalche.me/e/14/7s2a)
2022-11-02T23:33:20.489698500Z [2022-11-02 23:33:20 +0000] [7] [INFO] Shutting down: Master
```
### What you think should happen instead
Airflow should treat the existing and previous runs as runs of the same (now mapped) task, because currently I can no longer see the logs of a task that has become a mapped task.
### How to reproduce
1. Create a DAG with a normal task A
2. Run the DAG and let task A succeed
3. Edit the DAG to make task A a mapped task (without changing the task's name)
4. Clear the task
5. The scheduler crashes
### Operating System
ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27478 | https://github.com/apache/airflow/pull/29645 | e02bfc870396387ef2052ab375cdd2a54e704ae2 | a770edfac493f3972c10a43e45bcd0e7cfaea65f | "2022-11-02T23:43:43Z" | python | "2023-02-20T19:45:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,462 | ["airflow/models/dag.py", "tests/sensors/test_external_task_sensor.py"] | Clearing the parent dag will not clear child dag's mapped tasks | ### Apache Airflow version
2.4.2
### What happened
In the scenario where we have 2 dags, 1 dag dependent on the other by having an ExternalTaskMarker on the parent dag pointing to the child dag and we have some number of mapped tasks in the child dag that have been expanded (map_index is not -1).
If we clear the parent DAG, the child DAG's mapped tasks will NOT be cleared; they do not appear in the "Task instances to be cleared" list.
### What you think should happen instead
I believe the behaviour should be having the child dag's mapped tasks cleared when the parent dag is cleared.
### How to reproduce
1. Create a parent dag with an ExternalTaskMarker
2. Create a child dag which has some ExternalTaskSensor that the ExternalTaskMarker is pointing to
3. Add any number of mapped tasks downstream of that ExternalTaskSensor
4. Clear the parent dag's ExternalTaskMarker (or any task upstream of it); a minimal sketch of such a DAG pair is shown below
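A minimal sketch of such a pair (DAG ids, schedule, and the mapped task body are arbitrary assumptions):
```python
from datetime import datetime

from airflow import DAG
from airflow.decorators import task
from airflow.sensors.external_task import ExternalTaskMarker, ExternalTaskSensor

with DAG("parent_dag", start_date=datetime(2022, 1, 1), schedule="@daily", catchup=False):
    ExternalTaskMarker(
        task_id="notify_child",
        external_dag_id="child_dag",
        external_task_id="wait_for_parent",
    )

with DAG("child_dag", start_date=datetime(2022, 1, 1), schedule="@daily", catchup=False):
    wait = ExternalTaskSensor(
        task_id="wait_for_parent",
        external_dag_id="parent_dag",
        external_task_id="notify_child",
    )

    @task
    def mapped_task(x):
        return x

    wait >> mapped_task.expand(x=[1, 2, 3])
```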
### Operating System
Mac OS Monterey 12.6
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27462 | https://github.com/apache/airflow/pull/27501 | bc0063af99629e6b3eb5c76c88ac5bfaf92afaaf | 5ce9c827f7bcdef9c526fd4416533fc481de4675 | "2022-11-02T05:55:29Z" | python | "2022-11-17T01:54:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,402 | ["chart/values.yaml", "helm_tests/airflow_aux/test_configmap.py"] | #26415 Broke flower dashboard | ### Discussed in https://github.com/apache/airflow/discussions/27401
<div type='discussions-op-text'>
<sup>Originally posted by **Flogue** October 25, 2022</sup>
### Official Helm Chart version
1.7.0 (latest released)
### Apache Airflow version
2.4.1
### Kubernetes Version
1.24.6
### Helm Chart configuration
```
flower:
enabled: true
```
### Docker Image customisations
None
### What happened
Flower dashboard is unreachable.
"Failed to load resource: net::ERR_CONNECTION_RESET" in browser console
### What you think should happen instead
Dashboard should load.
### How to reproduce
Just enable flower:
```
helm install airflow-rl apache-airflow/airflow --namespace airflow-np --set flower.enabled=true
kubectl port-forward svc/airflow-rl-flower 5555:5555 --namespace airflow-np
```
### Anything else
A quick fix for this is:
```
config:
celery:
flower_url_prefix: ''
```
Basically, the new default value '/' makes it so the scripts and links read:
`<script src="//static/js/jquery....`
where it should be:
`<script src="/static/js/jquery....`
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27402 | https://github.com/apache/airflow/pull/33134 | ca5acda1617a5cdb1d04f125568ffbd264209ec7 | 6e4623ab531a1b6755f6847d2587d014a387560d | "2022-10-31T03:49:04Z" | python | "2023-08-07T20:04:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,396 | ["airflow/providers/amazon/aws/log/cloudwatch_task_handler.py", "tests/providers/amazon/aws/log/test_cloudwatch_task_handler.py"] | CloudWatch task handler doesn't fall back to local logs when Amazon CloudWatch logs aren't found | This is really a CloudWatch handler issue - not "airflow" core.
### Discussed in https://github.com/apache/airflow/discussions/27395
_Originally posted by **matthewblock**, October 24, 2022_
### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We recently activated AWS Cloudwatch logs. We were hoping the logs server would gracefully handle task logs that previously existed but were not written to Cloudwatch, but when fetching the remote logs failed (expected), the logs server didn't fall back to local logs.
```
*** Reading remote log from Cloudwatch log_group: <our log group> log_stream: <our log stream>
```
### What you think should happen instead
According to documentation [Logging for Tasks](https://airflow.apache.org/docs/apache-airflow/stable/logging-monitoring/logging-tasks.html#writing-logs-locally), when fetching remote logs fails, the logs server should fall back to looking for local logs:
> In the Airflow UI, remote logs take precedence over local logs when remote logging is enabled. If remote logs can not be found or accessed, local logs will be displayed.
This should be indicated by the message `*** Falling back to local log`.
If this is not the intended behavior, the documentation should be modified to reflect the intended behavior.
### How to reproduce
1. Run a test DAG without [AWS CloudWatch logging configured](https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/logging/cloud-watch-task-handlers.html)
2. Configure AWS CloudWatch remote logging and re-run a test DAG
### Operating System
Debian buster-slim
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27396 | https://github.com/apache/airflow/pull/27564 | 3aed495f50e8bc0e22ff90efee7671a73168b19e | c490a328f4d0073052d8b5205c7c4cab96c3d559 | "2022-10-31T02:25:54Z" | python | "2022-11-11T00:40:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,358 | ["docs/apache-airflow/executor/kubernetes.rst"] | Airflow 2.2.2 pod_override does not override `args` of V1Container | ### Apache Airflow version
2.2.2
### What happened
I have a bash sensor defined as follows:
```python
from airflow.sensors.bash import BashSensor
from kubernetes.client import models as k8s

foo_sensor_task = BashSensor(
task_id="foo_task",
poke_interval=3600,
bash_command="python -m foo.run",
retries=0,
executor_config={
"pod_template_file: "path-to-file-yaml",
"pod_override": k8s.V1Pod(
spec=k8s.V1PodSpec(
containers=[
k8s.V1Container(name="base", image="foo-image", args=["abc"])
]
)
)
}
)
```
Entrypoint command in the `foo-image` is `python -m foo.run`. However, when I deploy the image onto Openshift (Kubernetes), the command somehow turns out to be the following:
```bash
python -m foo.run airflow tasks run foo_dag foo_sensor_task manual__2022-10-28T21:08:39+00:00 ...
```
which is wrong.
### What you think should happen instead
I assume the expected command should override `args` (see V1Container `args` value above) and therefore should be:
```bash
python -m foo.run abc
```
and **not**:
```bash
python -m foo.run airflow tasks run foo_dag foo_sensor_task manual__2022-10-28T21:08:39+00:00 ...
```
### How to reproduce
To reproduce the above issue, create a simple DAG and a sensor as defined above. Use a sample image and try to override the args. I cannot provide the same code due to NDA.
### Operating System
RHLS 7.9
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==2.4.0
apache-airflow-providers-cncf-kubernetes==2.1.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-http==2.0.1
apache-airflow-providers-imap==2.0.1
apache-airflow-providers-mysql==2.1.1
apache-airflow-providers-sqlite==2.0.1
### Deployment
Other
### Deployment details
N/A
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27358 | https://github.com/apache/airflow/pull/27450 | aa36f754e2307ccd8a03987b81ea1e1a04b03c14 | 8f5e100f30764e7b1818a336feaa8bb390cbb327 | "2022-10-29T01:08:10Z" | python | "2022-11-02T06:08:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,345 | ["airflow/utils/log/file_task_handler.py", "airflow/utils/log/logging_mixin.py", "tests/utils/test_logging_mixin.py"] | Duplicate log lines in CloudWatch after upgrade to 2.4.2 | ### Apache Airflow version
2.4.2
### What happened
We upgraded Airflow from 2.4.1 to 2.4.2 and immediately noticed that every task log line is duplicated _into_ CloudWatch. Comparing logs from tasks run before and after the upgrade indicates that the issue is not in how the logs are displayed in Airflow, but rather that two log lines are now produced instead of one.
When observing both the CloudWatch log streams and the Airflow UI, we can see duplicate log lines for ~_all_~ most log entries post upgrade, whilst seeing single log lines in tasks before upgrade.
This happens _both_ for tasks ran in a remote `EcsRunTaskOperator`'s as well as in regular `PythonOperator`'s.
### What you think should happen instead
A single non-duplicate log line should be produced into CloudWatch.
### How to reproduce
From my understanding now, any setup on 2.4.2 that uses CloudWatch remote logging will produce duplicate log lines. (But I have not been able to confirm other setups)
### Operating System
Docker: `apache/airflow:2.4.2-python3.9` - Running on AWS ECS Fargate
### Versions of Apache Airflow Providers
```
apache-airflow[celery,postgres,apache.hive,jdbc,mysql,ssh,amazon,google,google_auth]==2.4.2
apache-airflow-providers-amazon==6.0.0
```
### Deployment
Other Docker-based deployment
### Deployment details
We are running a docker inside Fargate ECS on AWS.
The following environment variables + config in CloudFormation control remote logging:
```
- Name: AIRFLOW__LOGGING__REMOTE_LOGGING
Value: True
- Name: AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER
Value: !Sub "cloudwatch://${TasksLogGroup.Arn}"
```
### Anything else
We did not change any other configuration during the upgrade, simply bumped the requirements for provider list + docker image from 2.4.1 to 2.4.2.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27345 | https://github.com/apache/airflow/pull/27591 | 85ec17fbe1c07b705273a43dae8fbdece1938e65 | 933fefca27a5cd514c9083040344a866c7f517db | "2022-10-28T10:32:13Z" | python | "2022-11-10T17:58:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,290 | ["airflow/providers/docker/operators/docker.py", "tests/providers/docker/operators/test_docker.py"] | Publish a container's port(s) to the host with DockerOperator | ### Description
The [`create_container` method](https://github.com/docker/docker-py/blob/bc0a5fbacd7617fd338d121adca61600fc70d221/docker/api/container.py#L370) has a `ports` param listing ports to open inside the container, and a `host_config` param to [declare port bindings](https://github.com/docker/docker-py/blob/bc0a5fbacd7617fd338d121adca61600fc70d221/docker/api/container.py#L542).
We can learn from [Expose port using DockerOperator](https://stackoverflow.com/questions/65157416/expose-port-using-dockeroperator) how to add this feature to DockerOperator. I have already tested it and it works; I also created a custom docker decorator based on this DockerOperator extension.
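For reference, the underlying docker-py calls the operator would need to pass through (a standalone sketch of the low-level API, not DockerOperator's actual implementation; the image and port numbers are arbitrary):
```python
from docker import APIClient

client = APIClient(base_url="unix://var/run/docker.sock")
container = client.create_container(
    image="nginx:alpine",
    ports=[80],  # port exposed inside the container
    host_config=client.create_host_config(port_bindings={80: 8080}),  # publish to host port 8080
)
client.start(container)
```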
### Use case/motivation
I would like to publish the container's port(s) that is created with DockerOperator to the host. These changes should also be applied to the Docker decorator.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27290 | https://github.com/apache/airflow/pull/30730 | cb1ecb0647d459999041ee6018f8f282fc25b09b | d8c0e3009a649ce057595539b96a566b7faa5584 | "2022-10-26T07:56:51Z" | python | "2023-05-17T09:03:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,282 | ["airflow/providers/cncf/kubernetes/operators/pod.py", "airflow/providers/cncf/kubernetes/utils/pod_manager.py", "kubernetes_tests/test_kubernetes_pod_operator.py", "tests/providers/cncf/kubernetes/utils/test_pod_manager.py"] | KubernetesPodOperator: Option to show logs from all containers in a pod | ### Description
Currently, KubernetesPodOperator fetches logs using
```
self.pod_manager.fetch_container_logs(
pod=self.pod,
container_name=self.BASE_CONTAINER_NAME,
follow=True,
)
```
and so only shows log from the main container in a pod. It would be very useful/helpful to have the possibility to fetch logs for all the containers in a pod.
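A sketch of what iterating over every container could look like (not the provider's actual code; it assumes the pod spec is available on the operator, and following multiple streams sequentially is a simplification):
```python
# Inside KubernetesPodOperator, instead of fetching only the base container:
for container in self.pod.spec.containers:
    self.pod_manager.fetch_container_logs(
        pod=self.pod,
        container_name=container.name,
        follow=True,
    )
```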
### Use case/motivation
Making the cause of failed KubernetesPodOperator tasks a lot more visible.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27282 | https://github.com/apache/airflow/pull/31663 | e7587b3369af30848c3cf1c7eff9e801b1440793 | 9a0f41ba53185031bc2aa56ead2928ae4b20de99 | "2022-10-25T23:29:19Z" | python | "2023-07-06T09:49:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,228 | ["airflow/serialization/serialized_objects.py", "tests/www/views/test_views_trigger_dag.py"] | Nested Parameters Break for DAG Run Configurations | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Airflow Version Used: 2.3.3
This bug report is being created out of the following discussion - https://github.com/apache/airflow/discussions/25064
With the following DAG definition (with nested params):
```
DAG(
dag_id="some_id",
start_date=datetime(2021, 1, 1),
catchup=False,
doc_md=__doc__,
schedule_interval=None,
params={
"accounts": Param(
[{'name': 'account_name_1', 'country': 'usa'}],
schema = {
"type": "array",
"minItems": 1,
"items": {
"type": "object",
"default": {"name": "account_name_1", "country": "usa"},
"properties": {
"name": {"type": "string"},
"country": {"type": "string"},
},
"required": ["name", "country"]
},
}
),
}
)
```
**Note:** It does not matter whether `Param` and JSON Schema are used or not; you can reproduce this with a simple nested object too.
Then the UI displays the following:
```
{
"accounts": null
}
```
### What you think should happen instead
Following is what the UI should display instead:
```
{
"accounts": [
{
"name": "account_name_1",
"country": "usa"
}
]
}
```
### How to reproduce
_No response_
### Operating System
Debian Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
Although I am personally using Composer, this is most likely an Airflow issue, given that non-Composer users are facing it too (the discussion's original author and people in the Slack community).
### Anything else
I have put some more explanation and a quick way to reproduce this [as a comment in the discussion](https://github.com/apache/airflow/discussions/25064#discussioncomment-3907974) linked.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27228 | https://github.com/apache/airflow/pull/27482 | 2d2f0daad66416d565e874e35b6a487a21e5f7b1 | 9409293514cef574179a5320ed3ed50881064423 | "2022-10-24T09:58:34Z" | python | "2022-11-08T13:43:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,225 | ["airflow/www/templates/analytics/google_analytics.html"] | Tracking User Activity Issue: Google Analytics tag version is not up-to-date | ### Apache Airflow version
2.4.1
### What happened
Airflow uses an outdated Google Analytics tag version, so Google Analytics does not collect user activity metrics from Airflow.
### What you think should happen instead
The Tracking User Activity feature should work properly with Google Analytics
### How to reproduce
- Configure to use Google Analytics with Airflow
- Google Analytics does not collect User Activity Metric from Airflow
Note: with the upgraded Google Analytics tag it works properly
https://support.google.com/analytics/answer/9304153#add-tag&zippy=%2Cadd-your-tag-using-google-tag-manager%2Cfind-your-g--id-for-any-platform-that-accepts-a-g--id%2Cadd-the-google-tag-directly-to-your-web-pages
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27225 | https://github.com/apache/airflow/pull/27226 | 55f8a63d012d4ca5ca726195bed4b38e9b1a05f9 | 5e6cec849a5fa90967df1447aba9521f1cfff3d0 | "2022-10-24T09:00:49Z" | python | "2022-10-27T13:25:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,182 | ["airflow/providers/ssh/hooks/ssh.py", "airflow/providers/ssh/operators/ssh.py", "tests/providers/ssh/hooks/test_ssh.py", "tests/providers/ssh/operators/test_ssh.py"] | SSHOperator ignores cmd_timeout | ### Apache Airflow Provider(s)
ssh
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.4.1
### Operating System
linux
### Deployment
Other
### Deployment details
_No response_
### What happened
Hi,
SSHOperator documentation states that we should be using cmd_timeout instead of timeout
```
:param timeout: (deprecated) timeout (in seconds) for executing the command. The default is 10 seconds.
Use conn_timeout and cmd_timeout parameters instead.
```
But the code doesn't use cmd_timeout at all - and it's still passing `self.timeout` when running the ssh command:
```
return self.ssh_hook.exec_ssh_client_command(
ssh_client, command, timeout=self.timeout, environment=self.environment, get_pty=self.get_pty
)
```
It seems to me that we should use `self.cmd_timeout` here instead. When creating the hook, it correctly uses `self.conn_timeout`.
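A minimal sketch of the proposed change, keeping the call exactly as quoted above and only swapping the timeout argument:

```python
return self.ssh_hook.exec_ssh_client_command(
    ssh_client,
    command,
    timeout=self.cmd_timeout,  # use cmd_timeout instead of the deprecated timeout
    environment=self.environment,
    get_pty=self.get_pty,
)
```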
I'll try to work on a PR for this.
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27182 | https://github.com/apache/airflow/pull/27184 | cfd63df786e0c40723968cb8078f808ca9d39688 | dc760b45eaeccc3ff35a5acdfe70968ca0451331 | "2022-10-21T12:29:48Z" | python | "2022-11-07T02:07:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,166 | ["airflow/www/static/css/flash.css", "airflow/www/static/js/dag/grid/TaskName.test.tsx", "airflow/www/static/js/dag/grid/TaskName.tsx", "airflow/www/static/js/dag/grid/index.test.tsx"] | Carets in Grid view are the wrong way around | ### Apache Airflow version
main (development)
### What happened
When expanding tasks to see sub-tasks in the Grid UI, the carets to expand the task are pointing the wrong way.
### What you think should happen instead
Can you PLEASE use the accepted Material UI standard for expansion & contraction - https://mui.com/material-ui/react-list/#nested-list
### How to reproduce
_No response_
### Operating System
All
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27166 | https://github.com/apache/airflow/pull/28624 | 69ab7d8252f830d8c1a013d34f8305a16da26bcf | 0ab881a4ab78ca7d30712c893a6f01b83eb60e9e | "2022-10-20T15:52:50Z" | python | "2023-01-02T21:01:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,165 | ["airflow/providers/google/cloud/hooks/workflows.py", "tests/providers/google/cloud/hooks/test_workflows.py"] | WorkflowsCreateExecutionOperator execution argument only receive bytes | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
`apache-airflow-providers-google==7.0.0`
### Apache Airflow version
2.3.2
### Operating System
Ubuntu 20.04.5 LTS (Focal Fossa)
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
WorkflowsCreateExecutionOperator triggers Google Cloud Workflows, and its execution param receives an argument of the form `{"argument": {"key": "val", ...}}`.
But when I passed the argument as a dict using `render_template_as_native_obj=True`, a protobuf error occurred: `TypeError: {'projectId': 'project-id', 'location': 'us-east1'} has type dict, but expected one of: bytes, unicode`.
When I passed the argument as bytes, e.g. `{"argument": b'{\n  "projectId": "project-id",\n  "location": "us-east1"\n}'}`, it worked.
### What you think should happen instead
The execution argument should accept a dict instead of only bytes.
### How to reproduce
Not working:
```python
from airflow import DAG
from airflow.models.param import Param
from airflow.operators.dummy_operator import DummyOperator
from airflow.providers.google.cloud.operators.workflows import WorkflowsCreateExecutionOperator
with DAG(
dag_id="continual_learning_deid_norm_h2h_test",
params={
"location": Param(type="string", default="us-east1"),
"project_id": Param(type="string", default="project-id"),
"workflow_id": Param(type="string", default="orkflow"),
"workflow_execution_info": {
"argument": {
"projectId": "project-id",
"location": "us-east1"
}
}
},
render_template_as_native_obj=True
) as dag:
execution = "{{ params.workflow_execution_info }}"
create_execution = WorkflowsCreateExecutionOperator(
task_id="create_execution",
location="{{ params.location }}",
project_id="{{ params.project_id }}",
workflow_id="{{ params.workflow_id }}",
execution="{{ params.workflow_execution_info }}"
)
start_operator = DummyOperator(task_id='test_task')
start_operator >> create_execution
```
Working:
```python
from airflow import DAG
from airflow.models.param import Param
from airflow.operators.dummy_operator import DummyOperator
from airflow.providers.google.cloud.operators.workflows import WorkflowsCreateExecutionOperator
with DAG(
dag_id="continual_learning_deid_norm_h2h_test",
params={
"location": Param(type="string", default="us-east1"),
"project_id": Param(type="string", default="project-id"),
"workflow_id": Param(type="string", default="orkflow"),
"workflow_execution_info": {
"argument": b'{\n "projectId": "project-id",\n "location": "us-east1"\n}'
}
},
render_template_as_native_obj=True
) as dag:
execution = "{{ params.workflow_execution_info }}"
create_execution = WorkflowsCreateExecutionOperator(
task_id="create_execution",
location="{{ params.location }}",
project_id="{{ params.project_id }}",
workflow_id="{{ params.workflow_id }}",
execution="{{ params.workflow_execution_info }}"
)
start_operator = DummyOperator(task_id='test_task')
start_operator >> create_execution
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27165 | https://github.com/apache/airflow/pull/27361 | 9c41bf35e6149d4edfc585d97c348a4f864e7973 | 332c01d6e0bef41740e8fbc2c9600e7b3066615b | "2022-10-20T14:50:46Z" | python | "2022-10-31T05:35:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,146 | ["airflow/providers/dbt/cloud/hooks/dbt.py", "docs/apache-airflow-providers-dbt-cloud/connections.rst", "tests/providers/dbt/cloud/hooks/test_dbt_cloud.py"] | dbt Cloud Provider Not Compatible with emea.dbt.com | ### Apache Airflow Provider(s)
dbt-cloud
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.3.3
### Operating System
Linux
### Deployment
Composer
### Deployment details
_No response_
### What happened
Trying to use the provider with dbt Cloud's new EMEA region (https://docs.getdbt.com/docs/deploy/regions), I am not able to use emea.dbt.com as the tenant, because `.getdbt.com` is automatically appended to the tenant value.
### What you think should happen instead
We should be able to change the entire URL - and it could still default to cloud.getdbt.com
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27146 | https://github.com/apache/airflow/pull/28890 | ed8788bb80764595ba2872cba0d2da9e4b137e07 | 141338b24efeddb9460b53b8501654b50bc6b86e | "2022-10-19T15:41:37Z" | python | "2023-01-12T19:25:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,140 | ["airflow/cli/commands/dag_processor_command.py", "airflow/jobs/dag_processor_job.py", "tests/cli/commands/test_dag_processor_command.py"] | Invalid livenessProbe for Standalone DAG Processor | ### Official Helm Chart version
1.7.0 (latest released)
### Apache Airflow version
2.3.4
### Kubernetes Version
1.22.12-gke.1200
### Helm Chart configuration
```yaml
dagProcessor:
  enabled: true
```
### Docker Image customisations
```dockerfile
FROM apache/airflow:2.3.4-python3.9
USER root
RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
RUN curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
RUN apt-get update && apt-get install -y google-cloud-cli
RUN curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
RUN sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
USER airflow
```
### What happened
The current DAG processor livenessProbe is the following:
```bash
CONNECTION_CHECK_MAX_COUNT=0 AIRFLOW__LOGGING__LOGGING_LEVEL=ERROR exec /entrypoint \
airflow jobs check --hostname $(hostname)
```
This command checks the metadata DB searching for an active job whose hostname is the current pod's one (_airflow-dag-processor-xxxx_).
However, after running the dag-processor pod for more than 1 hour, there are no jobs with the processor hostname in the jobs table.


As a consequence, the livenessProbe fails and the pod is constantly restarting.
After investigating the code, I found out that DagFileProcessorManager is not creating jobs in the metadata DB, so the livenessProbe is not valid.
### What you think should happen instead
A new job should be created for the Standalone DAG Processor.
By doing that, the _airflow jobs check --hostname <hostname>_ command would work correctly and the livenessProbe wouldn't fail
### How to reproduce
1. Deploy airflow with a standalone dag-processor.
2. Wait for ~ 5 minutes
3. Check that the livenessProbe has been failing for 5 minutes and the pod has been restarted.
### Anything else
I think this behavior is inherited from the non-standalone dag-processor mode (there the livenessProbe checks for a SchedulerJob, which in fact encompasses the "DagProcessorJob").
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27140 | https://github.com/apache/airflow/pull/28799 | 1edaddbb1cec740db2ff2a86fb23a3a676728cb0 | 0018b94a4a5f846fc87457e9393ca953ba0b5ec6 | "2022-10-19T14:02:51Z" | python | "2023-02-21T09:54:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,096 | ["airflow/providers/amazon/aws/hooks/rds.py", "airflow/providers/amazon/aws/operators/rds.py", "airflow/providers/amazon/aws/sensors/rds.py", "tests/providers/amazon/aws/hooks/test_rds.py", "tests/providers/amazon/aws/operators/test_rds.py", "tests/providers/amazon/aws/sensors/test_rds.py"] | Use Boto waiters instead of customer _await_status method for RDS Operators | ### Description
Currently some code in the RDS operators uses boto waiters and some uses a custom `_await_status` method; the former is preferred over the latter (waiters are vetted code provided by the boto SDK, with built-in polling and retry handling). See [this discussion thread](https://github.com/apache/airflow/pull/27076#discussion_r997325535) for more details/context.
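For illustration, a minimal example of what a boto3 waiter looks like (not the operator code; the instance identifier below is made up):

```python
import boto3

rds = boto3.client("rds")

# "db_instance_available" is one of the waiters shipped with boto3 for RDS.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(
    DBInstanceIdentifier="my-db-instance",          # hypothetical identifier
    WaiterConfig={"Delay": 30, "MaxAttempts": 60},  # polling interval and retry limit
)
```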
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27096 | https://github.com/apache/airflow/pull/27410 | b717853e4c17d67f8ea317536c98c7416eb080ca | 2bba98f109cc7737f4293a195e03a0cc21a624cb | "2022-10-17T17:46:53Z" | python | "2022-11-17T17:02:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,079 | ["airflow/macros/__init__.py", "tests/macros/test_macros.py"] | Option to deserialize JSON from last log line in BashOperator and DockerOperator before sending to XCom | ### Description
In order to create an XCom value with a BashOperator or a DockerOperator, we can use the option `do_xcom_push` that pushes to XCom the last line of the command logs.
It would be interesting to provide an option `xcom_json` to deserialize this last log line when it is a JSON string, before sending it as an XCom. This would make it possible to access its attributes later in other tasks with the `xcom_pull()` method.
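Hypothetical usage of the proposed option (the `xcom_json` parameter does not exist today; the name and flag are only an illustration of the requested feature):

```python
from airflow.operators.bash import BashOperator

task_a = BashOperator(
    task_id="task_a",
    bash_command='echo \'{"key1": 1, "key2": 3}\'',
    do_xcom_push=True,
    xcom_json=True,  # proposed flag: json.loads() the last log line before pushing it to XCom
)
```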
### Use case/motivation
See my StackOverflow post : https://stackoverflow.com/questions/74083466/how-to-deserialize-xcom-strings-in-airflow
Consider a DAG containing two tasks: `DAG: Task A >> Task B` (BashOperators or DockerOperators). They need to communicate through XComs.
- `Task A` outputs the information as a one-line JSON in stdout, which can then be retrieved in the logs of `Task A`, and so in its *return_value* XCom key if `xcom_push=True`. For instance: `{"key1":1,"key2":3}`
- `Task B` only needs the `key2` information from `Task A`, so we need to deserialize the *return_value* XCom of `Task A` to extract only this value and pass it directly to `Task B`, using the Jinja template `{{xcom_pull('task_a')['key2']}}`. Using it as-is results in `jinja2.exceptions.UndefinedError: 'str object' has no attribute 'key2'` because *return_value* is just a string.
As a comparison, we can already deserialize Airflow Variables in Jinja templates (e.g. `{{ var.json.my_var.path }}`). I would like to do the same thing with XComs.
**Current workaround**:
We can create a custom Operator (inherited from BashOperator or DockerOperator) and augment the `execute` method:
1. executes the original `execute` method
2. intercepts the last log line of the task
3. tries to `json.loads()` it into a Python dictionary
4. finally returns the output (which is now a dictionary, not a string)

The previous Jinja template `{{ xcom_pull('task_a')['key2'] }}` now works in `task B`, since the XCom value is a Python dictionary.
```python
import json

from airflow.operators.bash import BashOperator
from airflow.providers.docker.operators.docker import DockerOperator


class BashOperatorExtended(BashOperator):
    def execute(self, context):
        output = BashOperator.execute(self, context)
        try:
            # Deserialize the last log line if it is a JSON string
            output = json.loads(output)
        except (TypeError, ValueError):
            # Not valid JSON (or not a string): keep the raw output
            pass
        return output


class DockerOperatorExtended(DockerOperator):
    def execute(self, context):
        output = DockerOperator.execute(self, context)
        try:
            output = json.loads(output)
        except (TypeError, ValueError):
            pass
        return output
```
But creating a new operator just for that purpose is not really satisfying.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27079 | https://github.com/apache/airflow/pull/28930 | d20300018a38159f5452ae16bc9df90b1e7270e5 | ffdc696942d96a14a5ee0279f950e3114817055c | "2022-10-16T20:14:05Z" | python | "2023-02-19T14:41:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,065 | ["airflow/config_templates/airflow_local_settings.py", "airflow/utils/log/non_caching_file_handler.py", "newsfragments/27065.misc.rst"] | Log files are still being cached causing ever-growing memory usage when scheduler is running | ### Apache Airflow version
2.4.1
### What happened
My Airflow scheduler memory usage started to grow after I turned on the `dag_processor_manager` log by doing
```bash
export CONFIG_PROCESSOR_MANAGER_LOGGER=True
```
see the red arrow below

By looking closely at the memory usage as mentioned in https://github.com/apache/airflow/issues/16737#issuecomment-917677177, I discovered that it was the cache memory that keeps growing:

Then I turned off the `dag_processor_manager` log, and memory usage returned to normal (no longer growing, steady at ~400 MB).
This issue is similar to #14924 and #16737. This time the culprit is the rotating logs under `~/logs/dag_processor_manager/dag_processor_manager.log*`.
### What you think should happen instead
Cache memory shouldn't keep growing like this
### How to reproduce
Turn on the `dag_processor_manager` log by doing
```bash
export CONFIG_PROCESSOR_MANAGER_LOGGER=True
```
in the `entrypoint.sh` and monitor the scheduler memory usage
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
k8s
### Anything else
I'm not sure why the previous fix https://github.com/apache/airflow/pull/18054 has stopped working :thinking:
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27065 | https://github.com/apache/airflow/pull/27223 | 131d339696e9568a2a2dc55c2a6963897cdc82b7 | 126b7b8a073f75096d24378ffd749ce166267826 | "2022-10-14T20:50:24Z" | python | "2022-10-25T08:38:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,057 | ["airflow/models/trigger.py"] | Race condition in multiple triggerer process can lead to both picking up same trigger. | ### Apache Airflow version
main (development)
### What happened
Currently the Airflow triggerer loop picks triggers to process with the following steps:

1. Query unassigned triggers.
2. Update those triggers (by id), assigning them to the current triggerer.
3. Query which triggers are assigned to the current process.

If two triggerer processes execute these queries interleaved, both get all unassigned triggers in step 1; then, if one triggerer completes steps 2 and 3 before the second performs step 2, both triggerers end up running the same triggers.
There is a sync happening after that, but unnecessary cleanup operations are performed in that case.
### What you think should happen instead
There should be locking on rows which are updated.
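A minimal sketch (not the actual Airflow code; the column and method names are taken from my reading of the model and the real fix may differ) of claiming triggers with row-level locks, so that a concurrent triggerer skips rows another process has already selected:

```python
from sqlalchemy import select

from airflow.models.trigger import Trigger


def pick_unassigned_trigger_ids(session, capacity):
    query = (
        select(Trigger.id)
        .where(Trigger.triggerer_id.is_(None))
        .limit(capacity)
        .with_for_update(skip_locked=True)  # SELECT ... FOR UPDATE SKIP LOCKED
    )
    return [trigger_id for (trigger_id,) in session.execute(query)]
```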
### How to reproduce
_No response_
### Operating System
All
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
HA setup with multiple triggerers can have this issue
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27057 | https://github.com/apache/airflow/pull/27072 | 4e55d7fa2b7d5f8d63465d2c5270edf2d85f08c6 | 9c737f6d192ef864dd4cde89a0a90c53f5336566 | "2022-10-14T11:29:13Z" | python | "2022-10-31T01:30:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,029 | ["airflow/providers/apache/druid/hooks/druid.py"] | Druid Operator is not getting host | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We use Airflow 2.3.3. I see that this test is successful, but I get this error. This is the picture:
```
File "/home/airflow/.local/lib/python3.7/site-packages/requests/sessions.py", line 792, in get_adapter
raise InvalidSchema(f"No connection adapters were found for {url!r}")
```
<img width="1756" alt="Screen Shot 2022-10-12 at 15 34 40" src="https://user-images.githubusercontent.com/47830986/195560866-0527c5f6-3795-460b-b78b-2488e2a77bfb.png">
<img width="1685" alt="Screen Shot 2022-10-12 at 15 37 27" src="https://user-images.githubusercontent.com/47830986/195560954-f5604d10-eb7d-4bab-b10b-2684d8fbe4a2.png">
I define the DAG like this:


I also tried this approach, but it failed:
```python
ingestion_2 = SimpleHttpOperator(
task_id='test_task',
method='POST',
http_conn_id=DRUID_CONN_ID,
endpoint='/druid/indexer/v1/task',
data=json.dumps(read_file),
dag=dag,
do_xcom_push=True,
headers={
'Content-Type': 'application/json'
},
response_check=lambda response: response.json()['Status'] == 200)
```
I get this log
```
[2022-10-13, 06:16:46 UTC] {http.py:143} ERROR - {"error":"Missing type id when trying to resolve subtype of [simple type, class org.apache.druid.indexing.common.task.Task]: missing type id property 'type'\n at [Source: (org.eclipse.jetty.server.HttpInputOverHTTP); line: 1, column: 1]"}
```
I don't know whether this is a bug or a networking problem, but can we check this?
P.S. We use Airflow on Kubernetes, so we cannot easily debug it.
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Kubernetes
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27029 | https://github.com/apache/airflow/pull/27174 | 7dd7400dd4588e063078986026e14ea606a55a76 | 8b5f1d91936bb87ba9fa5488715713e94297daca | "2022-10-13T09:42:34Z" | python | "2022-10-31T10:19:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,010 | ["airflow/dag_processing/manager.py", "tests/dag_processing/test_manager.py"] | DagProcessor doesnt pick new files until queued file parsing completes | ### Apache Airflow version
2.4.1
### What happened
When there is a large number of DAG files, let's say 10K, and each takes some time to parse, the DAG processor doesn't pick up any newly created files until all 10K files have finished parsing:
```python
if not self._file_path_queue:
    self.emit_metrics()
    self.prepare_file_path_queue()
```
The logic above only adds new files to the queue when the queue is empty.
### What you think should happen instead
Every loop of the DAG processor should pick up new files and add them to the file parsing queue.
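A rough sketch of that idea, written as a standalone helper because the real manager state is more involved (the argument names are mine, not the actual attributes):

```python
def refresh_file_path_queue(file_paths, file_path_queue, files_in_progress):
    """Sketch: append newly discovered DAG files on every loop, even when the queue is not empty."""
    new_files = [
        file_path
        for file_path in file_paths
        if file_path not in file_path_queue and file_path not in files_in_progress
    ]
    file_path_queue.extend(new_files)
    return file_path_queue
```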
### How to reproduce
_No response_
### Operating System
All
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/27010 | https://github.com/apache/airflow/pull/27060 | fb9e5e612e3ddfd10c7440b7ffc849f0fd2d0b09 | 65b78b7dbd1d824d2c22b65922149985418acbc8 | "2022-10-12T11:34:30Z" | python | "2022-11-14T01:43:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,987 | ["airflow/providers/google/cloud/operators/dataproc.py", "tests/providers/google/cloud/operators/test_dataproc.py"] | DataprocLink is not available for dataproc workflow operators | ### Apache Airflow version
main (development)
### What happened
For DataprocInstantiateInlineWorkflowTemplateOperator and DataprocInstantiateWorkflowTemplateOperator, the Dataproc link is available only for jobs that have succeeded. In case of failure, the DataprocLink is not available.
### What you think should happen instead
As with other Dataproc operators, the link should be available for the workflow operators as well.
### How to reproduce
_No response_
### Operating System
MacOS Monterey
### Versions of Apache Airflow Providers
apache-airflow-providers-google==6.5.0
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26987 | https://github.com/apache/airflow/pull/26986 | 7cfa1be467b995b886a97b68498137a76a31f97c | 0cb6450d6df853e1061dbcafbc437c07a8e0e555 | "2022-10-11T09:17:26Z" | python | "2022-11-16T21:30:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,960 | ["airflow/api/common/mark_tasks.py", "airflow/models/taskinstance.py", "airflow/utils/log/file_task_handler.py", "airflow/utils/log/log_reader.py", "airflow/utils/state.py", "airflow/www/utils.py", "airflow/www/views.py", "tests/www/views/test_views_grid.py"] | can't see failed sensor task log on webpage | ### Apache Airflow version
2.4.1
### What happened

When the sensor is running, I can see the log above, but when I manually set the task state to failed, or the task fails for some other reason, I can't see the log here:

This also happens in other Airflow versions, like 2.3/2.2.
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu 20.04.4 LTS (Focal Fossa)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26960 | https://github.com/apache/airflow/pull/26993 | ad7f8e09f8e6e87df2665abdedb22b3e8a469b49 | f110cb11bf6fdf6ca9d0deecef9bd51fe370660a | "2022-10-10T06:42:09Z" | python | "2023-01-05T16:42:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,912 | ["airflow/www/static/js/api/useTaskLog.ts", "airflow/www/static/js/dag/details/taskInstance/Logs/index.tsx", "airflow/www/static/js/dag/details/taskInstance/index.tsx"] | Log-tab under grid view is automatically re-fetching completed logs every 3 sec. | ### Apache Airflow version
2.4.1
### What happened
The new inline log-tab under grid view is fantastic.
What's not so great though, is that it is automatically reloading the logs on the `/api/v1/dags/.../dagRuns/.../taskInstances/.../logs/1` api endpoint every 3 seconds. Same interval as the reload of the grid status it seems.
This:
* Makes it difficult for users to scroll in the log panel and to select text in it, because the content is replaced all the time.
* Puts unnecessary load on the client and on the link between client and webserver.
* Puts unnecessary load on the webserver and on the logging backend, in our case involving queries to an external Loki server.
This happens even if the TaskLogReader has set `metadata["end_of_log"] = True`
### What you think should happen instead
Logs should not automatically be reloaded if `end_of_log=True`
For logs which have not reached the end, a slower reload interval or a more incremental query/streaming approach is preferred.
### How to reproduce
_No response_
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.4.0
apache-airflow-providers-common-sql==1.1.0
apache-airflow-providers-docker==3.2.0
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-postgres==5.2.0
apache-airflow-providers-sqlite==3.1.0
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26912 | https://github.com/apache/airflow/pull/27233 | 8d449ae04aa67ecbabf84f35a34fc2e53665ee17 | e73e90e388f7916ae5eea48ba39687d99f7a94b1 | "2022-10-06T12:38:34Z" | python | "2022-10-25T14:26:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,910 | ["dev/provider_packages/MANIFEST_TEMPLATE.in.jinja2", "dev/provider_packages/SETUP_TEMPLATE.py.jinja2"] | python_kubernetes_script.jinja2 file missing from apache-airflow-providers-cncf-kubernetes==4.4.0 release | ### Apache Airflow Provider(s)
cncf-kubernetes
### Versions of Apache Airflow Providers
```
$ pip freeze | grep apache-airflow-providers
apache-airflow-providers-cncf-kubernetes==4.4.0
```
### Apache Airflow version
2.4.1
### Operating System
macos-12.6
### Deployment
Other Docker-based deployment
### Deployment details
Using the astro cli.
### What happened
Trying to test the `@task.kubernetes` decorator with Airflow 2.4.1 and the `apache-airflow-providers-cncf-kubernetes==4.4.0` package, I get the following error:
```
[2022-10-06, 10:49:01 UTC] {taskinstance.py:1851} ERROR - Task failed with exception
Traceback (most recent call last):
File "/Users/jeff/tmp/penv/lib/python3.9/site-packages/airflow/providers/cncf/kubernetes/decorators/kubernetes.py", line 95, in execute
write_python_script(jinja_context=jinja_context, filename=script_filename)
File "/Users/jeff/tmp/penv/lib/python3.9/site-packages/airflow/providers/cncf/kubernetes/python_kubernetes_script.py", line 79, in write_python_script
template = template_env.get_template('python_kubernetes_script.jinja2')
File "/Users/jeff/tmp/penv/lib/python3.9/site-packages/jinja2/environment.py", line 1010, in get_template
return self._load_template(name, globals)
File "/Users/jeff/tmp/penv/lib/python3.9/site-packages/jinja2/environment.py", line 969, in _load_template
template = self.loader.load(self, name, self.make_globals(globals))
File "/Users/jeff/tmp/penv/lib/python3.9/site-packages/jinja2/loaders.py", line 126, in load
source, filename, uptodate = self.get_source(environment, name)
File "/Users/jeff/tmp/penv/lib/python3.9/site-packages/jinja2/loaders.py", line 218, in get_source
raise TemplateNotFound(template)
jinja2.exceptions.TemplateNotFound: python_kubernetes_script.jinja2
```
Looking the [source file](https://files.pythonhosted.org/packages/5d/54/0ea031a9771ded6036d05ad951359f7361995e1271a302ba2af99bdce1af/apache-airflow-providers-cncf-kubernetes-4.4.0.tar.gz) for the `apache-airflow-providers-cncf-kubernetes==4.4.0` package, I can see that `python_kubernetes_script.py` is there but not `python_kubernetes_script.jinja2`
```
$ tar -ztvf apache-airflow-providers-cncf-kubernetes-4.4.0.tar.gz 'apache-airflow-providers-cncf-kubernetes-4.4.0/airflow/providers/cncf/kubernetes/py*'
-rw-r--r-- 0 root root 2949 Sep 22 15:25 apache-airflow-providers-cncf-kubernetes-4.4.0/airflow/providers/cncf/kubernetes/python_kubernetes_script.py
```
### What you think should happen instead
The `python_kubernetes_script.jinja2` file that is available here https://github.com/apache/airflow/blob/main/airflow/providers/cncf/kubernetes/python_kubernetes_script.jinja2 should be included in the `apache-airflow-providers-cncf-kubernetes==4.4.0` pypi package.
### How to reproduce
With a default installation of `apache-airflow==2.4.1` and `apache-airflow-providers-cncf-kubernetes==4.4.0`, running the following DAG will reproduce the issue.
```
import pendulum
from airflow.decorators import dag, task
@dag(
schedule_interval=None,
start_date=pendulum.datetime(2022, 7, 20, tz="UTC"),
catchup=False,
tags=['xray_classifier'],
)
def k8s_taskflow():
@task.kubernetes(
image="python:3.8-slim-buster",
name="k8s_test",
namespace="default",
in_cluster=False,
config_file="/path/to/config"
)
def execute_in_k8s_pod():
import time
print("Hello from k8s pod")
time.sleep(2)
execute_in_k8s_pod_instance = execute_in_k8s_pod()
k8s_taskflow_dag = k8s_taskflow()
```
### Anything else
If I manually add the `python_kubernetes_script.jinja2` into my `site-packages/airflow/providers/cncf/kubernetes/` folder, then it works as expected.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26910 | https://github.com/apache/airflow/pull/27451 | 4cdea86d4cc92a51905aa44fb713f530e6bdadcf | 8975d7c8ff00841f4f2f21b979cb1890e6d08981 | "2022-10-06T11:33:31Z" | python | "2022-11-01T20:31:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,905 | ["airflow/www/static/js/api/index.ts", "airflow/www/static/js/api/useTaskXcom.ts", "airflow/www/static/js/dag/details/index.tsx", "airflow/www/static/js/dag/details/taskInstance/Nav.tsx", "airflow/www/static/js/dag/details/taskInstance/Xcom/XcomEntry.tsx", "airflow/www/static/js/dag/details/taskInstance/Xcom/index.tsx", "airflow/www/templates/airflow/dag.html"] | Display selected task outputs (xcom) in task UI | ### Description
I often find myself checking the stats of a finished task, e.g. "inserted 3 new rows" or "discovered 4 new files", in the task logs. It would be very handy to see these directly in the UI, as part of the task details or elsewhere.
One idea would be to choose in the Task definition, which XCOM keys should be output in the task details, like so:

### Use case/motivation
As a developer, I want to better monitor the results of my tasks in terms of key metrics, so I can see the data they processed. While in production this can be achieved by forwarding/outputting metrics to other systems, like notification hooks, or ingesting them into e.g. Grafana, I would like to be able to do this in Airflow to a certain extent. This would certainly cut down on my clicks while running beta DAGs.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26905 | https://github.com/apache/airflow/pull/35719 | d0f4512ecb9c0683a60be7b0de8945948444df8e | 77c01031d6c569d26f6fabd331597b7e87274baa | "2022-10-06T07:05:39Z" | python | "2023-12-04T15:59:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,892 | ["airflow/www/views.py"] | Dataset Next Trigger Modal Not Populating Latest Update | ### Apache Airflow version
2.4.1
### What happened
When using dataset scheduling, it isn't obvious which datasets a downstream dataset consumer is awaiting in order for the DAG to be scheduled.
I would assume that this is supposed to be solved by the `Latest Update` column in the modal that opens when selecting `x of y datasets updated`, but it appears that the data isn't being populated.
<img width="601" alt="image" src="https://user-images.githubusercontent.com/5778047/194116186-d582cede-c778-47f7-8341-fc13a69a2358.png">
Although one of the datasets has been produced, there is no data in the `Latest Update` column of the modal.
In the above example, both datasets have been produced > 1 time.
<img width="581" alt="image" src="https://user-images.githubusercontent.com/5778047/194116368-ceff241f-a623-4893-beb7-637b821c4b53.png">
<img width="581" alt="image" src="https://user-images.githubusercontent.com/5778047/194116410-19045f5a-8400-47b0-afcb-9fbbffca26ee.png">
### What you think should happen instead
The `Latest Update` column should be populated with the latest update timestamp for each dataset required to schedule a downstream, dataset consuming DAG.
Ideally there would be some form of highlighting on the "missing" datasets for quick visual feedback when DAGs have a large number of datasets required for scheduling.
### How to reproduce
1. Create a DAG (or 2 individual DAGs) that produces 2 datasets
2. Produce both datasets
3. Then produce _only one_ dataset
4. Check the modal by clicking from the home screen on the `x of y datasets updated` button.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26892 | https://github.com/apache/airflow/pull/29441 | 0604033829787ebed59b9982bf08c1a68d93b120 | 6f9efbd0537944102cd4a1cfef06e11fe0a3d03d | "2022-10-05T16:51:49Z" | python | "2023-02-20T08:42:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,875 | ["airflow/providers/google/cloud/transfers/sql_to_gcs.py"] | SQLToGCSOperators Add Support for Dumping Json value in CSV | ### Description
If output format is `CSV`, then any "dict" type object returned from a database, for example a Postgres JSON column, is not dumped to a string and is kept as a dict object.
### Use case/motivation
Currently, if `export_format` is `CSV` and a data column in Postgres is defined with the `json` or `jsonb` data type, the `stringify_dict` param of the abstract method `convert_type` is hardcoded to `False`, which means the `stringify_dict` param in a subclass such as `PostgresToGCSOperator` cannot be customised.
Function `convert_types` in base class `BaseSQLToGCSOperator`:
```python
def convert_types(self, schema, col_type_dict, row, stringify_dict=False) -> list:
    """Convert values from DBAPI to output-friendly formats."""
    return [
        self.convert_type(value, col_type_dict.get(name), stringify_dict=stringify_dict)
        for name, value in zip(schema, row)
    ]
```
Function `convert_type` in subclass `PostgresToGCSOperator`:
```python
def convert_type(self, value, schema_type, stringify_dict=True):
    """
    Takes a value from Postgres, and converts it to a value that's safe for
    JSON/Google Cloud Storage/BigQuery.
    Timezone aware Datetime are converted to UTC seconds.
    Unaware Datetime, Date and Time are converted to ISO formatted strings.
    Decimals are converted to floats.
    :param value: Postgres column value.
    :param schema_type: BigQuery data type.
    :param stringify_dict: Specify whether to convert dict to string.
    """
    if isinstance(value, datetime.datetime):
        iso_format_value = value.isoformat()
        if value.tzinfo is None:
            return iso_format_value
        return pendulum.parse(iso_format_value).float_timestamp
    if isinstance(value, datetime.date):
        return value.isoformat()
    if isinstance(value, datetime.time):
        formatted_time = time.strptime(str(value), "%H:%M:%S")
        time_delta = datetime.timedelta(
            hours=formatted_time.tm_hour, minutes=formatted_time.tm_min, seconds=formatted_time.tm_sec
        )
        return str(time_delta)
    if stringify_dict and isinstance(value, dict):
        return json.dumps(value)
    if isinstance(value, Decimal):
        return float(value)
    return value
```
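A rough sketch of the behaviour being requested (illustrative only; not the actual operator code, and the helper name below is made up):

```python
def should_stringify_dict(export_format: str) -> bool:
    # Dicts must be dumped to JSON strings for CSV output; for newline-delimited
    # JSON export they can stay nested.
    return export_format.lower() == "csv"
```

The base operator could then call `convert_types(..., stringify_dict=should_stringify_dict(self.export_format))` instead of hardcoding `stringify_dict=False`.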
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26875 | https://github.com/apache/airflow/pull/26876 | bab6dbec3883084e5872123b515c2a8491c32380 | a67bcf3ecaabdff80c551cff1f987523211e7af4 | "2022-10-04T23:21:37Z" | python | "2022-10-06T08:42:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,802 | ["airflow/utils/log/secrets_masker.py", "tests/utils/log/test_secrets_masker.py"] | pdb no longer works with airflow test command since 2.3.3 | Converted back to issue as I've reproduced it and traced the issue back to https://github.com/apache/airflow/pull/24362
### Discussed in https://github.com/apache/airflow/discussions/26352
<div type='discussions-op-text'>
<sup>Originally posted by **GuruComposer** September 12, 2022</sup>
### Apache Airflow version
2.3.4
### What happened
I used to be able to use ipdb to debug DAGs by running `airflow tasks test <dag_id> <task_id>` and hitting an ipdb breakpoint (`ipdb.set_trace()`).
This no longer works. I get a strange type error:
```
  File "/usr/local/lib/python3.9/bdb.py", line 88, in trace_dispatch
    return self.dispatch_line(frame)
  File "/usr/local/lib/python3.9/bdb.py", line 112, in dispatch_line
    self.user_line(frame)
  File "/usr/local/lib/python3.9/pdb.py", line 262, in user_line
    self.interaction(frame, None)
  File "/home/astro/.local/lib/python3.9/site-packages/IPython/core/debugger.py", line 336, in interaction
    OldPdb.interaction(self, frame, traceback)
  File "/usr/local/lib/python3.9/pdb.py", line 357, in interaction
    self._cmdloop()
  File "/usr/local/lib/python3.9/pdb.py", line 322, in _cmdloop
    self.cmdloop()
  File "/usr/local/lib/python3.9/cmd.py", line 126, in cmdloop
    line = input(self.prompt)
TypeError: an integer is required (got type NoneType)
```
### What you think should happen instead
I should get the ipdb shell.
### How to reproduce
1. Add an ipdb breakpoint anywhere in an Airflow task: `import ipdb; ipdb.set_trace()`
2. Run that task: `airflow tasks test <dag_id> <task_id>`
### Operating System
Debian GNU/Linux
### Versions of Apache Airflow Providers
2.3.4 | https://github.com/apache/airflow/issues/26802 | https://github.com/apache/airflow/pull/26806 | 677df102542ab85aab4efbbceb6318a3c7965e2b | 029ebacd9cbbb5e307a03530bdaf111c2c3d4f51 | "2022-09-30T13:51:53Z" | python | "2022-09-30T17:46:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,774 | ["airflow/providers/trino/provider.yaml", "generated/provider_dependencies.json"] | Trino and Presto hooks do not properly execute statements other than SELECT | ### Apache Airflow Provider(s)
presto, trino
### Versions of Apache Airflow Providers
apache-airflow-providers-trino==4.0.1
apache-airflow-providers-presto==4.0.1
### Apache Airflow version
2.4.0
### Operating System
macOS 12.6
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
When using the TrinoHook (PrestoHook also applies), only the `get_records()` and `get_first()` methods work as expected; `run()` and `insert_rows()` do not.
The SQL statements sent by the problematic methods reach the database (visible in logs and UI), but they don't get executed.
The issue is caused by the hook not making the required subsequent requests to the Trino HTTP endpoints after the first request. More info [here](https://trino.io/docs/current/develop/client-protocol.html#overview-of-query-processing):
> If the JSON document returned by the POST to /v1/statement does not contain a nextUri link, the query has completed, either successfully or unsuccessfully, and no additional requests need to be made. If the nextUri link is present in the document, there are more query results to be fetched. The client should loop executing a GET request to the nextUri returned in the QueryResults response object until nextUri is absent from the response.
SQL statements made by methods like `get_records()` do get executed because internally they call `fetchone()` or `fetchmany()` on the cursor, which do make the subsequent requests.
### What you think should happen instead
The Hook is able to execute SQL statements other than SELECT.
### How to reproduce
Connect to a Trino or Presto instance and execute any SQL statement (INSERT or CREATE TABLE) using `TrinoHook.run()`; the statements will reach the API but won't get executed.
Then, provide a dummy handler function like this:
`TrinoHook.run(..., handler=lambda cur: cur.description)`
The `description` property internally iterates over the cursor requests, causing the statement to actually be executed.
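A minimal sketch of the reproduction, assuming a reachable `trino_default` connection and a writable `memory.default` schema (both are assumptions for illustration):

```python
from airflow.providers.trino.hooks.trino import TrinoHook

hook = TrinoHook(trino_conn_id="trino_default")

# The statement reaches the server but never completes, because no follow-up
# requests to nextUri are made.
hook.run("CREATE TABLE memory.default.example (id INT)")

# Forcing the cursor to be consumed (via the handler) makes the statement execute.
hook.run("CREATE TABLE memory.default.example (id INT)", handler=lambda cur: cur.description)
```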
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26774 | https://github.com/apache/airflow/pull/27168 | e361be74cd800efe1df9fa5b00a0ad0df88fcbfb | 09c045f081feeeea09e4517d05904b38660f525c | "2022-09-29T11:32:29Z" | python | "2022-10-26T03:13:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,767 | ["airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"] | MaxID logic for GCSToBigQueryOperator Causes XCom Serialization Error | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google 8.4.0rc1
### Apache Airflow version
2.3.4
### Operating System
OSX
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
The Max ID parameter, when used, causes an XCom serialization failure when trying to retrieve the value back out of XCom
### What you think should happen instead
The max ID value should be returned from the XCom call.
### How to reproduce
Set `max_id_key=column` on the operator, then check the operator's XCom after it runs.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26767 | https://github.com/apache/airflow/pull/26768 | 9a6fc73aba75a03b0dd6c700f0f8932f6a474ff7 | b7203cd36eef20de583df3e708f49073d689ac84 | "2022-09-29T03:03:25Z" | python | "2022-10-01T13:39:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,571 | ["airflow/providers/amazon/aws/secrets/secrets_manager.py", "docs/apache-airflow-providers-amazon/img/aws-secrets-manager-json.png", "docs/apache-airflow-providers-amazon/img/aws-secrets-manager-uri.png", "docs/apache-airflow-providers-amazon/img/aws-secrets-manager.png", "docs/apache-airflow-providers-amazon/secrets-backends/aws-secrets-manager.rst", "tests/providers/amazon/aws/secrets/test_secrets_manager.py"] | Migrate Amazon Provider Package's `SecretsManagerBackend`'s `full_url_mode=False` implementation. | # Objective
I am following up on all the changes I've made in PR #25432 and which were originally discussed in issue #25104.
The objective of the deprecations introduced in #25432 is to flag and remove "odd" behaviors in the `SecretsManagerBackend`.
The objective of _this issue_ being opened is to discuss them, and hopefully reach a consensus on how to move forward implementing the changes.
I realize that a lot of the changes I made and their philosophy were under-discussed, so I will place the discussion here.
## What does it mean for a behavior to be "odd"?
You can think of the behaviors of `SecretsManagerBackend`, and which secret encodings it supports, as a Venn diagram.
Ideally, `SecretsManagerBackend` supports _at least_ everything the base API supports. This is a pretty straightforward "principle of least astonishment" requirement.
For example, it would be "astonishing" if copy+pasting a secret that works with the base API did not work in the `SecretsManagerBackend`.
To be clear, it would also be "astonishing" if the reverse were not true-- i.e. copy+pasting a valid secret from `SecretsManagerBackend` doesn't work with, say, environment variables. That said, adding new functionality is less astonishing than when a subclass doesn't inherit 100% of the supported behaviors of what it is subclassing. So although adding support for new secret encodings is arguably not desirable (all else equal), I think we can all agree it's not as bad as the reverse.
## Examples
I will cover two examples where we can see the "Venn diagram" nature of the secrets backend, and how some behaviors that are supported in one implementation are not supported in another:
### Example 1
Imagine the following environment variable secret:
```shell
export AIRFLOW_CONN_POSTGRES_DEFAULT='{
"conn_type": "postgres",
"login": "usr",
"password": "not%url@encoded",
"host": "myhost"
}'
```
Prior to #25432, this was _**not**_ a secret that worked with the `SecretsManagerBackend`, even though it did work with base Airflow's `EnvironmentVariablesBackend`(as of 2.3.0) due to the values not being URL-encoded.
Additionally, the `EnvironmentVariablesBackend` is smart enough to choose whether a secret should be treated as a JSON or a URI _without having to be explicitly told_. In a sense, this is also an incompatibility-- why should the `EnvironmentVariablesBackend` be "smarter" than the `SecretsManagerBackend` when it comes to discerning JSONs from URIs, and supporting both at the same time rather than needing secrets to be always one type of serialization?
### Example 2
Imagine the following environment variable secret:
```shell
export AIRFLOW_CONN_POSTGRES_DEFAULT="{
'conn_type': 'postgres',
'user': 'usr',
'pass': 'is%20url%20encoded',
'host': 'myhost'
}"
```
This is _not_ a valid secret in Airflow's base `EnvironmentVariablesBackend` implementation, although it _is_ a valid secret in `SecretsManagerBackend`.
There are two things that make it invalid in the `EnvironmentVariablesBackend` but valid in `SecretsManagerBackend`:
- `ast.literal_eval` in `SecretsManagerBackend` means that a Python dict repr is allowed to be read in.
- `user` and `pass` are invalid field names in the base API; these should be `login` and `password`, respectively. But the `_standardize_secret_keys()` method in the `SecretsManagerBackend` implementation makes it valid.
Additionally, note that this secret also parses differently in the `SecretsManagerBackend` than the `EnvironmentVariablesBackend`: the password `"is%20url%20encoded"` renders as `"is url encoded"` in the `SecretsManagerBackend`, but is left untouched by the base API when using a JSON.
## List of odd behaviors
Prior to #25432, the following behaviors were a part of the `SecretsManagerBackend` specification that I would consider "odd" because they are not part of the base API implementation:
1. `full_url_mode=False` still required URL-encoded parameters for JSON values.
2. `ast.literal_eval` was used instead of `json.loads`, which means that the `SecretsManagerBackend` on `full_url_mode=False` was supporting Python dict reprs and other non-JSONs.
3. The Airflow config required setting `full_url_mode=False` to determine what is a JSON or URI.
4. `get_conn_value()` always must return a URI.
5. The `SecretsManagerBackend` allowed for atypical / flexible field names (such as `user` instead of `login`) via the `_standardize_secret_keys()` method.
We also introduced a new odd behavior in order to assist with the migration effort, which is:
6. New kwarg called `are_secret_values_urlencoded` to support secrets whose encodings are "non-idempotent".
In the below sections, I discuss each behavior in more detail, and I've added my own opinions about how odd these behaiors are and the estimated adverse impact of removing the behaviors.
### Behavior 1: URL-encoding JSON values
|Current Status|Oddness|Estimated Adverse Impact of Removal|
|---|---|---|
|Deprecated|High|High|
This was the original behavior that frustrated me and motivated me to open issues + submit PRs.
With the "idempotency" check, I've done my best to smooth out the transition away from URL-encoded JSON values.
The requirement is now _mostly_ removed, to the extent that the removal of this behavior can be backwards compatible and as smooth as possible:
- Users whose secrets do not contain special characters will not have even noticed a change took place.
- Users who _do_ have secrets with special characters hopefully are checking their logs and will have seen a deprecation warning telling them to remove the URL-encoding.
- In a _select few rare cases_ where a user has a secret with a "non-idempotent" encoding, the user will have to reconfigure their `backend_kwargs` to have `are_secret_values_urlencoded` set.
I will admit that I was surprised at how smooth we could make the developer experience around migrating this behavior for the majority of use cases.
When you consider...
- How smooth migrating is (just remove the URL-encoding! In most cases you don't need to do anything else!), and
- How disruptive full removal of URL-encoding is (to people who have not migrated yet),
.. it makes me almost want to hold off on fully removing this behavior for a little while longer, just to make sure developers read their logs and see the deprecation warning.
### Behavior 2: `ast.literal_eval` for deserializing JSON secrets
|Current Status|Oddness|Estimated Adverse Impact of Removal|
|---|---|---|
|Deprecated|High|Low|
It is hard to feel bad for anyone who is adversely impacted by this removal:
- This behavior should never have been introduced
- This behavior was never a documented behavior
- A reasonable and educated user will have known better than to rely on non-JSONs.
Providing a `DeprecationWarning` for this behavior was already going above and beyond, and we can definitely remove this soon.
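For illustration, this is the kind of secret value that only worked because of `ast.literal_eval` (a Python dict repr with single quotes), and why plain JSON parsing rejects it:

```python
import ast
import json

secret = "{'conn_type': 'postgres', 'login': 'usr', 'host': 'myhost'}"

ast.literal_eval(secret)  # returns a dict
json.loads(secret)        # raises json.decoder.JSONDecodeError
```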
### Behavior 3: `full_url_mode=False` is required for JSON secrets
|Current Status|Oddness|Estimated Adverse Impact of Removal|
|---|---|---|
|Active|Medium|Low|
This behavior is odd because the base API does not require such a thing-- whether it is a JSON or a URI can be inferred by checking whether the first character of the string is `{`.
Because it is possible to modify this behavior without introducing breaking changes, moving from _lack_ of optionality for the `full_url_mode` kwarg can be considered a feature addition.
Ultimately, we would want to switch from `full_url_mode: bool = True` to `full_url_mode: bool | None = None`.
In the proposed implementation, when `full_url_mode is None`, we just use whether the value starts with `{` to check if it is a JSON. _Only when_ `full_url_mode` is a `bool` would we explicitly raise errors e.g. if a JSON was given when `full_url_mode=True`, or a URI was given when `full_url_mode=False`.
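A minimal sketch (not the provider's actual code) of that proposed behaviour:

```python
from __future__ import annotations

import json


def parse_conn_value(secret_value: str, full_url_mode: bool | None = None):
    is_json = secret_value.lstrip().startswith("{")
    if full_url_mode is True and is_json:
        raise ValueError("full_url_mode=True expects a URI, but a JSON secret was given")
    if full_url_mode is False and not is_json:
        raise ValueError("full_url_mode=False expects a JSON secret, but a URI was given")
    return json.loads(secret_value) if is_json else secret_value
```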
### Behavior 4: `get_conn_value()` must return URI
|Current Status|Oddness|Estimated Adverse Impact of Removal|
|---|---|---|
|Deprecated + Active (until at least October 11th)|Low|Medium|
The idea that the callback invoked by `get_connection()` (now called `get_conn_value()`, but previously called `get_conn_uri()`) can return a JSON is a new Airflow 2.3.0 behavior.
This behavior _**cannot**_ change until at least October 11th, because it is required for `2.2.0` backwards compatibility. Via Airflow's `README.md`:
> [...] by default we upgrade the minimum version of Airflow supported by providers to 2.3.0 in the first Provider's release after 11th of October 2022 (11th of October 2021 is the date when the first PATCHLEVEL of 2.2 (2.2.0) has been released.
Changing this behavior _after_ October 11th is just a matter of whether maintainers are OK with introduce a breaking change to the `2.2.x` folks who are relying on JSON secrets.
Note that right now, `get_conn_value()` is avoided _entirely_ when `full_url_mode=False` and `get_connection()` is called. So although there is a deprecation warning, it is almost never hit.
```python
if self.full_url_mode:
    return self._get_secret(self.connections_prefix, conn_id)
else:
    warnings.warn(
        f'In future versions, `{type(self).__name__}.get_conn_value` will return a JSON string when'
        ' full_url_mode is False, not a URI.',
        DeprecationWarning,
    )
```
### Behavior 5: Flexible field names via `_standardize_secret_keys()`
|Current Status|Oddness|Estimated Adverse Impact of Removal|
|---|---|---|
|Active|Medium|High|
This is one of those things that is very hard to remove. Removing it can be quite disruptive!
It is also a low priority to remove because unlike some other behaviors, it does not detract from `SecretsManagerBackend` being a "strict superset" with the base API.
Maybe it will just be the case that `SecretsManagerBackend` has this incompatibility with the base API going forward indefinitely?
Even still, we should consider the two following proposals:
1. The default field name of `user` should probably be switched to `login`, both in the `dict[str, list[str]]` used to implement the find+replace, and also in the documentation. I do not foresee any issues with doing this.
2. Remove documentation for this feature if we think it is "odd" enough to warrant discouraging users from seeking it out.
I think # 1 should be uncontroversial, but # 2 may be more controversial. I do not want to detract from my other points by staking out too firm an opinion on this one, so the best solution may simply be to not touch this for now. In fact, not touching this is exactly what I did with the original PR.
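For readers unfamiliar with the mechanism, the find+replace described above boils down to a mapping from canonical `Connection` fields to accepted aliases; the sketch below is only an illustration of that idea (with the proposed `login` default), not the provider's actual dictionary or method signature:
```python
# Illustrative only: canonical Connection fields mapped to accepted secret key aliases.
possible_words_for_conn_fields = {
    "login": ["login", "user", "username", "user_name"],
    "password": ["password", "pass", "key"],
    "host": ["host", "remote_host", "server"],
    "port": ["port"],
    "schema": ["schema", "database"],
}


def standardize_secret_keys(secret: dict) -> dict:
    """Rename whatever aliases the secret uses to the canonical field names."""
    standardized = {}
    for canonical, aliases in possible_words_for_conn_fields.items():
        for alias in aliases:
            if alias in secret:
                standardized[canonical] = secret[alias]
                break
    return standardized
```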
### Behavior 6: `are_secret_values_urlencoded` kwarg
|Current Status|Oddness|Estimated Adverse Impact of Removal|
|---|---|---|
|Pending Deprecation|Medium|Medium|
In the original discussion #25104, @potiuk told me to add something like this. I tried my best to avoid users needing to do this, hence the "idempotency" check. So only a few users actually need to specify this to assist in the migration of their secrets.
This was introduced as a "pending" deprecation because frankly, it is an odd behavior to have ever been URL-encoding these JSON values, and it only exists as a necessity to aid in migrating secrets. In our ideal end state, this doesn't exist.
Eventually when it comes time, removing this will not be _all_ that disruptive:
- This only impacts users who have `full_url_mode=False`
- This only impacts users with secrets that have non-idempotent encodings.
- `are_secret_values_urlencoded` should be set to `False`. Users should never be manually setting it to `True`!
So we're talking about a small percent of a small minority of users who will ever see or need to set this `are_secret_values_urlencoded` kwarg. And even then, they should have set `are_secret_values_urlencoded` to `False` to assist in migrating.
# Proposal for Next Steps
All three steps require breaking changes.
## Proposed Step 1
- Remove: **Behavior 2: `ast.literal_eval` for deserializing JSON secrets**
- Remove: **Behavior 3: `full_url_mode=False` is required for JSON secrets**
- Remove: **Behavior 4: `get_conn_value()` must return URI**
- Note: Must wait until at least October 11th!
Right now the code is frankly a mess. I take some blame for that, as I did introduce the mess. But the mess is all in service of backwards compatibility.
Removing Behavior 4 _**vastly**_ simplifies the code, and means we don't need to continue overriding the `get_connection()` method.
Removing Behavior 2 also simplifies the code, and is a fairly straightforward change.
Removing Behavior 3 is fully backwards compatible (so no deprecation warnings required) and provides a much nicer user experience overall.
The main thing blocking "Proposed Step 1" is the requirement that `2.2.x` be supported until at least October 11th.
### Alternative to Proposed Step 1
It _is_ possible to remove Behavior 2 and Behavior 3 without removing Behavior 4, and do so in a way that keeps `2.2.x` backwards compatibility.
It will still however leave a mess of the code.
I am not sure how eager the Amazon provider package maintainers are to keep backwards compatibility here. Given the one-year window, plus the fact that this can only impact people who use the `SecretsManagerBackend` _and_ have `full_url_mode=False` turned on, it does not seem like a huge deal to scrap support for `2.2.x` here when the time comes. But it is not appropriate for me to decide this, so I must be clear: we _can_ start trimming away some of the odd behaviors _without_ breaking Airflow `2.2.x` backwards compatibility, and the main benefit of breaking backwards compatibility is that the source code becomes much simpler.
## Proposed Step 2
- Remove: **Behavior 1: URL-encoding JSON values**
- Switch status from Pending Deprecation to Deprecation: **Behavior 6: `are_secret_values_urlencoded` kwarg**
Personally, I don't think we should rush on this. The reason I think we should take our time here is because the current way this works is surprisingly non-disruptive (i.e. no config changes required to migrate for most users), but fully removing the behavior may be pretty disruptive, especially to people who don't read their logs carefully.
### Alternative to Proposed Step 2
The alternative to this step is to combine this step with step 1, instead of holding off for a future date.
The main arguments in favor of combining with step 1 are:
- Reducing the number of releases that introduce breaking changes, by combining all of them into one step. It is unclear how many users even use `full_url_mode`, and arguably we are being too delicate with what was a semi-experimental and odd feature to begin with; it has only become less experimental by the stroke of luck that Airflow 2.3.0 supports JSON-encoded secrets in the base API.
- A sort of "rip off the Band-Aid" ethos, i.e. get it done and over with. I don't think this is very nice to users, but I see the appeal of not keeping odd code around for a while.
## Proposed Step 3
- Remove: **Behavior 6: `are_secret_values_urlencoded` kwarg**
Once URL-encoding is no longer happening for JSON secrets, and all non-idempotent secrets have been cast or explicitly handled, and we've deprecated everything appropriately, we can finally remove `are_secret_values_urlencoded`.
# Conclusion
The original deprecations introduced were under-discussed, but hopefully now you both know where I was coming from, and also agree with the changes I made.
If you _disagree_ with the deprecations that I introduced, I would also like to hear about that and why, and we can see about rolling any of them back.
Please let me know what you think about the proposed steps for changes to the code base.
Please also let me know what you think an appropriate schedule is for introducing the changes, and whether you think I should consider one of the alternatives (both considered and otherwise) to the steps I outlined in the penultimate section.
# Other stuff
### Use case/motivation
(See above)
### Related issues
- #25432
- #25104
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26571 | https://github.com/apache/airflow/pull/27920 | c8e348dcb0bae27e98d68545b59388c9f91fc382 | 8f0265d0d9079a8abbd7b895ada418908d8b9909 | "2022-09-21T18:31:22Z" | python | "2022-12-05T19:21:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,566 | ["docs/apache-airflow/concepts/tasks.rst"] | Have SLA docs reflect reality | ### What do you see as an issue?
The [SLA documentation](https://airflow.apache.org/docs/apache-airflow/stable/concepts/tasks.html#slas) currently states the following:
> An SLA, or a Service Level Agreement, is an expectation for the maximum time a Task should take. If a task takes longer than this to run...
However, this is not how SLAs currently work in Airflow: the SLA window is measured from the start of the DAG run, not from the start of the task.
For example, with a DAG like the one below, the SLA will always trigger once the DAG run has been active for 5 minutes, even though the task itself never takes 5 minutes to run:
```python
import datetime
from airflow import DAG
from airflow.sensors.time_sensor import TimeSensor
from airflow.operators.python import PythonOperator
with DAG(dag_id="my_dag", schedule_interval="0 0 * * *") as dag:
    wait_time_mins = TimeSensor(task_id="wait_time_mins", target_time=datetime.time(minute=10))
    run_fast = PythonOperator(
        task_id="run_fast",
        python_callable=lambda *a, **kw: True,
        sla=datetime.timedelta(minutes=5),
    )
    run_fast.set_upstream(wait_time_mins)
```
### Solving the problem
Update the docs to explain how SLAs work in reality.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26566 | https://github.com/apache/airflow/pull/27111 | 671029bebc33a52d96f9513ae997e398bd0945c1 | 639210a7e0bfc3f04f28c7d7278292d2cae7234b | "2022-09-21T16:00:36Z" | python | "2022-10-27T14:34:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,565 | ["docs/apache-airflow/core-concepts/executor/local.rst"] | Documentation unclear about multiple LocalExecutors on HA Scheduler deployment | ### What do you see as an issue?
According to Airflow documentation, it's now possible to run multiple Airflow Schedulers starting with Airflow 2.x.
What's not clear from the documentation is what happens if each of the machines running the Scheduler has executor = LocalExecutor in the [core] section of airflow.cfg. In this context, if I have Airflow Scheduler running on 3 machines, does this mean that there will also be 3 LocalExecutors processing tasks in a distributed fashion?
### Solving the problem
Enhancing documentation to clarify the details about multiple LocalExecutors on HA Scheduler deployment
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26565 | https://github.com/apache/airflow/pull/32310 | 61f33304d587b3b0a48a876d3bfedab82e42bacc | e53320d62030a53c6ffe896434bcf0fc85803f31 | "2022-09-21T15:53:02Z" | python | "2023-07-05T09:22:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,544 | ["airflow/utils/db.py"] | Choose setting for sqlalchemy SQLALCHEMY_TRACK_MODIFICATIONS | ### Body
We need to determine what to do about this warning:
```
/Users/dstandish/.virtualenvs/2.4.0/lib/python3.8/site-packages/flask_sqlalchemy/__init__.py:872 FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning.
```
Should we set to true or false?
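For reference, either choice comes down to one explicit assignment in the Flask app configuration; this is standard Flask-SQLAlchemy usage, shown here only to make the two options concrete:
```python
from flask import Flask

app = Flask(__name__)
# Explicitly setting the flag silences FSADeprecationWarning; False also avoids the
# event-tracking overhead mentioned in the warning.
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
```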
@ashb @potiuk @jedcunningham @uranusjr
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/26544 | https://github.com/apache/airflow/pull/26617 | 3396d1f822caac7cbeb14e1e67679b8378a84a6c | 051ba159e54b992ca0111107df86b8abfd8b7279 | "2022-09-21T00:57:27Z" | python | "2022-09-23T07:18:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,499 | ["airflow/models/xcom_arg.py"] | Dynamic task mapping zip() iterates unexpected number of times | ### Apache Airflow version
2.4.0
### What happened
When running `zip()` with different-length lists, I get an unexpected result:
```python
from datetime import datetime
from airflow import DAG
from airflow.decorators import task
with DAG(
dag_id="demo_dynamic_task_mapping_zip",
start_date=datetime(2022, 1, 1),
schedule=None,
):
@task
def push_letters():
return ["a", "b", "c"]
@task
def push_numbers():
return [1, 2, 3, 4]
@task
def pull(value):
print(value)
pull.expand(value=push_letters().zip(push_numbers()))
```
Iterates over `[("a", 1), ("b", 2), ("c", 3), ("a", 1)]`, so it iterates for the length of the longest collection, wrapping around to the start of the shorter collection once it is exhausted.
I would expect it to behave like Python's builtin `zip` and iterate for the length of the shortest collection, so 3x in the example above, i.e. `[("a", 1), ("b", 2), ("c", 3)]`.
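For comparison, plain Python already provides both behaviors described here, truncating by default and padding only when explicitly asked; this snippet uses only the standard library and is independent of Airflow:
```python
from itertools import zip_longest

letters = ["a", "b", "c"]
numbers = [1, 2, 3, 4]

# Builtin zip stops at the shortest input: [('a', 1), ('b', 2), ('c', 3)]
print(list(zip(letters, numbers)))

# Padding past the shortest input is opt-in, via zip_longest and a fillvalue:
# [('a', 1), ('b', 2), ('c', 3), ('foo', 4)]
print(list(zip_longest(letters, numbers, fillvalue="foo")))
```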
Additionally, I went digging in the source code and found the `fillvalue` argument which works as expected:
```python
pull.expand(value=push_letters().zip(push_numbers(), fillvalue="foo"))
```
Iterates over `[("a", 1), ("b", 2), ("c", 3), ("foo", 4)]`.
However, with `fillvalue` not set, I would expect it to iterate only for the length of the shortest collection.
### What you think should happen instead
I expect `zip()` to iterate over the number of elements of the shortest collection (without `fillvalue` set).
### How to reproduce
See above.
### Operating System
MacOS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
OSS Airflow
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26499 | https://github.com/apache/airflow/pull/26636 | df3bfe3219da340c566afc9602278e2751889c70 | f219bfbe22e662a8747af19d688bbe843e1a953d | "2022-09-19T18:51:49Z" | python | "2022-09-26T09:02:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,497 | ["airflow/migrations/env.py", "airflow/migrations/versions/0118_2_4_2_add_missing_autoinc_fab.py", "airflow/migrations/versions/0119_2_5_0_add_updated_at_to_dagrun_and_ti.py", "airflow/settings.py", "airflow/utils/db.py", "docs/apache-airflow/img/airflow_erd.sha256", "docs/apache-airflow/migrations-ref.rst"] | Upgrading to airflow 2.4.0 from 2.3.4 causes NotNullViolation error | ### Apache Airflow version
2.4.0
### What happened
Stopped existing processes, upgraded from Airflow 2.3.4 to 2.4.0, and ran `airflow db upgrade` successfully. Upon restarting the services, I'm not seeing any dag runs from the past 10 days. I kick off a new job, and I don't see it show up in the grid view. Upon checking the systemd logs, I see a lot of postgres errors from the webserver. Below is a sample of such errors.
```
[SQL: INSERT INTO ab_view_menu (name) VALUES (%(name)s) RETURNING ab_view_menu.id]
[parameters: {'name': 'Datasets'}]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
[2022-09-19 14:03:16,183] {manager.py:511} ERROR - Creation of Permission View Error: (psycopg2.errors.NotNullViolation) null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, 13, null).
[SQL: INSERT INTO ab_permission_view (permission_id, view_menu_id) VALUES (%(permission_id)s, %(view_menu_id)s) RETURNING ab_permission_view.id]
[parameters: {'permission_id': 13, 'view_menu_id': None}]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
[2022-09-19 14:03:16,209] {manager.py:420} ERROR - Add View Menu Error: (psycopg2.errors.NotNullViolation) null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, Datasets).
[SQL: INSERT INTO ab_view_menu (name) VALUES (%(name)s) RETURNING ab_view_menu.id]
[parameters: {'name': 'Datasets'}]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
[2022-09-19 14:03:16,212] {manager.py:511} ERROR - Creation of Permission View Error: (psycopg2.errors.NotNullViolation) null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, 17, null).
[SQL: INSERT INTO ab_permission_view (permission_id, view_menu_id) VALUES (%(permission_id)s, %(view_menu_id)s) RETURNING ab_permission_view.id]
[parameters: {'permission_id': 17, 'view_menu_id': None}]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
[2022-09-19 14:03:16,229] {manager.py:420} ERROR - Add View Menu Error: (psycopg2.errors.NotNullViolation) null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, DAG Warnings).
[SQL: INSERT INTO ab_view_menu (name) VALUES (%(name)s) RETURNING ab_view_menu.id]
[parameters: {'name': 'DAG Warnings'}]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
[2022-09-19 14:03:16,232] {manager.py:511} ERROR - Creation of Permission View Error: (psycopg2.errors.NotNullViolation) null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, 17, null).
[SQL: INSERT INTO ab_permission_view (permission_id, view_menu_id) VALUES (%(permission_id)s, %(view_menu_id)s) RETURNING ab_permission_view.id]
[parameters: {'permission_id': 17, 'view_menu_id': None}]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
[2022-09-19 14:03:16,250] {manager.py:511} ERROR - Creation of Permission View Error: (psycopg2.errors.NotNullViolation) null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, 13, 23).
```
I tried running `airflow db check`, `init`, `check-migrations`, and `upgrade`; none of them reported errors, but the errors above remain.
Please let me know if I missed any steps during the upgrade, or if this is a known issue with a workaround.
### What you think should happen instead
All dag runs should be visible
### How to reproduce
upgrade airflow, upgrade db, restart the services
### Operating System
Ubuntu 18.04.6 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26497 | https://github.com/apache/airflow/pull/26885 | 2f326a6c03efed8788fe0263df96b68abb801088 | 7efdeed5eccbf5cb709af40c8c66757e59c957ed | "2022-09-19T18:13:02Z" | python | "2022-10-07T16:37:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,492 | ["airflow/utils/log/file_task_handler.py"] | Cannot fetch log from Celery worker | ### Discussed in https://github.com/apache/airflow/discussions/26490
<div type='discussions-op-text'>
<sup>Originally posted by **emredjan** September 19, 2022</sup>
### Apache Airflow version
2.4.0
### What happened
When running tasks on a remote Celery worker, the webserver fails to fetch logs from that machine, giving a '403 - Forbidden' error on version 2.4.0. This behavior does not happen on 2.3.3, where the remote logs are retrieved and displayed successfully.
The `webserver / secret_key` configuration is the same in all nodes (the config files are synced), and their time is synchronized using a central NTP server, making the solution in the warning message not applicable.
My limited analysis pointed to the `serve_logs.py` file, and the flask request object that's passed to it, but couldn't find the root cause.
### What you think should happen instead
It should fetch and show remote celery worker logs on the webserver UI correctly, as it did in previous versions.
### How to reproduce
Use airflow version 2.4.0
Use CeleryExecutor with RabbitMQ
Use a separate Celery worker machine
Run a dag/task on the remote worker
Try to display task log on the web UI
### Operating System
Red Hat Enterprise Linux 8.6 (Ootpa)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-common-sql==1.1.0
apache-airflow-providers-ftp==3.0.0
apache-airflow-providers-hashicorp==3.0.0
apache-airflow-providers-http==3.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-microsoft-mssql==3.0.0
apache-airflow-providers-mysql==3.0.0
apache-airflow-providers-odbc==3.0.0
apache-airflow-providers-sftp==3.0.0
apache-airflow-providers-sqlite==3.0.0
apache-airflow-providers-ssh==3.0.0
```
### Deployment
Virtualenv installation
### Deployment details
Using CeleryExecutor / rabbitmq with 2 servers
### Anything else
All remote task executions has the same problem.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</div> | https://github.com/apache/airflow/issues/26492 | https://github.com/apache/airflow/pull/26493 | b9c4e98d8f8bcc129cbb4079548bd521cd3981b9 | 52560b87c991c9739791ca8419219b0d86debacd | "2022-09-19T14:10:25Z" | python | "2022-09-19T16:37:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 27,425 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg"] | get_dags does not fetch more than 100 dags. | Hi,
The function does not return more than 100 dags even when setting the limit higher than 100, so `get_dags(limit=500)` will still return at most 100 dags.
I had to use the following hack to work around this:
```
def _get_dags(self, max_dags: int = 500):
    # Work around the 100-item cap by paging through the endpoint with `offset`
    # in steps of 100 (the server-side page size) and collecting all results.
    i = 0
    responses = []
    while i <= max_dags:
        response = self._api.get_dags(offset=i)
        responses += response['dags']
        i = i + 100
    return [dag['dag_id'] for dag in responses]
```
Versions I am using are:
```
apache-airflow==2.3.2
apache-airflow-client==2.3.0
```
and
```
apache-airflow==2.2.2
apache-airflow-client==2.1.0
```
Best,
Hamid | https://github.com/apache/airflow/issues/27425 | https://github.com/apache/airflow/pull/29773 | a0e13370053452e992d45e7956ff33290563b3a0 | 228d79c1b3e11ecfbff5a27c900f9d49a84ad365 | "2022-09-16T22:11:08Z" | python | "2023-02-26T16:19:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,427 | ["airflow/www/static/js/main.js", "airflow/www/utils.py"] | Can not get task which status is null | ### Apache Airflow version
Other Airflow 2 version
### What happened
In the List Task Instance view of the Airflow web UI, when we search for task instances whose state is null, the result is: no records found.
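A plausible explanation (an assumption on my part, not verified against the web UI's filter code) is standard SQL NULL semantics: an equality filter can never match NULL, so a filter built as `state = <value>` returns no rows for unset states and the query needs `IS NULL` instead. A tiny SQLAlchemy illustration:
```python
from sqlalchemy import column

state = column("state")

# An equality comparison never matches NULL rows; this renders as "state = :state_1".
print(state == "null")

# Filtering for unset values has to use IS NULL; this renders as "state IS NULL".
print(state.is_(None))
```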
### What you think should happen instead
It should list the task instances whose state is null.
### How to reproduce
use airflow webui
List Task Instance
add filter
state equal to null
### Operating System
oracle linux
### Versions of Apache Airflow Providers
2.2.3
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26427 | https://github.com/apache/airflow/pull/26584 | 64622929a043436b235b9fb61fb076c5d2e02124 | 8e2e80a0ce0e1819874e183fb1662e879cdd8a08 | "2022-09-16T06:41:55Z" | python | "2022-10-11T19:31:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,424 | ["airflow/www/extensions/init_views.py", "tests/api_connexion/endpoints/test_task_instance_endpoint.py"] | `POST /taskInstances/list` with wildcards returns unhelpful error | ### Apache Airflow version
2.3.4
### What happened
https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/get_task_instances_batch
fails with an error with wildcards while
https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/get_task_instances
succeeds with wildcards
Error:
```
400
"None is not of type 'object'"
```
### What you think should happen instead
_No response_
### How to reproduce
1) `astro dev init`
2) `astro dev start`
3) `test1.py` and `python test1.py`
```
import requests
host = "http://localhost:8080/api/v1"
kwargs = {
'auth': ('admin', 'admin'),
'headers': {'content-type': 'application/json'}
}
r = requests.post(f'{host}/dags/~/dagRuns/~/taskInstances/list', **kwargs, timeout=10)
print(r.url, r.text)
```
output
```
http://localhost:8080/api/v1/dags/~/dagRuns/~/taskInstances/list
{
"detail": "None is not of type 'object'",
"status": 400,
"title": "Bad Request",
"type": "http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/stable-rest-api-ref.html#section/Errors/BadRequest"
}
```
3) `test2.py` and `python test2.py`
```
import requests
host = "http://localhost:8080/api/v1"
kwargs = {
'auth': ('admin', 'admin'),
'headers': {'content-type': 'application/json'}
}
r = requests.get(f'{host}/dags/~/dagRuns/~/taskInstances', **kwargs, timeout=10) # change here
print(r.url, r.text)
```
```
<correct output>
```
### Operating System
Debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26424 | https://github.com/apache/airflow/pull/30596 | c2679c57aa0281dd455c6a01aba0e8cfbb6a0e1c | e89a7eeea6a7a5a5a30a3f3cf86dfabf7c343412 | "2022-09-15T22:52:20Z" | python | "2023-04-12T12:40:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,380 | ["airflow/datasets/__init__.py", "tests/datasets/test_dataset.py", "tests/models/test_dataset.py"] | UI doesn't handle whitespace/empty dataset URI's well | ### Apache Airflow version
main (development)
### What happened
Here are some poor choices for dataset URI's:
```python3
empty = Dataset("")
colons = Dataset("::::::")
whitespace = Dataset("\t\n")
emoji = Dataset("😊")
long = Dataset(5000 * "x")
injection = Dataset("105'; DROP TABLE 'dag")
```
And a dag file which replicates the problems mentioned below: https://gist.github.com/MatrixManAtYrService/a32bba5d382cd9a925da72571772b060 (full tracebacks included as comments)
Here's how they did:
|dataset|behavior|
|:-:|:--|
|empty| dag triggered with no trouble, not selectable in the datasets UI|
|emoji| `airflow dags reserialize`: `UnicodeEncodeError: 'ascii' codec can't encode character '\U0001f60a' in position 0: ordinal not in range(128)`|
|colons| no trouble|
|whitespace| dag triggered with no trouble, selectable in the datasets UI, but shows no history|
|long|sqlalchemy error during serialization|
|injection| no trouble|
Finally, here's a screenshot:
<img width="1431" alt="Screen Shot 2022-09-13 at 11 29 02 PM" src="https://user-images.githubusercontent.com/5834582/190069341-dc17c66a-f941-424d-a455-cd531580543a.png">
Notice that there are two empty rows in the datasets list, one for `empty`, the other for `whitespace`. Only `whitespace` is selectable, both look weird.
### What you think should happen instead
I propose that we add a uri sanity check during serialization and just reject dataset URI's that are:
- only whitespace
- empty
- long enough that they're going to cause a database problem
The `emoji` case failed in a nice way. Ideally `whitespace`, `long` and `empty` can fail in the same way. If implemented, this would prevent any of the weird cases above from making it to the UI in the first place.
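A minimal sketch of such a sanity check follows; the function name and the length limit are assumptions for illustration, not the actual serialization code:
```python
MAX_URI_LEN = 3000  # assumed limit; whatever fits the backing database column


def validate_dataset_uri(uri: str) -> str:
    """Reject URIs that are empty, all whitespace, or too long for the database."""
    if not uri or not uri.strip():
        raise ValueError("Dataset URI must contain at least one non-whitespace character")
    if len(uri) > MAX_URI_LEN:
        raise ValueError(f"Dataset URI is longer than {MAX_URI_LEN} characters")
    return uri
```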
### How to reproduce
Unpause the above dags
### Operating System
Docker/debian
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
`astro dev start`
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26380 | https://github.com/apache/airflow/pull/26389 | af39faafb7fdd53adbe37964ba88a3814f431cd8 | bd181daced707680ed22f5fd74e1e13094f6b164 | "2022-09-14T05:53:23Z" | python | "2022-09-14T16:11:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,375 | ["airflow/www/extensions/init_views.py", "airflow/www/templates/airflow/error.html", "airflow/www/views.py", "tests/api_connexion/test_error_handling.py"] | Airflow Webserver returns incorrect HTTP Error Response for custom REST API endpoints | ### Apache Airflow version
Other Airflow 2 version
### What happened
We are using Airflow 2.3.1 Version. Apart from Airflow provided REST endpoints, we are also using the airflow webserver to host our own application REST API endpoints. We are doing this by loading our own blueprints and registering Flask Blueprint routes within the airflow plugin.
Issue: Our custom REST API endpoints return an incorrect HTTP error response code of 404 where 405 is expected (e.g. when invoking the endpoint with the wrong HTTP method, say POST instead of PUT). This worked in Airflow 1.x but misbehaves in Airflow 2.x.
Here is a sample airflow plugin code . If the '/sample-app/v1' API below is invoked with POST method, I would expect a 405 response. However, it returns a 404.
I tried registering a blueprint error handler for 405 inside the plugin, but that did not work.
```
import flask

from airflow import plugins_manager

test_bp = flask.Blueprint('test_plugin', __name__)
@test_bp.route(
'/sample-app/v1/tags/<tag>', methods=['PUT'])
def initialize_deployment(tag):
"""
Initialize the deployment of the metadata tag
:rtype: flask.Response
"""
return 'Hello, World'
class TestPlugin(plugins_manager.AirflowPlugin):
name = 'test_plugin'
flask_blueprints = [test_bp]
```
### What you think should happen instead
Correct HTTP Error response code should be returned.
### How to reproduce
Issue the following curl request after loading the plugin -
curl -X POST "http://localhost:8080/sample-app/v1/tags/abcd" -d ''
The response will be 404 instead of 405.
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26375 | https://github.com/apache/airflow/pull/26880 | ea55626d79fdbd96b6d5f371883ac1df2a6313ec | 8efb678e771c8b7e351220a1eb7eb246ae8ed97f | "2022-09-13T21:56:54Z" | python | "2022-10-18T12:50:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,367 | ["airflow/providers/google/cloud/operators/bigquery.py", "docs/apache-airflow-providers-google/operators/cloud/bigquery.rst", "tests/system/providers/google/cloud/bigquery/example_bigquery_queries.py"] | Add SQLColumnCheck and SQLTableCheck Operators for BigQuery | ### Description
New operators under the Google provider for table and column data quality checking that is integrated with OpenLineage.
### Use case/motivation
Allow OpenLineage support for BigQuery when using column and table check operators.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26367 | https://github.com/apache/airflow/pull/26368 | 3cd4df16d4f383c27f7fc6bd932bca1f83ab9977 | c4256ca1a029240299b83841bdd034385665cdda | "2022-09-13T15:21:52Z" | python | "2022-09-21T08:49:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,283 | ["airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"] | GCSToBigQueryOperator max_id_key Not Written to XCOM | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==8.3.0
### Apache Airflow version
2.3.4
### Operating System
OSX
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
`max_id` is not returned through XCOM when `max_id_key` is set.
### What you think should happen instead
When `max_id_key` is set, the `max_id` value should be returned as the default XCOM value.
This is based off of the parameter description:
```
The results will be returned by the execute() command, which in turn gets stored in XCom for future operators to use.
```
### How to reproduce
Execute the `GCSToBigQueryOperator` operator with a `max_id_key` parameter set. No XCOM value is set.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26283 | https://github.com/apache/airflow/pull/26285 | b4f8a069f07b18ce98c9b1286da5a5fcde2bff9f | 07fe356de0743ca64d936738b78704f7c05774d1 | "2022-09-09T20:01:59Z" | python | "2022-09-18T20:12:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,273 | ["airflow/providers/google/cloud/transfers/sql_to_gcs.py"] | SQLToGCSOperators Add Support for Dumping JSON | ### Description
If your output format for a SQLToGCSOperator is `json`, then any "dict" type object returned from a database, for example a Postgres JSON column, is not dumped to a string and is kept as a nested JSON object.
Add option to dump `dict` objects to string in JSON exporter.
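The change being asked for amounts to something like the following inside the row-conversion step; this is a sketch of the idea, and the `stringify_dict` flag name is an assumption, not the operator's actual code:
```python
import json


def convert_value(value, stringify_dict: bool = True):
    """Optionally flatten dict values (e.g. Postgres JSON columns) to JSON strings."""
    if stringify_dict and isinstance(value, dict):
        return json.dumps(value)
    return value
```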
### Use case/motivation
Currently JSON type columns are hard to ingest into BQ since a JSON field in a source database does not enforce a schema, and we can't reliably generate a `RECORD` schema for the column.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26273 | https://github.com/apache/airflow/pull/26277 | 706a618014a6f94d5ead0476f26f79d9714bf93d | b4f8a069f07b18ce98c9b1286da5a5fcde2bff9f | "2022-09-09T15:25:54Z" | python | "2022-09-18T20:11:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,262 | ["docs/helm-chart/manage-dags-files.rst"] | helm chart doc Manage DAGs files recommended Bake DAGs in Docker image need improvement. | ### What do you see as an issue?
https://airflow.apache.org/docs/helm-chart/1.6.0/manage-dags-files.html#bake-dags-in-docker-image
> The recommended way to update your DAGs with this chart is to build a new Docker image with the latest DAG code:
In this doc, the recommended way for users to manage DAGs is to bake them into the Docker image.
But see this issue:
https://github.com/airflow-helm/charts/issues/211#issuecomment-859678503
> but having the scheduler being restarted and not scheduling any task each time you do a change that is not even scheduler related (just to deploy a new DAG!!)
> Helm Chart should be used to deploy "application" not to deploy another version of DAGs.
So I think baking DAGs into the Docker image should not be the most strongly recommended approach.
At the very least, the docs should mention this approach's weaknesses (all components get restarted just to deploy a new DAG!).
### Solving the problem
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26262 | https://github.com/apache/airflow/pull/26401 | 2382c12cc3aa5d819fd089c73e62f8849a567a0a | 11f8be879ba2dd091adc46867814bcabe5451540 | "2022-09-09T08:11:29Z" | python | "2022-09-15T21:09:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,259 | ["airflow/models/dag.py", "airflow/models/dagrun.py", "airflow/www/views.py", "tests/models/test_dag.py"] | should we limit max queued dag runs for dataset-triggered dags | if a dataset-triggered dag is running, and upstreams are updated multiple times, many dag runs will be queued up because the scheduler checks frequently for new dag runs needed.
You can easily limit max active dag runs but cannot easily limit max queued dag runs. In the dataset case this represents a meaningful difference in behavior and seems undesirable.
I think it may make sense to limit max queued dag runs (for datasets) to 1. cc @ash @jedcunningham @uranusjr @blag @norm
The graph below illustrates what happens in this scenario. You can reproduce it with the example datasets dag file: change consumes 1 to `sleep 60`, produces 1 to `sleep 1`, then trigger the producer repeatedly.

| https://github.com/apache/airflow/issues/26259 | https://github.com/apache/airflow/pull/26348 | 9444d9789bc88e1063d81d28e219446b2251c0e1 | b99d1cd5d32aea5721c512d6052b6b7b3e0dfefb | "2022-09-09T03:15:54Z" | python | "2022-09-14T12:28:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,256 | ["airflow/datasets/manager.py", "airflow/jobs/scheduler_job.py", "tests/models/test_taskinstance.py"] | "triggered runs" dataset counter doesn't update until *next* run and never goes above 1 | ### Apache Airflow version
2.4.0b1
### What happened
I have [this test dag](https://gist.github.com/MatrixManAtYrService/2cf0ebbd85faa2aac682d9c441796c58) which I created to report [this issue](https://github.com/apache/airflow/issues/25210). The idea is that if you unpause "sink" and all of the "sources" then the sources will wait until the clock is like \*:\*:00 and they'll terminate at the same time.
Since each source triggers the sink with a dataset called "counter", the "sink" dag will run just once, and it will have output like: `INFO - [(16, 1)]`, that's 16 sources and 1 sink that ran.
At this point, you can look at the dataset history for "counter" and you'll see this:
<img width="524" alt="Screen Shot 2022-09-08 at 6 07 44 PM" src="https://user-images.githubusercontent.com/5834582/189248999-d31141a4-2d0b-4ec2-9ea5-c4c3536b3a28.png">
So we've got a timestamp, but the "triggered runs" count is empty. That's weird. One run was triggered (and it finished by the time the screenshot was taken), so why doesn't it say `1`?
So I redeploy and try it again, except this time I wait several seconds between each "unpause" click, the idea being that maybe some of them fire at 07:16:00 and the others fire at 07:17:00. I end up with this:
<img width="699" alt="Screen Shot 2022-09-08 at 6 19 12 PM" src="https://user-images.githubusercontent.com/5834582/189252116-69067189-751d-40e7-89c5-8d1da1720237.png">
So fifteen of them finished at once and caused the dataset to update, and then just one straggler (number 9) is waiting for an additional minute. I wait for the straggler to complete and go back to the dataset view:
<img width="496" alt="Screen Shot 2022-09-08 at 6 20 41 PM" src="https://user-images.githubusercontent.com/5834582/189253874-87bb3eb3-2237-42a1-bc3f-9fc210419f1a.png">
Now it's the straggler that is blank, but the rest of them are populated. Continuing to manually run these, I find that whichever one I have run most recently is blank, and all of the others are 1, even if this is the second or third time I've run them
### What you think should happen instead
- The triggered runs counter should increment beyond 1
- It should increment immediately after the dag was triggered, not wait until after the *next* dag gets triggered.
### How to reproduce
See dags in in this gist: https://gist.github.com/MatrixManAtYrService/2cf0ebbd85faa2aac682d9c441796c58
1. unpause "sink"
2. unpause half of sources
3. wait one minute
4. unpause the other half of the sources
5. wait for "sink" to run a second time
6. view the dataset history for "counter"
7. ask why only half of the counts are populated
8. manually trigger some sources, wait for them to trigger sink
9. view the dataset history again
10. ask why none of them show more than 1 dagrun triggered
### Operating System
Kubernetes in Docker, deployed via helm
### Versions of Apache Airflow Providers
n/a
### Deployment
Other 3rd-party Helm chart
### Deployment details
see "deploy.sh" in the gist:
https://gist.github.com/MatrixManAtYrService/2cf0ebbd85faa2aac682d9c441796c58
It's just a fresh install into a k8s cluster
### Anything else
n/a
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26256 | https://github.com/apache/airflow/pull/26276 | eb03959e437e11891b8c3696b76f664a991a37a4 | 954349a952d929dc82087e4bb20d19736f84d381 | "2022-09-09T01:45:19Z" | python | "2022-09-09T20:15:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,215 | ["airflow/example_dags/example_params_ui_tutorial.py", "airflow/www/static/js/trigger.js"] | Trigger DAG UI Extension w/ Flexible User Form Concept | ### Description
Proposal for Contribution for an extensible Trigger UI feature in Airflow.
## Design proposal (Feedback welcome)
### Part 1) Specifying Trigger UI on DAG Level
We propose to extend the DAG class with an additional attribute so that UI(s) (one or multiple per DAG) can be specified in the DAG.
* Attribute name proposal: `trigger_ui`
* Type proposal: `Union[TriggerUIBase, List[TriggerUIBase]]` (one UI definition, or a list of them, inheriting from an abstract UI base class which implements the trigger UI)
* Default value proposal: `[TriggerNoUI(), TriggerJsonUI()]` (Means the current/today's state, user can pick to trigger with or without parameters)
With this extension the current behavior is continued and users can specify if a specific or multiple UIs are offered for the Trigger DAG option.
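To make Part 1 concrete, a DAG definition under this proposal might look roughly like the following; note that `TriggerNoUI` and `TriggerJsonUI` are the classes proposed in Part 3 below and do not exist in Airflow today, so this is purely an illustration of the proposal:
```python
import datetime

from airflow import DAG

# TriggerNoUI and TriggerJsonUI are the classes proposed in Part 3; illustrative only.
with DAG(
    dag_id="example_trigger_ui",
    start_date=datetime.datetime(2022, 1, 1),
    schedule_interval=None,
    trigger_ui=[TriggerNoUI(), TriggerJsonUI()],  # proposed new DAG attribute
):
    ...
```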
### Part 2) UI Changes for Trigger Button
The function of the trigger DAG button in DAG overview landing ("Home" / `templates/airflow/dags.html`) as well as DAG detail pages (grid, graph, ... view / `templates/airflow/dag.html`) is adjusted so that:
1) If there is a single Trigger UI specified for the DAG, the button directly opens the form on click
2) If a list of Trigger UIs is defined for the DAG, then a list of UIs is presented, similar to today's drop-down with its two options (with and without parameters).
Menu names for (2) and URLs are determined by the UI class members linked to the DAG.
### Part 3) Standard implementations for TriggerNoUI, TriggerJsonUI
The two existing behaviors, triggering without a UI/parameters and the current JSON entry form, will be migrated to the new UI structure, so that users can choose whether one, the other, or both are available for a DAG.
Name proposals:
0) TriggerUIBase: Base class for any Trigger UI, defines the base parameters and defaults which every Trigger UI is expected to provide:
* `url_template`: URL template (into which the DAG name is inserted to direct the user to)
* `name`: Name of the trigger UI to display in the drop-down
* `description`: Optional descriptive text to supply as hover-over/tool-tip
1) TriggerNoUI (inherits TriggerUIBase): Skips a user confirmation and entry form and upon call runs the DAG w/o parameters (`DagRun.conf = {}`)
2) TriggerJsonUI (inherits TriggerUIBase): Same as the current UI, i.e. the user enters a JSON into a text box and triggers the DAG. Any valid JSON accepted.
### Part 4) Standard Implementation for Simple Forms (Actually core new feature)
Implement/Contribute a user-definable key/value entry form named `TriggerFormUI` (inherits TriggerUIBase) which allows the user to easily enter parameters for triggering a DAG. Form could look like:
```
Parameter 1: <HTML input box for entering a value>
(Optional Description and hints)
Parameter 2: <HTML Select box of options>
(Optional Description and hints)
Parameter 3: <HTML Checkbox on/off>
(Optional Description and hints)
<Trigger DAG Button>
```
The resulting JSON would use the parameter keys and values and render the following `DagRun.conf` and trigger the DAG:
```
{
"parameter_1": "user input",
"parameter_2": "user selection",
"parameter_3": true/false value
}
```
The number of form values, parameter names, parameter types, options, order and descriptions should be freely configurable in the DAG definition.
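Sketched in code, a form producing exactly the `DagRun.conf` above could be declared roughly like this; every class name here is part of the proposal rather than an existing Airflow API, and the `elements` and `options` keyword arguments are assumptions made for illustration (the element types are listed further below):
```python
form = TriggerFormUI(
    name="Example form",
    description="Collects three parameters and triggers the DAG with them.",
    elements=[
        TriggerFormUIString(name="parameter_1", display="Parameter 1", required=True),
        TriggerFormUISelect(name="parameter_2", display="Parameter 2", options=["a", "b", "c"]),
        TriggerFormUICheckbox(name="parameter_3", display="Parameter 3", default=False),
    ],
)
```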
The trigger form should provide the following general parameters (at least):
* `name`: The name of the form to be used in pick lists and in the headline
* `description`: Descriptive text which is printed on hover-over of menus and which will be rendered as a description between the headline and the form
* (Implicitly the DAG to which the form is linked to which will be triggered)
The trigger form elements (list of elements can be picked freely):
* General options of each form element (Base class `TriggerFormUIElement`:
* `name` (str): Name of the parameter, used as technical key in the JSON, must be unique per form (e.g. "param1")
* `display` (str): Label which is displayed on left side of entry field (e.g. "Parameter 1")
* `help` (Optional[str]=None): Descriptive help text which is optionally rendered below the form element, might contain HTML formatting code
* `required` (Optional[bool]=False): Flag if the user is required to enter/pick a value before submission is possible
* `default` (Optional[str]=None): Default value to present when the user opens the form
* Element types provided in the base implementation
* `TriggerFormUIString` (inherits `TriggerFormUIElement`): Provides a simple HTML string input box.
* `TriggerFormUISelect` (inherits `TriggerFormUIElement`): Provides a HTML select box with a list of pre-defined string options. Options are provided static as array of strings.
* `TriggerFormUIArray` (inherits `TriggerFormUIElement`): Provides a simple HTML text area allowing to enter multiple lines of text. Each line entered will be converted to a string and the strings will be used as value array.
* `TriggerFormUICheckbox` (inherits `TriggerFormUIElement`): Provides a HTML Checkbox to select on/off, will be converted to true/false as value
* Other element types (optionally, might be added later?) for making further cool features - depending on how much energy is left
* `TriggerFormUIHelp` (inherits `TriggerFormUIElement`): Provides no actual parameter value but allows to add a HTML block of help
* `TriggerFormUIBreak` (inherits `TriggerFormUIElement`): Provides no actual parameter value but adds a horizontal splitter
* Adding the options to validate string values e.g. with a RegEx
* Allowing to provide int values (besides just strings)
* Allowing to have an "advanced" section for more options which the user might not need in all cases
* Allowing to view the generated `DagRun.conf` so that a user can copy/paste as well
* Allowing to user extend the form elements...
### Part 5) (Optional) Extended for Templated Form based on the Simple form but uses fields to run a template through Jinja
Implement (optionally, perhaps as a future extension) a `TriggerTemplateFormUI` (inherits TriggerFormUI) which adds a Jinja2 JSON template. The template is rendered with the collected form fields, so that more complex `DagRun.conf` parameter structures can be created on top of plain key/value pairs.
### Part 6) Examples
Provide 1-2 example DAGs which show how the trigger forms can be used. Adjust existing examples as needed.
### Part 7) Documentation
Provide needed documentation to describe the feature and options. This would include an description how to add custom forms above the standards via Airflow Plugins and custom Python code.
### Use case/motivation
As users of Airflow for our custom workflows, we often use `DagRun.conf` attributes to control content and flow. The current UI only allows launching via the REST API with given parameters, or entering a JSON structure in the UI to trigger with parameters. This is technically feasible but not user friendly: a user needs to model, check and understand the JSON and enter parameters manually, with no option to validate before triggering.
Similar to Jenkins or GitHub/Azure pipelines, we would like an option to trigger a DAG through a UI while specifying parameters.
Current workarounds used in multiple places are:
1) Implementing a custom (additional) web UI which implements the required forms outside/on top of Airflow and triggers Airflow via the REST API in the back-end. This is flexible, but it duplicates the effort for operation, deployment and releases, and access control, logging etc. need to be implemented redundantly.
2) Implementing a custom Airflow plugin which hosts additional launch/trigger UIs inside Airflow. We are using this, but it is somewhat redundant with the other trigger options and only partially user friendly.
I/we propose this as a feature and would like to contribute it in a follow-up PR. Would this be supported if we contribute this feature to be merged?
### Related issues
Note: This proposal is similar and/or related to #11054 but a bit more detailed and concrete. Might be also related to #22408 and contribute to AIP-38 (https://github.com/apache/airflow/projects/9)?
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26215 | https://github.com/apache/airflow/pull/29376 | 7ee1a5624497fc457af239e93e4c1af94972bbe6 | 9c6f83bb6f3e3b57ae0abbe9eb0582fcde265702 | "2022-09-07T14:36:30Z" | python | "2023-02-11T14:38:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,194 | ["airflow/www/static/js/dag/details/taskInstance/Logs/index.test.tsx", "airflow/www/static/js/dag/details/taskInstance/Logs/index.tsx"] | Extra entry for logs generated with 0 try number when clearing any task instances | ### Apache Airflow version
main (development)
### What happened
When clearing any task instance, an extra log entry is generated with a zero try number.
<img width="1344" alt="Screenshot 2022-09-07 at 1 06 54 PM" src="https://user-images.githubusercontent.com/88504849/188819289-13dd4936-cd03-48b6-8406-02ee5fbf293f.png">
### What you think should happen instead
It should not create a log entry with a zero try number.
### How to reproduce
Clear a task instance by hitting clear button on UI and then observe the entry for logs in logs tab
### Operating System
mac os
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26194 | https://github.com/apache/airflow/pull/26556 | 6f1ab37d2091e26e67717d4921044029a01d6a22 | 6a69ad033fdc224aee14b8c83fdc1b672d17ac20 | "2022-09-07T07:43:59Z" | python | "2022-09-22T19:39:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,189 | ["airflow/providers/google/cloud/transfers/gcs_to_bigquery.py"] | GCSToBigQueryOperator Schema in Alternate GCS Bucket | ### Description
Currently the `GCSToBigQueryOperator` requires that a schema object stored in GCS be located in the same bucket as the source object(s). I'd like an option to have it located in a different bucket.
### Use case/motivation
I have a GCS bucket where I store files with a 90 day auto-expiration on the whole bucket. I want to be able to store a fixed schema in GCS, but since this bucket has an auto-expiration of 90 days the schema is auto deleted at that time.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26189 | https://github.com/apache/airflow/pull/26190 | 63562d7023a9d56783f493b7ea13accb2081121a | 8cac96918becf19a4a04eef1e5bcf175f815f204 | "2022-09-07T01:50:01Z" | python | "2022-09-07T20:26:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,185 | ["airflow/configuration.py", "tests/core/test_configuration.py"] | Webserver fails to pull secrets from Hashicorp Vault on start up | ### Apache Airflow version
2.3.4
### What happened
Since upgrading to Airflow 2.3.4 our webserver fails on start up to pull secrets from our Vault instance. Setting `AIRFLOW__WEBSERVER__WORKERS=1` allowed the webserver to start up successfully, but reverting the change added in https://github.com/apache/airflow/pull/25556 was the only way we found to fix the issue without adjusting the webserver's worker count.
### What you think should happen instead
The Airflow webserver should be able to successfully read from Vault with `AIRFLOW__WEBSERVER__WORKERS` > 1.
### How to reproduce
Start a webserver instance set to authenticate with Vault using the approle method, with `AIRFLOW__DATABASE__SQL_ALCHEMY_CONN_SECRET` and `AIRFLOW__WEBSERVER__SECRET_KEY_SECRET` set. The webserver should fail to initialize all of the gunicorn workers and exit.
### Operating System
Fedora 29
### Versions of Apache Airflow Providers
apache-airflow-providers-hashicorp==3.1.0
### Deployment
Docker-Compose
### Deployment details
Python 3.9.13
Vault 1.9.4
### Anything else
None
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26185 | https://github.com/apache/airflow/pull/26223 | ebef9ed3fa4a9a1e69b4405945e7cd939f499ee5 | c63834cb24c6179c031ce0d95385f3fa150f442e | "2022-09-06T21:36:02Z" | python | "2022-09-08T00:35:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,174 | ["airflow/api_connexion/endpoints/xcom_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/www/static/js/types/api-generated.ts", "tests/api_connexion/endpoints/test_xcom_endpoint.py"] | API Endpoints - /xcomEntries/{xcom_key} cannot deserialize customized xcom backend | ### Description
We use S3 as our XCom backend and wrote serialize/deserialize methods for XComs.
However, when we access an XCom through the REST API, it returns the S3 file URL instead of the deserialized value. Could you please add support for customized XCom backends to REST API access?
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26174 | https://github.com/apache/airflow/pull/26343 | 3c9c0f940b67c25285259541478ebb413b94a73a | ffee6bceb32eba159a7a25a4613d573884a6a58d | "2022-09-06T09:35:30Z" | python | "2022-09-12T21:05:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,155 | ["airflow/cli/cli_parser.py", "airflow/cli/commands/role_command.py", "tests/cli/commands/test_role_command.py"] | Add CLI to add/remove permissions from existed role | ### Body
Followup on https://github.com/apache/airflow/pull/25854
[Roles CLI](https://airflow.apache.org/docs/apache-airflow/stable/cli-and-env-variables-ref.html#roles) currently support create, delete, export, import, list
It can be useful to have the ability to add/remove permissions from existed role.
This has also been asked in https://github.com/apache/airflow/issues/15318#issuecomment-872496184
cc @chenglongyan
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/26155 | https://github.com/apache/airflow/pull/26338 | e31590039634ff722ad005fe9f1fc02e5a669699 | 94691659bd73381540508c3c7c8489d60efb2367 | "2022-09-05T08:01:19Z" | python | "2022-09-20T08:18:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,130 | ["Dockerfile.ci", "airflow/serialization/serialized_objects.py", "setup.cfg"] | Remove `cattrs` from project | Cattrs is currently only used in two places: Serialization for operator extra links, and for Lineage.
However cattrs is not a well maintained project and doesn't support many features that attrs itself does; in short, it's not worth the brain cycles to keep cattrs. | https://github.com/apache/airflow/issues/26130 | https://github.com/apache/airflow/pull/34672 | 0c8e30e43b70e9d033e1686b327eb00aab82479c | e5238c23b30dfe3556fb458fa66f28e621e160ae | "2022-09-02T12:15:18Z" | python | "2023-10-05T07:34:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,101 | ["airflow/utils/sqlalchemy.py", "tests/utils/test_sqlalchemy.py"] | Kubernetes Invalid executor_config, pod_override filled with Encoding.VAR | ### Apache Airflow version
2.3.4
### What happened
Trying to start Kubernetes tasks using a `pod_override` results in pods not starting after upgrading from 2.3.2 to 2.3.4
The pod_override looks very odd, filled with many Encoding.VAR objects; see the following scheduler log:
```
{kubernetes_executor.py:550} INFO - Add task TaskInstanceKey(dag_id='commit_check', task_id='sync_and_build', run_id='5776-2-1662037155', try_number=1, map_index=-1) with command ['airflow', 'tasks', 'run', 'commit_check', 'sync_and_build', '5776-2-1662037155', '--local', '--subdir', 'DAGS_FOLDER/dag_on_commit.py'] with executor_config {'pod_override': {'Encoding.VAR': {'Encoding.VAR': {'Encoding.VAR': {'metadata': {'Encoding.VAR': {'annotations': {'Encoding.VAR': {}, 'Encoding.TYPE': 'dict'}}, 'Encoding.TYPE': 'dict'}, 'spec': {'Encoding.VAR': {'containers': REDACTED 'Encoding.TYPE': 'k8s.V1Pod'}, 'Encoding.TYPE': 'dict'}}
{kubernetes_executor.py:554} ERROR - Invalid executor_config for TaskInstanceKey(dag_id='commit_check', task_id='sync_and_build', run_id='5776-2-1662037155', try_number=1, map_index=-1)
```
Looking in the UI, the task gets stuck in the scheduled state forever. Clicking the task instance details shows a similar state of the pod_override, with many Encoding.VAR entries.
This appears to be a recent change, introduced in 2.3.4 via https://github.com/apache/airflow/pull/24356.
@dstandish do you understand if this is connected?
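For reference, a minimal sketch of the kind of task definition that exercises this path — an `executor_config` carrying a `pod_override` built from the Kubernetes client models (values here are illustrative, not the reporter's DAG):
```python
import pendulum
from kubernetes.client import models as k8s

from airflow import DAG
from airflow.operators.python import PythonOperator

with DAG(
    dag_id="pod_override_example",
    start_date=pendulum.datetime(2022, 1, 1, tz="UTC"),
    schedule_interval=None,
) as dag:
    sync_and_build = PythonOperator(
        task_id="sync_and_build",
        python_callable=lambda: None,  # placeholder for the real work
        executor_config={
            "pod_override": k8s.V1Pod(
                spec=k8s.V1PodSpec(
                    containers=[
                        k8s.V1Container(
                            name="base",
                            resources=k8s.V1ResourceRequirements(
                                requests={"cpu": "1", "memory": "1Gi"}
                            ),
                        )
                    ]
                )
            )
        },
    )
```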
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.3.0
apache-airflow-providers-common-sql==1.1.0
apache-airflow-providers-docker==3.1.0
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-postgres==5.2.0
apache-airflow-providers-sqlite==3.2.0
kubernetes==23.6.0
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26101 | https://github.com/apache/airflow/pull/26191 | af3a07427023d7089f3bc74a708723d13ce3cf73 | 87108d7b62a5c79ab184a50d733420c0930fdd93 | "2022-09-01T13:26:56Z" | python | "2022-09-07T22:44:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,099 | ["airflow/models/baseoperator.py", "airflow/ti_deps/deps/trigger_rule_dep.py", "airflow/utils/trigger_rule.py", "docs/apache-airflow/concepts/dags.rst", "tests/ti_deps/deps/test_trigger_rule_dep.py", "tests/utils/test_trigger_rule.py"] | Add one_done trigger rule | ### Body
Action: trigger as soon as one upstream task is in success or failure state.
This has been requested in https://stackoverflow.com/questions/73501232/how-to-implement-the-one-done-trigger-rule-for-airflow
I think this can be useful for the community.
**The Task:**
Add support for a new trigger rule: `one_done`.
You can use previous PRs that added other trigger rules as a reference, for example: https://github.com/apache/airflow/pull/21662
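Purely as an illustration of how the rule would be used once implemented (assuming it is exposed as `TriggerRule.ONE_DONE`, mirroring the existing `ONE_SUCCESS`/`ONE_FAILED`):
```python
import pendulum

from airflow import DAG
from airflow.operators.empty import EmptyOperator
from airflow.utils.trigger_rule import TriggerRule

with DAG(
    dag_id="one_done_example",
    start_date=pendulum.datetime(2022, 1, 1, tz="UTC"),
    schedule_interval=None,
) as dag:
    upstream_a = EmptyOperator(task_id="upstream_a")
    upstream_b = EmptyOperator(task_id="upstream_b")

    # Runs as soon as any single upstream task finishes, whether it succeeded or failed.
    react_fast = EmptyOperator(task_id="react_fast", trigger_rule=TriggerRule.ONE_DONE)

    [upstream_a, upstream_b] >> react_fast
```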
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/26099 | https://github.com/apache/airflow/pull/26146 | 55d11464c047d2e74f34cdde75d90b633a231df2 | baaea097123ed22f62c781c261a1d9c416570565 | "2022-09-01T07:27:12Z" | python | "2022-09-23T17:05:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,095 | ["airflow/providers/google/cloud/hooks/bigquery.py", "tests/providers/google/cloud/hooks/test_bigquery.py"] | Creative use of BigQuery Hook Leads to Exception | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
8.3.0
### Apache Airflow version
2.3.4
### Operating System
Debian
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
When executing a query through a BigQuery Hook Cursor that does not have a schema, an exception is thrown.
### What you think should happen instead
If a cursor does not contain a schema, revert to a `self.description` of None, like before the update.
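A rough sketch of that desired behavior (names and the exact tuple layout are assumptions, not the provider's actual implementation):
```python
from typing import Any, List, Optional, Tuple


def description_from_query_results(query_results: dict) -> Optional[List[Tuple[Any, ...]]]:
    """Build a DB-API-style cursor description, or return None when the job has no
    result schema (e.g. DML statements such as UPDATE)."""
    schema = query_results.get("schema")
    if not schema:
        return None
    return [
        (field["name"], field["type"], None, None, None, None, field.get("mode") == "NULLABLE")
        for field in schema.get("fields", [])
    ]
```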
### How to reproduce
Execute an `UPDATE` SQL statement using a cursor.
```
from airflow.providers.google.cloud.hooks.bigquery import BigQueryHook

bigquery_hook = BigQueryHook(gcp_conn_id="google_cloud_default")  # connection id assumed
sql = "UPDATE my_dataset.my_table SET col = 1 WHERE TRUE"  # illustrative DML statement
conn = bigquery_hook.get_conn()
cursor = conn.cursor()
cursor.execute(sql)  # fails in provider 8.3.0 because the finished job returns no schema
```
### Anything else
I'll be the first to admit that my users are slightly abusing cursors in BigQuery by running all statement types through them, but BigQuery doesn't care and lets you.
Ref: https://github.com/apache/airflow/issues/22328
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26095 | https://github.com/apache/airflow/pull/26096 | b7969d4a404f8b441efda39ce5c2ade3e8e109dc | 12cbc0f1ddd9e8a66c5debe7f97b55a2c8001502 | "2022-08-31T21:43:47Z" | python | "2022-09-07T15:56:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,071 | ["airflow/example_dags/example_branch_day_of_week_operator.py", "airflow/operators/weekday.py", "airflow/sensors/weekday.py"] | BranchDayOfWeekOperator documentation don't mention how to use parameter use_taks_execution_day or how to use WeekDay | ### What do you see as an issue?
The constructor snippet clearly shows that there's a keyword parameter `use_task_execution_day=False`, but the doc does not explain how to use it. It also has `{WeekDay.TUESDAY}, {WeekDay.SATURDAY, WeekDay.SUNDAY}` as options for `week_day` but does not clarify how to import WeekDay. The tutorial is also very basic and only shows one use case. The sensor has the same issues.
### Solving the problem
I think docs should be added for `use_task_execution_day`, and there should be a mention of how one uses the `WeekDay` class and where to import it from. The tutorial is also incomplete there. I would like to see examples for, say, multiple different weekday branches and/or a graph of the resulting DAGs.
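Something along these lines would cover both points (import paths and parameters as documented for Airflow 2.x — worth double-checking against your version):
```python
import pendulum

from airflow import DAG
from airflow.operators.empty import EmptyOperator
from airflow.operators.weekday import BranchDayOfWeekOperator
from airflow.utils.weekday import WeekDay

with DAG(
    dag_id="weekday_branch_example",
    start_date=pendulum.datetime(2022, 1, 1, tz="UTC"),
    schedule_interval="@daily",
) as dag:
    branch = BranchDayOfWeekOperator(
        task_id="branch_on_weekday",
        week_day={WeekDay.SATURDAY, WeekDay.SUNDAY},
        follow_task_ids_if_true="weekend_task",
        follow_task_ids_if_false="weekday_task",
        # Compare against the task's execution (logical) date rather than today's date:
        use_task_execution_day=True,
    )

    weekend_task = EmptyOperator(task_id="weekend_task")
    weekday_task = EmptyOperator(task_id="weekday_task")

    branch >> [weekend_task, weekday_task]
```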
### Anything else
I feel like BranchDayOfWeekOperator is tragically underrepresented and hard to find, and I hope that improving the docs would help make its use more common.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26071 | https://github.com/apache/airflow/pull/26098 | 4b26c8c541a720044fa96475620fc70f3ac6ccab | dd6b2e4e6cb89d9eea2f3db790cb003a2e89aeff | "2022-08-30T16:30:15Z" | python | "2022-09-09T02:05:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,067 | ["airflow/jobs/scheduler_job.py", "tests/jobs/test_scheduler_job.py"] | Include external_executor_id in zombie detection method | ### Description
Adjust the SimpleTaskInstance to include the external_executor_id so that it shows up when the zombie detection method prints the SimpleTaskInstance to logs.
### Use case/motivation
Since the zombie detection message originates in the dag file processor, further troubleshooting of the zombie task requires figuring out which worker was actually responsible for the task. Printing the external_executor_id makes it easier to find the task in a log aggregator like Kibana or Splunk than it is when using the combination of dag_id, task_id, logical_date, and map_index, at least for executors like Celery.
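A purely illustrative sketch of the shape of the proposed change — this is not Airflow's actual `SimpleTaskInstance`, just the idea of carrying the executor-assigned id on the lightweight snapshot that zombie-detection logs print:
```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SimpleTaskInstanceSketch:
    """Lightweight task-instance snapshot, extended with the executor-assigned id."""

    dag_id: str
    task_id: str
    run_id: str
    map_index: int
    external_executor_id: Optional[str] = None  # proposed addition

    def __repr__(self) -> str:
        return (
            f"{self.dag_id}.{self.task_id} run_id={self.run_id} "
            f"map_index={self.map_index} external_executor_id={self.external_executor_id}"
        )
```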
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26067 | https://github.com/apache/airflow/pull/26141 | b6ba11ebece2c3aaf418738cb157174491a1547c | ef0b97914a6d917ca596200c19faed2f48dca88a | "2022-08-30T13:27:51Z" | python | "2022-09-03T13:23:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,059 | ["airflow/models/dag.py", "tests/models/test_dag.py"] | [Graph view] After clearing the task (and its downstream tasks) in a task group the task group becomes disconnected from the dag | ### Apache Airflow version
2.3.4
### What happened
In the graph view of the dag, after clearing a task (and its downstream tasks) in a task group and refreshing the page in the browser, the task group becomes disconnected from the dag. See the attached gif.

The issue is intermittent and not consistent. The graph view becomes disconnected from time to time, as you can see in the attached video.
### What you think should happen instead
The graph should be rendered properly and consistently.
### How to reproduce
1. Add the following dag to the dag folder:
```
import logging
import time
from typing import List

import pendulum

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.utils.task_group import TaskGroup


def log_function(message: str, **kwargs):
    logging.info(message)
    time.sleep(3)


def create_file_handling_task_group(supplier):
    with TaskGroup(group_id=f"file_handlig_task_group_{supplier}", ui_color='#666666') as file_handlig_task_group:
        entry = PythonOperator(
            task_id='entry',
            python_callable=log_function,
            op_kwargs={'message': 'create_file_handlig_task_group-Entry-task'}
        )
        with TaskGroup(group_id=f"file_handling_task_sub_group-{supplier}",
                       ui_color='#666666') as file_handlig_task_sub_group:
            sub_group_submit = PythonOperator(
                task_id='sub_group_submit',
                python_callable=log_function,
                op_kwargs={'message': 'create_file_handlig_sub_group_submit'}
            )
            sub_group_monitor = PythonOperator(
                task_id='sub_group_monitor',
                python_callable=log_function,
                op_kwargs={'message': 'create_file_handlig_sub_group_monitor'}
            )
            sub_group_submit >> sub_group_monitor
        entry >> file_handlig_task_sub_group
    return file_handlig_task_group


def get_stage_1_taskgroups(supplierlist: List) -> List[TaskGroup]:
    return [create_file_handling_task_group(supplier) for supplier in supplierlist]


def connect_stage1_to_stage2(self, stage1_tasks: List[TaskGroup], stage2_tasks: List[TaskGroup]) -> None:
    if stage2_tasks:
        for stage1_task in stage1_tasks:
            supplier_code: str = self.get_supplier_code(stage1_task)
            stage2_task = self.get_suppliers_tasks(supplier_code, stage2_tasks)
            stage1_task >> stage2_task


def get_stage_2_taskgroup(taskgroup_id: str):
    with TaskGroup(group_id=taskgroup_id, ui_color='#666666') as stage_2_taskgroup:
        sub_group_submit = PythonOperator(
            task_id='sub_group_submit',
            python_callable=log_function,
            op_kwargs={'message': 'create_file_handlig_sub_group_submit'}
        )
        sub_group_monitor = PythonOperator(
            task_id='sub_group_monitor',
            python_callable=log_function,
            op_kwargs={'message': 'create_file_handlig_sub_group_monitor'}
        )
        sub_group_submit >> sub_group_monitor
    return stage_2_taskgroup


def create_dag():
    with DAG(
        dag_id="horizon-task-group-bug",
        start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
        catchup=False,
        description="description"
    ) as dag:
        start = PythonOperator(
            task_id='start_main',
            python_callable=log_function,
            op_kwargs={'message': 'Entry-task'}
        )
        end = PythonOperator(
            task_id='end_main',
            python_callable=log_function,
            op_kwargs={'message': 'End-task'}
        )
        with TaskGroup(group_id=f"main_file_task_group", ui_color='#666666') as main_file_task_group:
            end_main_file_task_stage_1 = PythonOperator(
                task_id='end_main_file_task_stage_1',
                python_callable=log_function,
                op_kwargs={'message': 'end_main_file_task_stage_1'}
            )
            first_stage = get_stage_1_taskgroups(['9001', '9002'])
            first_stage >> get_stage_2_taskgroup("stage_2_1_taskgroup")
            first_stage >> get_stage_2_taskgroup("stage_2_2_taskgroup")
            first_stage >> end_main_file_task_stage_1
        start >> main_file_task_group >> end
    return dag


dag = create_dag()
```
2. Go to the graph view of the dag.
3. Run the dag.
4. After the dag run has finished, clear the "sub_group_submit" task within the "stage_2_1_taskgroup" with downstream tasks.
5. Refresh the page multiple times and notice how from time to time the "stage_2_1_taskgroup" becomes disconnected from the dag.
6. Clear the "sub_group_submit" task within the "stage_2_2_taskgroup" with downstream tasks.
7. Refresh the page multiple times and notice how from time to time the "stage_2_2_taskgroup" becomes disconnected from the dag.
### Operating System
Mac OS, Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
Custom docker image based on apache/airflow:2.3.4-python3.10
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26059 | https://github.com/apache/airflow/pull/30129 | 4dde8ececf125abcded5910817caad92fcc82166 | 76a884c552a78bfb273fe8b65def58125fc7961a | "2022-08-30T10:12:04Z" | python | "2023-03-15T20:05:12Z" |