This section describes the major changes that have been made in this release.
You should update the import paths if you are setting log configurations with the `logging_config_class` option. The old import paths still work but are deprecated.
#### SendGrid emailer has been moved
Formerly the core code was maintained by the original creators, Airbnb, while the code in the contrib package was supported by the community. The project was passed to the Apache community and the entire codebase is now maintained by the community, so this division no longer has any justification; it remains only for historical reasons.
The Mesos Executor has been removed from the codebase as it was not widely used and not maintained. [Mailing List Discussion on deleting it](https://lists.apache.org/thread.html/daa9500026b820c6aaadeffd66166eae558282778091ebbc68819fb7@%3Cdev.airflow.apache.org%3E).
#### Change DAG loading duration metric name
Change DAG file loading duration metric from `dag.loading-duration.<dag_id>` to `dag.loading-duration.<dag_file>`. This is to better handle the case when a DAG file has multiple DAGs.
To maintain consistent behavior, both successful and skipped downstream tasks can now satisfy the `wait_for_downstream=True` flag.
#### `airflow.utils.helpers.cross_downstream`
#### `airflow.utils.helpers.chain`
The `chain` and `cross_downstream` methods have been moved from the `airflow.utils.helpers` module to the `airflow.models.baseoperator` module.
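A minimal sketch of the updated imports (the `cross_downstream` line appears in the original guide; `chain` follows the same pattern):

```python
from airflow.models.baseoperator import chain, cross_downstream
```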
The `do_xcom_push` flag (a switch to push the result of an operator to XCom or not) appeared in different incarnations in different operators. Its function has been unified under a common name (`do_xcom_push`) on `BaseOperator`. This also makes it easy to globally disable pushing results to XCom.
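For example, a minimal sketch of disabling the XCom push on a single task (the operator choice here is illustrative):

```python
from airflow.operators.bash_operator import BashOperator

task = BashOperator(
    task_id="no_xcom_result",
    bash_command="echo hello",
    do_xcom_push=False,  # the command's output will not be pushed to XCom
)
```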
In the `PubSubPublishOperator` and the `PubSubHook.publish` method, the data field in a message should be a bytestring (UTF-8 encoded) rather than a base64-encoded string.
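A minimal sketch of the change, assuming a simple text payload:

```python
import base64

# Before: the payload had to be base64-encoded
message = {"data": base64.b64encode(b"Hello!").decode()}

# After: the payload is a plain UTF-8 bytestring
message = {"data": b"Hello!"}
```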
* The `maxResults` parameter in `GoogleCloudStorageHook.list` has been renamed to `max_results` for consistency.
The default value for the [aws_conn_id](https://airflow.apache.org/howto/manage-connections.html#amazon-web-services) was accidentally set to `s3_default` instead of `aws_default` in some of the EMR operators in previous versions. This was leading to the EmrStepSensor not being able to find its corresponding EMR cluster. With the new changes in the EmrAddStepsOperator, EmrTerminateJobFlowOperator and EmrCreateJobFlowOperator this issue is solved.
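If you were relying on the old default, a minimal sketch of pinning the connection explicitly (the task ids and `xcom_pull` expressions here are illustrative):

```python
from airflow.contrib.sensors.emr_step_sensor import EmrStepSensor

watch_step = EmrStepSensor(
    task_id="watch_step",
    job_flow_id="{{ task_instance.xcom_pull(task_ids='create_job_flow') }}",
    step_id="{{ task_instance.xcom_pull(task_ids='add_steps')[0] }}",
    aws_conn_id="aws_default",  # explicit, in case code relied on the old default
)
```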
The unnecessary ``open`` parameter has been removed from the PostgresHook function ``copy_expert``.
The ``visibleTo`` parameter has been renamed to ``visible_to`` in OpsgenieAlertOperator for pylint compatibility.
The above code previously returned `None`; now it will return `''`.
### Make behavior of `none_failed` trigger rule consistent with documentation
The behavior of the `none_failed` trigger rule is documented as "all parents have not failed (`failed` or `upstream_failed`) i.e. all parents have succeeded or been skipped." As previously implemented, the actual behavior would skip if all parents of a task had also skipped.
### Add new trigger rule `none_failed_or_skipped`
The fix to the `none_failed` trigger rule breaks workflows that depend on the previous behavior. If you need the old behavior, you should change the tasks with the `none_failed` trigger rule to `none_failed_or_skipped`.
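A minimal sketch of the switch (the operator choice is illustrative):

```python
from airflow.operators.dummy_operator import DummyOperator

join = DummyOperator(
    task_id="join",
    trigger_rule="none_failed_or_skipped",  # was: "none_failed"
)
```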
## Airflow 1.10.8
### Failure callback will be called when task is marked failed
When a task is marked failed by the user, or fails due to system failures, the on-failure callback will be called as part of clean-up.

See [AIRFLOW-5621](https://jira.apache.org/jira/browse/AIRFLOW-5621) for details.
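A minimal sketch of registering such a callback (the handler body and operator choice are illustrative):

```python
from airflow.operators.bash_operator import BashOperator


def notify_failure(context):
    # 'context' carries the task instance and DAG run metadata
    print("Task failed: %s" % context["task_instance"].task_id)


task = BashOperator(
    task_id="example",
    bash_command="exit 1",
    on_failure_callback=notify_failure,  # now also fires when the task is marked failed
)
```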
Airflow will now only scan files for DAGs if they contain the strings "airflow" and "DAG". For backwards compatibility, this option is enabled by default.
### RedisPy dependency updated to v3 series
If you are using the Redis Sensor or Hook, you may have to update your code. See [redis-py porting instructions] to check if your code might be affected (MSET, MSETNX, ZADD, and ZINCRBY all were, but read the full doc).
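For instance, `ZADD` now takes a mapping of member to score rather than interleaved arguments — a minimal sketch:

```python
import redis

r = redis.Redis()

# redis-py 2.x style (no longer works in 3.x):
# r.zadd("myset", "member1", 1.0)

# redis-py 3.x style — a dict mapping members to scores:
r.zadd("myset", {"member1": 1.0})
```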
Other GCP hooks are unaffected.
### Changed behaviour of using default value when accessing variables
It's now possible to use `None` as a default value with the `default_var` parameter when getting a variable, e.g.

```python
foo = Variable.get("foo", default_var=None)
if foo is None:
    handle_missing_foo()
```
Users now only have access to the DAGs that they have permissions on. If a new role wants to access all the DAGs, the admin still needs to grant that explicitly. We also provide a new CLI command (``sync_perm``) to allow admins to auto-sync permissions.
### Modification to `ts_nodash` macro
`ts_nodash` previously contained timezone information along with the execution date, for example `20150101T000000+0000`. This is not user-friendly for file or folder names, which was a popular use case for `ts_nodash`. This behavior has therefore been changed: `ts_nodash` no longer contains timezone information, restoring the pre-1.10 behavior of this macro. A new macro `ts_nodash_with_tz` has been added, which returns a string with the execution date and timezone info without dashes.

Examples:

* `ts_nodash`: `20150101T000000`
* `ts_nodash_with_tz`: `20150101T000000+0000`
### `next_ds`/`prev_ds` now map to `execution_date` instead of the next/previous schedule-aligned execution date for DAGs triggered in the UI
### User model changes
This patch changes the `User.superuser` field from a hardcoded boolean to a `Boolean()` database column. `User.superuser` will default to `False`, which means that this privilege will have to be granted manually to any users that may require it.

For example, open a Python shell and run something like the sketch below.
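A minimal sketch, assuming the legacy `airflow.models.User` and `airflow.settings.Session` APIs (the username is illustrative):

```python
from airflow import models, settings

session = settings.Session()
user = session.query(models.User).filter(models.User.username == "admin").first()
user.superuser = True  # grant the superuser privilege manually
session.add(user)
session.commit()
```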
#### min_file_process_interval

After how much time should an updated DAG be picked up from the filesystem.
#### min_file_parsing_loop_time
CURRENTLY DISABLED DUE TO A BUG

How many seconds to wait between file-parsing loops to prevent the logs from being spammed.
# Upgrading to Airflow 2.0+
This file documents any backwards-incompatible changes in Airflow and assists users migrating to a new version.
## Step 3: Set Operators to Backport Providers
Now that you are set up in Airflow 1.10.13 with a Python 3.6+ environment, you are ready to start porting your DAGs to Airflow 2.0 compliance!

The most important step in this transition is also the easiest step to do in pieces. All Airflow 2.0 operators are backwards compatible with Airflow 1.10 via the backport provider packages.
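As a sketch of what this porting looks like in practice (the operator and paths here are one illustrative pair; check the backport provider for your integration):

```python
# Airflow 1.10 contrib-style import:
# from airflow.contrib.operators.gcs_to_gcs import (
#     GoogleCloudStorageToGoogleCloudStorageOperator,
# )

# Airflow 2.0 / backport-provider import:
from airflow.providers.google.cloud.transfers.gcs_to_gcs import GCSToGCSOperator
```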
For Airflow 2.0, the traditional `executor_config` will continue to operate with a deprecation warning, but will be removed in a future version.
## Appendix
### Changed Parameters for the KubernetesPodOperator
#### `port` has migrated from a `List[Port]` to a `List[V1ContainerPort]`
Before:

```python
from airflow.kubernetes.pod import Port

port = Port("http", 80)
```
#### `volume_mounts` has migrated from a `List[VolumeMount]` to a `List[V1VolumeMount]`
Before:

```python
from airflow.kubernetes.volume_mount import VolumeMount

volume_mount = VolumeMount(
    "test-volume", mount_path="/root/mount_file", sub_path=None, read_only=True
)
```
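After (a sketch using the `kubernetes` client models):

```python
from kubernetes.client import models as k8s

volume_mount = k8s.V1VolumeMount(
    name="test-volume", mount_path="/root/mount_file", sub_path=None, read_only=True
)
```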
#### `volumes` has migrated from a `List[Volume]` to a `List[V1Volume]`
Before:

```python
from airflow.kubernetes.volume import Volume

volume_config = {"persistentVolumeClaim": {"claimName": "test-volume"}}
volume = Volume(name="test-volume", configs=volume_config)
k = KubernetesPodOperator(
    name="test",
    volumes=[volume],
    task_id="task",
)
```
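After (a sketch using the `kubernetes` client models):

```python
from kubernetes.client import models as k8s

volume = k8s.V1Volume(
    name="test-volume",
    persistent_volume_claim=k8s.V1PersistentVolumeClaimVolumeSource(
        claim_name="test-volume"
    ),
)
k = KubernetesPodOperator(
    name="test",
    volumes=[volume],
    task_id="task",
)
```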
#### `env_vars` has migrated from a `Dict` to a `List[V1EnvVar]`
Before:

```python
k = KubernetesPodOperator(
    name="test",
    env_vars={"ENV1": "val1", "ENV2": "val2"},
    task_id="task",
)
```
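After (a sketch using the `kubernetes` client models):

```python
from kubernetes.client import models as k8s

k = KubernetesPodOperator(
    name="test",
    env_vars=[
        k8s.V1EnvVar(name="ENV1", value="val1"),
        k8s.V1EnvVar(name="ENV2", value="val2"),
    ],
    task_id="task",
)
```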
#### `image_pull_secrets` has migrated from a `String` to a `List[k8s.V1LocalObjectReference]`
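A before/after sketch of this parameter (the secret name is illustrative):

```python
from kubernetes.client import models as k8s

# Before:
# k = KubernetesPodOperator(..., image_pull_secrets="regcred", ...)

# After:
k = KubernetesPodOperator(
    name="test",
    image_pull_secrets=[k8s.V1LocalObjectReference(name="regcred")],
    task_id="task",
)
```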
### Migration Guide from Experimental API to Stable API v1
In Airflow 2.0, we added the new REST API. The experimental API still works, but support may be dropped in the future. If your application is still using the experimental API, you should consider migrating to the stable API.
The sections below describe the differences between the two endpoints that will help you migrate from the experimental REST API to the stable REST API.
#### Base Endpoint
The base endpoint for the stable API v1 is ``/api/v1/``. You must change the experimental base endpoint from ``/api/experimental/`` to ``/api/v1/``. The table below shows the differences:
| Purpose | Experimental REST API | Stable REST API (v1) |
|---------|-----------------------|----------------------|
| DAG Lineage (GET) | /api/experimental/lineage/<DAG_ID>/<string:execution_date>/ | /api/v1/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/xcomEntries |
#### Note
This endpoint ``/api/v1/dags/{dag_id}/dagRuns`` also allows you to filter dag_runs with parameters such as ``start_date``, ``end_date``, ``execution_date`` etc. in the query string. Operations previously performed by dedicated experimental endpoints can therefore be handled with this endpoint and the appropriate query parameters.
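A minimal sketch of querying the stable endpoint with filters (the host, credentials, and dag id are illustrative; parameter names such as ``execution_date_gte`` follow the stable API's filtering convention):

```python
import requests

resp = requests.get(
    "http://localhost:8080/api/v1/dags/my_dag/dagRuns",
    params={"execution_date_gte": "2021-01-01T00:00:00Z"},
    auth=("user", "pass"),  # basic auth, assuming it is enabled
)
print(resp.json())
```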
# Example Twitter DAG
***Introduction:*** This example DAG depicts a typical ETL process and is a perfect use-case automation scenario for Airflow. Please note that the main scripts associated with the tasks return None. The purpose of this DAG is to demonstrate how to write a functional DAG within Airflow.