
Commit

Move earlier
pedrogk committed Sep 8, 2024
1 parent 087ec23 commit d357349
Showing 8 changed files with 35 additions and 62 deletions.
@@ -5,19 +5,17 @@ speakers:
- Freddy Demiane
- Rahul Vats
- Dennis Ferruzzi
-time_start: 2024-09-12 15:45:00
-time_end: 2024-09-12 16:45:00
+time_start: 2024-09-12 15:10:00
+time_end: 2024-09-12 16:10:00
room: California West
track: Community
day: 20243
-timeslot: 107
-gridarea: "14/3/16/4"
+timeslot: 104
+gridarea: "13/3/15/4"
images:
- /images/sessions/2024/hello-quality.jpg
---

Airflow operators are a core feature of Apache Airflow, so it is extremely important that we maintain their quality and prevent regressions. Automated test results help developers confirm that their changes do not introduce regressions or backward-incompatible behavior, and they give Airflow release managers the information they need to decide whether a given provider version is ready to be released.



Recently, a new approach to assuring production quality was implemented for the AWS, Google and Astronomer-provided operators: standalone Continuous Integration processes were configured for them, and test results dashboards show the outcomes of the most recent test runs. What has been working well for these providers could be a pattern for others to follow. In this presentation, AWS, Google and Astronomer engineers will share the internals of the Test Dashboards implemented for their operators, an approach that might serve as a ‘blueprint’ for other providers.

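For context on the fields changed above: timeslot and gridarea appear to control where a session lands on the schedule grid, and the gridarea value reads like the CSS grid-area shorthand (row-start / column-start / row-end / column-end). Below is a minimal, hypothetical sketch of how a Hugo layout might consume these front-matter fields. It assumes the site is Hugo-based and that the schedule page uses CSS grid; the class names and template structure are illustrative, not the repository's actual templates.

```go-html-template
{{/* Hypothetical sketch, not the repository's actual template: place a session
     card on the schedule's CSS grid using its gridarea front-matter value. */}}
<div class="session-card" style="grid-area: {{ .Params.gridarea | safeCSS }};">
  <h3>{{ .Title }}</h3>
  <p>{{ .Params.room }}</p>
</div>
```

Under that reading, the change from gridarea "14/3/16/4" to "13/3/15/4" shifts the card one grid row earlier in column 3, consistent with the earlier start time.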
This file was deleted.

19 changes: 0 additions & 19 deletions content/sessions/2024/92-lessons-learned-airflow-open-source.md

This file was deleted.

@@ -0,0 +1,27 @@
---
title: "Lessons from the Ecosystem: What can Airflow Learn from Other Open-source Communities?"
slug: lessons-from-the-ecosystem-what-can-airflow-learn-from-other-open-source-communities
speakers:
- Michael Robinson

time_start: 2024-09-12 12:00:00
time_end: 2024-09-12 12:25:00
room: California West
track: Community
day: 20243
timeslot: 93
gridarea: "7/3/8/4"

images:
- /images/sessions/2024/lessons-ecosystem.jpg
---

The Apache Airflow community is so large and active that it’s tempting to take the view that “if it ain’t broke don’t fix it.”

In a community as in a codebase, however, improvement and attention are essential to sustaining growth. And bugs are just as inevitable in community management as they are in software development. If only the fixes were, too!

Airflow is large and growing because users love Airflow and our community. But what steps could be taken to enhance the typical user’s and developer’s experience of the community?

This talk will provide an overview of potential learnings for Airflow community management efforts, such as project governance and analytics, derived from the speaker's experience managing the OpenLineage and Marquez open-source communities.

The talk will answer questions such as: What can we learn from other open-source communities about supporting users and developers, and about learning from them? For example, what options exist for getting historical data out of Slack despite the limitations of the free tier? What tools can be used to make adoption metrics more reliable? What are some effective supplements to asynchronous governance?
@@ -10,7 +10,7 @@ time_end: 2024-09-12 13:15:00
room: California East
track: Use cases
day: 20243
-timeslot: 93
+timeslot: 94
gridarea: "8/2/10/3"
images:
- /images/sessions/2024/adaptive-memory-scaling.jpg
@@ -8,7 +8,7 @@ time_end: 2024-09-12 13:15:00
room: California West
track: Community
day: 20243
-timeslot: 94
+timeslot: 95
gridarea: "8/3/10/4"
images:
- /images/sessions/2024/silent-symphony.jpg
@@ -9,7 +9,7 @@ time_end: 2024-09-12 13:15:00
room: Elizabethan A+B
track: New features
day: 20243
-timeslot: 95
+timeslot: 96
gridarea: "8/4/10/5"
images:
- /images/sessions/2024/data-centric.jpg
File renamed without changes.
