Decorator configuration improvements #67

Closed · wants to merge 18 commits

Conversation

venkatajagannath (Contributor)

Currently, the Ray configuration passed to the `ray.task` decorator can only be a static input.

This PR fixes that behavior and allows users to provide the configuration at runtime.

We are also making the following updates:

  • Documentation updates to mention that the config can be static or dynamic
  • Added a new example DAG that generates a config and passes in the callable (see the sketch below)
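
For illustration, here is a minimal sketch of the two modes. The helper name `generate_config` and its exact signature are assumptions for the sketch; the only behavior taken from this PR is that `config` may be either a dict or a callable that receives the Airflow context and returns a dict.

```python
# Hedged sketch: static vs. dynamic config for @ray.task.
# `generate_config` and its signature are illustrative assumptions.
from ray_provider.decorators.ray import ray

# Static: the config is fixed when the DAG file is parsed.
STATIC_CONFIG = {"conn_id": "ray_conn", "num_cpus": 1}

@ray.task(config=STATIC_CONFIG)
def static_task():
    ...

# Dynamic: a callable builds the config at runtime from the Airflow context.
def generate_config(context: dict) -> dict:
    return {
        "conn_id": "ray_conn",
        "num_cpus": 1,
        # Illustrative only: derive a runtime value from the context.
        "runtime_env": {"env_vars": {"RUN_ID": context["run_id"]}},
    }

@ray.task(config=generate_config)
def dynamic_task():
    ...
```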


codecov-commenter commented Sep 17, 2024

Codecov Report

Attention: Patch coverage is 88.88889% with 1 line in your changes missing coverage. Please review.

Project coverage is 95.47%. Comparing base (a783526) to head (bb5b117).

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| ray_provider/decorators/ray.py | 88.88% | 1 Missing ⚠️ |
Additional details and impacted files
```
@@            Coverage Diff             @@
##             main      #67      +/-   ##
==========================================
+ Coverage   95.42%   95.47%   +0.04%
==========================================
  Files           5        5
  Lines         546      552       +6
==========================================
+ Hits          521      527       +6
  Misses         25       25
```


@venkatajagannath marked this pull request as draft on September 19, 2024.

When running the DAG:
"""

This tutorial demonstrates how to use the Ray provider in Airflow to parallelize
a task using Ray.
"""

from airflow.decorators import dag, task
from ray_provider.decorators.ray import ray

CONN_ID = "ray_conn_2"
RAY_TASK_CONFIG = {
    "conn_id": CONN_ID,
    "num_cpus": 1,
    "num_gpus": 0,
    "memory": 0,
    "poll_interval": 5,
}

@dag(
    start_date=None,
    schedule=None,
    catchup=False,
    tags=["ray", "example", "TEST"],
    doc_md=__doc__,
)
def test_taskflow_ray_tutorial():

    @task
    def generate_data() -> list:
        """
        Generate sample data
        Returns:
            list: List of integers
        """
        import random

        return [random.randint(1, 100) for _ in range(10)]

    # use the @ray.task decorator to parallelize the task
    @ray.task(config=RAY_TASK_CONFIG)
    def get_mean_squared_value(data: list) -> float:
        """
        Get the mean squared value from a list of integers
        Args:
            data (list): List of integers
        Returns:
            float: Mean value of the list
        """
        import numpy as np
        import ray

        @ray.remote
        def square(x: int) -> int:
            """
            Square a number
            Args:
                x (int): Number to square
            Returns:
                int: Squared number
            """
            return x**2

        ray.init()
        data = np.array(data)
        futures = [square.remote(x) for x in data]
        results = ray.get(futures)
        mean = np.mean(results)
        print(f"Mean squared value: {mean}")

    data = generate_data()
    get_mean_squared_value(data)

test_taskflow_ray_tutorial()

We faced the following issue:

```
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/ray_provider/operators/ray.py", line 286, in execute
    self.defer(
  File "/usr/local/lib/python3.12/site-packages/airflow/models/baseoperator.py", line 1777, in defer
    raise TaskDeferred(trigger=trigger, method_name=method_name, kwargs=kwargs, timeout=timeout)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/airflow/exceptions.py", line 431, in __init__
    raise ValueError("Timeout value must be a timedelta")
ValueError: Timeout value must be a timedelta
```
It works when `job_timeout_seconds` is not set or is set to a positive integer, but not with 0:

```python
    get_mean_squared_value = SubmitRayJob(
        task_id="SubmitRayJob",
        conn_id=CONN_ID,
        entrypoint="python ray_script.py {{ ti.xcom_pull(task_ids='generate_data') | join(' ') }}",
        runtime_env=RAY_RUNTIME_ENV,
        num_cpus=1,
        num_gpus=0,
        memory=0,
        resources={},
        xcom_task_key="SubmitRayJob.dashboard",
        fetch_logs=True,
        wait_for_completion=True,
        job_timeout_seconds=0,
        poll_interval=5,
    )
```
It failed with:

```
[2024-09-27, 10:29:53 UTC] {taskinstance.py:3310} ERROR - Task failed with exception
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/ray_provider/operators/ray.py", line 287, in execute
    job_timeout_seconds = timedelta(seconds=self.job_timeout_seconds) if self.job_timeout_seconds > 0 else None
                                                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: '>' not supported between instances of 'NoneType' and 'int'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/airflow/models/taskinstance.py", line 767, in _execute_task
    result = _execute_callable(context=context, **execute_callable_kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/airflow/models/taskinstance.py", line 733, in _execute_callable
    return ExecutionCallableRunner(
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/airflow/utils/operator_helpers.py", line 252, in run
    return self.func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/airflow/models/baseoperator.py", line 406, in wrapper
    return func(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/ray_provider/operators/ray.py", line 317, in execute
    raise AirflowException(f"SubmitRayJob operator failed due to {e}. Cleaning up resources...")
airflow.exceptions.AirflowException: SubmitRayJob operator failed due to '>' not supported between instances of 'NoneType' and 'int'. Cleaning up resources...
```
```python
    get_mean_squared_value = SubmitRayJob(
        task_id="SubmitRayJob",
        conn_id=CONN_ID,
        entrypoint="python ray_script.py {{ ti.xcom_pull(task_ids='generate_data') | join(' ') }}",
        runtime_env=RAY_RUNTIME_ENV,
        num_cpus=1,
        num_gpus=0,
        memory=0,
        resources={},
        xcom_task_key="SubmitRayJob.dashboard",
        fetch_logs=True,
        wait_for_completion=True,
        job_timeout_seconds=0,
        poll_interval=5,
    )
```

That resulted in:

```
[2024-09-27, 10:55:06 UTC] {local_task_job_runner.py:123} ▶ Pre task execution logs
[2024-09-27, 10:55:06 UTC] {ray.py:219} INFO - Dashboard URL retrieved from XCom: None
[2024-09-27, 10:55:06 UTC] {base.py:84} INFO - Retrieving connection 'ray_conn_2'
[2024-09-27, 10:55:06 UTC] {ray.py:87} INFO - Ray cluster address is: http://172.23.0.3:30487
[2024-09-27, 10:55:06 UTC] {ray.py:155} INFO - Address URL is: http://172.23.0.3:30487
[2024-09-27, 10:55:06 UTC] {ray.py:156} INFO - Dashboard URL is: None
[2024-09-27, 10:55:06 UTC] {ray.py:183} INFO - Submitted job with ID: raysubmit_G22MqPHyvLv8ghRV
[2024-09-27, 10:55:06 UTC] {ray.py:278} INFO - Ray job submitted with id: raysubmit_G22MqPHyvLv8ghRV
[2024-09-27, 10:55:06 UTC] {ray.py:208} INFO - Job raysubmit_G22MqPHyvLv8ghRV status: PENDING
[2024-09-27, 10:55:06 UTC] {ray.py:282} INFO - Current job status for raysubmit_G22MqPHyvLv8ghRV is: PENDING
[2024-09-27, 10:55:06 UTC] {ray.py:290} INFO - Deferring the polling to RayJobTrigger...
[2024-09-27, 10:55:06 UTC] {taskinstance.py:3310} ERROR - Task failed with exception
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/ray_provider/operators/ray.py", line 302, in execute
    timeout=job_timeout_seconds,
            ^^^^^^^^^^^^^^^^^^^
UnboundLocalError: cannot access local variable 'job_timeout_seconds' where it is not associated with a value
During handling of the above exception, another exception occurred:
```
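
All three tracebacks revolve around the conversion of `job_timeout_seconds` into the `timedelta` that Airflow's `defer()` requires: the `ValueError` suggests something other than a `timedelta` or `None` reached `defer()`, a `None` value breaks the `> 0` comparison, and when that comparison raises, the local name is never bound, producing the trailing `UnboundLocalError` in the cleanup path. Below is a minimal sketch of a guard that tolerates all three inputs (paraphrased from the tracebacks; not the provider's actual code):

```python
from datetime import timedelta

def to_defer_timeout(job_timeout_seconds):
    """Hedged sketch of a safer conversion, not the provider's actual code.

    Treats None and 0 as "no timeout" and only builds a timedelta for
    positive integers, so none of the three failure modes can occur.
    """
    if job_timeout_seconds and job_timeout_seconds > 0:
        return timedelta(seconds=job_timeout_seconds)
    return None

# The three inputs seen in the reports above:
assert to_defer_timeout(30) == timedelta(seconds=30)  # positive int: works
assert to_defer_timeout(0) is None     # 0: no "Timeout value must be a timedelta"
assert to_defer_timeout(None) is None  # None: no NoneType-vs-int comparison
```

Because the helper cannot raise for `None` or `0`, the local name is always bound before `self.defer(timeout=...)` runs, which also removes the `UnboundLocalError`.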
@phanikumv closed this on Nov 18, 2024.
tatiana added a commit that referenced this pull request Nov 29, 2024
The Ray provider 0.2.1 only allowed users to define a hard-coded configuration to materialize the Kubernetes cluster. This PR enables users to define a function that receives the Airflow context and generates the configuration dynamically from context properties. This request came from an Astronomer customer.

There is an example DAG file illustrating how to use this feature. It has a parent DAG that triggers two child DAGs, which leverage the just-introduced `@ray.task` callable configuration (sketched below).
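
As a hedged illustration of that pattern (the DAG IDs, conf payload, and helper names below are assumptions for the sketch, not the repository's actual example file):

```python
# Illustrative sketch: a parent DAG triggers a child DAG, and the child
# builds its @ray.task config at runtime from the triggering run's conf.
# All names here are hypothetical; see the example DAG file in the repo
# for the real implementation.
from airflow.decorators import dag
from airflow.operators.trigger_dagrun import TriggerDagRunOperator
from ray_provider.decorators.ray import ray


@dag(start_date=None, schedule=None, catchup=False)
def parent_dag():
    TriggerDagRunOperator(
        task_id="trigger_child",
        trigger_dag_id="child_dag",
        conf={"cluster_name": "first-254"},  # hypothetical payload
    )


def config_from_context(context: dict) -> dict:
    # Build the Ray config from whatever the parent run passed in.
    conf = context["dag_run"].conf or {}
    return {
        "conn_id": "ray_conn",
        "ray_cluster_yaml": f"/usr/local/airflow/dags/scripts/{conf.get('cluster_name', 'default')}.yaml",
        "xcom_task_key": "dashboard",
    }


@dag(start_date=None, schedule=None, catchup=False)
def child_dag():
    @ray.task(config=config_from_context)
    def process_data_with_ray():
        ...

    process_data_with_ray()


parent_dag()
child_dag()
```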

The screenshots below show these DAGs succeeding when run with the [local development instructions](https://github.com/astronomer/astro-provider-ray/blob/main/docs/getting_started/local_development_setup.rst) and the Astro CLI.

Parent DAG:
<img width="1624" alt="Screenshot 2024-11-29 at 12 15 13" src="https://github.com/user-attachments/assets/586b4575-ee62-4344-bbd7-a1a6423360ce">

Child 1 DAG:
<img width="1624" alt="Screenshot 2024-11-29 at 12 15 56" src="https://github.com/user-attachments/assets/23d89288-c68a-498e-848a-743fb2684c4f">

Example logs illustrating that the RayCluster with the dynamic configuration was created and used in Kubernetes, with its own IP address:
```
(...)
[2024-11-29T12:14:52.276+0000] {standard_task_runner.py:104} INFO - Running: ['airflow', 'tasks', 'run', 'ray_dynamic_config_child_1', 'process_data_with_ray', 'manual__2024-11-29T12:14:50.273712+00:00', '--job-id', '773', '--raw', '--subdir', 'DAGS_FOLDER/ray_dynamic_config.py', '--cfg-path', '/tmp/tmpkggwlv23']
[2024-11-29T12:14:52.278+0000] {logging_mixin.py:190} WARNING - /usr/local/lib/python3.12/site-packages/airflow/task/task_runner/standard_task_runner.py:70 DeprecationWarning: This process (pid=238) is multi-threaded, use of fork() may lead to deadlocks in the child.
(...)
[2024-11-29T12:14:52.745+0000] {decorators.py:94} INFO - Using the following config {'conn_id': 'ray_conn', 'runtime_env': {'working_dir': '/usr/local/airflow/dags/ray_scripts', 'pip': ['numpy']}, 'num_cpus': 1, 'num_gpus': 0, 'memory': 0, 'poll_interval': 5, 'ray_cluster_yaml': '/usr/local/airflow/dags/scripts/first-254.yaml', 'xcom_task_key': 'dashboard'}
(...)
[2024-11-29T12:14:55.430+0000] {hooks.py:474} INFO - ::group::Create Ray Cluster
[2024-11-29T12:14:55.430+0000] {hooks.py:475} INFO - Loading yaml content for Ray cluster CRD...
[2024-11-29T12:14:55.451+0000] {hooks.py:410} INFO - Creating new Ray cluster: first-254
[2024-11-29T12:14:55.456+0000] {hooks.py:494} INFO - ::endgroup::
(...)
[2024-11-29T12:14:55.663+0000] {hooks.py:498} INFO - ::group::Setup Load Balancer service
[2024-11-29T12:14:55.663+0000] {hooks.py:334} INFO - Attempt 1: Checking LoadBalancer status...
[2024-11-29T12:14:55.669+0000] {hooks.py:278} ERROR - Error getting service first-254-head-svc: (404)
Reason: Not Found
HTTP response headers: HTTPHeaderDict({'Audit-Id': '81b07ac4-db3b-48a6-b336-f52ae93bee55', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Kubernetes-Pf-Flowschema-Uid': '955e8bb0-08b1-4d45-a768-e49387a9767c', 'X-Kubernetes-Pf-Prioritylevel-Uid': 'd5240328-288d-4366-b094-d8fd793c7431', 'Date': 'Fri, 29 Nov 2024 12:14:55 GMT', 'Content-Length': '212'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"services \"first-254-head-svc\" not found","reason":"NotFound","details":{"name":"first-254-head-svc","kind":"services"},"code":404}
[2024-11-29T12:14:55.669+0000] {hooks.py:355} INFO - LoadBalancer service is not available yet...
[2024-11-29T12:15:35.670+0000] {hooks.py:334} INFO - Attempt 2: Checking LoadBalancer status...
[2024-11-29T12:15:35.688+0000] {hooks.py:348} INFO - LoadBalancer is ready.
[2024-11-29T12:15:35.688+0000] {hooks.py:441} INFO - {'ip': '172.18.255.1', 'hostname': None, 'ports': [{'name': 'client', 'port': 10001}, {'name': 'dashboard', 'port': 8265}, {'name': 'gcs', 'port': 6379}, {'name': 'metrics', 'port': 8080}, {'name': 'serve', 'port': 8000}], 'working_address': '172.18.255.1'}

(...)

[2024-11-29T12:15:38.345+0000] {triggers.py:124} INFO - ::group:: Trigger 1/2: Checking the job status
[2024-11-29T12:15:38.345+0000] {triggers.py:125} INFO - Polling for job raysubmit_paxAkyLiKxEHPmwG every 5 seconds...
(...)
[2024-11-29T12:15:38.354+0000] {hooks.py:156} INFO - Dashboard URL is: http://172.18.255.1:8265
[2024-11-29T12:15:38.361+0000] {hooks.py:208} INFO - Job raysubmit_paxAkyLiKxEHPmwG status: PENDING
[2024-11-29T12:15:38.361+0000] {triggers.py:100} INFO - Status of job raysubmit_paxAkyLiKxEHPmwG is: PENDING
[2024-11-29T12:15:38.361+0000] {triggers.py:108} INFO - ::group::raysubmit_paxAkyLiKxEHPmwG logs
[2024-11-29T12:15:43.416+0000] {hooks.py:208} INFO - Job raysubmit_paxAkyLiKxEHPmwG status: RUNNING
[2024-11-29T12:15:43.416+0000] {triggers.py:100} INFO - Status of job raysubmit_paxAkyLiKxEHPmwG is: RUNNING
[2024-11-29T12:15:43.417+0000] {triggers.py:112} INFO - 2024-11-29 04:15:40,813	INFO worker.py:1429 -- Using address 10.244.0.140:6379 set in the environment variable RAY_ADDRESS
[2024-11-29T12:15:43.417+0000] {triggers.py:112} INFO - 2024-11-29 04:15:40,814	INFO worker.py:1564 -- Connecting to existing Ray cluster at address: 10.244.0.140:6379...
[2024-11-29T12:15:43.417+0000] {triggers.py:112} INFO - 2024-11-29 04:15:40,820	INFO worker.py:1740 -- Connected to Ray cluster. View the dashboard at 10.244.0.140:8265
[2024-11-29T12:15:48.430+0000] {hooks.py:208} INFO - Job raysubmit_paxAkyLiKxEHPmwG status: SUCCEEDED
[2024-11-29T12:15:48.430+0000] {triggers.py:112} INFO - Mean of this population is 12.0
[2024-11-29T12:15:48.430+0000] {triggers.py:112} INFO - (autoscaler +5s) Tip: use `ray status` to view detailed cluster status. To disable these messages, set RAY_SCHEDULER_EVENTS=0.
[2024-11-29T12:15:48.430+0000] {triggers.py:112} INFO - (autoscaler +5s) Adding 1 node(s) of type small-group.
[2024-11-29T12:15:49.448+0000] {triggers.py:113} INFO - ::endgroup::
[2024-11-29T12:15:49.448+0000] {triggers.py:144} INFO - ::endgroup::
[2024-11-29T12:15:49.448+0000] {triggers.py:145} INFO - ::group:: Trigger 2/2: Job reached a terminal state
[2024-11-29T12:15:49.448+0000] {triggers.py:146} INFO - Status of completed job raysubmit_paxAkyLiKxEHPmwG is: SUCCEEDED
(...)
```

Child 2 DAG:
<img width="1624" alt="Screenshot 2024-11-29 at 12 17 20" src="https://github.com/user-attachments/assets/5f0320a2-3bce-49a9-8580-a584d1f894dc">

Kubernetes RayClusters spun up:
<img width="758" alt="Screenshot 2024-11-29 at 12 15 37" src="https://github.com/user-attachments/assets/aabcce4a-ce4a-47db-b9cf-e87cf68f2316">


**Limitations**

The example DAGs are not currently executed in the CI, but there is a dedicated ticket for this work:
#95

**References**

This PR was inspired by:
#67
Successfully merging this pull request may close these issues: Decorator Improvements -- Dynamic config