
Releases: dstackai/dstack-enterprise

0.18.16-v1

30 Sep 10:32
d4ea467

0.18.16

The update includes all the features and bug fixes from version 0.18.16.

New versioning policy

Starting with this release, dstack adopts a new versioning policy to provide better server and client backward compatibility and improve the upgrading experience. dstack continues to follow semver versioning scheme ({major}.{minor}.{patch}) with the following principles:

  • Server backward compatibility is maintained across all minor and patch releases. Specific features may still be removed, but any removal is preceded by deprecation warnings over several minor releases. This means you can use older client versions with newer server versions.
  • Client backward compatibility is maintained across patch releases; a new minor release may break it. This means you don't need to update the server when you update the client to a new patch release. Upgrading the client to a new minor version, however, requires upgrading the server too.

Previously, dstack never guaranteed client backward compatibility, so you always had to update the server when updating the client. The new versioning policy makes upgrading the client and server more flexible.

Note: The new policy only takes effect once both the clients and the server are upgraded to 0.18.16. The 0.18.15 server still won't work with newer clients.

dstack attach

The CLI gets a new dstack attach command that allows attaching to a run. It establishes an SSH tunnel, forwards ports, and streams run logs in real time:

 $ dstack attach silent-panther-1
Attached to run silent-panther-1 (replica=0 job=0)
Forwarded ports (local -> remote):
  - localhost:7860 -> 7860
To connect to the run via SSH, use `ssh silent-panther-1`.
Press Ctrl+C to detach...

This command is a replacement for dstack logs --attach with major improvements and bugfixes.

CloudWatch-related bugfixes

The release includes several important bugfixes for CloudWatchLogStorage. We strongly recommend upgrading the dstack server if it's configured to store logs in CloudWatch.

Deprecations

  • dstack logs --attach is deprecated in favor of dstack attach and may be removed in a future minor release.

What's Changed

Full Changelog: dstackai/dstack@0.18.15...0.18.16

0.18.15-v1

25 Sep 11:25
d4ea467

0.18.15

The update includes all the features and bug fixes from version 0.18.15.

Cluster placement groups

Instances of AWS cluster fleets are now provisioned into cluster placement groups for better connectivity. For example, when you create this fleet:

type: fleet
name: my-cluster-fleet
nodes: 4
placement: cluster
backends: [aws]

dstack will automatically create a cluster placement group and use it to provision the instances.

On-prem and VM-based fleets improvements

  • All available NVIDIA driver capabilities are now requested by default, making it possible to run GPU workloads that require OpenGL/Vulkan/RT/Video Codec SDK libraries. (dstackai/dstack#1714)
  • Automatic container cleanup. Previously, when a run completed, either successfully or due to an error, its container was not deleted, which led to ever-increasing storage consumption. Now, only the last stopped container is preserved, and it remains available until the next run completes. (dstackai/dstack#1706)

Major bug fixes

  • Fixed a bug where under some conditions logs wouldn't be uploaded to CloudWatch Logs due to size limits. (dstackai/dstack#1712)
  • Fixed a bug that prevented running services on on-prem instances. (dstackai/dstack#1716)

Changelog

Full Changelog: dstackai/dstack@0.18.14...0.18.15

0.18.14-v1

18 Sep 10:17
d4ea467

0.18.14

The update includes all the features and bug fixes from version 0.18.14.

Multi-replica server deployment

Previously, the dstack server only supported deploying a single instance (replica). However, with 0.18.14, you can now deploy multiple replicas, enabling high availability and zero-downtime updates.

Note

Multi-replica server deployment requires using Postgres instead of the default SQLite. To configure Postgres, set the DSTACK_DATABASE_URL environment variable.

Make sure to update to version 0.18.14 before configuring multiple replicas.
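
As a minimal sketch, pointing the server at Postgres could look like this (the connection string is a placeholder, and the postgresql+asyncpg URL scheme is an assumption based on SQLAlchemy-style URLs):

```shell
# Placeholder credentials and host -- substitute your own Postgres instance
export DSTACK_DATABASE_URL="postgresql+asyncpg://dstack:password@db.example.com:5432/dstack"
dstack server
```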

Major bug-fixes

Other

Full changelog: dstackai/dstack@0.18.13...0.18.14

0.18.13-v1

11 Sep 14:29
d4ea467

0.18.13

The update includes all the features and bug fixes from version 0.18.13.

Windows

You can now use the CLI on Windows (WSL 2 is not required).

Ensure that Git and OpenSSH are installed via Git for Windows.

During installation, select the "Git from the command line and also from 3rd-party software" option
(or "Use Git and optional Unix tools from the Command Prompt"), and check the "Use bundled OpenSSH" checkbox.
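
With those prerequisites in place, installing and verifying the CLI works the same as on other platforms (a sketch assuming the standard pip-based install):

```shell
# Assumes Python and pip are available in the Windows shell
pip install -U "dstack[all]"
dstack --version
```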

Spot policy

Previously, dev environments used the on-demand spot policy, while tasks and services used auto. With this update, we've changed the default spot policy to always be on-demand for all configurations. Users will now need to explicitly specify the spot policy if they want to use spot instances.
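
For example, a task that should still opportunistically use spot capacity now has to opt in explicitly. A minimal sketch (the task itself is hypothetical; spot_policy is the property described above):

```yaml
type: task
name: train

commands:
  - python train.py

resources:
  gpu: 24GB

# The default is now on-demand; opting into spot instances must be explicit
spot_policy: auto
```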

Troubleshooting

The documentation now includes a Troubleshooting guide with instructions on how to report issues.

Changelog

All commits: dstackai/dstack@0.18.12...0.18.13

0.18.12-v1

04 Sep 12:47
d4ea467

0.18.12

The update includes all the features and bug fixes from version 0.18.12.

Features

  • Added support for ECDSA and Ed25519 keys for on-prem fleets by @swsvc in #1641

Major bugfixes

  • Fixed the order of CloudWatch log events in the web interface by @un-def in #1613
  • Fixed a bug where CloudWatch log events might not be displayed in the web interface for old runs by @un-def in #1652
  • Prevent possible server freeze on SSH connections by @jvstme in #1627

Other changes

Full changelog: dstackai/dstack@0.18.11...0.18.12

0.18.11-v1

22 Aug 12:57
d4ea467

0.18.11

The update includes all the features and bug fixes from version 0.18.11.

AMD

With the latest update, you can now specify an AMD GPU under resources. Below is an example.

type: service
name: amd-service-tgi

image: ghcr.io/huggingface/text-generation-inference:sha-a379d55-rocm
env:
  - HUGGING_FACE_HUB_TOKEN
  - MODEL_ID=meta-llama/Meta-Llama-3.1-70B-Instruct
  - TRUST_REMOTE_CODE=true
  - ROCM_USE_FLASH_ATTN_V2_TRITON=true
commands:
  - text-generation-launcher --port 8000
port: 8000

resources:
  gpu: MI300X
  disk: 150GB

spot_policy: auto

model:
  type: chat
  name: meta-llama/Meta-Llama-3.1-70B-Instruct
  format: openai

Note

AMD accelerators are currently supported only with the runpod backend. Support for on-prem fleets and more backends
is coming soon.

GPU vendors

The gpu property now accepts the vendor attribute, with supported values: nvidia, tpu, and amd.

Alternatively, you can also prefix the GPU name with the vendor name followed by a colon, for example: tpu:v2-8 or amd:192GB, etc. This change ensures consistency in GPU requirements configuration across vendors.
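
As a sketch, the two equivalent ways to request an AMD GPU might look like this (the exact long-form schema is an assumption based on the vendor attribute described above):

```yaml
resources:
  # Long form: the new `vendor` attribute alongside other GPU requirements
  gpu:
    vendor: amd
    memory: 192GB

# ...or, equivalently, the vendor-prefix shorthand:
# resources:
#   gpu: amd:192GB
```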

Encryption

dstack now supports encryption of sensitive data, such as backend credentials, user tokens, etc. Learn more on the reference page.

Storing logs in AWS CloudWatch

By default, the dstack server stores run logs in ~/.dstack/server/projects/<project name>/logs. To store logs in AWS CloudWatch, set the SERVER_CLOUDWATCH_LOG_GROUP environment variable.
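
A minimal sketch, assuming the log group already exists and the server's AWS credentials permit writing to CloudWatch Logs (the group name is a placeholder):

```shell
# Placeholder log group -- create it in CloudWatch Logs first
export SERVER_CLOUDWATCH_LOG_GROUP=/dstack/server
dstack server
```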

Project manager role

With this update, it's now possible to assign any user as a project manager. This role grants permission to manage project users but does not allow management of backends or resources.

Default permissions

By default, all users can create and manage their own projects. If you want only global admins to create projects, add the following to ~/.dstack/server/config.yml:

default_permissions:
  allow_non_admins_create_projects: false

Other

Full changelog: dstackai/dstack@0.18.10...0.18.11

0.18.10-v1

13 Aug 15:20
d4ea467

0.18.10

The update includes all the features and bug fixes from version 0.18.10.

Environment variables interpolation

Previously, it wasn't possible to use environment variables to configure credentials for a private Docker registry. With this update, you can now use the following interpolation syntax to avoid hardcoding credentials in the configuration.

type: dev-environment
name: train

env:
  - DOCKER_USER
  - DOCKER_USERPASSWORD

image: dstackai/base:py3.10-0.4-cuda-12.1
registry_auth:
  username: ${{ env.DOCKER_USER }}
  password: ${{ env.DOCKER_USERPASSWORD }}

Network interfaces for port forwarding

When you run a dev environment or a task with dstack apply, it automatically forwards the remote ports to localhost. However, these ports are, by default, bound to 127.0.0.1. If you'd like to make a port available on an arbitrary host, you can now specify the host using the --host option.

For example, this command will make the port available on all network interfaces:

dstack apply --host 0.0.0.0 -f my-task.dstack.yml

Major bugfixes

Other

All changes: dstackai/dstack@0.18.9...0.18.10

0.18.9-v1

07 Aug 16:13
d4ea467

0.18.9

The update includes all the features and bug fixes from version 0.18.9.

Base Docker image with nvcc

If you don't specify a custom Docker image, dstack uses its own base image with essential CUDA drivers, Python, pip, and conda (Miniforge). Previously, this image didn't include nvcc, which is needed for compiling custom CUDA kernels (e.g., Flash Attention).

With version 0.18.9, you can now include nvcc by enabling the nvcc property:

type: task

python: "3.10"
# This line ensures `nvcc` is included into the base Docker image
nvcc: true

commands:
  - pip install -r requirements.txt
  - python train.py

resources:
  gpu: 24GB

Environment variables for on-prem fleets

When you create an on-prem fleet, it's now possible to pre-configure environment variables. These variables will be used when installing the dstack-shim service on hosts and running workloads.

For example, these environment variables can be used to configure dstack to use a proxy:

type: fleet
name: my-fleet

placement: cluster

env:
- HTTP_PROXY=http://proxy.example.com:80
- HTTPS_PROXY=http://proxy.example.com:80
- NO_PROXY=localhost,127.0.0.1

ssh_config:
  user: ubuntu
  identity_file: ~/.ssh/id_rsa
  hosts:
    - 3.255.177.51
    - 3.255.177.52

Examples

New examples include:

  • Llama 3.1 recipes for inference and fine-tuning
  • Spark cluster setup
  • Ray cluster setup

Other

Full changelog: https://github.com/dstackai/dstack/releases/0.18.9

0.18.8-v1

01 Aug 15:39
c43ace7

0.18.8

The update includes all the features and bug fixes from version 0.18.8.

GCP volumes

Now, volumes are also supported for the gcp backend:

type: volume
name: my-gcp-volume
backend: gcp
region: europe-west1
size: 100GB

Previously, volumes were only supported for aws and runpod.

Major bugfixes

The update fixes a major bug introduced in 0.18.7 that could prevent instances from being terminated in the cloud.

Other

Full changelog: https://github.com/dstackai/dstack/releases/0.18.8

0.18.7-v1

29 Jul 14:12
c43ace7

0.18.7

The update brings all the features and bug fixes introduced in version 0.18.7.

Fleets

With fleets, you can now describe clusters declaratively and create them in both cloud and on-prem with a single command. Once a fleet is created, it can be used with dev environments, tasks, and services.

Cloud fleets

To provision a fleet in the cloud, specify the required resources, number of nodes, and other optional parameters.

type: fleet
name: my-fleet
placement: cluster
nodes: 2
resources:
  gpu: 24GB

On-prem fleets

To create a fleet from on-prem servers, specify their hosts along with the user, port, and SSH key for connection via SSH.

type: fleet
name: my-fleet
placement: cluster
ssh_config:
  user: ubuntu
  identity_file: ~/.ssh/id_rsa
  hosts:
    - 3.255.177.51
    - 3.255.177.52

To create or update the fleet, simply call the dstack apply command:

dstack apply -f examples/fleets/my-fleet.dstack.yml

Learn more about fleets in the documentation.

Deprecating dstack run

Now that we support dstack apply for gateways, volumes, and fleets, we have extended this support to dev environments, tasks, and services. Instead of using dstack run WORKING_DIR -f CONFIG_FILE, you can now use dstack apply -f CONFIG_FILE.

Also, it's now possible to specify a name for dev environments, tasks, and services, just like for gateways, volumes, and fleets.

type: dev-environment
name: my-ide

python: "3.11"

ide: vscode

resources:
  gpu: 80GB

This name is used as a run name and is more convenient than a random name. However, if you don't specify a name, dstack will assign a random name as before.

Major bugfixes

Important

This update fixes the broken kubernetes backend, which had been non-functional for the past few releases.

Other

Full changelog: 0.18.7