tweak examples, add light tests, add gh action (#23)
* tweak examples, add light tests, add gh action

* remove launch.json

* remove beta9ignore

* tweak to run in pr rn

* refactor a bit

* beam auth in action

* set env

* update deps

* try unset CI

* try setting manually

* add workspace id env

* merge and update one test

* fix test
dleviminzi authored Aug 2, 2024
1 parent 9203bda commit a065cd4
Showing 56 changed files with 4,081 additions and 58 deletions.
52 changes: 52 additions & 0 deletions .github/workflows/test.yml
@@ -0,0 +1,52 @@
name: Tests

on: [push, pull_request, workflow_dispatch]

jobs:
  light-test:
    runs-on: ubuntu-latest
    env:
      BEAM_AUTH_TOKEN: ${{ secrets.BEAM_AUTH_TOKEN }}
      BEAM_WORKSPACE_ID: ${{ secrets.BEAM_WORKSPACE_ID }}

    steps:
      - uses: actions/checkout@v3

      - name: set up python
        uses: actions/setup-python@v4
        with:
          python-version: '3.x'

      - name: install poetry
        uses: snok/install-poetry@v1
        with:
          version: '1.5.1'
          virtualenvs-create: true
          virtualenvs-in-project: true

      - name: load cached venv
        id: cached-poetry-dependencies
        uses: actions/cache@v3
        with:
          path: .venv
          key: venv-${{ runner.os }}-${{ hashFiles('**/poetry.lock') }}

      - name: install dependencies
        if: steps.cached-poetry-dependencies.outputs.cache-hit != 'true'
        run: poetry install --no-interaction --no-root

      - name: configure beam
        run: |
          source .venv/bin/activate
          mkdir -p ~/.beam
          cat << EOF > ~/.beam/config.ini
          [default]
          token = ${{ secrets.BEAM_AUTH_TOKEN }}
          gateway_host = gateway.beam.cloud
          gateway_port = 443
          EOF

      - name: run tests
        run: |
          source .venv/bin/activate
          pytest tests/light_test.py
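For anyone reproducing the CI run locally, a hedged sketch of the equivalent commands (this assumes Poetry is installed and that the two secrets above are available; the exported values are placeholders):

```bash
export BEAM_AUTH_TOKEN="<your-token>"       # placeholder for the repo secret
export BEAM_WORKSPACE_ID="<workspace-id>"   # placeholder for the repo secret

poetry install --no-interaction --no-root   # same install step as the workflow

# same config file the workflow writes before running tests
mkdir -p ~/.beam
cat << EOF > ~/.beam/config.ini
[default]
token = $BEAM_AUTH_TOKEN
gateway_host = gateway.beam.cloud
gateway_port = 443
EOF

poetry run pytest tests/light_test.py       # same test entrypoint as the workflow
```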
3 changes: 2 additions & 1 deletion .gitignore
@@ -3,4 +3,5 @@
 __pycache__
 __downloads__
 .env
-.venv
+.venv
+.vscode/launch.json
12 changes: 5 additions & 7 deletions 02_customizing_environment/custom_image.py
@@ -11,17 +11,15 @@
 image = Image(
     python_version="python3.9",
     python_packages=[
         "transformers",
         "torch",
     ],
-    commands=["apt-get update -y && apt-get install ffmpeg -y"],
-    base_image="docker.io/nvidia/cuda:12.1.1-runtime-ubuntu20.04",
+    commands=["apt-get update -y && apt-get install neovim -y"],
+    base_image="docker.io/nvidia/cuda:12.3.1-runtime-ubuntu20.04",
 )


-@endpoint()
-def handler(image=image):
+@endpoint(image=image)
+def handler():
     import torch

-    print(torch)
-    return {}
+    return {"torch_version": torch.__version__}
2 changes: 1 addition & 1 deletion 02_customizing_environment/gpu_acceleration.py
@@ -11,8 +11,8 @@

@endpoint(gpu="T4")
def handler():
print("📡 This is running on a GPU!")
print(subprocess.check_output(["nvidia-smi"]))
return "This container has a GPU attached 📡!"


if __name__ == "__main__":
8 changes: 5 additions & 3 deletions 02_customizing_environment/using_secrets.py
@@ -8,15 +8,17 @@
 Once the secret is created, it can be accessed as an environment variable (see below).
 """

+import os
 from beam import function

+os.environ["FOO"] = "bar"

-@function(secrets=["AWS_ACCESS_KEY"])
+@function(secrets=["FOO"])
 def handler():
     import os

-    my_secret = os.environ["AWS_ACCESS_KEY"]
-    print(f"Secret: {my_secret}")
+    my_secret = os.environ["FOO"]
+    return f"secret {my_secret}"


 if __name__ == "__main__":
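A note on setup: per the docstring, the secret named in `secrets=[...]` must already exist on Beam. Following the `beam secret create [SECRET]` syntax quoted in the Mixtral example later in this commit (the exact argument form is an assumption; consult the CLI help):

```
$ beam secret create FOO
```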
2 changes: 1 addition & 1 deletion 03_endpoint/keep_warm.py
@@ -11,4 +11,4 @@
 # Each container will stay up for 5 min before shutting down automatically
 @endpoint(keep_warm_seconds=300)
 def handler():
-    return {}
+    return "warm"
2 changes: 0 additions & 2 deletions 03_endpoint/preload_models.py
@@ -51,6 +51,4 @@ def predict(context, prompt):
         generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
     )[0]

-    print(result)
-
     return {"prediction": result}
7 changes: 4 additions & 3 deletions 04_task_queue/async_task.py
@@ -7,7 +7,7 @@
 Task queues are deployed the same way as web endpoints.
-As a recap, this is the CLI command to deploy an task queue or endpoint:
+As a recap, this is the CLI command to deploy a task queue or endpoint:
 ```
 beam deploy [file.py]:[function] --name [name]
@@ -36,5 +36,6 @@ def multiply(**inputs):
     return {"result": result}


-# Interactively enqueue a task without deploying
-multiply.put(x=1)
+if __name__ == "__main__":
+    # Interactively enqueue a task without deploying
+    multiply.put(x=1)
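Filling in the `beam deploy [file.py]:[function] --name [name]` template from the docstring above, a hypothetical deployment of this task queue would look like the following (the file and function names come from this diff; the `--name` value is illustrative):

```
$ beam deploy async_task.py:multiply --name multiply
```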
2 changes: 1 addition & 1 deletion 04_task_queue/task_callbacks.py
@@ -12,7 +12,7 @@
 from beam import function


-@function(callback_url="https://your-server.io")
+@function(callback_url="https://www.beam.cloud/")
 def handler(x):
     return {"result": x}

2 changes: 2 additions & 0 deletions 05_function/scaling_out.py
@@ -23,6 +23,8 @@ def main():
     for result in square.map(numbers):
         print(result)
         squared.append(result)
+
+    print("result", squared)


 if __name__ == "__main__":
34 changes: 23 additions & 11 deletions 05_function/sharing_state.py
@@ -6,17 +6,29 @@
 `Queue()` is a concurrency-safe distributed queue, accessible both locally and within remote containers.
 """

-from beam import Queue
+from beam import Queue, function

-val = [1, 2, 3]

-# Initialize the Queue
-q = Queue(name="myqueue")
+@function(cpu=0.1)
+def access_queue():
+    q = Queue(name="myqueue")
+    return q.pop()

-for i in range(100):
-    # Insert something to the queue
-    q.put(val)
-
-while not q.empty():
-    # Remove something from the queue
-    val = q.pop()
-    print(val)
+
+if __name__ == "__main__":
+    val = ["eli", "luke", "john", "nick"]
+
+    # Initialize the Queue
+    q = Queue(name="myqueue")
+
+    for i in val:
+        # Insert something to the queue
+        q.put(i)
+
+    while not q.empty():
+        # Remove something from the queue
+        val = q.pop()
+        print(val)
+
+    q.put("daniel")
+
+    print(access_queue.remote())
21 changes: 0 additions & 21 deletions 06_volume/mounting_volumes.py

This file was deleted.

14 changes: 10 additions & 4 deletions 06_volume/reading_and_writing_data.py → 06_volume/volume_use.py
@@ -7,17 +7,23 @@
 from beam import function, Volume


-VOLUME_PATH = "./model_weights"
+VOLUME_PATH = "./example-volume"


 @function(
-    volumes=[Volume(name="model-weights", mount_path=VOLUME_PATH)],
+    volumes=[Volume(name="example-volume", mount_path=VOLUME_PATH)],
 )
 def access_files():
     # Write files to a volume
     with open(f"{VOLUME_PATH}/somefile.txt", "w") as f:
-        f.write("Writing to the volume!")
+        f.write("On the volume!")

     # Read files from a volume
+    s = ""
     with open(f"{VOLUME_PATH}/somefile.txt", "r") as f:
-        f.read()
+        s = f.read()
+
+    return s
+
+
+if __name__ == "__main__":
+    print(access_files.remote())
@@ -35,11 +35,10 @@ def save_image():
     # Print other details about the output
     print(f"Output ID: {output.id}")
     print(f"Output Path: {output.path}")
-    print(f"Output Stats: {output.stat()}")
     print(f"Output Exists: {output.exists()}")

     return {"image": url}


 if __name__ == "__main__":
-    save_image()
+    save_image.remote()
@@ -1,4 +1,3 @@
-from beam import endpoint, Image
+from beam import Image, Volume, endpoint, Output

 CACHE_PATH = "./models"
File renamed without changes.
File renamed without changes.
File renamed without changes.
94 changes: 94 additions & 0 deletions 10_language_models/mixtral7b/app.py
@@ -0,0 +1,94 @@
"""
### Mixtral 7B ###
Note: This is a gated Huggingface model and you must request access to it here:
https://huggingface.co/mistralai/Mistral-7B-v0.1
Retrieve your HF token from this page: https://huggingface.co/settings/tokens
After your access is granted, make sure to save your Huggingface token on Beam:
```
$ beam secret create [SECRET]
```
...and add the secret to your Beam function decorator:
@endpoint(secrets=["HF_TOKEN"])
"""

from beam import endpoint, Image, Volume, env

# This ensures that these packages are only loaded when the script is running remotely on Beam
if env.is_remote():
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

CHECKPOINT = "mistralai/Mistral-7B-v0.1"
BEAM_VOLUME_PATH = "./cached_models"


def load_models():
model = AutoModelForCausalLM.from_pretrained(
CHECKPOINT,
torch_dtype=torch.float16,
device_map="auto",
cache_dir=BEAM_VOLUME_PATH,
)
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
return model, tokenizer


@endpoint(
secrets=["HF_TOKEN"],
on_start=load_models,
name="mistral-7b",
cpu=2,
memory="32Gi",
gpu="A10G",
image=Image(
python_version="python3.11",
python_packages=[
"transformers==4.42.3",
"sentencepiece==0.1.99",
"accelerate==0.23.0",
"torch==2.0.1",
],
),
volumes=[
Volume(
name="cached_models",
mount_path=BEAM_VOLUME_PATH,
)
],
)
def generate(context, **inputs):
# Retrieve model and tokenizer from on_start
model, tokenizer = context.on_start_value

# Inputs passed to API
prompt = inputs.get("prompt")
if not prompt:
return {"error": "Please provide a prompt."}

generate_args = {
"max_new_tokens": inputs.get("max_new_tokens", 128),
"temperature": inputs.get("temperature", 1.0),
"top_p": inputs.get("top_p", 0.95),
"top_k": inputs.get("top_k", 50),
"repetition_penalty": 1.0,
"no_repeat_ngram_size": 0,
"use_cache": True,
"do_sample": True,
"eos_token_id": tokenizer.eos_token_id,
"pad_token_id": tokenizer.pad_token_id,
}

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

with torch.no_grad():
output = model.generate(inputs=input_ids, **generate_args)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)

return {"generated_text": generated_text}
File renamed without changes.
File renamed without changes.
9 changes: 9 additions & 0 deletions README.md
@@ -10,6 +10,15 @@

 This repo includes various code examples that demonstrate the functionality of Beam.

+## Running examples
+
+Some of the examples showcase local use cases, while others are full deployments. For the examples that can be run locally, you can use Poetry to set up the Python environment:
+
+```bash
+poetry install
+poetry shell
+```
+
 ---

 **Attention Beta9 users**: These examples are for the [beam.cloud](https://beam.cloud) product. If you are coming from the open-source [Beta9](https://github.com/beam-cloud/beta9/) repo, any of these examples can be run by changing the Python imports from **beam** to **beta9**:
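For instance, the imports at the top of `custom_image.py` above would swap like this (a minimal sketch of the one-line change the README describes):

```python
# beam.cloud version
from beam import endpoint, Image

# Beta9 (open-source) version: same code, different package name
from beta9 import endpoint, Image
```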