Bump to v0.5.0
Also remove unused function run_curl_cmd.

Signed-off-by: Daniel J Walsh <[email protected]>
rhatdan committed Jan 9, 2025
1 parent ad08674 commit e764515
Showing 7 changed files with 8 additions and 18 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -112,7 +112,7 @@ curl -fsSL https://raw.githubusercontent.com/containers/ramalama/s/install.sh |

### Running Models

-You can `run` a chatbot on a model using the `run` command. By default, it pulls from the ollama registry.
+You can `run` a chatbot on a model using the `run` command. By default, it pulls from the Ollama registry.

Note: RamaLama will inspect your machine for native GPU support and then will
use a container engine like Podman to pull an OCI container image with the
@@ -158,7 +158,7 @@ ollama://moondream:latest 6 days ago
```
### Pulling Models

-You can `pull` a model using the `pull` command. By default, it pulls from the ollama registry.
+You can `pull` a model using the `pull` command. By default, it pulls from the Ollama registry.

```
$ ramalama pull granite-code
@@ -167,7 +167,7 @@ $ ramalama pull granite-code

### Serving Models

-You can `serve` multiple models using the `serve` command. By default, it pulls from the ollama registry.
+You can `serve` multiple models using the `serve` command. By default, it pulls from the Ollama registry.

```
$ ramalama serve --name mylama llama3
2 changes: 1 addition & 1 deletion docs/ramalama.1.md
@@ -19,7 +19,7 @@ Running in containers eliminates the need for users to configure the host system

RamaLama pulls AI Models from model registries. Starting a chatbot or a rest API service from a simple single command. Models are treated similarly to how Podman and Docker treat container images.

-When both Podman and Docker are installed, RamaLama defaults to Podman, The `RAMALAMA_CONTAINER_ENGINE=docker` environment variable can override this behaviour. When neither are installed RamaLama attempts to run the model with software on the local system.
+When both Podman and Docker are installed, RamaLama defaults to Podman, The `RAMALAMA_CONTAINER_ENGINE=docker` environment variable can override this behavior. When neither are installed RamaLama attempts to run the model with software on the local system.

Note:

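The man page hunk above describes the engine selection behavior: prefer Podman, let `RAMALAMA_CONTAINER_ENGINE=docker` override it, and fall back to software on the local system when no engine is installed. A minimal sketch of that documented logic (an illustration only, not the project's actual code):

```
import os
import shutil


def pick_container_engine():
    # An explicit override wins, e.g. RAMALAMA_CONTAINER_ENGINE=docker.
    override = os.environ.get("RAMALAMA_CONTAINER_ENGINE")
    if override:
        return override
    # Otherwise prefer Podman, then Docker.
    for engine in ("podman", "docker"):
        if shutil.which(engine):
            return engine
    # Neither engine found: run the model with software on the local system.
    return None


print(pick_container_engine() or "local")
```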
2 changes: 1 addition & 1 deletion docs/ramalama.conf.5.md
@@ -105,7 +105,7 @@ llama.cpp explains this as:

The lower the number is, the more deterministic the response.

-The higher the number is the more creative the response is, but moee likely to hallucinate when set too high.
+The higher the number is the more creative the response is, but more likely to hallucinate when set too high.

Usage: Lower numbers are good for virtual assistants where we need deterministic responses. Higher numbers are good for roleplay or creative tasks like editing stories

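The ramalama.conf.5.md hunk above documents the sampling temperature: lower values make responses more deterministic, higher values make them more creative but more prone to hallucination. A generic illustration of why that is (temperature-scaled softmax sampling; this is not llama.cpp or RamaLama code):

```
import math
import random


def sample(logits, temperature=0.8):
    # Dividing logits by the temperature sharpens (<1) or flattens (>1)
    # the probability distribution before sampling.
    scaled = [l / temperature for l in logits]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]


print(sample([2.0, 1.0, 0.1], temperature=0.2))  # almost always picks index 0
print(sample([2.0, 1.0, 0.1], temperature=2.0))  # noticeably more varied
```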
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "ramalama"
version = "0.4.0"
version = "0.5.0"
dependencies = [
"argcomplete",
]
10 changes: 0 additions & 10 deletions ramalama/common.py
@@ -99,16 +99,6 @@ def find_working_directory():
    return os.path.dirname(__file__)


-def run_curl_cmd(args, filename):
-    if not verify_checksum(filename):
-        try:
-            run_cmd(args, debug=args.debug)
-        except subprocess.CalledProcessError as e:
-            if e.returncode == 22:
-                perror(filename + " not found")
-            raise e


def verify_checksum(filename):
"""
Verifies if the SHA-256 checksum of a file matches the checksum provided in
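The removed `run_curl_cmd` wrapped a download in a checksum check; `verify_checksum`, which stays, compares a file's SHA-256 digest against an expected value. A minimal sketch of that kind of check, assuming the expected digest is carried in the filename itself (e.g. `sha256:<hex>`, as Ollama-style blob names suggest); the real implementation in ramalama/common.py may differ:

```
import hashlib
import os
import re


def checksum_matches(filename):
    # Hypothetical: pull the expected digest out of a name like "sha256:<hex>".
    match = re.search(r"sha256[:-]([0-9a-f]{64})", os.path.basename(filename))
    if not match:
        return False
    digest = hashlib.sha256()
    with open(filename, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == match.group(1)
```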
2 changes: 1 addition & 1 deletion rpm/python-ramalama.spec
@@ -1,7 +1,7 @@
%global pypi_name ramalama
%global forgeurl https://github.com/containers/%{pypi_name}
# see ramalama/version.py
-%global version0 0.4.0
+%global version0 0.5.0
%forgemeta

%global summary RamaLama is a command line tool for working with AI LLM models
2 changes: 1 addition & 1 deletion setup.py
@@ -63,7 +63,7 @@ def find_package_modules(self, package, package_dir):

setuptools.setup(
    name="ramalama",
-    version="0.4.0",
+    version="0.5.0",
    packages=find_packages(),
    cmdclass={"build_py": build_py},
    scripts=["bin/ramalama"],
