build: ✨ update local setup #140

Open · wants to merge 8 commits into base: main
5 changes: 4 additions & 1 deletion .gitignore
@@ -132,4 +132,7 @@ cython_debug/

# Notebook Model Downloads
notebooks/PyTorchModels/
pytorch-model-scan-results.json

# Code Coverage
cov.xml
61 changes: 48 additions & 13 deletions Makefile
@@ -1,42 +1,77 @@
.DEFAULT_GOAL := help
VERSION ?= $(shell dunamai from git --style pep440 --format "{base}.dev{distance}+{commit}")

install-dev:
.PHONY: env
env: ## Display information about the current environment.
poetry env info

.PHONY: install-dev
install-dev: ## Install all dependencies including dev and test dependencies, as well as pre-commit.
poetry install --with dev --with test --extras "tensorflow h5py"
pre-commit install

install:
.PHONY: install
install: ## Install required dependencies.
poetry install

install-prod:
.PHONY: install-prod
install-prod: ## Install prod dependencies.
poetry install --with prod

install-test:
.PHONY: install-test
install-test: ## Install test dependencies.
poetry install --with test --extras "tensorflow h5py"

clean:
pip uninstall modelscan
.PHONY: clean
clean: ## Uninstall modelscan
python -m pip uninstall modelscan

.PHONY: test
test: ## Run pytests.
poetry run pytest tests/

test:
poetry run pytest
.PHONY: test-cov
test-cov: ## Run pytests with code coverage.
poetry run pytest --cov=modelscan --cov-report xml:cov.xml tests/

build:
.PHONY: build
build: ## Build the source and wheel archive.
poetry build

.PHONY: build-prod
build-prod: version
build-prod: ## Update the version and build wheel archive.
poetry build

version:
.PHONY: version
version: ## Bumps the version of the project.
echo "__version__ = '$(VERSION)'" > modelscan/_version.py
poetry version $(VERSION)

.PHONY: lint
lint: bandit mypy
lint: ## Run all the linters.

bandit:
.PHONY: bandit
bandit: ## Run SAST scanning.
poetry run bandit -c pyproject.toml -r .

mypy:
.PHONY: mypy
mypy: ## Run type checking.
poetry run mypy --ignore-missing-imports --strict --check-untyped-defs .

format:
.PHONY: format
format: ## Run black to format the code.
black .


.PHONY: help
help: ## List all targets and help information.
@grep --no-filename -E '^([a-z.A-Z_%-/]+:.*?)##' $(MAKEFILE_LIST) | sort | \
awk 'BEGIN {FS = ":.*?(## ?)"}; { \
if (length($$1) > 0) { \
printf " \033[36m%-30s\033[0m %s\n", $$1, $$2; \
} else { \
printf "%s\n", $$2; \
} \
}'
58 changes: 31 additions & 27 deletions README.md
@@ -7,17 +7,20 @@
[![Supported Versions](https://img.shields.io/pypi/pyversions/modelscan.svg)](https://pypi.org/project/modelscan)
[![pypi Version](https://img.shields.io/pypi/v/modelscan)](https://pypi.org/project/modelscan)
[![License: Apache 2.0](https://img.shields.io/crates/l/apa)](https://opensource.org/license/apache-2-0/)
[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit)](https://github.com/pre-commit/pre-commit)

# ModelScan: Protection Against Model Serialization Attacks

Machine Learning (ML) models are shared publicly over the internet, within teams, and across teams. The rise of Foundation Models has resulted in public ML models being increasingly consumed for further training/fine tuning. ML models are increasingly used to make critical decisions and power mission-critical applications.
Despite this, models are not scanned with the rigor applied to a PDF file in your inbox.

This needs to change, and proper tooling is the first step.

![ModelScan Preview](/imgs/modelscan-unsafe-model.gif)

ModelScan is an open source project that scans models to determine if they contain
unsafe code. It is the first model scanning tool to support multiple model formats.
ModelScan currently supports: H5, Pickle, and SavedModel formats. This protects you
when using PyTorch, TensorFlow, Keras, Sklearn, XGBoost, with more on the way.

## TL;DR
@@ -38,9 +41,9 @@ modelscan -p /path/to/model_file.pkl

Models are often created from automated pipelines; others may come from a data scientist’s laptop. In either case, the model needs to move from one machine to another before it is used. That process of saving a model to disk is called serialization.

A **Model Serialization Attack** is where malicious code is added to the contents of a model during serialization (saving) before distribution — a modern version of the Trojan Horse.

The attack functions by exploiting the saving and loading process of models. When you load a model with `model = torch.load(PATH)`, PyTorch opens the contents of the file and begins running the code within. The second you load the model, the exploit has executed.
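
To make the mechanism concrete, here is a minimal sketch (not taken from this repository) of how a pickle-based payload works; `torch.load` uses pickle under the hood, so the same idea applies to model files saved that way:

```python
import os
import pickle


class MaliciousPayload:
    # Pickle calls __reduce__ to learn how to rebuild the object; whatever
    # callable it returns is invoked at load time, so os.system rides along.
    def __reduce__(self):
        return (os.system, ("echo 'arbitrary code ran at load time'",))


# "Saving the model" embeds the call in the serialized bytes...
with open("malicious_model.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# ...and simply loading it runs the command, before any inference happens.
with open("malicious_model.pkl", "rb") as f:
    pickle.load(f)
```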

A **Model Serialization Attack** can be used to execute:

@@ -55,19 +58,19 @@ These attacks are incredibly simple to execute and you can view working examples

### How ModelScan Works

If loading a model with your machine learning framework automatically executes the attack,
how does ModelScan check the content without loading the malicious code?

Simple, it reads the content of the file one byte at a time just like a string, looking for
code signatures that are unsafe. This makes it incredibly fast, scanning models in the time it
takes for your computer to process the total filesize from disk (seconds in most cases). It is also secure.
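
As a rough illustration of that idea (a simplified sketch, not ModelScan's actual implementation or signature list), a pickle file can be inspected opcode by opcode for suspicious imports without ever deserializing it:

```python
import pickletools

# Illustrative short list of red-flag globals; ModelScan's real rules are
# broader and maintained in its settings.
UNSAFE_GLOBALS = {("os", "system"), ("posix", "system"),
                  ("builtins", "eval"), ("builtins", "exec")}


def find_suspicious_globals(path: str) -> list:
    """Walk the pickle opcode stream and collect module/name pairs pulled in
    via GLOBAL or STACK_GLOBAL, without ever unpickling (executing) anything."""
    hits = []
    recent_strings = []  # string constants pushed before a STACK_GLOBAL
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":          # older protocols: arg is "module name"
            module, _, name = arg.partition(" ")
        elif opcode.name == "STACK_GLOBAL":  # newer protocols: operands come off the stack
            if len(recent_strings) < 2:
                continue
            module, name = recent_strings[-2], recent_strings[-1]
        else:
            if isinstance(arg, str):
                recent_strings.append(arg)
            continue
        if (module, name) in UNSAFE_GLOBALS:
            hits.append((module, name))
    return hits


# e.g. scanning the payload file from the earlier sketch would flag the system call
print(find_suspicious_globals("malicious_model.pkl"))
```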

ModelScan ranks the unsafe code as:

- CRITICAL
- HIGH
- MEDIUM
- LOW

![ModelScan Flow Chart](/imgs/model_scan_flow_chart.png)

@@ -78,7 +81,7 @@ it opens you up for attack. Use your discretion to determine if that is appropri

### What Models and Frameworks Are Supported?

This will be expanding continually, so look out for changes in our release notes.

At present, ModelScan supports any Pickle derived format and many others:

@@ -90,7 +93,8 @@ At present, ModelScan supports any Pickle derived format and many others:
| | [keras.models.save(save_format= 'keras')](https://www.tensorflow.org/guide/keras/serialization_and_saving) | Keras V3 (Hierarchical Data Format) | Yes |
| Classic ML Libraries (Sklearn, XGBoost etc.) | pickle.dump(), dill.dump(), joblib.dump(), cloudpickle.dump() | Pickle, Cloudpickle, Dill, Joblib | Yes |

### Installation

ModelScan is installed on your system as a Python package (Python 3.8 to 3.11 supported). As shown above, you can install
it by running this in your terminal:

@@ -106,6 +110,7 @@ modelscan = ">=0.1.1"
```

Scanners for TensorFlow or HDF5 formatted models require installation with extras:

```bash
pip install 'modelscan[ tensorflow, h5py ]'
```
@@ -114,22 +119,23 @@ pip install 'modelscan[ tensorflow, h5py ]'

ModelScan supports the following arguments via the CLI:

| Usage | Argument | Explanation |
|----------------------------------------------------------------------------------|------------------|---------------------------------------------------------|
| ```modelscan -h``` | -h or --help | View usage help |
| ```modelscan -v``` | -v or --version | View version information |
| ```modelscan -p /path/to/model_file``` | -p or --path | Scan a locally stored model |
| ```modelscan -p /path/to/model_file --settings-file ./modelscan-settings.toml``` | --settings-file | Scan a locally stored model using custom configurations |
| ```modelscan create-settings-file``` | -l or --location | Create a configurable settings file |
| ```modelscan -r``` | -r or --reporting-format | Format of the output. Options are console, json, or custom (to be defined in settings-file). Default is console |
| ```modelscan -r reporting-format -o file-name``` | -o or --output-file | Optional file name for output report |
| ```modelscan --show-skipped``` | --show-skipped | Print a list of files that were skipped during the scan |


Remember, models are just like any other form of digital media; you should scan content from any untrusted source before use.
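
As one way to wire this into an automated pipeline (a sketch under assumed paths, not an official integration), the flags from the table above can be combined and the exit code used to gate a build step:

```python
import subprocess
import sys

MODEL_PATH = "artifacts/model.pkl"   # hypothetical artifact location
REPORT_PATH = "scan-results.json"    # hypothetical report name

# -p points at the model; -r json with -o writes a machine-readable report.
result = subprocess.run(
    ["modelscan", "-p", MODEL_PATH, "-r", "json", "-o", REPORT_PATH],
    check=False,
)

# Any non-zero exit code (findings or a scan failure; see the exit codes below)
# should fail the pipeline step instead of shipping the model.
if result.returncode != 0:
    sys.exit(result.returncode)
```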

#### CLI Exit Codes

The CLI exit status codes are:

- `0`: Scan completed successfully, no vulnerabilities found
- `1`: Scan completed successfully, vulnerabilities found
- `2`: Scan failed, modelscan threw an error while scanning
@@ -143,9 +149,9 @@ Once a scan has been completed you'll see output like this if an issue is found:
![ModelScan Scan Output](https://github.com/protectai/modelscan/raw/main/imgs/cli_output.png)

Here we have a model that contains unsafe operators for both `ReadFile` and `WriteFile`.
Clearly we do not want our models reading and writing files arbitrarily. We would now reach out
to the creator of this model to determine what they expected this to do. In this particular case
it allows an attacker to read our AWS credentials and write them to another place.

That is a firm NO for usage.

@@ -182,13 +188,13 @@ to learn more!

## Licensing

Copyright 2023 Protect AI

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

<http://www.apache.org/licenses/LICENSE-2.0>

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
@@ -201,9 +207,7 @@ limitations under the License.
We were heavily inspired by [Matthieu Maitre](http://mmaitre314.github.io) who built [PickleScan](https://github.com/mmaitre314/picklescan).
We appreciate the work and have extended it significantly with ModelScan. ModelScan is open sourced in a similar spirit to PickleScan.

## Contributing

We would love to have you contribute to our open source ModelScan project.
If you would like to contribute, please follow the details on the [Contribution page](https://github.com/protectai/modelscan/blob/main/CONTRIBUTING.md).
13 changes: 7 additions & 6 deletions docs/model_serialization_attacks.md
@@ -5,7 +5,7 @@ Machine Learning(ML) models are the foundational asset in ML powered application
Models can be compromised in various ways: some attacks are new, like adversarial machine learning methods, while others are common to traditional applications, like denial of service attacks. While these can be a threat to safely operating an ML powered application, this document focuses on exposing the risk of Model Serialization Attacks.
In a Model Serialization Attack, malicious code is added to a model when it is saved; this is also known as a code injection attack. When any user or system then loads the model for further training or inference, the attack code is executed immediately, often with no visible change in behavior for users. This makes the attack a powerful vector and an easy point of entry for attacking broader machine learning components.

To secure ML models, you need to understand what’s inside them and how they are stored on disk in a process called serialization.

ML models are composed of:

@@ -30,7 +30,7 @@ Before digging into how a Model Serialization Attack works and how to scan for t

## 1. Pickle Variants

**Pickle** and its variants (cloudpickle, dill, joblib) all store objects to disk in a general purpose way. These frameworks are completely ML agnostic and store Python objects as-is.

Pickle is the de facto library for serializing ML models for the following ML frameworks:

@@ -47,15 +47,15 @@ Pickle is also used to store vectors/tensors only for following frameworks:
Pickle allows for arbitrary code execution and is highly vulnerable to code injection attacks with a very large attack surface. The Pickle documentation makes this clear with the following warning:

> **Warning:** The `pickle` module **is not secure**. Only unpickle data you trust.
>
> It is possible to construct malicious pickle data which will **execute
> arbitrary code during unpickling**. Never unpickle data that could have come
> from an untrusted source, or that could have been tampered with.
>
> Consider signing data with [hmac](https://docs.python.org/3/library/hmac.html#module-hmac) if you need to ensure that it has not
> been tampered with.
>
> Safer serialization formats such as [json](https://docs.python.org/3/library/json.html#module-json) may be more appropriate if
> you are processing untrusted data.

@@ -129,6 +129,7 @@ With the exception of pickle, these formats cannot execute arbitrary code. Howev
With an understanding of the various approaches to model serialization, explore how many popular choices are vulnerable to this attack with an end-to-end explanation.

# End to end Attack Scenario

1. Internal attacker:
The attack complexity will vary depending on the access entrusted to an internal actor.
2. External attacker:
15 changes: 8 additions & 7 deletions docs/severity_levels.md
@@ -1,15 +1,16 @@
# modelscan Severity Levels

modelscan classifies potentially malicious code injection attacks in the following four severity levels.
<br> </br>

- **CRITICAL:** A model file that contains unsafe operators/globals that can execute code is classified at critical severity. These operators are:
- exec, eval, runpy, sys, open, breakpoint, os, subprocess, socket, nt, posix
<br> </br>
- **HIGH:** A model file that contains unsafe operators/globals that cannot execute code but can still be exploited is classified at high severity. These operators are:
- webbrowser, httplib, request.api, Tensorflow ReadFile, Tensorflow WriteFile
<br> </br>
- **MEDIUM:** A model file that contains operators/globals that are neither supported by the parent ML library nor known to modelscan is classified at medium severity.
- A Keras Lambda layer can also be used for arbitrary code execution. In general, it is not best practice to add a Lambda layer to an ML model, as it can be exploited for code injection attacks (see the sketch after this list).
- Work in Progress: Custom operators will be classified at medium severity.
<br> </br>
- **LOW:** At the moment no operators/globals are classified at low severity level.
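
To illustrate the Lambda point above, here is a deliberately simplified sketch (not from this repository; exact save/load behavior varies by Keras version) of how a Lambda layer can carry arbitrary Python along with a model:

```python
import tensorflow as tf

# The "layer" below also shells out; __import__ is used so the payload does not
# depend on any module-level globals once the lambda is serialized with the model.
payload = tf.keras.layers.Lambda(
    lambda x: (__import__("os").system("echo 'ran during model execution'"), x)[1]
)

inputs = tf.keras.Input(shape=(4,))
outputs = payload(inputs)
model = tf.keras.Model(inputs, outputs)

# The lambda's bytecode travels inside the saved file, so anyone who loads and
# runs this model also runs the shell command (recent Keras versions make you
# opt out of safe mode before they will deserialize it).
model.save("lambda_model.h5")
```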