VC models PyPi package #9

Open · wants to merge 4 commits into base: main
24 changes: 11 additions & 13 deletions README.md
@@ -2,8 +2,7 @@
[Website](https://eai-vc.github.io/) | [Blog post](https://ai.facebook.com/blog/robots-learning-video-simulation-artificial-visual-cortex-vc-1) | [Paper](https://arxiv.org/abs/2303.18240)

<p align="center">
<img src="res/img/vc1_teaser.gif" alt="Visual Cortex and CortexBench" width="600">

<img src="https://eai-vc.github.io/assets/images/vc1_teaser.gif" alt="Visual Cortex and CortexBench" width="600">
<br />
<br />
<a href="https://opensource.fb.com/support-ukraine"><img alt="Support Ukraine" src="https://img.shields.io/badge/Support-Ukraine-FFD500?style=flat&labelColor=005BBB" /></a>
@@ -19,30 +18,29 @@ We're releasing CortexBench and our first Visual Cortex model: VC-1. CortexBench

## Open-Sourced Models
We're open-sourcing two visual cortex models ([model cards](./MODEL_CARD.md)):
-* VC-1 (ViT-L): Our best model, uses a ViT-L backbone, also known simply as `VC-1` | [Download](https://dl.fbaipublicfiles.com/eai-vc/vc1_vitl.pth)
-* VC-1-base (ViT-B): pre-trained on the same data as VC-1 but with a smaller backbone (ViT-B) | [Download](https://dl.fbaipublicfiles.com/eai-vc/vc1_vitb.pth)
+* VC-1 (ViT-L): Our best model, uses a ViT-L backbone, also known simply as `VC-1` | [Download](https://huggingface.co/facebook/vc1-large/resolve/main/pytorch_model.bin)
+* VC-1-base (ViT-B): pre-trained on the same data as VC-1 but with a smaller backbone (ViT-B) | [Download](https://huggingface.co/facebook/vc1-base/resolve/main/pytorch_model.bin)

## Installation

-To install our visual cortex models and CortexBench, please follow the instructions in [INSTALLATION.md](INSTALLATION.md).
+To install our visual cortex models and CortexBench, please follow the instructions in [INSTALLATION.md](./INSTALLATION.md).

## Directory structure

- `vc_models`: contains config files for visual cortex models, the model loading code and, as well as some project utilities.
-  - See [README](./vc-models/README.md) for more details.
+  - See [README](./vc_models/README.md) for more details.
- `cortexbench`: embodied AI downstream tasks to evaluate pre-trained representations.
- `third_party`: Third party submodules which aren't expected to change often.
- `data`: Gitignored directory, needs to be created by the user. Is used by some downstream tasks to find (symlinks to) datasets, models, etc.

-## Load VC-1
+## Load VC-1

To use the VC-1 model, you can install the `vc_models` module with pip. Then, you can load the model with code such as the following or follow [our tutorial](./tutorial/tutorial_vc.ipynb):
```python
import vc_models
from vc_models.models.vit import model_utils

-model,embd_size,model_transforms,model_info = model_utils.load_model(model_utils.VC1_LARGE_NAME)
-# To use the smaller VC-1-base model use model_utils.VC1_BASE_NAME.
+model,embd_size,model_transforms,model_info = vc_models.load_model(vc_models.VC1_LARGE_NAME)
+# To use the smaller VC-1-base model use vc_models.VC1_BASE_NAME.

# The img loaded should be Bx3x250x250
img = your_function_here ...
@@ -59,7 +57,7 @@ To reproduce the results with the VC-1 model, please follow the README instructi

## Load Your Own Encoder Model and Run Across All Benchmarks
To load your own encoder model and run it across all benchmarks, follow these steps:
-1. Create a configuration for your model `<your_model>.yaml` in [the model configs folder](vc_models/src/vc_models/conf/model/) of the `vc_models` module.
+1. Create a configuration for your model `<your_model>.yaml` in [the model configs folder](./vc_models/src/vc_models/conf/model/) of the `vc_models` module.
1. In the config, you can specify the custom methods (as `_target_` field) for loading your encoder model.
1. Then, you can load the model as follows:
```python
@@ -72,7 +70,7 @@

## Contributing

-If you would like to contribute to Visual Cortex and CortexBench, please see [CONTRIBUTING.md](CONTRIBUTING.md).
+If you would like to contribute to Visual Cortex and CortexBench, please see [CONTRIBUTING.md](./CONTRIBUTING.md).

## Citing Visual Cortex
If you use Visual Cortex in your research, please cite [the following paper](https://arxiv.org/abs/2303.18240):
@@ -89,7 +87,7 @@ If you use Visual Cortex in your research, please cite [the following paper](htt
```

## License
-The majority of Visual Cortex and CortexBench code is licensed under CC-BY-NC (see the [LICENSE file](/LICENSE) for details), however portions of the project are available under separate license terms: trifinger_simulation is licensed under the BSD 3.0 license; mj_envs, mjrl are licensed under the Apache 2.0 license; Habitat Lab, dmc2gym, mujoco-py are licensed under the MIT license.
+The majority of Visual Cortex and CortexBench code is licensed under CC-BY-NC (see the [LICENSE file](./LICENSE) for details), however portions of the project are available under separate license terms: trifinger_simulation is licensed under the BSD 3.0 license; mj_envs, mjrl are licensed under the Apache 2.0 license; Habitat Lab, dmc2gym, mujoco-py are licensed under the MIT license.

The trained policy models and the task datasets are considered data derived from the corresponding scene datasets.

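For reviewers who want to sanity-check the renamed loading entry point end to end, here is a minimal sketch. It assumes the `vc_models` package from this PR is installed; the random `img` tensor and the printed shape are illustrative stand-ins, not part of the diff:

```python
import torch
import vc_models

# Load VC-1 (ViT-L); pass vc_models.VC1_BASE_NAME for the smaller backbone.
model, embd_size, model_transforms, model_info = vc_models.load_model(
    vc_models.VC1_LARGE_NAME
)

# Stand-in batch of RGB observations, shaped B x 3 x 250 x 250 as the README asks.
img = torch.rand(4, 3, 250, 250)

# Apply the model's own preprocessing transforms, then embed.
with torch.no_grad():
    embedding = model(model_transforms(img))

print(embedding.shape)  # expected: torch.Size([4, embd_size])
```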
4 changes: 2 additions & 2 deletions cortexbench/habitat_vc/configs/hydra/output/path.yaml
@@ -1,7 +1,7 @@
# @package hydra
run:
-  dir: /checkpoint/maksymets/vc/results/${hydra.job.name}/${oc.env:USER}/${WANDB.name}
+  dir: /checkpoint/maksymets/eaif/results/${hydra.job.name}/${oc.env:USER}/${WANDB.name}

> **Contributor:** These paths should not be here?

  subdir: ${hydra.job.num}_${hydra.job.override_dirname}
sweep:
-  dir: /checkpoint/maksymets/vc/results/${hydra.job.name}/${oc.env:USER}/${WANDB.name}
+  dir: /checkpoint/maksymets/eaif/results/${hydra.job.name}/${oc.env:USER}/${WANDB.name}
  subdir: ${hydra.job.num}_${hydra.job.override_dirname}
@@ -3,4 +3,3 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# This source code is licensed under the CC-BY-NC license found in the
# LICENSE file in the root directory of this source tree.

1 change: 0 additions & 1 deletion cortexbench/habitat_vc/habitat_vc/models/__init__.py
@@ -3,4 +3,3 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# This source code is licensed under the CC-BY-NC license found in the
# LICENSE file in the root directory of this source tree.

4 changes: 2 additions & 2 deletions cortexbench/mujoco_vc/src/mujoco_vc/model_loading.py
@@ -4,7 +4,7 @@
# This source code is licensed under the CC-BY-NC license found in the
# LICENSE file in the root directory of this source tree.

-from vc_models import vc_models_dir_path
+import vc_models
from omegaconf import OmegaConf
from PIL import Image
import os
@@ -20,7 +20,7 @@ def load_pretrained_model(embedding_name, input_type=np.ndarray, *args, **kwargs
"""
Load the pretrained model based on the config corresponding to the embedding_name
"""

vc_models_dir_path = os.path.dirname(vc_models.__file__)
config_path = os.path.join(
vc_models_dir_path, "conf/model", embedding_name + ".yaml"
)
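The change above swaps a repo-relative constant for a path derived from the installed package, which is what lets the pip-installed module find its bundled configs. A hedged sketch of that resolution in isolation (`vc1_vitl` is an assumed example config name, not something this diff pins down):

```python
import os

import vc_models
from omegaconf import OmegaConf

# Locate the installed vc_models package, then the model configs shipped
# with it via package_data (see the setup.py change later in this PR).
vc_models_dir_path = os.path.dirname(vc_models.__file__)
config_path = os.path.join(vc_models_dir_path, "conf/model", "vc1_vitl" + ".yaml")

config = OmegaConf.load(config_path)  # hydra-style config with a _target_ loader
print(OmegaConf.to_yaml(config))
```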
4 changes: 2 additions & 2 deletions cortexbench/mujoco_vc/src/mujoco_vc/rollout_utils.py
@@ -99,7 +99,7 @@ def rollout_from_init_states(
    # DMC test
    data_paths = pickle.load(
        open(
"/checkpoint/maksymets/vc/datasets/dmc-expert-v0.1/dmc_reacher_easy-v1.pickle",
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Why are these paths hardcoded?

"/checkpoint/maksymets/eaif/datasets/dmc-expert-v0.1/dmc_reacher_easy-v1.pickle",
"rb",
)
)
@@ -129,7 +129,7 @@ def rollout_from_init_states(
    # Adroit test
    data_paths = pickle.load(
        open(
-            "/checkpoint/maksymets/vc/datasets/adroit-expert-v0.1/pen-v0.pickle", "rb"
+            "/checkpoint/maksymets/eaif/datasets/adroit-expert-v0.1/pen-v0.pickle", "rb"
        )
    )
    e = env_constructor(
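One way to answer the reviewer's question, sketched purely as a suggestion: resolve the dataset root from an environment variable with the current cluster path as a fallback. The `VC_DATASETS_DIR` variable is hypothetical and not part of this PR:

```python
import os
import pickle

# Hypothetical: prefer a user-supplied dataset root over the hardcoded
# /checkpoint path, so the rollout utilities work outside the FAIR cluster.
datasets_root = os.environ.get(
    "VC_DATASETS_DIR", "/checkpoint/maksymets/eaif/datasets"
)

path = os.path.join(datasets_root, "dmc-expert-v0.1", "dmc_reacher_easy-v1.pickle")
with open(path, "rb") as f:
    data_paths = pickle.load(f)
```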
@@ -4,6 +4,6 @@ run:
  dir: ./outputs/${hydra.job.name}/${now:%Y-%m-%d}_${now:%H-%M-%S}
  subdir: ${hydra.job.num}_${hydra.job.override_dirname}
sweep:
-  dir: /checkpoint/maksymets/vc/results/${hydra.job.name}/${oc.env:USER}/${now:%Y-%m-%d}_${now:%H-%M-%S}
+  dir: /checkpoint/maksymets/eaif/results/${hydra.job.name}/${oc.env:USER}/${now:%Y-%m-%d}_${now:%H-%M-%S}

> **Contributor:** Use local paths

  subdir: ${hydra.job.num}_${hydra.job.override_dirname}
2 changes: 1 addition & 1 deletion vc_models/README.md
@@ -1,4 +1,4 @@
-# EAI Foundation Models
+# Visual Cortex Models

This package contains a minimal-dependency set of model loading code. Model definitions are defined under `src/vc_models/models`, with configurations (including reference checkpoint filepaths) under `src/vc_models/conf`.

6 changes: 6 additions & 0 deletions vc_models/requirements.txt
@@ -0,0 +1,6 @@
torch >= 1.10.2
torchvision >= 0.11.3
timm==0.6.11
hydra-core
wandb>=0.13
requests
91 changes: 91 additions & 0 deletions vc_models/scripts/build_pkg.sh
@@ -0,0 +1,91 @@
#!/bin/bash

# Check if the script is being run from the correct directory
if [ ! -f "setup.py" ]; then
    echo "Error: This script should be run from the directory containing the 'setup.py' file."
    echo "Please navigate to the correct directory and run the script as follows:"
    echo "  ./scripts/build_pkg.sh"
    exit 1
fi


# Function to check if a Python package is installed
python_package_exists() {
    python -c "import pkgutil; exit(0 if pkgutil.find_loader('$1') else 1)" &> /dev/null
}

# Check for Python package dependencies
dependencies=("twine" "wheel" "setuptools")
missing_dependencies=()

for dep in "${dependencies[@]}"; do
    if ! python_package_exists "$dep"; then
        missing_dependencies+=("$dep")
    fi
done

if [ ${#missing_dependencies[@]} -ne 0 ]; then
    echo "The following Python package dependencies are missing: ${missing_dependencies[*]}"
    echo "Please install them using the following command:"
    echo "  pip install ${missing_dependencies[*]}"
    exit 1
fi

# Function to prompt the user for confirmation
confirm() {
    read -p "$1 (y/n): " choice
    case "$choice" in
        y|Y ) return 0;;
        n|N ) return 1;;
        * ) echo "Invalid input. Please enter 'y' or 'n'."; confirm "$1";;
    esac
}

# Clean build artifacts
if confirm "Do you want to clean previous build artifacts?"; then
    rm -rf build dist *.egg-info
    echo "Cleaned previous build artifacts."
fi

# Run tests (replace `python -m unittest` with your test command if different)
if confirm "Do you want to run tests?"; then
    python -m unittest
    if [ $? -ne 0 ]; then
        echo "Tests failed. Please fix the issues before building and uploading the package."
        exit 1
    else
        echo "All tests passed."
    fi
fi

# Build the package
if confirm "Do you want to build the package?"; then
    python setup.py sdist bdist_wheel
    if [ $? -eq 0 ]; then
        echo "Package built successfully."
    else
        echo "Failed to build the package."
        exit 1
    fi
fi

# Upload to TestPyPI
if confirm "Do you want to upload the package to TestPyPI?"; then
    twine upload --repository-url https://test.pypi.org/legacy/ dist/*
    if [ $? -eq 0 ]; then
        echo "Package uploaded to TestPyPI."
    else
        echo "Failed to upload the package to TestPyPI."
    fi
fi

# Upload to PyPI
if confirm "Do you want to upload the package to PyPI?"; then
    twine upload dist/*
    if [ $? -eq 0 ]; then
        echo "Package uploaded to PyPI."
    else
        echo "Failed to upload the package to PyPI."
    fi
fi
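For reference, the dependency probe in `python_package_exists` boils down to the following standalone check; note this sketch uses `importlib.util.find_spec` rather than the script's `pkgutil.find_loader` (the latter is deprecated on newer Pythons), so treat it as an equivalent, not a transcription:

```python
import importlib.util

# Same idea as python_package_exists() in build_pkg.sh: report which
# build-time dependencies are importable in the current environment.
for dep in ("twine", "wheel", "setuptools"):
    found = importlib.util.find_spec(dep) is not None
    print(f"{dep}: {'ok' if found else 'MISSING'}")
```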
128 changes: 110 additions & 18 deletions vc_models/setup.py
@@ -4,28 +4,120 @@
# This source code is licensed under the CC-BY-NC license found in the
# LICENSE file in the root directory of this source tree.

+from typing import Dict, List
from setuptools import setup
from setuptools import find_packages
from setuptools import find_namespace_packages

-packages = find_packages(where="src") + find_namespace_packages(
-    include=["hydra_plugins.*"], where="src"
-)
-install_requires = [
-    "torch >= 1.10.2",
-    "torchvision >= 0.11.3",
-    "timm==0.6.11",
-    "hydra-core",
-    "wandb>=0.13",
-    "six"
-]
+
+def get_file_content(file_path: str, *args, **kwargs) -> str:
+    """
+    Read the content of a file.
+
+    Args:
+        file_path: Path to the file.
+        *args: Additional arguments to pass to open().
+        **kwargs: Additional keyword arguments to pass to open().
+
+    Returns:
+        The content of the file.
+    """
+    with open(file_path, *args, **kwargs) as f:
+        return f.read()
+
+
+def get_package_version() -> str:
+    """
+    Get the version of the package. The version is defined in the file version.py.
+
+    Returns:
+        The version of the package.
+    """
+    import os.path as osp
+    import sys
+
+    sys.path.insert(0, osp.join(osp.dirname(__file__), "src", "vc_models"))
+    from version import VERSION
+
+    return VERSION
+
+
+def parse_sections_md_file(file_path: str) -> Dict[str, str]:
+    """
+    Parse a markdown file into sections.
+
+    Sections are defined by the first level 1 or 2 header.
+
+    Args:
+        file_path: Path to the markdown file.
+
+    Returns:
+        A dictionary of sections with the section content.
+    """
+    with open(file_path, "r") as file:
+        content = file.read()
+
+    section_dict: Dict[str, str] = {}
+    lines = content.split("\n")
+    current_title = ""
+    code_block_mode = False
+
+    for line in lines:
+        if line.startswith("```"):
+            code_block_mode = not code_block_mode
+            if current_title:
+                section_dict[current_title] += f"{line}\n"
+        elif code_block_mode:
+            section_dict[current_title] += f"{line}\n"
+        elif line.startswith("# "):
+            current_title = line[2:]
+            section_dict[current_title] = f"{line}\n"
+        elif line.startswith("## "):
+            current_title = line[3:]
+            section_dict[current_title] = f"{line}\n"
+        elif current_title:
+            section_dict[current_title] += f"{line}\n"
+
+    return section_dict
+
+
+desc_sections = parse_sections_md_file("../README.md")
+
+sections_to_include = [
+    "Visual Cortex and CortexBench",
+    "Open-Sourced Models",
+    "Load VC-1",
+    "Citing Visual Cortex",
+    "License",
+]
+long_description = "".join(desc_sections[section] for section in sections_to_include)
+long_description = (
+    long_description.replace("# Visual Cortex and CortexBench", "# Visual Cortex")
+    .replace("vc1_teaser.gif", "vc1_teaser_small.gif")
+    .replace("./", "https://github.com/facebookresearch/eai-vc/tree/main/")
+)
+
+packages_to_release = find_packages(where="src") + find_namespace_packages(
+    include=["hydra_plugins.*"], where="src"
+)

-setup(
-    name="vc_models",
-    version="0.1",
-    packages=packages,
-    package_dir={"": "src"},
-    install_requires=install_requires,
-    include_package_data=True,
-)
+if __name__ == "__main__":
+    setup(
+        name="vc_models",
+        install_requires=get_file_content("requirements.txt").strip().split("\n"),
+        packages=packages_to_release,
+        version=get_package_version(),
+        package_dir={"": "src"},
+        include_package_data=True,
+        package_data={"vc_models": ["conf/model/*.yaml"]},
+        description="Visual Cortex Models: A lightweight package for loading cutting-edge efficient Artificial Visual Cortex models for Embodied AI applications.",
+        long_description=long_description,
+        long_description_content_type="text/markdown",
+        author="Meta AI Research",
+        license="CC-BY-NC License",
+        url="https://eai-vc.github.io/",
+        project_urls={
+            "GitHub repo": "https://github.com/facebookresearch/eai-vc",
+            "Bug Tracker": "https://github.com/facebookresearch/eai-vc/issues",
+        },
+        classifiers=[
+            "Intended Audience :: Developers",
+            "Intended Audience :: Education",
+            "Intended Audience :: Science/Research",
+            "Development Status :: 5 - Production/Stable",
+            "License :: Other/Proprietary License",
+            "Topic :: Scientific/Engineering :: Artificial Intelligence",
+            "Topic :: Software Development :: Libraries :: Python Modules",
+            "Programming Language :: Python :: 3",
+            "Programming Language :: Python :: 3 :: Only",
+            "Programming Language :: Python :: 3.8",
+            "Operating System :: OS Independent",
+            "Natural Language :: English",
+        ],
+    )
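To see what the new long-description machinery produces, a small usage sketch of `parse_sections_md_file` on an inline sample (the sample markdown and temp file are illustrative; setup.py itself reads `../README.md`):

```python
import tempfile

sample_md = """# Visual Cortex and CortexBench
Intro text.

## License
CC-BY-NC.
"""

with tempfile.NamedTemporaryFile("w", suffix=".md", delete=False) as tmp:
    tmp.write(sample_md)
    md_path = tmp.name

sections = parse_sections_md_file(md_path)  # function defined in setup.py above
print(list(sections))        # ['Visual Cortex and CortexBench', 'License']
print(sections["License"])   # the section body, including its '## License' header
```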
1 change: 0 additions & 1 deletion vc_models/src/hydra_plugins/eaif_models_plugin/__init__.py
@@ -3,4 +3,3 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# This source code is licensed under the CC-BY-NC license found in the
# LICENSE file in the root directory of this source tree.
