This repository is now maintained at https://github.com/ai4os/ai4os-dev-env
This is a container that exposes Jupyter Notebook and JupyterLab or VSCode together with the DEEP as a Service API component. There is no application code inside!
You can either mount a host volume with your code into the container, or open a terminal in JupyterLab (e.g. http://127.0.0.1:8888/lab) and use git to pull your code. Then use Jupyter Notebook, JupyterLab, or VSCode to develop your application, test it immediately, and, when ready, commit your changes back to your repository.
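For example, a typical git-based workflow in the JupyterLab terminal might look like the sketch below; the repository URL and package layout are placeholders for your own project, not part of this image:
$ git clone https://github.com/your-org/your-model.git     # pull your application code
$ cd your-model
$ pip3 install -e .                                        # install it in editable mode for development
$ git add -A && git commit -m "update model" && git push   # push changes back when ready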
The resulting Docker image has pre-installed:
- Tensorflow or PyTorch or (just) Ubuntu
- cookiecutter
- git
- curl
- deepaas
- deep-start
- flaat
- jupyter, jupyterlab OR vscode (code-server)
- mc
- nano
- oidc-agent
- openssh-client
- python3
- pip3
- rclone
- wget
To run the Docker container directly from Docker Hub and start using Jupyter Notebook / JupyterLab or VSCode, run the following command:
$ docker run -ti -p 5000:5000 -p 6006:6006 -p 8888:8888 deephdc/deep-oc-generic-dev
This command will pull the Docker image from Docker Hub and start the default command `deep-start -j`, which launches JupyterLab.
Then go either to http://127.0.0.1:8888/tree for Jupyter Notebook or to http://127.0.0.1:8888/lab for JupyterLab.
If you want to start the DEEPaaS API service, go to JupyterLab (i.e. http://127.0.0.1:8888/lab), open a terminal, and type:
$ deep-start
then direct your browser to http://127.0.0.1:5000
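To quickly check from the same terminal that the API is up, you can query it with curl; the endpoint below assumes the DEEPaaS V2 API and may differ across deepaas versions:
$ curl http://127.0.0.1:5000/v2/models    # list the models exposed by the API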
Since Jan-2023, deep-start can also start VSCode (code-server) via `deep-start -s`:
$ docker run -ti -p 5000:5000 -p 6006:6006 -p 8888:8888 deephdc/deep-oc-generic-dev deep-start -s
If you need to mount directories from your host into the container, use the usual Docker bind-mount option, e.g.:
$ docker run -ti -p 5000:5000 -p 6006:6006 -p 8888:8888 -v $HOME/data:/srv/app/data deephdc/deep-oc-generic-dev
This mounts your host directory $HOME/data into the container path /srv/app/data.
N.B. For either CPU-based or GPU-based images you can also use udocker to run containers.
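A minimal udocker sketch, assuming the standard udocker workflow (the container name 'dev' is arbitrary, and the --nvidia setup step applies only to GPU-based images; udocker normally shares the host network, so no port publishing is usually required):
$ udocker pull deephdc/deep-oc-generic-dev
$ udocker create --name=dev deephdc/deep-oc-generic-dev
$ udocker setup --nvidia dev            # only for GPU-based images
$ udocker run dev deep-start -j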
docker-compose.yml allows you to run the application with various configurations via docker-compose.
N.B. The docker-compose.yml uses compose file format version '2.3', which requires Docker 17.06.0+ and docker-compose 1.16.0+; see https://docs.docker.com/compose/install/
If you want to use an Nvidia GPU (generic-gpu), you need nvidia-docker and docker-compose 1.19.0+; see the nvidia/FAQ.
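As an illustration only, a version '2.3' compose service for the GPU flavour could look roughly like the sketch below; the service name, image tag and port list are assumptions about your setup, not the shipped docker-compose.yml:
version: '2.3'
services:
  generic-dev:
    image: deephdc/deep-oc-generic-dev:latest   # GPU flavours may use a different tag
    runtime: nvidia                             # requires nvidia-docker and docker-compose 1.19.0+
    ports:
      - "8888:8888"
      - "5000:5000"
      - "6006:6006"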
If you want to build the container directly on your machine (for instance, because you want to modify the Dockerfile), follow these instructions:
Building the container:
- Get the DEEP-OC-generic-dev repository:
  $ git clone https://github.com/deephdc/DEEP-OC-generic-dev
- Build the container (default is CPU and Python3 support):
  $ cd DEEP-OC-generic-dev
  $ docker build -t deephdc/deep-oc-generic-dev .
These two steps will download the repository from GitHub and build the Docker container locally on your machine. You can inspect and modify the Dockerfile in order to check what is going on. For example, the Dockerfile accepts build ARGs such as:
- image: base image (default: tensorflow/tensorflow)
- tag: tag of the TensorFlow base image, e.g. '2.10.0' (default)
e.g.
$ cd DEEP-OC-generic-dev
$ docker build -t deephdc/deep-oc-generic-dev:tf2.10.0-cpu --build-arg tag=2.10.0 .
builds deephdc/deep-oc-generic-dev:tf2.10.0-cpu with the CPU version of TensorFlow 2.10.0.
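In the same spirit, a GPU variant could be built by pointing the tag ARG at a GPU-enabled base image; the tag value and resulting image name below are assumptions and may not match any officially published image:
$ cd DEEP-OC-generic-dev
$ docker build -t deephdc/deep-oc-generic-dev:tf2.10.0-gpu --build-arg tag=2.10.0-gpu .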
When you open http://127.0.0.1:8888/tree or http://127.0.0.1:8888/lab for the first time, you will land on a login page. If you run the container locally, the access token for Jupyter Notebook or JupyterLab is printed in the terminal where the container was started.
You can also see the logs of your running container by invoking:
$ docker logs containerID
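For example, a minimal way to fish the token out of the logs (containerID is a placeholder for your actual container ID or name):
$ docker ps                                    # find the container ID
$ docker logs containerID 2>&1 | grep token    # print the lines containing the access token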
Another way is to specify the Jupyter password when instantiating the container:
$ docker run -ti -p 5000:5000 -p 6006:6006 -p 8888:8888 -e idePASSWORD=the_pass_for_ide deephdc/deep-oc-generic-dev
N.B. Quotes are treated as part of the password. The password has to be more than 8 characters long!