
WIP: Docker containers #75

Closed
wants to merge 8 commits into from

Conversation

@reelmatt (Member) commented Apr 27, 2020

This contains the work @matthew-t-smith did on Dockerizing the front-/back-ends. Please include/correct anything that might be missing!

Build each end separately:

  • cd into the back-end or front-end directory
  • Build the image with a command like:
    docker build --tag backendtest:1.0 .
  • Run the container with a command like:
    docker run -p 8000:8000 --name <name> backendtest:1.0
    Note: set the -p flag to 8000:8000 for the back-end and 3000:3000 for the front-end.

Build/run the two together:
In the root directory run docker-compose up. This should start both containers and have them talk to each other. When you visit http://localhost:3000/, you may need to refresh the page once to get the Node list on the side. This bug was fixed in #76.
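For reference, a minimal docker-compose.yml along these lines would wire the two services together (a sketch pieced together from this thread's diff and commit messages; service names, build contexts, and the front-end port mapping are assumptions, not the exact file in the PR):

```yaml
version: "3"
services:
  back-end:
    build: ./back-end
    ports:
      - "8000:8000"
    # command taken from the diff discussed below; cd vp is needed because
    # the compose file has no WORKDIR equivalent set for this service
    command: bash -c "cd vp && pipenv run python manage.py runserver 0.0.0.0:8000"
  front-end:
    build: ./front-end
    ports:
      - "3000:3000"
    links:
      - back-end
```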

- Split back-end into its own directory to be its own image
- Individual image Dockerfiles for front-end added; not successfully run yet, but they build without error
- docker-compose to run both images as containers; also not yet successful
@reelmatt (Member, Author) left a comment


See a smattering of comments below. The big change which seems to have linked the two is in front-end/Dockerfile and front-end/package.json, which switches the proxy from "localhost" to "back-end" (the name of the Docker service).
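The proxy change in front-end/package.json is roughly the following (a sketch; the exact line is in the diff, and the port is assumed from the back-end's 8000:8000 mapping):

```json
{
  "proxy": "http://back-end:8000"
}
```

Inside the Compose network, the service name "back-end" resolves to the back-end container, which is why "localhost" no longer works there.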

Files with review comments: back-end/Dockerfile, back-end/vp/manage.py, back-end/vp/vp/settings.py, docker-compose.yml
matthew-t-smith and others added 2 commits April 27, 2020 18:25
- Change COPY path directives
- Add environment creation for temporary key
- Missed `python` in CMD
- Re-adding `dotenv` usage
- `ALLOWED_HOSTS` filled in
- Version of `docker-compose`
- Adding `bash -c` for directory change assistance
- Added `links` for back-end
- Changing front-end proxy
@reelmatt (Member, Author)

I pushed a few tweaks, all minor things (a missing hyphen, quotes, and a backslash)!

This now works for me with docker-compose up in the root dir, and the separate apps can be spun up with

docker build --tag backendtest:2.0 .
docker run -p 8000:8000 --name be backendtest:2.0

and

docker build --tag frontendtest:2.0 .
docker run -p 3000:3000 --name fe frontendtest:2.0

- Changing to lighter-weight node base image
- Removing extra `cd` that misses `manage.py`
- Added .dockerignore for perf improvements
@matthew-t-smith (Collaborator)

A few more changes. Still having this one issue:

back-end_1   | [28/Apr/2020 05:45:55] "GET /workflow/nodes HTTP/1.1" 500 23
back-end_1   | [28/Apr/2020 05:45:55] "POST /workflow/new HTTP/1.1" 200 78

@reelmatt can you check on the GET /workflow/nodes call that seems to be failing? I can see it's finding the workflow views alright now.

I will also double back when this gets resolved to fix up the Postman Tests that are surely failing with the directory changes for back-end.

@matthew-t-smith matthew-t-smith marked this pull request as draft April 28, 2020 06:14
@@ -6,7 +6,9 @@ services:
- ./back-end:/visual-programming/back-end
ports:
- "8000:8000"
command: bash -c "cd vp && pipenv run python manage.py runserver 0.0.0.0:8000"
@reelmatt (Member, Author) left a comment

On my machine, at least, I need this extra cd vp as part of the command. I think this is because docker-compose.yml doesn't have the same/similar line that the individual Dockerfile has (WORKDIR /..../vp) to change the working directory.

Reverting back to cd vp && pipenv ... still runs for me with the other changes you made.
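One way to avoid the extra cd would be Compose's working_dir key, which plays the same role as WORKDIR in a Dockerfile (a sketch; the container path is inferred from the volume mount in the diff above, not confirmed against the actual file):

```yaml
services:
  back-end:
    # equivalent of the Dockerfile's WORKDIR, set per-service in Compose
    working_dir: /visual-programming/back-end/vp
    command: pipenv run python manage.py runserver 0.0.0.0:8000
```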

@reelmatt (Member, Author)

@matthew-t-smith We may be having some differences with where directories are being located in the Docker container. Made a comment about adding the cd vp part back to the docker-compose.yml for the back-end.

The issue with GET /workflow/nodes is an order-of-operations issue introduced in one of the latest PRs. The Node listing now requires a Workflow object to know where to look for the Nodes/handle custom node creation. So the fix is just reversing the two calls to do /workflow/new first followed by /workflow/nodes. If you refresh the page once after it loads, you should see the listing pop in.

I know @reddigari has done a lot of front-end improvements in the meantime, so I think it might be easier to make the change in one of his open PRs, or after this PR is merged, to reduce conflict headaches.

Re: testing, I think it's actually a problem with the pyworkflow unit tests. If I add cd back-end to the workflow file before pipenv install and test locally, everything is fine. There may be a Python version issue, though, with the container GitHub Actions uses to run the workflow. We might need to revert the update to the Pipfile, going from 3.8.2 back to 3.8 (or 3.8.1, which looks like the latest version on the default GitHub Actions runner; hard to confirm without it doing a run).

@reelmatt (Member, Author)

@matthew-t-smith I committed the changes to fix the GitHub Action runner; it now finds and passes the tests.

Have you looked into the discrepancy between including cd vp in the docker-compose.yml that I commented on above? The build fails when I do not include it, but I could be missing something. If we figure that out, I think we're good to merge so I'm going to open this up for review.

@reelmatt reelmatt marked this pull request as ready for review April 30, 2020 20:43
@reddigari reddigari mentioned this pull request Apr 30, 2020
@reddigari (Collaborator)

A little confused by the volume mounts in docker-compose.yml, since the individual Dockerfiles copy the source directories into the image. Likewise, the front-end build runs npm install, so we shouldn't need to mount node_modules. Are the mounts just for the sake of keeping the image up to date as we develop?

@reddigari (Collaborator)

Just a thought: would it be possible to include an option for a user to mount their own custom nodes directory into the pyworkflow custom nodes location? They could store their nodes locally and then have access to them as soon as the container is running as opposed to manually uploading each one, and having them disappear when the container restarts.
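Something like this in docker-compose.yml could do it (a sketch only; the host directory name and the custom-nodes path inside the container are placeholders, not pyworkflow's actual layout):

```yaml
services:
  back-end:
    volumes:
      # host directory of user-written nodes mounted over the
      # (hypothetical) custom-nodes location inside the container
      - ./my-custom-nodes:/visual-programming/back-end/vp/custom_nodes
```

Because it's a bind mount, nodes added on the host would survive container restarts, which is the behavior being asked for.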

@matthew-t-smith (Collaborator)

@reelmatt - I think I have an idea on the directory issue. I’ll do some testing Sunday to be sure.

@reddigari - the directory mounts were largely following some other samples. I’ll double check on those. The node modules should be ignored in the images so the container is lighter, but I will make sure that’s actually happening. Also, not sure on the option of loading into the container’s directory structure. Interesting, but once I get this updated, I can see.

@reddigari (Collaborator)

As far as I understand it (which is without much dev ops experience), the volume mounts are so a running container reflects changes you make locally without having to rebuild. But the image we upload for people to pull and run needs to have the released source code copied (and dependencies installed). That's what's handled by the COPY statements, and running npm install (which will create and populate node_modules in the container).

Using build args to decide whether to run the COPY and install commands is one option, but I've also seen folks maintain two Dockerfiles, one called Dockerfile that runs all the copy/install stuff, and one called Dockerfile.dev, for development, that does not. But I'm not sure how to make that play with docker-compose.
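For what it's worth, docker-compose can point a service at an alternate Dockerfile via the build section, so the two-Dockerfile approach could look like this (a sketch following the naming convention above; the /app mount target is an assumption):

```yaml
services:
  front-end:
    build:
      context: ./front-end
      # dev image: skips the production COPY/install steps
      dockerfile: Dockerfile.dev
    volumes:
      # bind mount so local edits show up without a rebuild
      - ./front-end:/app
```

The release image would then be built from the plain Dockerfile outside of Compose.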

@matthew-t-smith (Collaborator)

Abandoning in favor of PR #84

4 participants