
Feature Request: Dockerize the Project #1

Open · Zhen-Bo opened this issue Dec 17, 2024 · 10 comments
Zhen-Bo commented Dec 17, 2024

Description:

I would like to request the addition of Docker support for this project. Dockerizing the project would allow it to be easily deployed and run on servers, rather than just on personal computers. This would greatly enhance the flexibility of the project.

Benefits:

  1. Consistency: Docker ensures that the project runs the same way in different environments.
  2. Ease of Deployment: Docker simplifies the deployment process, making it easier to run the project on various servers.
  3. Isolation: Docker containers provide isolation, which helps in avoiding conflicts with other applications running on the same server.

Suggested Implementation:

  1. Create a Dockerfile to define the environment and dependencies.
  2. Add a docker-compose.yml file for easier orchestration if necessary.
  3. Update the documentation to include instructions on how to build and run the Docker container.
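
As a rough illustration of item 1, a minimal starting point might look something like the sketch below (the base image, entry point, and file layout are guesses on my part rather than anything taken from the project):

FROM python:3.11-slim

WORKDIR /app

# copy the project sources and install its Python dependencies
COPY . /app
RUN pip install --no-cache-dir -r requirements.txt

# launch the application; the actual entry point may differ
CMD ["python3", "avtdl.py"]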

Thank you for considering this feature request. I believe it will be a valuable addition to the project.

15532th (Owner) commented Dec 17, 2024

Thank you for reaching out with such a detailed feature request.

The way this project is normally used implies that it generates a large number of files that have to be directly available to the user. On top of that, in some scenarios it may also produce quite a lot of internal data files (my own instance is nearing 5 GB at the moment).

Since the project is effectively orchestration software and relies on other tools to get the work done, the list of dependencies varies between installations. To account for that, I would need to either bloat the image with every possible tool I can think of, or require end users to edit the Dockerfile to include the packages they intend to run. That rather defeats the first two benefits you list: the resulting installations would not be consistent, and manually editing and rebuilding the image does not sound like an easy deployment process (even more so once updates are taken into account).

Please provide more details on your intended usage of the project and on how you would prefer these issues to be handled, so I can take that into account when considering whether dockerization would be beneficial in a typical use scenario.

Zhen-Bo (Author) commented Dec 18, 2024

Thank you for your detailed response and for considering my feature request.

The reason I suggested Dockerizing the project is that I plan to run it on my NAS. Running it on the NAS helps address the issue of limited storage space. Additionally, with services like WebDAV, SMB, and iSCSI available on the NAS, I can seamlessly access the generated files on my computer in real time.

Regarding the Docker image construction, I believe an all-in-one (AIO) approach could work well: packaging all the necessary tools into a single Docker image. For example, I put together a Docker image yesterday that includes Python 3, FFmpeg, yt-dlp, and avtdl; as I recall, the resulting image was no larger than 300 MB, which seems manageable.

Of course, I understand that this approach may not cover every potential dependency a user might need. However, for many users, an AIO Docker image could provide a consistent, easy-to-deploy solution that satisfies common use cases. Advanced users could still modify the Dockerfile to meet their specific requirements.

I’d love to hear your thoughts on whether an AIO approach might be a feasible starting point or if there are other suggestions you’d prefer to explore.

Thank you again for your time and consideration!

15532th (Owner) commented Dec 18, 2024

I understand the approach you're suggesting for dependency selection; however, I would like to clarify how downloaded files would be exposed to the NAS's own filesystem. Do you prefer a single volume with relative paths (the way used in the examples), or some kind of more advanced scheme that separates internal files from the downloads?

Zhen-Bo (Author) commented Dec 18, 2024

My current recommendation is to map the three directories (db, logs, and archive) individually.

If you are willing to adjust the architecture so that all output files are stored in a single directory, it would indeed simplify the mapping configuration. In this case, you would only need to map the host's config.yml into the container for reading and additionally map one dedicated output directory.

However, based on the current structure, I still recommend mapping the three directories separately. In my use case, I store Docker configuration files in the /volume1/docker folder, while another shared folder, /volume1/photoserver, is dedicated to storing recorded videos for my media server to access.
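
As a rough sketch of what I have in mind (the image name and the paths inside the container are placeholders, since the official image does not exist yet):

version: '3'
services:
  avtdl:
    image: avtdl:latest                            # placeholder image name
    volumes:
      - ./config.yml:/app/config.yml:ro            # configuration, mounted read-only
      - /volume1/docker/avtdl/db:/app/db           # internal data
      - /volume1/docker/avtdl/logs:/app/logs       # log files
      - /volume1/photoserver/avtdl:/app/archive    # recordings, exposed to the media server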

Here’s an example of my docker-compose.yml file for fc2-live-dl:

version: '3'
services:
  autofc2:
    image: ghcr.io/holoarchivists/fc2-live-dl:sha-dfe14ad
    command: autofc2
    logging:
      driver: "json-file"
      options:
        max-size: "1k"
    volumes:
      - ./autofc2.json:/app/autofc2.json
      - /volume1/photoserver/ASMR/FC2_ASMR_Stream:/recordings

15532th (Owner) commented Dec 18, 2024

I uploaded a test image; it should work with a docker-compose.yml containing something like this:

version: '3'
services:
  avtdl:
    image: ghcr.io/15532th/avtdl:2.2.0-beta
    ports:
      - "8080:8080"
    volumes:
      - ./app:/home/avtdl/app/

Adding a separate volume for recordings seems straightforward, though the same path should be used in the configuration file settings.
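
For example, something along these lines should work (the host path is only an illustration; the important part is that the download path configured in avtdl points at the same in-container directory, /recordings here):

version: '3'
services:
  avtdl:
    image: ghcr.io/15532th/avtdl:2.2.0-beta
    ports:
      - "8080:8080"
    volumes:
      - ./app:/home/avtdl/app/
      - /volume1/photoserver/recordings:/recordings   # downloads land here; use the same /recordings path in the configuration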

It seems there is no obvious way to add ffmpeg to the existing base image without tripling its size, so I left it out for the time being.

Zhen-Bo (Author) commented Dec 19, 2024

I have created a Docker image based on the current implementation, and the resulting size is approximately 589MB. I believe this size is acceptable for deployment purposes.

Below is the Dockerfile I used for testing:

FROM alpine:3.18

COPY . /app

WORKDIR /app

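# install Python, ffmpeg with its codec libraries, then yt-dlp and the project's requirements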
RUN apk update && \
    apk add --no-cache \
    bash \
    build-base \
    ca-certificates \
    coreutils \
    freetype-dev \
    lame-dev \
    libass-dev \
    libogg-dev \
    libvpx-dev \
    libwebp-dev \
    libvorbis-dev \
    opus-dev \
    rtmpdump-dev \
    x264-dev \
    x265-dev \
    yasm-dev \
    python3 \
    py3-pip \
    ffmpeg && \
    pip3 install --no-cache-dir yt-dlp && \
    if [ -f requirements.txt ]; then pip3 install --no-cache-dir -r requirements.txt; fi

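# sanity check: print the versions of the installed tools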
RUN python3 --version && \
    pip3 --version && \
    ffmpeg -version && \
    yt-dlp --version

CMD ["python3", "avtdl.py"]
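
For reference, the image can be built and started along these lines (the mounted paths are illustrative; since WORKDIR is /app, relative paths in the configuration resolve under /app):

docker build -t avtdl-test .
docker run -d --name avtdl \
    -v /volume1/docker/avtdl/config.yml:/app/config.yml \
    -v /volume1/photoserver/avtdl:/app/archive \
    avtdl-test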


15532th (Owner) commented Dec 19, 2024

It seems to make sense to have two images: a basic one with avtdl only, and an extended one with ffmpeg, yt-dlp, and other typical dependencies. In my tests, installing ffmpeg increased the size to slightly below 700 MB for a Debian-based image, so I'm going to use python:3.11-slim as the base for both the basic and extended images, unless you explicitly request Alpine.

15532th (Owner) commented Dec 20, 2024

It should be available on the package page; please check whether it works on your machine.

I ended up using an ffmpeg container as the base, since it results in a much smaller image than a Python-based container with ffmpeg installed from the repositories.

Zhen-Bo (Author) commented Dec 21, 2024

The new image fails to start properly.

Error Message:

avtdl        |   libavutil      59. 39.100 / 59. 39.100
avtdl        |   libavcodec     61. 19.100 / 61. 19.100
avtdl        |   libavformat    61.  7.100 / 61.  7.100
avtdl        |   libavdevice    61.  3.100 / 61.  3.100
avtdl        |   libavfilter    10.  4.100 / 10.  4.100
avtdl        |   libswscale      8.  3.100 /  8.  3.100
avtdl        |   libswresample   5.  3.100 /  5.  3.100
avtdl        |   libpostproc    58.  3.100 / 58.  3.100
avtdl        | Unrecognized option '-host'.
avtdl        | Error splitting the argument list: Option not found

It seems that the Dockerfile's CMD definition:

CMD ["avtdl", "--host", "0.0.0.0"]

needs to be updated to:

CMD ["avtdl"]

15532th (Owner) commented Dec 21, 2024

Judging from the error message, that was probably caused by the base container defining its own ENTRYPOINT, which is set to "ffmpeg". It seems I missed it because I was testing the extended image with docker run --entrypoint and only ran docker-compose on the -basic build.
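
In Dockerfile terms, the fix boils down to resetting the inherited ENTRYPOINT so that CMD is executed directly (a sketch; the exact lines in the image may differ):

# in the image's Dockerfile, after FROM <ffmpeg base image>:
# the base image sets ENTRYPOINT ["ffmpeg"], so CMD was being passed to ffmpeg as arguments;
# clearing the entrypoint makes CMD run as the container command again
ENTRYPOINT []
CMD ["avtdl", "--host", "0.0.0.0"]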

Should be fixed in the 2.2 image.
