ARG inheritance works in Docker, but not Podman #5762

Open
srgoni opened this issue Oct 1, 2024 · 9 comments · May be fixed by #5845

Comments

srgoni commented Oct 1, 2024

Description

Docker provides limited inheritance of Dockerfile ARGs between build stages, as described here: https://docs.docker.com/build/building/variables/#scoping

> Once a build argument is declared or consumed in a stage, it's automatically inherited by child stages.

This contradicts the Dockerfile reference https://docs.docker.com/reference/dockerfile/#scope - which states:

> An ARG instruction goes out of scope at the end of the build stage where it was defined. To use an argument in multiple stages, each stage must include the ARG instruction.

Podman behaves according to the latter statement, so the examples given in the variables guide do not build the same way under Podman.
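For reference, the pattern that builds identically under both Docker and Podman (per the Dockerfile reference quoted above) declares the argument before the first FROM and redeclares it in each stage that uses it; a minimal sketch:

```dockerfile
# syntax=docker/dockerfile:1
# Declared before the first FROM, NAME is a global build argument.
ARG NAME="joe"

FROM alpine AS base

FROM base AS build
# Redeclaring the ARG without a value pulls the global default into
# this stage's scope, as the Dockerfile reference requires.
ARG NAME
RUN echo "hello $NAME!"
```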

Steps to reproduce the issue:

Try to build a Containerfile with the following contents (copied from the Docker manual):

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine AS base
# Declare the build argument in the build stage
ARG NAME="joe"

# Create a new stage based on "base"
FROM base AS build
# The NAME build argument is available here
# since it's declared in a parent stage
RUN echo "hello $NAME!"
```
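The build can be run with, for example, `podman build .` in the directory containing the Containerfile.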

Describe the results you received:

podman build prints:

```
[2/2] STEP 2/2: RUN echo "hello $NAME!"
hello !
```

Describe the results you expected:

docker build would print:

```
[2/2] STEP 2/2: RUN echo "hello $NAME!"
hello joe!
```

Output of rpm -q buildah or apt list buildah:

```
buildah/testing,unstable,now 1.37.2+ds1-3 amd64 [installed,automatic]
```

Output of buildah version:

```
Version:         1.37.2
Go Version:      go1.22.7
Image Spec:      1.1.0
Runtime Spec:    1.2.0
CNI Spec:        1.0.0
libcni Version:  
image Version:   5.32.2
Git Commit:      
Built:           Thu Jan  1 01:00:00 1970
OS/Arch:         linux/amd64
BuildPlatform:   linux/amd64
```

Output of podman version if reporting a podman build issue:

```
Client:       Podman Engine
Version:      5.2.2
API Version:  5.2.2
Go Version:   go1.23.1
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64
```

Output of cat /etc/*release:

```
PRETTY_NAME="Debian GNU/Linux trixie/sid"
NAME="Debian GNU/Linux"
VERSION_CODENAME=trixie
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
```

Output of uname -a:

```
Linux 6.10.9-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.10.9-1 (2024-09-08) x86_64 GNU/Linux
```

Output of cat /etc/containers/storage.conf:

```
cat: /etc/containers/storage.conf: No such file or directory
```

github-actions bot commented Nov 1, 2024

A friendly reminder that this issue had no activity for 30 days.

srgoni (Author) commented Nov 1, 2024

Update: The Docker documentation was amended by moby/buildkit#5381 and no longer shows contradictory information.

Podman/Buildah should be updated to support argument inheritance in the same way as Docker BuildKit.

rhatdan (Member) commented Nov 1, 2024

Interested in opening a PR?

srgoni (Author) commented Nov 1, 2024

Interested, yes, but I'm not familiar with the Podman source code and have no idea where to start.

rhatdan (Member) commented Nov 4, 2024

The place to start would be Buildah, not Podman; Podman vendors in Buildah to do all of its container image building.

Buildah is a smaller code base.

srgoni (Author) commented Nov 15, 2024

As far as I can see, arguments are resolved per stage in https://github.com/openshift/imagebuilder/blob/master/builder.go#L290.

As Buildah seems to defer to openshift/imagebuilder for all the heavy lifting, should I open an issue (or possibly a PR) on that repository instead?

nalind (Member) commented Nov 15, 2024

I expect the fix would eventually be made there, yes.

srgoni (Author) commented Nov 22, 2024

This looks much more complicated after all. I suppose it would be better not to try to resolve arguments during initial parsing, but to record resolved arguments on the fly and then make them available to dependent build stages:

  1. While building each stage, record processed arguments in a map
  2. At the end of the build, keep the final argument map
  3. Share this map with child stages before their build starts

This requires that child stages wait until their parent finishes, but from what I can see in https://github.com/containers/buildah/blob/main/imagebuildah/executor.go#L908, they would all run in parallel, and it's not obvious to me how dependencies between stages are guaranteed. Could you help explain how this works?
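(For illustration only, this is not buildah's actual code: one common way to let concurrently launched stages honor their dependencies is a per-stage completion channel that each child blocks on before it starts.)

```go
package main

import (
	"fmt"
	"sync"
)

// stage names one build stage and the stage it is FROM-based on ("" if none).
type stage struct {
	name string
	base string
}

func main() {
	stages := []stage{
		{name: "base"},
		{name: "build", base: "base"},
	}

	// done[name] is closed when that stage finishes, so any number of
	// dependent stages can block on it concurrently.
	done := make(map[string]chan struct{}, len(stages))
	for _, s := range stages {
		done[s.name] = make(chan struct{})
	}

	var wg sync.WaitGroup
	for _, s := range stages {
		wg.Add(1)
		go func(s stage) {
			defer wg.Done()
			if s.base != "" {
				<-done[s.base] // wait for the parent stage to finish
			}
			fmt.Println("building stage", s.name)
			close(done[s.name]) // signal dependents
		}(s)
	}
	wg.Wait()
}
```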

srgoni (Author) commented Nov 22, 2024

Ok, so... processed arguments are already recorded in Stage.Builder.Args.

Proposal (a rough Go sketch follows the list):

  1. Add the field Executor.argResult of type map[string]map[string]string
  2. At the end of a stage build, copy Stage.Builder.Args to Executor.argResult[stageName] (with locking)
  3. Add an additional return value to Executor.waitForStage of type map[string]string, which is copied from Executor.argResult[stageName] (with locking)
  4. At the beginning of a stage build, add the returned map from waitForStage to Stage.Builder.Args
  5. Other places where waitForStage is used can just discard the args
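
A rough sketch of that shape, using the names from the list above; buildah's real Executor and waitForStage signatures differ, and the wait logic is elided:

```go
package imagebuildah

import "sync"

// Sketch only: names mirror the proposal above, not buildah's actual API.
type Executor struct {
	mu        sync.Mutex
	argResult map[string]map[string]string // stage name -> resolved args
}

// recordStageArgs runs at the end of a stage build (step 2): it copies the
// stage's resolved args into the shared map under the lock.
func (e *Executor) recordStageArgs(stageName string, args map[string]string) {
	e.mu.Lock()
	defer e.mu.Unlock()
	if e.argResult == nil {
		e.argResult = make(map[string]map[string]string)
	}
	copied := make(map[string]string, len(args))
	for k, v := range args {
		copied[k] = v
	}
	e.argResult[stageName] = copied
}

// waitForStage blocks until the named stage terminates (elided here) and
// then returns that stage's resolved args (steps 3-4); callers that don't
// need them can discard the return value (step 5).
func (e *Executor) waitForStage(stageName string) map[string]string {
	// ... block until the stage has finished building ...
	e.mu.Lock()
	defer e.mu.Unlock()
	return e.argResult[stageName]
}
```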

Edit: I submitted a PR based on this idea from my personal GH account. It's completely untested and needs some fine-tuning.

onitake linked a pull request (#5845) Nov 22, 2024 that will close this issue