
Fixing Multiplatform Builds #2549

Open
wants to merge 105 commits into base: dev/multi-platform-images

Conversation

waterfoul
Contributor

No description provided.

Dockerfile Outdated
@@ -336,6 +369,7 @@ COPY --link --from=terragrunt /usr/local/bin/terragrunt /usr/bin/
COPY --link --from=terragrunt /bin/terraform /usr/bin/
COPY --link --from=kics /app/bin/kics /usr/bin/
COPY --from=kics /app/bin/assets /opt/kics/assets/
COPY --from=cargo /bin/* /usr/bin/
Collaborator

When I added COPY --link, I wasn't sure what would happen when multiple files are copied and some files already exist in the destination folder. If you would have used COPY --link here anyway, since you are already using it in your configuration, go ahead and change it ;)
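
For reference, a minimal sketch (not taken from the PR) of what the --link variant of the remaining two lines would look like; the stage names come from the diff above, and the behaviour with pre-existing files in the destination is exactly the open question here:

# Hypothetical variant of the lines in the diff above; --link needs the
# BuildKit Dockerfile 1.4 syntax and copies into an independent layer that
# is overlaid on the destination instead of modifying the existing filesystem.
COPY --link --from=kics /app/bin/assets /opt/kics/assets/
COPY --link --from=cargo /bin/* /usr/bin/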

Contributor Author

Done. I also tweaked deploy-DEV.yml to (hopefully) build the image properly and start its test (I haven't touched the tests themselves yet).
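
For reference, a hedged sketch of the kind of invocation the workflow needs to end up running (the tag and platform list are illustrative, not taken from deploy-DEV.yml):

# Create (or reuse) a BuildKit builder able to target several platforms
docker buildx create --use
# Multi-platform build of the repo Dockerfile; the tag is illustrative.
# --load only supports a single platform, so a true multi-arch result is
# usually pushed to a registry (--push) rather than loaded locally.
docker buildx build --platform linux/amd64,linux/arm64 -t example/megalinter:test --push .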

Contributor Author

I see that this runs within my local repo, so I'll tinker with it until I can get the tests to run.

@waterfoul force-pushed the dev/multi-platform-images branch 2 times, most recently from 6ddff57 to 4fa6afc on April 11, 2023 at 16:29
@echoix
Collaborator

echoix commented Apr 11, 2023

If you manage to get the build working, you are effectively solving one of the blockers of #2273 (since nothing works for now).

But before the changes to the YAML workflow files, I would have considered targeting these changes to main: if it works well, it will be beneficial even without the multi-platform images being ready (and we pull changes from main into the dev branch afterwards).

@echoix
Collaborator

echoix commented Apr 11, 2023

@waterfoul If you want to make another dummy PR targeting branch dev/multi-platform-images with any change, and we merge it, then you'll no longer be a first-time contributor and the CI checks will run here without us approving them each time, helping you iterate faster.

@waterfoul
Contributor Author

> @waterfoul If you want to make another dummy PR targeting branch dev/multi-platform-images with any change, and we merge it, then you'll no longer be a first-time contributor and the CI checks will run here without us approving them each time, helping you iterate faster.

I'll do that once I get to a stopping point

@nvuillam
Member

@waterfoul it seems that the build job is unhappy about the rust version, if it can help ^^

#185 1782.9 error: package ruff v0.0.261 (/tmp/pip-install-w9hbx5ad/ruff_82dfa3bc116449dcb88013beb07b9a49/local_dependencies/ruff) cannot be built because it requires rustc 1.67 or newer, while the currently active rustc version is 1.64.0
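
One hedged way around that in an Alpine build stage is to fetch a recent toolchain with rustup instead of relying on the distro rustc (the 1.67 floor comes from the error above; the exact version and paths are illustrative, not from this PR):

# Illustrative only: install a rustc new enough for ruff v0.0.261 (>= 1.67)
RUN wget -qO- https://sh.rustup.rs | sh -s -- -y --profile minimal --default-toolchain 1.67.1
ENV PATH="/root/.cargo/bin:${PATH}"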

@waterfoul
Contributor Author

> @waterfoul it seems that the build job is unhappy about the rust version, if it can help ^^
>
> #185 1782.9 error: package ruff v0.0.261 (/tmp/pip-install-w9hbx5ad/ruff_82dfa3bc116449dcb88013beb07b9a49/local_dependencies/ruff) cannot be built because it requires rustc 1.67 or newer, while the currently active rustc version is 1.64.0

Got sick over the weekend, so I wasn't working on this. Hopefully I'll work the rest of the kinks out today.

"RUN PYTHONDONTWRITEBYTECODE=1 pip3 install"
" --no-cache-dir --upgrade pip virtualenv \\\n"
" --no-cache-dir --upgrade pip crossenv \\\n"
Collaborator

This is the first time I've seen crossenv; is it part of Python like venv? I know virtualenv is not recommended anymore since that functionality was made part of Python. Is it a newly recommended way?

We used virtual environments since it was difficult to have all the packages without conflicting dependencies (which would prevent having the latest versions). One of my ideas was, one day, to split the installation of the environments into different stages and then combine the stages, since there are a lot of packages that don't have wheels for musl, and it takes a long time. What was preventing me from doing it right away is that I wasn't sure how the copy steps would work with the symlinks that virtualenv might create to decouple common dependencies or interpreters.
If you have experience handling that, I would be curious to hear your opinion.
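
As a hedged illustration of the pattern being discussed (stage names, paths, and the checkov example are made up for this sketch; build dependencies are omitted): each environment would be built in its own stage and the whole venv directory copied into the final image, and the interpreter symlink inside the venv keeps working as long as it points at a path that also exists in the final stage.

# Illustrative only: one stage per tool environment, combined at the end
FROM python:3.11-alpine AS env-checkov
RUN python -m venv /venv/checkov \
 && /venv/checkov/bin/pip install --no-cache-dir checkov

FROM python:3.11-alpine AS final
# Copying the whole directory preserves the venv layout; its bin/python
# symlink points at the base interpreter, which also exists in this stage
# because both stages share the same base image
COPY --from=env-checkov /venv/checkov /venv/checkov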

Collaborator

Later on I saw how you separated the download and install of Python packages. You might disregard some of the thoughts written earlier.

Contributor Author

> This is the first time I've seen crossenv; is it part of Python like venv? I know virtualenv is not recommended anymore since that functionality was made part of Python. Is it a newly recommended way?

It's a different project that does something similar: it builds a virtual environment but allows cross-compiling, so you can use the amd64 processor to build the arm64 libs (which is MUCH faster): https://pypi.org/project/crossenv/. It's really only something you want to use if you are building multiarch images.
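
A hedged sketch of the basic flow (interpreter paths and the package name are illustrative): the interpreter that runs -m crossenv is the build-arch Python, and the argument is a Python built for the target architecture.

# Illustrative crossenv usage: produce aarch64 builds on an amd64 builder
pip3 install crossenv
# the interpreter running the module is the build Python; the first argument
# is the target-arch interpreter (here, a hypothetical aarch64 CPython)
python3 -m crossenv /usr/aarch64/bin/python3 /opt/cross-venv
. /opt/cross-venv/bin/activate
# pip here is the cross pip provided by crossenv, so extensions are
# compiled for the target architecture (build-pip targets the build arch)
pip install some-package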

> We used virtual environments since it was difficult to have all the packages without conflicting dependencies (which would prevent having the latest versions). One of my ideas was, one day, to split the installation of the environments into different stages and then combine the stages, since there are a lot of packages that don't have wheels for musl, and it takes a long time. What was preventing me from doing it right away is that I wasn't sure how the copy steps would work with the symlinks that virtualenv might create to decouple common dependencies or interpreters.
> If you have experience handling that, I would be curious to hear your opinion.

Are you thinking about separate Docker stages or separate GitHub Actions jobs? I'm noticing more cache misses than I'd like in BuildKit, so we need to fix that first; once that's fixed we can look at separate stages (that's how I'm doing our internal build). I was planning on looking into cache optimization after I got it working.

> Later on I saw how you separated the download and install of Python packages. You might disregard some of the thoughts written earlier.

I probably did that for a reason other than what you'd expect. In a multiarch build, stages that run with --platform=$BUILDPLATFORM are only executed once (instead of once per arch) until you hit an ARG TARGETPLATFORM (or similar) statement. With that in mind, you want to do as much as you can (e.g. download packages) before the build splits between the different arches.
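
A minimal sketch of that pattern (stage names and the package are illustrative): the download stage is pinned to the build platform so BuildKit executes it once and shares the result, while the install stage runs once per target architecture.

# Executed once, on the native build architecture, no matter how many
# target platforms are requested
FROM --platform=$BUILDPLATFORM python:3.11-alpine AS downloads
# Source distributions are arch-independent, so they can be fetched before the split
RUN pip download --no-binary :all: --no-deps --dest /downloads some-package

# No --platform here, so this stage defaults to the target platform and is
# executed once per requested architecture; ARG TARGETPLATFORM exposes it if needed
FROM python:3.11-alpine AS install
ARG TARGETPLATFORM
COPY --from=downloads /downloads /downloads
RUN pip install --no-index --find-links /downloads some-package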

Dockerfile (resolved review comment)
@nvuillam
Member

nvuillam commented Apr 19, 2023

I like what's happening here... with all the great minds that have been working on the multi-arch topic for a while now, I'm pretty confident that MegaLinter will be multi-arch someday, and much more optimized :)

@echoix
Collaborator

echoix commented Apr 19, 2023

Once the changes needed in the descriptor format are finished (if my initial spec definition doesn't handle the cases needed), I'll update the jsonschema according to what is needed.

@waterfoul force-pushed the dev/multi-platform-images branch 3 times, most recently from 29409dd to 60675a9 on April 24, 2023 at 20:04
@bdovaz force-pushed the dev/multi-platform-images branch 2 times, most recently from 5f701b4 to 5c48663 on May 14, 2023 at 14:41
@bdovaz
Collaborator

bdovaz commented May 14, 2023

I seem to remember that this PR had to do with #2273. I don't know much about either the technologies involved or what this PR solves, but I'm just writing to say that I've rebased the #2273 branch on main and it's already updated, in case it helps.

@waterfoul I hope you keep working on this PR.

@bdovaz force-pushed the dev/multi-platform-images branch from cf9732a to 21a6b61 on May 14, 2023 at 15:43
@waterfoul
Contributor Author

@bdovaz Yes, this has to do with the other branch. I've been a bit busy and haven't been able to get back over here for a while. I'm pretty close to making our internal image work (which a lot of this is based on), so I may be able to get back here soonish. I am also going to be on vacation in a few weeks, so that will delay this as well.

@waterfoul changed the title from "Add the ability to native build cargo packages" to "Fixing Multiplatform Builds" on May 15, 2023
@nvuillam
Member

nvuillam commented May 15, 2023

@waterfoul thanks a lot for your work :)

@waterfoul
Contributor Author

Sorry, I haven't been able to get back to this for the last week or so, and I'll be on vacation for the next three weeks. I managed to get the build working for x86 and started getting the tests going. My plan is to loop back to arm after the x86 tests work (the Dockerfile supports it, but I was getting timeout/disk space errors).

@github-actions
Contributor

This pull request has been automatically marked as stale because it has not had recent activity.
It will be closed in 14 days if no further activity occurs.
Thank you for your contributions.

If you think this pull request should stay open, please remove the O: stale 🤖 label or comment on the pull request.

@waterfoul force-pushed the dev/multi-platform-images branch 2 times, most recently from 6cf43f3 to 9341eed on July 24, 2023 at 14:57
@echoix
Collaborator

echoix commented Jul 24, 2023

The luarocks failure is because the Alpine package installs a command with a different name, since it is split into subpackages per Lua version.

If we look at the APKBUILD of luarocks for alpine here: https://git.alpinelinux.org/aports/tree/community/luarocks/APKBUILD, we see the versions of lua that they use to create the subpackages.

In the MR that added the patch file https://git.alpinelinux.org/aports/tree/community/luarocks/prefer-curl-to-wget.patch to https://git.alpinelinux.org/aports/tree/community/luarocks?h=master (i.e. https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/48613), an example of running luarocks is shown:

> apk add luarocks5.1 lua5.1-dev
> luarocks-5.1 install argparse # fails

# now apply the fix manually:
> apk add wget
> luarocks-5.1 install argparse # succeeds

In the Docker build logs, at line 2443 https://github.com/oxsecurity/megalinter/actions/runs/5647726454/job/15298447734?pr=2549#step:10:2447, we see that lua5.4 (5.4.6-r0) is installed.

#96 111.1 (146/211) Installing lua5.4-libs (5.4.6-r0)
#96 111.1 (147/211) Installing lua5.4 (5.4.6-r0)
#96 111.1 (148/211) Installing luarocks (2.4.4-r2)

So, ea0cd80#diff-b5c2204997905019d3b42466eb0d6d216f140759bef65e5294e54f5b2b1a384aR26 should have

luarocks-5.4 install luacheck

However, it seems that on alpine 3.17 the luarocks package installed is 2.4.4-r2 (https://pkgs.alpinelinux.org/package/v3.17/community/x86_64/luarocks), and that luarocks creates subpackages for lua 5.1 to 5.3 (https://git.alpinelinux.org/aports/tree/community/luarocks/APKBUILD?h=3.17-stable#n25), not the lua 5.4 that was installed, as seen in the build logs.

Was alpine 3.18 just recently released and not yet used for MegaLinter?

@echoix
Collaborator

echoix commented Jul 25, 2023

Seeing the failures, it seems that we either need to explicitly install lua5.3 (https://pkgs.alpinelinux.org/package/v3.17/main/x86_64/lua5.3) or use a Python alpine3.18 base image, i.e. python:3.11.4-alpine3.18 instead of python:3.11.4-alpine3.17.
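
Hedged sketches of both options (package names follow the Alpine 3.17 subpackage naming quoted above; build dependencies for luacheck are omitted):

# Option A: stay on alpine 3.17 and use a Lua version that luarocks 2.4.4 packages
RUN apk add --no-cache lua5.3 lua5.3-dev luarocks5.3 \
 && luarocks-5.3 install luacheck

# Option B: move to an alpine 3.18 Python base image, as suggested above
FROM python:3.11.4-alpine3.18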

@waterfoul
Contributor Author

waterfoul commented Jul 26, 2023

@echoix Do you know why sfdx-hardis is installed? I can't find a reason for it, it takes quite a while to install, and it was added when checkov got a manual version bump.

372840c

I'm going to remove it for now; we can add it back if it's needed.

@echoix
Collaborator

echoix commented Jul 26, 2023

> @echoix Do you know why sfdx-hardis is installed? I can't find a reason for it, it takes quite a while to install, and it was added when checkov got a manual version bump.
>
> 372840c
>
> I'm going to remove it for now; we can add it back if it's needed.

Well, it turns out the maintainer/creator of sfdx-hardis is @nvuillam, so you can ask him directly ;)

@bdovaz
Collaborator

bdovaz commented Oct 13, 2023

@waterfoul how is this PR coming along? It would be a shame for it to be abandoned after all your work.

@echoix
Collaborator

echoix commented Oct 14, 2023

I was thinking last month that maybe we could merge into the branch at some point, then try to bring some of the build changes to main before the full multiplatform changes are done. There were some nice improvements and nice patterns here ;)

Labels: nostale (This issue or pull request is not stale, keep it open)