Fixing Multiplatform Builds #2549
base: dev/multi-platform-images
Conversation
Dockerfile (outdated)
@@ -336,6 +369,7 @@ COPY --link --from=terragrunt /usr/local/bin/terragrunt /usr/bin/
COPY --link --from=terragrunt /bin/terraform /usr/bin/
COPY --link --from=kics /app/bin/kics /usr/bin/
COPY --from=kics /app/bin/assets /opt/kics/assets/
COPY --from=cargo /bin/* /usr/bin/
When I added the COPY --link, I wasn't sure what would happen when multiple files are copied and some files already exist in the destination folder. If you would use COPY --link here anyway, since you are already using it in your configuration, go for it and change it ;)
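For reference, a sketch of how that line could look with --link applied (whether the merge semantics are acceptable when files already exist in /usr/bin is exactly the open question above):

```dockerfile
# COPY --link copies into an independent layer that BuildKit merges on top,
# so the step no longer depends on the filesystem state of previous layers.
COPY --link --from=cargo /bin/* /usr/bin/
```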
Done. I also tweaked deploy-DEV.yml to (hopefully) properly build the image and start its tests (I haven't touched the tests themselves yet).
I see that this runs within my local repo, so I'll tinker with it until I can get the tests to run.
If you manage to get the build working, you are effectively solving one of the blockers of #2273 (since nothing works for now). But for the changes to the YAML workflow files, I would have considered targeting them to main: if it works well, it will be a benefit even without having the multi-platform images ready (and we pull changes from main into the dev branch afterwards).
@waterfoul If you want to make another dummy PR targeting the dev/multi-platform-images branch with any change, and have us merge it, then you'll no longer be a first-time collaborator and the CI checks will run here without us approving them each time, which should help you iterate faster.
I'll do that once I get to a stopping point.
@waterfoul it seems that the build job is unhappy about the rust version, if it can help ^^
#185 1782.9 error: package
Got sick over the weekend, so I wasn't working on this. Hopefully I'll work the rest of the kinks out today.
.automation/build.py (outdated)
"RUN PYTHONDONTWRITEBYTECODE=1 pip3 install" | ||
" --no-cache-dir --upgrade pip virtualenv \\\n" | ||
" --no-cache-dir --upgrade pip crossenv \\\n" |
It's the first time I've seen crossenv; is it part of Python like venv? I know virtualenv is not recommended anymore since that functionality was made part of Python. Is it a new recommended way?
We used virtual environments since it was difficult to have all the packages without conflicting dependencies (which would prevent having the latest versions). One of my ideas was to one day split the installation of the environments into different stages and then combine those stages, since there are a lot of packages that don't have wheels for musl, and that takes a long time. What was preventing me from doing it right away is that I wasn't sure how the copy steps would work with the symlinks that virtualenv might create to decouple common dependencies or interpreters.
If you have experience handling this, I would be curious to hear your opinion.
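A rough sketch of that idea, with hypothetical stage names, paths, and packages; using venv's --copies flag is one way to sidestep the symlinked-interpreter concern mentioned above:

```dockerfile
# Each Python-based tool gets its own environment in its own stage...
FROM python:3.11-alpine AS checkov-install
RUN python -m venv --copies /venvs/checkov \
 && /venvs/checkov/bin/pip install --no-cache-dir checkov

FROM python:3.11-alpine AS yamllint-install
RUN python -m venv --copies /venvs/yamllint \
 && /venvs/yamllint/bin/pip install --no-cache-dir yamllint

# ...and the final image just copies the whole environment directories over.
FROM python:3.11-alpine
COPY --from=checkov-install /venvs/checkov /venvs/checkov
COPY --from=yamllint-install /venvs/yamllint /venvs/yamllint
ENV PATH="/venvs/checkov/bin:/venvs/yamllint/bin:${PATH}"
```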
Later on I saw how you separated the download and install of Python packages. You might disregard some of the thoughts written earlier.
> It's the first time I've seen crossenv; is it part of Python like venv? I know virtualenv is not recommended anymore since that functionality was made part of Python. Is it a new recommended way?
It's a different project that does something similar: it builds a virtual environment but allows cross compiling. That way you can use the amd64 processor to build the arm64 libs (which is MUCH faster): https://pypi.org/project/crossenv/. It's really only something you want to use if you are building multiarch images.
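For illustration, the basic crossenv workflow (per its documentation) looks roughly like this; the target-interpreter path and the package are placeholders, and producing the target-arch Python is left out:

```dockerfile
# Hypothetical sketch of a crossenv stage: everything executes on the build
# platform, but wheels are built against a Python compiled for the target arch.
FROM --platform=$BUILDPLATFORM python:3.11-alpine AS python-cross
# /opt/target-python is assumed to hold a Python built for the target arch
# (e.g. copied from another stage); it is not produced here.
RUN pip install --no-cache-dir crossenv \
 && python -m crossenv /opt/target-python/bin/python3 /venv-cross \
 && . /venv-cross/bin/activate \
 && cross-pip install --no-cache-dir checkov
```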
> We used virtual environments since it was difficult to have all the packages without conflicting dependencies (which would prevent having the latest versions). One of my ideas was to one day split the installation of the environments into different stages and then combine those stages, since there are a lot of packages that don't have wheels for musl, and that takes a long time. What was preventing me from doing it right away is that I wasn't sure how the copy steps would work with the symlinks that virtualenv might create to decouple common dependencies or interpreters.
> If you have experience handling this, I would be curious to hear your opinion.
Are you thinking about doing separate Docker stages or separate GitHub stages? I'm noticing more cache misses than I'd like in BuildKit, so we need to fix that one first; once that's fixed we can look at separate stages (it's how I'm doing our internal build). I was planning on looking into cache optimization after I got it working.
> Later on I saw how you separated the download and install of Python packages. You might disregard some of the thoughts written earlier.
I probably did that for a reason other than what you'd expect. With a multiarch build, build stages that run with `--platform=$BUILDPLATFORM` only run once (instead of once for each arch) until you hit an `ARG TARGETPLATFORM` (or similar) statement. With that in mind, you want to do as much as you can (e.g. download packages) before the build splits between the different arches.
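To make the stage-splitting point concrete, here is a minimal sketch; the stage names and the download URL are placeholders, not part of this PR:

```dockerfile
# Pinned to the build platform with no target-specific input: BuildKit can run
# this stage once and share the result across every requested architecture.
FROM --platform=$BUILDPLATFORM alpine:3.18 AS downloads
RUN wget -q -O /tmp/sources.tar.gz https://example.com/sources.tar.gz

# Still pinned to the build platform, but TARGETPLATFORM takes a different value
# per target, so from this stage on the work is repeated once per architecture.
FROM --platform=$BUILDPLATFORM alpine:3.18 AS cross-build
ARG TARGETPLATFORM
COPY --from=downloads /tmp/sources.tar.gz /tmp/
RUN echo "cross-building for ${TARGETPLATFORM}" \
 && mkdir -p "/out/${TARGETPLATFORM}" \
 && tar -xzf /tmp/sources.tar.gz -C "/out/${TARGETPLATFORM}"
```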
I like what's happening here... with all the great minds working on the multi-arch topic for a while now, I'm pretty confident that MegaLinter will be multi-arch someday, and much more optimized :)
Once the changes needed in the descriptor format are finished (if my initial spec definition doesn't handle the cases needed), I'll update the jsonschema according to what is needed.
I seem to remember that this PR had to do with #2273. I don't have much idea of either the technologies or what this PR solves, but I'm just writing to say that I've done a rebase of the #2273 branch on main and it's already updated, in case it helps. @waterfoul I hope you keep working on this PR.
@bdovaz Yes, this has to do with the other branch. I've been a bit busy and haven't been able to circle back over here for a while. I'm pretty close to making our internal image work (which a lot of this is based on), so I may be able to get back here soonish. I am also going to be on vacation in a few weeks, so that will delay this as well.
@waterfoul thanks a lot for your work :)
Sorry, I haven't been able to get back to this for the last week or so, and I'll be on vacation for the next three weeks. I managed to get the build working for x86 and started getting the tests going. My plan is to loop back to arm after the x86 tests work (the Dockerfile supports it, but I was getting timeout/disk space errors).
This pull request has been automatically marked as stale because it has not had recent activity. If you think this pull request should stay open, please remove the
The luarocks failure is because the Alpine package has a different binary name to call, since it is split into subpackages. If we look at the APKBUILD of luarocks for Alpine here: https://git.alpinelinux.org/aports/tree/community/luarocks/APKBUILD, we see the versions of Lua that they use to create the subpackages. In the MR that added the patch file https://git.alpinelinux.org/aports/tree/community/luarocks/prefer-curl-to-wget.patch to https://git.alpinelinux.org/aports/tree/community/luarocks?h=master, i.e. this MR: https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/48613, an example of running luarocks is shown:

```
> apk add luarocks5.1 lua5.1-dev
> luarocks-5.1 install argparse # fails
# now apply the fix manually:
> apk add wget
> luarocks-5.1 install argparse # succeeds
```

In the Docker build logs, at line 2443 (https://github.com/oxsecurity/megalinter/actions/runs/5647726454/job/15298447734?pr=2549#step:10:2447), we see that lua5.4 (5.4.6-r0) is installed.
So, ea0cd80#diff-b5c2204997905019d3b42466eb0d6d216f140759bef65e5294e54f5b2b1a384aR26 should have
However, it seems that on Alpine 3.17 the luarocks package installed is 2.4.4-r2 (https://pkgs.alpinelinux.org/package/v3.17/community/x86_64/luarocks). Was Alpine 3.18 just recently released and not yet used for MegaLinter?
Seeing the failures, it seems that we either need to explicitly install lua5.3 (https://pkgs.alpinelinux.org/package/v3.17/main/x86_64/lua5.3) or use a Python alpine3.18 base image, i.e. python:3.11.4-alpine3.18 instead of python:3.11.4-alpine3.17.
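For illustration, a sketch of what the Alpine 3.18 route could look like; the package and binary names come from the APKBUILD discussion above, and argparse is just an example rock:

```dockerfile
# Hypothetical fix sketch: on newer Alpine, luarocks is split into per-Lua-version
# subpackages, so both the apk package and the binary carry a version suffix.
FROM python:3.11.4-alpine3.18
RUN apk add --no-cache lua5.4 lua5.4-dev luarocks5.4 wget \
 && luarocks-5.4 install argparse
```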
Well, it turns out the maintainer/creator of sfdx-hardis is @nvuillam, so you can ask him directly ;)
@waterfoul how is this PR going? It would be a shame for it to be abandoned after all your work.
I was thinking last month that maybe we could at some point merge into the branch, then try to bring some of the build changes to main before having the full multiplatform changes done. There were some nice improvements and nice patterns here ;)