Revert ARM changes #92
Conversation
Well damn. 😬
Weirdly, after the revert things are still not building correctly. Looking at the failing base images (Alpine, PG 11), it has:
Which makes no sense, as we're specifically installing the zlib-dev package as part of the Dockerfile for Alpine (docker-pgautoupgrade/Dockerfile.alpine, line 42 at commit 536d763).
The installation of the zlib dev library happens on line 207:
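Neither referenced snippet made it into this thread, so purely as a rough sketch (not the repo's actual lines): on Alpine the zlib headers come from the zlib-dev package, and PostgreSQL's `./configure --with-zlib` check fails when they can't be found.

```sh
# Sketch only – the real Dockerfile lines aren't quoted above.
# On Alpine, the zlib headers used by PostgreSQL's --with-zlib build come from zlib-dev:
apk add --no-cache zlib-dev
```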
I wonder if the problem is bad caching of some variety?
... could be? I mean, we could delete the caches?
Ahhh, hadn't thought of that. That's a good idea. In the meantime, I've just temporarily disabled zlib support in the PG 11 build to see if the failure changes at all. I'm kind of suspecting it won't, but let's see what happens. 😄
funny enough, the final images were all able to build the
My temporary commit didn't help at all, so I'm going to revert it and try your idea of killing the caches.
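For reference, a sketch of what clearing the workflow caches can look like with the GitHub CLI (assuming a gh version new enough to have the built-in `cache` subcommand; the same thing can also be done from the repo's Actions settings page):

```sh
# List and then delete the GitHub Actions caches for the repo
# (sketch only, not necessarily the exact commands used here):
gh cache list --repo pgautoupgrade/docker-pgautoupgrade
gh cache delete --all --repo pgautoupgrade/docker-pgautoupgrade
```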
Hmmm, I'm not seeing image(s) on Docker Hub with
ah yeah,
Killed all of the caches. Looking over the failing log this time (a different one, yet again):
It seems like it's running the compile process for PG twice (for the same ARM64 arch), and in this particular case it somehow thinks during the second compile that the C compiler isn't producing valid programs. I say "during the second compile" because it runs through the whole process a few thousand lines earlier, successfully compiling and installing PG for ARM64 the first time around. Any ideas? I wonder if we need to get the config.log to be output (ie
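One way to do that (a sketch, and an assumption on my part rather than what the repo currently does) is to have the build step dump config.log whenever configure bails out, so the CI log shows why the compiler check tripped:

```sh
# Sketch: if configure fails, print config.log so the CI output shows the real
# reason behind the "C compiler isn't producing valid programs" failure.
./configure || { cat config.log; exit 1; }
```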
Meh, I'm pretty certain it's more some kind of flakiness with GitHub's infrastructure. I just re-ran one of the successful CI runs from last week, and it failed this time around. https://github.com/pgautoupgrade/docker-pgautoupgrade/actions/runs/12970259213
maybe some of the ARM infrastructure is leaking over to the AMD64 infrastructure? 😄 I mean, it shouldn't be possible ... as long as we can build the final images it is okay, but the pipeline will take much longer.
I am really confused why the builds do not work. At least, it appears to always fail at the same point in the Alpine image:
I rebuilt the Alpine image on my local machine without any issues. The first time the build failed was two weeks ago (26th of January). The build the week before (19th of January) passed. So it is likely not related to the fact that GitHub changed the default Ubuntu image from 22.04 to 24.04. But what is also strange is that
Yeah, it makes no sense to me either. I was wondering if one of the external things we call might have changed (ie the Alpine or Debian base image), but that wouldn't seem to consistently cause the behaviour we're seeing.
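As a side note, one way to cross-check the local result against a possible upstream base-image change is to rebuild with caching disabled and the base image re-pulled. This is only a sketch: any build args the Dockerfile needs are omitted here and the tag name is made up.

```sh
# Rebuild the Alpine image from scratch, re-pulling the base image and skipping
# every cached layer, to rule out stale layers or an upstream base-image change.
docker build --pull --no-cache -f Dockerfile.alpine -t pgautoupgrade:local-test .
```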
I now re-activated
well, nevermind. segmentation fault when building the ARM version of the Postgres v15 base image:
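The segfault output itself isn't captured above. Purely as an assumption on my part (nothing in this thread confirms it as the cause), segfaults in QEMU-emulated ARM builds are sometimes down to stale binfmt handlers on the runner, and refreshing them looks like this:

```sh
# Re-register up-to-date QEMU handlers for all emulated architectures
# (assumption only: not verified as the cause of this particular segfault).
docker run --privileged --rm tonistiigi/binfmt --install all
```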
I'm far from certain as I haven't checked very thoroughly yet, but using qemu and buildx runs the ARM and AMD builds in parallel on the same machine, right? If there is caching that is defined for the whole workflow, would the AMD and ARM builds be sharing the same cache? Could that cause the issue we are seeing? That said, I have never experienced such an issue before with similar multi-arch workflows.
correct.
I am not quite sure how the GitHub Actions cache created by the Docker
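In case it helps, here is a sketch of per-platform cache scoping with buildx's GitHub Actions cache backend. The scope names are invented for illustration, and in the workflow itself this would normally go through the build action's cache-from/cache-to inputs rather than a raw CLI call:

```sh
# Give each platform its own cache scope so the ARM64 and AMD64 builds can't
# overwrite each other's cached layers (scope names are made up here):
docker buildx build \
  --platform linux/arm64 \
  --cache-from type=gha,scope=alpine-arm64 \
  --cache-to type=gha,mode=max,scope=alpine-arm64 \
  -f Dockerfile.alpine .
```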
same here. although at this point I think the failing build on main is disconnected from the failing build on my feature branch using your new workflow. I assume the build on the feature branch is simply too much load on one machine; that's why it's crashing. I'll open a new PR to switch to the Ubuntu 22.04 image. maybe this could help, not sure.
very funky, the build appears to work for all base images on Ubuntu 22.04. so it could really be that the 24.04 image has some kind of issue? I think 22.04 will be supported for quite some time ... so let's stay on that version until we need to upgrade 😄
Yeah, this was a super weird one. Glad something worked to get things building properly again though. 😄
it seems that if you build an image for different architectures on different machines, the last one pushed overwrites the previous one, even if the architectures are different. right now the pipeline on main passed and only the ARM64 versions are present ...
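If it comes to that, the usual way around it (a sketch with made-up image names and tags, not necessarily the project's actual ones) is to push each architecture under its own tag from its own runner and then stitch them into a single multi-arch tag afterwards:

```sh
# Combine the per-arch pushes into one multi-arch manifest, so a later push from
# the other runner can't clobber the first (image names/tags are illustrative only):
docker buildx imagetools create \
  -t pgautoupgrade/pgautoupgrade:15-alpine \
  pgautoupgrade/pgautoupgrade:15-alpine-amd64 \
  pgautoupgrade/pgautoupgrade:15-alpine-arm64
```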