I ran into a very hard-to-find issue where the cache of files that have been edited was not getting invalidated. Here is a Dockerfile to reproduce:
FROM busybox AS final
ARG CACHE_BUSTER=default
RUN echo "Cache busting value: $CACHE_BUSTER"
COPY 1/something.py .cache_buster /home/nonroot/main/
RUN cat /home/nonroot/main/something.py
I have a remote buildkit daemon and was using this call to trigger it:
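The exact command wasn't captured above; for context, a typical buildctl invocation against a remote daemon looks roughly like the sketch below. The address, paths, and image name here are placeholders, not the original command:

```shell
# Hypothetical sketch of a remote buildctl build; addr/context/tag are placeholders
buildctl --addr tcp://buildkitd.example.com:1234 build \
  --frontend dockerfile.v0 \
  --local context="$CONTEXT" \
  --local dockerfile="$CONTEXT" \
  --opt build-arg:CACHE_BUSTER="$(date +%s)" \
  --output type=image,name=registry.example.com/app:dev,push=true
```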
I tried additional things like a .cache_buster file added to the context and filled with a date, or a CACHE_BUSTER build arg filled with a date, and neither helped. I found that old cached contents (from multiple edits of something.py ago) were being left in the resulting image.
It turned out I was doing this in the prep for the build context before calling buildctl:
find $CONTEXT -type f -exec touch -t 202501011200.00 {} +
I thought this would be safer in case some hashing depended on file timestamps. Something definitely does, but in unexpected ways. I finally noticed the issue by looking deep into the snapshots: every snapshot showed Jan 1 12:00 for this file, across multiple snapshots. That somehow prevents new file changes from invalidating the cache.
My hunch is that when my local context gets prepared for transfer to the remote buildkit daemon, some step looks at the timestamps of the files, and if they haven't changed (or aren't recent) the files don't get sent to the daemon. Or, possibly worse, the daemon's snapshots collide on the same filename and timestamp, so it decides not to invalidate that line; or the hashing function checks mtime+filename before content, or something along those lines.
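The metadata hypothesis is easy to check in isolation. The sketch below is a standalone shell repro (not buildkit code): it edits a file while keeping its byte length constant and re-pinning its mtime, then shows that a size+mtime comparison, like rsync's default quick check, sees no change:

```shell
set -eu
dir=$(mktemp -d)

# Write a file and pin its timestamp, as the find/touch prep step does
printf 'print(1)\n' > "$dir/something.py"
touch -t 202501011200.00 "$dir/something.py"
before=$(stat -c '%s %Y' "$dir/something.py")   # size + mtime (GNU stat)

# Edit the content (same byte length) and re-pin the timestamp
printf 'print(2)\n' > "$dir/something.py"
touch -t 202501011200.00 "$dir/something.py"
after=$(stat -c '%s %Y' "$dir/something.py")

# Size and mtime are identical, so a metadata-only comparison treats the
# file as unchanged even though its content differs
[ "$before" = "$after" ] && echo "metadata unchanged despite new content"
rm -rf "$dir"
```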
Removing the touch command fixes it and everything works as expected, but I'm reporting this because it doesn't seem right, and I'm sure others will hit it.
When transferring local files between the client and the daemon, the comparison between the previous transfer and the new files is based on file metadata (size, timestamp, etc.). These semantics are the same as rsync's defaults, for example.
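Given that explanation, a deterministic-timestamp prep step can still coexist with metadata-based change detection if edits move the metadata being compared. One hypothetical approach (my sketch, not an official recommendation) is to derive each file's mtime from its content hash, so identical content gets an identical timestamp but any edit shifts it:

```shell
set -eu
dir=$(mktemp -d)
printf 'print(1)\n' > "$dir/something.py"
f="$dir/something.py"

# Map the first 8 hex digits of the sha256 to a seconds offset, then add it
# to a fixed epoch (1735732800 = 2025-01-01 12:00 UTC). Equal content gives
# an equal mtime; changed content almost certainly gives a different one.
h=$(sha256sum "$f" | cut -c1-8)
secs=$((16#$h % 86400))
touch -d "@$((1735732800 + secs))" "$f"
mt=$(stat -c '%Y' "$f")
rm -rf "$dir"
```

Whether this is worth the trouble depends on why the timestamps were being normalized in the first place; simply dropping the touch command, as noted above, also resolves the problem.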