Cherrypicker missing many (all?) cherrypicks from label #2309
There is no special configuration; the plugin simply looks for the […]
FYI: istio/istio.io#6332
All of the errors that I am seeing thus far are from e.g. istio/istio#20543:

```
{
 insertId: "170kjw2f50zixp"
 jsonPayload: {
  error: "git fetch error: exit status 128. output: fatal: remote error: upload-pack: not our ref 2df38397f8b7ef9e1415dd777ddce3c64a0d149d
fatal: The remote end hung up unexpectedly
"
  event-GUID: "ecdba800-4157-11ea-9f97-d30febca7c89"
  event-type: "pull_request"
  level: "info"
  msg: "Cherry-pick failed."
  plugin: "cherrypick"
 }
 labels: {
  compute.googleapis.com/resource_name: "gke-prow-prod-pool-fb335f1f-5rds"
  container.googleapis.com/namespace_name: "default"
  container.googleapis.com/pod_name: "cherrypicker-6895fb69fc-mcchq"
  container.googleapis.com/stream: "stderr"
 }
 logName: "projects/istio-testing/logs/cherrypicker"
 receiveTimestamp: "2020-01-27T22:54:27.758925058Z"
 resource: {
  labels: {
   cluster_name: "prow"
   container_name: "cherrypicker"
   instance_id: "294245110686867759"
   namespace_id: "default"
   pod_id: "cherrypicker-6895fb69fc-mcchq"
   project_id: "istio-testing"
   zone: "us-west1-a"
  }
  type: "container"
 }
 severity: "ERROR"
 timestamp: "2020-01-27T22:54:22Z"
}
```

cc @cjwagner
Well I don't see a […]
That SHA is in my fork (and the PR referenced is from me): howardjohn/istio@2df3839. I am not exactly sure where it's from, but it's not completely random?
Interesting, that commit is a merge of the merge commit of istio/istio#20543 and some other commit. I don't see anywhere we try to clone the user's fork and that commit shouldn't matter anyways since it is entirely separate from the PR... |
There are plenty more, but here are a few of the other commits scraped from the logs which also failed on […]
In particular they are all merge commits in which one of the 2 parent commits is the merge commit of a PR that requested a cherry pick. |
Running a load test using the following script (with `GIT_TRACE=1` exported):

```sh
export GIT_TRACE=1

tmp_dir=$(mktemp -d -t ci-XXXXXXXXXX)
trap 'rm -rf "$tmp_dir"' EXIT   # was: rm -rf $mktemp ($mktemp is undefined, so the temp dir was never cleaned up)

org="istio"
format_string="https://github.com/%s/%s.git"
repos=(
  "istio.io"
  "istio"
  "release-builder"
  "proxy"
  "envoy"
  "cni"
  "operator"
  "client-go"
)

pushd "$tmp_dir"
for repo in "${repos[@]}"; do
  src="$(printf "$format_string" "$org" "$repo")"
  git clone --mirror "$src"
  pushd "$(basename "$src")"
  git fetch
  popd
done
popd
```

fails with:

```
15:54:11.760238 git.c:439              trace: built-in: git clone --mirror https://github.com/istio/istio.git
Cloning into bare repository 'istio.git'...
warning: templates not found in /Users/clarketm/.git-templates
15:54:11.767597 run-command.c:663      trace: run_command: git-remote-https origin https://github.com/istio/istio.git
15:54:12.697024 run-command.c:663      trace: run_command: git fetch-pack --stateless-rpc --stdin --lock-pack --thin --check-self-contained-and-connected --cloning --no-progress https://github.com/istio/istio.git/
15:54:12.704064 git.c:439              trace: built-in: git fetch-pack --stateless-rpc --stdin --lock-pack --thin --check-self-contained-and-connected --cloning --no-progress https://github.com/istio/istio.git/
fatal: remote error: upload-pack: not our ref 39c3dbcd9b9c18c21ea97898922749528bbf96fe
```
I see both openshift (https://github.com/openshift/release/issues/6110) and prow (kubernetes/test-infra#15225) have been experiencing pain related to this error.
I wonder if that merge commit is somehow related to in-repo config fetching? The […]
I am still unsure about the merge commit origination. I will need to dig into the in-repo code to see.
That makes sense. So, it sounds like this is less of a bug and more of a necessary side effect of eventual consistency and replication. I am fine with more retries or a longer backoff factor; the problem is that these are hardcoded to 3 retries with a relatively short backoff factor of 2 in the […]

I guess a good first step is to try to identify, with certainty, what/who creates the merge commit(s). The […]

I can tweak the […]
Still sounds like there is a bug somewhere, given that we still don't know why the unrelated merge commit is being fetched. +1 for identifying that first before trying to work around it with retries. Since this looks like a git/GitHub bug, and since you have a simple script to reproduce it, it is probably time to escalate to GitHub support?
GitHub support ticket created. In the meantime, I will add support for more retries (or a longer time between them) to see if that mitigates the problem.
The cherrypicker fetches the patch from the GitHub API and not via the git client: https://github.com/kubernetes/test-infra/blob/b2d632960440e47f671dd3b997709aa299911456/prow/external-plugins/cherrypicker/server.go#L378 This doesn't really answer the question of why exactly this happens (maybe GitHub advertises revisions that no longer exist?), but it doesn't look like a bug in our code.
Example: istio/istio#20543, istio/istio#20442
Do we need to set up per-branch configuration for this label? I could not find anywhere that we specify this, so I assumed it applies to any `cherrypick/*` label. If the labels don't work, we should just remove them and have everyone use the command, since it seems to work more reliably.