The file is incomplete (105%). Try uploading again. #899
Comments
Hey @bits4life, I'm not sure really where to look. It seems that uploading did not work correctly, but I can't really tell. I tried it here, but could not reproduce it (though not on kubernetes and not on arm64). The warning means that there are more bytes than the reported content-length of the file. This could be a bad content-length header while uploading (not really likely to me), or some misbehaviour when chunking, or some misbehaviour when storing chunks. Does it happen with any file or is there some kind of pattern? Does it change without a reverse proxy?
Hey eikek,
First of all, thanks for your reply and the explanation.
I did some more tests.
Only plain text files seem to upload without problems, but other formats like docx, bmp, bin, pdf, etc. all have the issue.
I also ruled out the reverse proxy (Traefik in my case) by running the container as a NodePort service, where the entry IP:port is directly on the host node for the container. The problem is still the same with this setup.
Which module is handling the chunks? Maybe I can set it to trace to look for issues?
Best regards,
Nico
That sounds a bit like the plain text files are small enough to be uploaded completely in one chunk, whereas the other files are uploaded in multiple chunks. Could this be? You could try setting the log level to trace. Does it also occur when using a "normal" upload with e.g. curl?
I'm also running into this issue while on kubernetes. The logs I'm getting on a fresh instance: https://hastebin.com/veribikapu.yaml
I confirmed that the issue doesn't happen when uploading using curl.
Here's an example file that has the issue:

```json
{
  "id": "15vrEnEHMjd-n9TY1jBCnfF-aefv9ZWzym3-skSivJBWmc",
  "name": null,
  "aliasId": null,
  "aliasName": null,
  "validity": 604800000,
  "maxViews": 30,
  "password": false,
  "descriptionRaw": "",
  "description": "",
  "created": 1671740820902,
  "publishInfo": null,
  "files": [
    {
      "id": "9FmbL6W29ek-9XFR6Wv25uu-RGATEJKSMjA-vSwjC5qU7Sr",
      "filename": "305752627_5281399795241558_427954128394578030_n.jpg",
      "size": 45104,
      "mimetype": "image/jpeg",
      "checksum": "",
      "storedSize": 47154
    }
  ]
}
```
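The mismatch in that JSON can be quantified directly: the reported `storedSize` exceeds the declared `size`, which lines up with the "incomplete (105%)" message in the issue title. A minimal sketch of the arithmetic (plain Python, not Sharry code; the exact formula Sharry uses is an assumption):

```python
# Values taken from the JSON above; the percentage formula is assumed.
size = 45104         # "size" field: bytes the client declared
stored_size = 47154  # "storedSize" field: bytes actually persisted

ratio = stored_size / size * 100
print(f"{ratio:.1f}% of the declared size was stored")  # ~104.5%, i.e. roughly 105%
```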
Hello @insuusvenerati, I am having the same problem. Did you find the cause?
Same here. |
Hi there, I totally lost track of this issue… but sadly I cannot reproduce it 😟. Are there any other details that could help to reproduce? Did some/most of you change config values, like file storage? From the original description it seems not affected by the file backend, but any info might help :).
You're deploying in k8s? |
No, I don't know k8s very well, and I'd need to reproduce it without it to better analyze what's going on.
That's a good question. One thing to try is to record all the requests when uploading one such file that fails and send me the HAR export from the browser.
Hi @cerealconyogurt thank you a lot for this! I'll try to have a look over the weekend. |
@eikek Having the same issue. Because I saw it happen to many files (like pdf), I just cut out stuff from the pdf file to see what the condition for it happening is. I ended up reducing it to a single character file (see file below). So it seems some weird bytes cause this error. Interestingly, if you add for example an "a" after that character, it gets cut out in the file after uploading to sharry. In a hex editor this specific character shows up as the bytes EC, 8F. |
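One hypothesis consistent with the observation above (my speculation, not a confirmed diagnosis of Sharry's code): 0xEC 0x8F is a truncated UTF-8 sequence (0xEC is a lead byte expecting two continuation bytes), so if a binary chunk were ever decoded as text and re-encoded somewhere along the storage path, the byte count would change. A speculative illustration:

```python
# Speculative illustration only: shows how a UTF-8 decode/encode round trip
# changes the length of an invalid byte sequence. It does NOT claim Sharry
# actually does this; it just shows the mechanism that would inflate sizes.
raw = b"\xec\x8f"  # the two bytes PietHelzel observed in a hex editor

text = raw.decode("utf-8", errors="replace")  # invalid bytes become U+FFFD
round_tripped = text.encode("utf-8")          # U+FFFD re-encodes as 3 bytes each

print(len(raw), len(round_tripped))  # the round trip grows the payload
```

If something like this happened per chunk, it would also explain why appending an "a" after the bad character gets mangled, and why plain text files (valid UTF-8 throughout) upload fine.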
Just tested some older versions, it seems to work in 1.10.0, but not in 1.11.0 and newer. |
Thanks @PietHelzel for all the info. I'm sorry that it's currently a bit hard for me to find time. Also, the fact that I cannot reproduce it locally makes it quite hard to figure out. But it's a great thing to know that it has been working in an earlier version. |
I can confirm that version 1.10.0 works flawlessly.
@eikek Good news: it seems like filesystem storage on the newest version works too; it only affects the database storage.
Hi, chiming in on this issue. |
Same issue, reported details in redundant thread: #1327. I can confirm the tip the others gave: it's working with file storage and not with H2 storage.
Hi community,
I am trying to set up Sharry on my k3s cluster, and it seems to work out fine, except that every upload produces an error about an incomplete file upload (ranging from 103–105%).
Some details on my setup:

```
sharry.restserver {
  base-url = "https://sharry.xxxmydomainxxx.be"
  bind {
    address = "0.0.0.0"
    port = {{ .Values.service.main.ports.http.port }}
  }
  backend {
    auth {
      fixed {
        enabled = true # set to true to enable this auth provider
        user = "admin"
        password = "admin"
        order = 10
      }
    }
  }
}
```
I set the log level to debug and could see the following errors:

```
2022.10.30 11:46:40:0000 [io-comp...] [WARN ] org.http4s.blazecore.util.IdentityWriter - Will not write more bytes than what was indicated by the Content-Length header (347753)
2022.10.30 11:46:40:0000 [io-comp...] [ERROR] org.http4s.blaze.server.Http1ServerStage$$anon$1 - Error writing body
```
Not sure if this is related.
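For context, that http4s warning fires when a response body yields more bytes than the Content-Length header promised, so the writer truncates at the declared length. A toy sketch of that condition (a hypothetical helper, not http4s internals):

```python
# Hypothetical sketch of the condition behind the IdentityWriter warning:
# the writer refuses to emit bytes past the count promised in Content-Length.
def bytes_writable(content_length: int, chunks: list[bytes]) -> int:
    """Return how many bytes would actually be written before truncation."""
    written = 0
    for chunk in chunks:
        remaining = content_length - written
        if remaining <= 0:
            break  # roughly the point where http4s would log the warning
        written += min(len(chunk), remaining)
    return written

declared = 347753            # Content-Length value from the log above
body = [b"x" * 100000] * 4   # 400000 bytes actually produced (~5% too many)
print(bytes_writable(declared, body))  # only the declared 347753 are written
```

This matches the symptom: the stored bytes exceed the declared size, and the excess gets cut off or reported as an incomplete file.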
I tried different versions, switched between file store and database storage, and tried disabling the checksum, but nothing seems to make a difference.
Any ideas on how to troubleshoot further would be appreciated.
Best regards,
Nico