[Nip-46] - Issue when using nip 44 for encrypting/decrypting bunker requests #1712
@vitorpamplona @paulmillr what's the reason for the maximum? Is it something we could get rid of, or would that compromise security? It never occurred to me that the payload size would be too small.
It was unnecessary to be longer than 64K because all protocol messages were smaller than that, so it was easy to hardcode and validate. We can adjust the limit; however, some hardcoded constants which check input length will need to be adjusted. Is it hard to split the messages?
ohhhh, that can explain many of the decryption bugs we are seeing. Lists can easily go beyond 65KB of data. I don't think we should be checking for size. If it doesn't explode with OutOfMemory we should encrypt and decrypt.
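For context, the hardcoded check being discussed looks roughly like the sketch below. This is only an illustration using the constant names that appear later in this thread, not the code of any particular library; the spec's bounds of 1 and 65535 bytes are what a 2-byte big-endian length prefix can represent.

```go
package main

import "fmt"

// Sketch of the kind of hardcoded bounds check under discussion.
const (
	MinPlaintextSize = 1     // at least one byte of plaintext
	MaxPlaintextSize = 65535 // largest value a 2-byte big-endian length prefix can hold
)

func checkPlaintextSize(plain []byte) error {
	if len(plain) < MinPlaintextSize || len(plain) > MaxPlaintextSize {
		return fmt.Errorf("plaintext length %d outside [%d, %d]",
			len(plain), MinPlaintextSize, MaxPlaintextSize)
	}
	return nil
}

func main() {
	// A large follow list easily exceeds the limit and is rejected today.
	fmt.Println(checkPlaintextSize(make([]byte, 70_000)))
}
```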
What makes it hard to simply split the plaintexts into 65K chunks? The limitation has been there for over a year, since the spec went live.
It's not hard. It's just not the responsibility of the encryption library to dictate what size of payloads people are allowed to use.
It is absurdly hard to split the plaintexts: we would need to make a new protocol for creating and handling these splits and get it implemented in dozens of clients, libraries, and services. If we just increase the limit, that's a simple non-breaking change everywhere.
All libraries which implement nip44 would need to be upgraded to newer versions without the constraints.
That's fine. Are we increasing the limit or getting rid of it? I vote for getting rid of it.
This is ok; we're already getting unexpected errors with no workaround. This change would be entirely backwards compatible, and result in better UX over time.
Splitting into multiple events would be catastrophic. Splitting into blocks within a single event (encrypted 64K at a time) is possible. But I'd rather the limit be 1MB if we need to have a limit. Funny, I have no recollection that there was a size limit, but it is there in code I wrote.
The big problem is that the padding spec says the unpadded plaintext length is written into a 2-byte big-endian prefix, so it cannot represent anything above 65535. So how do we bypass this?
Well, we have another rule that says messages must have at least 1 byte, and I assume no one is making zero-byte messages out there (or that would already be a bug), so maybe we can keep backwards compatibility by doing this: use a zero value in the 2-byte length prefix to signal that the real length follows as a 4-byte big-endian integer.
The changes aren't super ugly. For example, when encrypting:

```diff
 padding := calcPadding(size)

-padded := make([]byte, 2+padding)
-binary.BigEndian.PutUint16(padded, uint16(size))
-copy(padded[2:], plain)
+var padded []byte
+
+if size < (1 << 16) {
+	padded = make([]byte, 2+padding)
+	binary.BigEndian.PutUint16(padded[0:2], uint16(size))
+	copy(padded[2:], plain)
+} else {
+	padded = make([]byte, 6+padding)
+	binary.BigEndian.PutUint32(padded[2:6], uint32(size))
+	copy(padded[6:], plain)
+}

 ciphertext, err := chacha(cc20key, cc20nonce, []byte(padded))
```

When decrypting:

```diff
-unpaddedLen := binary.BigEndian.Uint16(padded[0:2])
-if unpaddedLen < uint16(MinPlaintextSize) || unpaddedLen > uint16(MaxPlaintextSize) ||
-	len(padded) != 2+calcPadding(int(unpaddedLen)) {
+unpaddedLen := int(binary.BigEndian.Uint16(padded[0:2]))
+offset := 2
+if unpaddedLen == 0 {
+	unpaddedLen = int(binary.BigEndian.Uint32(padded[2:6]))
+	offset = 6
+}
+
+if unpaddedLen < 1 || len(padded) != offset+calcPadding(unpaddedLen) {
 	return "", fmt.Errorf("invalid padding")
 }

-unpadded := padded[2:][:unpaddedLen]
-if len(unpadded) == 0 || len(unpadded) != int(unpaddedLen) {
+unpadded := padded[offset : offset+unpaddedLen]
+if len(unpadded) == 0 || len(unpadded) != unpaddedLen {
 	return "", fmt.Errorf("invalid padding")
 }
```
Can't this just use a new version? Editing existing versions seems like a mess. Just look at the idiotic overengineering decisions made in btc.
Yes, I think we should use a new version for this.
Making a new version means 15 libraries and clients have to migrate before we can start using the new version. Doing this means it just works. Bitcoin is a live example of how versioning doesn't work and hacks like this are necessary.
A new version seems like the right way to handle this. We can maintain compatibility by having senders use version 2 for anything under the size limit, then switching to version 3 only when they hit the limit. For receivers, only the stuff that was broken before will be using version 3.
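To make that compatibility rule concrete, here is a minimal sender-side sketch. encryptV2 and encryptV3 are placeholder stubs standing in for real implementations, not calls into any existing nip44 library.

```go
package main

import (
	"errors"
	"fmt"
)

// maxV2Plaintext is the current NIP-44 v2 ceiling discussed in this thread.
const maxV2Plaintext = 65535

// encryptV2 and encryptV3 are stand-ins for real implementations.
func encryptV2(plain []byte) (string, error) {
	if len(plain) > maxV2Plaintext {
		return "", errors.New("plaintext too large for version 2")
	}
	return "v2:<ciphertext>", nil
}

func encryptV3(plain []byte) (string, error) {
	return "v3:<ciphertext>", nil
}

// encrypt keeps every payload that already fits on version 2, so existing
// receivers are unaffected, and uses version 3 only for the oversized
// payloads that currently fail outright.
func encrypt(plain []byte) (string, error) {
	if len(plain) <= maxV2Plaintext {
		return encryptV2(plain)
	}
	return encryptV3(plain)
}

func main() {
	fmt.Println(encrypt(make([]byte, 100)))    // fits the existing limit
	fmt.Println(encrypt(make([]byte, 70_000))) // would fail under v2 alone
}
```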
When the client and signer are using nip 44 for nsecbunker and you try to encrypt a large message, e.g. following 1000+ people, the nip 44 encryption doesn't work because it has a limit of 65535 bytes.
nips/44.md, line 87 in cc3fbab
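A rough back-of-the-envelope calculation shows why a 1000+ person follow list hits the ceiling. The per-contact size below is an assumed estimate (one ["p","<64-char hex pubkey>"] tag each, before any relay hints or petnames), not a figure from the spec.

```go
package main

import "fmt"

func main() {
	// Each follow contributes one ["p","<64-hex-pubkey>"] tag of JSON,
	// so the tag list alone already exceeds the 65535-byte limit well
	// before the NIP-46 request wrapper or encryption come into play.
	const bytesPerContact = len(`["p","`) + 64 + len(`"],`) // ~73 bytes per entry
	contacts := 1000
	fmt.Printf("%d contacts ≈ %d bytes of tags alone (limit is 65535)\n",
		contacts, contacts*bytesPerContact)
}
```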