
[Nip-46] - Issue when using nip 44 for encrypting/decrypting bunker requests #1712

Open
greenart7c3 opened this issue Jan 22, 2025 · 17 comments


@greenart7c3
Contributor

When the client and signer are using NIP-44 for nsecbunker and you try to encrypt a large message, e.g. when following 1000+ people, the NIP-44 encryption doesn't work because it has a limit of 65535 bytes:

nips/44.md, line 87 in cc3fbab:

- Validate plaintext length. Minimum is 1 byte, maximum is 65535 bytes
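
(For reference, the failing path is just a plaintext length check before padding. Below is a minimal, self-contained sketch of that check, with hypothetical names and constants; each NIP-44 library has its own.)

package main

import (
    "errors"
    "fmt"
)

// NIP-44 v2 limits: the padding prefix is a u16, so the plaintext
// cannot exceed 65535 bytes.
const (
    minPlaintextSize = 1
    maxPlaintextSize = 65535
)

func validatePlaintext(plain []byte) error {
    if len(plain) < minPlaintextSize || len(plain) > maxPlaintextSize {
        return errors.New("invalid plaintext length")
    }
    return nil
}

func main() {
    // A follow list with 1000+ entries easily serializes past 64 KiB.
    big := make([]byte, 80_000)
    fmt.Println(validatePlaintext(big)) // prints: invalid plaintext length
}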

@staab
Member

staab commented Jan 22, 2025

@vitorpamplona @paulmillr what's the reason for the maximum? Is it something we could get rid of, or would that compromise security? It never occurred to me that the payload size would be too small.

@paulmillr
Contributor

It was unnecessary to be longer than 64K because all protocol messages were smaller than that. So it was easy to hardcode / validate.

We can adjust the limit; however, some hardcoded constants that validate input length will need to be updated.

Is it hard to split the messages?

@vitorpamplona
Collaborator

ohhhh, that can explain many of the decryption bugs we are seeing. Lists can easily go beyond 65KB of data.

I don't think we should be checking for size. If it doesn't explode with an OutOfMemory error, we should just encrypt and decrypt.

@paulmillr
Contributor

ohhhh, that can explain many of the decryption bugs we are seeing

What makes it hard to simply split the plaintexts into 65K chunks? The limitation has been there for 1+ year since the spec went live.
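
(For what it's worth, the sender-side chunking itself would be a small loop like the sketch below, assuming an existing encryptV2 function; the hard part is agreeing on how receivers detect and reassemble the chunks, which is what the following comments get into.)

// Hypothetical sketch of splitting a large plaintext into <=65535-byte
// chunks and encrypting each one. encryptV2 stands in for an existing
// NIP-44 v2 encrypt function; how a receiver knows these ciphertexts
// belong together is the unsolved, protocol-level part.
func encryptChunked(conversationKey []byte, plaintext []byte) ([]string, error) {
    const maxChunk = 65535
    var ciphertexts []string
    for len(plaintext) > 0 {
        n := len(plaintext)
        if n > maxChunk {
            n = maxChunk
        }
        ct, err := encryptV2(conversationKey, plaintext[:n])
        if err != nil {
            return nil, err
        }
        ciphertexts = append(ciphertexts, ct)
        plaintext = plaintext[n:]
    }
    return ciphertexts, nil
}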

@vitorpamplona
Collaborator

It's not hard. It's just not the responsibility of the encryption library to dictate what size of payloads people are allowed to use.

@fiatjaf
Member

fiatjaf commented Jan 22, 2025

It is absurdly hard to split the plaintexts: we would need to make a new protocol for creating and handling these splits and get it implemented in dozens of clients, libraries, and services.

If we just increase the limit that's a simple non-breaking change everywhere.

@paulmillr
Contributor

If we just increase the limit that's a simple non-breaking change everywhere

All libraries which implement nip44 would need to be upgraded to newer versions without the constraints.

@vitorpamplona
Collaborator

All libraries which implement nip44 would need to be upgraded to newer versions without the constraints.

That's fine. Are we increasing the limit or getting rid of it? I vote for getting rid of it.

@staab
Member

staab commented Jan 22, 2025

All libraries which implement nip44 would need to be upgraded to newer versions without the constraints.

This is OK; we're already getting unexpected errors with no workaround. This change would be entirely backwards compatible and would result in better UX over time.

@mikedilger
Contributor

Splitting into multiple events would be catastrophic. Splitting into blocks within a single event (encrypted 64K at a time) is possible. But I'd rather the limit be 1MB if we need to have a limit.

Funny, I have no recollection that there was a size limit, but it is there in code I wrote.

@fiatjaf
Member

fiatjaf commented Jan 26, 2025

The big problem is that the padding spec says:

Padding format is: [plaintext_length: u16][plaintext][zero_bytes]

So how do we bypass this?
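
(The constraint is just that a u16 tops out at 65535; a larger length doesn't fit in the prefix and would silently truncate if written anyway:)

package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    size := 70_000 // e.g. a big follow list
    var prefix [2]byte
    binary.BigEndian.PutUint16(prefix[:], uint16(size)) // uint16 conversion truncates
    fmt.Println(binary.BigEndian.Uint16(prefix[:]))     // 4464, i.e. 70000 mod 65536
}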

@fiatjaf
Member

fiatjaf commented Jan 26, 2025

Well, we have another rule that says messages must have at least 1 byte, and I assume no one is making zero-byte messages out there (or that doing so would already be a bug), so maybe we can keep backwards compatibility by doing:

- When the message length is smaller than 65536, the padding format is: [plaintext_length: u16][plaintext][zero_bytes]
- When the message length is greater than or equal to 65536, the padding format is: [0: u16][plaintext_length: u32][plaintext][zero_bytes]

The changes aren't super ugly. For example, when encrypting:

        padding := calcPadding(size)
-       padded := make([]byte, 2+padding)
-       binary.BigEndian.PutUint16(padded, uint16(size))
-       copy(padded[2:], plain)
+       var padded []byte
+
+       if size < (1 << 16) {
+               padded = make([]byte, 2+padding)
+               binary.BigEndian.PutUint16(padded[0:2], uint16(size))
+               copy(padded[2:], plain)
+       } else {
+               padded = make([]byte, 6+padding)
+               binary.BigEndian.PutUint32(padded[2:6], uint32(size))
+               copy(padded[6:], plain)
+       }

        ciphertext, err := chacha(cc20key, cc20nonce, []byte(padded))

When decrypting:

-       unpaddedLen := binary.BigEndian.Uint16(padded[0:2])
-       if unpaddedLen < uint16(MinPlaintextSize) || unpaddedLen > uint16(MaxPlaintextSize) ||
-               len(padded) != 2+calcPadding(int(unpaddedLen)) {
+       unpaddedLen := int(binary.BigEndian.Uint16(padded[0:2]))
+       offset := 2
+       if unpaddedLen == 0 {
+               unpaddedLen = int(binary.BigEndian.Uint32(padded[2:6]))
+               offset = 6
+       }
+
+       if unpaddedLen < 1 || len(padded) != offset+calcPadding(unpaddedLen) {
                return "", fmt.Errorf("invalid padding")
        }

-       unpadded := padded[2:][:unpaddedLen]
-       if len(unpadded) == 0 || len(unpadded) != int(unpaddedLen) {
+       unpadded := padded[offset : offset+unpaddedLen]
+       if len(unpadded) == 0 || len(unpadded) != unpaddedLen {
                return "", fmt.Errorf("invalid padding")
        }

@paulmillr
Contributor

paulmillr commented Jan 27, 2025

Can't this just use a new version? Seems like exactly the kind of problem that's solved by bumping the version number. I'd be happy to edit the spec & re-gen test vectors.

Editing existing versions seems like a mess. Just look at the idiotic overengineering decisions made in btc.

@greenart7c3
Contributor Author

Yes, I think we should use a new version for this

@fiatjaf
Member

fiatjaf commented Jan 27, 2025

Making a new version means 15 libraries and clients have to migrate before we can start using the new version. Doing this (extending the padding format) means it just works.

Bitcoin is a live example of how versioning doesn't work and hacks like this are necessary.

@staab
Member

staab commented Jan 27, 2025

A new version seems like the right way to handle this. We can maintain compatibility by having senders use version 2 for anything under the size limit, then switching to version 3 only when they hit the limit. For receivers, only the stuff that was broken before will be using version 3.
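
(A sketch of that sender-side dispatch, with encryptV2 and encryptV3 as hypothetical stand-ins for the current and proposed versions:)

// Hypothetical sender-side dispatch for this proposal: stay on v2 for
// payloads under the limit, switch to v3 (larger length prefix, otherwise
// identical) only when the plaintext doesn't fit in v2.
func encryptAuto(conversationKey []byte, plaintext []byte) (string, error) {
    if len(plaintext) <= 65535 {
        return encryptV2(conversationKey, plaintext)
    }
    return encryptV3(conversationKey, plaintext)
}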

@mikedilger
Contributor

mikedilger commented Jan 27, 2025

I was going to agree with @fiatjaf, but then I read @staab's proposal, which ticks all the boxes. Just make sure v3 is as similar to v2 as possible.
