Random content decoding failure #347

Open
2 tasks done
Verequies opened this issue Feb 3, 2025 · 13 comments

Comments

@Verequies

Verequies commented Feb 3, 2025

Prerequisites

  • I have written a descriptive issue title
  • I have searched existing issues to ensure the bug has not already been reported

Fastify version

5.2.1

Plugin version

8.0.2

Node.js version

20

Operating system

Windows

Operating system version (i.e. 20.04, 11.3, 10)

Server running on Linux, clients connecting from Chrome on Windows 11

Description

Over the past year I have been seeing a random bug with @fastify/compress: after some number of requests (entirely random), I eventually get a (failed) net::ERR_CONTENT_DECODING_FAILED. I am not able to reproduce this consistently, but many of my users hit it quite a lot, even within 5 minutes of using the software.

Is there any part of @fastify/compress that could be causing this bug? It does not happen when I disable compression.
Fastify and @fastify/compress are updated to the latest versions. Requests that fail usually appear to be around 50 bytes according to Chrome's network monitor.

Link to code that reproduces the bug

Not able to consistently reproduce, as it is random after some number of requests.

Expected Behavior

All requests go through successfully and (failed) net::ERR_CONTENT_DECODING_FAILED does not occur.

@Verequies Verequies changed the title from "Random dontent decoding failure" to "Random content decoding failure" Feb 3, 2025
@mcollina
Member

mcollina commented Feb 3, 2025

Unless you set up some kind of lab to reproduce it, it'll be really hard for us to help.

Does it happen only when your users are on Windows, or are you deploying on Windows? (I see Windows listed as your OS.)

What's your configuration for @fastify/compress?

@Verequies
Author

Verequies commented Feb 3, 2025

@mcollina

I have managed to create a working app to reproduce the issue consistently.
https://github.com/Verequies/bughunt

It appears to occur when adding data to the front of the payload in the onSend server hook: https://github.com/Verequies/bughunt/blob/bb435c001af9c11372abb2400c65f87e9ee71f96/src-backend/main.ts#L46.

You should be able to run the app by calling yarn, then yarn serve:backend in one terminal, and yarn serve:frontend in another. Make sure to load the site via https://localhost:8080 and not https://0.0.0.0:8080, otherwise the service worker will not load.

The software is deployed to a Linux server but the majority of users are running on Chrome under Windows.

@climba03003
Member

Do you mean you always append to the payload after sending something in your actual program?
In that case, it is bound to fail. The main reason is that your payload becomes [compressed data][your onSend hook data].

@mcollina
Member

mcollina commented Feb 3, 2025

That's because the onSend hook added to the route runs after the one added by @fastify/compress, leading to garbled data.

If you want to alter the payload pre-serialization, you could use the preSerialization hook: https://fastify.dev/docs/latest/Reference/Hooks/#preserialization.

@jsumners jsumners closed this as not planned Won't fix, can't repro, duplicate, stale Feb 3, 2025
@Verequies
Author

Verequies commented Feb 3, 2025

@mcollina

If that was the case, then should it not always fail? And when I print out the payload right before returning it to the client, it always prints uncompressed data.

The software works 95% of the time, and I am seeing this error only under some load, which led me to believe it was a bug.

Based on what you have said, I think what's actually happening is that @fastify/compress is being run after the onSend hook, but under some load it runs before. That also doesn't make much sense, given that I can print the data out just fine and the request only fails on the client side.

Could you please clarify this?

If it was working as you say it does, then yes, I would have realised that I am modifying the payload in the wrong spot.

The documentation mentions you can edit the payload in the onSend hook as well: https://fastify.dev/docs/latest/Reference/Hooks/#onsend

EDIT: I also cannot modify the payload in the preSerialization hook, as the data is a string. The documentation clearly states that the preSerialization hook does not run for string payloads. See: https://fastify.dev/docs/latest/Reference/Hooks/#preserialization

Again, it works 95% of the time with onSend, as I would expect based on the Fastify documentation. I believe there is some misunderstanding here about how @fastify/compress actually handles the data.

@Verequies
Author

Verequies commented Feb 4, 2025

@mcollina

I was able to fix this issue by encapsulating the random byte string in quotes.
It seems that whenever the random byte string started with 80, @fastify/compress would fail to properly encode the data.

While there is not much @fastify/compress can do in this situation, it might be good to change the return status from 200 OK to 500 Internal Server Error. Clients always receive 200 OK because the handler code runs fine; it is only the outgoing encoding that fails. This could be an option added in the plugin setup.

@mcollina
Member

mcollina commented Feb 4, 2025

While there is not much @fastify/compress can do in this situation, it might be good to change the return status from 200 OK to 500 Internal Server Error. Clients always receive 200 OK because the handler code runs fine; it is only the outgoing encoding that fails. This could be an option added in the plugin setup.

I don't think there is an error on the server side that we ignore. Did you find it?

@Verequies
Author

@mcollina

There is the onUnsupportedEncoding plugin setup option, which I previously missed.
It does not, however, cover compression failures, so I cannot change the return code in this situation.

Would it be possible to add this functionality? The option could be something like onEncodingError, accepting a function similar to onUnsupportedEncoding. It would also be great to receive the original payload as an argument, in case we want to "survive" a compression failure by bypassing compression while retaining the original payload.

@mcollina
Member

mcollina commented Feb 4, 2025

Sure, send a PR.

@Verequies
Author

Verequies commented Feb 5, 2025

@mcollina I am not entirely sure how to proceed with that, given that it fails silently.

I was able to reproduce the same issue with any client, be it Chrome, Firefox, or even curl or wget in the terminal. They all get the same error.

When I modified the backend to use Express instead, this error did not appear. So I again believe this is a bug in @fastify/compress, and that there is an error it is silently mishandling.

Express was tested with its compression middleware, which might give some insight into what is happening.

Furthermore, this issue can be triggered without adding the "onSend" hook at all.

Regardless, this issue was closed prematurely.

EDIT: I have updated my test repository to add Express as well, with the ability to switch between the two for testing. Both setups are pretty much identical.

@jsumners jsumners reopened this Feb 5, 2025
@climba03003
Member

We skip compression when the byte signature indicates the payload is already compressed. That is why random bytes can trigger the error you mention.

fastify-compress/index.js

Lines 515 to 523 in da5235e

```js
function zipStream (deflate, encoding) {
  return peek({ newline: false, maxBuffer: 10 }, function (data, swap) {
    switch (isCompressed(data)) {
      case 1: return swap(null, new Minipass())
      case 2: return swap(null, new Minipass())
    }
    return swap(null, deflate[encoding]())
  })
}
```

@Verequies
Author

@climba03003 Ah I see, that makes sense. No need to re-compress data that is already compressed.

I guess in situations where data may appear to be compressed already, even though it isn't, it would be good to have an option to skip the compressed check and force compression regardless.

@mcollina
Member

mcollina commented Feb 14, 2025

That feature was added a long time ago in #34. I think we might want to revisit that.
