I am trying to quantize my real-time causal audio model (Conv2D + GRU + Dense + TransConv2D). The model has buffers for the Conv2D and TransConv2D layers and a hidden state for the GRU; these must be passed in with every frame, starting from zero values, and updated after each frame.
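To make the setup concrete, the per-frame inference loop described above can be sketched as follows. This is a minimal illustration, not the actual model: the `model` callable, the state shapes, and `run_stream` are all hypothetical stand-ins for whatever the real exported model uses.

```python
import numpy as np

# Hypothetical shapes, purely for illustration; the real model defines these.
FRAME_LEN = 257                            # e.g. one spectral frame
CONV_BUF_SHAPE = (1, 16, 2, FRAME_LEN)     # lookback buffer for the causal Conv2D
GRU_H_SHAPE = (1, 128)                     # GRU hidden state
DECONV_BUF_SHAPE = (1, 16, 2, FRAME_LEN)   # overlap buffer for the TransConv2D

def run_stream(model, frames):
    """Run a stateful model frame by frame, threading the buffers through.

    All states start at zero and each call returns the updated states,
    which are fed back in on the next frame.
    """
    conv_buf = np.zeros(CONV_BUF_SHAPE, dtype=np.float32)
    gru_h = np.zeros(GRU_H_SHAPE, dtype=np.float32)
    deconv_buf = np.zeros(DECONV_BUF_SHAPE, dtype=np.float32)
    outputs = []
    for frame in frames:
        out, conv_buf, gru_h, deconv_buf = model(frame, conv_buf, gru_h, deconv_buf)
        outputs.append(out)
    return outputs
```

The key point is that the buffers are external inputs/outputs of the model, which matters for how calibration data is constructed below.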
@basp-2024 The calibration dataset is expected to be a subset of, or representative of, the training/validation data. In this case, you could treat this as a model with multiple inputs (you might have to save the buffer and hidden-state values from the float model).
Please share more details if you are facing a specific issue.
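The suggestion above, saving buffer values from the float model so the quantizer can treat them as additional inputs, could look something like the following sketch. Everything here is an assumption: `collect_calibration_samples`, the state shapes, and the `float_model` signature are hypothetical, and the resulting tuples would be fed to whatever calibration API your quantization toolkit provides.

```python
import numpy as np

# Hypothetical state shapes, for illustration only; the real model defines these.
CONV_BUF_SHAPE = (1, 16, 2, 257)
GRU_H_SHAPE = (1, 128)
DECONV_BUF_SHAPE = (1, 16, 2, 257)

def collect_calibration_samples(float_model, utterances):
    """Run the float model frame by frame over representative audio and
    record every (frame, conv_buf, gru_h, deconv_buf) tuple as one
    multi-input calibration sample, so the quantizer observes realistic
    buffer/state statistics rather than only the all-zero initial values."""
    samples = []
    for frames in utterances:
        # States reset to zero at the start of each utterance, as at runtime.
        conv_buf = np.zeros(CONV_BUF_SHAPE, dtype=np.float32)
        gru_h = np.zeros(GRU_H_SHAPE, dtype=np.float32)
        deconv_buf = np.zeros(DECONV_BUF_SHAPE, dtype=np.float32)
        for frame in frames:
            samples.append((frame, conv_buf, gru_h, deconv_buf))
            _, conv_buf, gru_h, deconv_buf = float_model(
                frame, conv_buf, gru_h, deconv_buf)
    return samples
```

If calibration instead feeds zeros (or otherwise unrealistic values) into the buffer inputs, the quantization ranges for those inputs will not match what the model sees at runtime, which is a common cause of degraded accuracy in stateful models.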
I'm attaching details of how I calibrate and quantize the model. The model takes a single time frame as input, along with the encoder (Conv2D), GRU, and decoder (Conv2D transpose) buffers. These buffers must be passed with every frame, starting from zero values, and updated with each frame. However, the results are not as good as expected, and I suspect the calibration process might be the issue. Could you please help me improve the performance?