
ReBASED #29481

Open · 2 tasks done
kabachuha opened this issue Mar 6, 2024 · 3 comments

Comments

@kabachuha commented Mar 6, 2024

Model description

Mirroring #29466: ReBased is a newer linear-attention model that adds an RMS norm to the attention forward pass and contracts Based's Taylor expansion of the softmax kernel to only its third (quadratic) term. It shows better performance than Based and, like Based, outperforms Mamba and similar recent architectures.

Open source status

  • The model implementation is available
  • The model weights are available

Provide useful links for the implementation

The repo: https://github.com/corl-team/rebased
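
For context on the kernel change described above, here is a minimal sketch of a ReBased-style feature map in plain PyTorch. This is illustrative only, not the repo's actual API: the class and parameter names are hypothetical. Based approximates exp(q·k) with the Taylor series 1 + x + x²/2 applied as a feature map; ReBased keeps only the quadratic term and normalizes the input with a learnable affine before squaring:

```python
import torch
import torch.nn as nn

class ReBasedFeatureMap(nn.Module):
    """Illustrative sketch: keep only the quadratic ("third") term of
    Based's Taylor expansion, with a learnable RMS-style normalization
    applied to queries/keys before squaring."""

    def __init__(self, head_dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        # Learnable scale/shift applied after normalization
        # (parameter names are assumptions, not from the repo).
        self.gamma = nn.Parameter(torch.ones(head_dim))
        self.beta = nn.Parameter(torch.zeros(head_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # RMS normalization over the head dimension
        x = x * torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        x = self.gamma * x + self.beta
        return x.pow(2)  # quadratic kernel; only the third expansion term survives
```

The map would be applied to both queries and keys, after which attention is computed linearly as in Based.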

@ArthurZucker (Collaborator)

If there are no model weights, the chances of merging it are very low 😢

@kabachuha (Author)

@ArthurZucker Their team has now provided the weights and generation code: corl-team/rebased#2

@elephantmipt commented Mar 9, 2024

From my perspective, there is a practical problem with Based/ReBased models in Hugging Face Transformers: anyone who wants to train or finetune a model needs the IO-aware Triton kernels, because the vanilla PyTorch implementation has a significant memory footprint. The flash_linear_attention library implements several such models, and in theory it could be included as an optional dependency, similar to flash_attention. Inference, however, is still possible without any custom kernels.
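
To make the memory argument concrete, here is a hedged sketch (illustrative code, not taken from either linked repo) of the two vanilla PyTorch formulations of causal linear attention. The parallel form materializes an O(n²) score matrix during training, which is exactly what IO-aware fused kernels avoid; the recurrent form carries only a constant-size state and is workable for kernel-free inference:

```python
import torch

def parallel_linear_attention(q, k, v, feature_map):
    """Parallel (training) form: materializes the full (seq, seq) score
    matrix, so activation memory grows quadratically with sequence length."""
    phi_q, phi_k = feature_map(q), feature_map(k)          # (batch, seq, feat)
    scores = torch.tril(phi_q @ phi_k.transpose(-2, -1))   # causal (batch, seq, seq)
    denom = scores.sum(dim=-1, keepdim=True).clamp(min=1e-6)
    return (scores @ v) / denom

def recurrent_linear_attention(q, k, v, feature_map):
    """Recurrent (inference) form: carries an O(feat * d_v) state instead
    of an O(seq^2) matrix, so it runs without custom kernels, one step
    at a time."""
    phi_q, phi_k = feature_map(q), feature_map(k)          # (batch, seq, feat)
    batch, seq, feat = phi_q.shape
    state = torch.zeros(batch, feat, v.shape[-1], dtype=v.dtype, device=v.device)
    norm = torch.zeros(batch, feat, dtype=v.dtype, device=v.device)
    outs = []
    for t in range(seq):
        # accumulate the running key-value outer product and key sum
        state = state + phi_k[:, t].unsqueeze(-1) * v[:, t].unsqueeze(1)
        norm = norm + phi_k[:, t]
        num = (phi_q[:, t].unsqueeze(1) @ state).squeeze(1)            # (batch, d_v)
        den = (phi_q[:, t] * norm).sum(-1, keepdim=True).clamp(min=1e-6)
        outs.append(num / den)
    return torch.stack(outs, dim=1)
```

Backpropagating through the Python loop in the recurrent form is what blows up time and memory at training scale, hence the suggestion to lean on flash_linear_attention's fused kernels for training while keeping a kernel-free path for inference.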
