v1.1.4
Added
- The maximum token count from the config is now split into input and output tokens, so models with smaller context windows (e.g. Mistral) no longer run into an error.
Fixed
- The version number is saved correctly again and displayed in the settings.
- `max_tokens` and other parameters are now set correctly for non-default LLMs.