![KrillinAI](/krillinai/KrillinAI/raw/master/docs/images/logo.png)
Based on large language models (LLMs), it delivers professional-level subtitle translation, generates both portrait and landscape video formats, and supports one-click deployment.
## 🚀 Project Overview
Krillin AI is a one-stop solution designed for users and developers seeking high-quality video processing. It provides an end-to-end workflow, from video download to the final product, ensuring every frame of your content is extraordinary.
🎯 One-Click Start: There is no need for complicated environment configuration. Krillin AI supports automatic installation of dependencies, enabling you to quickly get started and put it into use immediately.
📥 Video Acquisition: Integrated with yt-dlp, it can directly download videos via YouTube and Bilibili links, simplifying the process of material collection. You can also directly upload local videos.
📜 Subtitle Recognition and Translation: It supports voice and large model services of mainstream providers such as OpenAI and Alibaba Cloud, as well as local models (continuous integration in progress).
🧠 Intelligent Subtitle Segmentation and Alignment: Utilize self-developed algorithms to conduct intelligent segmentation and alignment of subtitles, getting rid of rigid sentence breaks.
🔄 Custom Vocabulary Replacement: Support one-click replacement of vocabulary to adapt to the language style of specific fields.
🌍 Professional Translation: The whole-paragraph translation engine ensures the consistency of context and semantic coherence.
🎙️ Dubbing and Voice Cloning: You can choose the default male or female voice tones to generate video reading dubbing for the translated content, or upload local audio samples to clone voice tones for dubbing.
📝 Dubbing Alignment: It can perform cross-language dubbing and also align with the original subtitles.
🎬 Video Composition: With one click, compose horizontal and vertical videos with embedded subtitles. Subtitles that exceed the width limit will be processed automatically.
- Input languages: Chinese, English, Japanese, German, and Turkish are supported (more languages are being added)
- Translation languages: 56 languages are supported, including English, Chinese, Russian, Spanish, French, etc.
The following picture shows a subtitle file generated in one click from an imported 46-minute local video and then inserted into the editing track, with no manual adjustment at all. There are no missing or overlapping subtitles, the sentence segmentation is natural, and the translation quality is high.
Demos: subtitle_translation.mp4 | tts.mp4
1. Download the executable file that matches your operating system from the release page and place it in an empty folder.
2. Create a `config` folder inside that folder, then create a `config.toml` file in the `config` folder. Copy the content of the `config-example.toml` file from the source code's `config` directory into `config.toml` and fill in your configuration information.
3. Double-click the executable file to start the service.
4. Open a browser and visit `http://127.0.0.1:8888` to start using it (replace 8888 with the port you configured in `config.toml`).
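After these steps, the folder should look roughly like this (using the macOS arm64 binary name from the section below as an example; the actual file name varies by platform and release):

```
KrillinAI/                        <- any empty folder
├── KrillinAI_1.0.0_macOS_arm64   <- the downloaded executable
└── config/
    └── config.toml               <- copied from config-example.toml, then edited
```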
This software is not signed, so after completing the file configuration in the "Basic Steps," you will need to manually trust the application on macOS. Follow these steps:
1. Open the terminal and navigate to the directory containing the executable file (assuming the file name is `KrillinAI_1.0.0_macOS_arm64`).
2. Execute the following commands in sequence:

```shell
sudo xattr -rd com.apple.quarantine ./KrillinAI_1.0.0_macOS_arm64
sudo chmod +x ./KrillinAI_1.0.0_macOS_arm64
./KrillinAI_1.0.0_macOS_arm64
```

This will start the service.
This project supports Docker deployment. Please refer to the Docker Deployment Instructions.
If you encounter video download failures, please refer to the Cookie Configuration Instructions to configure your cookie information.
The quickest and most convenient configuration method:

- Set both `transcription_provider` and `llm_provider` to `openai`. This way, among the three major configuration blocks (`openai`, `local_model`, and `aliyun`), you only need to fill in `openai.apikey` to start translating subtitles. (Fill in `app.proxy`, `model`, and `openai.base_url` according to your own situation.)
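As a rough illustration of that minimal setup (the key names come from the text above, but which section each key lives under, and the sample values, are assumptions; verify against `config-example.toml`):

```toml
[app]
transcription_provider = "openai"
llm_provider = "openai"
# proxy = "http://127.0.0.1:7890"             # app.proxy, optional; set if you need one

[openai]
apikey = "sk-..."                             # required
# base_url = "https://your-proxy.example/v1"  # optional, for OpenAI-compatible endpoints
# model = "gpt-4o"                            # optional; sample value, check config-example.toml
```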
The configuration method for using a local speech recognition model (not supported on macOS for now), a choice that balances cost, speed, and quality:

- Set `transcription_provider` to `fasterwhisper` and `llm_provider` to `openai`. This way, you only need to fill in `openai.apikey` and `local_model.faster_whisper` in the `openai` and `local_model` configuration blocks to start translating subtitles. The local model will be downloaded automatically. (The notes above about `app.proxy` and `openai.base_url` apply here as well.)
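A sketch of that variant, under the same caveats (the section layout and the model-size value are assumptions; verify against `config-example.toml`):

```toml
[app]
transcription_provider = "fasterwhisper"
llm_provider = "openai"

[openai]
apikey = "sk-..."            # still needed for the translation step

[local_model]
faster_whisper = "large-v2"  # sample model-size value; the model is downloaded automatically
```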
The following usage situations require Alibaba Cloud configuration:

- If `llm_provider` is set to `aliyun`, the Alibaba Cloud large model service will be used, so the `aliyun.bailian` item must be configured.
- If `transcription_provider` is set to `aliyun`, or if the "voice dubbing" function is enabled when starting a task, the Alibaba Cloud voice service will be used, so the `aliyun.speech` item must be configured.
- If the "voice dubbing" function is enabled and local audio files are uploaded for voice timbre cloning at the same time, the Alibaba Cloud OSS cloud storage service will also be used, so the `aliyun.oss` item must be configured.

Configuration guide: Alibaba Cloud Configuration Instructions
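The three Alibaba Cloud blocks map to features roughly as sketched below. The field names inside each block are provider-specific and are deliberately left out here; treat this as a skeleton and fill it in per the Alibaba Cloud Configuration Instructions:

```toml
[aliyun.bailian]   # needed when llm_provider = "aliyun"
# ...credentials per the Alibaba Cloud Configuration Instructions

[aliyun.speech]    # needed when transcription_provider = "aliyun" or voice dubbing is enabled
# ...

[aliyun.oss]       # needed when dubbing with uploaded audio samples for voice cloning
# ...
```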
Please refer to the Frequently Asked Questions.
- Do not submit unnecessary files like `.vscode`, `.idea`, etc.; make good use of `.gitignore` to filter them out.
- Do not submit `config.toml`; submit `config-example.toml` instead.