📝 docs: update readme #13

Merged · 3 commits · Nov 18, 2023
107 changes: 78 additions & 29 deletions README.md
@@ -6,7 +6,7 @@

<h1>Lobe TTS</h1>

A high-quality & reliable TTS React Hooks library
A high-quality & reliable TTS library

[![][npm-release-shield]][npm-release-link]
[![][github-releasedate-shield]][github-releasedate-link]
@@ -29,11 +29,9 @@ A high-quality & reliable TTS React Hooks library

#### TOC

- [📦 Usage](#-usage)
- [📦 Installation](#-installation)
- [Compile with Next.js](#compile-with-nextjs)
- [🛳 Self Hosting](#-self-hosting)
- [Deploy to Vercel](#deploy-to-vercel)
- [Environment Variable](#environment-variable)
- [⌨️ Local Development](#️-local-development)
- [🤝 Contributing](#-contributing)
- [🔗 More Products](#-more-products)
@@ -42,13 +40,89 @@ A high-quality & reliable TTS React Hooks library

</details>

## 📦 Usage

### Generate Speech on the Server

Run the script below with Bun: `bun index.js`

```js
// index.js
import { EdgeSpeechTTS } from '@lobehub/tts';
import { Buffer } from 'buffer';
import fs from 'fs';
import path from 'path';

// Instantiate EdgeSpeechTTS
const tts = new EdgeSpeechTTS({ locale: 'en-US' });

// Create speech synthesis request payload
const payload = {
  input: 'This is a speech demonstration',
  options: {
    voice: 'en-US-GuyNeural',
  },
};

// Call create method to synthesize speech
const response = await tts.create(payload);

// generate speech file
const mp3Buffer = Buffer.from(await response.arrayBuffer());
const speechFile = path.resolve('./speech.mp3');

fs.writeFileSync(speechFile, mp3Buffer);
```
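Running it writes the synthesized audio to `speech.mp3` in the current working directory.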


https://github.com/lobehub/lobe-tts/assets/28616219/3ab68c5a-2745-442e-8d66-ca410192ace1


> \[!IMPORTANT]\
> **Run on Node.js**
>
> Because the Node.js environment lacks a global `WebSocket` instance, you need to polyfill it. This can be done by importing the `ws` package.

```js
// import at the top of the file
import WebSocket from 'ws';

global.WebSocket = WebSocket;
```
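Note that `ws` is a separate dependency and has to be installed alongside `@lobehub/tts` (for example with `pnpm i ws`); any polyfill that matches the browser `WebSocket` API should work just as well.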

### Use the React Component

```tsx
import { AudioPlayer, AudioVisualizer, useAudioPlayer } from '@lobehub/tts/react';
// Flexbox is the flex container used across lobehub projects; any layout wrapper works here
import { Flexbox } from 'react-layout-kit';

export default ({ url }: { url: string }) => {
  // `url` points at the audio to play
  const { ref, isLoading, ...audio } = useAudioPlayer(url);

  return (
    <Flexbox align={'center'} gap={8}>
      <AudioPlayer audio={audio} isLoading={isLoading} style={{ width: '100%' }} />
      <AudioVisualizer audioRef={ref} isLoading={isLoading} />
    </Flexbox>
  );
};
```
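Here `url` is simply the address of the audio to play, for example the `speech.mp3` generated by the server-side example above, served from your app.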


https://github.com/lobehub/lobe-tts/assets/28616219/c2638383-314f-44c3-b358-8fbbd3028d61



## 📦 Installation

> \[!IMPORTANT]\
> This package is [ESM only](https://gist.github.com/sindresorhus/a39789f98801d908bbc7ff3ecc99d99c).

To install `@lobehub/tts`, run the following command:

```bash
$ pnpm i @lobehub/tts
```

[![][bun-shield]][bun-link]

@@ -72,31 +72,6 @@ const nextConfig = {

</div>
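The Next.js snippet above is folded in this diff view; as a rough sketch (an assumption, not the folded content itself), the usual approach is to ask Next.js to transpile the ESM-only package in `next.config.js`:

```js
// next.config.js (hypothetical sketch, not taken from the folded diff above)
const nextConfig = {
  // Ask Next.js to compile the ESM-only @lobehub/tts package during the build
  transpilePackages: ['@lobehub/tts'],
};

module.exports = nextConfig;
```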

## 🛳 Self Hosting

If you want to deploy this service by yourself, you can follow the steps below.

### Deploy to Vercel

Click the button below to deploy your own private TTS service.

[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Flobehub%2Flobe-tts&project-name=lobe-tts&repository-name=lobe-tts)

### Environment Variable

This project provides some additional configuration items, set via environment variables (a sample shell setup follows the table):

| Environment Variable | Description | Default |
| -------------------- | ----------- | ------- |
| `OPENAI_API_KEY` | The API key obtained from your OpenAI account page | `sk-xxxxxx...xxxxxx` |
| `OPENAI_BASE_URL` | Overrides the default OpenAI API base URL, e.g. when requests are routed through a proxy | `https://api.openai.com/v1` |
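For a local run, these can be provided as ordinary shell variables; a minimal sketch, with a placeholder key:

```bash
# Placeholder values for illustration only
export OPENAI_API_KEY="sk-xxxxxx...xxxxxx"
# Only needed when routing requests through a proxy
export OPENAI_BASE_URL="https://api.openai.com/v1"
```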

<div align="right">

[![][back-to-top]](#readme-top)

</div>

## ⌨️ Local Development

You can use Github Codespaces for online development:
31 changes: 31 additions & 0 deletions examples/text-to-speech-on-server/EdgeSpeechTTS.ts
@@ -0,0 +1,31 @@
import { EdgeSpeechPayload, EdgeSpeechTTS } from '@/core';
import { Buffer } from 'node:buffer';
import fs from 'node:fs';
import path from 'node:path';

// Because the Node.js environment lacks a global `WebSocket` instance, it needs to be polyfilled
// import WebSocket from 'ws';
// global.WebSocket = WebSocket;

// Instantiate EdgeSpeechTTS
const tts = new EdgeSpeechTTS({ locale: 'zh-CN' });

// Create the speech synthesis request payload
const payload: EdgeSpeechPayload = {
  input: '这是一段语音演示',
  options: {
    voice: 'zh-CN-XiaoxiaoNeural',
  },
};

const speechFile = path.resolve('./speech.mp3');

// Call the create method to synthesize speech
async function main() {
  const response = await tts.create(payload);
  const mp3Buffer = Buffer.from(await response.arrayBuffer());

  fs.writeFileSync(speechFile, mp3Buffer);
}

main();
32 changes: 32 additions & 0 deletions examples/text-to-speech-on-server/MicrosoftTTS.ts
@@ -0,0 +1,32 @@
import { MicrosoftSpeechPayload, MicrosoftSpeechTTS } from '@/core';
import { Buffer } from 'buffer';
import fs from 'fs';
import path from 'path';

// Because the Node.js environment lacks a global `WebSocket` instance, it needs to be polyfilled
// import WebSocket from 'ws';
// global.WebSocket = WebSocket;

// Instantiate MicrosoftSpeechTTS
const tts = new MicrosoftSpeechTTS({ locale: 'zh-CN' });

// Create the speech synthesis request payload
const payload: MicrosoftSpeechPayload = {
  input: '这是一段语音演示',
  options: {
    voice: 'yue-CN-XiaoMinNeural',
    style: 'embarrassed',
  },
};

const speechFile = path.resolve('./speech.mp3');

// Call the create method to synthesize speech
async function main() {
  const response = await tts.create(payload);
  const mp3Buffer = Buffer.from(await response.arrayBuffer());

  fs.writeFileSync(speechFile, mp3Buffer);
}

main();
28 changes: 28 additions & 0 deletions examples/text-to-speech-on-server/OpenAITTS.ts
@@ -0,0 +1,28 @@
import { OpenAITTS, OpenAITTSPayload } from '@/core';
import { Buffer } from 'node:buffer';
import fs from 'node:fs';
import path from 'node:path';

// Instantiate OpenAITTS
const tts = new OpenAITTS({ OPENAI_API_KEY: 'your-api-key' });

// Create the speech synthesis request payload
const payload: OpenAITTSPayload = {
  input: '今天是美好的一天',
  options: {
    model: 'tts-1',
    voice: 'alloy',
  },
};

const speechFile = path.resolve('./speech.mp3');

// Call the create method to synthesize speech
async function main() {
  const response = await tts.create(payload);
  const mp3Buffer = Buffer.from(await response.arrayBuffer());

  fs.writeFileSync(speechFile, mp3Buffer);
}

main();