# Audio transcription support #781
@@ -0,0 +1,32 @@

## Overview

[Audio Transcription Drivers](../../reference/griptape/drivers/audio_transcription/index.md) extract text from spoken audio.

An Audio Transcription Driver acts as a bridge between Audio Transcription Engines and the underlying models: it constructs and executes the API calls that turn speech into editable, searchable text. In applications that accept spoken input, the Driver extracts and interprets speech and renders it as text that can be integrated into data systems and Workflows.

This capability improves accessibility and content discoverability, and automates tasks that traditionally relied on manual transcription.
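For orientation, here is a minimal sketch of using these pieces programmatically, without an Agent. The file path is illustrative, and constructing an `AudioArtifact` directly from raw bytes with a kw-only `format` is an assumption based on how the Driver code in this PR consumes it (`audio.value` and `audio.format`):

```python
from griptape.artifacts import AudioArtifact
from griptape.drivers import OpenAiAudioTranscriptionDriver
from griptape.engines import AudioTranscriptionEngine

# The Engine wraps a Driver; the Driver builds and executes the API call.
driver = OpenAiAudioTranscriptionDriver(model="whisper-1")
engine = AudioTranscriptionEngine(audio_transcription_driver=driver)

# Read an audio file into an AudioArtifact (path is illustrative).
with open("tests/resources/sentences.wav", "rb") as f:
    audio = AudioArtifact(f.read(), format="wav")

print(engine.run(audio).value)  # the transcribed text
```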
### OpenAI

The [OpenAI Audio Transcription Driver](../../reference/griptape/drivers/audio_transcription/openai_audio_transcription_driver.md) uses OpenAI's `whisper` model to transcribe spoken audio into text. The model supports transcription in multiple languages and across a wide range of dialects.

```python
from griptape.drivers import OpenAiAudioTranscriptionDriver
from griptape.engines import AudioTranscriptionEngine
from griptape.tools.audio_transcription_client.tool import AudioTranscriptionClient
from griptape.structures import Agent


driver = OpenAiAudioTranscriptionDriver(
    model="whisper-1"
)

tool = AudioTranscriptionClient(
    off_prompt=False,
    engine=AudioTranscriptionEngine(
        audio_transcription_driver=driver,
    ),
)

Agent(tools=[tool]).run("Transcribe the following audio file: tests/resources/sentences.wav")
```
@@ -0,0 +1,24 @@

# AudioTranscriptionClient

This Tool enables [Agents](../../griptape-framework/structures/agents.md) to transcribe text from spoken audio using [Audio Transcription Engines](../../reference/griptape/engines/audio/audio_transcription_engine.md) and [Audio Transcription Drivers](../../reference/griptape/drivers/audio_transcription/index.md).

```python
from griptape.drivers import OpenAiAudioTranscriptionDriver
from griptape.engines import AudioTranscriptionEngine
from griptape.tools.audio_transcription_client.tool import AudioTranscriptionClient
from griptape.structures import Agent


driver = OpenAiAudioTranscriptionDriver(
    model="whisper-1"
)

tool = AudioTranscriptionClient(
    off_prompt=False,
    engine=AudioTranscriptionEngine(
        audio_transcription_driver=driver,
    ),
)

Agent(tools=[tool]).run("Transcribe the following audio file: tests/resources/sentences2.wav")
```
@@ -0,0 +1,42 @@

```python
from __future__ import annotations

from abc import ABC, abstractmethod
from typing import TYPE_CHECKING, Optional

from attrs import define, field

from griptape.artifacts import TextArtifact, AudioArtifact
from griptape.events import StartAudioTranscriptionEvent, FinishAudioTranscriptionEvent
from griptape.mixins import ExponentialBackoffMixin, SerializableMixin

if TYPE_CHECKING:
    from griptape.structures import Structure


@define
class BaseAudioTranscriptionDriver(SerializableMixin, ExponentialBackoffMixin, ABC):
    model: str = field(kw_only=True, metadata={"serializable": True})
    structure: Optional[Structure] = field(default=None, kw_only=True)

    def before_run(self) -> None:
        if self.structure:
            self.structure.publish_event(StartAudioTranscriptionEvent())

    def after_run(self) -> None:
        if self.structure:
            self.structure.publish_event(FinishAudioTranscriptionEvent())

    def run(self, audio: AudioArtifact, prompts: Optional[list[str]] = None) -> TextArtifact:
        # Retry try_run() with exponential backoff, publishing start/finish
        # events around each attempt.
        for attempt in self.retrying():
            with attempt:
                self.before_run()
                result = self.try_run(audio, prompts)
                self.after_run()

                return result
        else:
            raise Exception("Failed to run audio transcription")

    @abstractmethod
    def try_run(self, audio: AudioArtifact, prompts: Optional[list[str]] = None) -> TextArtifact: ...
```
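As a hedged illustration of the contract this base class defines, here is a minimal sketch of a custom subclass. `CallableTranscriptionDriver` and its `transcribe_fn` parameter are hypothetical stand-ins, not part of this PR:

```python
from __future__ import annotations

from typing import Callable, Optional

from attrs import define, field

from griptape.artifacts import AudioArtifact, TextArtifact
from griptape.drivers import BaseAudioTranscriptionDriver


@define
class CallableTranscriptionDriver(BaseAudioTranscriptionDriver):
    """Hypothetical Driver that delegates transcription to a user-supplied function."""

    # Maps raw audio bytes to transcribed text, e.g. via a local model.
    transcribe_fn: Callable[[bytes], str] = field(kw_only=True)

    def try_run(self, audio: AudioArtifact, prompts: Optional[list[str]] = None) -> TextArtifact:
        # Only try_run() needs to be implemented; run() on the base class
        # supplies the retry logic and start/finish events.
        return TextArtifact(value=self.transcribe_fn(audio.value))


# Usage (model is a required kw-only field inherited from the base class):
# driver = CallableTranscriptionDriver(model="local-whisper", transcribe_fn=my_fn)
```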
@@ -0,0 +1,14 @@

```python
from typing import Optional

from attrs import define, field

from griptape.artifacts import AudioArtifact, TextArtifact
from griptape.drivers import BaseAudioTranscriptionDriver
from griptape.exceptions import DummyException


@define
class DummyAudioTranscriptionDriver(BaseAudioTranscriptionDriver):
    """Stand-in Driver that raises when used, signaling that no real Driver was configured."""

    model: str = field(init=False)

    def try_run(self, audio: AudioArtifact, prompts: Optional[list] = None) -> TextArtifact:
        raise DummyException(__class__.__name__, "try_transcription")
```
@@ -0,0 +1,43 @@

```python
from __future__ import annotations

import io
from typing import Optional

import openai
from attrs import Factory, define, field

from griptape.artifacts import AudioArtifact, TextArtifact
from griptape.drivers import BaseAudioTranscriptionDriver


@define
class OpenAiAudioTranscriptionDriver(BaseAudioTranscriptionDriver):
    api_type: str = field(default=openai.api_type, kw_only=True)
    api_version: Optional[str] = field(default=openai.api_version, kw_only=True, metadata={"serializable": True})
    base_url: Optional[str] = field(default=None, kw_only=True, metadata={"serializable": True})
    api_key: Optional[str] = field(default=None, kw_only=True, metadata={"serializable": False})
    organization: Optional[str] = field(default=openai.organization, kw_only=True, metadata={"serializable": True})
    client: openai.OpenAI = field(
        default=Factory(
            lambda self: openai.OpenAI(api_key=self.api_key, base_url=self.base_url, organization=self.organization),
            takes_self=True,
        )
    )

    def try_run(self, audio: AudioArtifact, prompts: Optional[list[str]] = None) -> TextArtifact:
        additional_params = {}

        if prompts is not None:
            additional_params["prompt"] = ", ".join(prompts)

        transcription = self.client.audio.transcriptions.create(
            # Even though we're not actually providing a file to the client, the API still requires that we send a
            # file name. We set the file name to use the same format as the audio file so that the API can reject
            # it if the format is unsupported.
            model=self.model,
            file=(f"a.{audio.format}", io.BytesIO(audio.value)),
            response_format="json",
            **additional_params,
        )

        return TextArtifact(value=transcription.text)
```
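A direct usage sketch of this Driver, assuming a WAV file at an illustrative path and an `OPENAI_API_KEY` in the environment (the default `api_key=None` lets the OpenAI client fall back to it). The optional `prompts` are joined into Whisper's `prompt` parameter, as `try_run()` shows above:

```python
from griptape.artifacts import AudioArtifact
from griptape.drivers import OpenAiAudioTranscriptionDriver

driver = OpenAiAudioTranscriptionDriver(model="whisper-1")

with open("tests/resources/sentences.wav", "rb") as f:
    # `format` becomes the fake file extension sent to the API, so it must
    # match the actual audio encoding.
    audio = AudioArtifact(f.read(), format="wav")

# run() wraps try_run() with retries and events; prompts bias the model
# toward expected vocabulary.
text = driver.run(audio, prompts=["Griptape", "Workflow"])
print(text.value)
```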
**Review comment:** Outside the scope of this PR, but I'm starting to doubt our pattern of creating Engines that are just thin wrappers over Drivers. Engines should augment one or more Drivers for a specific use case that a single Driver cannot handle on its own. What do you think?

**Reply:** I strongly agree; this layer is way too thin, but I also did not want to create a weird single audio-to-text modality that breaks our established pattern. Optimistically, Engines are a layer where we can add additional functionality. For now, they're mostly an additional initialization step.
@@ -0,0 +1,12 @@

```python
from attrs import define, field

from griptape.artifacts import AudioArtifact, TextArtifact
from griptape.drivers import BaseAudioTranscriptionDriver


@define
class AudioTranscriptionEngine:
    audio_transcription_driver: BaseAudioTranscriptionDriver = field(kw_only=True)

    def run(self, audio: AudioArtifact, *args, **kwargs) -> TextArtifact:
        # Delegates directly to the Driver's try_run(), so the retry/event
        # wrapping in BaseAudioTranscriptionDriver.run() is not applied here.
        return self.audio_transcription_driver.try_run(audio)
```
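To make the "thin wrapper" discussion above concrete, the Engine currently adds only a construction step over the Driver. The following equivalence is a sketch, assuming the classes in this PR:

```python
from griptape.drivers import OpenAiAudioTranscriptionDriver
from griptape.engines import AudioTranscriptionEngine

driver = OpenAiAudioTranscriptionDriver(model="whisper-1")
engine = AudioTranscriptionEngine(audio_transcription_driver=driver)

# These two calls are currently equivalent: the Engine forwards straight to
# the Driver and, by calling try_run(), skips the Driver's retry logic.
# result = engine.run(audio)
# result = driver.try_run(audio)
```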
**Review comment:** Should we name these Drivers `BaseSpeechToTextDriver` for consistency with the inverse Drivers?

**Reply:** I think I've steered a bit too far in the direction of naming Drivers based on their artifact interfaces. I think the specificity of this name is helpful; what do you think about giving a similarly specific name to `BaseTextToSpeechDriver`? `BaseSpeechGenerationDriver`?

**Reply:** Yeah, thinking about it more, I think this current convention is the more "correct" one. Down to do a rename of `BaseTextToSpeechDriver` (though maybe in a separate PR); what do you think about `BaseAudioGenerationDriver`? This may extend beyond speech in the future.