feat(api): add new usage response fields (#332)
stainless-bot authored and RobertCraigie committed Feb 2, 2024
1 parent a4e904c commit 554098e
Showing 12 changed files with 178 additions and 51 deletions.
2 changes: 2 additions & 0 deletions api.md
@@ -30,12 +30,14 @@ from anthropic.types.beta import (
ContentBlockStopEvent,
Message,
MessageDeltaEvent,
MessageDeltaUsage,
MessageParam,
MessageStartEvent,
MessageStopEvent,
MessageStreamEvent,
TextBlock,
TextDelta,
Usage,
)
```

114 changes: 96 additions & 18 deletions src/anthropic/resources/beta/messages.py
@@ -64,7 +64,15 @@ def create(
"""
Create a Message.
The Messages API is currently in beta.
Send a structured list of input messages, and the model will generate the next
message in the conversation.
Messages can be used for either single queries to the model or for multi-turn
conversations.
The Messages API is currently in beta. During beta, you must send the
`anthropic-beta: messages-2023-12-15` header in your requests. If you are using
our client SDKs, this is handled for you automatically.
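The beta header requirement in the docstring above only matters for raw (non-SDK) HTTP calls. A minimal sketch of the headers such a request would need; the `anthropic-beta` value comes from the docstring, while the other header names and the version date are assumptions about the API's auth scheme, not taken from this diff:

```python
# Headers for a raw request to the Messages API during its beta period.
# "anthropic-beta: messages-2023-12-15" is the value quoted in the docstring;
# "x-api-key" and "anthropic-version" are assumed conventions for illustration.
def build_beta_headers(api_key: str) -> dict:
    return {
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",        # assumed API version date
        "anthropic-beta": "messages-2023-12-15",  # required during beta
        "content-type": "application/json",
    }
```

The client SDKs attach this header for you, so the sketch is only relevant when calling the endpoint directly.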
Args:
max_tokens: The maximum number of tokens to generate before stopping.
@@ -136,6 +144,11 @@ def create(
[guide to prompt design](https://docs.anthropic.com/claude/docs/introduction-to-prompt-design)
for more details on how to best construct prompts.
Note that if you want to include a
[system prompt](https://docs.anthropic.com/claude/docs/how-to-use-system-prompts),
you can use the top-level `system` parameter — there is no `"system"` role for
input messages in the Messages API.
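The note above can be sketched as a request payload: the system prompt rides in the top-level `system` field, and no message ever carries a `"system"` role. The model name and token limit here are placeholder assumptions for illustration:

```python
# Illustrative Messages API payload. Per the docstring, the system prompt is
# a top-level field -- there is no "system" role among the input messages.
def build_payload(system_prompt: str, user_text: str) -> dict:
    return {
        "model": "claude-2.1",  # placeholder model name
        "max_tokens": 1024,
        "system": system_prompt,
        "messages": [{"role": "user", "content": user_text}],
    }
```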
model: The model that will complete your prompt.
As we improve Claude, we develop new versions of it that you can query. The
@@ -161,8 +174,8 @@ def create(
stream: Whether to incrementally stream the response using server-sent events.
See [streaming](https://docs.anthropic.com/claude/reference/streaming) for
details.
See [streaming](https://docs.anthropic.com/claude/reference/messages-streaming)
for details.
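Server-sent events, the transport the `stream` flag selects, arrive as small text frames. A minimal parser for one frame, assuming the standard `event:`/`data:` SSE field layout; the event name mirrors `MessageStopEvent` from the api.md imports, and the sample frame itself is invented:

```python
# Parse a single server-sent event frame into (event_name, data_text).
# Assumes the standard SSE "event:" / "data:" line format; real streams
# carry JSON payloads matching the event types exported in api.md.
def parse_sse_frame(frame: str) -> tuple:
    event, data = "", ""
    for line in frame.splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data = line[len("data:"):].strip()
    return event, data

sample = 'event: message_stop\ndata: {"type": "message_stop"}\n'
```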
system: System prompt.
@@ -221,7 +234,15 @@ def create(
"""
Create a Message.
The Messages API is currently in beta.
Send a structured list of input messages, and the model will generate the next
message in the conversation.
Messages can be used for either single queries to the model or for multi-turn
conversations.
The Messages API is currently in beta. During beta, you must send the
`anthropic-beta: messages-2023-12-15` header in your requests. If you are using
our client SDKs, this is handled for you automatically.
Args:
max_tokens: The maximum number of tokens to generate before stopping.
@@ -293,6 +314,11 @@ def create(
[guide to prompt design](https://docs.anthropic.com/claude/docs/introduction-to-prompt-design)
for more details on how to best construct prompts.
Note that if you want to include a
[system prompt](https://docs.anthropic.com/claude/docs/how-to-use-system-prompts),
you can use the top-level `system` parameter — there is no `"system"` role for
input messages in the Messages API.
model: The model that will complete your prompt.
As we improve Claude, we develop new versions of it that you can query. The
@@ -306,8 +332,8 @@ def create(
stream: Whether to incrementally stream the response using server-sent events.
See [streaming](https://docs.anthropic.com/claude/reference/streaming) for
details.
See [streaming](https://docs.anthropic.com/claude/reference/messages-streaming)
for details.
metadata: An object describing metadata about the request.
@@ -378,7 +404,15 @@ def create(
"""
Create a Message.
The Messages API is currently in beta.
Send a structured list of input messages, and the model will generate the next
message in the conversation.
Messages can be used for either single queries to the model or for multi-turn
conversations.
The Messages API is currently in beta. During beta, you must send the
`anthropic-beta: messages-2023-12-15` header in your requests. If you are using
our client SDKs, this is handled for you automatically.
Args:
max_tokens: The maximum number of tokens to generate before stopping.
@@ -450,6 +484,11 @@ def create(
[guide to prompt design](https://docs.anthropic.com/claude/docs/introduction-to-prompt-design)
for more details on how to best construct prompts.
Note that if you want to include a
[system prompt](https://docs.anthropic.com/claude/docs/how-to-use-system-prompts),
you can use the top-level `system` parameter — there is no `"system"` role for
input messages in the Messages API.
model: The model that will complete your prompt.
As we improve Claude, we develop new versions of it that you can query. The
@@ -463,8 +502,8 @@ def create(
stream: Whether to incrementally stream the response using server-sent events.
See [streaming](https://docs.anthropic.com/claude/reference/streaming) for
details.
See [streaming](https://docs.anthropic.com/claude/reference/messages-streaming)
for details.
metadata: An object describing metadata about the request.
@@ -693,7 +732,15 @@ async def create(
"""
Create a Message.
The Messages API is currently in beta.
Send a structured list of input messages, and the model will generate the next
message in the conversation.
Messages can be used for either single queries to the model or for multi-turn
conversations.
The Messages API is currently in beta. During beta, you must send the
`anthropic-beta: messages-2023-12-15` header in your requests. If you are using
our client SDKs, this is handled for you automatically.
Args:
max_tokens: The maximum number of tokens to generate before stopping.
@@ -765,6 +812,11 @@ async def create(
[guide to prompt design](https://docs.anthropic.com/claude/docs/introduction-to-prompt-design)
for more details on how to best construct prompts.
Note that if you want to include a
[system prompt](https://docs.anthropic.com/claude/docs/how-to-use-system-prompts),
you can use the top-level `system` parameter — there is no `"system"` role for
input messages in the Messages API.
model: The model that will complete your prompt.
As we improve Claude, we develop new versions of it that you can query. The
@@ -790,8 +842,8 @@ async def create(
stream: Whether to incrementally stream the response using server-sent events.
See [streaming](https://docs.anthropic.com/claude/reference/streaming) for
details.
See [streaming](https://docs.anthropic.com/claude/reference/messages-streaming)
for details.
system: System prompt.
@@ -850,7 +902,15 @@ async def create(
"""
Create a Message.
The Messages API is currently in beta.
Send a structured list of input messages, and the model will generate the next
message in the conversation.
Messages can be used for either single queries to the model or for multi-turn
conversations.
The Messages API is currently in beta. During beta, you must send the
`anthropic-beta: messages-2023-12-15` header in your requests. If you are using
our client SDKs, this is handled for you automatically.
Args:
max_tokens: The maximum number of tokens to generate before stopping.
@@ -922,6 +982,11 @@ async def create(
[guide to prompt design](https://docs.anthropic.com/claude/docs/introduction-to-prompt-design)
for more details on how to best construct prompts.
Note that if you want to include a
[system prompt](https://docs.anthropic.com/claude/docs/how-to-use-system-prompts),
you can use the top-level `system` parameter — there is no `"system"` role for
input messages in the Messages API.
model: The model that will complete your prompt.
As we improve Claude, we develop new versions of it that you can query. The
@@ -935,8 +1000,8 @@ async def create(
stream: Whether to incrementally stream the response using server-sent events.
See [streaming](https://docs.anthropic.com/claude/reference/streaming) for
details.
See [streaming](https://docs.anthropic.com/claude/reference/messages-streaming)
for details.
metadata: An object describing metadata about the request.
@@ -1007,7 +1072,15 @@ async def create(
"""
Create a Message.
The Messages API is currently in beta.
Send a structured list of input messages, and the model will generate the next
message in the conversation.
Messages can be used for either single queries to the model or for multi-turn
conversations.
The Messages API is currently in beta. During beta, you must send the
`anthropic-beta: messages-2023-12-15` header in your requests. If you are using
our client SDKs, this is handled for you automatically.
Args:
max_tokens: The maximum number of tokens to generate before stopping.
@@ -1079,6 +1152,11 @@ async def create(
[guide to prompt design](https://docs.anthropic.com/claude/docs/introduction-to-prompt-design)
for more details on how to best construct prompts.
Note that if you want to include a
[system prompt](https://docs.anthropic.com/claude/docs/how-to-use-system-prompts),
you can use the top-level `system` parameter — there is no `"system"` role for
input messages in the Messages API.
model: The model that will complete your prompt.
As we improve Claude, we develop new versions of it that you can query. The
@@ -1092,8 +1170,8 @@ async def create(
stream: Whether to incrementally stream the response using server-sent events.
See [streaming](https://docs.anthropic.com/claude/reference/streaming) for
details.
See [streaming](https://docs.anthropic.com/claude/reference/messages-streaming)
for details.
metadata: An object describing metadata about the request.
42 changes: 24 additions & 18 deletions src/anthropic/resources/completions.py
@@ -52,7 +52,7 @@ def create(
timeout: float | httpx.Timeout | None | NotGiven = 600,
) -> Completion:
"""
Create a Completion
Create a Text Completion
Args:
max_tokens_to_sample: The maximum number of tokens to generate before stopping.
@@ -96,8 +96,9 @@ def create(
stream: Whether to incrementally stream the response using server-sent events.
See [streaming](https://docs.anthropic.com/claude/reference/streaming) for
details.
See
[streaming](https://docs.anthropic.com/claude/reference/text-completions-streaming)
for details.
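Unlike the Messages API, Text Completions takes a single prompt string rather than a message list. A sketch of the conventional Human/Assistant framing Anthropic documents for Claude text-completion prompts:

```python
# Build a Text Completions prompt using the documented Human/Assistant
# convention: the turn markers are literal text inside one prompt string.
def format_prompt(user_text: str) -> str:
    return f"\n\nHuman: {user_text}\n\nAssistant:"
```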
temperature: Amount of randomness injected into the response.
@@ -147,7 +148,7 @@ def create(
timeout: float | httpx.Timeout | None | NotGiven = 600,
) -> Stream[Completion]:
"""
Create a Completion
Create a Text Completion
Args:
max_tokens_to_sample: The maximum number of tokens to generate before stopping.
@@ -183,8 +184,9 @@ def create(
stream: Whether to incrementally stream the response using server-sent events.
See [streaming](https://docs.anthropic.com/claude/reference/streaming) for
details.
See
[streaming](https://docs.anthropic.com/claude/reference/text-completions-streaming)
for details.
metadata: An object describing metadata about the request.
@@ -242,7 +244,7 @@ def create(
timeout: float | httpx.Timeout | None | NotGiven = 600,
) -> Completion | Stream[Completion]:
"""
Create a Completion
Create a Text Completion
Args:
max_tokens_to_sample: The maximum number of tokens to generate before stopping.
@@ -278,8 +280,9 @@ def create(
stream: Whether to incrementally stream the response using server-sent events.
See [streaming](https://docs.anthropic.com/claude/reference/streaming) for
details.
See
[streaming](https://docs.anthropic.com/claude/reference/text-completions-streaming)
for details.
metadata: An object describing metadata about the request.
@@ -391,7 +394,7 @@ async def create(
timeout: float | httpx.Timeout | None | NotGiven = 600,
) -> Completion:
"""
Create a Completion
Create a Text Completion
Args:
max_tokens_to_sample: The maximum number of tokens to generate before stopping.
@@ -435,8 +438,9 @@ async def create(
stream: Whether to incrementally stream the response using server-sent events.
See [streaming](https://docs.anthropic.com/claude/reference/streaming) for
details.
See
[streaming](https://docs.anthropic.com/claude/reference/text-completions-streaming)
for details.
temperature: Amount of randomness injected into the response.
@@ -486,7 +490,7 @@ async def create(
timeout: float | httpx.Timeout | None | NotGiven = 600,
) -> AsyncStream[Completion]:
"""
Create a Completion
Create a Text Completion
Args:
max_tokens_to_sample: The maximum number of tokens to generate before stopping.
@@ -522,8 +526,9 @@ async def create(
stream: Whether to incrementally stream the response using server-sent events.
See [streaming](https://docs.anthropic.com/claude/reference/streaming) for
details.
See
[streaming](https://docs.anthropic.com/claude/reference/text-completions-streaming)
for details.
metadata: An object describing metadata about the request.
@@ -581,7 +586,7 @@ async def create(
timeout: float | httpx.Timeout | None | NotGiven = 600,
) -> Completion | AsyncStream[Completion]:
"""
Create a Completion
Create a Text Completion
Args:
max_tokens_to_sample: The maximum number of tokens to generate before stopping.
@@ -617,8 +622,9 @@ async def create(
stream: Whether to incrementally stream the response using server-sent events.
See [streaming](https://docs.anthropic.com/claude/reference/streaming) for
details.
See
[streaming](https://docs.anthropic.com/claude/reference/text-completions-streaming)
for details.
metadata: An object describing metadata about the request.
2 changes: 2 additions & 0 deletions src/anthropic/types/beta/__init__.py
@@ -2,13 +2,15 @@

from __future__ import annotations

from .usage import Usage as Usage
from .message import Message as Message
from .text_delta import TextDelta as TextDelta
from .content_block import ContentBlock as ContentBlock
from .message_param import MessageParam as MessageParam
from .text_block_param import TextBlockParam as TextBlockParam
from .message_stop_event import MessageStopEvent as MessageStopEvent
from .message_delta_event import MessageDeltaEvent as MessageDeltaEvent
from .message_delta_usage import MessageDeltaUsage as MessageDeltaUsage
from .message_start_event import MessageStartEvent as MessageStartEvent
from .message_stream_event import MessageStreamEvent as MessageStreamEvent
from .message_create_params import MessageCreateParams as MessageCreateParams
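The two new exports above, `Usage` and `MessageDeltaUsage`, are the "usage response fields" this commit adds. A local sketch of how they might relate during streaming; the field names (`input_tokens`, `output_tokens`) and the assumption that delta events carry an updated cumulative output count are inferences from the commit title, not definitions taken from this diff:

```python
from dataclasses import dataclass

# Local stand-ins mirroring the newly exported types; the real definitions
# live in anthropic.types.beta. Field names here are assumptions.
@dataclass
class Usage:
    input_tokens: int
    output_tokens: int

@dataclass
class MessageDeltaUsage:
    output_tokens: int

def apply_delta(usage: Usage, delta: MessageDeltaUsage) -> Usage:
    # Assume each delta reports the latest cumulative output token count,
    # so we replace rather than add.
    return Usage(usage.input_tokens, delta.output_tokens)
```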
