diff --git a/fern/definition/gdpr-privacy.yml b/fern/definition/gdpr-privacy.yml
index 38e77ff..909ed4d 100644
--- a/fern/definition/gdpr-privacy.yml
+++ b/fern/definition/gdpr-privacy.yml
@@ -22,7 +22,7 @@ service:
docs: |
By default, all prompts and responses are logged.
- If you’ve disabled this behavior by following [this guide](/open-ll-metry/privacy/prompts-completions-and-embeddings), and then selectively enabled it for some of your users, then you can use this API to view which users you’ve enabled.
+ If you’ve disabled this behavior by following [this guide](/docs/openllmetry/privacy/prompts-completions-and-embeddings), and then selectively enabled it for some of your users, then you can use this API to view which users you’ve enabled.
path: "/data-deletion"
request:
name: GetDeletionStatusRequest
diff --git a/fern/definition/tracing.yml b/fern/definition/tracing.yml
index 3076ec1..93b0333 100644
--- a/fern/definition/tracing.yml
+++ b/fern/definition/tracing.yml
@@ -9,7 +9,7 @@ service:
docs: |
By default, all prompts and responses are logged.
- If you want to disable this behavior by following [this guide](/open-ll-metry/privacy/prompts-completions-and-embeddings), you can selectively enable it for some of your users with this API.
+ If you want to disable this behavior by following [this guide](/docs/openllmetry/privacy/prompts-completions-and-embeddings), you can selectively enable it for some of your users with this API.
path: "/tracing-allow-list"
method: POST
request: EnableLogging
@@ -20,7 +20,7 @@ service:
docs: |
By default, all prompts and responses are logged.
- If you’ve disabled this behavior by following [this guide](/open-ll-metry/privacy/prompts-completions-and-embeddings), and then selectively enabled it for some of your users, then you can use this API to view which users you’ve enabled.
+ If you’ve disabled this behavior by following [this guide](/docs/openllmetry/privacy/prompts-completions-and-embeddings), and then selectively enabled it for some of your users, then you can use this API to view which users you’ve enabled.
path: "/tracing-allow-list"
method: GET
response: GetIdentifiersResponse
@@ -32,7 +32,7 @@ service:
docs: |
By default, all prompts and responses are logged.
- If you’ve disabled this behavior by following [this guide](/open-ll-metry/privacy/prompts-completions-and-embeddings), and then selectively enabled it for some of your users, then you can use this API to disable it for previously enabled ones.
+ If you’ve disabled this behavior by following [this guide](/docs/openllmetry/privacy/prompts-completions-and-embeddings), and then selectively enabled it for some of your users, then you can use this API to disable it for previously enabled ones.
path: "/tracing-allow-list"
method: DELETE
request: DisableLogging
diff --git a/fern/docs.yml b/fern/docs.yml
index 917358a..5bd2069 100644
--- a/fern/docs.yml
+++ b/fern/docs.yml
@@ -1,6 +1,6 @@
instances:
- url: traceloop.docs.buildwithfern.com/docs
- custom-domain: traceloop.com/docs
+ custom-domain: fern.traceloop.com/docs
title: Traceloop | Docs
@@ -15,7 +15,7 @@ tabs:
api:
display-name: Dashboard API
icon: "fa-duotone fa-webhook"
-
+ slug: dashboard-api
navigation:
- tab: docs
layout:
diff --git a/fern/pages/documentation/learn/intro.mdx b/fern/pages/documentation/learn/intro.mdx
index c2d9930..f789db9 100644
--- a/fern/pages/documentation/learn/intro.mdx
+++ b/fern/pages/documentation/learn/intro.mdx
@@ -19,28 +19,28 @@ Traceloop natively plugs into OpenLLMetry SDK. To get started, pick the language
Available
Available
Beta
Beta
diff --git a/fern/pages/documentation/prompt-management/fetching-prompts.mdx b/fern/pages/documentation/prompt-management/fetching-prompts.mdx
index d1fc6c5..cfa3b19 100644
--- a/fern/pages/documentation/prompt-management/fetching-prompts.mdx
+++ b/fern/pages/documentation/prompt-management/fetching-prompts.mdx
@@ -10,7 +10,7 @@ The default poll interval is 60 seconds but can be configured with the `TRACELOO
To disable polling altogether, set the `TRACELOOP_SYNC_ENABLED` environment variable to `false` (it's enabled by default).
-Make sure you’ve configured the SDK with the right environment and API Key. See the [SDK documentation](/open-ll-metry/integrations/traceloop) for more information.
+Make sure you’ve configured the SDK with the right environment and API Key. See the [SDK documentation](/docs/openllmetry/integrations/traceloop) for more information.
The SDK uses smart caching mechanisms to provide zero latency for fetching prompts.
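A minimal Python sketch of the polling switch described above, assuming the environment variable is read before the SDK initializes:

```python
import os

# Turn off background prompt syncing before the SDK starts up.
os.environ["TRACELOOP_SYNC_ENABLED"] = "false"

from traceloop.sdk import Traceloop

Traceloop.init()
```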
diff --git a/fern/pages/documentation/prompt-management/prompt-registry.mdx b/fern/pages/documentation/prompt-management/prompt-registry.mdx
index 6e89c39..e8894e2 100644
--- a/fern/pages/documentation/prompt-management/prompt-registry.mdx
+++ b/fern/pages/documentation/prompt-management/prompt-registry.mdx
@@ -53,7 +53,7 @@ Here, you can see all recent prompt versions, and which environments they are de
As a safeguard, you cannot deploy a prompt to the `Staging` environment before first deploying it to `Development`. Similarly, you cannot deploy to `Production` without first deploying to `Staging`.
-To fetch prompts from a specific environment, you must supply that environment’s API key to the Traceloop SDK. See the [SDK Configuration](/open-ll-metry/integrations/traceloop) for details
+To fetch prompts from a specific environment, you must supply that environment’s API key to the Traceloop SDK. See the [SDK Configuration](/docs/openllmetry/integrations/traceloop) for details.
## Prompt Versions
diff --git a/fern/pages/documentation/prompt-management/quickstart.mdx b/fern/pages/documentation/prompt-management/quickstart.mdx
index 1b82ece..178bf65 100644
--- a/fern/pages/documentation/prompt-management/quickstart.mdx
+++ b/fern/pages/documentation/prompt-management/quickstart.mdx
@@ -1,11 +1,11 @@
-
+
![prompt-management-quickstart](https://fern-image-hosting.s3.amazonaws.com/traceloop/prompt-management-quickstart.png)
-
+
You can use Traceloop to manage your prompts and model configurations. That way you can easily experiment with different prompts and roll out changes gradually and safely.
-Make sure you’ve created an API key and set it as an environment variable `TRACELOOP_API_KEY` before you start. Check out the SDK’s [getting started guide](/open-ll-metry/quick-start/python) for more information.
+Make sure you’ve created an API key and set it as an environment variable `TRACELOOP_API_KEY` before you start. Check out the SDK’s [getting started guide](/docs/openllmetry/quick-start/python) for more information.
diff --git a/fern/pages/openllmetry/integrations/opentelemetry-collector.mdx b/fern/pages/openllmetry/integrations/opentelemetry-collector.mdx
index a27cbfc..065810a 100644
--- a/fern/pages/openllmetry/integrations/opentelemetry-collector.mdx
+++ b/fern/pages/openllmetry/integrations/opentelemetry-collector.mdx
@@ -8,4 +8,4 @@ Since Traceloop is emitting standard OTLP HTTP (standard OpenTelemetry protocol)
TRACELOOP_BASE_URL=https://<opentelemetry-collector-endpoint>:4318
```
-You can connect your collector to Traceloop by following the instructions in the [Traceloop integration section](/open-ll-metry/integrations/traceloop#using-an-opentelemetry-collector).
\ No newline at end of file
+You can connect your collector to Traceloop by following the instructions in the [Traceloop integration section](/docs/openllmetry/integrations/traceloop#using-an-opentelemetry-collector).
\ No newline at end of file
diff --git a/fern/pages/openllmetry/integrations/overview.mdx b/fern/pages/openllmetry/integrations/overview.mdx
index ae3cd18..0ace824 100644
--- a/fern/pages/openllmetry/integrations/overview.mdx
+++ b/fern/pages/openllmetry/integrations/overview.mdx
@@ -7,19 +7,19 @@ Since Traceloop SDK is using OpenTelemetry under the hood, you can see everythin
## The Integrations Catalog
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/fern/pages/openllmetry/integrations/traceloop.mdx b/fern/pages/openllmetry/integrations/traceloop.mdx
index 6316622..3979423 100644
--- a/fern/pages/openllmetry/integrations/traceloop.mdx
+++ b/fern/pages/openllmetry/integrations/traceloop.mdx
@@ -55,6 +55,6 @@ excerpt: LLM Observability with Traceloop
exporters: [otlp/traceloop]
```
- You can route OpenLLMetry to your collector by following the [OpenTelemetry Collector](/open-ll-metry/integrations/open-telemetry-collector) integration instructions.
+ You can route OpenLLMetry to your collector by following the [OpenTelemetry Collector](/docs/openllmetry/integrations/open-telemetry-collector) integration instructions.
\ No newline at end of file
diff --git a/fern/pages/openllmetry/intro/what-is-llmetry.mdx b/fern/pages/openllmetry/intro/what-is-llmetry.mdx
index d403dcf..41efd18 100644
--- a/fern/pages/openllmetry/intro/what-is-llmetry.mdx
+++ b/fern/pages/openllmetry/intro/what-is-llmetry.mdx
@@ -53,22 +53,22 @@ You can use OpenLLMetry whether you use a framework like LangChain, or directly
## Getting Started
-
+
Set up Traceloop Python SDK in your project
-
+
Set up Traceloop Javascript SDK in your project
-
+
Set up Traceloop Go SDK in your project
-
+
Learn how to annotate your code to enrich your traces
-
+
Learn how to connect to your existing observability stack
-
+
How we secure your data
diff --git a/fern/pages/openllmetry/privacy/prompts-completions-embeddings.mdx b/fern/pages/openllmetry/privacy/prompts-completions-embeddings.mdx
index 9b2f9b1..a35bc40 100644
--- a/fern/pages/openllmetry/privacy/prompts-completions-embeddings.mdx
+++ b/fern/pages/openllmetry/privacy/prompts-completions-embeddings.mdx
@@ -56,7 +56,7 @@ You can decide to selectively enable or disable prompt logging for specific user
- We have an API to enable content tracing for specific users, as defined by [association entities](/open-ll-metry/tracing/associating-entities-with-traces). See the [Traceloop API documentation](/dashboard-api/endpoints) for more information.
+ We have an API to enable content tracing for specific users, as defined by [association entities](/docs/openllmetry/tracing/associating-entities-with-traces). See the [Traceloop API documentation](/docs/dashboard-api/endpoints) for more information.
Set a key called `override_enable_content_tracing` in the OpenTelemetry context to `True` right before making the LLM call you want to trace with prompts. This will create a new context that will instruct instrumentations to log prompts and completions as span attributes.
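A minimal Python sketch of the context override described above, assuming the standard `opentelemetry` context API; the LLM call itself is left as a placeholder:

```python
from opentelemetry.context import attach, detach, set_value

def call_llm_with_content_tracing() -> None:
    # Enable prompt/completion logging only for calls made inside this block.
    token = attach(set_value("override_enable_content_tracing", True))
    try:
        # ... make the instrumented LLM call you want traced with prompts here ...
        pass
    finally:
        # Restore the previous context so later calls keep content logging disabled.
        detach(token)
```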
diff --git a/fern/pages/openllmetry/quickstart/go.mdx b/fern/pages/openllmetry/quickstart/go.mdx
index 25a77c6..66a13f0 100644
--- a/fern/pages/openllmetry/quickstart/go.mdx
+++ b/fern/pages/openllmetry/quickstart/go.mdx
@@ -99,7 +99,7 @@ func call_llm() {
Lastly, you’ll need to configure where to export your traces. The 2 environment variables controlling this are `TRACELOOP_API_KEY` and `TRACELOOP_BASE_URL`.
-For Traceloop, read on. For other options, see [Exporting](/open-ll-metry/integrations/overview).
+For Traceloop, read on. For other options, see [Exporting](/docs/openllmetry/integrations/overview).
### Using Traceloop Cloud
diff --git a/fern/pages/openllmetry/quickstart/next.mdx b/fern/pages/openllmetry/quickstart/next.mdx
index 7f298d5..55c8de2 100644
--- a/fern/pages/openllmetry/quickstart/next.mdx
+++ b/fern/pages/openllmetry/quickstart/next.mdx
@@ -158,7 +158,7 @@ You can check out our full working example with Next.js 13 [here](https://github
If you have complex workflows or chains, you can annotate them to get a better understanding of what’s going on. You’ll see the complete trace of your workflow on Traceloop or any other dashboard you’re using.
- We have a set of [methods and decorators](/open-ll-metry/tracing/workflows-tasks-agents-and-tools) to make this easier. Assume you have a function that renders a prompt and calls an LLM, simply wrap it in a `withWorkflow()` function call.
+ We have a set of [methods and decorators](/docs/openllmetry/tracing/workflows-tasks-agents-and-tools) to make this easier. If you have a function that renders a prompt and calls an LLM, simply wrap it in a `withWorkflow()` function call.
We also have compatible Typescript decorators for class methods which are more convenient.
@@ -188,13 +188,13 @@ You can check out our full working example with Next.js 13 [here](https://github
- For more information, see the [dedicated section in the docs](/open-ll-metry/tracing/workflows-tasks-agents-and-tools).
+ For more information, see the [dedicated section in the docs](/docs/openllmetry/tracing/workflows-tasks-agents-and-tools).
### Configure Trace Exporting
Lastly, you’ll need to configure where to export your traces. The 2 environment variables controlling this are `TRACELOOP_API_KEY` and `TRACELOOP_BASE_URL`.
- For Traceloop, read on. For other options, see [Exporting](/open-ll-metry/integrations/overview).
+ For Traceloop, read on. For other options, see [Exporting](/docs/openllmetry/integrations/overview).
### Using Traceloop Cloud
diff --git a/fern/pages/openllmetry/quickstart/node.mdx b/fern/pages/openllmetry/quickstart/node.mdx
index 2ad6a56..0d3cb16 100644
--- a/fern/pages/openllmetry/quickstart/node.mdx
+++ b/fern/pages/openllmetry/quickstart/node.mdx
@@ -54,7 +54,7 @@ traceloop.initialize({ disableBatch: true });
If you have complex workflows or chains, you can annotate them to get a better understanding of what’s going on. You’ll see the complete trace of your workflow on Traceloop or any other dashboard you’re using.
-We have a set of [methods and decorators](/open-ll-metry/tracing/workflows-tasks-agents-and-tools) to make this easier. Assume you have a function that renders a prompt and calls an LLM, simply wrap it in a `withWorkflow()` function call.
+We have a set of [methods and decorators](/docs/openllmetry/tracing/workflows-tasks-agents-and-tools) to make this easier. If you have a function that renders a prompt and calls an LLM, simply wrap it in a `withWorkflow()` function call.
We also have compatible Typescript decorators for class methods which are more convenient.
@@ -84,13 +84,13 @@ If you’re using an LLM framework like Haystack, Langchain or LlamaIndex - we
-For more information, see the [dedicated section in the docs](/open-ll-metry/tracing/workflows-tasks-agents-and-tools).
+For more information, see the [dedicated section in the docs](/docs/openllmetry/tracing/workflows-tasks-agents-and-tools).
### Configure Trace Exporting
Lastly, you’ll need to configure where to export your traces. The 2 environment variables controlling this are `TRACELOOP_API_KEY` and `TRACELOOP_BASE_URL`.
-For Traceloop, read on. For other options, see [Exporting](/open-ll-metry/integrations/overview).
+For Traceloop, read on. For other options, see [Exporting](/docs/openllmetry/integrations/overview).
### Using Traceloop Cloud
diff --git a/fern/pages/openllmetry/quickstart/python.mdx b/fern/pages/openllmetry/quickstart/python.mdx
index be9449b..c768b06 100644
--- a/fern/pages/openllmetry/quickstart/python.mdx
+++ b/fern/pages/openllmetry/quickstart/python.mdx
@@ -45,7 +45,7 @@ Traceloop.init(disable_batch=True)
If you have complex workflows or chains, you can annotate them to get a better understanding of what’s going on. You’ll see the complete trace of your workflow on Traceloop or any other dashboard you’re using.
-We have a set of [decorators](/open-ll-metry/tracing/workflows-tasks-agents-and-tools) to make this easier. Assume you have a function that renders a prompt and calls an LLM, simply add `@workflow` (or for asynchronous methods - `@aworkflow`).
+We have a set of [decorators](/docs/openllmetry/tracing/workflows-tasks-agents-and-tools) to make this easier. If you have a function that renders a prompt and calls an LLM, simply add `@workflow` (or `@aworkflow` for asynchronous methods).
If you’re using an LLM framework like Haystack, Langchain or LlamaIndex - we’ll do that for you. No need to add any annotations to your code.
@@ -60,13 +60,13 @@ def suggest_answers(question: str):
...
```
-For more information, see the [dedicated section in the docs](/open-ll-metry/tracing/workflows-tasks-agents-and-tools).
+For more information, see the [dedicated section in the docs](/docs/openllmetry/tracing/workflows-tasks-agents-and-tools).
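Putting the pieces above together, a sketch of a decorated workflow could look like this (the model, prompt, and workflow name are illustrative; OpenAI is only one of the instrumented providers):

```python
from openai import OpenAI
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow

Traceloop.init(disable_batch=True)
client = OpenAI()

@workflow(name="suggest_answers")
def suggest_answers(question: str) -> str:
    # Every span created inside this call is grouped under one workflow trace.
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    return completion.choices[0].message.content
```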
### Configure trace exporting
Lastly, you’ll need to configure where to export your traces. The 2 environment variables controlling this are `TRACELOOP_API_KEY` and `TRACELOOP_BASE_URL`.
-For Traceloop, read on. For other options, see [Exporting](/open-ll-metry/integrations/overview).
+For Traceloop, read on. For other options, see [Exporting](/docs/openllmetry/integrations/overview).
### Using Traceloop Cloud
diff --git a/fern/pages/openllmetry/quickstart/ruby.mdx b/fern/pages/openllmetry/quickstart/ruby.mdx
index 37d43e8..72a05b3 100644
--- a/fern/pages/openllmetry/quickstart/ruby.mdx
+++ b/fern/pages/openllmetry/quickstart/ruby.mdx
@@ -72,7 +72,7 @@ end
Lastly, you’ll need to configure where to export your traces. The 2 environment variables controlling this are `TRACELOOP_API_KEY` and `TRACELOOP_BASE_URL`.
-For Traceloop, read on. For other options, see [Exporting](/open-ll-metry/integrations/overview).
+For Traceloop, read on. For other options, see [Exporting](/docs/openllmetry/integrations/overview).
### Using Traceloop Cloud
diff --git a/fern/pages/openllmetry/quickstart/sdk-initialization.mdx b/fern/pages/openllmetry/quickstart/sdk-initialization.mdx
index 20ac647..694cb3d 100644
--- a/fern/pages/openllmetry/quickstart/sdk-initialization.mdx
+++ b/fern/pages/openllmetry/quickstart/sdk-initialization.mdx
@@ -41,7 +41,7 @@ This defines the OpenTelemetry endpoint to connect to. It defaults to https://ap
If you prefix it with `http` or `https`, it will use the OTLP/HTTP protocol. Otherwise, it will use the OTLP/GRPC protocol.
-For configuring this to different observability platform, check out our [integrations section](/open-ll-metry/integrations/overview).
+To configure this for a different observability platform, check out our [integrations section](/docs/openllmetry/integrations/overview).
The OpenTelemetry standard defines that the actual endpoint should always end with `/v1/traces`. Thus, if you specify a base URL, we always append `/v1/traces` to it. This is similar to how `OTEL_EXPORTER_OTLP_ENDPOINT` works in all OpenTelemetry SDKs.
@@ -69,7 +69,7 @@ The OpenTelemetry standard defines that the actual endpoint should always end wi
If set, this is sent as a bearer token on the Authorization header.
-[Traceloop](/open-ll-metry/integrations/traceloop), for example, use this to authenticate incoming traces and requests.
+[Traceloop](/docs/openllmetry/integrations/traceloop), for example, uses this to authenticate incoming traces and requests.
If this is not set, and the base URL is set to `https://api.traceloop.com`, the SDK will generate a new API key automatically with the Traceloop dashboard.
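A sketch of the two variables described above, assuming they are set before the SDK initializes; the collector hostname and API key are placeholders:

```python
import os

# An https prefix selects OTLP/HTTP; the SDK appends /v1/traces to the base URL.
os.environ["TRACELOOP_BASE_URL"] = "https://collector.example.com:4318"
# If set, this is sent as a bearer token on the Authorization header.
os.environ["TRACELOOP_API_KEY"] = "<your-api-key>"

from traceloop.sdk import Traceloop

Traceloop.init()
```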
diff --git a/fern/pages/openllmetry/quickstart/troubleshooting.mdx b/fern/pages/openllmetry/quickstart/troubleshooting.mdx
index 5356d98..f2f9211 100644
--- a/fern/pages/openllmetry/quickstart/troubleshooting.mdx
+++ b/fern/pages/openllmetry/quickstart/troubleshooting.mdx
@@ -11,7 +11,7 @@ We’ve all been there. You followed all the instructions, but you’re not seei
### Disable batch sending
-Sending traces in batch is useful in production, but can be confusing if you’re working locally. Make sure you’ve [disabled batch sending](/open-ll-metry/quick-start/sdk-initialization-options#disable-batch).
+Sending traces in batch is useful in production, but can be confusing if you’re working locally. Make sure you’ve [disabled batch sending](/docs/openllmetry/quick-start/sdk-initialization-options#disable-batch).
@@ -44,7 +44,7 @@ import OpenAI from "openai";
// ...
```
-If that doesn’t work, you may need to manually instrument the libraries you’re using. See the [manual instrumentation guide](/open-ll-metry/tracing/manual-implementations-typescript-javascript) for more details.
+If that doesn’t work, you may need to manually instrument the libraries you’re using. See the [manual instrumentation guide](/docs/openllmetry/tracing/manual-implementations-typescript-javascript) for more details.
```javascript
import OpenAI from "openai";
@@ -85,7 +85,7 @@ Use the `ConsoleExporter` and check if you see traces in the console.
-If you see traces in the console, then you probable haven’t configured the exporter properly. Check the [integration guide](/open-ll-metry/integrations/overview) again, and make sure you’re using the right endpoint and API key.
+If you see traces in the console, then you probably haven’t configured the exporter properly. Check the [integration guide](/docs/openllmetry/integrations/overview) again, and make sure you’re using the right endpoint and API key.
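A sketch of the console check mentioned above, assuming `Traceloop.init` accepts a custom exporter and using OpenTelemetry’s `ConsoleSpanExporter`:

```python
from opentelemetry.sdk.trace.export import ConsoleSpanExporter
from traceloop.sdk import Traceloop

# Print spans to stdout instead of exporting them, to confirm traces are produced at all.
Traceloop.init(exporter=ConsoleSpanExporter())
```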
### Talk to us!
diff --git a/fern/pages/openllmetry/tracing/tracking-feedback.mdx b/fern/pages/openllmetry/tracing/tracking-feedback.mdx
index f77dcb5..95e7411 100644
--- a/fern/pages/openllmetry/tracing/tracking-feedback.mdx
+++ b/fern/pages/openllmetry/tracing/tracking-feedback.mdx
@@ -1,6 +1,6 @@
When building LLM applications, it quickly becomes highly useful and important to track user feedback on the result of your LLM workflow.
-Doing that with OpenLLMetry is easy. First, make sure you [associate your LLM workflow with unique identifiers](/open-ll-metry/tracing/associating-entities-with-traces).
+Doing that with OpenLLMetry is easy. First, make sure you [associate your LLM workflow with unique identifiers](/docs/openllmetry/tracing/associating-entities-with-traces).
Then, you can simply log user feedback by calling our Python SDK or Typescript SDK. Feedback scores are always between -1 and 1, where -1 is the worst possible score and 1 is the best.
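A Python sketch of the first step, tying traces to an identifier you can later attach feedback to; the property name and the score-reporting call are illustrative assumptions, so check the SDK reference for the exact method:

```python
from traceloop.sdk import Traceloop

# Associate everything traced from here on with a user-facing identifier.
Traceloop.set_association_properties({"chat_id": "chat-1234"})

# Later, report the user's feedback for that identifier as a score between -1 and 1,
# e.g. Traceloop.report_score("chat_id", "chat-1234", 1)  # assumed signature; see the SDK docs
```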
diff --git a/fern/pages/openllmetry/tracing/workflows-tasks-agents-tools.mdx b/fern/pages/openllmetry/tracing/workflows-tasks-agents-tools.mdx
index 4f7254f..bca08b3 100644
--- a/fern/pages/openllmetry/tracing/workflows-tasks-agents-tools.mdx
+++ b/fern/pages/openllmetry/tracing/workflows-tasks-agents-tools.mdx
@@ -275,7 +275,7 @@ In Typescript, you can use the same syntax for async methods.
In Python, you’ll need to switch to an equivalent async decorator. So, if you’re decorating an `async` method, use `@aworkflow`, `@atask` and so forth.
-See also a separate section on [using threads in Python with OpenLLMetry](/open-ll-metry/tracing/usage-with-threads-python).
+See also a separate section on [using threads in Python with OpenLLMetry](/docs/openllmetry/tracing/usage-with-threads-python).
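A sketch of the async variants described above, assuming `aworkflow` and `atask` are importable from the same decorators module; the sleep stands in for real async I/O:

```python
import asyncio

from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import atask, aworkflow

Traceloop.init()

@atask(name="fetch_context")
async def fetch_context(question: str) -> str:
    await asyncio.sleep(0)  # stand-in for an async call
    return f"context for {question}"

@aworkflow(name="answer_question")
async def answer_question(question: str) -> str:
    # The task span is nested under the workflow span, same as the sync decorators.
    return await fetch_context(question)

asyncio.run(answer_question("What is OpenLLMetry?"))
```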
## Decorating Classes (Python only)