diff --git a/website/docs/docs/build/incremental-microbatch.md b/website/docs/docs/build/incremental-microbatch.md
index e6c8284cc4b..9055aa7650b 100644
--- a/website/docs/docs/build/incremental-microbatch.md
+++ b/website/docs/docs/build/incremental-microbatch.md
@@ -61,7 +61,7 @@ models:
```
-We run the `sessions` model on October 1, 2024, and then again on October 2. It produces the following queries:
+We run the `sessions` model for October 1, 2024, and then again for October 2. It produces the following queries:
diff --git a/website/docs/docs/build/incremental-models.md b/website/docs/docs/build/incremental-models.md
index d7b6ecd8f54..0560797c9bc 100644
--- a/website/docs/docs/build/incremental-models.md
+++ b/website/docs/docs/build/incremental-models.md
@@ -156,15 +156,17 @@ Building this model incrementally without the `unique_key` parameter would resul
## How do I rebuild an incremental model?
If your incremental model logic has changed, the transformations on your new rows of data may diverge from the historical transformations, which are stored in your target table. In this case, you should rebuild your incremental model.
-To force dbt to rebuild the entire incremental model from scratch, use the `--full-refresh` flag on the command line. This flag will cause dbt to drop the existing target table in the database before rebuilding it for all-time.
+To force dbt to rebuild the entire incremental model from scratch, use the `--full-refresh` flag on the command line. This flag will cause dbt to drop the existing target table in the database before rebuilding it for all time.
```bash
$ dbt run --full-refresh --select my_incremental_model+
```
+
It's also advisable to rebuild any downstream models, as indicated by the trailing `+`.
-For detailed usage instructions, check out the [dbt run](/reference/commands/run) documentation.
+You can optionally use the [`full_refresh`](/reference/resource-configs/full_refresh) config to set a resource to always or never full-refresh at the project or resource level. If set to `true` or `false`, the `full_refresh` config takes precedence over the presence or absence of the `--full-refresh` flag.
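As a sketch, pinning a model so it never full-refreshes (the project and model names here are hypothetical):

```yml
# dbt_project.yml
models:
  my_project:
    my_incremental_model:
      +full_refresh: false  # ignore --full-refresh for this model
```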
+For detailed usage instructions, check out the [dbt run](/reference/commands/run) documentation.
## What if the columns of my incremental model change?
diff --git a/website/docs/docs/build/incremental-strategy.md b/website/docs/docs/build/incremental-strategy.md
index 959671bd16e..9a8f8358f0f 100644
--- a/website/docs/docs/build/incremental-strategy.md
+++ b/website/docs/docs/build/incremental-strategy.md
@@ -30,7 +30,7 @@ Click the name of the adapter in the below table for more information about supp
| [dbt-redshift](/reference/resource-configs/redshift-configs#incremental-materialization-strategies) | ✅ | ✅ | ✅ | | ✅ |
| [dbt-bigquery](/reference/resource-configs/bigquery-configs#merge-behavior-incremental-models) | | ✅ | | ✅ | ✅ |
| [dbt-spark](/reference/resource-configs/spark-configs#incremental-models) | ✅ | ✅ | | ✅ | ✅ |
-| [dbt-databricks](/reference/resource-configs/databricks-configs#incremental-models) | ✅ | ✅ | | ✅ | |
+| [dbt-databricks](/reference/resource-configs/databricks-configs#incremental-models) | ✅ | ✅ | | ✅ | ✅ |
| [dbt-snowflake](/reference/resource-configs/snowflake-configs#merge-behavior-incremental-models) | ✅ | ✅ | ✅ | | ✅ |
| [dbt-trino](/reference/resource-configs/trino-configs#incremental) | ✅ | ✅ | ✅ | | |
| [dbt-fabric](/reference/resource-configs/fabric-configs#incremental) | ✅ | ✅ | ✅ | | |
diff --git a/website/docs/docs/build/python-models.md b/website/docs/docs/build/python-models.md
index c3222fb76b8..eac477b03fd 100644
--- a/website/docs/docs/build/python-models.md
+++ b/website/docs/docs/build/python-models.md
@@ -641,7 +641,8 @@ In their initial launch, Python models are supported on three of the most popula
**Installing packages:** Snowpark supports several popular packages via Anaconda. Refer to the [complete list](https://repo.anaconda.com/pkgs/snowflake/) for more details. Packages are installed when your model is run. Different models can have different package dependencies. If you use third-party packages, Snowflake recommends using a dedicated virtual warehouse for best performance rather than one with many concurrent users.
**Python version:** To specify a different python version, use the following configuration:
-```
+
+```python
def model(dbt, session):
dbt.config(
materialized = "table",
@@ -653,7 +654,7 @@ def model(dbt, session):
**External access integrations and secrets**: To query external APIs within dbt Python models, use Snowflake’s [external access](https://docs.snowflake.com/en/developer-guide/external-network-access/external-network-access-overview) together with [secrets](https://docs.snowflake.com/en/developer-guide/external-network-access/secret-api-reference). Here are some additional configurations you can use:
-```
+```python
import pandas
import snowflake.snowpark as snowpark
diff --git a/website/docs/docs/build/snapshots.md b/website/docs/docs/build/snapshots.md
index 3b21549a3c7..9a020c7c940 100644
--- a/website/docs/docs/build/snapshots.md
+++ b/website/docs/docs/build/snapshots.md
@@ -487,7 +487,7 @@ Snapshot results:
For information about configuring snapshots in dbt versions 1.8 and earlier, select **1.8** from the documentation version picker, and it will appear in this section.
-To configure snapshots in versions 1.9 and later, refer to [Configuring snapshots](#configuring-snapshots). The latest versions use an updated snapshot configuration syntax that optimizes performance.
+To configure snapshots in versions 1.9 and later, refer to [Configuring snapshots](#configuring-snapshots). The latest versions use a more ergonomic snapshot configuration syntax that also speeds up parsing and compilation.
diff --git a/website/docs/docs/cloud/connect-data-platform/connect-starburst-trino.md b/website/docs/docs/cloud/connect-data-platform/connect-starburst-trino.md
index db0d3f61728..4c460f0d705 100644
--- a/website/docs/docs/cloud/connect-data-platform/connect-starburst-trino.md
+++ b/website/docs/docs/cloud/connect-data-platform/connect-starburst-trino.md
@@ -11,7 +11,7 @@ The following are the required fields for setting up a connection with a [Starbu
| **Host** | The hostname of your cluster. Don't include the HTTP protocol prefix. | `mycluster.mydomain.com` |
| **Port** | The port to connect to your cluster. By default, it's 443 for TLS enabled clusters. | `443` |
| **User** | The username (of the account) to log in to your cluster. When connecting to Starburst Galaxy clusters, you must include the role of the user as a suffix to the username. | Format for Starburst Enterprise or Trino depends on your configured authentication method. <br/> Format for Starburst Galaxy: <br/> `user.name@mydomain.com/role` |
-| **Password** | The user's password. | |
+| **Password** | The user's password. | - |
| **Database** | The name of a catalog in your cluster. | `example_catalog` |
| **Schema** | The name of a schema that exists within the specified catalog. | `example_schema` |
diff --git a/website/docs/docs/cloud/connect-data-platform/connnect-bigquery.md b/website/docs/docs/cloud/connect-data-platform/connnect-bigquery.md
index 1ce9712ab91..ffe7e468bd2 100644
--- a/website/docs/docs/cloud/connect-data-platform/connnect-bigquery.md
+++ b/website/docs/docs/cloud/connect-data-platform/connnect-bigquery.md
@@ -11,7 +11,12 @@ sidebar_label: "Connect BigQuery"
:::info Uploading a service account JSON keyfile
-While the fields in a BigQuery connection can be specified manually, we recommend uploading a service account keyfile to quickly and accurately configure a connection to BigQuery.
+While the fields in a BigQuery connection can be specified manually, we recommend uploading a service account keyfile to quickly and accurately configure a connection to BigQuery.
+
+You can provide the JSON keyfile in one of two formats:
+
+- **JSON keyfile upload:** Upload the keyfile directly in its normal JSON format.
+- **Base64-encoded string:** Provide the keyfile as a base64-encoded string. When you provide a base64-encoded string, dbt decodes it automatically and populates the necessary fields.
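To produce the base64 string, one option is the standard `base64` utility; the keyfile name below is a stand-in for your real service account JSON:

```shell
# Create a stand-in keyfile so the example is self-contained; in practice,
# use the service account JSON you downloaded from Google Cloud.
printf '{"type":"service_account"}' > dbt-user-creds.json

# Encode to a single-line base64 string (GNU coreutils; on macOS use
# `base64 -i dbt-user-creds.json` instead).
base64 -w 0 dbt-user-creds.json
```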
:::
diff --git a/website/docs/docs/cloud/git/connect-gitlab.md b/website/docs/docs/cloud/git/connect-gitlab.md
index 648a4543932..40d84f7d164 100644
--- a/website/docs/docs/cloud/git/connect-gitlab.md
+++ b/website/docs/docs/cloud/git/connect-gitlab.md
@@ -61,8 +61,8 @@ In GitLab, when creating your Group Application, input the following:
| ------ | ----- |
| **Name** | dbt Cloud |
| **Redirect URI** | `https://YOUR_ACCESS_URL/complete/gitlab` |
-| **Confidential** | ✔️ |
-| **Scopes** | ✔️ api |
+| **Confidential** | ✅ |
+| **Scopes** | ✅ api |
Replace `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/about-cloud/access-regions-ip-addresses) for your region and plan.
diff --git a/website/docs/docs/cloud/manage-access/self-service-permissions.md b/website/docs/docs/cloud/manage-access/self-service-permissions.md
index a5bdba825c2..6b326645d44 100644
--- a/website/docs/docs/cloud/manage-access/self-service-permissions.md
+++ b/website/docs/docs/cloud/manage-access/self-service-permissions.md
@@ -52,33 +52,33 @@ The following tables outline the access that users have if they are assigned a D
| Account-level permission| Owner | Member | Read-only license| IT license |
|:------------------------|:-----:|:------:|:----------------:|:------------:|
-| Account settings | W | W | | W |
-| Billing | W | | | W |
-| Invitations | W | W | | W |
-| Licenses | W | R | | W |
-| Users | W | R | | W |
-| Project (create) | W | W | | W |
-| Connections | W | W | | W |
-| Service tokens | W | | | W |
-| Webhooks | W | W | | |
+| Account settings | W | W | - | W |
+| Billing | W | - | - | W |
+| Invitations | W | W | - | W |
+| Licenses | W | R | - | W |
+| Users | W | R | - | W |
+| Project (create) | W | W | - | W |
+| Connections | W | W | - | W |
+| Service tokens | W | - | - | W |
+| Webhooks | W | W | - | - |
#### Project permissions for account roles
|Project-level permission | Owner | Member | Read-only | IT license |
|:------------------------|:-----:|:-------:|:---------:|:----------:|
-| Adapters | W | W | R | |
-| Connections | W | W | R | |
-| Credentials | W | W | R | |
-| Custom env. variables | W | W | R | |
-| Develop (IDE or dbt Cloud CLI)| W | W | | |
-| Environments | W | W | R | |
-| Jobs | W | W | R | |
-| dbt Explorer | W | W | R | |
-| Permissions | W | R | | |
-| Profile | W | W | R | |
-| Projects | W | W | R | |
-| Repositories | W | W | R | |
-| Runs | W | W | R | |
-| Semantic Layer Config | W | W | R | |
+| Adapters | W | W | R | - |
+| Connections | W | W | R | - |
+| Credentials | W | W | R | - |
+| Custom env. variables | W | W | R | - |
+| Develop (IDE or dbt Cloud CLI)| W | W | - | - |
+| Environments | W | W | R | - |
+| Jobs | W | W | R | - |
+| dbt Explorer | W | W | R | - |
+| Permissions | W | R | - | - |
+| Profile | W | W | R | - |
+| Projects | W | W | R | - |
+| Repositories | W | W | R | - |
+| Runs | W | W | R | - |
+| Semantic Layer Config | W | W | R | - |
diff --git a/website/docs/docs/collaborate/govern/model-contracts.md b/website/docs/docs/collaborate/govern/model-contracts.md
index d30024157c8..9b75e518719 100644
--- a/website/docs/docs/collaborate/govern/model-contracts.md
+++ b/website/docs/docs/collaborate/govern/model-contracts.md
@@ -205,13 +205,11 @@ At the same time, for models with many columns, we understand that this can mean
When comparing to a previous project state, dbt will look for breaking changes that could impact downstream consumers. If breaking changes are detected, dbt will present a contract error.
-Breaking changes include:
-- Removing an existing column.
-- Changing the `data_type` of an existing column.
-- Removing or modifying one of the `constraints` on an existing column (dbt v1.6 or higher).
-- Removing a contracted model by deleting, renaming, or disabling it (dbt v1.9 or higher).
- - versioned models will raise an error.
- - unversioned models will raise a warning.
+import BreakingChanges from '/snippets/_versions-contracts.md';
+
+<BreakingChanges value="Removing a contracted model by deleting, renaming, or disabling it (dbt v1.9 or higher)" value2="Versioned models will raise an error; unversioned models will raise a warning." />
-More details are available in the [contract reference](/reference/resource-configs/contract#detecting-breaking-changes).
+
+More details are available in the [contract reference](/reference/resource-configs/contract#detecting-breaking-changes).
diff --git a/website/docs/docs/core/connect-data-platform/trino-setup.md b/website/docs/docs/core/connect-data-platform/trino-setup.md
index 4caa56dcb00..06c94d7e7ff 100644
--- a/website/docs/docs/core/connect-data-platform/trino-setup.md
+++ b/website/docs/docs/core/connect-data-platform/trino-setup.md
@@ -34,7 +34,7 @@ The following profile fields are always required except for `user`, which is als
| Field | Example | Description |
| --------- | ------- | ----------- |
-| `host` | `mycluster.mydomain.com` | The hostname of your cluster. <br/> Don't include the `http://` or `https://` prefix. |
+| `host` | `mycluster.mydomain.com` <br/> Format for Starburst Galaxy: <br/> `mygalaxyaccountname-myclustername.trino.galaxy.starburst.io` | The hostname of your cluster. <br/> Don't include the `http://` or `https://` prefix. |
| `database` | `my_postgres_catalog` | The name of a catalog in your cluster. |
| `schema` | `my_schema` | The name of a schema within your cluster's catalog. <br/> It's _not recommended_ to use schema names that have upper case or mixed case letters. |
| `port` | `443` | The port to connect to your cluster. By default, it's 443 for TLS enabled clusters. |
diff --git a/website/docs/docs/dbt-cloud-apis/user-tokens.md b/website/docs/docs/dbt-cloud-apis/user-tokens.md
index 02a81d80139..b7bf4fdce28 100644
--- a/website/docs/docs/dbt-cloud-apis/user-tokens.md
+++ b/website/docs/docs/dbt-cloud-apis/user-tokens.md
@@ -8,7 +8,7 @@ pagination_next: "docs/dbt-cloud-apis/service-tokens"
:::warning
-User API tokens have been deprecated and will no longer work. [Migrate](#migrate-from-user-api-keys-to-personal-access-tokens) to personal access tokens to resume services.
+User API tokens have been deprecated and will no longer work. [Migrate](#migrate-deprecated-user-api-keys-to-personal-access-tokens) to personal access tokens to resume services.
:::
diff --git a/website/docs/guides/bigquery-qs.md b/website/docs/guides/bigquery-qs.md
index 0820c23934d..194b73f25bf 100644
--- a/website/docs/guides/bigquery-qs.md
+++ b/website/docs/guides/bigquery-qs.md
@@ -85,13 +85,14 @@ In order to let dbt connect to your warehouse, you'll need to generate a keyfile
3. Create a service account key for your new project from the [Service accounts page](https://console.cloud.google.com/iam-admin/serviceaccounts?walkthrough_id=iam--create-service-account-keys&start_index=1#step_index=1). For more information, refer to [Create a service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys#creating) in the Google Cloud docs. When downloading the JSON file, make sure to use a filename you can easily remember. For example, `dbt-user-creds.json`. For security reasons, dbt Labs recommends that you protect this JSON file like you would your identity credentials; for example, don't check the JSON file into your version control software.
## Connect dbt Cloud to BigQuery
-1. Create a new project in [dbt Cloud](/docs/cloud/about-cloud/access-regions-ip-addresses). Navigate to **Account settings** (by clicking on your account name in the left side menu), and click **+ New Project**.
+1. Create a new project in [dbt Cloud](/docs/cloud/about-cloud/access-regions-ip-addresses). Navigate to **Account settings** (by clicking on your account name in the left side menu), and click **+ New project**.
2. Enter a project name and click **Continue**.
3. For the warehouse, click **BigQuery** then **Next** to set up your connection.
4. Click **Upload a Service Account JSON File** in settings.
5. Select the JSON file you downloaded in [Generate BigQuery credentials](#generate-bigquery-credentials) and dbt Cloud will fill in all the necessary fields.
-6. Click **Test Connection**. This verifies that dbt Cloud can access your BigQuery account.
-7. Click **Next** if the test succeeded. If it failed, you might need to go back and regenerate your BigQuery credentials.
+6. (Optional) On dbt Cloud Enterprise plans, you can configure developer OAuth with BigQuery, providing an additional layer of security. For more information, refer to [Set up BigQuery OAuth](/docs/cloud/manage-access/set-up-bigquery-oauth).
+7. Click **Test Connection**. This verifies that dbt Cloud can access your BigQuery account.
+8. Click **Next** if the test succeeded. If it failed, you might need to go back and regenerate your BigQuery credentials.
## Set up a dbt Cloud managed repository
diff --git a/website/docs/reference/resource-configs/database.md b/website/docs/reference/resource-configs/database.md
index 338159b30dc..48ac0c8451c 100644
--- a/website/docs/reference/resource-configs/database.md
+++ b/website/docs/reference/resource-configs/database.md
@@ -79,22 +79,19 @@ This results in the generated relation being located in the `snapshots` database
-Configure a database in your `dbt_project.yml` file.
+Customize the database for storing test results in your `dbt_project.yml` file.
-For example, to load a test into a database called `reporting` instead of the target database, you can configure it like this:
+For example, to save test results in a specific database, you can configure it like this:
```yml
tests:
- - my_not_null_test:
- column_name: order_id
- type: not_null
- +database: reporting
+ +store_failures: true
+ +database: test_results
```
-This would result in the generated relation being located in the `reporting` database, so the full relation name would be `reporting.finance.my_not_null_test`.
-
+This would result in the test results being stored in the `test_results` database.
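With `store_failures` enabled, each failing test writes its failing rows to a table in that database; a hypothetical query to inspect them (the audit schema and test table names are illustrative — dbt derives them from your target schema and the test name):

```sql
-- Inspect the rows that caused a test to fail.
select * from test_results.dbt_test__audit.my_not_null_test;
```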
diff --git a/website/docs/reference/resource-properties/versions.md b/website/docs/reference/resource-properties/versions.md
index f6b71852aef..748aa477a4f 100644
--- a/website/docs/reference/resource-properties/versions.md
+++ b/website/docs/reference/resource-properties/versions.md
@@ -73,13 +73,13 @@ Note that the value of `defined_in` and the `alias` configuration of a model are
When you use the `state:modified` selection method in Slim CI, dbt will detect changes to versioned model contracts, and raise an error if any of those changes could be breaking for downstream consumers.
-Breaking changes include:
-- Removing an existing column
-- Changing the `data_type` of an existing column
-- Removing or modifying one of the `constraints` on an existing column (dbt v1.6 or higher)
-- Changing unversioned, contracted models.
- - dbt also warns if a model has or had a contract but isn't versioned
-
+import BreakingChanges from '/snippets/_versions-contracts.md';
+
+<BreakingChanges value="Changing unversioned, contracted models" value2="dbt also warns if a model has or had a contract but isn't versioned" />
+
diff --git a/website/docs/reference/snapshot-configs.md b/website/docs/reference/snapshot-configs.md
index 7b3c0f8e5b1..3445c7ecac9 100644
--- a/website/docs/reference/snapshot-configs.md
+++ b/website/docs/reference/snapshot-configs.md
@@ -347,6 +347,7 @@ The following examples demonstrate how to configure snapshots using the `dbt_pro
{{
config(
unique_key='id',
+ target_schema='snapshots',
strategy='timestamp',
updated_at='updated_at'
)
diff --git a/website/snippets/_versions-contracts.md b/website/snippets/_versions-contracts.md
new file mode 100644
index 00000000000..1207e02fba9
--- /dev/null
+++ b/website/snippets/_versions-contracts.md
@@ -0,0 +1,7 @@
+Breaking changes include:
+
+- Removing an existing column
+- Changing the `data_type` of an existing column
+- Removing or modifying one of the `constraints` on an existing column (dbt v1.6 or higher)
+- {props.value}
+ - {props.value2}