From 96241e8b53d5a6113ce4bd880df06bf97273f57e Mon Sep 17 00:00:00 2001
From: Ben Ye
Date: Sat, 4 May 2024 20:10:29 -0700
Subject: [PATCH] chore: remove deprecated/removed features from docs

Signed-off-by: Ben Ye
---
 .../migrate-from-chunks-to-blocks.md | 12 ------
 docs/configuration/arguments.md      | 39 -------------------
 docs/configuration/v1-guarantees.md  |  4 --
 3 files changed, 55 deletions(-)

diff --git a/docs/blocks-storage/migrate-from-chunks-to-blocks.md b/docs/blocks-storage/migrate-from-chunks-to-blocks.md
index 6fe05cd0cc..848f22d2ba 100644
--- a/docs/blocks-storage/migrate-from-chunks-to-blocks.md
+++ b/docs/blocks-storage/migrate-from-chunks-to-blocks.md
@@ -57,18 +57,6 @@ As chunks ingesters shut down, they flush chunks to the storage. They are then r
 to use blocks. Queriers cannot fetch recent chunks from ingesters directly (as blocks ingester
 don't reload chunks), and need to use storage instead.
 
-### Query-frontend
-
-Query-frontend needs to be reconfigured as follow:
-
-- `-querier.parallelise-shardable-queries=false`
-
-#### `-querier.parallelise-shardable-queries=false`
-
-Query frontend has an option `-querier.parallelise-shardable-queries` to split some incoming queries into multiple queries based on sharding factor used in v11 schema of chunk storage.
-As the description implies, it only works when using chunks storage.
-During and after the migration to blocks (and also after possible rollback), this option needs to be disabled otherwise query-frontend will generate queries that cannot be satisfied by blocks storage.
-
 ### Compactor and Store-gateway
 
 [Compactor](./compactor.md) and [store-gateway](./store-gateway.md) services should be deployed and successfully up and running before migrating ingesters.
diff --git a/docs/configuration/arguments.md b/docs/configuration/arguments.md
index ed073ec62d..18ddf0ea89 100644
--- a/docs/configuration/arguments.md
+++ b/docs/configuration/arguments.md
@@ -53,51 +53,12 @@ The next three options only apply when the querier is used together with the Que
 
 ## Querier and Ruler
 
-The ingester query API was improved over time, but defaults to the old behaviour for backwards-compatibility. For best results both of these next two flags should be set to `true`:
-
-- `-querier.batch-iterators`
-
-  This uses iterators to execute query, as opposed to fully materialising the series in memory, and fetches multiple results per loop.
-
-- `-querier.ingester-streaming`
-
-  Use streaming RPCs to query ingester, to reduce memory pressure in the ingester.
-
-- `-querier.iterators`
-
-  This is similar to `-querier.batch-iterators` but less efficient.
-  If both `iterators` and `batch-iterators` are `true`, `batch-iterators` will take precedence.
-
 - `-promql.lookback-delta`
 
   Time since the last sample after which a time series is considered stale and ignored by expression evaluations.
 
 ## Query Frontend
 
-- `-querier.parallelise-shardable-queries`
-
-  If set to true, will cause the query frontend to mutate incoming queries when possible by turning `sum` operations into sharded `sum` operations. This requires a shard-compatible schema (v10+). An abridged example:
-  `sum by (foo) (rate(bar{baz=”blip”}[1m]))` ->
-  ```
-  sum by (foo) (
-    sum by (foo) (rate(bar{baz=”blip”,__cortex_shard__=”0of16”}[1m])) or
-    sum by (foo) (rate(bar{baz=”blip”,__cortex_shard__=”1of16”}[1m])) or
-    ...
-    sum by (foo) (rate(bar{baz=”blip”,__cortex_shard__=”15of16”}[1m]))
-  )
-  ```
-  When enabled, the query-frontend requires a schema config to determine how/when to shard queries, either from a file or from flags (i.e. by the `-schema-config-file` CLI flag). This is the same schema config the queriers consume.
-  It's also advised to increase downstream concurrency controls as well to account for more queries of smaller sizes:
-
-  - `querier.max-outstanding-requests-per-tenant`
-  - `querier.max-query-parallelism`
-  - `querier.max-concurrent`
-  - `server.grpc-max-concurrent-streams` (for both query-frontends and queriers)
-
-  Furthermore, both querier and query-frontend components require the `querier.query-ingesters-within` parameter to know when to start sharding requests (ingester queries are not sharded).
-
-  Instrumentation (traces) also scale with the number of sharded queries and it's suggested to account for increased throughput there as well (for instance via `JAEGER_REPORTER_MAX_QUEUE_SIZE`).
-
 - `-querier.align-querier-with-step`
 
   If set to true, will cause the query frontend to mutate incoming queries and align their start and end parameters to the step parameter of the query. This improves the cacheability of the query results.
diff --git a/docs/configuration/v1-guarantees.md b/docs/configuration/v1-guarantees.md
index 41a99169b7..1436cfc940 100644
--- a/docs/configuration/v1-guarantees.md
+++ b/docs/configuration/v1-guarantees.md
@@ -45,7 +45,6 @@ Currently experimental features are:
 - Sharding of tenants across multiple instances (enabled via `-alertmanager.sharding-enabled`)
 - Receiver integrations firewall (configured via `-alertmanager.receivers-firewall.*`)
 - Memcached client DNS-based service discovery.
-- Delete series APIs.
 - In-memory (FIFO) and Redis cache.
 - gRPC Store.
 - TLS configuration in gRPC and HTTP clients.
@@ -65,9 +64,6 @@ Currently experimental features are:
 - Querier: tenant federation
 - The thanosconvert tool for converting Thanos block metadata to Cortex
 - HA Tracker: cleanup of old replicas from KV Store.
-- Flags for configuring whether blocks-ingester streams samples or chunks are temporary, and will be removed on next release:
-  - `-ingester.stream-chunks-when-using-blocks` CLI flag
-  - `-ingester_stream_chunks_when_using_blocks` (boolean) field in runtime config file
 - Instance limits in ingester and distributor
 - Exemplar storage, currently in-memory only within the Ingester based on Prometheus exemplar storage (`-blocks-storage.tsdb.max-exemplars`)
 - Querier limits: