diff --git a/docs/_freeze/posts/flink-quickstart/index/execute-results/html.json b/docs/_freeze/posts/flink-quickstart/index/execute-results/html.json
new file mode 100644
index 0000000000000..c413428c105dd
--- /dev/null
+++ b/docs/_freeze/posts/flink-quickstart/index/execute-results/html.json
@@ -0,0 +1,15 @@
+{
+ "hash": "6f93a56c5d381bd6e0bcff9db8e48f7c",
+ "result": {
+ "markdown": "---\ntitle: \"Ibis goes real-time! Introducing the new Flink backend for Ibis\"\nauthor: \"Deepyaman Datta\"\ndate: \"2024-01-15\"\ncategories:\n - blog\n - flink\n - stream processing\n---\n\n## Introduction\n\nIbis 8.0.0 marks the official release of the Apache Flink backend for Ibis. Flink is one of the most established stream-processing frameworks out there and a central part of the real-time data infrastructure at companies like DoorDash, LinkedIn, Netflix, and Uber. The Flink backend is also the first streaming backend Ibis supports. Follow along as we define and execute a simple streaming job using Ibis!\n\n## Installation prerequisites\n\n* **Docker Compose:** This tutorial uses Docker Compose to manage an Apache Kafka environment (including sample data generation) and a Flink cluster (for [remote execution](#remote-execution)). You can [download and install Docker Compose from the official website](https://docs.docker.com/compose/install/).\n* **JDK 11:** Flink requires Java 11. If you don't already have JDK 11 installed, you can [get the appropriate Eclipse Temurin release](https://adoptium.net/temurin/releases/?package=jdk&version=11).\n* **Python:** To follow along, you need Python 3.9 or 3.10.\n\n## Installing the Flink backend for Ibis\n\nWe use a Python client to explore data in Kafka topics. You can install it, alongside the Flink backend for Ibis, with `pip`, `conda`, `mamba`, or `pixi`:\n\n::: {.panel-tabset}\n\n## Using `pip`\n\n```bash\npip install ibis-framework apache-flink kafka-python\n```\n\n## Using `conda`\n\n```bash\nconda install -c conda-forge ibis-flink kafka-python\n```\n\n## Using `mamba`\n\n```bash\nmamba install -c conda-forge ibis-flink kafka-python\n```\n\n## Using `pixi`\n\n```bash\npixi add ibis-flink kafka-python\n```\n\n:::\n\n## Spinning up the services using Docker Compose\n\nThe [claypotai/ibis-flink-example GitHub repository](https://github.com/claypotai/ibis-flink-example) includes the relevant Docker Compose configuration for this tutorial. 
Clone the repository, and run `docker compose up` from the cloned directory to create Kafka topics, generate sample data, and launch a Flink cluster.\n\n```bash\ngit clone https://github.com/claypotai/ibis-flink-example.git\ncd ibis-flink-example\ndocker compose up\n```\n\n::: {.callout-tip}\nIf you don't intend to try [remote execution](#remote-execution), you can start only the Kafka-related services with `docker compose up kafka init-kafka data-generator`.\n:::\n\nAfter a few seconds, you should see messages indicating your Kafka environment is ready:\n\n```bash\nibis-flink-example-init-kafka-1 | Successfully created the following topics:\nibis-flink-example-init-kafka-1 | payment_msg\nibis-flink-example-init-kafka-1 | sink\nibis-flink-example-init-kafka-1 exited with code 0\nibis-flink-example-data-generator-1 | Connected to Kafka\nibis-flink-example-data-generator-1 | Producing 20000 records to Kafka topic payment_msg\n```\n\nThe `payment_msg` Kafka topic contains messages in the following format:\n\n```json\n{\n \"createTime\": \"2023-09-20 22:19:02.224\",\n \"orderId\": 1695248388,\n \"payAmount\": 88694.71922270155,\n \"payPlatform\": 0,\n \"provinceId\": 6\n}\n```\n\nIn a separate terminal, we can explore what these messages look like:\n\n::: {.cell execution_count=1}\n``` {.python .cell-code}\nfrom kafka import KafkaConsumer\n\nconsumer = KafkaConsumer(\"payment_msg\")\nfor _, msg in zip(range(3), consumer):\n print(msg)\n```\n\n::: {.cell-output .cell-output-stdout}\n```\nConsumerRecord(topic='payment_msg', partition=0, offset=51, timestamp=1704506339153, timestamp_type=0, key=None, value=b'{\"createTime\": \"2024-01-06 01:58:59.152\", \"orderId\": 1704506365, \"payAmount\": 36749.41282986828, \"payPlatform\": 0, \"provinceId\": 1}', headers=[], checksum=None, serialized_key_size=-1, serialized_value_size=131, serialized_header_size=-1)\nConsumerRecord(topic='payment_msg', partition=0, offset=52, timestamp=1704506339655, timestamp_type=0, key=None, value=b'{\"createTime\": \"2024-01-06 01:58:59.654\", \"orderId\": 1704506366, \"payAmount\": 34620.98951893636, \"payPlatform\": 0, \"provinceId\": 4}', headers=[], checksum=None, serialized_key_size=-1, serialized_value_size=131, serialized_header_size=-1)\nConsumerRecord(topic='payment_msg', partition=0, offset=53, timestamp=1704506340157, timestamp_type=0, key=None, value=b'{\"createTime\": \"2024-01-06 01:59:00.156\", \"orderId\": 1704506367, \"payAmount\": 28892.583606570523, \"payPlatform\": 0, \"provinceId\": 3}', headers=[], checksum=None, serialized_key_size=-1, serialized_value_size=132, serialized_header_size=-1)\n```\n:::\n:::\n\n\n## Running the tutorial\n\nThis tutorial uses the Flink backend for Ibis to process the aforementioned payment messages. You can choose to either [run it locally](#local-execution) or [submit a job to an already-running Flink cluster](#remote-execution).\n\n### Local execution\n\nThe simpler option is to run the example using the Flink mini cluster.\n\n#### Create a table environment\n\nThe [table environment](https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/python/table/table_environment/) serves as the main entry point for interacting with the Flink runtime. 
The `flink` backend does not create `TableEnvironment` objects; you must create a `TableEnvironment` and pass that to [`ibis.flink.connect`](../../backends/flink#ibis.flink.connect).\n\n::: {.cell execution_count=2}\n``` {.python .cell-code}\nimport ibis\nfrom pyflink.table import EnvironmentSettings, TableEnvironment\n\nenv_settings = EnvironmentSettings.in_streaming_mode()\ntable_env = TableEnvironment.create(env_settings)\n# write all the data to one file\ntable_env.get_config().set(\"parallelism.default\", \"1\")\n\nconnection = ibis.flink.connect(table_env)\n```\n:::\n\n\nFlink’s streaming connectors aren't part of the binary distribution. Link the [Kafka connector](https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/table/kafka/) for cluster execution by adding the JAR file from the cloned repository.\n\n::: {.cell execution_count=3}\n``` {.python .cell-code}\nconnection._exec_sql(\"ADD JAR 'flink-sql-connector-kafka-3.0.2-1.18.jar'\")\n```\n:::\n\n\n#### Create the source and sink tables\n\nUse [`create_table`](../../backends/flink#ibis.backends.flink.Backend.create_table) to register tables. Notice the new top-level `ibis.watermark` API for [specifying a watermark strategy](https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/concepts/time/#event-time-and-watermarks).\n\n::: {.cell execution_count=4}\n``` {.python .cell-code}\nimport ibis.expr.datatypes as dt\nimport ibis.expr.schema as sch\n\n# create source Table\nsource_schema = sch.Schema(\n {\n \"createTime\": dt.timestamp(scale=3),\n \"orderId\": dt.int64,\n \"payAmount\": dt.float64,\n \"payPlatform\": dt.int32,\n \"provinceId\": dt.int32,\n }\n)\n\nsource_configs = {\n \"connector\": \"kafka\",\n \"topic\": \"payment_msg\",\n \"properties.bootstrap.servers\": \"localhost:9092\",\n \"properties.group.id\": \"test_3\",\n \"scan.startup.mode\": \"earliest-offset\",\n \"format\": \"json\",\n}\n\nt = connection.create_table(\n \"payment_msg\",\n schema=source_schema,\n tbl_properties=source_configs,\n watermark=ibis.watermark(\n time_col=\"createTime\", allowed_delay=ibis.interval(seconds=15)\n ),\n)\n\n# create sink Table\nsink_schema = sch.Schema(\n {\n \"province_id\": dt.int32,\n \"pay_amount\": dt.float64,\n }\n)\n\nsink_configs = {\n \"connector\": \"kafka\",\n \"topic\": \"sink\",\n \"properties.bootstrap.servers\": \"localhost:9092\",\n \"format\": \"json\",\n}\n\nconnection.create_table(\n \"total_amount_by_province_id\", schema=sink_schema, tbl_properties=sink_configs\n)\n```\n\n::: {.cell-output .cell-output-display execution_count=4}\n```{=html}\n
DatabaseTable: total_amount_by_province_id\n province_id int32\n pay_amount float64\n
\n```\n:::\n:::\n\n\n#### Perform calculations\n\nCompute the total pay amount per province in the past 10 seconds (as of each message, for the province in the incoming message).\n\n::: {.cell execution_count=5}\n``` {.python .cell-code}\nagged = t[\n t.provinceId.name(\"province_id\"),\n t.payAmount.sum()\n .over(\n range=(-ibis.interval(seconds=10), 0),\n group_by=t.provinceId,\n order_by=t.createTime,\n )\n .name(\"pay_amount\"),\n]\n```\n:::\n\n\nFinally, emit the query result to the sink table.\n\n::: {.cell execution_count=6}\n``` {.python .cell-code}\nconnection.insert(\"total_amount_by_province_id\", agged)\n```\n\n::: {.cell-output .cell-output-display execution_count=6}\n```\n\n```\n:::\n:::\n\n\n### Remote execution\n\nYou can also submit the example to the [remote cluster started using Docker Compose](#spinning-up-the-services-using-docker-compose). The `window_aggregation.py` file in the cloned repository contains the [same steps that we performed for local execution](#local-execution). We will [use the method described in the official Flink documentation](https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/deployment/cli/#submitting-pyflink-jobs).\n\n::: {.callout-tip}\nYou can find the `./bin/flink` executable with the following command:\n\n```bash\npython -c'from pathlib import Path; import pyflink; print(Path(pyflink.__spec__.origin).parent / \"bin\" / \"flink\")'\n```\n:::\n\nMy full command looks like this:\n\n```bash\n/opt/miniconda3/envs/ibis-dev/lib/python3.10/site-packages/pyflink/bin/flink run --jobmanager localhost:8081 --python window_aggregation.py\n```\n\nThe command will exit after displaying a submission message:\n\n```bash\nJob has been submitted with JobID b816faaf5ef9126ea5b9b6a37012cf56\n```\n\n## Viewing the results\n\nSimilar to how we viewed messages in the `payment_msg` topic, we can print results from the `sink` topic:\n\n::: {.cell execution_count=7}\n``` {.python .cell-code}\nconsumer = KafkaConsumer(\"sink\")\nfor _, msg in zip(range(10), consumer):\n print(msg)\n```\n\n::: {.cell-output .cell-output-stdout}\n```\nConsumerRecord(topic='sink', partition=0, offset=0, timestamp=1704506345413, timestamp_type=0, key=None, value=b'{\"province_id\":6,\"pay_amount\":67004.59342078592}', headers=[], checksum=None, serialized_key_size=-1, serialized_value_size=48, serialized_header_size=-1)\nConsumerRecord(topic='sink', partition=0, offset=1, timestamp=1704506345417, timestamp_type=0, key=None, value=b'{\"province_id\":1,\"pay_amount\":79011.22494801365}', headers=[], checksum=None, serialized_key_size=-1, serialized_value_size=48, serialized_header_size=-1)\nConsumerRecord(topic='sink', partition=0, offset=2, timestamp=1704506345418, timestamp_type=0, key=None, value=b'{\"province_id\":2,\"pay_amount\":86842.12119554746}', headers=[], checksum=None, serialized_key_size=-1, serialized_value_size=48, serialized_header_size=-1)\nConsumerRecord(topic='sink', partition=0, offset=3, timestamp=1704506345418, timestamp_type=0, key=None, value=b'{\"province_id\":0,\"pay_amount\":35652.376597008086}', headers=[], checksum=None, serialized_key_size=-1, serialized_value_size=49, serialized_header_size=-1)\nConsumerRecord(topic='sink', partition=0, offset=4, timestamp=1704506345418, timestamp_type=0, key=None, value=b'{\"province_id\":4,\"pay_amount\":87971.48924377888}', headers=[], checksum=None, serialized_key_size=-1, serialized_value_size=48, serialized_header_size=-1)\nConsumerRecord(topic='sink', partition=0, offset=5, timestamp=1704506345419, 
timestamp_type=0, key=None, value=b'{\"province_id\":5,\"pay_amount\":41337.648026475945}', headers=[], checksum=None, serialized_key_size=-1, serialized_value_size=49, serialized_header_size=-1)\nConsumerRecord(topic='sink', partition=0, offset=6, timestamp=1704506345419, timestamp_type=0, key=None, value=b'{\"province_id\":4,\"pay_amount\":113986.36150245408}', headers=[], checksum=None, serialized_key_size=-1, serialized_value_size=49, serialized_header_size=-1)\nConsumerRecord(topic='sink', partition=0, offset=7, timestamp=1704506345419, timestamp_type=0, key=None, value=b'{\"province_id\":5,\"pay_amount\":125331.16526995043}', headers=[], checksum=None, serialized_key_size=-1, serialized_value_size=49, serialized_header_size=-1)\nConsumerRecord(topic='sink', partition=0, offset=8, timestamp=1704506345419, timestamp_type=0, key=None, value=b'{\"province_id\":5,\"pay_amount\":134360.06231724462}', headers=[], checksum=None, serialized_key_size=-1, serialized_value_size=49, serialized_header_size=-1)\nConsumerRecord(topic='sink', partition=0, offset=9, timestamp=1704506345419, timestamp_type=0, key=None, value=b'{\"province_id\":6,\"pay_amount\":123064.34036621553}', headers=[], checksum=None, serialized_key_size=-1, serialized_value_size=49, serialized_header_size=-1)\n```\n:::\n:::\n\n\nVoilà! You've run your first streaming application using Ibis.\n\n## Shutting down the Compose environment\n\nPress Ctrl+C to stop the Docker Compose containers. Once stopped, run `docker compose down` to remove the services created for this tutorial.\n\n",
+ "supporting": [
+ "index_files"
+ ],
+ "filters": [],
+ "includes": {
+ "include-in-header": [
+ "\n\n\n"
+ ]
+ }
+ }
+}
\ No newline at end of file
diff --git a/docs/posts/flink-quickstart/flink-sql-connector-kafka-3.0.2-1.18.jar b/docs/posts/flink-quickstart/flink-sql-connector-kafka-3.0.2-1.18.jar
new file mode 100644
index 0000000000000..97ba07a62fed5
Binary files /dev/null and b/docs/posts/flink-quickstart/flink-sql-connector-kafka-3.0.2-1.18.jar differ
diff --git a/docs/posts/flink-quickstart/index.qmd b/docs/posts/flink-quickstart/index.qmd
new file mode 100644
index 0000000000000..dfc38fe239dae
--- /dev/null
+++ b/docs/posts/flink-quickstart/index.qmd
@@ -0,0 +1,250 @@
+---
+title: "Ibis goes real-time! Introducing the new Flink backend for Ibis"
+author: "Deepyaman Datta"
+date: "2024-01-15"
+categories:
+ - blog
+ - flink
+ - stream processing
+---
+
+## Introduction
+
+Ibis 8.0.0 marks the official release of the Apache Flink backend for Ibis. Flink is one of the most established stream-processing frameworks out there and a central part of the real-time data infrastructure at companies like DoorDash, LinkedIn, Netflix, and Uber. The Flink backend is also the first streaming backend Ibis supports. Follow along as we define and execute a simple streaming job using Ibis!
+
+## Installation prerequisites
+
+* **Docker Compose:** This tutorial uses Docker Compose to manage an Apache Kafka environment (including sample data generation) and a Flink cluster (for [remote execution](#remote-execution)). You can [download and install Docker Compose from the official website](https://docs.docker.com/compose/install/).
+* **JDK 11:** Flink requires Java 11. If you don't already have JDK 11 installed, you can [get the appropriate Eclipse Temurin release](https://adoptium.net/temurin/releases/?package=jdk&version=11).
+* **Python:** To follow along, you need Python 3.9 or 3.10. (A quick way to verify all three prerequisites is sketched below.)
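+
+The following minimal sanity check, run from Python, can confirm your environment before you continue; it assumes `java` and `docker` are already available on your `PATH`:
+
+```python
+import platform
+import subprocess
+
+# Python version should be 3.9.x or 3.10.x
+print(platform.python_version())
+
+# Java version should report a JDK 11 build (Temurin or another distribution)
+subprocess.run(["java", "-version"], check=True)
+
+# Docker Compose should be installed and on the PATH
+subprocess.run(["docker", "compose", "version"], check=True)
+```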
+
+## Installing the Flink backend for Ibis
+
+We use a Python client to explore data in Kafka topics. You can install it, alongside the Flink backend for Ibis, with `pip`, `conda`, `mamba`, or `pixi`:
+
+::: {.panel-tabset}
+
+## Using `pip`
+
+```bash
+pip install ibis-framework apache-flink kafka-python
+```
+
+## Using `conda`
+
+```bash
+conda install -c conda-forge ibis-flink kafka-python
+```
+
+## Using `mamba`
+
+```bash
+mamba install -c conda-forge ibis-flink kafka-python
+```
+
+## Using `pixi`
+
+```bash
+pixi add ibis-flink kafka-python
+```
+
+:::
+
+## Spinning up the services using Docker Compose
+
+The [claypotai/ibis-flink-example GitHub repository](https://github.com/claypotai/ibis-flink-example) includes the relevant Docker Compose configuration for this tutorial. Clone the repository, and run `docker compose up` from the cloned directory to create Kafka topics, generate sample data, and launch a Flink cluster.
+
+```bash
+git clone https://github.com/claypotai/ibis-flink-example.git
+cd ibis-flink-example
+docker compose up
+```
+
+::: {.callout-tip}
+If you don't intend to try [remote execution](#remote-execution), you can start only the Kafka-related services with `docker compose up kafka init-kafka data-generator`.
+:::
+
+After a few seconds, you should see messages indicating your Kafka environment is ready:
+
+```bash
+ibis-flink-example-init-kafka-1 | Successfully created the following topics:
+ibis-flink-example-init-kafka-1 | payment_msg
+ibis-flink-example-init-kafka-1 | sink
+ibis-flink-example-init-kafka-1 exited with code 0
+ibis-flink-example-data-generator-1 | Connected to Kafka
+ibis-flink-example-data-generator-1 | Producing 20000 records to Kafka topic payment_msg
+```
+
+The `payment_msg` Kafka topic contains messages in the following format:
+
+```json
+{
+ "createTime": "2023-09-20 22:19:02.224",
+ "orderId": 1695248388,
+ "payAmount": 88694.71922270155,
+ "payPlatform": 0,
+ "provinceId": 6
+}
+```
+
+In a separate terminal, we can explore what these messages look like:
+
+```{python}
+from kafka import KafkaConsumer
+
+consumer = KafkaConsumer("payment_msg")
+for _, msg in zip(range(3), consumer):
+ print(msg)
+```
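+
+Each record's `value` is a JSON-encoded byte string. If you want to work with the fields directly, you can decode the last consumed message into a regular Python dictionary (a small illustrative snippet; it isn't needed for the rest of the tutorial):
+
+```python
+import json
+
+# decode the raw bytes of the last consumed message into a dict
+payload = json.loads(msg.value)
+print(payload["provinceId"], payload["payAmount"])
+```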
+
+## Running the tutorial
+
+This tutorial uses the Flink backend for Ibis to process the aforementioned payment messages. You can choose to either [run it locally](#local-execution) or [submit a job to an already-running Flink cluster](#remote-execution).
+
+### Local execution
+
+The simpler option is to run the example using the Flink mini cluster.
+
+#### Create a table environment
+
+The [table environment](https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/python/table/table_environment/) serves as the main entry point for interacting with the Flink runtime. The `flink` backend does not create `TableEnvironment` objects; you must create a `TableEnvironment` and pass that to [`ibis.flink.connect`](../../backends/flink.qmd#ibis.flink.connect).
+
+```{python}
+import ibis
+from pyflink.table import EnvironmentSettings, TableEnvironment
+
+env_settings = EnvironmentSettings.in_streaming_mode()
+table_env = TableEnvironment.create(env_settings)
+# write all the data to one file
+table_env.get_config().set("parallelism.default", "1")
+
+connection = ibis.flink.connect(table_env)
+```
+
+Flink’s streaming connectors aren't part of its binary distribution, so the [Kafka connector](https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/table/kafka/) has to be linked into the job. Do this by adding the connector JAR file included in the cloned repository.
+
+```{python}
+#| output: false
+
+connection._exec_sql("ADD JAR 'flink-sql-connector-kafka-3.0.2-1.18.jar'")
+```
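+
+If you want to double-check that the connector was picked up, Flink SQL's `SHOW JARS` statement lists the JARs registered with the session. This is an optional check using the `table_env` object created earlier; the output formatting may differ across Flink versions:
+
+```python
+# list the JARs registered in this table environment; the Kafka connector should appear
+table_env.execute_sql("SHOW JARS").print()
+```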
+
+#### Create the source and sink tables
+
+Use [`create_table`](../../backends/flink.qmd#ibis.backends.flink.Backend.create_table) to register tables. Notice the new top-level `ibis.watermark` API for [specifying a watermark strategy](https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/concepts/time/#event-time-and-watermarks).
+
+```{python}
+import ibis.expr.datatypes as dt
+import ibis.expr.schema as sch
+
+# create source Table
+source_schema = sch.Schema(
+ {
+ "createTime": dt.timestamp(scale=3),
+ "orderId": dt.int64,
+ "payAmount": dt.float64,
+ "payPlatform": dt.int32,
+ "provinceId": dt.int32,
+ }
+)
+
+source_configs = {
+ "connector": "kafka",
+ "topic": "payment_msg",
+ "properties.bootstrap.servers": "localhost:9092",
+ "properties.group.id": "test_3",
+ "scan.startup.mode": "earliest-offset",
+ "format": "json",
+}
+
+t = connection.create_table(
+ "payment_msg",
+ schema=source_schema,
+ tbl_properties=source_configs,
+ watermark=ibis.watermark(
+ time_col="createTime", allowed_delay=ibis.interval(seconds=15)
+ ),
+)
+
+# create sink Table
+sink_schema = sch.Schema(
+ {
+ "province_id": dt.int32,
+ "pay_amount": dt.float64,
+ }
+)
+
+sink_configs = {
+ "connector": "kafka",
+ "topic": "sink",
+ "properties.bootstrap.servers": "localhost:9092",
+ "format": "json",
+}
+
+connection.create_table(
+ "total_amount_by_province_id", schema=sink_schema, tbl_properties=sink_configs
+)
+```
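+
+At this point both the source and sink tables are registered with the connection. If you want to confirm, `list_tables` shows what the backend currently knows about (an optional check):
+
+```python
+# both payment_msg and total_amount_by_province_id should show up here
+print(connection.list_tables())
+```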
+
+#### Perform calculations
+
+For each incoming message, compute the total pay amount over the preceding 10 seconds for the province in that message.
+
+```{python}
+agged = t[
+ t.provinceId.name("province_id"),
+ t.payAmount.sum()
+ .over(
+ range=(-ibis.interval(seconds=10), 0),
+ group_by=t.provinceId,
+ order_by=t.createTime,
+ )
+ .name("pay_amount"),
+]
+```
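+
+Nothing has executed yet; `agged` is just an Ibis expression. If you're curious what Ibis will hand to Flink, you can compile the expression to Flink SQL (an optional peek; this assumes the backend exposes `compile`, as most Ibis SQL backends do, and the exact output depends on your Ibis version):
+
+```python
+# compile the Ibis expression into the Flink SQL that will be executed
+print(connection.compile(agged))
+```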
+
+Finally, emit the query result to the sink table.
+
+```{python}
+connection.insert("total_amount_by_province_id", agged)
+```
+
+### Remote execution
+
+You can also submit the example to the [remote cluster started using Docker Compose](#spinning-up-the-services-using-docker-compose). The `window_aggregation.py` file in the cloned repository contains the [same steps that we performed for local execution](#local-execution). We will [use the method described in the official Flink documentation](https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/deployment/cli/#submitting-pyflink-jobs).
+
+::: {.callout-tip}
+You can find the `./bin/flink` executable with the following command:
+
+```bash
+python -c 'from pathlib import Path; import pyflink; print(Path(pyflink.__spec__.origin).parent / "bin" / "flink")'
+```
+:::
+
+My full command looks like this:
+
+```bash
+/opt/miniconda3/envs/ibis-dev/lib/python3.10/site-packages/pyflink/bin/flink run --jobmanager localhost:8081 --python window_aggregation.py
+```
+
+The command will exit after displaying a submission message:
+
+```bash
+Job has been submitted with JobID b816faaf5ef9126ea5b9b6a37012cf56
+```
+
+## Viewing the results
+
+Similar to how we viewed messages in the `payment_msg` topic, we can print results from the `sink` topic:
+
+```{python}
+consumer = KafkaConsumer("sink")
+for _, msg in zip(range(10), consumer):
+ print(msg)
+```
+
+Voilà! You've run your first streaming application using Ibis.
+
+## Shutting down the Compose environment
+
+Press Ctrl+C in the terminal where `docker compose up` is running to stop the containers. Once they have stopped, run `docker compose down` to remove the services created for this tutorial.
diff --git a/poetry.lock b/poetry.lock
index 4f6115b8e35b2..69e8122b18ad1 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -2899,6 +2899,20 @@ files = [
{file = "jupyterlab_widgets-3.0.9.tar.gz", hash = "sha256:6005a4e974c7beee84060fdfba341a3218495046de8ae3ec64888e5fe19fdb4c"},
]
+[[package]]
+name = "kafka-python"
+version = "2.0.2"
+description = "Pure Python client for Apache Kafka"
+optional = false
+python-versions = "*"
+files = [
+ {file = "kafka-python-2.0.2.tar.gz", hash = "sha256:04dfe7fea2b63726cd6f3e79a2d86e709d608d74406638c5da33a01d45a9d7e3"},
+ {file = "kafka_python-2.0.2-py2.py3-none-any.whl", hash = "sha256:2d92418c7cb1c298fa6c7f0fb3519b520d0d7526ac6cb7ae2a4fc65a51a94b6e"},
+]
+
+[package.extras]
+crc32c = ["crc32c"]
+
[[package]]
name = "keyring"
version = "24.3.0"
@@ -7368,4 +7382,4 @@ visualization = ["graphviz"]
[metadata]
lock-version = "2.0"
python-versions = "^3.9"
-content-hash = "5c87a1dfd599f22df33573ccef7cca9649d62834df6c17cf55f165b44e8f3cfa"
+content-hash = "3ce086842d366df9d579d2aed9be6b59ee719b50a3b492886cb25d6773eb1df9"
diff --git a/pyproject.toml b/pyproject.toml
index d719be6723e9d..c7da456d30eef 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -146,6 +146,7 @@ scikit-learn = { version = ">=1.3,<2", python = ">=3.10,<3.13" }
seaborn = { version = ">=0.12.2,<1", python = ">=3.10,<3.13" }
leafmap = { version = ">=0.29.6,<1", python = ">=3.10,<3.13" }
lonboard = { version = ">=0.5.0,<1", python = ">=3.10,<3.13" }
+kafka-python = "^2.0.2"
[tool.poetry.extras]
# generate the `all` extra using nix run '.#gen-all-extras'