Add context storage benchmarking #144

Merged
119 commits merged on Sep 28, 2023

Changes from 9 commits

Commits (119)
a870dd2
add context storage benchmarking
RLKRo Jun 7, 2023
afcdce9
add all dff.utils modules to doc
RLKRo Jun 7, 2023
365e996
fix doc
RLKRo Jun 7, 2023
4d6a23f
add support for benchmarking multiple context storages at a time
RLKRo Jun 7, 2023
5815511
add type annotations; option to pass multiple context storages; expor…
RLKRo Jun 8, 2023
17b11fc
add option to get results as a dataframe
RLKRo Jun 8, 2023
2c83245
format
RLKRo Jun 8, 2023
e8564b1
add tutorial for benchmark
RLKRo Jun 9, 2023
a07588c
update benchmark dependencies
RLKRo Jun 9, 2023
258be60
update benchmark utils
RLKRo Jun 19, 2023
b970e83
add benchmark_dbs and benchmark_streamlit
RLKRo Jun 20, 2023
3c42227
update dependencies
RLKRo Jun 20, 2023
15795df
use python3.8 compatible typing
RLKRo Jun 20, 2023
13cc2d7
return ydb & reorder benchmark sets
RLKRo Jun 20, 2023
25c5b7e
add more benchmark cases
RLKRo Jun 21, 2023
2df1054
improve diff viewing
RLKRo Jun 21, 2023
cd0eabf
reduce dialog len for extreme cases
RLKRo Jun 21, 2023
185d950
change benchmark format
RLKRo Jun 21, 2023
c16cd28
change benchmark format: generic factory
RLKRo Jun 21, 2023
5de5a83
bugfix: repeated format update
RLKRo Jun 21, 2023
8fc62a8
move generic benchmark tools to utils
RLKRo Jun 21, 2023
1f450e5
add average read+update column
RLKRo Jun 21, 2023
4404ef3
add mass compare tab
RLKRo Jun 21, 2023
5fb2e69
update extreme cases params
RLKRo Jun 21, 2023
4c51473
set step_dialog_len to 1 by default
RLKRo Jun 22, 2023
0a35013
print exception message during benchmark
RLKRo Jun 23, 2023
b6a38a1
store read times under supposed dialog len
RLKRo Jun 23, 2023
eda7ccc
set streamlit version in dependencies
RLKRo Jun 26, 2023
60bf152
add exist_ok flag for saving to file
RLKRo Jun 26, 2023
ec19a30
move average results calculations from streamlit to benchmark utils
RLKRo Jun 26, 2023
8abc60f
rename lengths to dimensions
RLKRo Jun 27, 2023
c65ca37
remove context checking comments
RLKRo Jun 27, 2023
fb999e3
move partial benchmarking logic to partial benchmark file
RLKRo Jun 27, 2023
e413025
add partial file saving
RLKRo Jun 27, 2023
827861b
update report function
RLKRo Jun 27, 2023
a25ee56
add BenchmarkConfig class to avoid repetition of parameters
RLKRo Jun 27, 2023
0424b43
revert add partial file saving
RLKRo Jun 28, 2023
e6b88b7
fix benchmark name
RLKRo Jun 28, 2023
f68d71c
update streamlit for new benchmark_config
RLKRo Jun 28, 2023
0fe49a4
update benchmark_new_format.py
RLKRo Jun 28, 2023
9e8f310
fix bug with delisting benchmarks
RLKRo Jun 28, 2023
7494772
not include zero-point in graphs
RLKRo Jun 28, 2023
99a4e36
move get_context_updater to BenchmarkConfig
RLKRo Jun 29, 2023
f787577
add benchmark_dir variable for benchmark_dbs.py
RLKRo Jun 29, 2023
1dbb7f9
add get_context method to BenchmarkConfig
RLKRo Jun 29, 2023
8e2ab54
change update benchmarking logic
RLKRo Jun 29, 2023
8741334
return empty dicts as update_times if context_updater is None
RLKRo Jul 2, 2023
5a4eb75
remove write_times from average_results
RLKRo Jul 2, 2023
a4e71c7
add support for no update times
RLKRo Jul 2, 2023
d7cb290
group sizes stat
RLKRo Jul 2, 2023
d3d66c0
add benchmark tests
RLKRo Jul 3, 2023
9ee9e45
Merge branch 'dev' into feat/db-benchmark
RLKRo Jul 3, 2023
e995195
clear context storage at the end of each context_num cycle
RLKRo Jul 3, 2023
a92ca57
save benchmarks in a list
RLKRo Jul 4, 2023
370d138
add json schema for benchmark results
RLKRo Jul 5, 2023
435af6e
move streamlit to utils
RLKRo Jul 5, 2023
40a053c
refactor benchmark streamlit
RLKRo Jul 5, 2023
f8ecda9
remove partial-dev comparison tools
RLKRo Jul 5, 2023
32f5dd8
add doc & update tutorial
RLKRo Jul 6, 2023
d9a161d
move benchmark configs to utils
RLKRo Jul 6, 2023
f52931a
remove partial from benchmark_dbs.py
RLKRo Jul 6, 2023
41ea85c
remove format updater for benchmark files
RLKRo Jul 6, 2023
465f41b
fix mass compare bug
RLKRo Jul 6, 2023
958db10
lint & format
RLKRo Jul 6, 2023
6d55a3f
add utils to backup_files for test_coverage
RLKRo Jul 6, 2023
a9402ee
skip benchmark tests if benchmark not installed
RLKRo Jul 6, 2023
7b5c502
Revert "remove format updater for benchmark files"
RLKRo Jul 6, 2023
5e98c0e
move format updater to utils
RLKRo Jul 6, 2023
a9a7af2
move format updating to a separate function
RLKRo Jul 6, 2023
266e280
remove old format support from format updater
RLKRo Jul 6, 2023
d18025c
use format updater in streamlit
RLKRo Jul 6, 2023
d9cbca5
add ability to upload files to streamlit, add all files from one dire…
RLKRo Jul 6, 2023
65ef6a2
store benchmarks for a specific db in one file
RLKRo Jul 10, 2023
97002c8
change report function, update tutorial
RLKRo Jul 10, 2023
99ee6a2
remove literal include of the files
RLKRo Jul 10, 2023
791be50
format
RLKRo Jul 10, 2023
7c5f7ed
add ability to edit name and description of benchmark sets
RLKRo Jul 11, 2023
2d7432b
add help for displayed metrics
RLKRo Jul 11, 2023
10e6f29
reformat
RLKRo Jul 13, 2023
8f23fd5
preserve file order when delisting benchmarks
RLKRo Jul 18, 2023
5c04f6e
Merge branch 'dev' into feat/db-benchmark
RLKRo Aug 16, 2023
2b2c2bb
remove typing as tp
RLKRo Aug 16, 2023
acb0557
add type annotations
RLKRo Aug 16, 2023
fbb3a5f
move model configuration to kwargs
RLKRo Aug 16, 2023
bdc6277
change misc key type to str
RLKRo Aug 16, 2023
a36ddd3
accept context factory for benchmark instead of context
RLKRo Aug 16, 2023
5d671a0
add type hints for test cases
RLKRo Aug 17, 2023
819dcae
uncomment dbs in benchmark_dbs.py
RLKRo Aug 17, 2023
8e53b2e
add an explanation in tutorial
RLKRo Aug 17, 2023
e50158c
remove benchmark_new_format.py
RLKRo Aug 17, 2023
55fdece
add info messages
RLKRo Aug 17, 2023
62a33d9
randomize strings returned by `get_dict`
RLKRo Aug 17, 2023
c1f6fbe
add comments
RLKRo Aug 17, 2023
837e285
rename benchmark.context_storage to db_benchmark.benchmark
RLKRo Aug 17, 2023
85df6d1
move report function to a separate module
RLKRo Aug 17, 2023
7054c1b
import Path instead of pathlib
RLKRo Aug 17, 2023
9cf1ef0
fix doc
RLKRo Aug 17, 2023
7584902
reformat
RLKRo Aug 17, 2023
aa65bc7
add imports to __init__.py
RLKRo Aug 17, 2023
25eabff
minor report change
RLKRo Aug 17, 2023
e928397
replace deprecated method call
RLKRo Aug 17, 2023
72e991b
rename context vars
RLKRo Aug 17, 2023
87b1820
generalize BenchmarkConfig
RLKRo Aug 18, 2023
2420455
add .dockerignore && add benchmark files to ignore
RLKRo Aug 18, 2023
988f0ac
reformat
RLKRo Aug 18, 2023
4263b7e
move databases inside the benchmark_dir
RLKRo Aug 18, 2023
ebcbc07
fix doc
RLKRo Aug 18, 2023
8968b28
Merge branch 'dev' into feat/db-benchmark
RLKRo Aug 18, 2023
5e604bd
use tutorial directives
RLKRo Aug 18, 2023
0fd1fc6
Merge branch 'dev' into feat/db-benchmark
RLKRo Aug 22, 2023
3d54611
remove ability to add files from filesystem
RLKRo Aug 24, 2023
e422621
delete files from filesystem when sets are deleted via the interface
RLKRo Aug 24, 2023
ec80dc3
link files referenced in the documentation to docs/source
RLKRo Aug 24, 2023
e441da0
add dependency info for streamlit app
RLKRo Aug 24, 2023
512ed66
add streamlit screenshots inside the tutorial
RLKRo Aug 24, 2023
378d87e
Merge branch 'dev' into feat/db-benchmark
RLKRo Sep 11, 2023
bb0e86b
reupload images
RLKRo Sep 11, 2023
8c419b5
add more exception info
RLKRo Sep 11, 2023
093fa52
Merge branch 'dev' into feat/db-benchmark
RLKRo Sep 28, 2023
1 change: 1 addition & 0 deletions README.md
@@ -36,6 +36,7 @@ pip install dff[postgresql] # dependencies for using PostgreSQL
pip install dff[sqlite] # dependencies for using SQLite
pip install dff[ydb] # dependencies for using Yandex Database
pip install dff[telegram] # dependencies for using Telegram
pip install dff[benchmark] # dependencies for benchmarking
pip install dff[full] # full dependencies including all options above
pip install dff[tests] # dependencies for running tests
pip install dff[test_full] # full dependencies for running all tests (all options above)
3 changes: 3 additions & 0 deletions dff/utils/benchmark/__init__.py
@@ -0,0 +1,3 @@
# -*- coding: utf-8 -*-
# flake8: noqa: F401
from dff.utils.benchmark.context_storage import report as context_storage_benchmark_report
311 changes: 311 additions & 0 deletions dff/utils/benchmark/context_storage.py
@@ -0,0 +1,311 @@
"""
Context storage benchmarking
----------------------------
This module contains functions for context storages benchmarking.

Basic usage::


from dff.utils.benchmark.context_storage import report
from dff.context_storages import context_storage_factory

storage = context_storage_factory("postgresql+asyncpg://postgres:pass@localhost:5432/test", table_name="benchmark")

report(storage)

"""
from uuid import uuid4
from time import perf_counter
import typing as tp
Review comment (Collaborator):
Idk if this is a problem, but as far as I know we have used from typing import ... in our codebase.
Same about typing_extensions.

Reply (RLKRo, Member, Author, Aug 16, 2023):
Replaced module import with object imports:
2b2c2bb
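
For illustration, a minimal sketch of what that switch could look like (the exact set of imported names is an assumption, not taken from commit 2b2c2bb):

# before: module import aliased as tp
# import typing as tp

# after: object imports (names shown here are illustrative, matching the usages in this file)
from typing import Dict, List, Literal, Optional, Tuple, Union, overload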


from pympler import asizeof
from tqdm.auto import tqdm

try:
import matplotlib
from matplotlib.backends.backend_pdf import PdfPages
import matplotlib.pyplot as plt
except ImportError:
matplotlib = None

try:
import pandas
except ImportError:
pandas = None

try:
import polars
except ImportError:
polars = None

from dff.context_storages import DBContextStorage
from dff.script import Context, Message


def get_context_size(context: Context) -> int:
"""Return size of a provided context."""
return asizeof.asizeof(context)


def get_context(dialog_len: int, misc_len: int) -> Context:
"""
Return a context with a given number of dialog turns and a given length of misc field.

Misc field is needed in case context storage reads only the most recent requests/responses.

Context size is approximately 1000 * dialog_len + 100 * misc_len bytes if dialog_len and misc_len > 100.
"""
return Context(
labels={i: (f"flow_{i}", f"node_{i}") for i in range(dialog_len)},
requests={i: Message(text=f"request_{i}") for i in range(dialog_len)},
responses={i: Message(text=f"response_{i}") for i in range(dialog_len)},
misc={str(i): i for i in range(misc_len)},
)
Review comment (Collaborator):
Maybe we should make message size random in length to reflect the irregular nature of messages?

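A hedged sketch of that suggestion (illustrative only, not part of the diff; the helper name and length bounds are assumptions):

import random
import string

from dff.script import Context, Message


def get_random_message(min_len: int = 10, max_len: int = 200) -> Message:
    """Build a message with a random-length text payload (illustrative only)."""
    length = random.randint(min_len, max_len)
    return Message(text="".join(random.choices(string.ascii_letters, k=length)))


def get_context_randomized(dialog_len: int, misc_len: int) -> Context:
    """Variant of get_context above with irregular message sizes."""
    return Context(
        labels={i: (f"flow_{i}", f"node_{i}") for i in range(dialog_len)},
        requests={i: get_random_message() for i in range(dialog_len)},
        responses={i: get_random_message() for i in range(dialog_len)},
        misc={str(i): "".join(random.choices(string.ascii_letters, k=100)) for i in range(misc_len)},
    )

Commit 62a33d9 ("randomize strings returned by `get_dict`") later in this PR appears to address the same concern.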


@tp.overload
def time_context_read_write(
context_storage: DBContextStorage,
context: Context,
context_num: int,
as_dataframe: None = None,
) -> tp.Tuple[tp.List[float], tp.List[float]]:
...


@tp.overload
def time_context_read_write(
context_storage: DBContextStorage,
context: Context,
context_num: int,
as_dataframe: tp.Literal["pandas"],
) -> "pandas.DataFrame":
...


@tp.overload
def time_context_read_write(
context_storage: DBContextStorage,
context: Context,
context_num: int,
as_dataframe: tp.Literal["polars"],
) -> "polars.DataFrame":
...


def time_context_read_write(
context_storage: DBContextStorage,
Review comment (Collaborator):
Usually we tend to preserve ContextStorage compatibility with Dict. Shouldn't we do that here as well? E.g. use the Union[Dict, DBContextStorage] type and consider typechecks later?

Reply (RLKRo, Member, Author):
The benchmark requires the clear method from the context storage. Also, I don't see why someone would want to benchmark Dict as a context storage.

I think a much cleaner solution would be to create a DBContextStorage that wraps Dict.

context: Context,
context_num: int,
as_dataframe: tp.Optional[tp.Literal["pandas", "polars"]] = None,
) -> tp.Union[tp.Tuple[tp.List[float], tp.List[float]], "pandas.DataFrame", "polars.DataFrame"]:
"""
Generate `context_num` ids; for each id, write the value of `context` into `context_storage` under that id,
then read the value stored under the same id and compare it to `context`.

Keep track of the time it takes to write and read context to/from the context storage.

This function clears the context storage before and after execution.

:param context_storage: Context storage to benchmark.
:param context: An instance of context which will be repeatedly written into context storage.
:param context_num: A number of times the context will be written and checked.
:param as_dataframe:
Whether the function should return the results as a pandas or a polars DataFrame.
If set to None, the results are returned as two plain lists instead of a DataFrame.
Defaults to None.
:return:
Depends on the `as_dataframe` parameter.
1. By default (None):
Two lists: the first one contains individual write times, the second one contains individual read times.
2. If set to "pandas":
A pandas DataFrame with two columns: "write" and "read" which contain corresponding data series.
3. If set to "polars":
A polars DataFrame with the same columns as in a pandas DataFrame.
:raises RuntimeError: If context written into context storage does not match read context.
"""
context_storage.clear()

write_times: tp.List[float] = []
read_times: tp.List[float] = []
for _ in tqdm(range(context_num), desc=f"Benchmarking context storage:{context_storage.full_path}"):
ctx_id = uuid4()

# write operation benchmark
write_start = perf_counter()
context_storage[ctx_id] = context
write_times.append(perf_counter() - write_start)

# read operation benchmark
read_start = perf_counter()
actual_context = context_storage[ctx_id]
read_times.append(perf_counter() - read_start)

# check returned context
if actual_context != context:
raise RuntimeError(f"True context:\n{context}\nActual context:\n{actual_context}")
Review comment (pseusys, Collaborator, Jun 9, 2023):
How should this succeed if we read, say, only the 3 last requests from the context storage?
We should mind that OR manually set the read/write policy of the context storage to ALL.
But that should be done only after the merge of partial context updates.


context_storage.clear()

if as_dataframe is None:
return write_times, read_times
elif as_dataframe == "pandas":
if pandas is None:
raise RuntimeError("Install `pandas` in order to get benchmark results as a pandas DataFrame.")
return pandas.DataFrame(data={"write": write_times, "read": read_times})
elif as_dataframe == "polars":
if polars is None:
raise RuntimeError("Install `polars` in order to get benchmark results as a polars DataFrame.")
return polars.DataFrame({"write": write_times, "read": read_times})
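
A minimal usage sketch of the function above (illustrative, not from the PR; the shelve:// connection string assumes a locally available shelve backend):

from dff.context_storages import context_storage_factory
from dff.utils.benchmark.context_storage import get_context, time_context_read_write

storage = context_storage_factory("shelve://benchmark.shelve")  # assumed local backend
context = get_context(dialog_len=100, misc_len=0)

# plain lists of per-iteration timings
write_times, read_times = time_context_read_write(storage, context, context_num=10)

# the same run returned as a pandas DataFrame (requires pandas)
df = time_context_read_write(storage, context, context_num=10, as_dataframe="pandas")
print(df.describe())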


def report(
*context_storages: DBContextStorage,
context_num: int = 1000,
dialog_len: int = 10000,
misc_len: int = 0,
pdf: tp.Optional[str] = None,
):
"""
Benchmark context storage(s) and generate a report.

:param context_storages: Context storages to benchmark.
:param context_num: Number of times a single context should be written to/read from context storage.
:param dialog_len:
A number of turns inside a single context. The context will contain simple text requests/responses.
:param misc_len:
Number of items in the misc field.
Use this parameter if context storage only has access to the most recent requests/responses.
:param pdf:
A pdf file name to save report to.
Defaults to None.
If set to None, prints the result to stdout instead of creating a pdf file.
"""
context = get_context(dialog_len, misc_len)
context_size = get_context_size(context)

benchmark_config = (
f"Number of contexts: {context_num}\n"
f"Dialog len: {dialog_len}\n"
f"Misc len: {misc_len}\n"
f"Size of one context: {context_size} ({tqdm.format_sizeof(context_size, divisor=1024)})"
)

print(f"Starting benchmarking with following parameters:\n{benchmark_config}")
Review comment (Collaborator):
Maybe output as a string/string list instead of printing?

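A hedged sketch of that suggestion (a hypothetical refactor, not part of the PR): the report body could be accumulated and returned, leaving printing to the caller.

def build_report_text(benchmark_config: str, benchmarking_results: dict) -> str:
    """Collect the report as a single string instead of printing it (illustrative only)."""
    lines = [f"Starting benchmarking with following parameters:\n{benchmark_config}"]
    for storage_name, result in benchmarking_results.items():
        lines.append(f"{storage_name}: {result}")
    return "\n".join(lines)  # the caller can print() or log the result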

benchmarking_results: tp.Dict[str, tp.Union[tp.Tuple[tp.List[float], tp.List[float]], str]] = {}

for context_storage in context_storages:
try:
write, read = time_context_read_write(context_storage, context, context_num)

benchmarking_results[context_storage.full_path] = write, read
except Exception as e:
benchmarking_results[context_storage.full_path] = getattr(e, "message", repr(e))

# define functions for displaying results
line_separator = "-" * 80

pretty_config = f"{line_separator}\nDB benchmark\n{line_separator}\n{benchmark_config}\n{line_separator}"

def pretty_benchmark_result(storage_name, benchmarking_result) -> str:
result = f"{storage_name}\n{line_separator}\n"
if not isinstance(benchmarking_result, str):
write, read = benchmarking_result
result += (
f"Average write time: {sum(write) / len(write)} s\n"
f"Average read time: {sum(read) / len(read)} s\n{line_separator}"
)
else:
result += f"{benchmarking_result}\n{line_separator}"
return result

def get_scores_and_leaderboard(
sort_by: tp.Literal["Write", "Read"]
) -> tp.Tuple[tp.List[tp.Tuple[str, tp.Optional[float]]], str]:
benchmark_index = 0 if sort_by == "Write" else 1

scores = sorted(
[
(storage_name, sum(result[benchmark_index]) / len(result[benchmark_index]))
for storage_name, result in benchmarking_results.items()
if not isinstance(result, str)
],
key=lambda benchmark: benchmark[1], # sort in ascending order
)
scores += [
(storage_name, None) for storage_name, result in benchmarking_results.items() if isinstance(result, str)
]
leaderboard = (
f"{sort_by} time leaderboard\n{line_separator}\n"
+ "\n".join(
[f"{result}{' s' if result is not None else ''}: {storage_name}" for storage_name, result in scores]
)
+ "\n"
+ line_separator
)

return scores, leaderboard

_, write_leaderboard = get_scores_and_leaderboard("Write")
_, read_leaderboard = get_scores_and_leaderboard("Read")

if pdf is None:
result = pretty_config

for storage_name, benchmarking_result in benchmarking_results.items():
result += f"\n{pretty_benchmark_result(storage_name, benchmarking_result)}"

if len(context_storages) > 1:
result += f"\n{write_leaderboard}\n{read_leaderboard}"

print(result)
else:
if matplotlib is None:
raise RuntimeError("`matplotlib` is required to generate pdf reports.")

figure_size = (11, 8)

def text_page(text, *, x=0.5, y=0.5, size=18, ha="center", family="monospace", **kwargs):
page = plt.figure(figsize=figure_size)
page.clf()
page.text(x, y, text, transform=page.transFigure, size=size, ha=ha, family=family, **kwargs)

def scatter_page(storage_name, write, read):
plt.figure(figsize=figure_size)
plt.scatter(range(len(write)), write, label="write times")
plt.scatter(range(len(read)), read, label="read times")
plt.legend(loc="best")
plt.grid(True)
plt.title(storage_name)

with PdfPages(pdf) as mpl_pdf:
text_page(pretty_config, size=24)
mpl_pdf.savefig()
plt.close()

if len(context_storages) > 1:
text_page(write_leaderboard, x=0.05, size=14, ha="left")
mpl_pdf.savefig()
plt.close()
text_page(read_leaderboard, x=0.05, size=14, ha="left")
mpl_pdf.savefig()
plt.close()

for storage_name, benchmarking_result in benchmarking_results.items():
txt = pretty_benchmark_result(storage_name, benchmarking_result)

if not isinstance(benchmarking_result, str):
write, read = benchmarking_result

text_page(txt)
mpl_pdf.savefig()
plt.close()

scatter_page(storage_name, write, read)
mpl_pdf.savefig()
plt.close()
else:
text_page(txt)
mpl_pdf.savefig()
plt.close()
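
A usage sketch for the report function in this file, showing the multi-storage and pdf options (the connection strings below are placeholders, not taken from the PR):

from dff.context_storages import context_storage_factory
from dff.utils.benchmark.context_storage import report

sqlite_storage = context_storage_factory("sqlite+aiosqlite:///benchmark.sqlite", table_name="benchmark")
postgres_storage = context_storage_factory(
    "postgresql+asyncpg://postgres:pass@localhost:5432/test", table_name="benchmark"
)

# print the comparison and leaderboards to stdout
report(sqlite_storage, postgres_storage, context_num=100, dialog_len=1000, misc_len=0)

# or save a pdf with scatter plots (requires matplotlib)
report(sqlite_storage, postgres_storage, context_num=100, dialog_len=1000, pdf="report.pdf")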
5 changes: 5 additions & 0 deletions dff/utils/turn_caching/singleton_turn_caching.py
@@ -1,3 +1,8 @@
"""
Singleton Turn Caching
----------------------
This module contains functions for caching function results on each dialog turn.
"""
Review comment on lines +1 to +5 (Collaborator):
Seems to be unrelated to the PR.

Reply (RLKRo, Member, Author):
I had to add this due to this change in docs/source/conf.py:

-            ("dff.utils.testing", "Utils"),
+            ("dff.utils", "Utils"),

import functools
from typing import Callable, List, Optional

2 changes: 1 addition & 1 deletion docs/source/conf.py
@@ -179,7 +179,7 @@ def setup(_):
("dff.messengers", "Messenger Interfaces"),
("dff.pipeline", "Pipeline"),
("dff.script", "Script"),
("dff.utils.testing", "Utils"),
("dff.utils", "Utils"),
]
)
pull_release_notes_from_github()
1 change: 1 addition & 0 deletions docs/source/get_started.rst
@@ -27,6 +27,7 @@ The installation process allows the user to choose from different packages based
pip install dff[sqlite] # dependencies for using SQLite
pip install dff[ydb] # dependencies for using Yandex Database
pip install dff[telegram] # dependencies for using Telegram
pip install dff[benchmark] # dependencies for benchmarking
pip install dff[full] # full dependencies including all options above
pip install dff[tests] # dependencies for running tests
pip install dff[test_full] # full dependencies for running all tests (all options above)
8 changes: 8 additions & 0 deletions setup.py
@@ -79,6 +79,11 @@ def merge_req_lists(*req_lists: List[str]) -> List[str]:
"pytelegrambotapi~=4.5.1",
]

benchmark_dependencies = [
"pympler",
"tqdm",
]

full = merge_req_lists(
core,
async_files_dependencies,
@@ -89,6 +94,7 @@ def merge_req_lists(*req_lists: List[str]) -> List[str]:
postgresql_dependencies,
ydb_dependencies,
telegram_dependencies,
benchmark_dependencies,
)

requests_requirements = [
@@ -116,6 +122,7 @@ def merge_req_lists(*req_lists: List[str]) -> List[str]:
"uvicorn~=0.21.1",
"websockets~=11.0.2",
"locust~=2.15",
"matplotlib",
],
requests_requirements,
)
@@ -171,6 +178,7 @@ def merge_req_lists(*req_lists: List[str]) -> List[str]:
"postgresql": postgresql_dependencies, # dependencies for using PostgreSQL
"ydb": ydb_dependencies, # dependencies for using Yandex Database
"telegram": telegram_dependencies, # dependencies for using Telegram
"benchmark": benchmark_dependencies, # dependencies for benchmarking
"full": full, # full dependencies including all options above
"tests": test_requirements, # dependencies for running tests
"test_full": tests_full, # full dependencies for running all tests (all options above)