Update README.md
alexey-milovidov authored Nov 12, 2024
1 parent 5564903 commit b826a27
Showing 1 changed file with 5 additions and 5 deletions.
10 changes: 5 additions & 5 deletions tinybird/README.md
@@ -3,11 +3,11 @@
Benchmarking a database often requires deep expertise and fine-tuning. Here, our goal is merely to test the default experience of a new
user, i.e. someone who does not invest the time to optimize performance.

-Testing is semi-automatized.
+Testing is semi-automated.

-The system as a timeout of 10s, after that it recommends to optimize (rewrite) the query.
+The system has a timeout of 10s; after that, it recommends to optimize (rewrite) the query.

-Load time and data size in the results are set to 0 as Tinybird did not indicate these resources.
+Load time and data size in the results are set to 0, as Tinybird did not indicate these resources.

# Creating an account

@@ -16,15 +16,15 @@ Head to https://www.tinybird.co and create an account.
# Inserting data

Tinybird supports data inserts from various sources. We are going to use S3 to load a Parquet file into Tinybird. Since Tinybird limits the
-file size to 1 GB and the test data set is larger than that, we split it into smaller chunks using ClickHouse:
+file size to 1 GB, and the test data set is larger than that, we split it into smaller chunks using ClickHouse:

```sql
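-- The two empty strings are the S3 access key id and secret access key (left blank here);
-- PARTITION BY rand() % 50 splits the export into 50 Parquet files named via {_partition_id}.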
INSERT INTO FUNCTION s3('https://hitsparquet.s3.eu-west-3.amazonaws.com/data/hits_{_partition_id}.parquet', '', '', 'Parquet')
PARTITION BY rand() % 50
SELECT * FROM hits
```

-Import of files with sizes a little bit less than 1 GB did not always work. We instead used 50 files of around 280 MB each. You will need to
+Importing files with sizes a little bit less than 1 GB did not always work. We instead used 50 files of around 280 MB each. You will need to
use the auto mode to make sure all the files are read.
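
As a quick sanity check, the exported chunks can be read back with ClickHouse's s3 table function to confirm that all 50 files are present and readable. This is a sketch, not part of the original steps; it assumes the bucket and file range from the export command above and anonymous read access.

```sql
-- Sketch: count rows per exported Parquet chunk to verify all 50 files are readable.
-- Assumes the bucket allows anonymous reads; otherwise pass credentials to s3().
SELECT _file, count() AS rows
FROM s3('https://hitsparquet.s3.eu-west-3.amazonaws.com/data/hits_{0..49}.parquet', 'Parquet')
GROUP BY _file
ORDER BY _file
```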

# Querying the data
