How does CockroachDB deal with the meta data locality in a global cluster #67109
cindyzqtnew started this conversation in General
Replies: 1 comment 1 reply
-
What metadata are you talking about here? System table data? Yes, system table data is not currently partitioned in any way and will incur cross-region latency. But this latency shouldn't significantly affect user operations beyond things like schema changes or settings changes, since metadata is cached on each node. Does that help?
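For what it's worth, replication of the system ranges (including the meta ranges) can be inspected and tuned through zone configurations. A minimal sketch, assuming nodes were started with `--locality=region=...` flags; the region name is a placeholder:

```sql
-- Inspect the current zone configuration for the meta ranges.
SHOW ZONE CONFIGURATION FOR RANGE meta;

-- Require some meta-range replicas near the primary write region
-- ("us-east1" is a hypothetical region name).
ALTER RANGE meta CONFIGURE ZONE USING
  num_replicas = 5,
  constraints = '{"+region=us-east1": 2}';
```

This only controls replica placement; the per-node caching described above is what keeps metadata lookups off the network for most user operations.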
1 reply
-
Beginning with its early releases, I can see that CockroachDB has been striving to store data in a global cluster that can survive regional failures. But I have a question about the metadata.

From the point of view of many application teams: they want to set up the database service in multiple regions because they run application services in multiple regions, so each app service can access the database in its local region with low latency. However, they also want to write data once (to one of the regions) and have it replicated automatically to the other regions.

In CockroachDB, I am thinking of setting up a global cluster and creating a table configured with 7 replicas (4 in the primary write region, 2 in one of the remaining regions, and 1 in the last). That way, users write data to the primary region only once and quickly get responses from more than half of the followers (since they are in the same region, with low latency). With follower reads enabled, read requests from all regions get responses quickly.

But I have a question about the metadata. How do I configure the locality and number of replicas for the metadata? Nodes in all regions need to access the metadata, and since there is no dedicated connection between regions, inter-region latency is relatively high.
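The replica layout described above could be expressed as a zone configuration along these lines; the table and region names are hypothetical, and the statement assumes CockroachDB's `CONFIGURE ZONE` syntax:

```sql
-- 7 replicas: 4 in the primary write region, 2 and 1 in the others.
-- "orders" and the region names are placeholders.
ALTER TABLE orders CONFIGURE ZONE USING
  num_replicas = 7,
  constraints = '{"+region=us-east1": 4, "+region=us-west1": 2, "+region=europe-west1": 1}',
  lease_preferences = '[[+region=us-east1]]';
```

One caveat with this layout: with 4 of 7 replicas in the primary region, writes can reach quorum entirely within that region (low latency), but losing that region also loses quorum, so it trades away the ability to survive a primary-region failure.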