
Doc review fixes part 2 #155

Merged
merged 10 commits into from
Jul 8, 2024

Conversation

zuiderkwast
Contributor

@zuiderkwast zuiderkwast commented Jul 5, 2024

Fixes #113, fixes #114, fixes #115, fixes #117, fixes #119.

memory-optimization.md, mass-insertion.md, lua-api.md, lfu-cache.md, latency.md

topics/latency.md (outdated; resolved)
Member

@stockholmux stockholmux left a comment


One nit and one suggestion

@@ -75,7 +75,8 @@ or setup.

 We call this kind of latency **intrinsic latency**, and `valkey-cli`
 is able to measure it. This is an example run
-under Linux 3.11.0 running on an entry level server.
+under Linux 3.11.0 running on an entry level server around 2014.
+(This is old, but it illustrates how to measure this.)
Member


I'm fine with this for now, but would it be good to illustrate what this looks like on modern linux/hardware sometime in the future with an issue?

Contributor Author


Yeah I tried on the laptop but I got pretty bad numbers. It needs a real machine. I opened #156.

Another option is to just not mention which hardware and kernel it was running on. The numbers could still make sense today, depending on the hardware, OS and virtualized environment. A machine with lots of other load can very well produce these numbers today. What it actually does is "it will just try to measure the largest time the kernel does not provide CPU time to run to the valkey-cli process itself", and it's on the microsecond scale. (I'd expect the difference to depend more on the scheduler, e.g. whether it's a real-time kernel, than on anything else.)

The comment for the second run further down, "The following is a run on a Linode 4096
instance running Valkey and Apache", we can change to something more generic, like "in a virtualized environment sharing hardware with other loads".
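For reference, the measurement being discussed is the CLI's intrinsic-latency mode described in topics/latency.md; a typical invocation looks like the sketch below (the argument is the test duration in seconds, and the reported numbers depend entirely on the hardware, kernel and load, so no representative output is shown):

```shell
# Busy-loop for 100 seconds and record the largest interval during which
# the kernel did not schedule the valkey-cli process itself.
valkey-cli --intrinsic-latency 100
```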

topics/lua-api.md (outdated; resolved)
Signed-off-by: Viktor Söderqvist <[email protected]>
Member

@stockholmux stockholmux left a comment


lgtm

@zuiderkwast
Contributor Author

Thanks for reviewing!

@zuiderkwast zuiderkwast merged commit ea16d85 into valkey-io:main Jul 8, 2024
2 checks passed
@zuiderkwast zuiderkwast deleted the doc-review-fixes-part-2 branch July 8, 2024 23:09