Commit a8c6c33

jsoref and madolson authored

Spelling (#267)

### Description
Fixes spelling errors and accessibility issues

### Issues Resolved
N/A

### Check List
- [x] Commits are signed per the DCO using `--signoff`

By submitting this pull request, I confirm that my contribution is made under the terms of the BSD-3-Clause License.

Signed-off-by: Josh Soref <[email protected]>
Signed-off-by: Madelyn Olson <[email protected]>
Co-authored-by: Madelyn Olson <[email protected]>

1 parent c84ce95 commit a8c6c33

20 files changed: 34 additions, 35 deletions
CONTRIBUTING-BLOG-POST.md

Lines changed: 2 additions & 2 deletions

@@ -5,7 +5,7 @@ The contribution model is lightweight, transparent, and uses well-understood ope
 
 ## Step 0: Setup
 
-Before you begin, it’s a good idea to setup an environment to write, test, and contribute your blog post.
+Before you begin, it’s a good idea to set up an environment to write, test, and contribute your blog post.
 First, [fork the website repo](https://github.com/valkey-io/valkey-io.github.io), then clone the website to your local environment.
 The general workflow is to write your post locally in a branch, confirm that it looks the way you want, then contribute your changes as a pull request.

@@ -74,7 +74,7 @@ title= "Using Valkey for mind control experiments"
 # `date` is when your post will be published.
 # For the most part, you can leave this as the day you _started_ the post.
 # The maintainers will update this value before publishing
-# The time is generally irrelvant in how Valkey published, so '01:01:01' is a good placeholder
+# The time is generally irrelevant in how Valkey published, so '01:01:01' is a good placeholder
 date= 2024-07-01 01:01:01
 # 'description' is what is shown as a snippet/summary in various contexts.
 # You can make this the first few lines of the post or (better) a hook for readers.
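Put together, the front matter these hunks are editing looks roughly like this (a sketch assembled from the context lines of the diff; the `description` value is invented for illustration):

```toml
+++
title= "Using Valkey for mind control experiments"
# `date` is when your post will be published.
# For the most part, you can leave this as the day you _started_ the post.
# The maintainers will update this value before publishing
# The time is generally irrelevant in how Valkey published, so '01:01:01' is a good placeholder
date= 2024-07-01 01:01:01
# 'description' is what is shown as a snippet/summary in various contexts.
# You can make this the first few lines of the post or (better) a hook for readers.
description= "A short hook for readers goes here."
+++
```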

content/authors/rdias.md

Lines changed: 1 addition & 1 deletion

@@ -5,5 +5,5 @@ extra:
 github: rjd15372
 ---
 
-Ricardo is a principal software engineer at Percona where he works as contributor to the Valkey project. Ricardo has been working in distributed storage systems for many years, but his interests are not limited to distribtued systems, he also enjoys designing and implementing lock-free data structures, as well as, developing static code analyzers. In his free time, he's a family guy and also manages a roller hockey club.
+Ricardo is a principal software engineer at Percona where he works as contributor to the Valkey project. Ricardo has been working in distributed storage systems for many years, but his interests are not limited to distributed systems, he also enjoys designing and implementing lock-free data structures, as well as, developing static code analyzers. In his free time, he's a family guy and also manages a roller hockey club.
 
content/blog/2024-04-26-modules-101.md

Lines changed: 3 additions & 3 deletions

@@ -47,7 +47,7 @@ int ValkeyModule_OnLoad(ValkeyModuleCtx *ctx, ValkeyModuleString **argv, int arg
 
 Here we are calling `ValkeyModule_OnLoad` C function (required by Valkey) to initialize `module1` using `ValkeyModule_Init`.
 Then we use `ValkeyModule_CreateCommand` to create a Valkey command `hello` which uses C function `hello` and returns `world1` string.
-In future blog posts we will expore these areas at greater depth.
+In future blog posts we will explore these areas at greater depth.
 
 Now we need to update `src/modules/Makefile`
 

@@ -68,7 +68,7 @@ This will compile our module in the `src/modules` folder.
 We will create a new Rust package by running `cargo new --lib module2` in bash.
 Inside the `module2` folder we will have `Cargo.toml` and `src/lib.rs` files.
 To install the valkey-module SDK run `cargo add valkey-module` inside `module2` folder.
-Alternativley we can add `valkey-module = "0.1.0` in `Cargo.toml` under `[dependencies]`.
+Alternatively we can add `valkey-module = "0.1.0` in `Cargo.toml` under `[dependencies]`.
 Run `cargo build` and it will create or update the `Cargo.lock` file.
 
 Modify `Cargo.toml` to specify the crate-type to be "cdylib", which will tell cargo to build the target as a shared library.

@@ -173,7 +173,7 @@ OK
 
 Please stay tuned for more articles in the future as we explore the possibilities of Valkey modules and where using C or Rust makes sense.
 
-## Usefull links
+## Useful links
 
 * [Valkey repo](https://github.com/valkey-io/valkey)
 * [Valkey Rust SDK](https://github.com/valkey-io/valkeymodule-rs)
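The `crate-type` change the post's context lines describe (building the Rust module as a shared library Valkey can load) looks roughly like this in `Cargo.toml` (a sketch; the `0.1.0` version is the one quoted in the post and may be outdated):

```toml
[package]
name = "module2"
version = "0.1.0"
edition = "2021"

[lib]
# "cdylib" tells cargo to build a C-compatible shared library,
# which is the artifact Valkey loads with `loadmodule`.
crate-type = ["cdylib"]

[dependencies]
valkey-module = "0.1.0"
```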

content/blog/2024-06-27-using-bitnami-valkey-chart/index.md

Lines changed: 1 addition & 1 deletion

@@ -12,7 +12,7 @@ Valkey is a high-performance key/value datastore that supports workloads such as
 
 [Bitnami](https://bitnami.com/) offers a number of secure, up-to-date, and easy to deploy charts for a number of popular open source applications.
 
-This blog will serve as a walk-through on how you can deploy and use the [Bitnami Helm chart for Valkey](https://github.com/bitnami/charts/tree/main/bitnami/valkey).
+This blog will serve as a walkthrough on how you can deploy and use the [Bitnami Helm chart for Valkey](https://github.com/bitnami/charts/tree/main/bitnami/valkey).
 
 # Assumptions and prerequisites
 
content/blog/2024-07-07-unlock-one-million-rps.md

Lines changed: 2 additions & 2 deletions

@@ -4,12 +4,12 @@ title= "Unlock 1 Million RPS: Experience Triple the Speed with Valkey"
 # `date` is when your post will be published.
 # For the most part, you can leave this as the day you _started_ the post.
 # The maintainers will update this value before publishing
-# The time is generally irrelvant in how Valkey published, so '01:01:01' is a good placeholder
+# The time is generally irrelevant in how Valkey published, so '01:01:01' is a good placeholder
 date= 2024-08-05 01:01:01
 # 'description' is what is shown as a snippet/summary in various contexts.
 # You can make this the first few lines of the post or (better) a hook for readers.
 # Aim for 2 short sentences.
-description= "Learn about the new performnace improvements in Valkey 8 which reduces cost, improves latency and makes our environment greener."
+description= "Learn about the new performance improvements in Valkey 8 which reduces cost, improves latency and makes our environment greener."
 # 'authors' are the folks who wrote or contributed to the post.
 # Each author corresponds to a biography file (more info later in this document)
 authors= [ "dantouitou", "uriyagelnik"]

content/blog/2024-07-31-valkey-8-0-0-rc1.md

Lines changed: 1 addition & 1 deletion

@@ -4,7 +4,7 @@ title= "Valkey 8.0: Delivering Enhanced Performance and Reliability"
 # `date` is when your post will be published.
 # For the most part, you can leave this as the day you _started_ the post.
 # The maintainers will update this value before publishing
-# The time is generally irrelvant in how Valkey published, so '01:01:01' is a good placeholder
+# The time is generally irrelevant in how Valkey published, so '01:01:01' is a good placeholder
 date= 2024-08-02 01:01:01
 # 'description' is what is shown as a snippet/summary in various contexts.
 # You can make this the first few lines of the post or (better) a hook for readers.

content/blog/2024-08-29-valkey-memory-efficiency-8-0.md

Lines changed: 1 addition & 1 deletion

@@ -137,4 +137,4 @@ So, the drop in percentage is approximately **20.63% in overall memory usage on
 ## Conclusion
 
 Through the memory efficiency achieved by introducing dictionary per slot and key embedding into dictionary entry, users should have additional capacity to store more keys per node in Valkey 8.0 (up to 20%, but it will vary based on the workload). For users, upgrading from Valkey 7.2 to Valkey 8.0, the improvement should be observed automatically and no configuration changes are required.
-Give it a try by spinning up a [Valkey cluster](https://valkey.io/download/) and join us in the [community](https://github.com/valkey-io/valkey/) to provide feedback. Further, there is an ongoing discussion around overhauling the main dictionary with a more compact memory layout and introduce an open addressing scheme which will significantly improve memory efficiency. More details can be found [here](https://github.com/valkey-io/valkey/issues/169).
+Give it a try by spinning up a [Valkey cluster](https://valkey.io/download/) and join us in the [community](https://github.com/valkey-io/valkey/) to provide feedback. Further, there is an ongoing discussion around overhauling the main dictionary with a more compact memory layout and introduce an open addressing scheme which will significantly improve memory efficiency. More details can be found in [Issue 169: Re-thinking the main hash table](https://github.com/valkey-io/valkey/issues/169).

content/blog/2024-09-13-unlock-one-million-rps-part2.md

Lines changed: 2 additions & 2 deletions

@@ -4,7 +4,7 @@ title= "Unlock 1 Million RPS: Experience Triple the Speed with Valkey - part 2"
 # `date` is when your post will be published.
 # For the most part, you can leave this as the day you _started_ the post.
 # The maintainers will update this value before publishing
-# The time is generally irrelvant in how Valkey published, so '01:01:01' is a good placeholder
+# The time is generally irrelevant in how Valkey published, so '01:01:01' is a good placeholder
 date= 2024-09-13 01:01:01
 # 'description' is what is shown as a snippet/summary in various contexts.
 # You can make this the first few lines of the post or (better) a hook for readers.

@@ -15,7 +15,7 @@ description= "Maximize the performance of your hardware with memory access amort
 authors= [ "dantouitou", "uriyagelnik"]
 +++
 
-In the [first part](/blog/unlock-one-million-rps/) of this blog, we described how we offloaded almost all I/O operations to I/O threads, thereby freeing more CPU cycles in the main thread to execute commands. When we profiled the execution of the main thread, we found that a considerable amount of time was spent waiting for external memory. This was not entirely surprising, as when accessing random keys, the probability of finding the key in one of the processor caches is relatively low. Considering that external memory access latency is approximately 50 times higher than L1 cache, it became clear that despite showing 100% CPU utilization, the main process was mostly “waiting”. In this blog, we describe the technique we have been using to increase the number of parallel memory accesses, thereby reducing the impact that external memory latency has on performance.
+In the [first part](/blog/unlock-one-million-rps/) of this blog, we described how we offloaded almost all I/O operations to I/O threads, thereby freeing more CPU cycles in the main thread to execute commands. When we profiled the execution of the main thread, we found that a considerable amount of time was spent waiting for external memory. This was not entirely surprising, as when accessing random keys, the probability of finding the key in one of the processor caches is relatively low. Considering that external memory access latency is approximately 50 times greater than L1 cache, it became clear that despite showing 100% CPU utilization, the main process was mostly “waiting”. In this blog, we describe the technique we have been using to increase the number of parallel memory accesses, thereby reducing the impact that external memory latency has on performance.
 
 ### Speculative execution and linked lists
 Speculative execution is a performance optimization technique used by modern processors, where the processor guesses the outcome of conditional operations and executes in parallel subsequent instructions ahead of time. Dynamic data structures, such as linked lists and search trees, have many advantages over static data structures: they are economical in memory consumption, provide fast insertion and deletion mechanisms, and can be resized efficiently. However, some dynamic data structures have a major drawback: they hinder the processor's ability to speculate on future memory load instructions that could be executed in parallel. This lack of concurrency is especially problematic in very large dynamic data structures, where most pointer accesses result in high-latency external memory access.
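The serial-dependency problem the post's context describes can be seen in miniature (a toy Python illustration, not a benchmark and not Valkey's implementation): in a linked list, the address of each load is the result of the previous load, so the chain of accesses cannot be overlapped, whereas array indexing exposes every address up front.

```python
# Toy model of pointer chasing: a linked list stored as a {node: next_node}
# table. Each lookup depends on the result of the previous one, so the loads
# form a serial chain a CPU cannot overlap -- unlike array indexing, where
# every address is computable in advance.

def build_list(n):
    """Chain 0 -> 1 -> ... -> n-1 -> None as a next-pointer table."""
    return {i: i + 1 if i + 1 < n else None for i in range(n)}

def traverse(next_of, head=0):
    """Walk the chain; access i+1 cannot start until access i returns."""
    order, node = [], head
    while node is not None:
        order.append(node)
        node = next_of[node]  # dependent load: the *next* address is
                              # the *value* produced by this access
    return order

print(traverse(build_list(5)))  # -> [0, 1, 2, 3, 4]
```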

content/blog/2024-11-21-testing-the-limits/index.md

Lines changed: 6 additions & 6 deletions

@@ -11,7 +11,7 @@ While doing extensive performance testing on a Raspberry Pi is silly, it's made
 
 ![Picture of the Compute Module 4 credit Raspberry Pi Ltd (CC BY-SA)](images/cm4.png)
 
-For hardware we are going to be using a Raspberry Pi [Compute Module 4 (CM4)](https://www.raspberrypi.com/products/compute-module-4/?variant=raspberry-pi-cm4001000). It's a single board computer (SBC) that comes with a tiny 1.5Ghz 4-core Broadcomm CPU and 8GB of system memory. This is hardly the first device someone would pick when deciding on a production system. Using the CM4 makes it easy to showcase how to optimize Valkey depending on your different hardware constraints.
+For hardware we are going to be using a Raspberry Pi [Compute Module 4 (CM4)](https://www.raspberrypi.com/products/compute-module-4/?variant=raspberry-pi-cm4001000). It's a single board computer (SBC) that comes with a tiny 1.5Ghz 4-core Broadcom CPU and 8GB of system memory. This is hardly the first device someone would pick when deciding on a production system. Using the CM4 makes it easy to showcase how to optimize Valkey depending on your different hardware constraints.
 
 Our operating system will be a 64-bit Debian based operating system (OS) called [Rasbian](https://www.raspbian.org/). This distribution is specifically modified to perform well on the CM4. Valkey will run in a docker container orchestrated with docker compose. I like deploying in containers as it simplifies operations. If you'd like to follow along here is [a guide for installing Docker](https://docs.docker.com/engine/install/debian/). Make sure to continue to the [second page of the installation process](https://docs.docker.com/engine/install/linux-postinstall/) as well. It's easy to miss and skipping it could make it harder to follow along.
 

@@ -83,7 +83,7 @@ Since Valkey is a single threaded application, it makes sense that higher clock
 
 **Note:** Clock speeds generally are only comparable between CPU's with a similar architecture. For example, you could reasonably compare clock speeds between an 12th generation Intel i5 and a 12th generation Intel i7. If the 12th gen i7 had a max clock speed of 5Ghz that doesn't necessarily mean it will be slower than a AMD Ryzen 9 9900X clocked at 5.6Ghz.
 
-If you're following along on a Pi of your own I've outlined the steps to overclock your CM4 below. Otherwise you can skip to the results section below.
+If you're following along on a Pi of your own I've outlined the steps to overclock your CM4 below. Otherwise, you can skip to the results section below.
 
 **Warning** Just a reminder overclocking your device can cause damage to your device. Please use caution and do your own research for settings that are safe.
 

@@ -105,7 +105,7 @@ SET: 394368.41 requests per second, p50=1.223 msec
 GET: 438058.53 requests per second, p50=1.135 msec
 ```
 
-We're up to 416,000 requests per second (reminder this is the average between the two operations). The mathmaticians out there might notice that this speed up is a lot more than the expected 47% increase. It's a 73% increase in requests per second. What's happening?!
+We're up to 416,000 requests per second (reminder this is the average between the two operations). The mathematicians out there might notice that this speed up is a lot more than the expected 47% increase. It's a 73% increase in requests per second. What's happening?!
 
 ## Adding IO Threading
 

@@ -147,9 +147,9 @@ Much better! Now we are seeing around 565,000 requests per second. Thats a 35% i
 
 Right? Well believe it or not we can squeeze even more performance out of our little CM4!
 
-![A picture of our Valkey server with the 4 core boxes and to the right of them is a memory box. In the first core box is the Valkey process and in the next two there are IO Threads. The valkey process has a loop showing it communiacting with both of the IO threads. It also has a bracket showing it managing all the memory.](images/io_threading_arch.png)
+![A picture of our Valkey server with the 4 core boxes and to the right of them is a memory box. In the first core box is the Valkey process and in the next two there are IO Threads. The valkey process has a loop showing it communicating with both of the IO threads. It also has a bracket showing it managing all the memory.](images/io_threading_arch.png)
 
-Above is a representitive outline of what's happening on the server. The Valkey process has to take up valuble cycles managing the IO Threads. Not only that it has to perform a lot of work to manage all the memory assigned to it. That's a lot of work for a single process.
+Above is a representative outline of what's happening on the server. The Valkey process has to take up valuable cycles managing the IO Threads. Not only that it has to perform a lot of work to manage all the memory assigned to it. That's a lot of work for a single process.
 
 Now there is actually one more optimization we can use to make single threaded Valkey even faster. Valkey recently has done a substantial amount of work to support speculative execution. This work allows Valkey to predict which values will be needed from memory in future processing steps. This way Valkey server doesn't have to wait for memory access which is an order of magnitude slower than L1 caches. While I won't go through the details of how this works as there's already a [great blog that describes how to take advantage of these optimizations](https://valkey.io/blog/unlock-one-million-rps-part2/). Here are the results:
 

@@ -163,7 +163,7 @@ While these results are better they are a bit confusing. After talking with some
 
 ## Clustered Valkey
 
-![A picture of our Valkey server with the 4 core boxes and to the right of them is a memory box. In the first three core boxs are Valkey processes. Each of them has a bracket around a portion of the memory.](images/valkey_clustered.png)
+![A picture of our Valkey server with the 4 core boxes and to the right of them is a memory box. In the first three core boxes are Valkey processes. Each of them has a bracket around a portion of the memory.](images/valkey_clustered.png)
 
 For our last step we are going to spin up a Valkey cluster. This cluster will have individual instances of Valkey running that each will be responsible for managing their own keys. This way each instance can execute operations in parallel much more easily.
 
content/blog/2024-12-22-az-affinity-strategy.md

Lines changed: 3 additions & 3 deletions

@@ -26,9 +26,9 @@ Additionally, one of the most common use cases for caching is to store database
 GLIDE provides flexible options tailored to your application’s needs:
 
 * ```PRIMARY```: Always read from the primary to ensure the freshness of data.
-* ```PREFER_REPLICA```: Distribute requests among all replicas in a round-robin manner. If no replica is available, fallback to the primary.
-* ```AZ_AFFINITY```: Prioritize replicas in the same AZ as the client. If no replicas are available in the zone, fallback to other replicas or the primary if needed.
-* ```AZ_AFFINITY_REPLICAS_AND_PRIMARY```: Prioritize replicas in the same AZ as the client. If no replicas are available in the zone, fallback to the primary in the same AZ. If neither are available, fallback to other replicas or the primary in other zones.
+* ```PREFER_REPLICA```: Distribute requests among all replicas in a round-robin manner. If no replica is available, fall back to the primary.
+* ```AZ_AFFINITY```: Prioritize replicas in the same AZ as the client. If no replicas are available in the zone, fall back to other replicas or the primary if needed.
+* ```AZ_AFFINITY_REPLICAS_AND_PRIMARY```: Prioritize replicas in the same AZ as the client. If no replicas are available in the zone, fall back to the primary in the same AZ. If neither are available, fall back to other replicas or the primary in other zones.
 
 In Valkey 8, ```availability-zone``` configuration was introduced, allowing clients to specify the AZ for each Valkey server. GLIDE leverages this new configuration to empower its users with the ability to use AZ Affinity routing. At the time of writing, GLIDE is the only Valkey client library supporting the AZ Affinity strategies, offering a unique advantage.
 
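The priority order of ```AZ_AFFINITY_REPLICAS_AND_PRIMARY``` in the hunk above can be sketched as a small selection function (a toy Python illustration of the documented fallback order only; the node-record shape and names are invented for the example and this is not GLIDE's actual implementation):

```python
# Toy sketch of the AZ_AFFINITY_REPLICAS_AND_PRIMARY read-routing order:
# 1. a replica in the client's AZ, 2. the primary if it is in the client's
# AZ, 3. any other replica, 4. the primary. Node records are invented for
# illustration.

def pick_read_node(nodes, client_az):
    """nodes: dicts with 'name', 'role' ('primary'/'replica'), and 'az'."""
    replicas = [n for n in nodes if n["role"] == "replica"]
    primary = next(n for n in nodes if n["role"] == "primary")
    same_az = [n for n in replicas if n["az"] == client_az]
    if same_az:                       # replica in the client's zone
        return same_az[0]
    if primary["az"] == client_az:    # primary in the client's zone
        return primary
    return replicas[0] if replicas else primary  # any replica, else primary

nodes = [
    {"name": "p1", "role": "primary", "az": "us-east-1a"},
    {"name": "r1", "role": "replica", "az": "us-east-1b"},
    {"name": "r2", "role": "replica", "az": "us-east-1c"},
]
print(pick_read_node(nodes, "us-east-1c")["name"])  # -> r2
```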
