### Description
Fixes spelling errors and accessibility issues (descriptive link text instead of bare "here" links).
### Issues Resolved
N/A
### Check List
- [x] Commits are signed per the DCO using `--signoff`
By submitting this pull request, I confirm that my contribution is made
under the terms of the BSD-3-Clause License.
---------
Signed-off-by: Josh Soref <[email protected]>
Signed-off-by: Madelyn Olson <[email protected]>
Co-authored-by: Madelyn Olson <[email protected]>
#### CONTRIBUTING-BLOG-POST.md (+2 -2)
````diff
@@ -5,7 +5,7 @@ The contribution model is lightweight, transparent, and uses well-understood ope
 
 ## Step 0: Setup
 
-Before you begin, it’s a good idea to setup an environment to write, test, and contribute your blog post.
+Before you begin, it’s a good idea to set up an environment to write, test, and contribute your blog post.
 First, [fork the website repo](https://github.com/valkey-io/valkey-io.github.io), then clone the website to your local environment.
 The general workflow is to write your post locally in a branch, confirm that it looks the way you want, then contribute your changes as a pull request.
 
````
````diff
@@ -74,7 +74,7 @@ title= "Using Valkey for mind control experiments"
 # `date` is when your post will be published.
 # For the most part, you can leave this as the day you _started_ the post.
 # The maintainers will update this value before publishing
-# The time is generally irrelvant in how Valkey published, so '01:01:01' is a good placeholder
+# The time is generally irrelevant in how Valkey published, so '01:01:01' is a good placeholder
 date= 2024-07-01 01:01:01
 # 'description' is what is shown as a snippet/summary in various contexts.
 # You can make this the first few lines of the post or (better) a hook for readers.
````
#### content/authors/rdias.md (+1 -1)
````diff
@@ -5,5 +5,5 @@ extra:
   github: rjd15372
 ---
 
-Ricardo is a principal software engineer at Percona where he works as contributor to the Valkey project. Ricardo has been working in distributed storage systems for many years, but his interests are not limited to distribtued systems, he also enjoys designing and implementing lock-free data structures, as well as, developing static code analyzers. In his free time, he's a family guy and also manages a roller hockey club.
+Ricardo is a principal software engineer at Percona where he works as contributor to the Valkey project. Ricardo has been working in distributed storage systems for many years, but his interests are not limited to distributed systems, he also enjoys designing and implementing lock-free data structures, as well as, developing static code analyzers. In his free time, he's a family guy and also manages a roller hockey club.
 
````
#### content/blog/2024-06-27-using-bitnami-valkey-chart/index.md (+1 -1)
````diff
@@ -12,7 +12,7 @@ Valkey is a high-performance key/value datastore that supports workloads such as
 
 [Bitnami](https://bitnami.com/) offers a number of secure, up-to-date, and easy to deploy charts for a number of popular open source applications.
 
-This blog will serve as a walk-through on how you can deploy and use the [Bitnami Helm chart for Valkey](https://github.com/bitnami/charts/tree/main/bitnami/valkey).
+This blog will serve as a walkthrough on how you can deploy and use the [Bitnami Helm chart for Valkey](https://github.com/bitnami/charts/tree/main/bitnami/valkey).
````
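As context for the hunk above, the walkthrough the post describes starts from a single chart install. A minimal sketch of that entry point, assuming the OCI registry path Bitnami documents for its charts; the release name is a placeholder:

```bash
# Install the Bitnami Valkey chart ("my-valkey" is a placeholder release name)
helm install my-valkey oci://registry-1.docker.io/bitnamicharts/valkey
```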
#### content/blog/2024-08-29-valkey-memory-efficiency-8-0.md (+1 -1)
````diff
@@ -137,4 +137,4 @@ So, the drop in percentage is approximately **20.63% in overall memory usage on
 ## Conclusion
 
 Through the memory efficiency achieved by introducing dictionary per slot and key embedding into dictionary entry, users should have additional capacity to store more keys per node in Valkey 8.0 (up to 20%, but it will vary based on the workload). For users, upgrading from Valkey 7.2 to Valkey 8.0, the improvement should be observed automatically and no configuration changes are required.
-Give it a try by spinning up a [Valkey cluster](https://valkey.io/download/) and join us in the [community](https://github.com/valkey-io/valkey/) to provide feedback. Further, there is an ongoing discussion around overhauling the main dictionary with a more compact memory layout and introduce an open addressing scheme which will significantly improve memory efficiency. More details can be found [here](https://github.com/valkey-io/valkey/issues/169).
+Give it a try by spinning up a [Valkey cluster](https://valkey.io/download/) and join us in the [community](https://github.com/valkey-io/valkey/) to provide feedback. Further, there is an ongoing discussion around overhauling the main dictionary with a more compact memory layout and introduce an open addressing scheme which will significantly improve memory efficiency. More details can be found in [Issue 169: Re-thinking the main hash table](https://github.com/valkey-io/valkey/issues/169).
````
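For readers of this diff who haven't seen the full post, the "key embedding" the conclusion refers to amounts to storing the key bytes inline in the dictionary entry rather than behind a pointer. A minimal C sketch of the idea, illustrative only and not Valkey's actual structures:

```c
#include <stdlib.h>
#include <string.h>

/* Before: the entry holds a pointer to a separately allocated key,
 * costing an extra allocation header and a pointer chase per lookup. */
typedef struct {
    char *key;
    void *value;
} ptr_entry;

/* After: the key bytes live inside the entry itself (one allocation).
 * The one-byte length is a sketch simplification for short keys. */
typedef struct {
    void *value;
    unsigned char keylen;
    char key[];  /* flexible array member, allocated with the entry */
} embedded_entry;

embedded_entry *embedded_entry_new(const char *key, void *value) {
    size_t len = strlen(key);
    embedded_entry *e = malloc(sizeof(*e) + len + 1);
    if (!e) return NULL;
    e->value = value;
    e->keylen = (unsigned char)len;
    memcpy(e->key, key, len + 1);  /* copy key bytes plus NUL terminator */
    return e;
}
```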
#### content/blog/2024-09-13-unlock-one-million-rps-part2.md (+2 -2)
````diff
@@ -4,7 +4,7 @@ title= "Unlock 1 Million RPS: Experience Triple the Speed with Valkey - part 2"
 # `date` is when your post will be published.
 # For the most part, you can leave this as the day you _started_ the post.
 # The maintainers will update this value before publishing
-# The time is generally irrelvant in how Valkey published, so '01:01:01' is a good placeholder
+# The time is generally irrelevant in how Valkey published, so '01:01:01' is a good placeholder
 date= 2024-09-13 01:01:01
 # 'description' is what is shown as a snippet/summary in various contexts.
 # You can make this the first few lines of the post or (better) a hook for readers.
````
````diff
@@ -15,7 +15,7 @@ description= "Maximize the performance of your hardware with memory access amort
 authors= [ "dantouitou", "uriyagelnik"]
 +++
 
-In the [first part](/blog/unlock-one-million-rps/) of this blog, we described how we offloaded almost all I/O operations to I/O threads, thereby freeing more CPU cycles in the main thread to execute commands. When we profiled the execution of the main thread, we found that a considerable amount of time was spent waiting for external memory. This was not entirely surprising, as when accessing random keys, the probability of finding the key in one of the processor caches is relatively low. Considering that external memory access latency is approximately 50 times higher than L1 cache, it became clear that despite showing 100% CPU utilization, the main process was mostly “waiting”. In this blog, we describe the technique we have been using to increase the number of parallel memory accesses, thereby reducing the impact that external memory latency has on performance.
+In the [first part](/blog/unlock-one-million-rps/) of this blog, we described how we offloaded almost all I/O operations to I/O threads, thereby freeing more CPU cycles in the main thread to execute commands. When we profiled the execution of the main thread, we found that a considerable amount of time was spent waiting for external memory. This was not entirely surprising, as when accessing random keys, the probability of finding the key in one of the processor caches is relatively low. Considering that external memory access latency is approximately 50 times greater than L1 cache, it became clear that despite showing 100% CPU utilization, the main process was mostly “waiting”. In this blog, we describe the technique we have been using to increase the number of parallel memory accesses, thereby reducing the impact that external memory latency has on performance.
 
 ### Speculative execution and linked lists
 Speculative execution is a performance optimization technique used by modern processors, where the processor guesses the outcome of conditional operations and executes in parallel subsequent instructions ahead of time. Dynamic data structures, such as linked lists and search trees, have many advantages over static data structures: they are economical in memory consumption, provide fast insertion and deletion mechanisms, and can be resized efficiently. However, some dynamic data structures have a major drawback: they hinder the processor's ability to speculate on future memory load instructions that could be executed in parallel. This lack of concurrency is especially problematic in very large dynamic data structures, where most pointer accesses result in high-latency external memory access.
````
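The hunk's closing paragraph is the heart of the post: each step of a linked-list walk depends on the previous load, so cache misses are serialized, whereas interleaving several independent lookups keeps multiple misses in flight. A minimal C sketch of that contrast, illustrative only and not the Valkey implementation (`__builtin_prefetch` is a GCC/Clang builtin):

```c
typedef struct node {
    long key;
    struct node *next;
} node;

/* Dependent traversal: at most one outstanding cache miss at a time,
 * because each next pointer is known only after the previous load. */
node *find(node *head, long key) {
    while (head && head->key != key)
        head = head->next;
    return head;
}

/* Amortized variant: advance n independent lookups in lockstep and
 * prefetch each next node before it is dereferenced, so the processor
 * can service several cache misses in parallel. */
void find_batch(node *heads[], const long keys[], node *out[], int n) {
    for (int i = 0; i < n; i++)
        out[i] = heads[i];
    int active = n;
    while (active > 0) {
        active = 0;
        for (int i = 0; i < n; i++) {
            if (out[i] && out[i]->key != keys[i]) {
                out[i] = out[i]->next;
                if (out[i])
                    __builtin_prefetch(out[i]);  /* hint: needed soon */
                active++;
            }
        }
    }
}
```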
#### content/blog/2024-11-21-testing-the-limits/index.md (+6 -6)
````diff
@@ -11,7 +11,7 @@ While doing extensive performance testing on a Raspberry Pi is silly, it's made
 
 ![Picture of Compute module 4](cm4.webp "Raspberry Pi Compute Module 4")
 
-For hardware we are going to be using a Raspberry Pi [Compute Module 4 (CM4)](https://www.raspberrypi.com/products/compute-module-4/?variant=raspberry-pi-cm4001000). It's a single board computer (SBC) that comes with a tiny 1.5Ghz 4-core Broadcomm CPU and 8GB of system memory. This is hardly the first device someone would pick when deciding on a production system. Using the CM4 makes it easy to showcase how to optimize Valkey depending on your different hardware constraints.
+For hardware we are going to be using a Raspberry Pi [Compute Module 4 (CM4)](https://www.raspberrypi.com/products/compute-module-4/?variant=raspberry-pi-cm4001000). It's a single board computer (SBC) that comes with a tiny 1.5Ghz 4-core Broadcom CPU and 8GB of system memory. This is hardly the first device someone would pick when deciding on a production system. Using the CM4 makes it easy to showcase how to optimize Valkey depending on your different hardware constraints.
 
 Our operating system will be a 64-bit Debian based operating system (OS) called [Rasbian](https://www.raspbian.org/). This distribution is specifically modified to perform well on the CM4. Valkey will run in a docker container orchestrated with docker compose. I like deploying in containers as it simplifies operations. If you'd like to follow along here is [a guide for installing Docker](https://docs.docker.com/engine/install/debian/). Make sure to continue to the [second page of the installation process](https://docs.docker.com/engine/install/linux-postinstall/) as well. It's easy to miss and skipping it could make it harder to follow along.
 
````
````diff
@@ -83,7 +83,7 @@ Since Valkey is a single threaded application, it makes sense that higher clock
 
 **Note:** Clock speeds generally are only comparable between CPU's with a similar architecture. For example, you could reasonably compare clock speeds between an 12th generation Intel i5 and a 12th generation Intel i7. If the 12th gen i7 had a max clock speed of 5Ghz that doesn't necessarily mean it will be slower than a AMD Ryzen 9 9900X clocked at 5.6Ghz.
 
-If you're following along on a Pi of your own I've outlined the steps to overclock your CM4 below. Otherwise you can skip to the results section below.
+If you're following along on a Pi of your own I've outlined the steps to overclock your CM4 below. Otherwise, you can skip to the results section below.
 
 **Warning** Just a reminder overclocking your device can cause damage to your device. Please use caution and do your own research for settings that are safe.
@@ -105,7 +105,7 @@
 GET: 438058.53 requests per second, p50=1.135 msec
 ```
 
-We're up to 416,000 requests per second (reminder this is the average between the two operations). The mathmaticians out there might notice that this speed up is a lot more than the expected 47% increase. It's a 73% increase in requests per second. What's happening?!
+We're up to 416,000 requests per second (reminder this is the average between the two operations). The mathematicians out there might notice that this speed up is a lot more than the expected 47% increase. It's a 73% increase in requests per second. What's happening?!
 
 ## Adding IO Threading
 
````
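For anyone reproducing the IO-threading step this hunk introduces, the feature is switched on with the `io-threads` directive in `valkey.conf`; the value below is illustrative (the post's exact setting may differ) and is normally sized to leave a core free for the main thread.

```conf
# valkey.conf: number of IO threads (illustrative value for a 4-core CM4)
io-threads 3
```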
````diff
@@ -147,9 +147,9 @@ Much better! Now we are seeing around 565,000 requests per second. Thats a 35% i
 
 Right? Well believe it or not we can squeeze even more performance out of our little CM4!
 
-![CM4 single threaded representation](single-thread.png "Single threaded Valkey")
+![CM4 single threaded representation](single-threaded.png "Single threaded valkey")
 
-Above is a representitive outline of what's happening on the server. The Valkey process has to take up valuble cycles managing the IO Threads. Not only that it has to perform a lot of work to manage all the memory assigned to it. That's a lot of work for a single process.
+Above is a representative outline of what's happening on the server. The Valkey process has to take up valuable cycles managing the IO Threads. Not only that it has to perform a lot of work to manage all the memory assigned to it. That's a lot of work for a single process.
 
 Now there is actually one more optimization we can use to make single threaded Valkey even faster. Valkey recently has done a substantial amount of work to support speculative execution. This work allows Valkey to predict which values will be needed from memory in future processing steps. This way Valkey server doesn't have to wait for memory access which is an order of magnitude slower than L1 caches. While I won't go through the details of how this works as there's already a [great blog that describes how to take advantage of these optimizations](https://valkey.io/blog/unlock-one-million-rps-part2/). Here are the results:
 
@@ -163,7 +163,7 @@ While these results are better they are a bit confusing. After talking with some
 
 ## Clustered Valkey
 
-![Clustered valkey on CM4](clustered.png "clustered valkey")
+![Clustered valkey on CM4](clustered.png "Clustered Valkey")
 
 For our last step we are going to spin up a Valkey cluster. This cluster will have individual instances of Valkey running that each will be responsible for managing their own keys. This way each instance can execute operations in parallel much more easily.
 
````
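The clustering step described above uses Valkey's standard cluster mode; a minimal per-node configuration, with illustrative port and timeout values, looks like this:

```conf
# Minimal cluster-mode settings for one node (port/timeout values illustrative)
port 7000
cluster-enabled yes
cluster-config-file nodes-7000.conf
cluster-node-timeout 5000
```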
#### content/blog/2024-12-22-az-affinity-strategy.md (+3 -3)
````diff
@@ -26,9 +26,9 @@ Additionally, one of the most common use cases for caching is to store database
 GLIDE provides flexible options tailored to your application’s needs:
 
 *```PRIMARY```: Always read from the primary to ensure the freshness of data.
-*```PREFER_REPLICA```: Distribute requests among all replicas in a round-robin manner. If no replica is available, fallback to the primary.
-*```AZ_AFFINITY```: Prioritize replicas in the same AZ as the client. If no replicas are available in the zone, fallback to other replicas or the primary if needed.
-*```AZ_AFFINITY_REPLICAS_AND_PRIMARY```: Prioritize replicas in the same AZ as the client. If no replicas are available in the zone, fallback to the primary in the same AZ. If neither are available, fallback to other replicas or the primary in other zones.
+*```PREFER_REPLICA```: Distribute requests among all replicas in a round-robin manner. If no replica is available, fall back to the primary.
+*```AZ_AFFINITY```: Prioritize replicas in the same AZ as the client. If no replicas are available in the zone, fall back to other replicas or the primary if needed.
+*```AZ_AFFINITY_REPLICAS_AND_PRIMARY```: Prioritize replicas in the same AZ as the client. If no replicas are available in the zone, fall back to the primary in the same AZ. If neither are available, fall back to other replicas or the primary in other zones.
 
 In Valkey 8, ```availability-zone``` configuration was introduced, allowing clients to specify the AZ for each Valkey server. GLIDE leverages this new configuration to empower its users with the ability to use AZ Affinity routing. At the time of writing, GLIDE is the only Valkey client library supporting the AZ Affinity strategies, offering a unique advantage.
 
````
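The strategies listed in this hunk depend on each server advertising its zone through the `availability-zone` config that Valkey 8 introduced. A hedged example of setting it at runtime (the zone name is a placeholder; whether you set it in `valkey.conf` or via `CONFIG SET` is a deployment choice):

```bash
# Advertise this server's availability zone (zone name is a placeholder)
valkey-cli CONFIG SET availability-zone "us-east-1a"
```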
0 commit comments