Pin package manager dependencies in CI workflows to improve the Pinned-Dependencies
score in OpenSSF Scorecard.
Changes:
- benchmark-on-label.yml, benchmark-release.yml: add `--require-hashes`
to `pip install`; the same change was made in the valkey-perf-benchmark repo:
https://github.com/valkey-io/valkey-perf-benchmark/pull/44
- ci.yml: pin `yamlfmt` to `v0.21.0` instead of `@latest`
- reply-schemas-linter.yml: use `npm ci` with a `package-lock.json`
instead of an unpinned `npm install`; the package files live in
`utils/reply-schema-linter/`
Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
Closes https://github.com/valkey-io/valkey/issues/3077
### Overview
URIs in the SAN are used to represent client identities in modern mTLS
deployments where the CN may be empty or deprecated. See
https://github.com/valkey-io/valkey/issues/3077#issuecomment-3775615304
for more details.
When `tls-auth-clients-user URI` is configured, during the TLS
handshake, the server iterates through the URIs in the client
certificate and authenticates the client as the first enabled user whose
name matches one of those URIs.
### Implementation
- Introduced a new value `URI` for `tls-auth-clients-user`
- Added new function `getCertSanUri` that:
- Extracts URI entries from the certificate's SAN extension
- Checks each URI against existing Valkey users
- Returns the first URI that matches an enabled user
- Renamed `getCertFieldByName` → `getCertSubjectFieldByName` for clarity
- Modified `tlsGetPeerUsername` to support both CN and URI
authentication modes
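A minimal Python sketch of the matching rule described above (the real implementation is C on top of OpenSSL; the parsed-SAN representation and the `users` mapping here are hypothetical stand-ins):

```python
def get_cert_san_uri(san_entries, users):
    """Return the first SAN URI naming an enabled user, or None.

    san_entries: parsed SAN list, e.g. [("URI", "urn:valkey:user:first"), ...]
    users: hypothetical model of the ACL user table, name -> {"enabled": bool}
    """
    for kind, value in san_entries:
        if kind != "URI":
            continue  # ignore DNS, IP, email entries
        user = users.get(value)
        if user is not None and user["enabled"]:
            return value
    return None

san = [("URI", "urn:valkey:user:first"), ("URI", "urn:valkey:user:second")]
users = {"urn:valkey:user:first": {"enabled": False},
         "urn:valkey:user:second": {"enabled": True}}
print(get_cert_san_uri(san, users))  # urn:valkey:user:second
```

This mirrors use case 2 below: the first URI belongs to a disabled user, so the second one is chosen.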
### Example behavior
Common setup
```
# client certificate X509v3 Subject Alternative Name
URI:urn:valkey:user:first, URI:urn:valkey:user:second
# valkey.conf
tls-auth-clients-user URI
hide-user-data-from-log no
```
Use case 1: multiple enabled users
```
user urn:valkey:user:first on >clientpass allcommands allkeys
user urn:valkey:user:second on >clientpass allcommands allkeys
39762:M 26 Jan 2026 22:06:25.122 - TLS: Auto-authenticated client as urn:valkey:user:first
```
Use case 2: first URI disabled, second enabled
```
user urn:valkey:user:first off >clientpass allcommands allkeys
user urn:valkey:user:second on >clientpass allcommands allkeys
39792:M 26 Jan 2026 22:07:08.006 - TLS: Auto-authenticated client as urn:valkey:user:second
```
Use case 3: all matching users disabled or no matching user
```
user urn:valkey:user:first off >clientpass allcommands allkeys
user urn:valkey:user:second off >clientpass allcommands allkeys
39812:M 26 Jan 2026 22:07:34.174 * TLS: No matching user found in certificate SAN URI fields
127.0.0.1:6379> acl whoami
"default"
127.0.0.1:6379> acl log
1) 1) "count"
2) (integer) 1
3) "reason"
4) "tls-cert"
5) "context"
6) "toplevel"
7) "object"
8) ""
9) "username"
10) "urn:valkey:user:second"
11) "age-seconds"
12) "17.381"
13) "client-info"
14) "id=3 addr=127.0.0.1:57236 laddr=127.0.0.1:6379 fd=8 name= age=0 idle=0 flags=N capa= db=0 sub=0 psub=0 ssub=0 multi=-1 watch=0 qbuf=0 qbuf-free=0 argv-mem=0 multi-mem=0 rbs=16384 rbp=16384 obl=0 oll=0 omem=0 tot-mem=17024 events=r cmd=NULL user=default redir=-1 resp=2 lib-name= lib-ver= tot-net-in=0 tot-net-out=0 tot-cmds=0"
15) "entry-id"
16) (integer) 0
17) "timestamp-created"
18) (integer) 1771963041866
19) "timestamp-last-updated"
20) (integer) 1771963041866
127.0.0.1:6379>
```
---------
Signed-off-by: Yang Zhao <zymy701@gmail.com>
This PR adds certificate validation at TLS load time, rejecting invalid
(expired or not-yet-valid) certificates.
It applies to all TLS config paths:
- Server certificates `tls-cert-file`
- Server-side client certificates `tls-client-cert-file`
- CA certificate file `tls-ca-cert-file`
- CA certificate directory `tls-ca-cert-dir` (now eagerly loaded to be
consistent with file-based CAs)
and to both scenarios:
- Server startup (initial TLS load)
- Runtime reload via `CONFIG SET`
### Implementation
- Added `isCertValid` function to check if an X509 certificate is within
its validity period (not expired, not future-dated)
- Added `areAllCaCertsValid` function to iterate through all loaded CA
certificates and validate them
- Added `loadCaCertDir` function to eagerly load all certificates from a
directory into the X509_STORE
- Modified `createSSLContext` to validate:
- Server/client certificates immediately after loading
- All CA certificates after loading from file/directory
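Conceptually, `isCertValid` compares the current time against the certificate's notBefore/notAfter window. A rough Python sketch using the stdlib `ssl` helper (the real code is C using OpenSSL time comparisons; the dates below are made up):

```python
import ssl
import time

def is_cert_valid(not_before, not_after, now=None):
    """True if the certificate is within its validity period
    (not expired, not future-dated)."""
    now = time.time() if now is None else now
    start = ssl.cert_time_to_seconds(not_before)
    end = ssl.cert_time_to_seconds(not_after)
    return start <= now <= end

# Hypothetical validity windows, checked against a fixed point in time.
check_time = ssl.cert_time_to_seconds("Jun  1 00:00:00 2025 GMT")
print(is_cert_valid("Jan  1 00:00:00 2025 GMT",
                    "Jan  1 00:00:00 2026 GMT", now=check_time))  # True
print(is_cert_valid("Jan  1 00:00:00 2020 GMT",
                    "Jan  1 00:00:00 2021 GMT", now=check_time))  # False
```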
### Test results
#### 1. Server startup (initial TLS load)
```
tls-cert-file ./tests/tls/server-expired.crt
41522:M 31 Dec 2025 16:13:18.851 # Server TLS certificate is invalid. Aborting TLS configuration.
41522:M 31 Dec 2025 16:13:18.851 # Failed to configure TLS. Check logs for more info.
tls-client-cert-file ./tests/tls/client-expired.crt
41557:M 31 Dec 2025 16:14:43.296 # Client TLS certificate is invalid. Aborting TLS configuration.
41557:M 31 Dec 2025 16:14:43.296 # Failed to configure TLS. Check logs for more info.
tls-ca-cert-file ./tests/tls/ca-expired.crt
tls-ca-cert-dir ./tests/tls/ca-expired
41567:M 31 Dec 2025 16:15:15.635 # One or more loaded CA certificates are invalid. Aborting TLS configuration.
41567:M 31 Dec 2025 16:15:15.635 # Failed to configure TLS. Check logs for more info.
```
#### 2. Runtime reload via CONFIG SET
```
127.0.0.1:6379> config set tls-cert-file ./tests/tls/server-expired.crt
(error) ERR CONFIG SET failed (possibly related to argument 'tls-cert-file') - Unable to update TLS configuration. Check server logs.
62975:M 02 Jan 2026 20:10:43.588 # Server TLS certificate is invalid. Aborting TLS configuration.
62975:M 02 Jan 2026 20:10:43.588 # Failed applying new configuration. Possibly related to new tls-cert-file setting. Restoring previous settings.
127.0.0.1:6379> config set tls-client-cert-file ./tests/tls/client-expired.crt
(error) ERR CONFIG SET failed (possibly related to argument 'tls-client-cert-file') - Unable to update TLS configuration. Check server logs.
62975:M 02 Jan 2026 20:10:57.972 # Client TLS certificate is invalid. Aborting TLS configuration.
62975:M 02 Jan 2026 20:10:57.972 # Failed applying new configuration. Possibly related to new tls-client-cert-file setting. Restoring previous settings.
127.0.0.1:6379> config set tls-ca-cert-file ./tests/tls/ca-expired.crt
127.0.0.1:6379> config set tls-ca-cert-dir ./tests/tls/ca-expired
(error) ERR CONFIG SET failed (possibly related to argument 'tls-ca-cert-file') - Unable to update TLS configuration. Check server logs.
62975:M 02 Jan 2026 20:10:50.175 # One or more loaded CA certificates are invalid. Aborting TLS configuration.
62975:M 02 Jan 2026 20:10:50.175 # Failed applying new configuration. Possibly related to new tls-ca-cert-file setting. Restoring previous settings.
```
---------
Signed-off-by: Yang Zhao <zymy701@gmail.com>
## API changes and user behavior:
- [x] Default behavior for database access.
Default is `alldbs` permissions.
### Database Permissions (`db=`)
- [x] Accessing particular database
```
> ACL SETUSER test1 on +@all ~* resetdbs db=0,1 nopass
"user test1 on nopass sanitize-payload ~* resetchannels db=0,1 +@all"
```
- [x] (Same behavior without usage of `resetdbs`)
```
> ACL SETUSER test1 on +@all ~* db=0,1 nopass
"user test1 on nopass sanitize-payload ~* resetchannels db=0,1 +@all"
```
- [x] Multiple selectors can be provided
```
> ACL SETUSER test1 on nopass (db=0,1 +@write +select ~*) (db=2,3 +@read +select ~*)
"user test1 on nopass sanitize-payload resetchannels alldbs -@all (~* resetchannels db=0,1 -@all +@write +select) (~* resetchannels db=2,3 -@all +@read +select)"
```
- [x] Restricting special commands which access databases as part of the
command.
The user needs access to both the commands and the db(s) that are part
of the command to run these commands.
1. SWAPDB
2. SELECT
3. MOVE - (the SELECT command would already have gone through for the
source database); the user must also have access to the target database.
4. COPY
- [x] Restricting special commands which don't specify a database
number but access multiple databases.
The user needs access to both the commands and all databases
(`alldbs`) to run these commands.
1. FLUSHALL - Access all databases
2. CLUSTER commands that access all databases:
- CANCELSLOTMIGRATIONS
- MIGRATESLOTS
- [x] New connection establishment behavior
A new client connection is established to DB 0 by default.
Authentication and authorisation are decoupled: the user can
connect/authenticate and then perform `SELECT` or other operations
that do not access the keyspace.
(Do we want to extend HELLO?) Alternative suggestion by @madolson:
Extend the `HELLO` command to pass the dbid to which the user should be
connected after authentication, if they have the right set of
permissions. I think it will become a long pole for adoption.
- [x] Observability
Extend `ACL LOG` to log users that received a permission-denied error
while accessing a database.
- [x] Module API
* Introduce module API `int VM_ACLCheckPermissions(ValkeyModuleUser
*user, ValkeyModuleString **argv, int argc, int dbid,
ValkeyModuleACLLogEntryReason *denial_reason);`
* Drop support for `VM_ACLCheckCommandPermissions()`.
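A toy model of the database-permission check implied by the selectors above (names and data layout are illustrative, not the actual ACL internals):

```python
def can_access_db(selectors, dbid):
    """True if any selector grants access to dbid.

    Each selector is modeled as {"dbs": set_of_dbids_or_None};
    None stands for `alldbs`, the default.
    """
    for sel in selectors:
        dbs = sel.get("dbs")
        if dbs is None or dbid in dbs:
            return True
    return False

# user test1 ... (db=0,1 +@write ...) (db=2,3 +@read ...)
selectors = [{"dbs": {0, 1}}, {"dbs": {2, 3}}]
print(can_access_db(selectors, 2))  # True
print(can_access_db(selectors, 5))  # False
```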
Resolves: #1336
---------
Signed-off-by: Daniil Kashapov <daniil.kashapov.ykt@gmail.com>
* A formatting error where there was a line break in the wrong place in
a code example in a doc comment (used in the generated API docs). The
error was introduced in an automatic code formatting commit.
* Improve API doc generation script by considering release candidates
when detecting "since" for each API function. This makes it possible to
run the script on a release candidate to have the docs ready before a GA
release.
---------
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
This change makes it so that the module API reference can be cleanly
generated from the module.c file. Most of this seems to be because of
the code formatting work we did. There are two parts:
1. Updated some of the odd corner cases in module.c so they can be
handled. For example, there was a method declaration with nothing on
the first line, which was unhandled. None of these changes are
functional, and clang-format should be OK with them.
2. Better handling of multi-line function prototypes in the Ruby code.
Before, we relied on the whole prototype being on a single line; now we
handle prototypes spread over multiple lines. This feels pretty hacked
in, and I don't fully understand the rest of the code, but it does
work.
Generated this PR:
https://github.com/valkey-io/valkey-doc/pull/374/files.
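The multi-line handling boils down to accumulating prototype lines until the opening brace is reached. An illustrative sketch in Python (the actual script is Ruby, and this helper is hypothetical):

```python
def read_prototype(lines):
    """Accumulate lines of a function prototype until the line that
    opens the function body, then join them into one declaration."""
    proto = []
    for line in lines:
        proto.append(line.strip())
        if line.rstrip().endswith("{"):
            break
    # Drop the trailing "{" and surrounding whitespace.
    return " ".join(proto)[:-1].strip()

lines = ["ValkeyModuleString *VM_CreateString(ValkeyModuleCtx *ctx,",
         "                                    const char *ptr,",
         "                                    size_t len) {"]
print(read_prototype(lines))
```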
---------
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
Introduces a new family of commands for migrating slots via replication.
The procedure is driven by the source node which pushes an AOF formatted
snapshot of the slots to the target, followed by a replication stream of
changes on that slot (a la manual failover).
This solution is an adaptation of the solution provided by
@enjoy-binbin, combined with the solution I previously posted at #1591,
modified to meet the designs we had outlined in #23.
## New commands
* `CLUSTER MIGRATESLOTS SLOTSRANGE start end [start end]... NODE
node-id`: Begin sending the slot via replication to the target. Multiple
targets can be specified by repeating `SLOTSRANGE ... NODE ...`
* `CLUSTER CANCELMIGRATION ALL`: Cancel all slot migrations
* `CLUSTER GETSLOTMIGRATIONS`: See a recent log of migrations
This PR only implements "one shot" semantics with an asynchronous model.
Later, "two phase" (e.g. slot level replicate/failover commands) can be
added with the same core.
## Slot migration jobs
Introduces the concept of a slot migration job. While active, a job
tracks a connection created by the source to the target over which the
contents of the slots are sent. This connection is used for control
messages as well as replicated slot data. Each job is given a 40
character random name to help uniquely identify it.
All jobs, including those that finished recently, can be observed using
the `CLUSTER GETSLOTMIGRATIONS` command.
## Replication
* Since the snapshot uses AOF, the snapshot can be replayed verbatim to
any replicas of the target node.
* We use the same proxying mechanism used for chaining replication to
copy the content sent by the source node directly to the replica nodes.
## `CLUSTER SYNCSLOTS`
To coordinate the state machine transitions across the two nodes, a new
command is added, `CLUSTER SYNCSLOTS`, that performs this control flow.
Each end of the slot migration connection is expected to install a read
handler in order to handle `CLUSTER SYNCSLOTS` commands:
* `ESTABLISH`: Begins a slot migration. Provides slot migration
information to the target and authorizes the connection to write to
unowned slots.
* `SNAPSHOT-EOF`: appended to the end of the snapshot to signal that the
snapshot is done being written to the target.
* `PAUSE`: informs the source node to pause whenever it gets the
opportunity
* `PAUSED`: added to the end of the client output buffer when the pause
is performed. The pause is only performed after the buffer shrinks below
a configurable size
* `REQUEST-FAILOVER`: requests the source to either grant or deny a
failover for the slot migration. The failover is only granted if the
target is still paused. Once a failover is granted, the pause is
refreshed for a short duration
* `FAILOVER-GRANTED`: sent to the target to inform that REQUEST-FAILOVER
is granted
* `ACK`: heartbeat command used to ensure liveness
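These control messages drive a state machine on each end of the connection. A toy Python model of the target-side transitions (state names are abbreviated from the diagram in this description; any unexpected event fails the migration):

```python
# Toy model: which SYNCSLOTS message advances which target-side state.
TARGET_TRANSITIONS = {
    ("WAIT_ACK", "ACK"): "RECEIVE_SNAPSHOT",
    ("RECEIVE_SNAPSHOT", "SNAPSHOT-EOF"): "WAITING_FOR_PAUSED",
    ("WAITING_FOR_PAUSED", "PAUSED"): "FAILOVER_REQUESTED",
    ("FAILOVER_REQUESTED", "FAILOVER-GRANTED"): "FAILOVER_GRANTED",
}

def step(state, message):
    # Anything unexpected (connection drop, OOM, ...) fails the migration.
    return TARGET_TRANSITIONS.get((state, message), "FAILED")

state = "WAIT_ACK"
for msg in ["ACK", "SNAPSHOT-EOF", "PAUSED", "FAILOVER-GRANTED"]:
    state = step(state, msg)
print(state)  # FAILOVER_GRANTED
```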
## Interactions with other commands
* FLUSHDB on the source node (which flushes the migrating slot) will
result in the source dropping the connection, which will flush the slot
on the target and reset the state machine back to the beginning. The
subsequent retry should very quickly succeed (it is now empty)
* FLUSHDB on the target will fail the slot migration. We can iterate
with better handling, but for now it is expected that the operator would
retry.
* Generally, FLUSHDB is expected to be executed cluster wide, so
preserving partially migrated slots doesn't make much sense
* SCAN and KEYS are filtered to avoid exposing importing slot data
## Error handling
* For any transient connection drops, the migration will be failed and
require the user to retry.
* If there is an OOM while reading from the import connection, we will
fail the import, which will drop the importing slot data
* If there is a client output buffer limit reached on the source node,
it will drop the connection, which will cause the migration to fail
* If at any point the export loses ownership or either node is failed
over, a callback will be triggered on both ends of the migration to fail
the import. The import will not reattempt with a new owner
* The two ends of the migration are routinely pinging each other with
SYNCSLOTS ACK messages. If at any point there is no interaction on the
connection for longer than `repl-timeout`, the connection will be
dropped, resulting in migration failure
* If a failover happens, we will drop keys in all unowned slots. The
migration does not persist through failovers and would need to be
retried on the new source/target.
## State machine
```
Target/Importing Node State Machine
─────────────────────────────────────────────────────────────
┌────────────────────┐
│SLOT_IMPORT_WAIT_ACK┼──────┐
└──────────┬─────────┘ │
ACK│ │
┌──────────────▼─────────────┐ │
│SLOT_IMPORT_RECEIVE_SNAPSHOT┼──┤
└──────────────┬─────────────┘ │
SNAPSHOT-EOF│ │
┌───────────────▼──────────────┐ │
│SLOT_IMPORT_WAITING_FOR_PAUSED┼─┤
└───────────────┬──────────────┘ │
PAUSED│ │
┌───────────────▼──────────────┐ │ Error Conditions:
│SLOT_IMPORT_FAILOVER_REQUESTED┼─┤ 1. OOM
└───────────────┬──────────────┘ │ 2. Slot Ownership Change
FAILOVER-GRANTED│ │ 3. Demotion to replica
┌──────────────▼─────────────┐ │ 4. FLUSHDB
│SLOT_IMPORT_FAILOVER_GRANTED┼──┤ 5. Connection Lost
└──────────────┬─────────────┘ │ 6. No ACK from source (timeout)
Takeover Performed│ │
┌──────────────▼───────────┐ │
│SLOT_MIGRATION_JOB_SUCCESS┼────┤
└──────────────────────────┘ │
│
┌─────────────────────────────────────▼─┐
│SLOT_IMPORT_FINISHED_WAITING_TO_CLEANUP│
└────────────────────┬──────────────────┘
Unowned Slots Cleaned Up│
┌─────────────▼───────────┐
│SLOT_MIGRATION_JOB_FAILED│
└─────────────────────────┘
Source/Exporting Node State Machine
─────────────────────────────────────────────────────────────
┌──────────────────────┐
│SLOT_EXPORT_CONNECTING├─────────┐
└───────────┬──────────┘ │
Connected│ │
┌─────────────▼────────────┐ │
│SLOT_EXPORT_AUTHENTICATING┼───────┤
└─────────────┬────────────┘ │
Authenticated│ │
┌─────────────▼────────────┐ │
│SLOT_EXPORT_SEND_ESTABLISH┼───────┤
└─────────────┬────────────┘ │
ESTABLISH command written│ │
┌─────────────────────▼─────────────┐ │
│SLOT_EXPORT_READ_ESTABLISH_RESPONSE┼──────┤
└─────────────────────┬─────────────┘ │
Full response read (+OK)│ │
┌────────────────▼──────────────┐ │ Error Conditions:
│SLOT_EXPORT_WAITING_TO_SNAPSHOT┼─────┤ 1. User sends CANCELMIGRATION
└────────────────┬──────────────┘ │ 2. Slot ownership change
No other child process│ │ 3. Demotion to replica
┌────────────▼───────────┐ │ 4. FLUSHDB
│SLOT_EXPORT_SNAPSHOTTING┼────────┤ 5. Connection Lost
└────────────┬───────────┘ │ 6. AUTH failed
Snapshot done│ │ 7. ERR from ESTABLISH command
┌───────────▼─────────┐ │ 8. Unpaused before failover completed
│SLOT_EXPORT_STREAMING┼──────────┤ 9. Snapshot failed (e.g. Child OOM)
└───────────┬─────────┘ │ 10. No ack from target (timeout)
PAUSE│ │ 11. Client output buffer overrun
┌──────────────▼─────────────┐ │
│SLOT_EXPORT_WAITING_TO_PAUSE┼──────┤
└──────────────┬─────────────┘ │
Buffer drained│ │
┌──────────────▼────────────┐ │
│SLOT_EXPORT_FAILOVER_PAUSED┼───────┤
└──────────────┬────────────┘ │
Failover request granted│ │
┌───────────────▼────────────┐ │
│SLOT_EXPORT_FAILOVER_GRANTED┼───────┤
└───────────────┬────────────┘ │
New topology received│ │
┌──────────────▼───────────┐ │
│SLOT_MIGRATION_JOB_SUCCESS│ │
└──────────────────────────┘ │
│
┌─────────────────────────┐ │
│SLOT_MIGRATION_JOB_FAILED│◄────────┤
└─────────────────────────┘ │
│
┌────────────────────────────┐ │
│SLOT_MIGRATION_JOB_CANCELLED│◄──────┘
└────────────────────────────┘
```
Co-authored-by: Binbin <binloveplay1314@qq.com>
---------
Signed-off-by: Binbin <binloveplay1314@qq.com>
Signed-off-by: Jacob Murphy <jkmurphy@google.com>
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Binbin <binloveplay1314@qq.com>
Co-authored-by: Ping Xie <pingxie@outlook.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
- Moves `build-config.json` to workflow dir to build old versions with
new configs.
- Enables contributors to test the release workflow on a private repo
by adding `github.event_name == 'workflow_dispatch' ||`
---------
Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
This change makes the binary build on the target Ubuntu version.
This PR also deprecates Ubuntu 18; Valkey will now support:
- X86:
- Ubuntu 20
- Ubuntu 22
- Ubuntu 24
- ARM:
- Ubuntu 20
- Ubuntu 22
Removed ARM ubuntu 24 as the action we are using for ARM builds does not
support Ubuntu 24.
---------
Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
This commit hopefully improves the formatting of the codebase by setting
ColumnLimit to 0 and hence stopping clang-format from trying to put as
much stuff in one line as possible.
This change enabled us to remove most of the `clang-format off`
directives and fix a bunch of lines that looked like this:
```c
#define KEY \
VALUE /* comment */
```
Additionally, one pair of `clang-format off` / `clang-format on` had
`clang-format off` as the second comment and hence didn't enable the
formatting for the rest of the file. This commit addresses this issue as
well.
Please tell me if anything in the changes seems off. If everything is
fine, I will add this commit to `.git-blame-ignore-revs` later.
---------
Signed-off-by: Mikhail Koviazin <mikhail.koviazin@aiven.io>
This PR utilizes the IO threads to execute commands in batches, allowing
us to prefetch the dictionary data in advance.
After making the IO threads asynchronous and offloading more work to
them in the first 2 PRs, the `lookupKey` function becomes the main
bottleneck, taking about 50% of the main-thread time (tested with the
SET command). This is because the Valkey dictionary is a straightforward
but inefficient chained hash implementation. While traversing the hash
linked lists, every access to either a dictEntry structure, pointer to
key, or a value object requires, with high probability, an expensive
external memory access.
### Memory Access Amortization
Memory Access Amortization (MAA) is a technique designed to optimize the
performance of dynamic data structures by reducing the impact of memory
access latency. It is applicable when multiple operations need to be
executed concurrently. The principle behind it is that for certain
dynamic data structures, executing operations in a batch is more
efficient than executing each one separately.
Rather than executing operations sequentially, this approach interleaves
the execution of all operations. This is done in such a way that
whenever a memory access is required during an operation, the program
prefetches the necessary memory and transitions to another operation.
This ensures that when one operation is blocked awaiting memory access,
other memory accesses are executed in parallel, thereby reducing the
average access latency.
We applied this method in the development of `dictPrefetch`, which takes
as parameters a vector of keys and dictionaries. It ensures that all
memory addresses required to execute dictionary operations for these
keys are loaded into the L1-L3 caches when executing commands.
Essentially, `dictPrefetch` is an interleaved execution of dictFind for
all the keys.
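The interleaving pattern can be illustrated in Python, with generators standing in for the per-key lookup state machines (the real `dictPrefetch` is C and issues CPU prefetch hints instead of yielding):

```python
def interleaved_find(d, keys):
    """Interleaved execution of a 'find' per key, modeled with generators.

    Step 1 of every lookup runs before step 2 of any lookup, so in the
    real C code the memory loads issued in step 1 overlap in flight.
    """
    def find(key):
        h = hash(key)      # step 1: hash the key (prefetch its bucket here)
        yield h
        yield d.get(key)   # step 2: walk the chain, now expected in cache

    lookups = [find(k) for k in keys]
    for g in lookups:      # advance every lookup through step 1 first
        next(g)
    return [next(g) for g in lookups]

print(interleaved_find({"a": 1, "b": 2}, ["a", "b", "x"]))  # [1, 2, None]
```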
**Implementation details**
When the main thread iterates over the `clients-pending-io-read`, for
clients with ready-to-execute commands (i.e., clients for which the IO
thread has parsed the commands), a batch of up to 16 commands is
created. Initially, the command's argv, which was allocated by the IO
thread, is prefetched into the main thread's L1 cache. Subsequently, all
the dict entries and values required for the commands are prefetched
from the dictionary before the command execution. Only then will the
commands be executed.
---------
Signed-off-by: Uri Yagelnik <uriy@amazon.com>
According to the Python documentation [1], invalid escape sequences in
string literals now generate a DeprecationWarning (SyntaxWarning as of
3.12), and in the future this will become a SyntaxError.
This change uses Python's raw string notation for regular expression
patterns to avoid the warning.
[1]: https://docs.python.org/3.10/library/re.html
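For example, with raw string notation the backslash reaches the regex engine without triggering the warning:

```python
import re

# Bad: "\d" in a plain string literal is an invalid escape sequence and
# triggers the warning (eventually a SyntaxError):
#     re.match("\d+", "42")
# Good: raw string notation passes the backslash through to the regex
# engine untouched:
m = re.match(r"\d+", "42")
print(m.group())  # 42
```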
Signed-off-by: haoqixu <hq.xu0o0@gmail.com>
The create-cluster script in utils is mainly used to create a test
cluster; turning on these options is useful for testing purposes.
Signed-off-by: Binbin <binloveplay1314@qq.com>
The core idea was to take a lot of the stuff from the C unity framework
and adapt it a bit here. Each file in the `unit` directory that starts
with `test_` is automatically assumed to be a test suite. Within each
file, all functions that start with `test_` are assumed to be a test.
See unit/README.md for details about the implementation.
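The discovery convention can be sketched as a scan over the unit directory (hypothetical Python; the framework's actual generator may differ, including the exact test-function signature being matched):

```python
import re
from pathlib import Path

def discover_tests(unit_dir):
    """Map each unit/test_*.c suite file to the test_* functions in it."""
    func_re = re.compile(r"^int (test_\w+)\(", re.MULTILINE)
    return {p.name: func_re.findall(p.read_text())
            for p in sorted(Path(unit_dir).glob("test_*.c"))}
```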
Instead of compiling basically a net new binary, the way the tests are
compiled is that the main valkey server is compiled as a static archive,
which we then compile the individual test files against to create a new
test executable. This is not all that important now, other than it makes
the compilation simpler, but what it will allow us to do is overwrite
functions in the archive to enable mocking for cross compilation unit
functions. There are also ways to enable mocking from within the same
compilation unit, but I don't know how important this is.
Tests are also written in one of two styles:
1. Including the header file and directly calling functions from the
archive.
2. Importing the original file, and then calling the functions. This
second approach is cool because we can call static functions. It won't
mess up the archive either.
---------
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
The output of this script becomes the contents of
`topics/module-api-ref.md` in the `valkey-doc` repo. (Updating it is a
manual process.)
The script uses git tags to find the version that first added an API
function. To preserve the history from old Redis OSS versions, for which
we don't keep git tags, a mapping is stored in a file.
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
1. Rename `REDIS_*` macros defined in object.c to `VALKEY_*`,
2. Rename `Redis` to `Valkey`, `redis-cli` to `valkey-cli` in logs
(i.e. put statement) and descriptions in object.c and
utils/create-cluster/README
---------
Signed-off-by: Sher Sun <sher.sun@huawei.com>
Co-authored-by: Sher Sun <sher.sun@huawei.com>
Rename the init script and a related `.tpl` file and rename variable
names inside (redis to valkey). Update variables in
`utils/install_server.sh`.
Fixes #354
Signed-off-by: hwware <wen.hui.ware@gmail.com>
The README on GitHub shows that the install script will help install a
Valkey server. However, the logs of the scripts and the variables in the
script still pointed to Redis, so this renames redis to valkey/server
accordingly.
Signed-off-by: Shivshankar-Reddy <shiva.sheri.github@gmail.com>
These scripts were updated previously, but some places were missed, so
this updates them to Valkey.
---------
Signed-off-by: Shivshankar-Reddy <shiva.sheri.github@gmail.com>
[related to](https://github.com/valkey-io/valkey/issues/230)
Adds workflows to build Valkey binaries and push them to S3, making
them available for download from the website.
The workflows can be triggered by pushing a release to the repo, or
manually by one of the maintainers.
Once the workflow triggers, it will generate a matrix of Jobs for the
platforms we need to build from `utils/releasetools/build-config.json`
and then the respective jobs are triggered. These jobs build Valkey
for the platforms we want to release and push the binaries to a
private S3 bucket.
---------
Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
Updated the release-tool scripts to Valkey. This PR covers only the
file names for the tarball; the location still needs to be updated
accordingly.
Signed-off-by: Shivshankar-Reddy <shiva.sheri.github@gmail.com>
The utils/lru/README incorrectly refers to REDIS_LRU_CLOCK_RESOLUTION
in server.h for modifying the LRU clock resolution. The actual
constant in server.h has been renamed to LRU_CLOCK_RESOLUTION, but the
README was not updated to reflect this change.
1. Replaced REDIS_LRU_CLOCK_RESOLUTION with LRU_CLOCK_RESOLUTION in the
text of utils/lru/README.
2. Updated references from "Redis" to "Valkey" within the same README
file as part of the ongoing rebranding efforts:)
---------
Signed-off-by: Sher Sun <sher.sun@huawei.com>
Co-authored-by: Sher Sun <sher.sun@huawei.com>
This includes comments used for module API documentation.
* Strategy for replacement: Regex search:
`(//|/\*| \*|#).* ("|\()?(r|R)edis( |\.|'|\n|,|-|\)|")(?!nor the names of its contributors)(?!Ltd.)(?!Labs)(?!Contributors.)`
* Don't edit copyright comments
* Replace "Redis version X.X" -> "Redis OSS version X.X" to distinguish
from newly licensed repository
* Replace "Redis Object" -> "Object"
* Exclude markdown for now
* Don't edit Lua scripting comments referring to redis.X API
* Replace "Redis Protocol" -> "RESP"
* Replace redis-benchmark, -cli, -server, -check-aof/rdb with "valkey-"
prefix
* Most other places, I use best judgement to either remove "Redis", or
replace with "the server" or "server"
Fixes #148
---------
Signed-off-by: Jacob Murphy <jkmurphy@google.com>
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
This PR covers changing redis.crt and redis.key to valkey certs for
TLS testing.
The files are generated by the gen-test-certs.sh script under
tests/tls/.
It also addresses review comments.
Signed-off-by: hwware <wen.hui.ware@gmail.com>
Co-authored-by: hwware <wen.hui.ware@gmail.com>
Remove trademarked wording from the configuration layer.
The following changes for the release notes:
1. Rename redis.conf to valkey.conf
2. Pre-filled config in the template config file: change pidfile to
`/var/run/valkey_6379.pid`
Signed-off-by: Harkrishn Patro <harkrisp@amazon.com>
Documentation references should use `Valkey` while server and cli
references are all under `valkey`.
---------
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
The new regular expression breaks the validator:
```
In file included from commands.c:10:
commands_with_reply_schema.def:14528:72: error: stray ‘\’ in program
14528 | struct jsonObjectElement MEMORY_STATS_ReplySchema_patternProperties__db\_\d+__properties_overhead_hashtable_main_elements[] = {
```
The reason is that special characters are not handled by to_c_name,
which causes special characters to appear in the structure name, making
the C file fail to compile.
Broken by #12913
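The fix amounts to making the name-sanitizing helper map every non-identifier character to something C accepts; a hypothetical sketch of such a `to_c_name` (the real helper lives in the command-code generator and may differ):

```python
import re

def to_c_name(s):
    """Map an arbitrary schema key (e.g. a regex pattern) to characters
    that are legal inside a C identifier."""
    return re.sub(r"[^A-Za-z0-9_]", "_", s)

# The MEMORY STATS pattern property "db\_\d+" contains '\' and '+',
# which are not valid in a C identifier:
print(to_c_name(r"db\_\d+"))  # db___d_
```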
Previously, when testing and debugging the cluster code, you had to
stop the cluster after making changes and then start it again. This
adds a restart option for ease of use.
In a long printf call with many placeholders, it's hard to see which argument
belongs to which placeholder.
The long printf-like calls in the INFO and CLIENT commands are rewritten into
pairs of (format, argument). These pairs are then rewritten to a single call with
a long format string and a long list of arguments, using a macro called FMTARGS.
The file `fmtargs.h` is added to the repo.
Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com>