Valkey 9 on Ubuntu 22.04 on Azure User Guide
Overview
This image runs Valkey 9 as a single-node, in-memory key-value store on Ubuntu 22.04 LTS. Valkey is the Linux Foundation fork of Redis 7.2 created after the Redis trademark and license change in 2024. The Valkey server speaks the Redis wire protocol, which means existing Redis client libraries such as redis-py for Python, ioredis and node-redis for Node, StackExchange.Redis for .NET, Jedis and Lettuce for Java, and go-redis for Go connect without code changes. Existing Redis command and script libraries also work unchanged.
Authentication is enforced on the network listener using the classic requirepass directive, and a unique admin password is generated on the first boot of every deployed virtual machine, so two virtual machines launched from the same gallery image never share credentials. Append only file persistence is enabled by default, with RDB snapshots as a secondary durability layer. The default eviction policy is allkeys-lru with a maxmemory sized to approximately 75 percent of the virtual machine's RAM at install time.
The image is intended for teams that want a production posture single node cache or data store on day one, without spending hours on packaging, systemd plumbing, persistence configuration, or password management. It is not a replicated Valkey cluster, it is not TLS encrypted out of the box, and it does not ship with Sentinel, Valkey Cluster sharding, or the RedisInsight companion UI. Section 14 documents the recommended path for adding TLS before you put real production traffic through the server.
The brand is lowercase cloudimg throughout this guide. All cloudimg URLs in this guide use the form https://www.cloudimg.co.uk.
Prerequisites
Before you deploy this image you need:
- A Microsoft Azure subscription where you can create resource groups, virtual networks, and virtual machines
- Azure role permissions equivalent to Contributor on the target resource group
- An SSH public key for first login to the admin user account
- A virtual network and subnet in the same region as the Azure Compute Gallery the image is published into, with an associated network security group
- The Azure CLI (az) version 2.50 or later installed locally if you intend to use the CLI deployment path in Section 2
- The cloudimg Valkey 9 offer enabled on your tenant in Azure Marketplace
Step 1: Deploy the Virtual Machine from the Azure Portal
Navigate to Marketplace in the Azure Portal, search for Valkey 9, and select the cloudimg publisher entry. Click Create to begin the wizard.
On the Basics tab choose your subscription, target resource group, and region. The region must match the region your Azure Compute Gallery exposes the image in. Set the virtual machine name. Choose SSH public key as the authentication type, set the username to a name of your choice, and paste your SSH public key. Standard_B2s is a reasonable starting size because Valkey itself is extremely lightweight at idle (around 5 megabytes of resident memory before you load data). The limiting factor is how much data you plan to hold in memory, so size the virtual machine's RAM to cover your working set plus headroom for the append only file rewrite buffer.
On the Disks tab the recommended OS disk type is Standard SSD. Leave the OS disk size at the default. You can attach a separate Premium SSD data disk now if you intend to move the Valkey data directory to it later, or you can do that after the server is running by following Section 13.
On the Networking tab select your existing virtual network and subnet. Attach a network security group that allows inbound TCP 22 from your management IP range and inbound TCP 6379 only from the virtual network CIDR or the specific application server subnets that need to talk to the server. Do not expose 6379 to the public internet. Valkey protected mode plus a password offers authentication only, there is no transport encryption without the TLS configuration described in Section 14, and unauthenticated Valkey or Redis instances on the public internet are one of the most scanned and exploited attack surfaces in the cloud.
On the Management, Monitoring, and Advanced tabs the defaults are appropriate. Click Review + create, wait for validation to pass, then click Create. Deployment takes around two minutes.
Step 2: Deploy the Virtual Machine from the Azure CLI
If you prefer the command line, use the gallery image resource identifier as the source. The exact resource identifier is published on your Partner Center plan. A representative invocation:
RG="valkey-prod"
LOCATION="eastus"
VM_NAME="valkey-01"
ADMIN_USER="valkeyops"
GALLERY_IMAGE_ID="/subscriptions/<sub-id>/resourceGroups/azure-cloudimg/providers/Microsoft.Compute/galleries/cloudimgGallery/images/valkey-9-ubuntu-22-04/versions/1.0.20260417"
SSH_KEY="$(cat ~/.ssh/id_rsa.pub)"
az group create --name "$RG" --location "$LOCATION"
az network vnet create \
--resource-group "$RG" \
--name valkey-vnet \
--address-prefix 10.30.0.0/16 \
--subnet-name valkey-subnet \
--subnet-prefix 10.30.1.0/24
az network nsg create --resource-group "$RG" --name valkey-nsg
az network nsg rule create \
--resource-group "$RG" --nsg-name valkey-nsg \
--name allow-ssh-mgmt --priority 100 \
--source-address-prefixes "<your-mgmt-cidr>" \
--destination-port-ranges 22 --access Allow --protocol Tcp
az network nsg rule create \
--resource-group "$RG" --nsg-name valkey-nsg \
--name allow-valkey-vnet --priority 110 \
--source-address-prefixes 10.30.0.0/16 \
--destination-port-ranges 6379 --access Allow --protocol Tcp
az vm create \
--resource-group "$RG" \
--name "$VM_NAME" \
--image "$GALLERY_IMAGE_ID" \
--size Standard_B2s \
--storage-sku StandardSSD_LRS \
--admin-username "$ADMIN_USER" \
--ssh-key-values "$SSH_KEY" \
--vnet-name valkey-vnet --subnet valkey-subnet \
--nsg valkey-nsg \
--public-ip-address "" \
--os-disk-size-gb 32
The --public-ip-address "" flag keeps the server off the public internet. Use a bastion host or your existing private connectivity to reach it.
Step 3: Connect via SSH
After deployment, find the private IP of the new virtual machine. From a host inside the same virtual network:
ssh valkeyops@<private-ip>
The first login may take a few seconds while cloud-init finalises. Once you have a shell, the server has already been started by systemd and the first boot oneshot has already generated the per VM password.
Step 4: Retrieve the Valkey Credentials
The admin password is written by the valkey-firstboot.service systemd oneshot the very first time the virtual machine boots. It lives in a single file, readable only by root:
sudo cat /stage/scripts/valkey-credentials.log
You will see something like:
username=default
password=4f8aB2dQ9pK1xVnE7sT3uYrL6cZw0jHm
port=6379
sample_connect=valkey-cli -a "4f8aB2dQ9pK1xVnE7sT3uYrL6cZw0jHm" PING
These credentials are unique to this virtual machine. Store them somewhere safe immediately, because the next thing you do should be to rotate them per Section 11. The first boot oneshot does not run again on subsequent reboots, so this file is your only readable copy of the generated password until you rotate it.
Step 5: Server Components
The deployed image contains the following components:
| Component | Version | Purpose |
|---|---|---|
| Valkey | 9.0.3 | Single node in memory key value store |
| Ubuntu | 22.04 LTS | Base operating system |
| systemd units | valkey-server.service, valkey-firstboot.service | Process supervision and first boot password generation |
The Valkey server process is /usr/bin/valkey-server /etc/valkey/valkey.conf running under the dedicated valkey system user, with no login shell, started by the cloudimg authored valkey-server.service unit with Type=notify for tight systemd integration. Valkey is compiled from the official upstream 9.0.3 source release with USE_SYSTEMD=yes and BUILD_TLS=yes, which means customers can enable native TLS on port 6380 per Section 14 without recompiling. The source tree lives at /usr/src/valkey-9.0.3 so operators who want to rebuild with different flags can do so in place. A companion valkey-firstboot.service oneshot runs before valkey-server.service on the customer's first boot and is responsible for replacing the placeholder requirepass in valkey.conf with a VM unique password.
Step 6: Filesystem Layout
| Path | Owner | Purpose |
|---|---|---|
| /usr/bin/valkey-server | root:root | Valkey server binary (compiled from upstream source, installed via make install PREFIX=/usr) |
| /usr/bin/valkey-cli | root:root | Command line client |
| /usr/src/valkey-9.0.3/ | root:root | Upstream source tree retained for in place rebuilds |
| /etc/valkey/valkey.conf | valkey:valkey 0640 | Server configuration including requirepass |
| /etc/systemd/system/valkey-server.service | root:root 0644 | cloudimg authored systemd unit with Type=notify and firstboot ordering |
| /etc/systemd/system/valkey-firstboot.service | root:root 0644 | First boot oneshot unit |
| /usr/local/sbin/valkey-firstboot.sh | root:root 0750 | First boot logic (password generation, config substitution) |
| /usr/local/sbin/valkey-start.sh | root:root 0755 | Customer helper wrapping systemctl start |
| /usr/local/sbin/valkey-stop.sh | root:root 0755 | Customer helper wrapping systemctl stop |
| /var/lib/valkey/ | valkey:valkey 0750 | Data directory: dump.rdb plus appendonlydir/ |
| /var/log/valkey/ | valkey:valkey 0750 | Server logs |
| /run/valkey/valkey-server.pid | valkey:valkey | PID file, tmpfs, regenerated at each start |
| /stage/scripts/valkey-credentials.log | root:root 0600 | Generated admin password, readable by root only |
Step 7: Start, Stop, and Check Status
The server is started by systemd at boot. Manage it as follows:
# Status
sudo systemctl status valkey-server.service
# Stop
sudo systemctl stop valkey-server.service
# Start
sudo systemctl start valkey-server.service
# Restart
sudo systemctl restart valkey-server.service
# Tail live logs
sudo journalctl -u valkey-server.service -f
To check the first boot oneshot:
sudo systemctl status valkey-firstboot.service
sudo journalctl -u valkey-firstboot.service
The oneshot is expected to remain in the active (exited) state after a successful first boot. It is gated by /var/lib/valkey/.firstboot-done and will not run again.
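The sentinel-gated flow described above can be sketched as a small shell function. This is a hypothetical reconstruction, not the shipped /usr/local/sbin/valkey-firstboot.sh, and it takes its paths as arguments so it can be exercised outside the image:

```shell
# Hypothetical sketch of the first boot logic; the shipped script may
# differ in detail. Arguments: config file, credentials file, sentinel file.
firstboot() {
  conf="$1"; cred="$2"; sentinel="$3"

  # Idempotence gate: the oneshot must never run twice
  if [ -e "$sentinel" ]; then
    return 0
  fi

  # Generate a 32 character alphanumeric password
  pw="$(tr -dc 'A-Za-z0-9' </dev/urandom | head -c 32)"

  # Swap the placeholder requirepass for the per VM value
  sed -i "s|^requirepass .*|requirepass ${pw}|" "$conf"

  # Write the credentials file with restrictive permissions (0600)
  install -m 0600 /dev/null "$cred"
  cat >"$cred" <<EOF
username=default
password=${pw}
port=6379
sample_connect=valkey-cli -a "${pw}" PING
EOF

  touch "$sentinel"
}

# On the image, the oneshot would invoke something equivalent to:
# firstboot /etc/valkey/valkey.conf /stage/scripts/valkey-credentials.log /var/lib/valkey/.firstboot-done
```

The sentinel check at the top is what makes a repeat invocation a no-op, matching the behaviour documented for valkey-firstboot.service.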
Step 8: Connect with valkey-cli
Export the generated password once per shell session, then use valkey-cli:
export VALKEY_PASSWORD="$(sudo awk -F= '/^password=/ {print $2}' /stage/scripts/valkey-credentials.log)"
valkey-cli -a "$VALKEY_PASSWORD" PING
Expected output:
PONG
Inspect the server's self reported identity:
valkey-cli -a "$VALKEY_PASSWORD" INFO server | head -20
The output confirms the Valkey server version (valkey_version:9.0.3), the Redis wire protocol compatibility version (redis_version:7.2.4), and later lines expose the process ID, the operating system, and the uptime in seconds. The redis_mode line should report standalone, confirming this is a single node deployment rather than a Valkey Cluster shard.
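If you script against INFO output, note that it is a simple text block of key:value lines with # comment headers. A minimal parser, using an illustrative sample reply rather than a live connection:

```python
def parse_info(raw: str) -> dict:
    """Parse the key:value lines of an INFO reply, skipping section headers."""
    fields = {}
    for line in raw.splitlines():
        line = line.strip()
        # Blank lines and "# Section" headers carry no data
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        fields[key] = value
    return fields


# Sample fields as described above; a live reply contains many more lines
sample = """# Server
valkey_version:9.0.3
redis_version:7.2.4
redis_mode:standalone
uptime_in_seconds:120
"""

info = parse_info(sample)
print(info["valkey_version"])  # 9.0.3
print(info["redis_mode"])      # standalone
```

Most client libraries already expose INFO as a dictionary, so a hand-rolled parser like this is mainly useful in shell-adjacent tooling.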
Step 9: Basic Key Value Operations
Valkey implements the full Redis command family. The following round trip exercises strings, counters, and lists:
valkey-cli -a "$VALKEY_PASSWORD" SET mykey "Hello Valkey"
valkey-cli -a "$VALKEY_PASSWORD" GET mykey
valkey-cli -a "$VALKEY_PASSWORD" INCR counter
valkey-cli -a "$VALKEY_PASSWORD" INCR counter
valkey-cli -a "$VALKEY_PASSWORD" INCR counter
valkey-cli -a "$VALKEY_PASSWORD" LPUSH mylist "item1" "item2" "item3"
valkey-cli -a "$VALKEY_PASSWORD" LRANGE mylist 0 -1
valkey-cli -a "$VALKEY_PASSWORD" DEL mykey counter mylist
Expected output:
OK
"Hello Valkey"
(integer) 1
(integer) 2
(integer) 3
(integer) 3
1) "item3"
2) "item2"
3) "item1"
(integer) 3
Confirm that append only file persistence is active and that the configured maxmemory policy matches your expectation:
valkey-cli -a "$VALKEY_PASSWORD" CONFIG GET appendonly
valkey-cli -a "$VALKEY_PASSWORD" CONFIG GET maxmemory-policy
valkey-cli -a "$VALKEY_PASSWORD" CONFIG GET maxmemory
Expected output (the maxmemory byte value varies with virtual machine RAM):
1) "appendonly"
2) "yes"
1) "maxmemory-policy"
2) "allkeys-lru"
1) "maxmemory"
2) "<bytes>"
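The Overview notes that maxmemory is sized to approximately 75 percent of the virtual machine's RAM at install time. As a sanity check, you can compute that ceiling yourself from /proc/meminfo and compare it against the CONFIG GET maxmemory value above (the arithmetic is a sketch; the installer may round differently):

```shell
# Compute 75 percent of total RAM in bytes, for comparison with
# the value reported by CONFIG GET maxmemory
total_kb="$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)"
expected_bytes=$(( total_kb * 1024 * 75 / 100 ))
echo "expected maxmemory ~= ${expected_bytes} bytes"
```

A large divergence between this figure and the configured value usually means the virtual machine has been resized since deployment, in which case you should adjust maxmemory to match the new RAM.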
Step 10: Connect from Application Servers
Because Valkey speaks the Redis wire protocol, existing Redis client libraries connect without code changes. The examples below assume the virtual machine's private IP is 10.30.1.4 and you have loaded the generated password into the environment variable VALKEY_PASSWORD on your application server.
Python using redis:
import os, redis
r = redis.Redis(
host="10.30.1.4",
port=6379,
password=os.environ["VALKEY_PASSWORD"],
decode_responses=True,
)
r.set("greeting", "hello from redis-py")
print(r.get("greeting"))
Node.js using ioredis:
import Redis from "ioredis";
const client = new Redis({
host: "10.30.1.4",
port: 6379,
password: process.env.VALKEY_PASSWORD,
});
await client.set("greeting", "hello from ioredis");
console.log(await client.get("greeting"));
Java using Jedis:
import redis.clients.jedis.JedisPooled;
JedisPooled client = new JedisPooled(
"10.30.1.4", 6379, null,
System.getenv("VALKEY_PASSWORD")
);
client.set("greeting", "hello from jedis");
System.out.println(client.get("greeting"));
From a shell on your application server, a quick connectivity test:
valkey-cli -h 10.30.1.4 -p 6379 -a "$VALKEY_PASSWORD" PING
Expected output:
PONG
If you do not have the Valkey APT package on the application server, the protocol compatible redis-cli binary from a Redis installation also works, as does any other Redis client.
Step 11: Rotate the Admin Password
The generated requirepass lives in /etc/valkey/valkey.conf. To rotate it:
NEW_PW="$(tr -dc 'A-Za-z0-9' </dev/urandom | head -c 32)"
valkey-cli -a "$VALKEY_PASSWORD" CONFIG SET requirepass "$NEW_PW"
sudo sed -i "s|^requirepass .*|requirepass ${NEW_PW}|" /etc/valkey/valkey.conf
sudo install -o root -g root -m 0600 /dev/null /stage/scripts/valkey-credentials.log
sudo tee /stage/scripts/valkey-credentials.log >/dev/null <<EOF
username=default
password=${NEW_PW}
port=6379
sample_connect=valkey-cli -a "${NEW_PW}" PING
EOF
export VALKEY_PASSWORD="$NEW_PW"
valkey-cli -a "$VALKEY_PASSWORD" PING
The CONFIG SET requirepass call changes the running password without restarting the server, so in flight client connections are not dropped. The sed call updates the on disk configuration so the new password survives a service restart. Confirm with a fresh PING, then distribute the new password to every application that connects to this server.
Step 12: Create Named ACL Users for Application Access
The default user carries the master password and full command access. For any application that connects to this server, create a named ACL user with only the commands and keyspaces it needs:
APP_USER="orders_api"
APP_PW="$(tr -dc 'A-Za-z0-9' </dev/urandom | head -c 32)"
valkey-cli -a "$VALKEY_PASSWORD" ACL SETUSER "$APP_USER" on ">$APP_PW" ~orders:* +@read +@write +@string +@hash +@list -@admin -@dangerous
valkey-cli -a "$VALKEY_PASSWORD" ACL LIST
sudo cp /etc/valkey/valkey.conf /etc/valkey/valkey.conf.bak
sudo tee -a /etc/valkey/valkey.conf >/dev/null <<EOF
# ACL users added by cloudimg Section 12
user ${APP_USER} on >${APP_PW} ~orders:* +@read +@write +@string +@hash +@list -@admin -@dangerous
EOF
Test the new user:
valkey-cli -h 127.0.0.1 -p 6379 --user "$APP_USER" -a "$APP_PW" SET orders:42 '{"status":"pending"}'
valkey-cli -h 127.0.0.1 -p 6379 --user "$APP_USER" -a "$APP_PW" GET orders:42
Expected output:
OK
"{\"status\":\"pending\"}"
Attempts to read or write keys outside the orders: prefix, or to run administrative commands, will be rejected with a NOPERM error. This is the recommended pattern for production: every application gets its own ACL user, its own password, and the minimum command and keyspace scope it needs.
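ACL key patterns such as ~orders:* use the same glob syntax as the KEYS command. Python's fnmatchcase is a close approximation of that matching, so you can preview which keys a pattern covers before committing to it (the candidate keys below are illustrative):

```python
from fnmatch import fnmatchcase

# Valkey ACL key patterns are glob style; fnmatchcase approximates the match
pattern = "orders:*"
candidates = ["orders:42", "orders:42:items", "users:7", "ordersx"]

covered = [key for key in candidates if fnmatchcase(key, pattern)]
print(covered)  # ['orders:42', 'orders:42:items']
```

Note that the glob star crosses colon boundaries, so ~orders:* grants access to every key under the prefix, including nested ones like orders:42:items.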
Step 13: Move Data to an Attached Premium Disk
If you attached a Premium SSD data disk at deploy time, format and mount it, then move /var/lib/valkey onto it:
# List disks to find the new device, typically /dev/sdc on Azure
lsblk
# Format and mount
sudo mkfs.ext4 -L valkey-data /dev/sdc
sudo mkdir -p /mnt/valkey-data
sudo mount /dev/sdc /mnt/valkey-data
echo "LABEL=valkey-data /mnt/valkey-data ext4 defaults,nofail 0 2" | sudo tee -a /etc/fstab
# Stop the server, copy the data, replace the directory with a bind mount
sudo systemctl stop valkey-server.service
sudo rsync -aHAX --numeric-ids /var/lib/valkey/ /mnt/valkey-data/
sudo mv /var/lib/valkey /var/lib/valkey.prev
sudo mkdir -p /var/lib/valkey
sudo chown valkey:valkey /var/lib/valkey
sudo mount --bind /mnt/valkey-data /var/lib/valkey
echo "/mnt/valkey-data /var/lib/valkey none bind 0 0" | sudo tee -a /etc/fstab
# Restart
sudo systemctl start valkey-server.service
valkey-cli -a "$VALKEY_PASSWORD" PING
sudo rm -rf /var/lib/valkey.prev
The append only file and RDB snapshots now live on the Premium SSD, with the server still finding them at /var/lib/valkey because of the bind mount. This pattern keeps the Valkey configuration path stable while letting you scale storage independently of the OS disk.
Step 14: Enable TLS Before Production
Valkey supports native TLS on a separate port (conventionally 6380). For production traffic outside a trusted private network, terminate TLS on Valkey itself rather than adding a proxy. The rough shape:
# Place your server certificate, private key, and the CA bundle in /etc/valkey/tls/
sudo mkdir -p /etc/valkey/tls
sudo chown valkey:valkey /etc/valkey/tls
sudo chmod 0750 /etc/valkey/tls
# Copy valkey.crt, valkey.key, ca.crt into /etc/valkey/tls/ with 0640 valkey:valkey
sudo tee -a /etc/valkey/valkey.conf >/dev/null <<'EOF'
# TLS configuration added by cloudimg Section 14
tls-port 6380
tls-cert-file /etc/valkey/tls/valkey.crt
tls-key-file /etc/valkey/tls/valkey.key
tls-ca-cert-file /etc/valkey/tls/ca.crt
tls-auth-clients no
EOF
sudo systemctl restart valkey-server.service
valkey-cli -a "$VALKEY_PASSWORD" --tls \
--cert /etc/valkey/tls/valkey.crt \
--key /etc/valkey/tls/valkey.key \
--cacert /etc/valkey/tls/ca.crt \
-h <hostname-matching-cert> -p 6380 PING
Once TLS is verified working, open an NSG rule for TCP 6380 to your application subnets, then disable the plaintext listener entirely by setting port 0 in valkey.conf, or at minimum restrict it to local administration by binding it to 127.0.0.1. If you need mutual TLS, set tls-auth-clients yes so the server requires client certificates on 6380. Do not leave 6379 open on any interface that external traffic can reach once TLS is enabled.
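If you want to rehearse this procedure before real CA issued certificates are available, a throwaway self-signed certificate can stand in for all three files. The CN below is an assumption for illustration; it must match the hostname you pass to valkey-cli with -h:

```shell
# Throwaway self-signed certificate for lab testing only; use CA issued
# certificates for production traffic
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout valkey.key -out valkey.crt \
  -subj "/CN=valkey-01.internal"

# Self-signed: the certificate acts as its own CA bundle
cp valkey.crt ca.crt
```

Copy the three files into /etc/valkey/tls/ with the ownership and mode described above, and remember that clients verifying against this ca.crt will only trust this one server.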
Step 15: Troubleshooting
Cannot connect remotely. Verify the server is bound to 0.0.0.0 rather than 127.0.0.1 by running sudo ss -tlnp | grep :6379. Verify your network security group allows the source subnet on TCP 6379. Verify the client is using the correct password by running the same command locally with valkey-cli -a "$VALKEY_PASSWORD" PING. A common misconfiguration elsewhere is protected-mode yes combined with a blank password, which makes the server refuse remote connections; the cloudimg image ships with both a bind of 0.0.0.0 and a non blank password, so this combination works out of the box.
High memory pressure or OOM kills. Check valkey-cli -a "$VALKEY_PASSWORD" INFO memory. The used_memory_human line tells you current footprint, the maxmemory_human line tells you the configured ceiling, and the maxmemory_policy line tells you how Valkey evicts when it hits the ceiling. The default allkeys-lru policy evicts the least recently used keys across all databases. If you need different behaviour, set a different policy with CONFIG SET maxmemory-policy <policy> and persist it to /etc/valkey/valkey.conf. Consult the Valkey documentation for the full policy list.
Append only file grows unbounded. The AOF rewrites automatically when it doubles in size past the auto-aof-rewrite-min-size threshold of 64 megabytes. If disk pressure is building, check valkey-cli -a "$VALKEY_PASSWORD" INFO persistence. You can force a rewrite with valkey-cli -a "$VALKEY_PASSWORD" BGREWRITEAOF. If AOF rewrites repeatedly fail, check /var/log/valkey/valkey-server.log for the underlying reason, typically disk full or permission issues on /var/lib/valkey/appendonlydir/.
Server refuses connections after reboot. Confirm the first boot oneshot completed by running sudo systemctl status valkey-firstboot.service and checking that the status is active (exited) and the sentinel /var/lib/valkey/.firstboot-done exists. If the oneshot failed, check sudo journalctl -u valkey-firstboot.service for the reason, fix the cause, sudo rm /var/lib/valkey/.firstboot-done, then sudo systemctl start valkey-firstboot.service followed by sudo systemctl start valkey-server.service.
Step 16: Security Recommendations
- Never expose TCP 6379 or 6380 to the public internet. Unauthenticated or weakly authenticated Valkey and Redis instances are a top target for opportunistic scanners, and even a strong password does not substitute for network segmentation.
- Rotate the generated admin password on first login per Section 11. The file /stage/scripts/valkey-credentials.log is your only readable copy of the generated password, so read it, rotate it, and store the new value in your secrets manager.
- Create named ACL users per application per Section 12 rather than sharing the default user's credentials. Grant only the commands and keyspaces each application needs.
- Enable TLS per Section 14 before exposing Valkey to anything other than a tightly scoped private network. Prefer Valkey native TLS on 6380 over a sidecar proxy for the lowest latency path.
- Restrict dangerous commands for application ACL users. -@dangerous blocks KEYS, FLUSHALL, FLUSHDB, DEBUG, SHUTDOWN, CONFIG, and related commands. The default user retains access to these for operator use.
- Enable Azure Monitor alerting on the virtual machine's memory pressure and on the /var/lib/valkey disk fill percentage. An AOF that cannot rewrite because the disk is full is a silent outage waiting to happen.
Step 17: Support and Licensing
Valkey is distributed under the three clause BSD license by the Linux Foundation, and the cloudimg image builds it unmodified from the official upstream source release, as described in Section 5. Ubuntu 22.04 LTS is distributed by Canonical under the terms recorded in /usr/share/doc/base-files/copyright on the running server.
The cloudimg image itself is distributed under the Microsoft Azure Marketplace standard contract terms, with PAYG pricing visible on the Partner Center plan. cloudimg provides best effort image level support: questions about the image, the first boot mechanism, the systemd drop ins, the ACL user pattern, or the TLS configuration in Section 14 go to https://www.cloudimg.co.uk/support. Upstream Valkey bug reports go to https://github.com/valkey-io/valkey/issues.