Solana Localnet

Solana has three public clusters: Mainnet, Testnet and Devnet. All of these require coordination with other parties in one way or another, and it becomes hard for devs to reset state when developing locally.

The Hayek Validator Toolkit includes a fourth cluster named Localnet, which runs 100% inside a Docker container, and spins up a fully functioning Solana network with multiple coordinating validators in seconds.

Benefits

Exploration: The main benefit of the Solana Localnet is that it promotes exploration without fear. It is a disposable environment that is easy to set up, take down, change, and break as much as you want.

Speed: Launching the Solana Localnet on your workstation takes seconds, and coordination between nodes is instant.

State: Resetting the state of a validator node, spinning up a new validator that can join Localnet, or turning off validators is near instant. Even better, you can completely delete the Docker Localnet cluster and spin it up again if you feel you have corrupted its state.

Automation: One of the Localnet nodes is an Ansible Control node, which lets you run Ansible scripts against your Localnet nodes as well as against Mainnet and Testnet.
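
For example, once inside the Ansible Control node you can hit all Localnet hosts with an ad-hoc Ansible command. This is just a sketch: the inventory file name below is hypothetical, but the host names match the containers described in the Host Inventory section.

# assuming an inventory file listing host-alpha, host-bravo and host-charlie
ansible all -i localnet-inventory.yml -m ping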

Workstation Setup

Before running the Hayek Validator Kit Solana Localnet, you must set up your workstation for success.

All the configurations related to the Hayek Validator Kit are in this GitHub repo, which you will have to clone locally if you have not done so already:

git clone https://github.com/team-supersafe/hayek-validator-kit.git

You should get familiar with the contents of the repo. The Localnet cluster is defined in the Dockerfile and docker-compose.yml files under the solana-localnet folder.
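
For instance, after cloning you can take a quick look at what drives Localnet before starting anything:

cd hayek-validator-kit/solana-localnet
ls   # Dockerfile, docker-compose.yml, start-localnet-from-outside-ide.sh, among other files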

The Localnet Cluster

Pre-Provisioned Demo Key Set

We have pre-provisioned a set of keys called demo1, intended only for demonstration and debugging purposes.

Name                 Address
validator identity   demoneTKvfN3Bx2jhZoAHhNbJAzt2rom61xyqMe5Fcw
vote account         demo52s9s1foFXgnbVa8vYQM8GS9XRsJ3aMpus1rNnb
user stake account   demoMwLKQwfPZpjrbGG7Ed6vbXizxFDCp5srVd1Hqky

The demo1 identity key will be running with 200k SOL staked in Localnet every time you start the cluster. These 200k SOL represent roughly 16% of all cluster stake.
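
Once Localnet is up (see Running Localnet below), you can verify this from your workstation with the Solana CLI, for example:

# inspect the demo1 user stake account
solana -ul stake-account demoMwLKQwfPZpjrbGG7Ed6vbXizxFDCp5srVd1Hqky
# list all validators with their share of the cluster stake
solana -ul validators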

Host Inventory

The Localnet cluster consists of the following containers:


gossip-entrypoint - SSH port binding: localhost:9022

The cluster's Gossip protocol entry point node. Any validator can use this to join the network and synchronize with other validators.

  • It provides the Genesis block for Solana Localnet

  • Kick-starts PoH (Proof of History)

  • Epoch = 750 slots (~5 min)

  • Mostly for cluster boilerplate and not meant to be modified

host-alpha - SSH port binding: localhost:9122

Running the demo1 validator key set with:

  • 200K delegated SOL (~16% of all cluster stake)

host-bravo - SSH port binding: localhost:9222

A validator-ready container without a validator key set. It does not have any validator running, but the tooling is already installed.

host-charlie - SSH port binding: localhost:9322

A naked Ubuntu 24.04 container. This guy is not ready for anything, which makes it good for testing bare-bones provisioning scripts.

ansible-control - Not SSH bound - see how to connect below

Your official sysadmin automation environment:

  • Solana CLI and Ansible installed

  • Access to Solana Mainnet, Testnet and Localnet:

# For Mainnet connectivity
solana -um ***
# For Testnet connectivity
solana -ut ***
# For Localnet connectivity
solana -ul ***   # -ul is shorthand for: solana --url localhost
  • Connect to any Localnet container via SSH.

After the cluster is provisioned, the staked SOL delegated to the demo1 key set will become active at the beginning of Epoch 1 (after ~5 minutes). Then the demo1 validator will start voting and move from delinquent to non-delinquent at the beginning of Epoch 2.
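
You can watch this progression from your workstation or from ansible-control:

solana -ul epoch-info    # current epoch and slot progress
solana -ul validators    # demo1 shows as delinquent until it starts voting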

Using Explorers

You can use the Solana Explorer and Solscan apps to explore any accounts in your Localnet cluster using the demo1 addresses listed above.

Running Localnet

We use Docker to run Localnet:

  1. IDE Option (recommended): Open the repo in VSCode or another popular IDE and select "Reopen in Container" or a similar option. This will use the Dev Containers extension to automatically run the service containers defined in docker-compose.yml and trigger the build of the images in the Dockerfile if needed.

  2. Terminal Option: Another option, for those VSCode haters, is to run Localnet directly from the terminal:

    cd solana-localnet
    ./start-localnet-from-outside-ide.sh

Congratulations! You are now running Solana Localnet, connected to your Ansible Control and ready to make a mess of your Localnet playground.
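
A quick sanity check that everything came up (container names as described in the Host Inventory section above):

cd solana-localnet
docker compose ps     # should list gossip-entrypoint, host-alpha, host-bravo, host-charlie and ansible-control
solana -ul validators # the entrypoint and demo1 validators should appear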

Resetting Localnet

At times, as you corrupt the state of the Docker containers running Localnet, you may need to reset your Docker Localnet cluster to start fresh. You can accomplish this by selecting the "Reopen in Container" or "Rebuild Container" options within VSCode.

You can also stop the cluster from Docker with:

cd solana-localnet
docker compose down
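
To bring the cluster back up from the terminal, re-run the start script or use Docker Compose directly (whether a plain docker compose up is enough depends on the compose setup, so the start script is the safer bet):

cd solana-localnet
./start-localnet-from-outside-ide.sh   # or: docker compose up -d --build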

SSH into nodes

From Workstation

ssh -p 9122 sol@localhost # ssh into alpha host
ssh -p 9222 sol@localhost # ssh into bravo host
ssh -p 9322 sol@localhost # ssh into charlie host

Ports are mapped from your localhost to each container:

  • localhost:9022 → gossip-entrypoint:22

  • localhost:9122 → host-alpha:22

  • localhost:9222 → host-bravo:22

  • localhost:9322 → host-charlie:22
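
The gossip-entrypoint container is mapped too, so assuming it uses the same sol user as the other hosts, you can reach it the same way:

ssh -p 9022 sol@localhost # ssh into gossip-entrypoint node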

From Ansible Control

ssh sol@host-alpha # ssh into host-alpha node
ssh sol@host-bravo # ssh into host-bravo node
ssh sol@host-charlie # ssh into host-charlie node

Using the Solana CLI in Localnet

We will use the solana gossip and solana validators commands to illustrate how to correctly configure the RPC URL depending on where we are running the commands.

Directly from our workstation

solana --url localhost gossip # or just solana -ul gossip
solana --url localhost validators # or just solana -ul validators

From ansible-control

solana --url localhost gossip # or just solana -ul gossip
solana --url localhost validators # or just solana -ul validators

From a validator node

When logging into one of the validator hosts (host-alpha or host-bravo), the RPC_URL environment variable is already set (during cluster provisioning) to point to the gossip-entrypoint host, so we can use that variable with the Solana CLI, like so:

# This variable is already set when provisioning the cluster
# RPC_URL=http://gossip-entrypoint:8899

solana --url $RPC_URL gossip # or just solana -u $RPC_URL gossip
solana --url $RPC_URL validators # or just solana -u $RPC_URL validators

Other common validator CLI commands can be found HERE.
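
As a taste, here are a few commands that tend to be handy from a validator host (a sketch reusing the RPC_URL variable and the /mnt/ledger path from this setup):

solana -u $RPC_URL epoch-info                  # epoch and slot progress
solana -u $RPC_URL block-production            # blocks produced vs. skipped per validator
agave-validator --ledger /mnt/ledger monitor   # live view of the local validator process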

Cluster Example

Entrypoint Node

The entrypoint container can be started with the following command:

solana-test-validator \
    --slots-per-epoch 750 \
    --limit-ledger-size 500000000 \
    --dynamic-port-range 8000-8020 \
    --rpc-port 8899 \
    --bind-address 0.0.0.0 \
    --gossip-host $(hostname -i | awk '{print $1}') \
    --gossip-port 8001 \
    --reset

... and it will output the following:

2025-04-01 10:04:00 Notice! No wallet available. `solana airdrop` localnet SOL after creating one
2025-04-01 10:04:00 
2025-04-01 10:04:00 Ledger location: test-ledger
2025-04-01 10:04:00 Log: test-ledger/validator.log
2025-04-01 10:04:00 Initializing...
2025-04-01 10:04:05 Waiting for fees to stabilize 1...
2025-04-01 10:04:05 Connecting...
2025-04-01 10:04:05 Identity: 3jHsYXrWP7GrmBhzkGHp84EEwAvLtKnD6SZC9r6LM3Ji
2025-04-01 10:04:05 Genesis Hash: 2d6eCexwpnhp66pcKidbTDaczqnnG6zBiHRK196MoFvn
2025-04-01 10:04:05 Version: 2.1.16
2025-04-01 10:04:05 Shred Version: 64483
2025-04-01 10:04:05 Gossip Address: 172.21.0.3:8001
2025-04-01 10:04:05 TPU Address: 172.21.0.3:8003
2025-04-01 10:04:05 JSON RPC URL: http://172.21.0.3:8899
2025-04-01 10:04:05 WebSocket PubSub URL: ws://172.21.0.3:8900

# ENTRYPOINT_IDENTITY_PUBKEY=3jHsYXrWP7GrmBhzkGHp84EEwAvLtKnD6SZC9r6LM3Ji

If --gossip-host <IP_ADDRESS> is not provided here, any agave-validator client trying to connect through gossip will try hard for a while...

Searching for an RPC service with shred version 36796 (Retrying: Wait for known rpc peers)...
[2025-03-29T18:02:26.010433513Z INFO  agave_validator::bootstrap] Total 0 RPC nodes found. 0 known, 0 blacklisted

... and eventually die with this message:

[2025-03-29T18:05:00.275887418Z ERROR agave_validator::bootstrap] Failed to get RPC nodes: Unable to find any RPC peers. Consider checking system clock, removing `--no-port-check`, or adjusting `--known-validator ...` arguments as applicable

Validator Nodes

ENTRYPOINT_IDENTITY_PUBKEY=3jHsYXrWP7GrmBhzkGHp84EEwAvLtKnD6SZC9r6LM3Ji

# primary validator node; --entrypoint targets the gossip-entrypoint's gossip port (8001,
# per the entrypoint config above) so the node can join the cluster
agave-validator \
    --identity /home/sol/keys/demo1/identity.json \
    --vote-account demo52s9s1foFXgnbVa8vYQM8GS9XRsJ3aMpus1rNnb \
    --authorized-voter /home/sol/keys/demo1/primary-target-identity.json \
    --log agave-validator.log \
    --ledger /mnt/ledger \
    --accounts /mnt/accounts \
    --snapshots /mnt/snapshots \
    --allow-private-addr \
    --rpc-port 9999 \
    --no-os-network-limits-test \
    --entrypoint gossip-entrypoint:8001 \
    --known-validator $ENTRYPOINT_IDENTITY_PUBKEY \
    --only-known-rpc
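
Once it is running, you can confirm from the same host that the new validator shows up next to the entrypoint, reusing the RPC_URL variable described earlier:

solana -u $RPC_URL gossip       # both the entrypoint and the new validator should be listed
solana -u $RPC_URL validators   # the demo1 identity appears, delinquent until it starts voting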

Troubleshooting

If your validator doesn't show up as a running process, or the process is running but it never catches up or falls behind, make sure to check the logs before anything else:

tail ~/logs/agave-validator.log
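
If the tail alone is not revealing, a rough way to surface problems is to filter the log for warnings and errors (adjust the patterns to taste):

grep -E "ERROR|WARN|panicked" ~/logs/agave-validator.log | tail -n 50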

