This repository contains the infrastructure code used to set up all dev/testnets. Much of the code relies on reusable components, either from our Ansible collection or from our Helm charts for Kubernetes.
| Status | Network | Links | Ansible | Terraform | Kubernetes |
|---|---|---|---|---|---|
| 🟢Template🔴 | devnet-0 | Network config / Inventory / Validator ranges | 🔗 | 🔗 | 🔗 |
We're using asdf to make sure we all use the same tool versions. Our repositories should pin their versions in `.tool-versions`. You can then use `./setup.sh` to install all dependencies.
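If you prefer to do it by hand, asdf itself can install everything pinned in `.tool-versions` (a rough sketch; `./setup.sh` remains the supported path and may do more than this):

```bash
# Add the asdf plugins named in .tool-versions, then install the pinned versions
# (assumes asdf is already installed and sourced in your shell)
cut -d' ' -f1 .tool-versions | xargs -n1 asdf plugin add
asdf install
```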
Make sure you select either Hetzner or DigitalOcean as the cloud provider (the default is DigitalOcean). To switch to Hetzner, rename `digitalocean.tf` to `digitalocean.tf.disabled` and `hetzner.tf.disabled` to `hetzner.tf`; to switch back, reverse the renames.
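For example, switching to Hetzner is just the two renames (assuming the provider files sit in the devnet's Terraform directory, e.g. `terraform/devnet-0/`):

```bash
# Disable the DigitalOcean definitions and enable the Hetzner ones
mv digitalocean.tf digitalocean.tf.disabled
mv hetzner.tf.disabled hetzner.tf
```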
Then run:

```bash
terraform init
terraform apply
```
Be sure to set the correct Terraform backend in `main.tf` and to provide valid S3 credentials.
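One common way to supply the S3 credentials is via the standard AWS environment variables before `terraform init` (a sketch; use whatever credential mechanism your team prefers):

```bash
# Credentials for the S3-compatible state backend configured in main.tf
export AWS_ACCESS_KEY_ID="<access-key-id>"          # placeholder
export AWS_SECRET_ACCESS_KEY="<secret-access-key>"  # placeholder
terraform init
```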
To install the nodes according to the inventory file generated by the Terraform template, run the following commands from `./ansible/`:

```bash
./install_dependencies.sh
ansible-playbook -i inventories/devnet-0/inventory.ini playbook.yaml
```
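If you only need to (re)configure part of the network, `ansible-playbook`'s `--limit` flag restricts the run to specific hosts or groups from the inventory (the host and group names here match the template example further below):

```bash
# Re-run the playbook against a single node or a single client group
ansible-playbook -i inventories/devnet-0/inventory.ini playbook.yaml --limit lodestar-besu-1
ansible-playbook -i inventories/devnet-0/inventory.ini playbook.yaml --limit lodestar_besu
```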
To clean up the deployment, run:

```bash
ansible-playbook -i inventories/devnet-0/inventory.ini cleanup_ethereum.yaml
```
To create a new testnet using the infrastructure code, follow these steps:
- Open the `node.tf` file located in the `terraform/devnet-0/` directory.
- This file contains a variable for each node type, defining the node count, size, and validator range.
- Adjust the validator indexes in `node.tf` based on your desired allocation of validators to nodes.
For example, let's say you want to assign validator index 0-24 to a single lodestar-besu node and validator index 25-224 to two lighthouse-nethermind nodes. Update the node.tf file as follows:
variable "lodestar-besu" {
default = {
name = "lodestar-besu"
count = 1
validator_start = 0
validator_end = 25
}
}
variable "lighthouse-nethermind" {
default = {
name = "lighthouse-nethermind"
count = 2
validator_start = 25
validator_end = 225
}
}
This will create lodestar-besu-1 with validator indexes 0-24, and lighthouse-nethermind-1 and lighthouse-nethermind-2 with validator indexes 25-124 and 125-224, respectively.
If you want to generate the validator indexes for multiple clients easily, use `scripts/split-calculator.py`, adjusting `total_validators`, `validator_per_machine`, and the CL/EL split ratios according to your needs.
Make sure the validator ranges match your requirements and the total number of validators in your network. Validators within the specified ranges are allocated to the corresponding nodes during deployment, so customizing the indexes in the Terraform configuration is how you pin validators to specific nodes.
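The arithmetic behind the split is simple: each of the `count` nodes gets an equal, contiguous slice of the range, with `validator_end` exclusive. A bash sketch of that calculation (illustrative only, not the actual `split-calculator.py`):

```bash
#!/usr/bin/env bash
# Split one validator range evenly across `count` nodes,
# e.g. count=2, start=25, end=225 -> 25-124 and 125-224.
name="lighthouse-nethermind" count=2 validator_start=25 validator_end=225
per_node=$(( (validator_end - validator_start) / count ))
for i in $(seq 1 "$count"); do
  node_start=$(( validator_start + (i - 1) * per_node ))
  node_end=$(( node_start + per_node - 1 ))
  echo "${name}-${i}: validators ${node_start}-${node_end}"
done
```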
- Running `terraform apply` will create the machines and the inventory file for you. The inventory file will be located in the `ansible/inventories/devnet-0` directory.
- The `inventory.ini` file lists all the nodes created by Terraform, together with the validator ranges specified in the Terraform configuration. The Ansible playbook uses these ranges to allocate validators to the corresponding nodes:
```ini
[lodestar_besu]
lodestar-besu-1 ansible_host=167.99.34.241 cloud=digitalocean cloud_region=ams3 validator_start=0 validator_end=25
...
```
- Adjust the total number of validators in the `ansible/inventories/devnet-0/group_vars/all.yaml` file (`ethereum_genesis_generator_config_files.values.env.NUMBER_OF_VALIDATORS`) to match the total number of validators you are running; the Ansible playbook uses this value to generate the validator keys and deposit data for the network (a quick cross-check is sketched after this list).
- `ansible/inventories/devnet-0/group_vars/all.yaml` contains all the network configuration parameters. Most likely you will not need to adjust these unless you want a custom setup; the default configuration works for most networks.
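Before deploying, two quick sanity checks (run from the `ansible/` directory) can confirm that the Terraform-generated inventory parses and that the configured validator count matches the sum of your ranges. The `yq` here is an assumption (the Go `yq`), not something the repository requires:

```bash
# How Ansible sees the generated inventory (groups and hosts)
ansible-inventory -i inventories/devnet-0/inventory.ini --graph
# The configured total validator count from the group vars
yq '.ethereum_genesis_generator_config_files.values.env.NUMBER_OF_VALIDATORS' \
  inventories/devnet-0/group_vars/all.yaml
```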
There are multiple secrets used throughout the ansible files, defined in ansible/inventories/devnet-0/group_vars/all.sops.yaml.
If you want to modify the secrets:
- Install `sops`
- Add your PGP key to `.sops.yaml` like:

```yaml
creation_rules:
  - pgp: >-
      ABDCE15BD6212E579B3813B78DC0A30AA95F49A1
```

- Create a `secrets.yaml` file like:
```yaml
secret_zerossl:
  ACME_EAB_KID:
  ACME_EAB_HMAC_KEY:
secret_prometheus_remote_write:
  username:
  password:
secret_loki:
  endpoint:
  username:
  password:
secret_nginx_shared_basic_auth:
  name:
  password:
secret_ethstats:
secret_genesis_mnemonic:
```
- Encrypt it to `all.sops.yaml`:

```bash
sops encrypt secrets.yaml --output ansible/inventories/devnet-0/group_vars/all/all.sops.yaml
```
This encrypts the values in the YAML file and writes the result as an Ansible variables file. Each element is decrypted with the configured PGP key and exposed as a variable when the playbooks run.
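For later changes you do not need to keep the plaintext `secrets.yaml` around; `sops` can edit or decrypt the encrypted file directly, provided your PGP key is available locally:

```bash
# Edit the encrypted file in place (opens $EDITOR, re-encrypts on save)
sops ansible/inventories/devnet-0/group_vars/all/all.sops.yaml
# Print the decrypted content to stdout
sops -d ansible/inventories/devnet-0/group_vars/all/all.sops.yaml
```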
- Run `ansible-playbook -i inventories/devnet-0/inventory.ini playbook.yaml` from the `ansible/` directory to deploy the network. This will generate the genesis files and validators and deploy the network according to the configuration parameters specified in `ansible/inventories/devnet-0/group_vars/all.yaml`. The generated files are located at `network-configs/devnet-0/metadata`.
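A `--check` dry run can surface inventory or variable problems before anything is changed on the hosts, though not every role is fully check-mode aware, so treat the output as indicative:

```bash
# Dry run: report what would change without applying anything
ansible-playbook -i inventories/devnet-0/inventory.ini playbook.yaml --check --diff
```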
Don't forget the following gotchas:
- Change the `ethereum_genesis_chain_id` value in `ansible/inventories/devnet-0/group_vars/all.yaml` to avoid clashing with an existing network
- Ensure you have `docker` running on your local machine; it is essential for generating some post-testnet files
- Make sure you add the GitHub usernames to `bootstrap_default_user_authorized_keys_github_...`, otherwise Ansible will fail on the bootstrap step
After deploying the network with Ansible, you can access the nodes by SSHing into the host addresses listed in `inventory.ini` as the `ansible_user` defined in `ansible/inventories/devnet-0/group_vars/all/00-defaults.yaml`. You will need to have your GitHub username or SSH public key added to `bootstrap_default_user_authorized_keys` before the bootstrap step.
```bash
ssh devops@167.99.34.241
```
Most client software or other plugins are deployed via docker:
```bash
docker inspect execution   # execution client
docker inspect beacon      # consensus client
```
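Tailing the container logs is usually the quickest health check once you are on a node (same container names as above):

```bash
# Follow the most recent client logs
docker logs --tail 100 -f beacon      # consensus client
docker logs --tail 100 -f execution   # execution client
```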
- Run `ansible-playbook -i inventories/devnet-0/inventory.ini cleanup_ethereum.yaml` from the `ansible/` directory to clean up the network. This will delete the genesis file and validators and clean up the network on all the nodes.
- Run `ansible-playbook -i inventories/devnet-0/inventory.ini playbook.yaml -t ethereum_genesis -e ethereum_genesis_cleanup=true` from the `ansible/` directory to clean up the network-configs and validators directories on your local machine. This step is required if you would like to reuse the nodes with a different genesis configuration (for example, if you would like to change the validator indexes assigned to the nodes for a relaunch).
- Run `terraform destroy` from the `terraform/devnet-0/` directory to delete the nodes. This removes all the virtual machines and the inventory file, so be careful when running it. You will need to run `terraform apply` again to recreate the nodes and the inventory file.
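If you want to see exactly what would be removed before committing to it, Terraform can print the destroy plan first (from the same `terraform/devnet-0/` directory):

```bash
# Preview the resources that `terraform destroy` would delete
terraform plan -destroy
```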
- The tooling for the different test networks is managed by our Kubernetes stack. These tools utilize the ethereum-helm-charts repository. The deployment of the tooling is handled by ArgoCD, a continuous delivery and GitOps tool for Kubernetes. (Warning: this will not work unless ArgoCD is configured to monitor the repository.)
- Place any custom tooling in the kubernetes/devnet-0 directory. The tooling will be deployed to the Kubernetes cluster by ArgoCD.
- Keep the format of kubernetes/devnet-name/tool-name/ as this will be used by ArgoCD to deploy the tooling to the Kubernetes cluster.
- To update a Kubernetes Helm chart, remove `Chart.lock` and run `helm dependency update` from the tool directory. This updates the chart's dependencies (a local render check is sketched after this list). Commit the changes to the repository and ArgoCD will automatically deploy the updated tooling to the Kubernetes cluster.
- To add a new tool, create a new directory in the kubernetes/devnet-0 directory; the directory name will be used as the tool name. Place the Helm chart in the tool directory, commit the changes to the repository, and ArgoCD will automatically deploy the new tooling to the Kubernetes cluster.
- To modify the configuration of a tool, modify the values.yaml file in the tool directory. Commit the changes to the repository and ArgoCD will automatically deploy the updated tooling to the Kubernetes cluster.
- To delete a tool, delete the tool directory or move the devnet to the kubernetes-archive directory, which is not monitored by ArgoCD. Commit the changes to the repository and ArgoCD will automatically delete the tooling from the Kubernetes cluster.
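Chart changes can also be rendered locally before committing, which catches templating mistakes without waiting for ArgoCD (a sketch, run from the tool's directory):

```bash
# Refresh chart dependencies and render the manifests locally
helm dependency update
helm template . > /tmp/rendered.yaml
# Optional client-side validation of the rendered manifests
kubectl apply --dry-run=client -f /tmp/rendered.yaml
```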
- To get the IP addresses of the nodes, run `terraform output` from the `terraform/devnet-0/` directory.
- To get the validator ranges:

```bash
curl -s https://bootnode-1.devnet-0.ethpandaops.io/meta/api/v1/validator-ranges.json
```
- To get which validator proposed a specific block, run the following from the `ansible/` directory:

```bash
ethdo --connection=https://user:password@bn.lighthouse-nethermind-1.devnet-0.ethpandaops.io block info --blockid 100 --json | jq -r .message.proposer_index | ./whose_validator.zsh
```
- To get the execution layer client enodes:

```bash
curl -s https://config.devnet-0.ethpandaops.io/api/v1/nodes/inventory | jq -r '.ethereum_pairs[] | .execution.enode'
```

- To get the consensus layer client ENRs (a combined query is sketched after this list):

```bash
curl -s https://config.devnet-0.ethpandaops.io/api/v1/nodes/inventory | jq -r '.ethereum_pairs[] | .consensus.enr'
```
- To update all sops files:

```bash
# Find all .sops.* and *.enc.* files and update their keys
find . -type d -name "vendor" -prune -o \( -type f \( -name "*.sops.*" -o -name "*.enc.*" \) \) -exec sops updatekeys {} -y \;
```
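Since the enode and ENR queries above read the same inventory document, a single `jq` expression can print each pair's enode and ENR together (only the fields already shown above are assumed):

```bash
# Print the enode and ENR side by side for each client pair
curl -s https://config.devnet-0.ethpandaops.io/api/v1/nodes/inventory \
  | jq -r '.ethereum_pairs[] | "\(.execution.enode) \(.consensus.enr)"'
```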
Here's a table of where the keys are used:

| Account Index | Component Used In | Private Key Used | Public Key Used | Comment |
|---|---|---|---|---|
| 0 | tx_fuzz blobs | ✅ | | Spams blobs on the network |
| 1 | tx_fuzz_txs | ✅ | | Spams tx on the network |
| 2 | mev_flood_signing_key | ✅ | | Spams mev-able txs on the network |
| 3 | mev_flood_user_key | ✅ | | Spams mev-able txs on the network |
| 4 | faucet-1 | ✅ | | Faucet 1 |
| 5 | faucet-2 | ✅ | | Faucet 2 |
| 6 | mev_flood_private_key | ✅ | | Spams mev-able txs on the network |
| 7 | manual-deposits | ✅ | | Used to make manual deposits |
| 8 | | | | Marius is rich |
| 9 | goomy | ✅ | | Spams blobs on the network |
| 10 | assertoor | ✅ | | Runs various test scenarios |
| 11-29 | | | | available |