Setting up a dynamic Raft-based Quorum network for development and testing using Helm
Following this informative article provided by my colleague Majd, in which you can learn the basics about Quorum and how to set up a minimal Quorum network using docker containers, we decided to take things a bit further and use our knowledge to provide a dynamic Quorum setup deployable to Kubernetes. In our GitHub repository, we provide a Helm chart and some scripts which aim to make developing and testing on Quorum a lot faster and easier. This article will provide some insights regarding the setup and the functionalities of the project.
Tooling & Prerequisites
For our setup, we need a running Kubernetes cluster and Helm. Helm enables us to deploy a preconfigured network to a running cluster using the concept of charts. This, in combination with some scripts, gives us the ability to dynamically add and remove nodes to and from the network.
Minikube is one of several tools you can use to spin up a Kubernetes cluster locally. Other viable options are k3s and kind. These tools are widely used to develop and test applications on local infrastructure before deploying them to the target environment.
Here is a pretty good explanation of what Helm does and how it is connected with Kubernetes:
In simple terms, Helm is a package manager for Kubernetes. Helm is the K8s equivalent of yum or apt. Helm deploys charts, which you can think of as a packaged application. It is a collection of all your versioned, pre-configured application resources which can be deployed as one unit. You can then deploy another version of the chart with a different set of configuration.
Now that we know the technical requirements, we can take a closer look at the actual repository. The intention for this setup was to use Quorum on Kubernetes. As we did not find any solutions for this besides Qubernetes, the officially supported way to deploy Quorum to Kubernetes, we decided to create our own deployments using Helm. This has the particular benefit of being more flexible than the official tooling, where Kubernetes deployments are generated once and have to be regenerated and redeployed every time you need a different setup. With our dynamic approach, you can keep the network running while adding or removing nodes.
The following code shows the values.yaml file in which most of the action will take place. Those values are used to dynamically fill a set of templates which you can then reuse for the number of nodes you want in your network. The setup also allows you to modify some additional configurations regarding the deployment such as input parameters for Quorum and Geth.
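Since the snippet itself is not reproduced here, the following is a minimal sketch of what such a values.yaml could look like; the actual key names, image tag, and structure in the repository may differ.

```yaml
# Hypothetical excerpt of values.yaml -- key names are illustrative, not the
# repository's actual schema.
quorum:
  image: quorumengineering/quorum   # container image used for all nodes (assumed)
  networkId: 10                     # chain/network ID passed to Geth
geth:
  verbosity: 3                      # log level forwarded to Geth
nodes:
  quorum-node1:
    nodekey: "<hex-private-key>"    # raft node private key (generate with bootnode)
    enode: "<enode-public-key>"     # derived enode public key
    key: "<geth-keystore-json>"     # Geth keystore file contents for the account
    endpoints:
      rpc: true                     # enable the HTTP RPC endpoint
      ws: true                      # enable the WebSocket endpoint
```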
The values file additionally takes input for the nodes which are going to be deployed to the cluster. Some of those values are needed for adding a node to a raft based Quorum cluster and others give some additional functionality like enabling or disabling endpoints.
Under endpoints, RPC as well as WebSocket endpoints can be turned on and off; these are used for communication inside the cluster. Additionally, if needed, you can enable the Ingress controller to easily access nodes from outside the cluster. With Ingress enabled, you can access the nodes at http://<cluster-ip>/quorum-node<n>-ws. The nodekey and enode values represent the key material for a raft node; they can be generated using the bootnode command provided by Geth. The key value is a Geth keystore file which holds the credentials for a Geth account. Only with the combination of bootnode credentials and a Geth account will the node function properly. Note that the example credentials provided in the repository should never be reused in any production environment.
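As a sketch, the credentials for one node could be generated like this, assuming go-ethereum's bootnode utility and geth are installed locally (output file names are arbitrary choices):

```shell
# Generate a raft node private key (written to ./nodekey)
bootnode -genkey nodekey

# Derive the enode public key from that private key (printed to stdout)
bootnode -nodekey nodekey -writeaddress

# Create a new Geth account; the resulting keystore file under ./keystore
# becomes the node's "key" value
geth account new --keystore ./keystore
```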
The above values will then be used to fill in the provided templates. Those templates usually take values for exactly one node/deployment.
By looping through the nodes object of the values file at the top of each template, we can reuse these for every new node added to the values file. Here’s an example for a PersistentVolumeClaim:
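Since the template itself is not shown here, the following is a rough sketch of how such a PersistentVolumeClaim template could loop over the nodes object; the actual template in the repository may use different names and fields.

```yaml
# Hypothetical templates/pvc.yaml -- one PVC is rendered per entry in .Values.nodes
{{- range $name, $node := .Values.nodes }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ $name }}-pvc        # e.g. quorum-node1-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi             # assumed size; adjust per node as needed
{{- end }}
```

Because the loop wraps the whole manifest, adding a new entry to the nodes object in values.yaml is enough to render an additional claim on the next upgrade.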
Deploying and Updating the Chart
Now, let’s start with deploying the network to a running Kubernetes infrastructure.
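With a cluster running and the repository checked out, the installation boils down to a single Helm command; the release name and chart path below are assumptions, not the repository's documented values.

```shell
# Install the chart as a new release into the current kubectl context
# (release name "quorum-network" and chart path are placeholders)
helm install quorum-network ./quorum-helm-chart
```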
Upgrades, after changing the configuration of your network can be installed by using:
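Assuming the same release name as above, an upgrade could look like this:

```shell
# Re-render the templates with the updated values and roll out the changes
# without tearing down the running network
helm upgrade quorum-network ./quorum-helm-chart -f values.yaml
```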
Adding New Nodes
Besides adding and removing nodes by modifying the values file manually, we also implemented a more convenient way to do this. For this, we provide several scripts that allow us to add and remove individual nodes as well as several at once.
In the following, I will use the addNodes.sh script to dynamically add new nodes to a running cluster.
The addNodes.sh script now allows me to decide how many nodes I want to add to the cluster. I choose to add 2 additional nodes and the script automatically generates the corresponding credentials, adds them to the values file, and upgrades the deployed Helm chart.
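A possible invocation could look like the following; note that the script's exact interface is an assumption here, as it may instead prompt for the node count interactively.

```shell
# Add two nodes: generates credentials, appends them to values.yaml,
# and upgrades the deployed Helm release
./addNodes.sh 2
```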
Note the additionally generated value raftId. This value is required for every additional node, as it enables the flag --raftjoinexisting <raftId>, which is needed to properly add a new node to the initial three-node cluster.
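In the values file, the entry for a joined node would then differ from the initial nodes only by this extra field; as before, the key names below are illustrative rather than the repository's exact schema.

```yaml
# Hypothetical values.yaml entry for a node joining an existing raft cluster
nodes:
  quorum-node4:
    raftId: 4                       # rendered into the geth flag --raftjoinexisting 4
    nodekey: "<hex-private-key>"    # generated by the addNodes.sh script
    enode: "<enode-public-key>"
    key: "<geth-keystore-json>"
    endpoints:
      rpc: true
      ws: true
```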
After updating the chart, you can see that the new nodes have been added to the cluster. To confirm that the cluster is running and properly synchronized, you can run the following command, which opens a shell to Geth running inside the container. You can then use this shell to inspect the state of raft (see below) or execute other Geth commands. If the resulting nodeActive value is true, the node is properly synced with the cluster and ready to go.
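A sketch of that check, where the pod name and the IPC path inside the container are assumptions that depend on the chart's deployment names and data directory layout:

```shell
# Attach a Geth console to a node's IPC endpoint inside its pod
# (pod name and IPC path are placeholders)
kubectl exec -it <quorum-node4-pod> -- geth attach /etc/quorum/qdata/dd/geth.ipc

# At the resulting Geth console prompt, inspect the raft cluster state:
#   > raft.cluster
# Each entry in the returned list reports nodeActive: true once that node
# is synced with the cluster.
```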
All in all, this project should help improve testing and development of Quorum networks running on Kubernetes infrastructure. Besides that, it is also a simple and convenient way to gain some experience using Quorum and to experiment with various settings, especially if you have trouble figuring out the right configuration for your use case. So if you want to take a look at the repository, feel free to do so. If you have any suggestions or improvements, you might want to create a PR, which we will happily review.