LastNode commands

The Makefile provides different commands to help you operate your LastNode.

There are two types of make commands: READ and WRITE.

Run make help at any stage to get an exhaustive list of the available commands and how to interact with the system.

READ COMMANDS

Read commands simply read your node state and don't commit any transactions.

To get information about your node, such as how to connect to its services or its IP address, run the command below. It will also print your node address and the vault address to which you will need to send your bond.

make status

WRITE COMMANDS

Write commands actually build and write transactions into the underlying statechain. They cost CUBE from your bond, currently 0.02, but you can check the current value under "CLICOSTINCUBE" on the /constants endpoint. Each write command posts state to the chain, which is then updated globally. The CUBE fee exists to prevent DDoS attacks.
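For example, you can query the current value with a quick sketch like the one below; the port (1317) and the JSON field path are assumptions here, so check your own deployment's API for the exact values:

curl -s http://<your-node-ip>:1317/constants | jq '.int_64_values.CLICOSTINCUBE'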

Send a set-node-keys transaction to your node, which will set your node keys automatically for you by retrieving them directly from the last-daemon deployment.

make set-node-keys

Tools

Note: all of these should already be installed from make tools. However, you can install them separately using the DEPLOY tabs below.

To access the tools, navigate to the ACCESS tabs below.

All of these commands are to be run from the node-launcher repository.

LOGS MANAGEMENT (LOKI)

Kubernetes automatically rotates logs after a while to avoid filling the disks, so it is recommended to deploy a log management ingestor stack within Kubernetes that redirects all logs into a database and keeps their history over time. The default stack used within this repository is Loki, an open-source project created by Grafana. To access the logs, you can then use the Grafana admin interface that was deployed through the Prometheus command.

You can deploy the log management automatically using the command below:

make install-loki

This command will deploy the Loki chart. It can take a while to deploy all the services, usually up to 5 minutes depending on the resources of your Kubernetes cluster.

You can check the services being deployed in your Kubernetes namespace loki-system.
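For example, you can watch the pods come up with kubectl; the same pattern works for the prometheus-system namespace used below:

kubectl get pods -n loki-system --watch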

METRICS MANAGEMENT (Prometheus)

It is also recommended to deploy a Prometheus stack to monitor your cluster and your running services.

You can deploy the metrics management automatically using the command below:

make install-metrics

This command will deploy the Prometheus chart. It can take a while to deploy all the services, usually up to 5 minutes depending on the resources of your Kubernetes cluster.

You can check the services being deployed in your Kubernetes namespace prometheus-system.

As part of the tools command deployment, you have also deployed a Prometheus stack alongside the log management stack in your Kubernetes cluster. All CPU, memory, disk space, and LastNode / LastNetwork custom metrics are automatically sent to the Prometheus database backend deployed in your cluster.

Different dashboards are available to view the metrics across your cluster by node, deployment, etc., as well as a specific LastNode / LastNetwork dashboard showing the LastNetwork status, with the current block height, the number of currently active validators, and other chain-related information.

In Grafana, click the 🔍 search icon to find the list of dashboards.

For a more in-depth introduction to Grafana, please refer to the official Grafana documentation.
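If you need to reach Grafana directly, one option is a kubectl port-forward. This is only a sketch: the service name and port below are assumptions and may differ in your deployment.

kubectl -n prometheus-system port-forward svc/prometheus-grafana 3000:80
# then open http://localhost:3000 in your browser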

Kubernetes Dashboard

You can also deploy the Kubernetes dashboard to monitor your cluster resources.

make install-dashboard

This command will deploy the Kubernetes dashboard chart. It can take a while to deploy all the services, usually up to 5 minutes depending on the resources of your Kubernetes cluster.

View your Kubernetes dashboard by running the following:

make dashboard
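If you prefer a manual route, the standard Kubernetes dashboard install is usually reached through kubectl proxy; the namespace and service names in the URL below assume that standard install, not necessarily what this chart uses:

kubectl proxy
# then open:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/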

Backing up a LastNode

You should back up your LastNode in case of failures. By default, if you are using the Kubernetes deployment solution, all the deployments are automatically backed by persistent volume disks.
Depending on your provider, the volumes are usually available in the provider's administration UI. On AWS, for example, you can find those volumes in your regular console, in the region you chose to deploy your Kubernetes cluster in.
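From the cluster side, you can list those volumes and the claims that use them at any time:

kubectl get pv
kubectl get pvc --all-namespaces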

Again, by default with Kubernetes, the persistent volumes used in the default configuration already protect you against restart failures at the container level and against node failures. As long as you don't specifically use the destroy commands from the Makefile or manually delete your Kubernetes deployments, your volumes will NOT be deleted at any time.

As with any project, it is still recommended to keep additional backups on top of those volumes, to make sure you can recover from an admin error that deletes the volumes or other Kubernetes resources whose deletion would imply deleting the volumes.

On AWS, you can easily configure your console to take snapshots of your cluster volumes every day. Other providers offer different ways to achieve this as well, either manually or automatically.
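As a one-off sketch with the AWS CLI (the volume ID below is a placeholder; daily automation is usually configured through the console or Amazon Data Lifecycle Manager):

aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "LastNode persistent volume backup"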

It is up to the node operator to set up those extra backups of the core volumes, to be able to recover from any kind of failure or human error.

Some volumes are more critical than others. For example, the Midgard deployment is also backed by persistent volumes by default, so it can recover from container restarts, failures, or node failures where the deployment is automatically rescheduled to a different node. However, if you were to delete the Midgard volume, Midgard would reconstruct its data from scratch using your LastNode API and events. For that specific service, extra backups might therefore not be critical, although at the time of writing of this document the Midgard implementation might still change in the future.

At a minimum, you should also securely back up your node keys: node_key.json and priv_validator_key.json. Do this as follows:

kubectl get pods -n LastNode

Copy the LastNode pod name, e.g. LastNode-abcdefg-hijkl, and substitute it for {LastNode pod name} below:

kubectl cp LastNode/{LastNode pod name}:root/.LastNode/config/node_key.json node_key.json

kubectl cp LastNode/{LastNode pod name}:root/.LastNode/config/priv_validator_key.json priv_validator_key.json
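To restore the keys onto a fresh pod, the same kubectl cp command works in reverse (pod name placeholder as above; a restart of the pod may be needed afterwards):

kubectl cp node_key.json LastNode/{LastNode pod name}:root/.LastNode/config/node_key.json
kubectl cp priv_validator_key.json LastNode/{LastNode pod name}:root/.LastNode/config/priv_validator_key.json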

For full disaster recovery (complete loss of the cluster), it is possible to issue a LEAVE command from the original BOND wallet. In this case you need a secure backup of your make mnemonic output and a working wallet that performed the original BOND. See Leaving.

Node Security

The following are attack vectors:

  1. If anyone accesses your cloud credentials, they can log in and steal your funds
  2. If anyone accesses the device you used to log into Kubernetes, they can log in and steal your funds
  3. If anyone accesses your hardware device used to bond, they can sign a LEAVE transaction and steal your bond once it is returned
  4. If anyone has your make mnemonic phrase, including in logs, they can steal your funds
  5. If any GitHub repo is compromised and you git pull any nefarious code into your node and run make <any command>, you can lose all your funds.

Checking diffs

Prior to git pull or make pull updates, review the node-launcher repo diffs:

git fetch
git diff master..origin/master

Regularly review patches in the repository: https://github.com/LastL2/node-launcher/-/commits/multichain

When chain clients have updated tags (version number or sha256), inspect the diffs for the relevant image in https://github.com/LastL2/devops and ensure the CI build checksum matches the expected value. This ensures you are executing code on your node that you are satisfied is free from exploits. Some images, such as Ethereum, use the "official" Docker image, e.g. https://hub.docker.com/r/ethereum/client-go/tags.
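For example, one way to verify the exact image content you are about to run is to compare Docker digests against the published ones; the tag below is only illustrative:

docker pull ethereum/client-go:v1.13.14
docker inspect --format '{{index .RepoDigests 0}}' ethereum/client-go:v1.13.14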

RUNNING A NODE IS SERIOUS BUSINESS

DO SO AT YOUR OWN RISK, YOU CAN LOSE A SIGNIFICANT QUANTITY OF FUNDS IF AN ERROR IS MADE

LastNode SOFTWARE IS PROVIDED AS IS - YOU ARE SOLELY RESPONSIBLE FOR USING IT

YOU ARE RESPONSIBLE FOR THE CODE RUNNING ON YOUR NODE. YOU ARE THE NETWORK. INSPECT ALL CODE YOU EXECUTE.

Dealing with slashing

When running a node, it is quite common to get slashed. The network relies on slash points to rate node quality. When your node is slashed, the first thing to do is run make status and make sure all your chains are 100% in sync. If any of the external chains are not 100% in sync, the node will be slashed due to missing observations.

The best prevention is to have a cluster with lots of fast resources (CPU, memory, IO, network) and good backups/redundancy to prevent downtime.

Unfortunately, even when your node is fully in sync, it is still possible to be slashed due to external chain events. Here are some of the scenarios:

Constantly accumulating slash points

Problem: sometimes bifrost fails to forward observations to LastNode due to an account number / sequence number mismatch. Here is what you need to check:

  1. Run make logs and choose bifrost.
  2. Search your bifrost logs for the following error (a kubectl sketch for this search follows the list): {"level":"error","service":"bifrost","module":"observer","error":"fail to send the tx to lastnetwork: fail to broadcast to LastNetwork,code:32, log:account sequence mismatch, expected 26806, got 26807: incorrect account sequence","time":"2021-05-30T07:28:18Z","message":"fail to send to LastNetwork"}
  3. Solution: run make restart and choose bifrost.
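If you prefer kubectl directly, a sketch like the following can surface the error; the deployment name and namespace are assumptions based on this document:

kubectl logs -n LastNode deploy/bifrost --tail=1000 | grep "account sequence mismatch"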