# Setting up a Development Environment
This document describes how to get a CTA+EOS instance, with CTA built from source, running in a standalone VM (Virtual Machine) with Alma9 as the operating system.
## Installing the Minikube Environment
We are going to start by bootstrapping the CI machine. This will install the mhvtl tape library and the various scripts needed for the CI user: `cirunner`. Note that the following commands should be executed as the `root` user.
```bash
# Install minimal bootstrap toolbox
yum -y install git screen
# One should already be in this directory on startup
cd ~
# Clone this repository using the *node installer* deploy token
git clone https://gitlab+deploy-token-3204:gldt-zPCHBii5fCS4Q4u7gnBR@gitlab.cern.ch/cta/minikube_cta_ci.git
cd minikube_cta_ci
# Launch the rest in screen
screen bash -c 'bash ./01_bootstrap_minikube.sh'
# Reboot the machine to have everything started
reboot
```
If you need additional information on this installation process, take a look at the minikube_cta_ci repository. For example, to be able to pull images from the CTA GitLab registry, you will need to set up some credentials. Note that the existing `02_ctaws2024.sh` credentials example is outdated.
## The `cirunner` User
The above script created the `cirunner` user. This user is responsible for the deployment of the containers. One can switch to `cirunner` when logged in as `root` using a standard user switch, e.g.:
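```bash
su - cirunner
```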
In case you want to ssh into the VM directly as `cirunner` (the individual commands are sketched after this list):

- ssh into the VM as `root`.
- Create the directory `/home/cirunner/.ssh`.
- Copy the ssh-related `authorized_keys` file from `root` to `cirunner`.
- Change the ownership of the `/home/cirunner/.ssh` directory to `cirunner`.
- Exit the ssh session; you should now be able to ssh into your VM directly as `cirunner`.
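A minimal sketch of these steps, assuming `<vm-hostname>` stands in for your VM's hostname:

```bash
ssh root@<vm-hostname>
# Create the .ssh directory for cirunner
mkdir -p /home/cirunner/.ssh
# Reuse the authorized_keys already deployed for root
cp /root/.ssh/authorized_keys /home/cirunner/.ssh/
# Hand ownership of the directory and its contents to cirunner
chown -R cirunner:cirunner /home/cirunner/.ssh
exit
# Back on the host: log in directly as cirunner
ssh cirunner@<vm-hostname>
```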
## Starting Kubernetes
Kubernetes is automatically started at boot time for the user `cirunner`. While logged in as `cirunner`, you should now be able to run `kubectl` commands. Running `kubectl get namespaces` should output something along the lines of the following (the exact namespaces and ages will vary):
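```
NAME              STATUS   AGE
default           Active   5m
kube-node-lease   Active   5m
kube-public       Active   5m
kube-system       Active   5m
```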
Running `kubectl get pv` should confirm that the standard persistent volumes are there and output something along the lines of:
```
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    VOLUMEATTRIBUTESCLASS   AGE
log00   100Gi      RWX            Recycle          Available                           <unset>                 2m14s
sg0     1Mi        RWO            Recycle          Available           librarydevice   <unset>                 2m15s
stg00   2Gi        RWX            Recycle          Available                           <unset>                 2m14s
```
Test that everything is running by issuing the above commands and verifying that they work. If Kubernetes is not running or is in bad shape, you can check the logs as `cirunner` to try and understand the problem:
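Two standard minikube commands are a reasonable starting point (the bootstrap scripts may write additional logs of their own):

```bash
minikube status
minikube logs
```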
Running `start_minikube.sh` manually might also help.
## Using ssh Keys on the VM
During development it is convenient to be able to use ssh keys from the host machine on the VM, for example to execute git-related actions. To do this, start by adding the ssh key you want to use on the VM to the ssh agent:
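For example, on the host machine (the key path here is just an example):

```bash
ssh-add ~/.ssh/id_ed25519
```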
After doing this, you can ssh into the VM with the `-A` flag (agent forwarding) enabled and you should be able to use this key:
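```bash
# -A forwards the local ssh agent into the session; the hostname is an example
ssh -A cirunner@<vm-hostname>
```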
## Compilation and Deployment
To start, ssh into the machine as `cirunner` and navigate to the `shared/` directory in `/home/cirunner`. Then clone the repository:
```bash
cd /home/cirunner/shared
git clone ssh://git@gitlab.cern.ch:7999/cta/CTA.git
cd CTA
git submodule update --init --recursive
```
You should now have a fully initialized repository on the VM. To compile and deploy CTA on the local minikube cluster, execute the following script:
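Going by the deploy process described further down, this is the `redeploy.sh` script in `continuousintegration/ci_runner/` (run from the repository root; treat the exact options as assumptions and check its `--help`):

```bash
cd /home/cirunner/shared/CTA
./continuousintegration/ci_runner/redeploy.sh
```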
This will take quite a few minutes (especially the first time), but after the script has finished, you should see a number of pods running in the `dev` namespace:
```
Instance dev successfully created:
NAME          READY   STATUS      RESTARTS   AGE
client        1/1     Running     0          60s
ctacli        1/1     Running     0          59s
ctaeos        1/1     Running     0          59s
ctafrontend   1/1     Running     0          59s
init          0/1     Completed   0          78s
kdc           1/1     Running     0          59s
postgres      1/1     Running     0          80s
tpsrv01       2/2     Running     0          59s
tpsrv02       2/2     Running     0          59s
```
That's it; you now have a working dev environment. If you make any changes, simply rerun the script above and it will recompile and redeploy the changes. You can run the command with the `--help` flag to see additional options.
Once a local environment is running, you can run e.g. a stress test to verify that everything works correctly.
Note that this takes quite a few minutes to complete. For more information on testing, have a look at the Debugging section.
## How It Works
The above should be sufficient to get you started, but it hides all of the details of what is happening under the hood. This section will explain some of these details to give a better overview of what is happening during the build and deploy steps.
### The Build Process
The build process of the CTA source code produces a collection of RPMs. This build process is done in a separate compilation container. The compilation container mounts the `/home/cirunner/shared/` directory as a volume so that it has access to the source code. There are two main reasons for building the RPMs in a container:
- To ensure that any build dependencies do not pollute the VM
- To ensure that the build process is reproducible
The compilation process consists of two main stages:
- Building the Source RPMs (SRPMs).
- Building the RPMs.
The SRPMs only need to be built once, but the RPMs need to be rebuilt any time a change is made. The building of the SRPMs is done by the script `continuousintegration/ci_helpers/build_srpm.sh`. Essentially, all this script does is install some prerequisites (if the `--install` flag is provided) and execute the `cmake` and `make` commands. Likewise, the building of the RPMs is done by the `continuousintegration/ci_helpers/build_rpm.sh` script, which functions in a similar way.
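As a rough sketch of the two stages, the invocations look something like the following; treat the exact arguments as assumptions, since they are defined by the scripts themselves (and `redeploy.sh` normally drives them for you):

```bash
cd /home/cirunner/shared/CTA
# Stage 1: build the SRPMs once; --install also pulls in build prerequisites
./continuousintegration/ci_helpers/build_srpm.sh --install
# Stage 2: build the RPMs; rerun after every source change
./continuousintegration/ci_helpers/build_rpm.sh
```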
### The Deploy Process
The deploy process is handled by the script `redeploy.sh` in `continuousintegration/ci_runner/`. In order to deploy CTA locally, a minikube cluster is used. This is what was set up by the first few commands on this page; they ensure that minikube is started for the `cirunner` user when the VM starts.
When deploying a new test instance, the redeploy script will delete any existing pods in the designated namespace. Next, it will build an image using the RPMs we built in the build step. This image is then loaded into minikube, and all the containers are then spawned and configured by the `create_instance.sh` script.
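After a redeploy, you can check that the instance came up by listing its pods (the `dev` namespace, as used above):

```bash
kubectl get pods -n dev
```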
## Useful Kubernetes Commands
As the local cluster is a Kubernetes cluster deployed via minikube, it is useful to know a few commands for doing different things in the cluster (e.g. troubleshooting). See the following (non-exhaustive) list for a few useful ones:
- `kubectl get namespaces`: Lists the namespaces in the cluster
- `kubectl get pods -n <namespace>`: Lists the pods in a given namespace
- `kubectl logs <pod-name> -n <namespace>`: Prints the logs for the given pod residing in the given namespace
- `kubectl exec -it <pod-name> -n <namespace> -- /bin/bash`: Opens a shell in the given pod residing in the given namespace
- `kubectl describe ...`: Useful for finding out what went wrong with a particular resource type if it did not start (see `kubectl describe --help`)
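For example, to follow the frontend logs in the `dev` instance deployed above:

```bash
kubectl logs ctafrontend -n dev --follow
```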