Setting up a Development Environment¶
This document describes how to get a CTA+EOS instance, with CTA built from source, running in a standalone VM (Virtual Machine) with Alma9 as the operating system.
Installing the Minikube Environment¶
We are going to start by bootstrapping the CI machine. This will install the mhvtl tape library and the various scripts needed for the CI user: `cirunner`. Note that the following commands should be executed as the `root` user.
# Install minimal bootstrap toolbox
yum -y install git screen
# clone this repository using *node installer* deploy token
git clone https://gitlab+deploy-token-3204:gldt-zPCHBii5fCS4Q4u7gnBR@gitlab.cern.ch/cta/minikube_cta_ci.git
cd minikube_cta_ci
# Launch the bootstrap in screen mode
screen -L bash -c 'bash ./01_bootstrap_minikube.sh'
# Inspect screenlog.* for the logs in case something went wrong here
# Install the credentials for the container registry (only necessary if you want to run the development setup)
# see the section on container registry credentials below
# This step can also be done later if executed with the -g flag or followed by a reboot
bash ./02_setup_credentials.sh --registry gitlab-registry.cern.ch/cta/ctageneric --password <access_token1>
# These credentials are only needed for a select few tests; this can be skipped at first
bash ./02_setup_credentials.sh --registry gitlab-registry.cern.ch/cta/eoscta-operations --password <access_token2>
# Install helm (optional: only necessary if you want to run the development setup)
bash ./03_install_helm.sh
# reboot the machine to complete the setup
reboot
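After the reboot, you can optionally check that the mhvtl virtual tape library is visible. A minimal check, assuming the `lsscsi` utility is installed:

# mhvtl creates virtual SCSI tape drives and medium changers
lsscsi -g | grep -Ei 'tape|mediumx'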
If you need additional information on this installation process, you can take a look at the minikube_cta_ci repository. For example, to be able to pull images from the CTA GitLab registry, you will need to set up some credentials. The `README.md` of said repository contains more details on how to do this.
The cirunner User¶
The above script created the `cirunner` user. This user is responsible for the deployment of the containers. One can switch to the `cirunner` user when logged in as `root` using e.g. `su cirunner`.
However, it is much easier to directly ssh into the VM as `cirunner`. To do this, make sure you are logged in as `root` and execute the following:
mkdir /home/cirunner/.ssh
# Copy authorized keys from root to cirunner
cp /root/.ssh/authorized_keys /home/cirunner/.ssh/authorized_keys
# Ensure cirunner owns the .ssh dir and its contents
chown -R cirunner:cirunner /home/cirunner/.ssh/
Exit the ssh session and you should now be able to ssh into your VM directly as `cirunner`.
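For example, assuming the VM is reachable as `<vm-hostname>` (a placeholder for your VM's hostname or IP):

ssh cirunner@<vm-hostname>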
Starting Kubernetes¶
Kubernetes is automatically started at boot time for the `cirunner` user. While logged in as `cirunner`, you should now be able to run `kubectl` commands.
Running `kubectl get namespaces` should output something along the lines of:
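On a fresh minikube cluster this typically looks like the following (illustrative only; exact ages will differ, and any CTA namespaces only appear after deployment):

NAME              STATUS   AGE
default           Active   2m
kube-node-lease   Active   2m
kube-public       Active   2m
kube-system       Active   2m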
Running `kubectl get pv` checks that the standard persistent volumes are there and should output something along the lines of:
log00 100Gi RWX Recycle Available <unset> 2m14s
sg0 1Mi RWO Recycle Available librarydevice <unset> 2m15s
stg00 2Gi RWX Recycle Available <unset> 2m14s
Test that everything is running by issuing the above commands and verifying that they work. If Kubernetes is not running or not in good shape, you can check the logs as `cirunner` to try to understand the problem.
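The exact log location used by the CI setup is not shown here; assuming minikube is on the PATH, its own status and log commands are a reasonable starting point:

# Generic minikube troubleshooting commands (run as cirunner)
minikube status
minikube logs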
Running `start_minikube.sh` manually might also help.
Using ssh Keys on the VM¶
During development it is convenient to be able to use ssh keys from the host machine on the VM, for example to perform git-related actions. To do this, start by adding the ssh key you want to use on the VM to the ssh agent:
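For example, assuming the key you want to forward is `~/.ssh/id_ed25519` (adjust the path to your own key):

# Add the key to the ssh agent on your host machine
ssh-add ~/.ssh/id_ed25519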
After doing this, you can ssh into the VM with the `-A` flag enabled and you should be able to use this key:
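For example, again using the `<vm-hostname>` placeholder:

# -A enables ssh agent forwarding, so the key added above is usable on the VM
ssh -A cirunner@<vm-hostname>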
Containerised Compilation and Deployment¶
To start, ssh into the machine as `cirunner` and navigate to the `shared/` directory in `/home/cirunner`. Then clone the repository:
cd /home/cirunner/shared
git clone ssh://git@gitlab.cern.ch:7999/cta/CTA.git
cd CTA
git submodule update --init --recursive
You should now have a fully initialized repository on the VM.
To compile and deploy CTA on the local minikube cluster, execute the following script:
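The script in question is `build_deploy.sh` (described further below); the path shown here is an assumption, so adjust it to wherever the script lives in your checkout:

cd /home/cirunner/shared/CTA
# Path is an assumption based on the ci_runner directory mentioned later on this page
./continuousintegration/ci_runner/build_deploy.sh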
This will take quite a few minutes (especially the first time), but after this script has finished, you should see a number of pods running in the `dev` namespace:
Instance dev successfully created:
NAME READY STATUS RESTARTS AGE
client 1/1 Running 0 60s
ctacli 1/1 Running 0 59s
ctaeos 1/1 Running 0 59s
ctafrontend 1/1 Running 0 59s
init 0/1 Completed 0 78s
kdc 1/1 Running 0 59s
postgres 1/1 Running 0 80s
tpsrv01 2/2 Running 0 59s
tpsrv02 2/2 Running 0 59s
That's it; you have a working dev environment now. If you make any changes, simply rerun the script above and it will recompile and redeploy the changes. You can run the command with the `--help` flag to see additional options.
Once a local environment is running, you can run e.g. a stress test to verify that everything works correctly:
Note that this takes quite a few minutes to complete. For more information on testing, have a look at the Debugging section.
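As a lighter sanity check (this is not the stress test referred to above), you can verify that all pods in the `dev` namespace are up and healthy:

kubectl get pods -n dev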
If you only want to build or only deploy, you can use the `--skip-deploy` or `--skip-build` flags respectively.
Local Compilation¶
The `build_deploy.sh` script performs the compilation using a Kubernetes pod. However, this is not always desirable, as it requires a Kubernetes environment such as minikube to have been set up.
If you want to build locally instead, you can use the `build_local.sh` script:
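For example, from the repository root (the script's exact location is an assumption; adjust the path to your checkout):

# Hypothetical invocation; locate build_local.sh in your checkout if it lives elsewhere
./build_local.sh
# On subsequent builds, the SRPMs can be skipped (see below)
./build_local.sh --skip-srpms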
This should produce all the RPMs in the `build_rpm/RPM/RPMS` directory. Note that on repeated builds, the SRPMs do not need to be rebuilt. As such, you can add the `--skip-srpms` option to speed things up. Use the `--help` option to see all available options.
How It Works¶
The above should be sufficient to get you started, but it hides the details of what happens under the hood. This section explains some of these details to give a better overview of the (containerised) build and deploy steps.
The Build Process¶
The build process of the CTA source code produces a collection of RPMs. This build is done in a separate compilation container, which mounts the `/home/cirunner/shared/` directory as a volume so that it has access to the source code. There are two main reasons for building the RPMs in a container:
- To ensure that any build dependencies do not pollute the VM
- To ensure that the build process is reproducible
The compilation process consists of two main stages:
- Building the Source RPMs (SRPMs).
- Building the RPMs.
The SRPMs only need to be built once, but the RPMs need to be rebuilt any time a change is made. The building of the SRPMs is done by the script `continuousintegration/ci_helpers/build_srpm.sh`. Essentially, all this script does is install some prerequisites (if the `--install` flag is provided) and execute the `cmake` and `make` commands. Likewise, the building of the RPMs is done by the `continuousintegration/ci_helpers/build_rpm.sh` script, which functions in a similar way.
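As a rough conceptual sketch (this is not the actual content of `build_srpm.sh`; directory and target names are assumptions), the SRPM stage boils down to something like:

# Conceptual sketch only; the real script handles versions, paths and options
mkdir -p build_srpm && cd build_srpm
cmake ..    # configure the SRPM build from the CTA source tree
make        # produce the source RPMs (the actual make target may differ)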
The Deploy Process¶
The deploy process is handled by the `redeploy.sh` script in `continuousintegration/ci_runner/`. In order to deploy CTA locally, a minikube cluster is used. This is what was done in the first few commands of this page; it ensures that minikube is started for the `cirunner` user when the VM starts.
When deploying a new test instance, the redeploy script will delete any existing pods in the designated namespace. Next, it will build an image using the RPMs we built in the build step. This image is then loaded into minikube and all the containers are spawned and configured by the `create_instance.sh` script.
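Conceptually, the redeploy flow looks something like the sketch below (simplified; the actual logic lives in `redeploy.sh` and `create_instance.sh`, and the image name is an assumption):

# Simplified sketch of the redeploy flow; not the literal script contents
kubectl delete namespace dev --ignore-not-found    # remove any existing test instance
# build a container image containing the freshly built RPMs and load it into minikube
minikube image load ctageneric:dev                 # image name is an assumption
# create_instance.sh then spawns and configures the pods in the dev namespace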
Useful Kubernetes Commands¶
As the local cluster is a Kubernetes cluster deployed via minikube, it is useful to know a few commands for common tasks in the cluster (e.g. troubleshooting). See the following (non-exhaustive) list:
- `kubectl get namespaces`: Lists the namespaces in the cluster
- `kubectl get pods -n <namespace>`: Lists the pods in a given namespace
- `kubectl logs <pod-name> -n <namespace>`: Prints the logs for the given pod residing in the given namespace
- `kubectl exec -it <pod-name> -n <namespace> -- /bin/bash`: Opens a shell in the given pod residing in the given namespace
- `kubectl describe ...`: Useful for finding out what went wrong with a particular resource type if it did not start (see `kubectl describe --help`)
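For example, to inspect the frontend logs in the `dev` instance created earlier:

kubectl logs ctafrontend -n dev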