Run a CTA test instance from the CI runner in an independent virtual machine¶
This document describes how to get a CTA+EOS instance, with CTA built from source, running in a standalone VM with Alma9 as the operating system.
Install Minikube environment¶
This repository, minikube_cta_ci, contains all the necessary packages for setting up a Kubernetes cluster within a virtual machine running Alma9. Follow the instructions in the repository to get started.
Bootstrap the CI machine¶
This will install the mhvtl virtual tape library and the various scripts needed for the CI user cirunner:
# as root on newly installed machine
# Install minimal bootstrap toolbox
yum -y install git screen
# clone this repository using *node installer* deploy token
git clone https://gitlab+deploy-token-3204:gldt-zPCHBii5fCS4Q4u7gnBR@gitlab.cern.ch/cta/minikube_cta_ci.git
cd minikube_cta_ci
# Launch the rest in screen
screen bash -c 'bash ./01_bootstrap_minikube.sh'
# Optionally install some ws credentials
bash ./02_ctaws2024.sh
reboot # reboot the machine so that everything (mhvtl, sshd, ...) is started
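After the reboot, a quick way to confirm that mhvtl came up is to look for its virtual SCSI devices. This is an illustrative check, not part of the bootstrap scripts, and lsscsi may need to be installed first:

```shell
# Hedged post-reboot sanity check: mhvtl should expose virtual SCSI
# medium changers ("mediumx") and tape drives.
if command -v lsscsi >/dev/null 2>&1; then
  lsscsi -g | grep -Ei 'mediumx|tape' || echo "no mhvtl devices visible"
else
  echo "lsscsi not found, skipping check"
fi
```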
Start Kubernetes¶
Kubernetes should start automatically at boot time for the user cirunner. Test that everything is running by issuing commands like the ones below.
Note: these commands should be run as the cirunner user, so remember to do su cirunner first.
# Check there are some namespaces already (base health check)
kubectl get namespaces
# Check that the standard persistent volumes are there
kubectl get pv
If Kubernetes is not running or is in bad shape, launch start_minikube.sh as the cirunner user and try to understand/debug the problem.
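The checks above can be sketched as one defensive script (assuming kubectl is configured for cirunner; the sketch skips quietly when kubectl or the cluster is unavailable):

```shell
# Hedged cluster health-check sketch for the minikube setup above.
check_cluster() {
  if ! command -v kubectl >/dev/null 2>&1; then
    echo "kubectl not found, skipping"
    return 0
  fi
  # Base health check: can we reach the API server at all?
  if ! kubectl get namespaces >/dev/null 2>&1; then
    echo "cluster unreachable: run start_minikube.sh as cirunner"
    return 0
  fi
  # Wait until the single minikube node reports Ready
  kubectl wait --for=condition=Ready node --all --timeout=120s
}
check_cluster
```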
Compile CTA¶
To get started, download the CTA repository from GitLab.
SRPM creation¶
Here are the steps to create the SRPM from the CTA source code:
- Copy the repository files to your system's yum repository directory:
cp -f ~/CTA/continuousintegration/docker/ctafrontend/alma9/repos/*.repo /etc/yum.repos.d/
- Copy the version lock list to your system's yum plugin configuration directory:
cp -f ~/CTA/continuousintegration/docker/ctafrontend/alma9/yum/pluginconf.d/versionlock.list /etc/yum/pluginconf.d/
- Install the necessary packages:
yum -y install epel-release almalinux-release-devel git
yum -y install git wget gcc gcc-c++ cmake3 make rpm-build yum-utils
- Update the git submodules:
git submodule update --init --recursive
- Run the Oracle 21 installation script:
~/CTA/continuousintegration/docker/ctafrontend/alma9/installOracle21.sh
- Create a new directory for building the SRPM:
mkdir CTA_srpm
cd CTA_srpm
- Run the cmake command to configure the build:
cmake3 -DPackageOnly:Bool=true ../CTA
- Build the SRPM:
make cta_srpm
RPM creation¶
Follow these steps:
- Install the necessary packages:
yum -y install epel-release almalinux-release-devel git
yum -y install git wget gcc gcc-c++ cmake3 make rpm-build yum-utils
yum -y install yum-plugin-versionlock
- In the CTA git repository, update the git submodules:
git submodule update --init --recursive
- Install the necessary build dependencies:
yum-builddep --nogpgcheck -y CTA_srpm/RPM/SRPMS/*
- Create a new directory for building the RPM:
mkdir CTA_rpm
cd CTA_rpm
- Run the cmake command to configure the build:
CTA_VERSION=5 cmake3 ../CTA
- Build the RPM:
make cta_rpm
Create the Podman image for CTA and load it into Minikube¶
To create the Podman image for CTA and load it into Minikube, follow these steps:
- Create the Podman image:
cd ~/CTA/continuousintegration/ci_runner
./prepareImage.sh [rpms folder] [podman image tag]
In the rpms folder argument, put the path to where the CTA RPMs are saved (you can include the EOS RPMs in that folder as well, as long as you update the versionlock.list file). The podman image tag is the tag of the image that will be generated; the name of the generated image will be ctageneric:<podman image tag>.
- Save your Podman image to a tar file:
podman save -o ctageneric.tar localhost/ctageneric:<podman image tag>
- Load the image into Minikube:
minikube image load ctageneric.tar
- Check that the image is loaded:
minikube image ls
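As an illustration of what to expect, for a hypothetical tag dev1 the loaded image should be listed as localhost/ctageneric:dev1. A minimal sketch of the check (the tag is an assumption, and the minikube query only runs when the tool is available):

```shell
# Hypothetical tag "dev1"; the expected name follows the
# ctageneric:<podman image tag> convention described above.
tag="dev1"
expected="localhost/ctageneric:${tag}"
echo "expecting ${expected}"
# Only query minikube when it is actually available
if command -v minikube >/dev/null 2>&1; then
  minikube image ls | grep -F "${expected}" || echo "image not loaded yet"
fi
```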
Run a CTA instance test¶
To run a CTA instance test, follow these steps:
cd ~/CTA/continuousintegration/orchestration
./create_instance.sh -n stress -i $image_tag -D -O -d internal_postgres.yaml
cd tests
./test_client.sh -n stress
In $image_tag, put the tag of the image generated in the previous step. The -D
and -O
flags enable the database and the object store respectively; the -d
flag specifies the database yaml file.
Dev lifecycle¶
After recompiling the CTA code, rebuild and reload the image to run the system:
# if the old image exists under the same name as new one
podman images
podman rmi ctageneric:[podman image tag]
# prepare the new image
./prepareImage.sh [rpms folder] [podman image tag]
# save the image in a tar file (if the same name remove the old one)
podman save -o ctageneric.tar localhost/ctageneric:<podman image tag>
# remove the old image from minikube
minikube image rm localhost/ctageneric:<podman image tag>
# load the new image
minikube image load ../ci_runner/ctageneric.tar
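The cycle above can be sketched as one helper script. The dev1 tag and the RPMs directory default are illustrative assumptions, not fixed by the CI setup, and the sketch skips quietly when podman, minikube, or prepareImage.sh are not present:

```shell
#!/bin/bash
# Hedged sketch of the rebuild/reload cycle; run from
# ~/CTA/continuousintegration/ci_runner (where prepareImage.sh lives).
rebuild_and_reload() {
  local tag="${1:-dev1}"
  local rpms_dir="${2:-$HOME/CTA_rpm/RPM/RPMS}"
  local tool
  for tool in podman minikube; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "$tool not found, skipping"
      return 0
    fi
  done
  if [ ! -x ./prepareImage.sh ]; then
    echo "prepareImage.sh not found, skipping"
    return 0
  fi
  # Remove a stale image built under the same tag (--ignore tolerates a missing one)
  podman rmi --ignore "localhost/ctageneric:${tag}"
  ./prepareImage.sh "${rpms_dir}" "${tag}"
  podman save -o ctageneric.tar "localhost/ctageneric:${tag}"
  # Replace the copy held inside minikube
  minikube image rm "localhost/ctageneric:${tag}"
  minikube image load ctageneric.tar
}
rebuild_and_reload "$@"
```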