
Run a CTA test instance from the CI runner in an independent virtual machine.

This document describes how to get a CTA+EOS instance, with CTA built from source, running in a standalone VM with Alma9 as the operating system.

Install Minikube environment

This repository, minikube_cta_ci, contains all the necessary packages for setting up a Kubernetes cluster within a virtual machine running Alma9. Follow the instructions in the repository to get started.

Bootstrap the CI machine

This installs the mhvtl virtual tape library and the various scripts needed for the CI user cirunner:

# as root on newly installed machine

# Install minimal bootstrap toolbox
yum -y install git screen

# clone this repository using *node installer* deploy token
git clone https://gitlab+deploy-token-3204:gldt-zPCHBii5fCS4Q4u7gnBR@gitlab.cern.ch/cta/minikube_cta_ci.git

cd minikube_cta_ci

# Launch the rest in screen
screen bash -c 'bash ./01_bootstrap_minikube.sh'

# Optionally install some ws credentials
bash ./02_ctaws2024.sh

reboot # reboot the machine to have everything started
# mhvtl, sshd,...
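
After the reboot you can verify that mhvtl exposed its devices. A minimal sketch, assuming mhvtl emulates an STK L700 library (shown as mediumx by lsscsi) plus tape drives; the sample listing below is illustrative, in practice pipe real `lsscsi` output into the check:

```shell
# Count mhvtl-provided SCSI devices (library changer + tape drives).
# In practice: lsscsi | count_tape_devices
count_tape_devices() {
  grep -c -E 'mediumx|tape'
}

# Illustrative sample of `lsscsi` output on a machine with mhvtl running
sample='[3:0:0:0]  mediumx STK     L700        0107  /dev/sch0
[3:0:1:0]  tape    IBM     ULT3580-TD5 0107  /dev/st0'

printf '%s\n' "$sample" | count_tape_devices
```

If the count is 0 after the reboot, mhvtl did not start; check its service status before continuing.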

Start Kubernetes

Kubernetes should start automatically at boot time for the cirunner user. Check that everything is running with commands such as:

# Check there are some namespaces already (base health check)
kubectl get namespaces

# Check that the standard persistent volumes are there
kubectl get pv

If Kubernetes is not running or not in good shape, launch start_minikube.sh as the cirunner user and try to understand/debug the problem.
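
The health checks above can be scripted. A minimal sketch of a persistent-volume check, assuming the usual `kubectl get pv` column layout (STATUS is the fifth column); the sample output below is illustrative, in practice pipe in `kubectl get pv --no-headers`:

```shell
# Fail if any persistent volume is in a state other than Bound/Available.
# In practice: kubectl get pv --no-headers | check_pvs
check_pvs() {
  # Column 5 of `kubectl get pv` output is STATUS
  awk '$5 != "Bound" && $5 != "Available" { bad++ }
       END { exit (bad > 0) }'
}

# Illustrative sample of `kubectl get pv --no-headers` output
sample='pv01 1Gi RWO Recycle Bound default/claim1 standard 1h
pv02 1Gi RWO Recycle Available  standard 1h'

if printf '%s\n' "$sample" | check_pvs; then
  echo "all persistent volumes look healthy"
else
  echo "some persistent volumes are not Bound/Available" >&2
fi
```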

Compile CTA

To get started, clone the CTA repository from GitLab.

SRPM creation

These are the steps to create the SRPM from the CTA source code:

  1. Copy the repository files to your system's yum repository directory:
    cp -f ~/CTA/continuousintegration/docker/ctafrontend/alma9/repos/*.repo /etc/yum.repos.d/
    
  2. Copy the version lock list to your system's yum plugin configuration directory:
    cp -f ~/CTA/continuousintegration/docker/ctafrontend/alma9/yum/pluginconf.d/versionlock.list /etc/yum/pluginconf.d/
    
  3. Install the necessary packages:
    yum -y install epel-release almalinux-release-devel git
    yum -y install git wget gcc gcc-c++ cmake3 make rpm-build yum-utils
    
  4. Update the git submodules:
    git submodule update --init --recursive
    
  5. Run the Oracle 21 installation script:
    ~/CTA/continuousintegration/docker/ctafrontend/alma9/installOracle21.sh
    
  6. Create a new directory for building the SRPM:
    mkdir CTA_srpm
    cd CTA_srpm
    
  7. Run the cmake command to configure the build:
    cmake3 -DPackageOnly:Bool=true ../CTA
    
  8. Build the SRPM:
    make cta_srpm
    
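The eight steps above can be chained into a single script with error handling. A sketch, assuming the CTA clone and the CTA_srpm build directory both live in your home directory; it is dry-run by default and only prints each command (set DRY_RUN=0 to actually execute them, as root):

```shell
#!/bin/bash
# Sketch of the SRPM steps above as one script. Dry-run by default.
set -euo pipefail
DRY_RUN="${DRY_RUN:-1}"

run() {
  # Print the command in dry-run mode, execute it otherwise
  if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

ALMA9_DIR=~/CTA/continuousintegration/docker/ctafrontend/alma9

run cp -f "$ALMA9_DIR"/repos/*.repo /etc/yum.repos.d/
run cp -f "$ALMA9_DIR"/yum/pluginconf.d/versionlock.list /etc/yum/pluginconf.d/
run yum -y install epel-release almalinux-release-devel git
run yum -y install git wget gcc gcc-c++ cmake3 make rpm-build yum-utils
run git -C ~/CTA submodule update --init --recursive
run "$ALMA9_DIR"/installOracle21.sh
run mkdir -p ~/CTA_srpm
run cd ~/CTA_srpm
run cmake3 -DPackageOnly:Bool=true ../CTA
run make cta_srpm
```

Because run() is a shell function, `run cd` changes the script's working directory in real mode, so the relative ../CTA path behaves as in the manual steps.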

RPM creation

Follow these steps:

  1. Install the necessary packages:
    yum -y install epel-release almalinux-release-devel git
    yum -y install git wget gcc gcc-c++ cmake3 make rpm-build yum-utils
    yum -y install yum-plugin-versionlock
    
  2. In the CTA git repository, update the git submodules:
    git submodule update --init --recursive
    
  3. Install the necessary build dependencies:
    yum-builddep --nogpgcheck -y CTA_srpm/RPM/SRPMS/*
    

  4. Create a new directory for building the RPM:
    mkdir CTA_rpm
    cd CTA_rpm
    
  5. Run the cmake command to configure the build:
    CTA_VERSION=5 cmake3 ../CTA
    
  6. Build the RPM:
    make cta_rpm
    
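Before moving on to image creation, it is worth checking that the build actually produced RPMs. A minimal sketch; the throwaway demo directory stands in for the real RPM output directory, whose exact location depends on your rpmbuild layout:

```shell
# Count the .rpm files in a directory (0 if none) without relying on
# `wc -l` output formatting.
count_rpms() {
  dir="$1"
  set -- "$dir"/*.rpm
  # An unmatched glob stays literal, so check the first expansion exists
  if [ -e "$1" ]; then echo $#; else echo 0; fi
}

# Demo with a throwaway directory standing in for the real RPM output dir
demo=$(mktemp -d)
touch "$demo/cta-frontend-5.rpm" "$demo/cta-taped-5.rpm"
echo "found $(count_rpms "$demo") rpm(s)"
rm -rf "$demo"
```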

Create the Podman image for CTA and load it into Minikube

To create the Podman image for CTA and load it into Minikube, follow these steps:

  1. Create the Podman image:

    cd ~/CTA/continuousintegration/ci_runner
    ./prepareImage.sh [rpms folder] [podman image tag]
    

    For the rpms folder, pass the path where the CTA RPMs are saved (you can also include the EOS RPMs in that folder, as long as you update the versionlock.list file). The podman image tag is the tag of the image that will be generated; the generated image will be named ctageneric:<podman image tag>.

  2. Save your Podman image to a tar file

    podman save -o ctageneric.tar localhost/ctageneric:<podman image tag> 
    

  3. Load the image into Minikube:

    minikube image load ctageneric.tar
    

  4. Check that the image is loaded:

    minikube image ls
    
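
Step 4 can be turned into a scriptable check. A sketch, assuming loaded images are listed with a localhost/ prefix as in the podman save command above; the sample listing is illustrative, in practice pipe in the real `minikube image ls` output:

```shell
# Succeed only if ctageneric:<tag> appears in the image listing on stdin.
# In practice: minikube image ls | has_image <podman image tag>
has_image() {
  grep -q "ctageneric:$1\$"
}

# Illustrative sample of `minikube image ls` output
sample='docker.io/library/busybox:latest
localhost/ctageneric:dev'

if printf '%s\n' "$sample" | has_image dev; then
  echo "image ctageneric:dev is loaded"
fi
```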

Run a CTA instance test

To run a CTA instance test, follow these steps:

cd ~/CTA/continuousintegration/orchestration
./create_instance.sh -n stress -i $image_tag -D -O -d internal_postgres.yaml
cd tests
./test_client.sh -n stress

In $image_tag put the tag of the podman image generated in the previous step. The -D and -O flags wipe the database and the object store respectively; the -d flag specifies the database yaml file.

Dev lifecycle

After recompiling the CTA code, rebuild and reload the image as follows to run the system:

# if an old image exists under the same name as the new one, remove it

podman images 
podman rmi ctageneric:[podman image tag]

# prepare the new image

./prepareImage.sh [rpms folder] [podman image tag]

# save the image to a tar file (remove the old tar file first if it has the same name)

podman save -o ctageneric.tar localhost/ctageneric:<podman image tag> 

# remove the old image from minikube 

minikube image rm localhost/ctageneric:<podman image tag> 

# load a new image 

minikube image load ../ci_runner/ctageneric.tar
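
The whole cycle above can be wrapped in one script, run from ~/CTA/continuousintegration/ci_runner. A sketch; it is dry-run by default (set DRY_RUN=0 to execute), and the default tag and rpms path are illustrative placeholders:

```shell
#!/bin/bash
# Sketch of the rebuild/reload cycle above. Dry-run by default.
set -euo pipefail
DRY_RUN="${DRY_RUN:-1}"
tag="${1:-dev}"            # podman image tag (illustrative default)
rpms_dir="${2:-/tmp/rpms}" # folder with the CTA rpms (illustrative default)

run() {
  # Print the command in dry-run mode, execute it otherwise
  if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

run podman rmi "ctageneric:$tag" || true              # ignore if no old image
run ./prepareImage.sh "$rpms_dir" "$tag"
run podman save -o ctageneric.tar "localhost/ctageneric:$tag"
run minikube image rm "localhost/ctageneric:$tag" || true
run minikube image load ctageneric.tar
```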