Deprecated

This page is deprecated and may contain information that is no longer up to date.

Running a CTA Test Instance on an Alma9 VM

This document describes how to get a CTA+EOS instance, with CTA built from source, running in a standalone VM (Virtual Machine) with Alma9 as the operating system.

Installing the Minikube Environment

We are going to start by bootstrapping the CI machine. This installs the mhvtl tape library and the various scripts needed for the CI user: cirunner. Note that the following commands should be executed as the root user.

# Install minimal bootstrap toolbox
yum -y install git screen

# One should already be in this directory on startup
cd ~
# Clone this repository using the *node installer* deploy token
git clone https://gitlab+deploy-token-3204:gldt-zPCHBii5fCS4Q4u7gnBR@gitlab.cern.ch/cta/minikube_cta_ci.git
cd minikube_cta_ci

# Launch the rest in screen
screen bash -c 'bash ./01_bootstrap_minikube.sh'

# Optionally install some ws credentials
bash ./02_ctaws2024.sh

# reboot the machine to have everything started
reboot

If you need additional information on this installation process, you can take a look at the minikube_cta_ci repository.

The cirunner User

The above script created the cirunner user. This user is responsible for deploying the containers. When logged in as root, you can switch to cirunner using:

su cirunner

If you want to ssh into the VM directly as cirunner:

  1. ssh into the VM as root
  2. Create the directory /home/cirunner/.ssh:

    mkdir /home/cirunner/.ssh
    
  3. Copy the authorized_keys file from root to cirunner:

    cp /root/.ssh/authorized_keys /home/cirunner/.ssh/authorized_keys
    
  4. Change the ownership of the /home/cirunner/.ssh directory to cirunner:

    chown -R cirunner:cirunner /home/cirunner/.ssh/
    
  5. Exit the ssh session and you should be able to ssh into your VM using:

    ssh cirunner@<yourvm>
    

Starting Kubernetes

Kubernetes is automatically started at boot time for the cirunner user. While logged in as cirunner, you should now be able to run kubectl commands.

Running kubectl get namespaces should output something along the lines of:

default           Active   2m16s
kube-node-lease   Active   2m16s
kube-public       Active   2m16s
kube-system       Active   2m16s

Running kubectl get pv lets you verify that the standard persistent volumes are there; it should output something along the lines of:

log00   100Gi      RWX            Recycle          Available                           <unset>                          2m14s
sg0     1Mi        RWO            Recycle          Available           librarydevice   <unset>                          2m15s
stg00   2Gi        RWX            Recycle          Available                           <unset>                          2m14s

Test that everything is running by issuing the above commands and verifying that they work. If Kubernetes is not running or not in good shape, you can check the logs as cirunner to try to understand the problem:

cat /tmp/start_minikube.sh.log

Running start_minikube.sh manually might also help.
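
If you need a quick sanity check before digging into the log, a few generic minikube/kubectl commands (a sketch, not specific to this setup) can confirm whether the cluster itself is up:

# Check the cluster and node status
minikube status
kubectl get nodes

# List all pods across all namespaces to spot anything stuck or crashing
kubectl get pods -A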

Compiling CTA

Ensure that you are logged in as the root user before continuing. To get started, clone the CTA repository from GitLab:

cd ~
git clone https://gitlab.cern.ch/cta/CTA.git
cd CTA
git submodule update --init --recursive
cd ..

There are other ways of setting this up. For example, for development purposes it might be useful to clone the repository on your local machine and sync the resulting CTA directory to /root/CTA on the VM. This way, you can make changes locally and test them out on the VM.
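
As an illustration, such a local-to-VM sync could look like the sketch below (<yourvm> is a placeholder; this assumes rsync is available on both ends and that you can ssh to the VM as root):

# Push a local CTA checkout to /root/CTA on the VM
rsync -az --delete ./CTA/ root@<yourvm>:/root/CTA/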

SRPM Creation

This section will walk you through the process of building the source RPM (RPM Package Manager) packages and setting up your environment.

Note for developers: these SRPMs do not need to be rebuilt when making changes to the CTA source code; they only need to be rebuilt when dependencies change.

Below are the steps followed to create the SRPM from the CTA source code.

  1. Ensure you are in the /root directory:

    cd ~
    

    Running ls should now show at least the directory:

    CTA
    
  2. Copy the repository files to your system's yum repository directory:

    cp -f CTA/continuousintegration/docker/ctafrontend/alma9/repos/*.repo /etc/yum.repos.d/
    
  3. Copy the version lock list to your system's yum plugin configuration directory:

    cp -f CTA/continuousintegration/docker/ctafrontend/alma9/yum/pluginconf.d/versionlock.list /etc/yum/pluginconf.d/
    
  4. Install the necessary packages:

    yum -y install epel-release almalinux-release-devel git
    yum -y install git wget gcc gcc-c++ cmake3 make rpm-build yum-utils
    yum -y install yum-plugin-versionlock
    
  5. Run the Oracle 21 installation script:

    CTA/continuousintegration/docker/ctafrontend/alma9/installOracle21.sh
    
  6. Create a new directory for building the SRPM:

    mkdir CTA_srpm
    cd CTA_srpm
    
  7. Run the cmake command to configure the build:

    cmake3 -DPackageOnly:Bool=true ../CTA
    
  8. Build the SRPM:

    make cta_srpm -j 4
    
  9. Install the build dependencies from the SRPM using yum-builddep:

    yum-builddep --nogpgcheck -y ~/CTA_srpm/RPM/SRPMS/*
    

At this point, the root user has set up every dependency needed for the CTA software to be compiled.
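
To sanity-check the result, listing the SRPM output directory used in step 9 should show the generated source RPM(s):

ls ~/CTA_srpm/RPM/SRPMS/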

RPM Creation

This step details building the packages that contain the compiled binaries, configuration files, and other necessary components for the CTA project itself. From this point onward, we are executing everything as the cirunner user.

  1. ssh into the VM with the cirunner user:

    ssh cirunner@<yourvm>
    
  2. Clone the CTA repository again (this time it should end up in /home/cirunner as you are logged in as cirunner):

    cd ~
    git clone https://gitlab.cern.ch/cta/CTA.git
    cd CTA
    git submodule update --init --recursive
    cd ..
    
  3. Create a new directory for building the RPM:

    mkdir CTA_rpm
    cd CTA_rpm
    
  4. Run the cmake command to configure the build:

    cmake3 ../CTA
    
  5. Build the RPM:

    make cta_rpm -j 4
    

Whenever changes are made to the CTA source code, steps 4 and 5 should be executed to update the RPMs. Of course, depending on the changes it might be sufficient to only execute step 5.
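
For example, after a source-only change the incremental rebuild typically boils down to:

cd ~/CTA_rpm
# Re-running cmake3 is only needed if the build configuration changed
make cta_rpm -j 4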

podman CTA Setup

To create the podman image for CTA and load it into Minikube, follow the steps below. Once again, ensure that you are logged in as cirunner.

  1. Create the podman image:

    cd ~/CTA/continuousintegration/ci_runner
    ./prepareImage.sh [RPMS location] [image-tag]
    

    The image-tag is the tag of the image that will be generated. The name of the generated image will be ctageneric:[image-tag].

    In the CTA RPMs location, you can also include the EOS RPMs, as long as you update the versionlock.list file. However, you do not need to do this explicitly. If you followed the CTA build instructions exactly, the RPMS location should be ~/CTA_rpm/RPM/RPMS/x86_64. Note that the entire path needs to be provided, up to the x86_64 part. A complete example with a concrete tag is shown after this list.

  2. Save your Podman image to a tar file:

    podman save -o ctageneric.tar localhost/ctageneric:[image-tag]
    
  3. To load the image into Minikube, execute:

    minikube image load ctageneric.tar
    
  4. Check that the image is loaded:

    minikube image ls
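
For reference, a complete run of the steps above with a hypothetical image tag dev (assuming the default RPM location from the build instructions) could look like:

cd ~/CTA/continuousintegration/ci_runner
./prepareImage.sh ~/CTA_rpm/RPM/RPMS/x86_64 dev
podman save -o ctageneric.tar localhost/ctageneric:dev
minikube image load ctageneric.tar
# Verify that the image is now available in Minikube
minikube image ls | grep ctageneric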
    

Running a CTA Instance Test

Everything is now set up to run a CTA instance test.

  1. Navigate to the following directory:

    cd ~/CTA/continuousintegration/orchestration
    
  2. Create the containers using:

    ./create_instance.sh -n stress -D -O -d internal_postgres.yaml -i [image-tag]
    

    For [image-tag], use the tag of the container image generated in the previous step. The -D and -O flags enable the database and the object store respectively, and the -d flag specifies the database yaml file.

  3. If everything went correctly, you should now see a number of containers running.

    kubectl get pods -n stress
    

    Running the above command should yield something like this:

    NAME          READY   STATUS      RESTARTS   AGE
    client        1/1     Running     0          61s
    ctacli        1/1     Running     0          61s
    ctaeos        1/1     Running     0          60s
    ctafrontend   1/1     Running     0          60s
    init          0/1     Completed   0          80s
    kdc           1/1     Running     0          60s
    postgres      1/1     Running     0          88s
    tpsrv01       2/2     Running     0          61s
    tpsrv02       2/2     Running     0          60s
    
  4. If all the containers are up and running, you can execute a test, e.g. the client test:

    cd tests
    ./test_client.sh -n stress
    

    Note that these tests can take (quite) a few minutes to complete.
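
While a test is running, it can help to follow the logs of the pod doing the work, e.g. the client pod shown above (a generic kubectl sketch):

# Tail the client pod logs in the stress namespace
kubectl -n stress logs -f client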

Development Lifecycle

The development lifecycle typically consists of a few stages. For the development part, everything is executed as the cirunner user.

  1. Recompiling CTA:

    cd ~/CTA_rpm
    # Note that running cmake is not always necessary
    cmake3 ../CTA
    make cta_rpm
    
  2. Cleaning the old images:

    kubectl delete namespace stress
    podman rmi ctageneric:[image-tag]
    minikube image rm localhost/ctageneric:[image-tag]
    
  3. Creating and loading the new images:

    # Prepare the new image
    cd ~/CTA/continuousintegration/ci_runner
    ./prepareImage.sh ~/CTA_rpm/RPM/RPMS/x86_64 [image-tag]
    
    # Save the image in a tar file
    rm ctageneric.tar -f
    podman save -o ctageneric.tar localhost/ctageneric:[image-tag]
    
    # Load the new image
    minikube image load ctageneric.tar
    
  4. Now the containers can be redeployed:

    cd ~/CTA/continuousintegration/orchestration
    ./create_instance.sh -n stress -i [image-tag] -D -O -d internal_postgres.yaml
    

    Once again, remember to replace [image-tag] accordingly.

The above steps (except for the compilation step) are also available in the redeploy.sh script in CTA/continuousintegration/ci_runner:

# Recompile your code first manually...
cd ~/CTA/continuousintegration/ci_runner
# Providing no image tag defaults the image tag to "dev"
./redeploy.sh [image-tag]

After this, happy testing.