Deprecated
This page is deprecated and may contain information that is no longer up to date.
Running a CTA Test Instance on an Alma9 VM¶
This document describes how to get a CTA+EOS instance, with CTA built from source, running in a standalone VM (Virtual Machine) with Alma9 as the operating system.
Installing the Minikube Environment¶
We are going to start by bootstrapping the CI machine. This will install the mhvtl tape library and the various scripts needed for the CI user: `cirunner`. Note that the following commands should be executed as the `root` user.
# Install minimal bootstrap toolbox
yum -y install git screen
# One should already be in this directory on startup
cd ~
# Clone this repository using *node installer* deploy token
git clone https://gitlab+deploy-token-3204:gldt-zPCHBii5fCS4Q4u7gnBR@gitlab.cern.ch/cta/minikube_cta_ci.git
cd minikube_cta_ci
# Launch the rest in screen
screen bash -c 'bash ./01_bootstrap_minikube.sh'
# Optionally install some ws credentials
bash ./02_ctaws2024.sh
# reboot the machine to have everything started
reboot
If you need additional information on this installation process, you can take a look at the minikube_cta_ci repository.
The cirunner User¶
The above script created the `cirunner` user. This user is responsible for the deployment of the containers. One can switch to `cirunner` when logged in as `root` using:
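The exact command is not shown on this page; a typical way to switch users, assuming a standard interactive shell setup, is:

```bash
# Switch from root to the cirunner user (assumed command; adjust to your setup)
su - cirunner
```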
In case you want to ssh into the VM as `cirunner` (the corresponding commands are sketched after this list):

1. ssh into the VM as `root`.
2. Create the directory `/home/cirunner/.ssh`.
3. Copy the ssh-related `authorized_keys` file from the `root` user to the `cirunner` user.
4. Change the ownership of the `cirunner/.ssh` directory to `cirunner`.
5. Exit the ssh session; you should now be able to ssh into your VM as `cirunner`.
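The original commands are not reproduced on this page; a minimal sketch of these steps, assuming the VM is reachable as `<vm-hostname>`, is:

```bash
# On the VM, as root: create the .ssh directory for cirunner
mkdir -p /home/cirunner/.ssh
# Reuse root's authorized_keys so the same key also works for cirunner
cp /root/.ssh/authorized_keys /home/cirunner/.ssh/authorized_keys
# Give cirunner ownership of its .ssh directory and contents
chown -R cirunner:cirunner /home/cirunner/.ssh
# From your own machine, log in directly as cirunner
ssh cirunner@<vm-hostname>
```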
Starting Kubernetes¶
Kubernetes is automatically started at boot time for the `cirunner` user. While logged in as `cirunner`, you should now be able to run `kubectl` commands.
Running `kubectl get namespaces` should output something along the lines of:
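The exact output is not reproduced here; on a freshly started minikube cluster you would typically see at least the default Kubernetes namespaces, for example:

NAME              STATUS   AGE
default           Active   2m
kube-node-lease   Active   2m
kube-public       Active   2m
kube-system       Active   2m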
Running `kubectl get pv` should check that the standard persistent volumes are there and output something along the lines of:
log00 100Gi RWX Recycle Available <unset> 2m14s
sg0 1Mi RWO Recycle Available librarydevice <unset> 2m15s
stg00 2Gi RWX Recycle Available <unset> 2m14s
Test that everything is running by issuing the above commands and verifying that they work. If Kubernetes is not running or not in good shape, you can, as `cirunner`, check the logs to try and understand the problem:
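The exact command is not shown on this page; one option, assuming the cluster was set up through minikube, is:

```bash
# Collect logs from the local minikube cluster to diagnose startup problems
minikube logs
```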
Running `start_minikube.sh` manually might also help.
Compiling CTA¶
Ensure that you are logged in as the `root` user before continuing.
To get started, clone the CTA repository from GitLab:
cd ~
git clone https://gitlab.cern.ch/cta/CTA.git
cd CTA
git submodule update --init --recursive
cd ..
There are other ways of setting this up. For example, for development purposes it might be useful to clone the repository on your local machine and sync the resulting `CTA` directory to `/root/CTA` on the VM. This way, you can make changes locally and test them out on the VM.
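As an illustration of that workflow, and assuming the VM is reachable as `<vm-hostname>`, a local checkout could be synced with something like:

```bash
# Mirror a local CTA checkout to /root/CTA on the VM (hostname is an assumption)
rsync -a --delete ~/CTA/ root@<vm-hostname>:/root/CTA/
```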
SRPM Creation¶
This section will walk you through the process of building the source RPM (RPM Package Manager) packages and setting up your environment.
Note for developers: these SRPMs do not need to be rebuilt when making changes to the CTA source code. Only when dependencies change do these need to be rebuilt.
Below are the steps followed to create the SRPM from the CTA source code (a sketch of the corresponding commands follows the list):

1. Ensure you are in the `/root` directory. Running `ls` should now show at least the `CTA` directory.
2. Copy the repository files to your system's yum repository directory.
3. Copy the version lock list to your system's yum plugin configuration directory.
4. Install the necessary packages.
5. Run the Oracle 21 installation script.
6. Create a new directory for building the SRPM.
7. Run the cmake command to configure the build.
8. Build the SRPM.
9. Build the yum dependencies.
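The original command listings are omitted on this page; the following is a minimal sketch of these steps, in which the repository-file and script paths inside the CTA checkout, the `PackageOnly` cmake option, and the `cta_srpm` make target are assumptions that may differ between CTA versions:

```bash
cd /root
# Steps 2-3: copy the yum repo files and the version lock list shipped with CTA
# (the source paths inside the checkout are assumptions; check your CTA version)
cp CTA/continuousintegration/docker/alma9/etc/yum.repos.d/*.repo /etc/yum.repos.d/
cp CTA/continuousintegration/docker/alma9/etc/yum/pluginconf.d/versionlock.list /etc/yum/pluginconf.d/
# Step 4: install the basic build tooling
yum install -y gcc-c++ cmake make rpm-build yum-utils
# Step 5: run the Oracle 21 installation script shipped with CTA (path is an assumption)
CTA/continuousintegration/docker/alma9/installOracle21.sh
# Steps 6-8: configure an out-of-source build and create the SRPMs
mkdir CTA_srpm
cd CTA_srpm
cmake -DPackageOnly:Bool=true ../CTA   # option name is an assumption
make cta_srpm                          # target name is an assumption
# Step 9: install the build dependencies declared by the generated SRPMs
yum-builddep -y RPM/SRPMS/*.src.rpm
```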
At this point, the `root` user has set up every dependency needed for the CTA software to be compiled.
RPM Creation¶
This step details building the packages that contain the compiled binaries, configuration files, and other necessary components for the CTA project itself. From this point onward, we are executing everything as the `cirunner` user (a sketch of the corresponding commands follows the list below).
1. ssh into the VM with the `cirunner` user.
2. Clone the `CTA` repository again (this time it should end up in `/home/cirunner` as you are logged in as `cirunner`).
3. Create a new directory for building the RPM.
4. Run the `cmake` command to configure the build.
5. Build the RPM.
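The original commands are not reproduced on this page; a minimal sketch, in which the VM hostname and the `cta_rpm` make target are assumptions, is:

```bash
# Step 1: log in to the VM as cirunner (hostname is an assumption)
ssh cirunner@<vm-hostname>
# Step 2: clone CTA again, this time under /home/cirunner
cd ~
git clone https://gitlab.cern.ch/cta/CTA.git
cd CTA
git submodule update --init --recursive
cd ..
# Steps 3-4: create a separate build directory and configure it
mkdir CTA_rpm
cd CTA_rpm
cmake ../CTA
# Step 5: build the RPMs (target name is an assumption)
make cta_rpm -j 4
```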
Whenever changes are made to the CTA source code, steps 4 and 5 should be executed to update the RPMs. Of course, depending on the changes it might be sufficient to only execute step 5.
podman CTA Setup¶
To create the podman image for CTA and load it into Minikube, follow the steps below (the corresponding commands are sketched after the list). Once again, ensure that you are logged in as `cirunner`.
1. Create the podman image. The `image-tag` is the tag of the image that will be generated; the name of the generated image will be `ctageneric:[image-tag]`. In the CTA RPMs location, you can include the EOS RPMs as well, as long as you update the `versionlock.list` file. However, you do not need to do this explicitly. If you followed the CTA build instructions exactly, the RPMs location should be `~/CTA_rpm/RPM/RPMS/x86_64`. Note that the entire path needs to be provided, up to the `x86_64` part.
2. Save your Podman image to a tar file.
3. Load the image into Minikube.
4. Check that the image is loaded.
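These commands essentially mirror the ones repeated in the Development Lifecycle section below; in this sketch, `[image-tag]` is whatever tag you choose, and the final `minikube image ls` check is an assumption:

```bash
# Step 1: build the ctageneric image from the freshly built RPMs
cd ~/CTA/continuousintegration/ci_runner
./prepareImage.sh ~/CTA_rpm/RPM/RPMS/x86_64 [image-tag]
# Step 2: save the image to a tar file
podman save -o ctageneric.tar localhost/ctageneric:[image-tag]
# Step 3: load the image into minikube
minikube image load ctageneric.tar localhost/ctageneric:[image-tag]
# Step 4: verify that the image is available inside minikube (check is an assumption)
minikube image ls | grep ctageneric
```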
Running a CTA Instance Test¶
Everything is now set up to run a CTA instance test.
1. Navigate to the `CTA/continuousintegration/orchestration` directory.
2. Create the containers using the `create_instance.sh` script (see the sketch after this list). In `[image-tag]`, put the tag of the image generated in the previous step. The `-D` and `-O` flags enable the database and the object store respectively. The `-d` flag specifies the database yaml file.
3. If everything went correctly, you should now see a number of containers running; the sketch below shows how to list them.
4. If all the containers are up and running, you can execute e.g. a stress test. Note that these tests can take (quite) a few minutes to complete.
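The deployment commands are the same ones used again in the Development Lifecycle section below; the pod listing is an assumption:

```bash
# Steps 1-2: deploy a CTA test instance in the "stress" namespace
cd ~/CTA/continuousintegration/orchestration
./create_instance.sh -n stress -i [image-tag] -D -O -d internal_postgres.yaml
# Step 3: list the containers of the instance (command is an assumption)
kubectl -n stress get pods
```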
Development Lifecycle¶
The development lifecycle typically consists of a few stages. For the development part, everything is executed as the `cirunner` user.
1. Recompiling CTA:
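The exact command is not reproduced here; assuming the build directory from the RPM creation step and the `cta_rpm` make target, recompiling boils down to:

```bash
# Rebuild the CTA RPMs after source changes (make target is an assumption)
cd ~/CTA_rpm
make cta_rpm -j 4
```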
2. Cleaning the old images:
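The original command is omitted here; a plausible cleanup, assuming the image names used above, is:

```bash
# Remove the previously loaded image from minikube and from the local podman storage
minikube image rm localhost/ctageneric:[image-tag]
podman rmi localhost/ctageneric:[image-tag]
```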
3. Creating and loading the new images:
# Prepare the new image
cd ~/CTA/continuousintegration/ci_runner
./prepareImage.sh ~/CTA_rpm/RPM/RPMS/x86_64 [image-tag]
# Save the image in a tar file
rm ctageneric.tar -f
podman save -o ctageneric.tar localhost/ctageneric:[image-tag]
# Load the new image
minikube image load ctageneric.tar localhost/ctageneric:[image-tag]
4. Now the containers can be redeployed:
cd ~/CTA/continuousintegration/orchestration
./create_instance.sh -n stress -i [image-tag] -D -O -d internal_postgres.yaml
Once again, remember to replace `[image-tag]` accordingly.
The above steps (except for the compilation step) are also available in the `redeploy.sh` script in `CTA/continuousintegration/ci_runner`:
# Recompile your code first manually...
cd ~/CTA/continuousintegration/ci_runner
# Providing no image tag defaults the image tag to "dev"
./redeploy.sh [image-tag]
After this, happy testing.