# Run a CTA test instance from build tree in an independent virtual machine
Info
These instructions have been tested on a CentOS 7 / CC7 machine with access to the CERN repositories.
This document describes how to get a CTA+EOS instance, with CTA built from source, running in a standalone VM.
## User environment
The `vmBootstrap` directory contains all the scripts needed to go from a minimal CentOS 7 or CC7 installation to running kubernetes with CTA checked out and compiled.
A full CTA source tree should be cloned or copied onto the target system, and the scripts should be run from `.../CTA/continuousintegration/ci_runner/vmBootstrap`:
This will create a new user (default is `cta` if no `<user>` is specified) and prompt for a password. The user will be a sudoer (no password required). The CTA sources will then be cloned again into the user's home directory by the script.
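As a sketch, assuming the entry-point script in `vmBootstrap` is called `bootstrapSystem.sh` (this name is an assumption; check the actual script names in your checkout), the user-creation step looks like:

```shell
cd ~/CTA/continuousintegration/ci_runner/vmBootstrap
# <user> is optional and defaults to cta; you will be prompted for a password
./bootstrapSystem.sh <user>
```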
## CTA RPMs
The user should then log in as the `cta` user and run the CTA bootstrap script:

```shell
su - cta
cd ~/CTA/continuousintegration/ci_runner/vmBootstrap
./bootstrapCTA.sh [cern] [xrootd version]
```
This will check out CTA from git, install the necessary build RPMs, and compile. Add the `cern` parameter if you would like to install CTA dependencies from the CERN repositories; this only works from inside the CERN network. Otherwise run the script without arguments to get packages from public repos. You can also pass the XRootD version; by default it is version 5.
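For example, based on the `./bootstrapCTA.sh [cern] [xrootd version]` signature above, the two common invocations are:

```shell
# Inside the CERN network: use CERN repositories, default XRootD version (5)
./bootstrapCTA.sh cern
# Outside CERN: public repositories only
./bootstrapCTA.sh
```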
It will generate the CTA RPMs in `~/CTA-build/RPM/RPMS/x86_64` (the path later passed to `prepareImage.sh`).
## Install MHVTL
Warning
As of 21/07/2021, the version of mhvtl in the CERN repos is too outdated and will not work with `recreate_buildtree_running_environment.sh`, so the command below should be run without any arguments.
The script will install the virtual tape library services to imitate a real tape environment. Use the `cern` parameter if you want to install mhvtl v1.5.2 from the CASTOR repository. By default (no argument) it will get v1.6.3 from the official GitHub project.
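A sketch of the two install modes, assuming the MHVTL bootstrap script in `vmBootstrap` is named `bootstrapMHVTL.sh` (the name is an assumption; verify it in your checkout):

```shell
cd ~/CTA/continuousintegration/ci_runner/vmBootstrap
./bootstrapMHVTL.sh         # default: mhvtl v1.6.3 from the official GitHub project
# ./bootstrapMHVTL.sh cern  # mhvtl v1.5.2 from the CASTOR repository (see warning above)
```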
## Kubernetes setup
The user should then run the script to set up kubernetes:
A reboot is currently required at this point.
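Assuming the kubernetes bootstrap script follows the same naming pattern as the others (`bootstrapKubernetes.sh` is an assumption; check your checkout), this step is:

```shell
cd ~/CTA/continuousintegration/ci_runner/vmBootstrap
./bootstrapKubernetes.sh
sudo reboot   # a reboot is currently required at this point
```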
## Docker image
The system tests run with a single image with all the non-CTA RPMs pre-loaded. The image only needs to be generated once:
In the `rpms folder` argument, put the path where the CTA RPMs are saved (you can include the EOS RPMs in that folder as well). The `docker image tag` argument is the tag of the image that will be generated; the generated image will be named `ctageneric:<docker image tag>`.
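To make the naming explicit, the tag argument maps onto the image name as follows (the invocation in the comments mirrors the helper script at the end of this page):

```shell
# The image built by prepareImage.sh is tagged ctageneric:<docker image tag>
image_tag="dev"
image_name="ctageneric:${image_tag}"
echo "$image_name"   # → ctageneric:dev

# Actual invocation (requires the CTA checkout and the built RPMs):
# cd ~/CTA/continuousintegration/ci_runner
# ./prepareImage.sh ~/CTA-build/RPM/RPMS/x86_64 "$image_tag"
```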
## Preparing the environment (MHVTL, kubernetes volumes...)
MHVTL should then be set up by running:
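The script referenced in the warning above handles this step; assuming it lives alongside the other `ci_runner` scripts (as `recreate_ci_running_environment.sh` does in the helper script below), the invocation is:

```shell
cd ~/CTA/continuousintegration/ci_runner
sudo ./recreate_buildtree_running_environment.sh
```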
## Preparing the CTA instance
Create a `database.yaml` file describing your database endpoint. See the sample YAML files in:
- CTA/continuousintegration/orchestration
- the Database section
- "How to run a local postgres instance in a container"
The CTA instance can be created by running (typically):

```shell
cd ~/CTA/continuousintegration/orchestration
sudo ./create_instance.sh -n stress -i $image_tag -D -O -d database.yaml
```
In `$image_tag`, put the tag of the Docker image generated in the previous step. The `-D` and `-O` flags enable the database and the object store respectively. The `-d` flag specifies the database YAML file.
In order to see your resources, run the following:
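The exact command was not preserved here; with a kubernetes setup like this one, the standard way to inspect the instance is `kubectl` (the `stress` namespace matches the `-n stress` argument above):

```shell
kubectl get namespaces
kubectl -n stress get pods
```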
You can now run the tests to verify your CTA instance.
The tests use the `./tests/prepare_tests.sh` script to perform an initial configuration of the deployment; you can run this standalone if preferred.
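A sketch of running the preparation step standalone, assuming the script takes the namespace via `-n` like the other orchestration scripts (check the script's usage output first):

```shell
cd ~/CTA/continuousintegration/orchestration
sudo ./tests/prepare_tests.sh -n stress
```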
## Helper Script
This script will delete the instance, prepare the image, recreate the environment and create the instance again. It is useful for quick testing.
```shell
#!/bin/bash
# Print an error and exit; used when preparing the image fails.
die() { echo "$*" >&2; exit 1; }

image_tag="dev"
cd ~/CTA/continuousintegration/orchestration
./delete_instance.sh -n stress
cd ~/CTA/continuousintegration/ci_runner
./prepareImage.sh ~/CTA-build/RPM/RPMS/x86_64 "$image_tag" || die "Failed to prepare image"
sudo ./recreate_ci_running_environment.sh
cd ~/CTA/continuousintegration/orchestration
sudo ./create_instance.sh -n stress -i "$image_tag" -D -O -d internal_postgres.yaml
```