Deprecated
This page is deprecated and may contain information that is no longer up to date.
CTA Continuous Integration¶
The CTA repository contains all the tooling required for the gitlab CI system. It can be found in the continuousintegration directory and in the .gitlab-ci.yml file.
This document describes how to use these tools interactively.
Launching a CTA test instance¶
A CTA test instance is a kubernetes namespace.
It is essentially a cluster of pods on its own DNS sub-domain: inside a CTA namespace, ping ctafrontend will ping ctafrontend.ctaeos, and the other services defined in the namespace can be reached in the same way.
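As a quick illustration, and assuming an instance named ctatest whose ctacli pod ships the ping utility (an assumption, not something this page guarantees), you can exercise this in-namespace name resolution with kubectl:

```
# hypothetical check: ping the ctafrontend service from inside the ctacli pod
kubectl --namespace=ctatest exec ctacli -- ping -c 1 ctafrontend
```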
Before going further, if you are completely new to kubernetes, you can have a look at this CS3 workshop presentation. The web-based presentation is available here.
Setting up the CTA kubernetes infrastructure¶
All the needed tools are self-contained in the CTA repository.
This allows the system tests and all the required tools to be versioned together with the code being tested.
Therefore, setting up the system test infrastructure only means checking out the CTA repository on a kubernetes cluster: a ctadev system, for example.
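Concretely, this boils down to a single checkout on the target machine. The repository URL below assumes the CERN gitlab instance; adapt it to wherever your CTA copy lives:

```
# clone the CTA repository on the kubernetes cluster host
git clone https://gitlab.cern.ch/cta/CTA.git
cd CTA/continuousintegration/orchestration
```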
Everything in one go aka the Big Shortcut¶
This is basically the command run by the gitlab CI pipeline at every commit, during the test stage in the client build.
Here is an example of a successfully executed archiveretrieve build. Only one command is run in this build:
```
$ cd continuousintegration/orchestration/; \
./run_systemtest.sh -n ${NAMESPACE} -p ${CI_PIPELINE_ID} -s tests/test_client.sh -D
```
CI_PIPELINE_ID is not needed to run this command interactively; you can just launch:
```
[root@ctadevjulien CTA]# cd continuousintegration/orchestration/
[root@ctadevjulien orchestration]# ./run_systemtest.sh -n mynamespace -s tests/test_client.sh -D
```
But be careful: this command instantiates a CTA test instance, runs the tests and immediately deletes the instance.
If you want to keep it after the test script has run, just add the -k flag to the command.
By default, this command uses a local VFS for the objectstore and the oracle database associated with your system; you can add the -O flag to use the Ceph account associated with your system instead.
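For example, to run the same test against your Ceph account and keep the instance alive once the test script is done:

```
[root@ctadevjulien orchestration]# ./run_systemtest.sh -n mynamespace -s tests/test_client.sh -O -D -k
```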
The following sections just explain what happens during the system test step and give a few tricks and useful kubernetes commands.
List existing test instances¶
This just means listing the current kubernetes namespaces:
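A sketch of what this looks like (names and ages will of course differ on your cluster):

```
[root@ctadevjulien orchestration]# kubectl get namespaces
NAME          STATUS    AGE
default       Active    13d
kube-system   Active    13d
```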
Here we just have the 2 kubernetes system namespaces, and therefore no test instance.
Create a kubernetes test instance¶
For example, to create the ctatest CTA test instance, simply launch ./create_instance.sh from the CTA/continuousintegration/orchestration directory with your choice of arguments.
By default it will use a file-based objectstore and an sqlite database, but you can use an Oracle database and/or a Ceph-based objectstore if you specify it on the command line.
```
[root@ctadevjulien CTA]# ./create_instance.sh
Usage: ./create_instance.sh -n <namespace> [-o <objectstore_configmap>] [-d <database_configmap>]
```
Objectstore configmap files and database configmap files are available on cta/dev/ci hosts in /opt/kubernetes/CTA/[objectstore|database]/config respectively; those directories are managed by Puppet, and the accounts configured on your machine are yours.
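To see which configmap files you can pass to the script, simply list those directories (the file names below are just the examples used later on this page; yours will differ):

```
[root@ctadevjulien orchestration]# ls /opt/kubernetes/CTA/objectstore/config
objectstore-ceph-cta-julien.yaml
[root@ctadevjulien orchestration]# ls /opt/kubernetes/CTA/database/config
database-oracle-cta_devdb1.yaml
```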
YOU ARE RESPONSIBLE FOR ENSURING THAT ONLY 1 INSTANCE USES 1 EXCLUSIVE REMOTE RESOURCE. RUNNING 2 INSTANCES WITH THE SAME REMOTE RESOURCE WILL CREATE CONFLICTS IN THE WORKFLOWS AND IT WILL BE YOUR FAULT
After all those WARNINGS, let's create a CTA test instance that uses your Oracle database and your Ceph objectstore.
```
[root@ctadevjulien CTA]# cd continuousintegration/orchestration/
[root@ctadevjulien orchestration]# git pull
Already up-to-date.
[root@ctadevjulien orchestration]# ./create_instance.sh -n ctatest \
-o /opt/kubernetes/CTA/objectstore/config/objectstore-ceph-cta-julien.yaml \
-d /opt/kubernetes/CTA/database/config/database-oracle-cta_devdb1.yaml -O -D
Creating instance for latest image built for 40369689 (highest PIPELINEID)
Creating instance using docker image with tag: 93924git40369689
DB content will be wiped
objectstore content will be wiped
Creating ctatest instance
namespace "ctatest" created
configmap "init" created
creating configmaps in instance
configmap "objectstore-config" created
configmap "database-config" created
Requesting an unused MHVTL library
persistentvolumeclaim "claimlibrary" created
.OK
configmap "library-config" created
Got library: sg35
Creating services in instance
service "ctacli" created
service "ctaeos" created
service "ctafrontend" created
service "kdc" created
Creating pods in instance
pod "init" created
Waiting for init.........................................................OK
Launching pods
pod "ctacli" created
pod "tpsrv" created
pod "ctaeos" created
pod "ctafrontend" created
pod "kdc" created
Waiting for other pods.....OK
Waiting for KDC to be configured..........................OK
Configuring KDC clients (frontend, cli...) OK
klist for ctacli:
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: admin1@TEST.CTA
Valid starting Expires Service principal
03/07/17 23:21:49 03/08/17 23:21:49 krbtgt/TEST.CTA@TEST.CTA
Configuring cta SSS for ctafrontend access from ctaeos.....................OK
Waiting for EOS to be configured........OK
Instance ctatest successfully created:
NAME          READY     STATUS      RESTARTS   AGE
ctacli        1/1       Running     0          1m
ctaeos        1/1       Running     0          1m
ctafrontend   1/1       Running     0          1m
init          0/1       Completed   0          2m
kdc           1/1       Running     0          1m
tpsrv         2/2       Running     0          1m
```
This script starts by creating the ctatest namespace. It runs with the latest CTA docker image available in the gitlab registry; if there is no image available for the current commit it will fail. It then creates the services in this namespace, so that the network and DNS names are already defined when the pods implementing those services are created.
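You can check the services with a plain kubectl query (the output shape below is indicative only):

```
[root@ctadevjulien orchestration]# kubectl --namespace=ctatest get services
NAME          CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
ctacli        ...          ...           ...       ...
ctaeos        ...          ...           ...       ...
ctafrontend   ...          ...           ...       ...
kdc           ...          ...           ...       ...
```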
For convenience, we can export NAMESPACE, set to ctatest in this case, so that we can simply execute kubectl commands in our current instance with kubectl --namespace=${NAMESPACE} ...
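For example:

```
export NAMESPACE=ctatest
kubectl --namespace=${NAMESPACE} get pods
```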
The last part is the creation of the pods in the namespace, which is performed in 2 steps:

- run the init pod, which creates the database and the objectstore, and labels the tapes
- launch the other pods, which rely on the work of the init pod, once its status is Completed, meaning that the init script exited correctly
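You can check that the init pod reached this state with (output taken from the transcript above):

```
[root@ctadevjulien orchestration]# kubectl --namespace=${NAMESPACE} get pod init
NAME      READY     STATUS      RESTARTS   AGE
init      0/1       Completed   0          2m
```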
Now the CTA instance is ready and the tests can be launched.
Gitlab CI integration¶
Configure the Runners for the cta project and add some specific tags for tape library related jobs. I chose mhvtl and kubernetes for the ctadev runners.
This allows those specific runners to be used for the CTA tape library specific tests, while other jobs can use shared runners.
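A minimal sketch of registering such a runner with those tags, assuming the standard gitlab-runner CLI (the URL and token below are placeholders):

```
gitlab-runner register \
  --url https://gitlab.cern.ch/ \
  --registration-token <PROJECT_TOKEN> \
  --executor shell \
  --tag-list mhvtl,kubernetes
```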
A small issue: by default, the gitlab-runner service runs as the gitlab-runner user, which makes it impossible to run some of the tests: they need to run as root inside the pods, and the gitlab-runner user does not have the privileges to run all the needed commands.
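One possible workaround (an assumption on my side, adapt it to your site's policy) is to reinstall the runner service so that it runs as root:

```
# stop and remove the service that runs as the gitlab-runner user,
# then reinstall it so that it runs as root and restart it
gitlab-runner stop
gitlab-runner uninstall
gitlab-runner install --user root
gitlab-runner start
```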