# GitLab Pipelines
As seen in the CI overview, developers can iterate on their changes both on their local machines and in the GitLab pipelines. This section covers the versatility and possible uses of these pipelines so you can make the most out of them.
Pipelines are composed of stages and jobs. Stages are a logical grouping of jobs. Our pipeline does not rely on stages to sequence the jobs; instead, it uses the Directed Acyclic Graph (DAG) feature from GitLab. This way, jobs can start running as soon as their prerequisites are satisfied, which improves parallelism and reduces the runtime of our pipelines.
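As an illustration of the mechanism (job names and scripts below are simplified placeholders, not the actual definitions in `.gitlab-ci.yml`), a DAG is expressed in GitLab CI by listing each job's prerequisites with the `needs` keyword:

```yaml
# Illustrative sketch only: job names and scripts are placeholders.
stages: [build, image]

build-srpm:
  stage: build
  script: ./build_srpm.sh        # placeholder build script

build-rpm:
  stage: build
  needs: ["build-srpm"]          # starts as soon as build-srpm finishes
  script: ./build_rpm.sh         # placeholder build script

build-image:
  stage: image
  needs: ["build-rpm"]           # does not wait for the whole previous stage
  script: ./build_image.sh       # placeholder build script
```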
## Commit Pipeline
The pipeline is composed of the following stages:
1. **Analysis**

    In the analysis stage, a series of static analysis checks take place:
    - Catalogue Schema Version Check: checks that the Catalogue version specified in the CTA repository matches the version of the Catalogue repo.
    - Clang format report: checks that the code formatting complies with our standards, based on the `.clang-format` file. This part includes two jobs: one that generates the report and one that can be manually triggered to fix the clang-format issues.
    - Cppcheck: a lightweight static analysis tool. A number of errors are suppressed based on the `.cppcheck-supression` file.
    **1.1. SonarCloud (external)**

    To complement this stage, we also analyze the project with SonarCloud; the analysis results can be found here. Due to technical reasons it is not directly integrated into the GitLab pipelines: the main reason is that the analysis is heavy and takes too long to fit into the developer workflow. To run it, we use a GitHub mirror of the CTA repository that performs the analysis on a daily basis, and only for the main branch.

    As it is not directly integrated into the pipeline, it is recommended, as part of the development workflow, to check the files you are modifying for issues that can be fixed without much effort in the same merge request. You should also check the results of the analysis run after your commits reach the main branch, to see whether the committed code introduced any new issues.
2. **Build**

    This stage has two jobs that take care of building the `srpms` and `rpms` respectively; there is a dependency between them, so they are executed sequentially. The resulting artifacts are uploaded and reused in the next stage.

3. **Build Image**

    A "base" container image is built with common software used by all the containers in the Kubernetes cluster spawned for testing, including the rpms generated in the previous stage. After the image is built, it is uploaded to a private registry.
4. **System Tests**

    The tests run in this stage require a working CTA deployment; a virtualized environment is the best option as it can be recreated quickly. They are mainly used to test workflows, compliance of APIs and integration of external software.
    The system test job definitions are located in the `tests.gitlab.ci` and `tests-kubernetes.gitlab.ci` files; the first one triggers jobs related to unit tests against a real DB backend and valgrind tests; the second one triggers jobs that require the full CTA environment to run. These jobs are prefixed by `k8s-` and run on our dedicated cirunner machines with a custom minikube and mhvtl setup.

    Currently, the set of tests enabled by default on the commit pipeline is:
    - `k8s-test-client`: REST API compliance test; file immutability; archival, retrieval, eviction, retrieval abort and deletion of 10,000 files; multiple retrieve test; idempotent prepare; deletion on `closew` errors; eviction before archival; EOS evict command; ObjectStore queue cleanup.
    - `k8s-test-client-gfal2`: archival, retrieval, eviction and deletion of 10,000 files. Using the gfal2 library, the core library for FTS, 5,000 files are tested against the XRootD protocol and the other 5,000 against the HTTP protocol. It also checks that activity passes through the gfal2 stack.
    - `k8s-test-repack`: tests of repacking workflows.
    - `k8s-test-cta-admin`: exercises the execution and tests of `cta-admin` commands.
    - `k8s-unit-test-oracle`: series of CTA Catalogue unit tests run against a live Oracle DB. As opposed to the postgres tests, these tests need to be run using an actual CTA deployment.
    - `unit-test-postgres`: series of CTA Catalogue unit tests run against a live Postgres DB.
    - `system-test-cta`: tests executable invocation and CTA's threading code.
    For commit pipelines, the following tests can be triggered manually through the UI (a sketch of how such manual jobs are declared can be found right after this stage list):

    - `test-cta-valgrind`: runs valgrind tests to check for memory leaks.
    - `k8s-test-liquibase-update`: tests the upgrade and downgrade of the different schema versions of the Catalogue.
    - `k8s-test-external-tape-format`: tests the support of tapes configured by other tape software.
    All these tests are run in the nightly scheduled pipelines.
    **4.1. System tests organization and design constraints**

    For the system tests we have 3 runners at our disposal, and each runner can only run one test at a time. The current run time of the system tests is around 25 minutes using the 3 available runners. Whenever a test is run, the virtual environment is created and then destroyed after the test. The creation of the environment has an overhead of ~4 minutes; the destruction is much faster.

    Ideally, the tests should be grouped logically: related workflows should be tested within the same environment, which helps to better understand the source of a failure. Nevertheless, the logs produced by the tests should be clear enough about what was being tested and the reason for the failure.

    This ideal cannot always be achieved, as it is of utmost importance to find the right balance between the number of tests and their execution length in order to minimize execution time. Having a single test containing everything leads to resource under-utilization and longer pipelines, especially when there are not many developers pushing to the repository at the same time; splitting them too much creates an excessive amount of overhead, which leads to wasted time, especially when many pipelines are being executed at the same time.
5. **Regressions** [This is currently broken]

    Checks regressions in newer versions of EOS that have not been tested in other CI steps. Only run in nightly pipelines, to detect whether an upcoming release of EOS will cause problems with the workflows currently tested in our CI.
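The manually triggered jobs mentioned above (the clang-format fix job and the optional `k8s-*` tests) rely on GitLab's manual job mechanism. A minimal sketch, with an invented job name and script, of how such a job is declared:

```yaml
# Illustrative sketch only: the real job definitions live in the *.gitlab-ci.yml files.
clang-format-fix:
  stage: analysis
  when: manual                      # only runs when triggered from the pipeline UI
  allow_failure: true               # a pending manual job does not block the pipeline
  script: ./apply_clang_format.sh   # placeholder script
```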
## Configuring the Commit Pipeline
By default, the commit pipeline (and any other pipeline) runs with a set of preset variables, whose defaults are defined in `.gitlab-ci.yml`. These can be modified either when pushing to the repository by means of git push options, e.g. `git push -o ci.variable="ORACLE_SUPPORT=OFF"`, or by triggering a manual pipeline from the GitLab Pipelines web UI and specifying the desired flags.
| Variable | Options | Default | Description |
|---|---|---|---|
| SCHED_TYPE | objectstore \| pgsched | objectstore | The scheduler backend |
| ORACLE_SUPPORT | ON \| OFF | ON | Catalogue backend; ON: uses an Oracle database; OFF: uses a PostgreSQL database |
| CTA_VERSION | -- | 5 | The CTA version. Historically, this is tied to the XRootD version, which is v5. |
| UNIT_TESTS | ON \| OFF | ON | Whether or not to run unit tests in the pipeline during the build rpm step |
| SYSTEMTESTS_ONLY | TRUE \| FALSE | FALSE | Run only the system tests |
| BUILD_GENERATOR | Unix Makefiles \| Ninja | Unix Makefiles | Which build generator to use for the binaries |
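As a sketch of how such defaults are typically declared in `.gitlab-ci.yml` (the exact layout in the CTA repository may differ), the variables from the table map to a top-level `variables` block:

```yaml
# Sketch only: default values mirror the table above; the real file may structure this differently.
variables:
  SCHED_TYPE: "objectstore"
  ORACLE_SUPPORT: "ON"
  CTA_VERSION: "5"
  UNIT_TESTS: "ON"
  SYSTEMTESTS_ONLY: "FALSE"
  BUILD_GENERATOR: "Unix Makefiles"
```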
Additionally, when pushing, it is possible to skip the execution of the CI entirely. This is useful when a developer wants to synchronize their local changes but does not want to trigger the pipeline because the changes are not ready for testing yet. To do so, run: `git push -o ci.skip`. Merge requests whose last commit skipped the pipeline cannot be merged into the `main` branch.
!!! info "`SYSTEMTESTS_ONLY` is experimental"
    This feature is experimental; there is more work in progress to expand its functionality. See: https://gitlab.cern.ch/cta/CTA/-/issues/665
The `SYSTEMTESTS_ONLY` flag can be useful when working directly on the system tests themselves, as it avoids running the entire pipeline. When this flag is set, the latest generic image generated from main is used to create the virtual environment for the tests. It can also be helpful for debugging situations such as race conditions that are hard to reproduce manually and require several executions; see https://gitlab.cern.ch/cta/CTA/-/issues/662.
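Conceptually, such a flag is wired up with GitLab `rules` that skip the build jobs when it is set; the snippet below only illustrates that mechanism and is not the actual CTA configuration:

```yaml
# Illustrative sketch only: shows how a job can be skipped when SYSTEMTESTS_ONLY is set.
build-rpm:
  rules:
    - if: '$SYSTEMTESTS_ONLY == "TRUE"'
      when: never                # skip the build; tests reuse the latest image from main
    - when: on_success           # otherwise run as usual
  script: ./build_rpm.sh         # placeholder build script
```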
## Local testing and modifications
Waiting for the GitLab pipeline to reach the system tests on every iteration is too time consuming, especially when modifying the system tests themselves. For this use case, or when there is heavy resource contention in the pipelines, developers can use the development setup to speed up the process by running the CI tests locally.
To modify the tests and make the modifications effective, the following considerations must be taken into account:
- Modifying the container configuration: all the container configuration files are injected during the image build process, so the image must be rebuilt for changes to the initialization scripts or anything else that goes into the container. There is no need to rebuild the CTA binaries, only the image.
- Modifying the test behaviour: this is simpler, as the tests are run from the VM and not integrated into the container image. For this, you just need to modify the desired scripts. Sometimes it can even be useful to trim down the test set and only run the parts of the test that are of interest, or to move into the container and run the specific commands used by the test.
While testing this way, one must check that the cluster is in a clean state after a test failure, or even after a clean execution; inconsistent states can mislead you when debugging problems. Currently the tests are not designed to leave the cluster in the same state it was in before they were launched, which is why in CI we reinitialize the cluster for every set of tests.
## Nightly Scheduled Pipelines
The main developer workflow is based on pushing and merging into `main`. This workflow only triggers the default configuration of the pipeline, which matches the production state or, at most, the version upgrades for the upcoming release. At night, pipelines for all the supported configurations are executed, i.e. different scheduler backends, compilation and testing for different OS versions when OS migrations are ongoing, etc.
These scheduled pipelines help keep the developer workflow as streamlined as possible while checking that new changes do not break compatibility with the different configurations. The drawback of this approach is that developers must check whether their changes caused some of the nightly pipelines to fail.
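In GitLab CI, jobs that should only run in these scheduled pipelines are typically gated on the pipeline source; a generic, illustrative sketch (job name and script are placeholders):

```yaml
# Illustrative sketch only: a job restricted to scheduled (nightly) pipelines.
nightly-extra-config-test:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
  script: ./run_extra_configuration_tests.sh   # placeholder script
```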
## Release Pipeline
This pipeline runs the analysis and build stages and enables a number of extra jobs that can be triggered manually (a sketch of such a job follows the list below):
- Changelog preview: produces a preview of the changelog based on all the commits between two commits (the latest commit and the latest tag by default)
- Changelog update: generates a merge request with an update to the `CHANGELOG.md` and `cta.spec.in` files
- Release internal: publishes the RPMs to a CTA-internal repo, making them available to be deployed in the stress tests and later stages
- Release CTA testing: publishes the RPMs to the testing repo
- Release CTA stable: publishes the RPMs to the stable repo
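These publishing jobs are manual, so they follow the same `when: manual` pattern shown earlier; a minimal, illustrative sketch (job name, script and the tag-based gating are assumptions, not the actual CTA configuration):

```yaml
# Illustrative sketch only: a manual publishing job enabled for tag (release) pipelines.
release-internal:
  stage: release
  rules:
    - if: '$CI_COMMIT_TAG'       # assumed gating on release tags
      when: manual               # triggered by hand from the pipeline UI
  script: ./publish_rpms.sh internal   # placeholder script
```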
More details on Tagging Releases.