# Building and Deploying
At a very basic level, the development pipeline is as follows:

```
build binaries -> run unit tests -> build image -> deploy test instance -> run tests
```
For CTA development, you are highly advised to use the one script that can do it all: `build_deploy.sh`. This script is essentially a wrapper around all the various subparts. Below we highlight a few common scenarios and how the `build_deploy.sh` script can help there. For further details on the capabilities of `build_deploy.sh`, please run `build_deploy.sh --help`.
Note that at the end of `build_deploy.sh` there will only be a barebones instance. Things like tape pools, VOs, certain authentication parts and other things have not yet been set up. If you are not relying on automated tests, you are advised to run (or at least look into) `tests/prepare_tests.sh` to get some things to play around with.
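A minimal sketch of how you might invoke it (the assumption that it runs from the repository root without arguments is ours; inspect the script first):

```sh
# Hedged example: arguments and working directory are assumptions,
# not documented behaviour -- read the script before running it.
./tests/prepare_tests.sh
```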
## Default workflow
Running `./build_deploy.sh` without any arguments gives you the default workflow. The first time this command is executed, it will spawn a dedicated build container called `cta-build` in the `build` namespace. It will then install all the necessary dependencies to build the SRPMs, build the SRPMs, install all the necessary dependencies to build the RPMs, and finally build the RPMs. On subsequent builds, only the RPMs are rebuilt (green starting point in the figure below). Unit tests are run as part of the RPM build process.
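That is, simply:

```sh
./build_deploy.sh
```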
Once the RPMs are built, a container image is built, which is then uploaded to the local Minikube registry. Finally, a CTA instance is deployed using this image. The full pipeline is illustrated below:
```mermaid
flowchart TB
    spawn_container["Spawn build container"]
    install_srpm["Install SRPM deps"]
    cmake_srpm["cmake SRPMs"]
    make_srpm["make SRPMs"]
    install_rpm["Install RPM deps"]
    cmake_rpm["cmake RPMs"]
    make_rpm["make RPMs"]
    unit_tests["Run Unit Tests"]
    subgraph build_deploy.sh
        direction LR
        subgraph build_srpm.sh
            direction TB
            install_srpm
            install_srpm --> cmake_srpm
            cmake_srpm --> make_srpm
        end
        subgraph build_rpm.sh
            direction TB
            install_rpm
            install_rpm --> cmake_rpm
            cmake_rpm --> make_rpm
            make_rpm --> unit_tests
        end
        spawn_container -- enter container --> build_srpm.sh
        build_srpm.sh --> build_rpm.sh
        build_rpm.sh -- exit container --> build_image.sh
        build_image.sh --> create_instance.sh
    end
```
## Changing the Scheduler Type

The default scheduler type in use is the objectstore. With ongoing development on a new scheduler using Postgres, it is useful to be able to switch between them. This can be done using the `--scheduler-type` flag:

```sh
./build_deploy.sh --scheduler-type pgsched
```

The supported options are `objectstore` and `pgsched`. Setting the scheduler type to `pgsched` will also make the deployment use a Postgres scheduler instead of the Virtual File System (VFS) objectstore scheduler.
## Changing the Scheduler Configuration for Deployment

There are a few separate backends that can be used, depending on the scheduler type. For `objectstore` there are Ceph and VFS, while for `pgsched` there is `postgres`. By default, the backend will be VFS when the scheduler type is `objectstore`, and `postgres` for `pgsched`. However, it is possible to override this manually using `--scheduler-config`:

```sh
./build_deploy.sh --scheduler-config presets/dev-scheduler-ceph-values.yaml
```

This path can be absolute or relative to the `orchestration` directory. This allows you to connect to e.g. a remote or pre-configured scheduler setup.
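For example, to deploy the objectstore scheduler with the Ceph backend using the preset shipped in the repository (passing `--scheduler-type objectstore` is redundant here since it is the default, but shown for clarity):

```sh
./build_deploy.sh --scheduler-type objectstore --scheduler-config presets/dev-scheduler-ceph-values.yaml
```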
## Changing the Catalogue Configuration for Deployment

By default, a Postgres catalogue database will be spawned. However, it might be desirable to connect to a centralized Oracle database (such as in CI). To do this, use the `--catalogue-config` flag:

```sh
./build_deploy.sh --catalogue-config my-oracle-catalogue-config-values.yaml
```

Once again, this path can be absolute or relative to the `orchestration` directory.
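As a rough illustration of what such a values file carries, a hypothetical sketch is shown below. The key names are our assumptions, not the actual chart schema; consult the existing values files in the `orchestration` directory for the real structure:

```yaml
# Hypothetical sketch -- all key names here are illustrative assumptions.
# Check the values files under the orchestration directory for the real schema.
catalogue:
  backend: oracle
  oracle:
    username: cta          # placeholder credentials/identifiers
    database: mydb
```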
## Skipping Unit Tests

Sometimes you are making changes to the source code that do not interact with the existing unit tests. To save yourself ~2 minutes, you can run the following:

```sh
./build_deploy.sh --skip-unit-tests
```
## Using a Different Build Generator

Both `Ninja` and `Unix Makefiles` are supported as build generators for the CTA project. The generator can be specified using the `--build-generator` flag:

```sh
./build_deploy.sh --build-generator "Unix Makefiles"
```

The default build generator is `Ninja`.
## Starting with a Fresh Build Container (please let me start from scratch)

If the container used to build CTA is acting up for whatever reason, you can delete the existing container and spawn a new one using the `--reset` flag:

```sh
./build_deploy.sh --reset
```
## Cleaning CMake Cache/Build Directory

Sometimes CMake is a mess and you want to clear the cache and the build directory. For this, use the `--clean-build-dir` flag:

```sh
./build_deploy.sh --clean-build-dir
```

This removes the build directory for the RPMs (not the SRPMs).
## Disabling CCache

If you wish to build CTA without having CCache enabled, you can use the `--disable-ccache` flag:

```sh
./build_deploy.sh --disable-ccache
```
## Disabling Oracle Support

To build CTA without requiring any Oracle dependencies, you can use the `--disable-oracle-support` flag:

```sh
./build_deploy.sh --disable-oracle-support
```
## Force Installing/Updating Packages

By default, the SRPMs are installed only once after a fresh build container has been spawned. If you want to explicitly re-run this installation step for the RPMs (without resetting the build container), you can use the `--force-install` flag:

```sh
./build_deploy.sh --force-install
```
## Skipping the RPM Building

When you make changes to e.g. the container image, the system tests or the Helm deployment, the source code doesn't change. In these cases it is not necessary to rebuild the RPMs, so you can skip this using the `--skip-build` flag:

```sh
./build_deploy.sh --skip-build
```
## Skipping the Container Image Building

Similar to the item above, when there are only changes to e.g. the system tests, it is also not necessary to rebuild the container image. In this case, use the `--skip-image-reload` flag. Note that in these cases it doesn't make sense to run the build step, as it won't get used in the deployment. As such, combine it with `--skip-build`:

```sh
./build_deploy.sh --skip-build --skip-image-reload
```
## Skipping the Deployment

Finally, sometimes you just want to build the RPMs, in which case you can skip the deployment step using `--skip-deploy`:

```sh
./build_deploy.sh --skip-deploy
```

This will only build the RPMs and the container image.
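If you also do not care about the unit tests for such a build-only run, the skip flags can be combined:

```sh
./build_deploy.sh --skip-deploy --skip-unit-tests
```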
## Upgrading the Instance instead of Spawning One from Scratch

It is possible to upgrade the existing Helm deployment, instead of tearing down the current deployment and spawning a new one. Doing so can save some time. To do this, use the `--upgrade` flag:

```sh
./build_deploy.sh --upgrade
```

Note that this feature is considered WIP until the Helm setup has reached a stable version, so it might not always work as expected.
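Assuming the flags compose as they are documented individually, a quick iteration loop on the source code could then look like:

```sh
./build_deploy.sh --upgrade --skip-unit-tests
```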
## I want more Flexibility

The script also provides further flags to handle use-cases not explicitly covered by the flags detailed above (see the sketch after this list):

- `--build-options`: additional flags to pass verbatim to the `build_image.sh` script
- `--spawn-options`: additional flags to pass verbatim to the `create_instance.sh`/`upgrade_instance.sh` scripts
- `--tapeservers-config`: custom configuration for the tape servers
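As a rough illustration (the quoted option values below are placeholders, not real flags of the underlying scripts, and the tape servers file name is hypothetical):

```sh
# Hypothetical usage -- substitute real options of the underlying scripts.
./build_deploy.sh --build-options "<extra build_image.sh flags>" \
                  --spawn-options "<extra create_instance.sh flags>" \
                  --tapeservers-config my-tapeservers-values.yaml
```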