Useful Commands for Development¶
At a very basic level, the pipeline for development is as follows: build the CTA SRPMs and RPMs, build a container image from those RPMs, and deploy a CTA instance using that image (the full pipeline is illustrated in the diagram below).
For CTA development, you are highly advised to use the one script that can do all of this: `build_deploy.sh`. This script is essentially a wrapper around all the various subparts. Below we highlight a few common scenarios and how the `build_deploy.sh` script can help there. For further details on the capabilities of `build_deploy.sh`, please run `build_deploy.sh --help`.
Note that at the end of `build_deploy.sh` there will only be a barebones instance. Things like tape pools, VOs, certain authentication parts and other things have not yet been set up. If you are not relying on automated tests, you are advised to run (or at least look into) `tests/prepare_tests.sh` to get some things to play around with.
Default workflow¶
Running `./build_deploy.sh` without any arguments will give you the default workflow. The first time this command is executed, it will spawn a dedicated build container called `cta-build` in the `build` namespace. It will then install all the necessary dependencies to build the SRPMs, build the SRPMs, install all the necessary dependencies to build the RPMs, and finally build the RPMs. On subsequent builds, only the RPMs are rebuilt (green starting point in the figure below). Unit tests are run as part of the RPM build process.
Once the RPMs are built, a container image is built, which is then uploaded to the local Minikube registry. Finally, a CTA instance is deployed using this image. The full pipeline is illustrated below:
flowchart TB
classDef primary fill:#8f8
classDef secondary fill:#f8f
spawn_container["Spawn \nbuild container"]
install_srpm["Install SRPM deps"]
cmake_srpm["cmake SRPMs"]
make_srpm["make SRPMs"]
install_rpm["Install RPM deps"]
cmake_rpm["cmake RPMs"]
make_rpm["make RPMs"]
unit_tests["Run Unit Tests"]
subgraph build_deploy.sh
direction LR
subgraph build_srpm.sh
direction TB
install_srpm
install_srpm --> cmake_srpm
cmake_srpm --> make_srpm
end
subgraph build_rpm.sh
direction TB
install_rpm
install_rpm --> cmake_rpm:::primary
cmake_rpm --> make_rpm
make_rpm --> unit_tests
end
spawn_container:::secondary -- enter \ncontainer --> build_srpm.sh
build_srpm.sh --> build_rpm.sh
build_rpm.sh -- exit \n container --> build_image.sh
build_image.sh --> create_instance.sh
end
style build_srpm.sh color:#900
style build_rpm.sh color:#900
Changing the Scheduler Type¶
The default scheduler type in use is the objectstore. With ongoing development on a new scheduler using Postgres, it is useful to switch between them. This can be done using the `--scheduler-type` flag:
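For example, to deploy with the Postgres-based scheduler, an invocation might look like this:

```bash
# Build and deploy CTA using the Postgres scheduler
./build_deploy.sh --scheduler-type pgsched
```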
The supported options are `objectstore` and `pgsched`. Setting the scheduler type to `pgsched` will also make the deployment use a Postgres scheduler instead of the Virtual File System (VFS) objectstore scheduler.
Changing the Scheduler Configuration for Deployment¶
There are a few separate backends that can be used depending on the scheduler type. For `objectstore`, there is Ceph and VFS, while for `pgsched` there is `postgres`. By default, the backend will be VFS when the scheduler type is `objectstore`, and `postgres` for `pgsched`. However, it is possible to override this manually using `--scheduler-config`:
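For example (the file name below is a hypothetical placeholder for your own scheduler configuration file):

```bash
# Deploy using a custom scheduler backend configuration
./build_deploy.sh --scheduler-config path/to/my-scheduler-config.yaml
```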
This path can be absolute or relative to the `orchestration` directory. This allows you to connect to e.g. a remote or pre-configured scheduler setup.
Changing the Catalogue Configuration for Deployment¶
By default, a Postgres catalogue database will be spawned. However, it might be desirable to connect to a centralized Oracle database (such as in CI). To do this, use the `--catalogue-config` flag:
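For example (again with a hypothetical placeholder path):

```bash
# Deploy against a pre-configured (e.g. Oracle) catalogue database
./build_deploy.sh --catalogue-config path/to/my-oracle-catalogue-config.yaml
```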
Once again, this path can be absolute or relative to the `orchestration` directory.
Skipping Unit Tests¶
Sometimes you are making changes to the source code that do not interact with the existing unit tests. To save yourself ~2 minutes, you can run the following:
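A minimal sketch, assuming the flag is named `--skip-unit-tests` (the exact flag name is an assumption here; check `build_deploy.sh --help`):

```bash
# Build and deploy without running the unit tests during the RPM build
./build_deploy.sh --skip-unit-tests
```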
Using a Different Build Generator¶
Both `Ninja` and `Unix Makefiles` are supported as build generators for the CTA project. This can be specified using the `--build-generator` flag:
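For example, to build with Makefiles instead of Ninja (note the quotes around the generator name, since it contains a space):

```bash
# Build the RPMs using Unix Makefiles as the CMake generator
./build_deploy.sh --build-generator "Unix Makefiles"
```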
The default build generator is `Ninja`.
Starting with a Fresh Build Container¶
If the container used to build CTA is acting up for whatever reason, you can delete the existing container and spawn a new one using the `--reset` flag:
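For example:

```bash
# Delete the cta-build container and spawn a fresh one before building
./build_deploy.sh --reset
```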
Note that this will not remove the build directories. It does, however, clear `CCache`.
Cleaning CMake Cache/Build Directory¶
Sometimes CMake is a mess and you want to clear the cache and the build directory. For this, use the `--clean-build-dir` flag:
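For example:

```bash
# Remove the RPM build directory before building
./build_deploy.sh --clean-build-dir
```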
This removes the build directory for the RPMs (not the SRPMs). There is also `--clean-build-dirs` (note the `s`), which cleans the build directory for both the SRPMs and the RPMs. However, as the SRPMs are not built again unless the build container is fresh, this should only be used in combination with `--reset`.
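For example, to clean both build directories together with a fresh build container:

```bash
# Start from a fresh build container and clean both the SRPM and RPM build directories
./build_deploy.sh --reset --clean-build-dirs
```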
Disabling CCache¶
If you wish to build CTA without having CCache enabled, you can use the `--disable-ccache` flag:
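For example:

```bash
# Build the RPMs without CCache
./build_deploy.sh --disable-ccache
```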
Disabling Oracle Support¶
To build CTA without requiring any Oracle dependencies, you can use the `--disable-oracle-support` flag:
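For example:

```bash
# Build the RPMs without any Oracle dependencies
./build_deploy.sh --disable-oracle-support
```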
Force Installing/Updating Packages¶
By default, the installation of the packages in the build container is done only during the first run. If you want to explicitly re-run this installation step for the RPMs (without resetting the build container), then you can use the `--force-install` flag:
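For example:

```bash
# Re-run the RPM dependency installation step in the existing build container
./build_deploy.sh --force-install
```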
Skipping the RPM Building¶
When you make changes to e.g. the container image, the system tests or the Helm deployment, the source code doesn't change. In these cases it is not necessary to rebuild the RPMs, so you can skip this using the `--skip-build` flag:
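For example:

```bash
# Reuse the previously built RPMs; only rebuild the image and redeploy
./build_deploy.sh --skip-build
```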
Skipping the Container Image Building¶
Similar to the item above, when there are only changes to e.g. the system tests, it is also not necessary to rebuild the container image. In this case, use the `--skip-image-reload` flag. Note that in these cases it doesn't make sense to run the build step, as it won't get used in the deployment. As such, combine it with `--skip-build`:
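For example:

```bash
# Reuse both the existing RPMs and the existing container image; only redeploy
./build_deploy.sh --skip-build --skip-image-reload
```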
Skipping the Deployment¶
Finally, sometimes you just want to build the RPMs, in which case you can skip the deployment step using `--skip-deploy`:
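For example:

```bash
# Build the RPMs and container image, but do not deploy an instance
./build_deploy.sh --skip-deploy
```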
This will only build the RPMs and the container image.
Upgrading the Instance instead of Spawning One from Scratch¶
It is possible to upgrade the existing Helm deployment, instead of tearing down the current deployment and spawning a new one. Doing so can save some time. To do this, use the `--upgrade` flag:
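For example:

```bash
# Upgrade the running Helm deployment in place instead of recreating it
./build_deploy.sh --upgrade
```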
Note that this feature is considered WIP until the Helm setup has reached a stable version, so it might not always work as expected.
Please let me Start from Scratch¶
Sometimes things are just so messed up with your build that you don't know how to fix it anymore. At that point, run:
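The exact command is an assumption based on the flags described above; one plausible combination is:

```bash
# Fresh build container (clears CCache) and clean both the SRPM and RPM build directories
./build_deploy.sh --reset --clean-build-dirs
```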
This will remove any possible caching from the build process. If things are still stuck, consider rebooting the VM.
I want more Flexibility¶
The script also provides further flags to handle use-cases not explicitly covered by the flags detailed above:
- `--build-options`: additional flags to pass verbatim to the `build_image.sh` script
- `--spawn-options`: additional flags to pass verbatim to the `create_instance.sh`/`upgrade_instance.sh` scripts
- `--tapeservers-config`: custom configuration for the tape servers.
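A hypothetical sketch of how these could be combined (the quoted option string and the config path are placeholders, not real flags or files):

```bash
# Pass extra options through to the instance scripts and use a custom tape server configuration
./build_deploy.sh --spawn-options "<extra create_instance.sh flags>" \
                  --tapeservers-config path/to/my-tapeservers-config.yaml
```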