Glossary
Archive ID¶
Identifies the version of the file that was archived. In practice, files should be immutable so there will be one Archive ID per file.
Archive Metadata¶
Optional JSON-formatted data that can be passed along with a request. Implementation is under development.
ATRESYS¶
Automated Tape REpacking SYStem, see tools documentation.
CASTOR¶
The CERN Advanced STORage manager, the predecessor to CTA.
Catalogue¶
The CTA Catalogue is a relational database which contains the tape file namespace, the permanent system configuration data and its state changes.
Ceph¶
Open-source distributed storage system whose object store can be used as the SchedulerDB backend in CTA.
CTA Frontend¶
The component of the CTA system which receives and acknowledges user requests; it mainly interacts with the Disk Instance.
CTA Instance [Name]¶
A deployment of CTA.
ctaproductionfrontend¶
Default frontend for user queues in the production CTA instance.
ctaproductionrepackfrontend¶
Alias that points to the relevant frontend for repack queues.
ctacephproductionrepackfrontend¶
CTA frontend with the cephProductionRepack SchedulerDB backend.
ctapostgresproductionrepackfrontend¶
CTA frontend with the postgresProductionRepack SchedulerDB backend.
cta-admin¶
Operator tool that can perform various administrative commands on a CTA system. It interacts with CTA via the CTA Frontend.
cta-taped¶
The tape daemon process running on the tape servers handling the drive interactions.
Disk Instance¶
A deployment of a disk buffer for the CTA system, which could be a dCache or an EOS storage system. In the documentation we also use it as another term for “EOSCTA instance”, the ‘small EOS’ instance used to archive/retrieve files from tape. A Disk Instance can have one or multiple VOs assigned to it. At CERN we have created a dedicated Disk Instance for each of the large LHC experiments (eosctaatlas, ...), in addition to instances shared by the small/medium-sized experiments (eosctapublic, eosctapublicdisk), based on their archival needs.
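To illustrate the Disk Instance to VO assignment described above, here is a minimal Python sketch. The instance names come from the text; which VOs sit on the shared instances, and the helper itself, are hypothetical.

```python
# Hypothetical sketch of the Disk Instance -> VO assignment described
# above; the VO lists for the shared instances are illustrative only.
DISK_INSTANCE_VOS = {
    "eosctaatlas": ["ATLAS"],         # dedicated to one large LHC VO
    "eosctapublic": ["NA62", "AMS"],  # shared by small/medium VOs
}

def vos_for_instance(instance_name: str) -> list[str]:
    """Return the VOs assigned to a given Disk Instance."""
    return DISK_INSTANCE_VOS.get(instance_name, [])
```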
EOS¶
Disk buffer system developed and used at CERN.
EOS Instance¶
A deployment of EOS.
File Storage Server¶
Part of EOS that handles the actual disk storage.
gRPC¶
Open-source remote procedure call (RPC) framework. See https://grpc.io/
Kerberos¶
Network authentication protocol.
Liquibase¶
Liquibase is an open-source database-independent library for tracking, managing and applying database schema changes.
Mount Rule¶
A Mount Rule is a combination of a Disk Instance name, a user/group requester name, a mount policy name and an activity regex expression which together point to a specific Mount Policy.
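As a rough illustration, the sketch below resolves a request to a Mount Policy name; the field names and the first-match semantics are assumptions for illustration, not CTA's actual implementation.

```python
import re
from dataclasses import dataclass

@dataclass
class MountRule:
    disk_instance: str    # Disk Instance name the rule applies to
    requester: str        # user or group name of the requester
    activity_regex: str   # matched against the request's activity
    mount_policy: str     # name of the Mount Policy the rule points to

def resolve_mount_policy(rules: list[MountRule], disk_instance: str,
                         requester: str, activity: str) -> str | None:
    """Return the Mount Policy name of the first matching rule."""
    for rule in rules:
        if (rule.disk_instance == disk_instance
                and rule.requester == requester
                and re.search(rule.activity_regex, activity)):
            return rule.mount_policy
    return None  # no rule matched
```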
mhVTL¶
Virtual tape library used in CI. See https://github.com/markh794/mhvtl
Mount Policy¶
Named set of parameters (e.g. priority, minimum request age) assigned to each transfer type. It is one of the main input parameters for the Scheduler which determines the eligibility of a particular transfer queue to trigger a Tape Mount.
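To make the eligibility idea concrete, here is a hedged sketch assuming, purely for illustration, that a queue becomes eligible once its oldest request has waited at least the policy's minimum request age; the real Scheduler takes more inputs into account.

```python
import time
from dataclasses import dataclass

@dataclass
class MountPolicy:
    name: str
    priority: int          # relative priority among competing queues
    min_request_age: int   # seconds the oldest request must have waited

def queue_is_eligible(oldest_request_timestamp: float,
                      policy: MountPolicy,
                      now: float | None = None) -> bool:
    """Illustrative rule: a queue may trigger a Tape Mount once its
    oldest request is older than the policy's minimum request age."""
    now = time.time() if now is None else now
    return now - oldest_request_timestamp >= policy.min_request_age
```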
ObjectStore¶
ObjectStore is an object database, a specialized type of NoSQL database designed to handle data created by applications that use object-oriented programming techniques, avoiding the object–relational mapping overhead required when using object-oriented data with a relational database.
OStoreDB¶
OStoreDB is the CTA API to the ObjectStore serving as the SchedulerDB (i.e. used as a metadata backend for scheduling operations).
RelationalDB¶
RelationalDB is the CTA API to a relational database (PostgreSQL) serving as the SchedulerDB (i.e. used as a metadata backend for scheduling operations).
Puppet¶
Deployment automation engine, see https://www.puppet.com/
Protobuf¶
Mechanism for serializing structured data. See https://github.com/protocolbuffers/protobuf
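To show what serializing structured data looks like in practice, here is a small Python round trip using protobuf's generic Struct well-known type; real applications define their own message types in .proto files, so this is illustrative only.

```python
from google.protobuf.struct_pb2 import Struct  # pip install protobuf

# Build a message (Struct is a dict-like well-known type; real code
# would use classes generated from a .proto schema instead).
msg = Struct()
msg.update({"vid": "L12345", "copies": 2})

wire = msg.SerializeToString()   # compact binary wire format

decoded = Struct()
decoded.ParseFromString(wire)    # lossless round trip
assert decoded["vid"] == "L12345"
```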
readtp¶
Operator tool to sequentially read a tape.
Repack¶
The act of copying/moving data from one magnetic tape to another, for purposes such as moving data away from potentially defective media, replicating data across multiple cartridges, or migrating to a more recent media generation.
Remote Media Changer Daemon¶
CTA process that handles interaction with the tape library robot arm.
rsyslog¶
System for log processing. See https://www.rsyslog.com/
Rundeck¶
System to automate running jobs. Used for operations.
Scheduler¶
A component of the CTA system which is responsible for deciding when a tape shall be mounted into a tape drive.
SchedulerDB¶
Metadata backend for the Scheduler. This can be an Object Store, a File system or a Relational Database. Its purpose is to hold all transient transfer metadata and their changes.
SSS¶
Simple Shared Secret, an authentication mechanism used by XRootD.
Tape Cartridge¶
A physical unit containing magnetic tape.
Tape Daemon¶
Process running on a Tape Server which takes care of scheduling the mount and spawning several threads that handle writing to or reading from a tape, maintenance and reporting of CTA workflows, etc.
Tape Mount¶
Assignment of a Tape Drive to a Tape for a set of existing transfers to be executed.
Tape Drive¶
The physical device used for reading from or writing to tape. Several tape drives can be operated by one tape server.
Logical Library¶
A logical library is used to partition the resources (tape drives and cartridges) of a physical tape library. Normally it is a combination of a physical library name and a tape drive type. Disabling a logical library prevents new mounts on the tape drives defined in that particular logical library (running transfers will continue). Example - Name: IBMLIB1-LTO9, Disabled: false, Physical library: IBMLIB1. This parameter links the Tape Drives to the set of tapes the drives can operate on, i.e. not all drives can read and write all tapes.
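The sketch below illustrates that linking: only drives in the tape's logical library are candidates for a mount, and a disabled logical library yields no candidates for new mounts. The field and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TapeDrive:
    name: str
    logical_library: str   # e.g. "IBMLIB1-LTO9"

def candidate_drives(tape_logical_library: str,
                     drives: list[TapeDrive],
                     disabled_libraries: set[str]) -> list[TapeDrive]:
    """Return the drives that could mount a tape from the given
    logical library (illustrative, not CTA's actual selection logic)."""
    if tape_logical_library in disabled_libraries:
        return []  # disabling a logical library blocks new mounts
    return [d for d in drives
            if d.logical_library == tape_logical_library]
```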
Tape Library¶
Collection of magnetic tapes and drives.
Tape Pool¶
Logical collection of tapes used to manage (a) file ownership and (b) where files are physically stored.
Tape Server¶
A server which runs the necessary processes to perform efficient operations on a tape library, i.e. it decides to mount tapes into drives and participates in read/write processes as well as in all necessary maintenance operations.
Tape Slot¶
Slot of a cartridge in a tape library.
Transfer Request¶
A request to transfer a file to or from tape. One transfer request may contain more than one transfer job; this is especially true for archiving, where multiple copies may have been requested to be stored on tape.
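A minimal sketch of that expansion, with hypothetical names: an archive request for a file with two requested tape copies yields two transfer jobs, one per destination tape pool.

```python
from dataclasses import dataclass

@dataclass
class TransferJob:
    archive_file_id: int
    copy_number: int
    tape_pool: str

def jobs_for_archive_request(archive_file_id: int,
                             destination_pools: list[str]) -> list[TransferJob]:
    """One archive request expands into one transfer job per requested
    tape copy (illustrative only, not CTA's actual API)."""
    return [TransferJob(archive_file_id, copy, pool)
            for copy, pool in enumerate(destination_pools, start=1)]

# A dual-copy archive request produces two jobs:
jobs = jobs_for_archive_request(42, ["pool_primary", "pool_secondary"])
assert len(jobs) == 2
```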
Virtual Organization¶
A grouping of users working with the same data/experiment. As an example, for CTA at CERN each LHC experiment (ATLAS, …), as well as each small and medium sized experiment (AMS, NA62, …), is assigned a dedicated VO with the same name. VOs are used to enforce quotas, such as an experiment’s number of dedicated drives, as well as to gather usage statistics and summaries for each experiment.
XRootD¶
High-performance, scalable framework for data access and transfer. See https://xrootd.org/
XRootD SSI¶
The XRootD Scalable Service Interface, a plugin for XRootD that allows for SSS authentication. Currently being phased out in favor of gRPC.