WIP

This page is still a work in progress.

Glossary

Below is a collection of terms and components of the CTA system, each accompanied by a brief explanation.

Archive ID

The unique numeric identifier which CTA assigns to each archived file.

Archive Metadata

description

ATRESYS

description

CERN Tape Archive

The CERN Tape Archive (CTA) is the open-source tape data management system developed at CERN as the successor of CASTOR for physics archival.

CASTOR

The CERN Advanced STORage manager, CERN’s previous hierarchical storage system, whose tape archival role has been taken over by CTA.

Catalogue

The CTA Catalogue is a relational database which contains the tape file namespace, the permanent system configuration data and its state changes.
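
To illustrate what “tape file namespace” means in practice, here is a minimal sketch of a file-to-tape mapping. The schema, table and column names are invented for this example and are much simpler than the real Catalogue schema.

```python
import sqlite3

# Hypothetical, heavily simplified sketch of a tape file namespace.
# All table and column names are invented; the real CTA Catalogue
# also stores the permanent configuration data and its state changes.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE archive_file (
    archive_id    INTEGER PRIMARY KEY,  -- unique id of the archived file
    disk_instance TEXT,                 -- e.g. an EOSCTA instance name
    size_bytes    INTEGER,
    storage_class TEXT                  -- determines the number of tape copies
);
CREATE TABLE tape_file (
    archive_id INTEGER REFERENCES archive_file(archive_id),
    vid        TEXT,                    -- volume identifier of the tape
    copy_nb    INTEGER,                 -- 1st copy, 2nd copy, ...
    PRIMARY KEY (archive_id, copy_nb)
);
""")
db.execute("INSERT INTO archive_file VALUES (1, 'eosctaatlas', 4096, 'example_class')")
db.execute("INSERT INTO tape_file VALUES (1, 'V00001', 1)")

# Namespace lookup: on which tape(s) does archive file 1 live?
for vid, copy_nb in db.execute("SELECT vid, copy_nb FROM tape_file WHERE archive_id = 1"):
    print(f"copy {copy_nb} is on tape {vid}")
```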

CEPH

Ceph is an open-source distributed storage platform. In CTA it can provide the object store backend of the SchedulerDB.

CTA

Abbreviation of CERN Tape Archive; see above.

CTA Frontend

The component of the CTA system which receives and acknowledges user requests. It interacts mainly with the Disk Instance.
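
The key behaviour is that the Frontend acknowledges a request synchronously while the actual tape work happens later. A minimal sketch of that pattern follows, with invented names; this is not the real Frontend API.

```python
import queue
import uuid

# Invented sketch of the acknowledge-then-queue pattern: the caller gets
# an immediate acknowledgement, and the actual tape work happens later,
# driven by the Scheduler.
scheduler_db = queue.Queue()  # stand-in for the SchedulerDB

def handle_archive_request(disk_instance: str, path: str) -> str:
    """Accept an archive request from a Disk Instance and acknowledge it."""
    request_id = str(uuid.uuid4())
    scheduler_db.put({"id": request_id, "op": "archive",
                      "instance": disk_instance, "path": path})
    return request_id  # the acknowledgement: the request is safely queued

ack = handle_archive_request("eosctaatlas", "/eos/example/file1")
print("archive request queued with id", ack)
```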

CTA Instance [Name]

description

ctaproductionfrontend

Default frontend in the production instance for user queues.

ctaproductionrepackfrontend

Alias that points to the relevant frontend for repack queues:

  • ctacephproductionrepackfrontend: frontend with the cephProductionRepack SchedulerDB backend
  • ctapostgresproductionrepackfrontend: frontend with the postgresProductionRepack SchedulerDB backend

cta-admin

The administrative command-line interface of CTA; it submits operator commands to the CTA Frontend.

cta-taped

The CTA tape server daemon, which controls tape drives and carries out the actual data transfers between the disk buffer and tape.

Data Acquisition System

(DAQ) The system of an experiment which collects the data produced by its detector; its output is what is ultimately archived to tape.

Disk Instance

A deployment of a disk buffer for the CTA system, which could be a dCache or an EOS storage system. In the documentation we also use the term interchangeably with “EOSCTA instance”, the ‘small EOS’ instance used to archive/retrieve files from tape. A Disk Instance can have one or multiple VOs assigned to it. At CERN we have created a dedicated Disk Instance for each of the large LHC experiments (eosctaatlas, ...), in addition to instances shared by the small/medium-sized experiments (eosctapublic, eosctapublicdisk), based on their archival needs.
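
The instance-to-VO assignment described above can be pictured as a simple mapping. The instance names below are the ones mentioned in the text; which VOs share an instance is an invented example.

```python
# Illustrative mapping of Disk Instances to the VOs assigned to them.
disk_instance_vos = {
    "eosctaatlas": ["ATLAS"],         # dedicated to one large LHC experiment
    "eosctapublic": ["AMS", "NA62"],  # shared by smaller experiments
    "eosctapublicdisk": ["NA62"],     # another shared instance
}

def vos_for_instance(instance: str) -> list[str]:
    return disk_instance_vos.get(instance, [])

print(vos_for_instance("eosctapublic"))  # -> ['AMS', 'NA62']
```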

EOS

EOS is the open-source distributed disk storage system developed at CERN. In CTA deployments it provides the disk buffer through which files are archived to and retrieved from tape.

EOSCTA

The combination of a dedicated ‘small EOS’ Disk Instance with CTA, used to archive/retrieve files from tape; see Disk Instance.

EOS Instance

A single named deployment of EOS, e.g. eosctaatlas.

EOS Space

A named group of filesystems within an EOS instance, used to partition the instance’s storage.

File Archival

The process of copying a file from the disk buffer to tape.

File Lifecycle

description

File Preparing

Issuing a “prepare” request (XRootD terminology) asking for a file to be staged from tape back to the disk buffer.

File Retrieval

The process of copying a file from tape back into the disk buffer.

File Storage Server

(FST) The EOS daemon running on a disk server which stores and serves the actual file data.

File Staging

Bringing a file from tape back into the disk buffer so that it becomes available for reading.

File Transfer System

The service which schedules and executes bulk file transfers between storage endpoints, e.g. the FTS service used in WLCG.

Fluentd

An open-source data collector used to unify the collection and forwarding of logs.

gRPC

An open-source, high-performance remote procedure call (RPC) framework built on HTTP/2 and Protobuf.

Kerberos

A ticket-based network authentication protocol used to authenticate users and services.

Kerberos keytab

A file holding one or more Kerberos principals together with their encryption keys, allowing a service or script to authenticate without an interactive password prompt.

Liquibase

Liquibase is an open-source database-independent library for tracking, managing and applying database schema changes.

Media Type

The type of tape media of a cartridge (e.g. an LTO or IBM 3592 generation), which determines properties such as capacity and drive compatibility.

mhVTL

An open-source Linux virtual tape library, which emulates tape libraries and drives in software, e.g. for development and testing without physical hardware.

Mount Policy

A named set of parameters (e.g. priority, minimum request age) assigned to each transfer type. It is one of the main inputs to the Scheduler, which uses it to determine whether a particular transfer queue is eligible to trigger a Tape Mount.
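
To make the two example parameters concrete, here is a hedged sketch of how a scheduler could use them. It illustrates “priority” and “minimum request age” only; it is not the actual CTA scheduling algorithm, and all names are invented.

```python
import time
from dataclasses import dataclass

@dataclass
class MountPolicy:
    # Only the two parameters named above; the real set is larger.
    priority: int         # higher value -> queue is served sooner
    min_request_age: int  # seconds a request must wait before a mount may trigger

@dataclass
class TransferQueue:
    tape_or_pool: str
    oldest_request_time: float  # epoch seconds of the oldest queued request
    policy: MountPolicy

def eligible(q: TransferQueue, now: float) -> bool:
    """A queue may trigger a Tape Mount once its oldest request is old enough."""
    return now - q.oldest_request_time >= q.policy.min_request_age

now = time.time()
queues = [
    TransferQueue("V00001", now - 600, MountPolicy(priority=10, min_request_age=300)),
    TransferQueue("V00002", now - 60,  MountPolicy(priority=50, min_request_age=300)),
]
# Among the eligible queues, serve the highest-priority one first.
candidates = [q for q in queues if eligible(q, now)]
best = max(candidates, key=lambda q: q.policy.priority, default=None)
print(best.tape_or_pool if best else "nothing to mount")  # -> V00001
```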

ObjectStore

The object-store implementation of the SchedulerDB backend (e.g. on Ceph RADOS or a local file system), which holds the queued transfer requests.

Puppet

An open-source configuration management tool, used at CERN to configure and deploy the CTA and EOS nodes.

Protobuf

Protocol Buffers, a language-neutral mechanism for serializing structured data; CTA uses it to encode the messages exchanged with the Frontend.

readtp

A command-line tool for reading files directly from a tape, e.g. for testing or data recovery.

Repack

The operation of copying all valid files from one tape onto other tapes, e.g. to reclaim the space of deleted files, to recover from media errors or to migrate data to a newer media generation.

Remote Media Changer Daemon

(cta-rmcd) The daemon running on the tape server which drives the robotics of a tape library to mount and dismount tape cartridges.

RPM

The package format and manager of Red Hat-based Linux distributions; CTA components are built and distributed as RPM packages.

rsyslog

An open-source log processing daemon implementing the syslog protocol, used to collect and forward system and application logs.

Rundeck

An open-source runbook automation and job scheduling service, used to run operational procedures.

Scheduler

The component of the CTA system which is responsible for deciding when a tape shall be mounted into a tape drive.

SchedulerDB

The metadata backend of the Scheduler, which can be an ObjectStore, a file system or a relational database. Its purpose is to hold all transient transfer metadata and its state changes.
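
The defining property of the SchedulerDB is that it holds only transient state: a request enters when it is queued and disappears once the transfer is done. Below is a backend-agnostic sketch of that contract; the interface and all names are invented for illustration.

```python
from abc import ABC, abstractmethod

# Invented sketch of the SchedulerDB contract: only transient transfer
# metadata and its state changes live here; permanent data belongs in
# the Catalogue. Backends (object store, file system, relational DB)
# would implement this same interface.
class SchedulerDB(ABC):
    @abstractmethod
    def enqueue(self, request_id: str, metadata: dict) -> None: ...
    @abstractmethod
    def set_state(self, request_id: str, state: str) -> None: ...
    @abstractmethod
    def remove(self, request_id: str) -> None: ...  # transfer finished

class InMemorySchedulerDB(SchedulerDB):
    def __init__(self):
        self._requests: dict[str, dict] = {}
    def enqueue(self, request_id, metadata):
        self._requests[request_id] = {**metadata, "state": "queued"}
    def set_state(self, request_id, state):
        self._requests[request_id]["state"] = state
    def remove(self, request_id):
        del self._requests[request_id]  # nothing permanent remains here

db = InMemorySchedulerDB()
db.enqueue("req-1", {"op": "retrieve", "vid": "V00001"})
db.set_state("req-1", "mounted")
db.remove("req-1")
```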

SSS Key

A Simple Shared Secret key, used by XRootD/EOS for service-to-service authentication, e.g. between the Disk Instance and CTA.

Storage Class

A property assigned to each archived file which defines how many copies the file should have on tape; archive routes map a storage class to the tape pool(s) where those copies are written.

Tape Buffer

description

Tape Cartridge

The physical tape medium: a housing containing the reel of magnetic tape, identified by its volume identifier (VID).

Tape Mount

The assignment of a Tape to a Tape Drive in order to execute a set of queued transfers.

Tape Drive

The physical device used for reading from or writing to tape. Several tape drives can be operated by one tape server.

Tape Drive Logical Library

A logical grouping that associates tapes with the tape drives allowed to mount them; the Scheduler only assigns a tape to a drive within the same logical library.

Tape Library

The physical robotic installation housing tape cartridges in slots, together with tape drives and the robotics that move cartridges between slots and drives.

Tape Lifecycle

description

Tape Pool

A logical collection of tapes, used to manage (a) file ownership and (b) where a file should be physically stored.
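
Part (b) is expressed as routing: a file’s storage class determines the tape pool(s) its copies are written to. The sketch below illustrates the idea; the route structure and all names are invented.

```python
# Invented sketch: routing an archived file's copies to tape pools based
# on its storage class. In CTA this role is played by archive routes;
# the structure and names here are illustrative only.
archive_routes = {
    # (storage_class, copy_number) -> tape pool
    ("example_class_2copies", 1): "pool_primary",
    ("example_class_2copies", 2): "pool_secondary",
}

def pools_for(storage_class: str) -> list[str]:
    return [pool for (sc, _copy), pool in sorted(archive_routes.items())
            if sc == storage_class]

print(pools_for("example_class_2copies"))  # -> ['pool_primary', 'pool_secondary']
```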

Tape Server

A server which runs the processes necessary to operate a tape library efficiently, i.e. it decides to mount tapes into drives and participates in read/write processes as well as in all necessary maintenance operations.

Tape Slot

A storage position inside a tape library where a cartridge sits while it is not mounted in a drive.

Tape Supply

The mechanism that refills a tape pool with fresh tapes when its number of free tapes runs low.

Tape Supply Pool

A tape pool containing unassigned/empty tapes, used as the source from which other tape pools are refilled.

Tier-0 Site

In the Worldwide LHC Computing Grid (WLCG), the Tier-0 is CERN: the site that records the primary copy of the raw experiment data and archives it to tape.

Tier-1 Site

One of the large WLCG data centres which hold a further custodial copy of experiment data on tape and provide large-scale processing capacity.

Tier-2 Site

A WLCG site, typically at a university or institute, providing disk and CPU resources for analysis and simulation, normally without a tape archival responsibility.

Virtual Organization

A grouping of users working with the same data/experiment. As an example, for CTA at CERN each LHC experiment (ATLAS, …), as well as each small and medium-sized experiment (AMS, NA62, …), is assigned to a dedicated VO with the same name. VOs are used to enforce quotas, such as an experiment’s number of dedicated drives, as well as to gather usage statistics and summaries for each experiment.
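
One of the quota checks mentioned above, capping the number of drives a VO may use concurrently, can be sketched as follows. This is illustrative only, not CTA’s actual accounting; the quota numbers are invented.

```python
# Illustrative sketch of a per-VO drive quota: a new Tape Mount for a
# VO is only allowed while it uses fewer drives than it is entitled to.
dedicated_drives = {"ATLAS": 8, "AMS": 2, "NA62": 2}  # invented quotas
drives_in_use = {"ATLAS": 8, "AMS": 1}

def may_mount(vo: str) -> bool:
    return drives_in_use.get(vo, 0) < dedicated_drives.get(vo, 0)

for vo in ("ATLAS", "AMS", "NA62"):
    print(vo, "may mount another drive:", may_mount(vo))
```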

XRootD

An open-source framework and protocol for fast, scalable data access; EOS is built on it and the CTA Frontend is implemented as an XRootD plugin.

XRootD SSI

The XRootD Scalable Service Interface, a request/response extension of XRootD which the CTA Frontend uses (with Protobuf-encoded messages) to communicate with clients and the Disk Instance.