Introduction

Modern high-energy and heavy-ion experiments produce enormous amounts of data (several terabytes per week). To process and analyze such large data volumes, high-performance computing systems are needed. This is also true for the analysis of the experimental data produced by the HADES experiment at GSI.

The Nuclear Physics Laboratory of the University of Cyprus has recently taken a significant step towards high-performance computing by building a new modular Linux cluster based on the new generation of dual Intel Xeon processors. The cluster comprises 1 master node and 25 computing (slave) nodes, each equipped with two 3.0 GHz Intel Xeon processors, for a total RAM of 59 Gbytes (see Figure 1.1). Four Easy RAID disk array systems are also attached to the master node. The nodes are interconnected by a Gigabit Ethernet network for inter-processor communication. The current configuration provides a peak performance of 150 Gflops and a total disk capacity of 12.5 Tbytes.
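As a back-of-the-envelope check, the quoted peak performance follows from the processor count and clock rate, assuming one floating-point operation per clock cycle per processor (the rate consistent with the quoted 150 Gflops figure):

```python
# Sanity check of the quoted cluster figures.
nodes = 25
cpus_per_node = 2
clock_ghz = 3.0
flops_per_cycle = 1  # assumption chosen to match the quoted 150 Gflops peak

total_cpus = nodes * cpus_per_node          # 25 nodes x 2 CPUs
peak_gflops = total_cpus * clock_ghz * flops_per_cycle

print(total_cpus)   # 50
print(peak_gflops)  # 150.0
```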

This dedicated high-performance Linux cluster enables very efficient analysis and simulation of the complex experimental data obtained by HADES and other experiments. It also provides the possibility to compile and run jobs based on parallel algorithms. A fast Internet connection is available, which allows data transfer from and to the Nuclear Physics Laboratory of the University of Cyprus for local analysis and storage.

 

Figure 1.1: High-performance Linux cluster features

 


Figure 1.2: Photograph of part of the high-performance Linux cluster

The cluster runs the Red Hat Linux 8.0 operating system, which provides two versions of the GNU C/C++ compiler (gcc 3.2 and gcc 2.96) to preprocess, assemble, and link C/C++ source files. The complete HADES analysis and simulation software packages have been installed and tested on this cluster. Most of them run under gcc 2.96, and some under gcc 3.2. More specifically, the following packages have been installed under gcc 2.96:

1. ROOT versions: 3.02-07, 3.03-09

2. CERN library versions: 2000, 2002

3. Pluto versions: v3.58, v3.60

4. Hydra versions: v6_06, v6_08, nov02, nov02-lvl3, v6_13, v6_14

5. UrQMD versions: v1.2, v1.3; convert-urqmd versions: 1.2, 1.3.6

6. HGeant versions: v3_13, v3_14

7. Oracle interface versions: 8.1.7, 9.0.1

One of our goals is to produce 200 million simulated events of the 2A GeV 12C+12C collision system in order to compare with the real data taken in 2002 with the HADES spectrometer. To analyze the data, we have performed simulations using the GEANT 3.21 code. As event generator we have used the UrQMD transport code of the Frankfurt group. These events will then be processed through HGeant and Hydra. Finally, the simulated events will be reconstructed in several steps:

1.   Read event information from the database at different reconstruction levels;

2.   Perform pattern recognition in some or all of the HADES detectors and subdetectors;

3.   Identify particles;

4.   Fit the reconstructed tracks to obtain the particle momenta and provide information about the reconstructed events.
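The four steps above form a simple per-event chain. A minimal sketch of that chain is given below; all function names and data shapes are hypothetical stand-ins for illustration, not the actual Hydra classes or interfaces:

```python
# Illustrative sketch of the four reconstruction steps.
# Every name here is a hypothetical placeholder, not the real Hydra API.

def read_event(raw):
    # Step 1: read event information from the database (here: pass-through)
    return raw["hits"]

def pattern_recognition(hits):
    # Step 2: group detector hits into track candidates (trivial placeholder)
    return [hits]

def identify_particles(tracks):
    # Step 3: assign a particle hypothesis to each track candidate
    return [{"track": t, "pid": "unknown"} for t in tracks]

def fit_tracks(candidates):
    # Step 4: fit tracks to obtain momenta (placeholder value)
    return [{**c, "momentum": None} for c in candidates]

def reconstruct_event(raw):
    # Chain the four steps for one event
    return fit_tracks(identify_particles(pattern_recognition(read_event(raw))))

event = {"hits": [1, 2, 3]}
print(len(reconstruct_event(event)))  # 1
```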

These calculations are extremely time-consuming and would require about 645 CPU days on a single Pentium 4 processor (see Table 1.1). On our cluster farm, this time is reduced significantly, to about 16 CPU days.

Table 1.1: The computing (CPU) time needed to generate 200 million simulated events of the 2A GeV 12C+12C collision system using our cluster farm, compared to a single Pentium 4 processor.

Program           Single Processor    Cluster Farm
                  (CPU days)          (CPU days)
UrQMD             187.5                4.7
Geant             354                  8.9
Hydra Analysis    104                  2.6
Total             645.5               16.2
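The speedup implied by Table 1.1 can be checked directly from the listed times (the values below are copied from the table):

```python
# Speedup implied by Table 1.1: single-processor time over cluster-farm time.
single = {"UrQMD": 187.5, "Geant": 354.0, "Hydra Analysis": 104.0}
cluster = {"UrQMD": 4.7, "Geant": 8.9, "Hydra Analysis": 2.6}

total_single = sum(single.values())    # CPU days on one Pentium 4
total_cluster = sum(cluster.values())  # CPU days on the cluster farm
speedup = total_single / total_cluster

print(round(total_single, 1))   # 645.5
print(round(total_cluster, 1))  # 16.2
print(round(speedup, 1))        # 39.8
```

The overall speedup of about 40 is close to, though below, the ideal factor of 50 for the cluster's 50 processors, as would be expected once scheduling and I/O overheads are taken into account.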