JINR computing for NOvA

A computing infrastructure based on Grid and cloud technologies was built at JINR and integrated into the NOvA computing resources, enabling JINR physicists to participate fully in software development and in the analysis of NOvA experiment data.

Local General-Purpose VMs (GPVMs)

The NOvA GPVMs have all the required software installed via CVMFS, together with slightly modified OSG-specific yum packages, so users can work on them as on the original GPVMs at FNAL (except for Grid job submission).
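
For example, the NOvA environment can be set up from CVMFS after logging in; the script path below follows the usual NOvA CVMFS layout but is an assumption and may differ on these machines:

  # Set up the NOvA software environment from CVMFS (path is assumed, verify locally)
  source /cvmfs/nova.opensciencegrid.org/novasoft/slf6/novasoft/setup/setup_nova.sh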

The machines can be accessed via ssh by IP address directly from the JINR network, or through the JINR VPN when connecting from outside. Local JINR Kerberos accounts are used for the ssh login; FNAL Kerberos is also available after logging in with a JINR account. All the VMs share a 20 TB NFS volume to be used as a working area and for storing analysis data, plus a 24 TB office-class storage for miscellaneous files.
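
A typical login session might look like the following; the user name and the JINR Kerberos realm (assumed here to be JINR.RU) are placeholders to be replaced with local values:

  # Obtain a JINR Kerberos ticket, then log in with GSSAPI ticket forwarding
  kinit user@JINR.RU
  ssh -K user@novacldvm01
  # On the VM, obtain an FNAL ticket when FNAL services are needed
  kinit user@FNAL.GOV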

Currently available GPVMs
Hostname     # of CPUs  RAM    Local storage
novacldvm01  4          8 GB   40 GB /home, 10 GB /tmp, 10 GB CVMFS cache
novacldvm02  4          8 GB   40 GB /home, 10 GB /tmp, 10 GB CVMFS cache
novacldvm03  4          8 GB   40 GB /home, 10 GB /tmp, 10 GB CVMFS cache
novacldvm04  4          8 GB   40 GB /home, 10 GB /tmp, 10 GB CVMFS cache
novacldvm05  4          8 GB   40 GB /home, 10 GB /tmp, 10 GB CVMFS cache
novacldvm06  4          8 GB   40 GB /home, 10 GB /tmp, 10 GB CVMFS cache
Total        24         48 GB  240 GB /home, 60 GB /tmp, 60 GB CVMFS cache

HTCondor cluster

The HTCondor cluster can accept NOvA production jobs from the Open Science Grid (OSG) and process any other jobs submitted locally (a minimal local submission example is shown after the table). There are a few types of worker nodes, listed in the table below.

Currently available worker nodes
Type    # of nodes  # of CPUs per node  RAM       Local storage
Type 1  14          5                   11.67 GB  20 GB system, 11.5 GB swap, 330 GB job working dir
Type 2  16          5                   23.57 GB  20 GB system, 2.5 GB swap, 330 GB job working dir
Type 3  40          6                   31.47 GB  20 GB system, 15.7 GB swap, 380 GB job working dir
Total   70          390                 1.8 TB    26.6 TB disk space
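
As a sketch of local use, a job can be submitted to the cluster with a standard HTCondor submit description file; the file and script names below are placeholders:

  # myjob.sub: a minimal HTCondor submit description (all names are placeholders)
  executable     = myjob.sh
  output         = myjob.out
  error          = myjob.err
  log            = myjob.log
  request_cpus   = 1
  request_memory = 2048
  queue

The job is then submitted with condor_submit myjob.sub and can be monitored with condor_q.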