HPC Infrastructure

The following infrastructure hardware is available at the research center:

MOISE Cloud (2022)

The infrastructure consists of 16 compute nodes, 3 service nodes, and a dedicated storage system.

Hardware specifications:

Compute nodes: HPE ProLiant DL385 Gen10 (16 nodes)
CPU: 128x AMD EPYC 7702 cores @ 2.0 GHz
Memory (RAM): 1024 GB
Storage: 2x 480 GB local SSDs
Interconnect (storage and communications): 2x 25 GbE adapters
Other connectivity: 10 GbE Internet, 2x 1 Gbps management

Service nodes: HPE ProLiant DL385 Gen10 (3 nodes)
CPU: 48x AMD EPYC 7352 cores @ 2.3 GHz
Memory (RAM): 256 GB
Storage: 2x 480 GB local SSDs
Interconnect (storage and communications): 2x 25 GbE adapters
Other connectivity: 10 GbE Internet, 2x 1 Gbps management

Dedicated storage: HPE Primera C630
Storage capacity: 214 TB hybrid (46 TB SSD / 168 TB SAS)
Storage connectivity: 8x 25 GbE

Project co-financed by the Competitiveness Operational Programme 2014-2020 (POC/398/1/1) - Modernization of the computing and storage infrastructure of the Computer Science Research Center of the West University of Timișoara to provide Cloud and high-performance computing services (MOISE) - https://moise.projects.uvt.ro

BID Power Cluster


Hardware specifications:

IBM Power System AC922 - hostname: kratos
CPU: 160x POWER9 cores @ 3.6 GHz
GPU: 4x NVIDIA V100 16 GB HBM2 with NVLink
Memory (RAM): 314 GB
Storage: 2x 960 GB local SAS drives
Remote storage: via 2x 40 Gbps QDR adapters
Interconnect: 2x 40 Gbps QDR InfiniBand
Connectivity: 10 GbE Internet, 2x 1 Gbps management

IBM Power System AC922 - hostname: kraken
CPU: 160x POWER9 cores @ 3.7 GHz
GPU: 4x NVIDIA V100 32 GB HBM2 with NVLink
Memory (RAM): 633 GB
Storage: 2x 960 GB local SAS drives
Remote storage: via 2x 40 Gbps QDR adapters
Interconnect: 2x 40 Gbps QDR InfiniBand
Connectivity: 10 GbE Internet, 2x 1 Gbps management

Spectrum Scale Storage
Storage capacity: 1 PB (petabyte)
Storage connectivity: 4x 100 Gbps InfiniBand and 4x 10 GbE
Software support: developers have access to IBM-specific compilers for both CPU and GPU (CUDA, IBM PowerAI for distributed deep learning).
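
For orientation, the following is a minimal sketch in C of a CUDA runtime device query, the kind of program one might compile on an AC922 node to list its V100 GPUs. It is a generic illustration rather than a site-specific tool; the file name devices.c and the nvcc invocation assume a standard CUDA toolkit installation.

    #include <stdio.h>
    #include <cuda_runtime.h>

    /* Print one line per CUDA device visible on the node
       (e.g. the 4x V100 of an AC922). */
    int main(void) {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess) {
            fprintf(stderr, "cudaGetDeviceCount: %s\n", cudaGetErrorString(err));
            return 1;
        }
        for (int i = 0; i < count; i++) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("GPU %d: %s, %.1f GB, compute capability %d.%d\n",
                   i, prop.name, prop.totalGlobalMem / 1e9, prop.major, prop.minor);
        }
        return 0;
    }

Compiled and run as, for example, nvcc devices.c -o devices && ./devices, it prints one line per visible GPU.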

HOST GPU Cluster

Notice: the HOST GPU Cluster has been decommissioned.

To accomplish the HOST project's goals, the existing infrastructure was extended and upgraded with:


7x GPU Compute Nodes
CPU: 2x Intel Xeon @ 3.46 GHz
GPU: 1x NVIDIA Tesla M2070Q (448 cores, 6 GB GDDR5)
Memory (RAM): 32 GB
Storage: 2x 250 GB local NL-SAS drives
Remote storage: via 2x 40 Gbps InfiniBand
Interconnect: 2x 40 Gbps FDR InfiniBand
Connectivity: 10 GbE Internet, 2x 1 Gbps management

Storage Server
Storage capacity: 60x 300 GB SAS hard drives, 18 TB total raw storage
Storage connectivity: 8x 8 Gbps FC (to the I/O nodes) and 2x 40 Gbps InfiniBand into the network
Software support: developers have access to the GNU Compilers, Intel XE Compilers (various versions), and software libraries such as MPICH2 and OpenMP; a minimal MPI sketch follows this list.
Software upgrades: GPFS licenses (for all new and existing nodes); LoadLeveler workload scheduler; 10 Gbps Ethernet core switch; 40 Gbps InfiniBand core switch (subnet manager); Intel Cluster Studio XE (compilers, debuggers).
Data room upgrades: new cooling units, power lines, and a fire detection and suppression system.
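
As a minimal sketch of the MPI environment mentioned above (assuming MPICH2 with its standard mpicc/mpiexec wrappers; the file name hello.c is illustrative), the following C program reports each rank and the node it runs on:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size, len;
        char node[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);               /* start the MPI runtime     */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank       */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
        MPI_Get_processor_name(node, &len);   /* node this rank runs on    */

        printf("rank %d of %d on %s\n", rank, size, node);

        MPI_Finalize();
        return 0;
    }

A typical build and launch would be mpicc hello.c -o hello followed by mpiexec -n 4 ./hello.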

This project has received funding from the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement FP7-REGPOT-2011-1 no. 284595.

InfraGRID Cluster

The InfraGRID project aimed to set up a service-oriented grid infrastructure to support complex research projects and technology transfer.
One of its main objectives was the acquisition of a computational cluster.

Notice: InfraGRID Cluster was decommissioned.


IBM BladeCenter-H hardware specifications:

CPU: 112x Intel Xeon @ 2.00 GHz, 4 cores/CPU (448 cores in total)
Memory (RAM): 1 GB/core
Storage: 145 GB local SAS drives per blade server
Remote storage: via 4 Gbps Fibre Channel fabrics
Interconnect: 40 Gbps QDR InfiniBand
Connectivity: 10 GbE Internet, 2x 1 Gbps management
Software support: developers have access to the GNU Compilers, Intel XE Compilers (various versions), and software libraries such as MPICH2 and OpenMP.
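
Since OpenMP is among the libraries listed above, here is a minimal shared-memory sketch in C (generic OpenMP, nothing InfraGRID-specific; the file name harmonic.c is illustrative): a parallel loop whose per-thread partial sums are combined by a reduction clause.

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        const int n = 1000000;
        double sum = 0.0;

        /* Each thread accumulates a private partial sum; the reduction
           clause combines them when the loop ends. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= n; i++)
            sum += 1.0 / i;

        printf("H(%d) ~= %.6f (max threads: %d)\n", n, sum, omp_get_max_threads());
        return 0;
    }

With GCC this builds as gcc -fopenmp harmonic.c -o harmonic; the Intel XE compilers use their own OpenMP flag.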

This computing resource has been co-funded by the European Commission through the POSCCE program, contract no. 41 / 11.05.2009 (InfraGrid Infrastructura Grid Orientata spre Servicii). For more information regarding other programs co-financed by the European Commission please visit: www.fonduri-ue.ro

BlueGene/P SuperComputer

UVT’s BG/P consists of a single, fully loaded BlueGene/P rack with more than 1,000 CPUs and 4 TB of RAM, offering a sustained performance of 11.7 TFlops.

Notice: BG/P Supercomputer was decommissioned.


BlueGene/P rack hardware specifications:

CPU: 1024x quad-core PowerPC 450 @ 850 MHz
Memory (RAM): 4 GB/CPU, 4 TB installed in total
Storage: IBM DS3524 SAN with a total of 15 TB installed storage
Interconnect: 3D torus network at 40 Gbps
Connectivity: 10 GbE Ethernet
Software support: developers have access to the IBM XL Compilers for C, C++ and Fortran (various versions) and software libraries such as MPICH2 (modified to support the BG/P programming model), OpenMP, and MPIx (the IBM MPI library).
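
As a hedged illustration of the MPI stack above, the sketch below is a plain C collective operation that would also have run on the BG/P (the example itself is standard MPI; any mpixlc-style compiler wrapper name is an assumption about a typical BG/P installation):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Sum all rank ids on rank 0; on BG/P, collectives like this
           were mapped onto the dedicated torus/collective networks. */
        int local = rank, total = 0;
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of ranks 0..%d = %d\n", size - 1, total);

        MPI_Finalize();
        return 0;
    }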

This computing resource has been co-funded by the European Commission through the POSCCE program, contract no. 260 / 28.09.2010 (Institutul de Cercetări Avansate de Mediu (ICAM)). For more information regarding other programs co-financed by the European Commission please visit: www.fonduri-ue.ro