The HPC center was founded and is operated by the Research Center in Computer Science
(Computer Science Department, Faculty of Mathematics and Computer Science, West University of Timisoara).
The information on this website is constantly updated.
Further questions regarding the HPC resources should be sent by email to: support [AT] info [DOT] uvt [DOT] ro
The following infrastructure hardware is available at the research center:
To receive access to the infrastructure, please fill in the sign-up form.
The infrastructure consists of:
- a state-of-the-art, container-based data center;
- a redundant power supply offering 99.(9)% availability (main power line plus a generator set as backup);
- redundant cooling and fire-protection systems;
- raw compute power at a glance: 2048 cores, 16 TB RAM, 16 TB local storage, 210 TB dedicated storage.
Hardware specifications:
Compute nodes: HPE ProLiant DL385 Gen10 (16 nodes)

| Specification | Description |
| --- | --- |
| CPU | 128x AMD EPYC 7702 2.0 GHz cores |
| Memory (RAM) | 1024 GB |
| Storage | 2x 480 GB local SSDs |
| Interconnect (storage and communications) | 2x 25 GbE adapters |
| Other connectivity | 10 GbE Internet, 2x 1 Gbps management |
Service nodes: HPE ProLiant DL385 Gen10 (3 nodes)

| Specification | Description |
| --- | --- |
| CPU | 48x AMD EPYC 7352 2.3 GHz cores |
| Memory (RAM) | 256 GB |
| Storage | 2x 480 GB local SSDs |
| Interconnect (storage and communications) | 2x 25 GbE adapters |
| Other connectivity | 10 GbE Internet, 2x 1 Gbps management |
Dedicated storage: HPE Primera C630

| Specification | Description |
| --- | --- |
| Storage capacity | 214 TB hybrid (46 TB SSD / 168 TB SAS) |
| Storage connectivity | 8x 25 GbE |
Project co-financed through the Competitiveness Operational Programme 2014-2020 (POC/398/1/1) - Modernization of the computing and storage infrastructure of the Research Center in Computer Science of the West University of Timișoara for providing Cloud and high-performance computing services (MOISE) - https://moise.projects.uvt.ro
Hardware specifications:
IBM Power System AC922 - hostname: kratos

| Specification | Description |
| --- | --- |
| CPU | 160x POWER9 3.6 GHz cores |
| GPU | 4x NVIDIA V100 16 GB HBM2 with NVLink |
| Memory (RAM) | 314 GB |
| Storage | 2x 960 GB local SAS drives |
| Remote storage | via 2x 40 Gbps QDR InfiniBand adapters |
| Interconnect | 2x 40 Gbps QDR InfiniBand |
| Connectivity | 10 GbE Internet, 2x 1 Gbps management |
IBM Power System AC922 - hostname: kraken

| Specification | Description |
| --- | --- |
| CPU | 160x POWER9 3.7 GHz cores |
| GPU | 4x NVIDIA V100 32 GB HBM2 with NVLink |
| Memory (RAM) | 633 GB |
| Storage | 2x 960 GB local SAS drives |
| Remote storage | via 2x 40 Gbps QDR InfiniBand adapters |
| Interconnect | 2x 40 Gbps QDR InfiniBand |
| Connectivity | 10 GbE Internet, 2x 1 Gbps management |
Spectrum Scale Storage

| Specification | Description |
| --- | --- |
| Storage capacity | 1 PB (petabyte) |
| Storage connectivity | 4x 100 Gbps EDR InfiniBand and 4x 10 GbE |
Software support: developers have access to IBM-specific compilers for both CPU and GPU (CUDA, IBM PowerAI for distributed deep learning).
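For new users of the Power nodes, the sketch below shows a minimal device query against the CUDA runtime C API, listing the GPUs visible on a node. It is illustrative only: the file name and build line are assumptions, and the module/toolchain setup varies by installation.

```c
/* devices.c - list the GPUs visible on a node via the CUDA runtime C API.
 * Illustrative sketch; the build line is an assumption, e.g.:
 *   nvcc -x cu devices.c -o devices
 */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int n = 0;
    if (cudaGetDeviceCount(&n) != cudaSuccess || n == 0) {
        fprintf(stderr, "no CUDA devices visible\n");
        return 1;
    }
    for (int i = 0; i < n; i++) {
        struct cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        /* totalGlobalMem is reported in bytes */
        printf("GPU %d: %s, %.1f GiB\n", i, p.name,
               p.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```

On a kratos- or kraken-class node this should report four V100 devices.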
The HOST project aimed:
- to widen the team's research and development knowledge and experience in HPC services and their applications;
- to increase the number of users of the data centre's services, to achieve the status of an EU regional supercomputing centre, and to become a stakeholder in national and international e-Infrastructure initiatives;
- to become a centre for supporting innovation and regional development.
Notice: the HOST GPU cluster has been decommissioned.
To accomplish these goals, the existing infrastructure was extended and upgraded with:
7x GPU compute nodes

| Specification | Description |
| --- | --- |
| CPU | 2x Intel Xeon 3.46 GHz |
| GPU | 1x NVIDIA Tesla M2070Q (448 cores, 6 GB GDDR5) |
| Memory (RAM) | 32 GB |
| Storage | 2x 250 GB local NL-SAS hard drives |
| Remote storage | via 2x 40 Gbps InfiniBand |
| Interconnect | 2x 40 Gbps QDR InfiniBand |
| Connectivity | 10 GbE Internet, 2x 1 Gbps management |
Storage server

| Specification | Description |
| --- | --- |
| Storage capacity | 60x 300 GB SAS hard drives; 18 TB total raw storage |
| Storage connectivity | 8x 8 Gbps FC (to the I/O nodes) and 2x 40 Gbps InfiniBand (into the network) |
Software support: developers have access to the GNU compilers, the Intel XE compilers (various versions), and software libraries such as MPICH2, OpenMP, etc.
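As a quick sanity check of this toolchain, here is a minimal hybrid MPI + OpenMP program in C. It is a sketch under assumptions: the MPICH2 `mpicc` wrapper and a GCC-style `-fopenmp` flag (the Intel compilers use their own OpenMP flag).

```c
/* hello_hybrid.c - minimal MPI + OpenMP check (illustrative sketch).
 * Build, e.g.:  mpicc -fopenmp hello_hybrid.c -o hello_hybrid
 * Run,   e.g.:  mpiexec -n 4 ./hello_hybrid
 */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* each MPI rank runs an OpenMP thread team on its node */
    #pragma omp parallel
    printf("rank %d of %d, thread %d of %d\n",
           rank, size, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}
```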
Software upgrades: GPFS licenses (for all new and existing nodes); LoadLeveler workload scheduler; a 10 Gbps Ethernet core switch; a 40 Gbps InfiniBand core switch (subnet manager); Intel Cluster Studio XE (compilers, debuggers).
Data room upgrades: new cooling units, power lines, and a fire detection and suppression system.
This project has received funding from the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement FP7-REGPOT-2011-1 no. 284595.
The InfraGRID project aimed to set up a service-oriented Grid infrastructure to support complex research projects and technology transfer.
One of its main objectives was the acquisition of a computational cluster.
Notice: the InfraGRID cluster has been decommissioned.
IBM BladeCenter-H hardware specifications:
| Specification | Description |
| --- | --- |
| CPU | 112x Intel Xeon 2.00 GHz, 4 cores/CPU (448 cores in total) |
| Memory (RAM) | 1 GB/core |
| Storage | 145 GB local SAS storage per blade server |
| Remote storage | via 4 Gbps Fibre Channel fabrics |
| Interconnect | 40 Gbps QDR InfiniBand |
| Connectivity | 10 GbE Internet, 2x 1 Gbps management |
Software support: developers have access to the GNU compilers, the Intel XE compilers (various versions), and software libraries such as MPICH2, OpenMP, etc.
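To illustrate the shared-memory side of this toolchain, the sketch below sums a series with an OpenMP reduction across a blade's cores; the file name and compiler flag are assumptions and differ between the GNU and Intel compilers.

```c
/* omp_sum.c - OpenMP parallel reduction (illustrative sketch).
 * Build, e.g.:  gcc -fopenmp omp_sum.c -o omp_sum
 */
#include <stdio.h>
#include <omp.h>

#define N 10000000L

int main(void) {
    double sum = 0.0;

    /* iterations are split across threads; each thread keeps a
     * private partial sum that OpenMP combines at the end */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; i++)
        sum += 0.5 * (double)i;

    /* expected result: 0.25 * N * (N - 1) */
    printf("sum = %.1f (max threads: %d)\n", sum, omp_get_max_threads());
    return 0;
}
```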
This computing resource has been co-funded by the European Commission through the POSCCE programme, contract no. 41 / 11.05.2009 (InfraGrid - Service-Oriented Grid Infrastructure). For more information regarding other programmes co-financed by the European Commission, please visit: www.fonduri-ue.ro
UVT’s BG/P system consists of a single, fully loaded BlueGene/P rack with 1024 quad-core CPUs and 4 TB of RAM. It offers 11.7 TFlops of sustained performance, out of a theoretical peak of 1024 CPUs x 4 cores x 850 MHz x 4 FLOPs/cycle, roughly 13.9 TFlops.
Notice: the BG/P supercomputer has been decommissioned.
BlueGene/P rack hardware specifications:
| Specification | Description |
| --- | --- |
| CPU | 1024x quad-core PowerPC 450, 850 MHz |
| Memory (RAM) | 4 GB/CPU, 4 TB installed in total |
| Storage | IBM DS3524 SAN with 15 TB of installed storage |
| Interconnect | 3D torus network at 40 Gbps |
| Connectivity | 10 GbE |
Software support: developers have access to the IBM XL compilers for C, C++ and Fortran (various versions) and to software libraries such as MPICH2 (modified to support the BG/P programming model), OpenMP and MPIx (the IBM MPI library).
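For historical context, BG/P applications were ordinary MPI C/Fortran programs built with the XL toolchain; the sketch below approximates pi by midpoint-rule integration with an MPI_Reduce. It is illustrative only, and the `mpixlc` wrapper name is an assumption (wrapper names varied by installation).

```c
/* pi_mpi.c - approximate pi by integrating 4/(1+x^2) over [0,1],
 * with the intervals divided among MPI ranks (illustrative sketch).
 * Build, e.g.:  mpixlc pi_mpi.c -o pi_mpi
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    const long n = 100000000L;          /* number of intervals */
    int rank, size;
    double h, local = 0.0, pi = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    h = 1.0 / (double)n;
    /* each rank evaluates a strided subset of the midpoints */
    for (long i = rank; i < n; i += size) {
        double x = h * ((double)i + 0.5);
        local += 4.0 / (1.0 + x * x);
    }
    local *= h;

    /* sum the partial results onto rank 0 */
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("pi ~= %.12f\n", pi);

    MPI_Finalize();
    return 0;
}
```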
This computing resource has been co-funded by the European Commission through the POSCCE programme, contract no. 260 / 28.09.2010 (Institute for Advanced Environmental Research, ICAM). For more information regarding other programmes co-financed by the European Commission, please visit: www.fonduri-ue.ro