High-Performance Computing in University Scientific Research
Santillán Salazar Ruben Sandro
Abstract-- This document presents the concepts needed for a proper understanding of high-performance computing. It is intended to explain the influence of high-performance computing on the environment of a professional career, as well as what it entails in student scientific research today.

I. INTRODUCTION

High-performance computing has become indispensable to the business environment, to scientific researchers, to undergraduate and graduate studies at universities, and to government agencies, as a means to generate new discoveries and innovate cutting-edge products and services.

That is why, to take advantage of the capabilities of high-performance computing systems, it is essential to understand how they work and to adapt the design of programs to exploit their full potential.

High-performance computing has transformed science, making enormous contributions in a variety of fields, including scientific research. In the following sections, this type of research is examined from a student perspective.

II. DEFINITION

High-performance computing involves the use of "supercomputers" (Fig. 1) and massively parallel processing techniques to solve complex computational problems through computer modeling, simulation, and data analysis.

This type of computing brings together several technologies, including computer architecture, software, electronics, and application software, under a single system to solve advanced problems quickly and effectively. While a common computer or workstation usually contains a single central processing unit (CPU), an HPC system is essentially a network of CPUs (e.g., microprocessors), each of which contains multiple computational cores as well as its own local memory for running a wide range of software programs. [2]

The software programs that programmers write to run on supercomputers are divided into many smaller, independent computational tasks, called "sub-processes," that can run simultaneously on these cores. For supercomputers to work effectively, their cores must be designed to communicate data efficiently; a modern supercomputer may consist of 100,000 or more cores. For example, America's Titan (Fig. 2), currently the world's second-fastest supercomputer, contains just under 300,000 cores, which are capable of operating more than 6,000,000,000 threads simultaneously.

Fig. 1 Supercomputers.

Fig. 2 Titan supercomputer.
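The decomposition described above, in which one program is split into independent sub-processes that run simultaneously on many cores, can be sketched in a few lines. The following hypothetical Python example (the function, problem size, and core count are invented for illustration, not taken from the paper) divides a sum over a large range into per-core tasks and runs them in parallel worker processes:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """One independent sub-process task: sum of squares over [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, cores = 100_000, 4          # hypothetical problem size and core count
    step = n // cores
    # Decompose the program into independent tasks, one per core.
    tasks = [(i * step, (i + 1) * step) for i in range(cores)]
    with Pool(processes=cores) as pool:
        # map() runs the tasks simultaneously in separate worker processes.
        total = sum(pool.map(partial_sum, tasks))
    print(total == sum(i * i for i in range(n)))  # parallel result matches serial
```

Because the tasks share no data, they need no communication until their partial results are combined; real HPC workloads differ mainly in that their tasks must also exchange data efficiently, as noted above.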
III. RELEVANCE

The ability to leverage high-performance computing has become indispensable not only for advanced manufacturing industries but for a wide range of sectors. In fact, the use of high-performance computers has led to great advances in electronic design, content management and delivery, and the optimal development of power sources, among many other areas. In particular, high-performance computing enables breakthrough discoveries that fuel innovation and provides a cost-effective tool for accelerating the research and development process, as well as advanced modeling, simulation, and data analysis that can help solve manufacturing challenges, aid in decision making, optimize processes and designs, improve quality, predict performance and failures, and accelerate or even eliminate prototyping and testing.

As part of its importance, the HPC architecture itself (Fig. 3) leads to a continuous search for greater efficiency driven by technological advances. It is therefore necessary to investigate the different components of the aforementioned architecture, along several lines of work:

2. Development of process-scheduling algorithms aimed at asymmetric processors to optimize overall performance.
3. Analysis at different levels: operating system, compilers, programming techniques.
4. Cloud computing, basic software, and development of HPC applications (mainly big data).
5. Distributed intelligent real-time systems that take advantage of the computing power of the cloud (cloud robotics), Fig. 5.
6. Use of ABMS (Agent-Based Modeling and Simulation) [3] to develop an HPC input/output model [1] to predict how changes to the different components of the model affect the functionality and performance of the system.
7. Optimization of parallel algorithms to control the behavior of multiple robots working collaboratively, considering the distribution of their local processing capacity and their coordination with the computing power and storage capacity (data and knowledge) of a cloud.

Fig. 3 HPC architecture.

Fig. 4 FPGA.
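One of the research lines above proposes using agent-based modeling and simulation to study an HPC input/output system. A minimal, hypothetical ABMS-style sketch is shown below (the agents, request rates, and capacities are invented for illustration and are not taken from [1] or [3]): compute-node agents issue I/O requests each simulation step, a file-server agent serves a bounded number of them per step, and queueing behavior emerges from the interaction.

```python
import random

class ComputeNode:
    """Agent that issues one I/O request with probability `rate` each step."""
    def __init__(self, rate):
        self.rate = rate
    def step(self):
        return 1 if random.random() < self.rate else 0

class IOServer:
    """Agent that serves at most `capacity` requests per step; the rest queue up."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = 0
    def step(self, arrivals):
        self.queue += arrivals
        served = min(self.queue, self.capacity)
        self.queue -= served
        return served

def simulate(nodes, server, steps):
    """Run the model; return total requests served and the final queue length."""
    served = 0
    for _ in range(steps):
        arrivals = sum(node.step() for node in nodes)
        served += server.step(arrivals)
    return served, server.queue

if __name__ == "__main__":
    random.seed(0)
    nodes = [ComputeNode(rate=0.5) for _ in range(8)]   # 8 compute nodes
    server = IOServer(capacity=3)                       # shared I/O server
    print(simulate(nodes, server, steps=100))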
IV. EXPECTED RESULTS OF THIS RESEARCH

The aim is to study complex models that integrate real-time sensor networks and parallel computation. Prediction strategies for catastrophes (floods and fires, for example) are based on these models, with high processing capacity and real-time signal monitoring. [1]

Work is under way on techniques for recovery from multiple system-level checkpoints, which serve to ensure the correct completion of scientific applications on HPC systems affected by external transient faults, and on integrating this solution with the random detection tools already developed. [3]

Also expected is the development of applications linked to "big data", especially those to be solved in cloud computing, and the optimization of parallel algorithms to control the behavior of multiple robots working collaboratively, considering the distribution of their local processing capacity and their coordination with the computing power and storage capacity (data and knowledge) of a cloud.

V. CONCLUSIONS

VI. REFERENCES

[1] A. De Giusti, F. Tinetti, M. Naiouf, F. Chichizola, L. De Giusti, H. Villagarcía, D. Montezanti, D. Encinas, A. Pousa, I. Rodriguez, L. Iglesias, J. M. Paniego, M. Pi Puig, M. Dell'Oso, and M. Mendez, "Arquitecturas Multiprocesador en Computación de Alto Desempeño: Software, Métricas, Modelos y Aplicaciones".

[2] S. J. Ezell and R. D. Atkinson, "The Vital Importance of High-Performance Computing to U.S. Competitiveness".

[3] D. Encinas et al., "Modeling I/O System in HPC: An ABMS Approach", The Seventh International Conference on Advances in System Simulation (SIMUL), ISBN: 978-1-61208-442-8, 2015.