PARALLEL COMPUTER MEMORY ARCHITECTURE
Parallel computer architecture optimizes performance and
programmability by organizing resources within the technology
and cost limits of any given time.
Shared memory system
Shared memory is a fundamental concept in computer
science and operating systems that allows multiple processes
or threads to access a common region of memory
concurrently. It is a memory space accessible by
multiple processes or threads, enabling them to share data and
communicate without the need for extensive data copying
or complex message passing.
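As an illustration, the sketch below uses POSIX shared memory (shm_open and mmap) so that a parent process and a forked child read and write the same region; the object name /demo_shm and the region size are arbitrary choices for this example, not part of any particular system described here.

/* Minimal sketch of POSIX shared memory: the child writes a string,
 * the parent reads it back from the same mapping.
 * Compile with: cc shm_demo.c -lrt (the -lrt may be unneeded on newer glibc). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    const char *name = "/demo_shm";   /* arbitrary object name */
    size_t size = 4096;               /* arbitrary region size  */

    /* Create (or open) a named shared-memory object and size it. */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }

    /* Map the object; parent and child will see the same bytes. */
    char *region = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }
    close(fd);                        /* mapping stays valid after close */

    if (fork() == 0) {                /* child: write into the shared region */
        strcpy(region, "hello from the child process");
        return 0;
    }
    wait(NULL);                       /* parent: wait, then read the shared data */
    printf("parent read: %s\n", region);

    munmap(region, size);
    shm_unlink(name);                 /* remove the named object */
    return 0;
}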
Parallel Programming
Shared memory is vital in parallel computing, where
multiple threads or processes work on a shared data set to
enhance performance.
Advantages
• Speed: Shared memory communication is fast since it avoids data copying.
• Simplicity: It simplifies communication, as processes can directly read and write shared memory locations.

Disadvantages
• Synchronization: Careful synchronization mechanisms are needed to prevent data corruption (see the sketch after this list).
• Complexity: Debugging and managing shared memory can be challenging due to potential data races.
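To illustrate the synchronization point above, here is a minimal pthreads sketch in which several threads increment one shared counter and a mutex prevents lost updates; the thread and iteration counts are arbitrary illustration values.

/* Shared-data synchronization with pthreads: several threads increment
 * one shared counter, and a mutex serializes the updates.
 * Compile with: cc mutex_demo.c -lpthread */
#include <pthread.h>
#include <stdio.h>

#define THREADS    4
#define ITERATIONS 100000

static long counter = 0;                       /* shared data */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++) {
        pthread_mutex_lock(&lock);             /* protect the shared update */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t tid[THREADS];
    for (int i = 0; i < THREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < THREADS; i++)
        pthread_join(tid[i], NULL);

    /* Without the mutex, lost updates would usually make this total fall short. */
    printf("counter = %ld (expected %d)\n", counter, THREADS * ITERATIONS);
    return 0;
}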
Two versions of shared memory systems are available today:
Symmetric multiprocessors (SMP): a computer system where
multiple processors share an operating system and memory,
with a single copy of the OS controlling all processors.

Non-uniform memory access (NUMA) architectures: a common
memory architecture for multiprocessing in which each
processor has its own local memory, primarily used on servers.
Symmetric multi-processors (SMPs)
- All processors share the same physical main memory.
- Memory bandwidth per processor is the limiting factor for this
type of architecture.
- Typical size: 2-32 processors.
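As a small illustration of the single shared-memory view an SMP presents, the sketch below simply asks the operating system how many processors are online; on an SMP machine, all of them address the same physical main memory.

/* Sketch: query how many processors the shared-memory system exposes. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long cpus = sysconf(_SC_NPROCESSORS_ONLN);   /* CPUs currently online */
    if (cpus < 1) {
        fprintf(stderr, "could not determine processor count\n");
        return 1;
    }
    printf("online processors sharing main memory: %ld\n", cpus);
    return 0;
}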
What is non-uniform memory access (NUMA)?
Non-uniform memory access, or NUMA, is a method of
configuring a cluster of microprocessors in a
multiprocessing system so they can share memory locally.
The idea is to improve the system's performance and allow
it to expand as processing needs evolve.
How non-uniform memory access works
When a processor looks for data at a certain memory
address, it first looks in the L1 cache on the
microprocessor. Then it moves to the larger L2 cache chip
and finally to a third level of cache (L3). The NUMA
configuration provides this third level. If the processor still
cannot find the data, it will look in the remote memory
located near the other microprocessors.
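The following sketch, assuming a Linux system with libnuma installed (link with -lnuma), shows how memory can be allocated on a chosen node so the CPUs of that node access it locally rather than over the interconnect; node 0 is an arbitrary choice for illustration.

/* NUMA-aware allocation with libnuma: request pages bound to one node. */
#include <numa.h>
#include <stdio.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    size_t size = 1 << 20;              /* 1 MiB, arbitrary size */
    int node = 0;                       /* target node (assumed to exist) */

    /* Allocate pages bound to the chosen node: local access for that node's CPUs. */
    char *local = numa_alloc_onnode(size, node);
    if (!local) {
        fprintf(stderr, "numa_alloc_onnode failed\n");
        return 1;
    }

    local[0] = 42;                      /* touch the page so it is backed */
    printf("allocated %zu bytes on node %d\n", size, node);

    numa_free(local, size);
    return 0;
}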
NUMA and symmetric multiprocessing
NUMA is commonly used in a symmetric multiprocessing
system. An SMP system is a tightly coupled, shared-everything
system in which multiple processors work under
a single operating system and access each other's memory
over a common bus or interconnect path. These
microprocessors work on a single motherboard connected
by a bus.
NUMA node architecture
The NUMA architecture is common in multiprocessing
systems. These systems bring together multiple hardware
resources, including memory, input/output devices, chipset,
networking devices and storage devices, in addition to
processors. Each collection of resources is a node.
Multiple nodes are linked via a high-speed interconnect or
bus.
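As a sketch of how this node layout can be inspected from software, the example below uses libnuma (link with -lnuma) together with glibc's sched_getcpu() to report which node the current CPU belongs to and how much memory each node holds; it assumes a Linux system with libnuma available.

/* Inspect the NUMA node layout: node count, current CPU's node, per-node memory. */
#define _GNU_SOURCE
#include <numa.h>
#include <sched.h>
#include <stdio.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    int max_node = numa_max_node();              /* highest node number */
    printf("nodes: 0..%d\n", max_node);

    int cpu = sched_getcpu();                    /* CPU this thread runs on */
    int node = numa_node_of_cpu(cpu);            /* node that CPU belongs to */
    printf("running on CPU %d, which belongs to node %d\n", cpu, node);

    /* Local memory size per node (bytes), as reported by libnuma. */
    for (int n = 0; n <= max_node; n++) {
        long long free_mem = 0;
        long long total = numa_node_size64(n, &free_mem);
        printf("node %d: %lld bytes total, %lld free\n", n, total, free_mem);
    }
    return 0;
}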
Advantages and disadvantages of NUMA
Advantages
• One of the biggest advantages of NUMA is the fast movement of data and lower latency in the multiprocessing system. Additionally, NUMA reduces data replication and simplifies programming, and parallel computers with a NUMA architecture are highly scalable and responsive to data allocation in local memories.

Disadvantages
• One disadvantage of NUMA is that it can be expensive, and the lack of programming standards for larger configurations can make implementation challenging.
Security considerations
• Cybercrime is any criminal activity that involves a
computer, networked device or a network.
• Cybercrime hacks refer to illegal and unauthorized
activities conducted by individuals or groups in the digital space.
• A malware attack refers to the deliberate infiltration of
malicious software, commonly known as malware, into a
computer system or network with the intent to cause
harm, steal information or gain unauthorized access.
Conclusion
• Shared memory is a fundamental concept in computer
science, offering efficient data sharing and communication
among processes and threads. While it enhances
performance and simplifies communication, meticulous
synchronization and security measures are vital for its
effective use in various applications. Understanding
shared memory is essential for developers and system
administrators in concurrent computing environments.