DS Unit I
DATA STRUCTURES AND ALGORITHMS
The various Data Structures: There are various types of data structures, namely: Arrays, Polynomials, Lists, Trees, Graphs, Sets, Records, Files, Databases, Objects, etc.
ARRAYS: Arrays are among the oldest and most important data structures, and are used by almost every program. They are also used to implement many other data structures, such as lists and strings. An array is an abstract data type that contains a group of data items of the same type. An array can be one-dimensional (a vector), two-dimensional, multi-dimensional or sparse.
An array data structure or simply an array is a data structure consisting of a collection of
similar elements (values or variables), each identified by at least one array index or key. The
simplest type of data structure is a linear array, also called one-dimensional array.
The elements of an array data structure are required to have the same size and should use the
same data representation. The set of valid index tuples and the addresses of the elements (and
hence the element addressing formula) are usually, but not always, fixed while the array is in
use.
Applications of Arrays:
Arrays are used to implement mathematical vectors and matrices, as well as other kinds of
rectangular tables. Many databases, small and large, consist of (or include) one-dimensional
arrays whose elements are records. Arrays are used to implement other data structures, such
as heaps, hash tables, deques, queues, stacks, strings, and VLists.
One or more large arrays are sometimes used to emulate in-program dynamic memory
allocation, particularly memory pool allocation. Historically, this has sometimes been the
only way to allocate "dynamic memory" portably. Arrays can be used to determine partial or
complete control flow in programs, as a compact alternative to (otherwise repetitive) multiple
IF statements. They are known in this context as control tables and are used in conjunction with a purpose-built interpreter whose control flow is altered according to values contained in the array. The array may contain subroutine pointers (or relative subroutine numbers that can be acted upon by SWITCH statements) that direct the path of the execution.
They effectively exploit the addressing logic of computers. An array is stored so that the
position of each element can be computed from its index tuple by a mathematical formula.
For example, an array of 10 32-bit integer variables, with indices 0 through 9, may be stored
as 10 words at memory addresses 2000, 2004, 2008, … 2036, so that the element with index i
has the address 2000 + 4 × i.
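To see this formula in action, the following small C program (a hedged sketch; the variable names are invented for illustration) prints the address of each element; on a typical machine the addresses advance by sizeof(int) bytes per index, exactly as base + 4 × i does in the example above:

#include <stdio.h>

int main(void)
{
    int a[10];        /* ten 32-bit integers, indices 0 through 9 */
    int i;
    for (i = 0; i < 10; i++)
        printf("a[%d] is at %p\n", i, (void *)&a[i]);   /* address = base + sizeof(int) * i */
    return 0;
}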
When data objects are stored in an array, individual objects are selected by an index that is
usually a non-negative scalar integer. Indices are also called subscripts. An index maps the
array value to a stored object. There are three ways in which the elements of an array can be indexed: zero-based indexing (the first element is indexed by 0), one-based indexing (the first element is indexed by 1), and n-based indexing (the base index can be freely chosen).
Arrays can have multiple dimensions, so it is not uncommon to access an array using multiple indices. For example, a two-dimensional array A with three rows and four columns
might provide access to the element at the 2nd row and 4th column by the expression A[1,
3] (in a row major language) or A[3, 1] (in a column major language) in the case of a zero-
based indexing system. Thus two indices are used for a two-dimensional array, three for a
three-dimensional array, and n for an n-dimensional array.
One-dimensional arrays
A one-dimensional array (or single dimension array) is a type of linear array. Accessing its
elements involves a single subscript which can either represent a row or column index.
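In C, the example referred to in the next sentence would presumably be declared as follows (using the name the text itself mentions):

int anArrayName[10];    /* a one-dimensional array of ten int elements */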
In the given example the array can contain 10 elements of any value available to the int type.
In C, the array element indices are 0-9 inclusive in this case. For example, the expressions
anArrayName[0] and anArrayName[9] are the first and last elements respectively.
For a vector with linear addressing, the element with index i is located at the address B + c ×
i, where B is a fixed base address and c a fixed constant, sometimes called the address
increment or stride.
If the valid element indices begin at 0, the constant B is simply the address of the first
element of the array. For this reason, the C programming language specifies that array indices
always begin at 0; and many programmers will call that element "zeroth" rather than "first".
However, one can choose the index of the first element by an appropriate choice of the base
address B. For example, if the array has five elements, indexed 1 through 5, and the base
address B is replaced by B + 30c, then the indices of those same elements will be 31 to 35. If
the numbering does not start at 0, the constant B may not be the address of any element.
Multidimensional arrays
For a two-dimensional array, the element with indices i,j would have address B + c · i + d · j,
where the coefficients c and d are the row and column address increments, respectively.
More generally, in a k-dimensional array, the address of an element with indices i1, i2, …, ik is
B + c1 · i1 + c2 · i2 + … + ck · ik.
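In C, the 3 × 2 array discussed in the next paragraph would presumably be declared as:

int a[3][2];    /* 3 rows and 2 columns of integers, stored row by row */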
This means that array a has 3 rows and 2 columns, and the array is of integer type. Here we can store 6 elements; they are stored linearly, starting with the first row and then continuing with the second row. The above array will be stored as a11, a12, a21, a22, a31, a32.
This formula requires only k multiplications and k additions, for any array that can fit in
memory. Moreover, if any coefficient is a fixed power of 2, the multiplication can be
replaced by bit shifting. The coefficients ck must be chosen so that every valid index tuple
maps to the address of a distinct element.
If the minimum legal value for every index is 0, then B is the address of the element whose
indices are all zero. As in the one-dimensional case, the element indices may be changed by
changing the base address B. Thus, if a two-dimensional array has rows and columns indexed from 1 to 10 and 1 to 20, respectively, then replacing B by B + c1 − 3c2 will cause them to be renumbered from 0 through 9 and 4 through 23, respectively. Taking advantage of this
feature, some languages (like FORTRAN 77) specify that array indices begin at 1, as in
mathematical tradition; while other languages (like Fortran 90, Pascal and Algol) let the user
choose the minimum value for each index.
Dope vectors
The addressing formula is completely defined by the dimension d, the base address B, and the
increments c1, c2, …, ck. It is often useful to pack these parameters into a record called the
array's descriptor or stride vector or dope vector. The size of each element, and the minimum
and maximum values allowed for each index may also be included in the dope vector. The
dope vector is a complete handle for the array, and is a convenient way to pass arrays as
arguments to procedures. Many useful array slicing operations (such as selecting a sub-array,
swapping indices, or reversing the direction of the indices) can be performed very efficiently
by manipulating the dope vector.
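As an illustration, a dope vector might be packed into a C record such as the one below; this is only a sketch, and the field names are invented for this example rather than taken from any particular system:

#define MAXDIM 8

struct dope_vector {
    void   *base;              /* base address B */
    size_t  elem_size;         /* size of each element in bytes */
    int     ndim;              /* number of dimensions k */
    long    stride[MAXDIM];    /* address increments c1, c2, ..., ck */
    long    lo[MAXDIM];        /* minimum legal value for each index */
    long    hi[MAXDIM];        /* maximum legal value for each index */
};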
Compact layouts
Often the coefficients are chosen so that the elements occupy a contiguous area of memory.
However, that is not necessary. Even if arrays are always created with contiguous elements,
some array slicing operations may create non-contiguous sub-arrays from them.
There are two systematic compact layouts for a two-dimensional array. For example, consider the matrix

1 2 3
4 5 6
7 8 9

In the row-major order layout (adopted by C for statically declared arrays), the elements in each row are stored in consecutive positions and all of the elements of a row have a lower address than any of the elements of a consecutive row:

1 2 3 4 5 6 7 8 9
In column-major order (traditionally used by Fortran), the elements in each column are
consecutive in memory and all of the elements of a column have a lower address than any of the
elements of a consecutive column:
1 4 7 2 5 8 3 6 9
For arrays with three or more indices, "row major order" puts in consecutive positions any
two elements whose index tuples differ only by one in the last index. "Column major order"
is analogous with respect to the first index.
In systems which use processor cache or virtual memory, scanning an array is much faster if
successive elements are stored in consecutive positions in memory, rather than sparsely
scattered. Many algorithms that use multidimensional arrays will scan them in a predictable
order. A programmer (or a sophisticated compiler) may use this information to choose
between row- or column-major layout for each array. For example, when computing the
product A·B of two matrices, it would be best to have A stored in row-major order, and B in
column-major order.
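The point about scanning order can be seen in C, whose arrays are row-major; the sketch below (illustrative only, with invented names) sums a matrix twice, and the first loop nest touches consecutive addresses while the second jumps a whole row's worth of bytes on every step:

#define N 512

double sum_by_rows(double a[N][N])
{
    double s = 0.0;
    int i, j;
    for (i = 0; i < N; i++)          /* cache-friendly: inner loop walks one row */
        for (j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

double sum_by_columns(double a[N][N])
{
    double s = 0.0;
    int i, j;
    for (j = 0; j < N; j++)          /* same result, but each inner step strides N*sizeof(double) bytes */
        for (i = 0; i < N; i++)
            s += a[i][j];
    return s;
}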
Dynamic array
Static arrays have a size that is fixed when they are created and consequently do not allow
elements to be inserted or removed. However, by allocating a new array and copying the
contents of the old array to it, it is possible to effectively implement a dynamic version of an
array; see dynamic array. If this operation is done infrequently, insertions at the end of the
array require only amortized constant time. Some array data structures do not reallocate
storage, but do store a count of the number of elements of the array in use, called the count or
size. This effectively makes the array a dynamic array with a fixed maximum size or
capacity; Pascal strings are examples of this.
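A hedged sketch of the reallocate-and-copy idea in C follows (the names are invented for illustration); doubling the capacity whenever the array fills is what makes appending at the end run in amortized constant time:

#include <stdlib.h>

struct dynarray {
    int    *data;
    size_t  size;          /* number of elements in use (the "count") */
    size_t  capacity;      /* number of allocated slots */
};

/* append value, growing the storage by doubling when it is full */
int append(struct dynarray *d, int value)
{
    if (d->size == d->capacity) {
        size_t newcap = d->capacity ? 2 * d->capacity : 1;
        int *p = realloc(d->data, newcap * sizeof *p);
        if (p == NULL)
            return 0;      /* out of memory; the old array is untouched */
        d->data = p;
        d->capacity = newcap;
    }
    d->data[d->size++] = value;
    return 1;
}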
LISTS: A list is a collection of nodes kept in some pattern, i.e., the (k-1)-th node precedes the k-th node and the (k+1)-th node follows the k-th node. Lists can be classified into two types: linear and non-linear. Non-linear lists can be thought of as trees, graphs and sets. Linear lists can be represented either sequentially or linked. The sequential representation is of the form phrased in the first sentence of this paragraph. The linked representation is different from the sequential representation and will be discussed later on. In sequential representation, the lists are classified as STACKS, QUEUES and DEQUEs (Double Ended QUEUE, where insertions and deletions are made at both ends).
The various Operations: The various operations that we might want to perform on the
linear list (data structure) include, for example, the following:
1. Gain access to the k-th node of the linear list (data structure) to examine and/or to change the contents of its fields.
2. Insert a new node just before the k-th node.
3. Delete the k-th node.
4. Combine two or more linear lists into a single list.
5. Split a linear list into two or more lists.
6. Make a copy of a linear list.
7. Determine the number of nodes in a linear list.
8. Sort the nodes of the list into ascending/descending order based on certain fields of
the nodes.
9. Search the list for the existence of a node with a particular value in some field.
The simplest and most natural way to keep a linear list inside a computer
memory is to put the list items in sequential locations, one node after the other. We thus will
have
LOC(X[j+1]) = LOC(X[j]) + c
where c is the number of words per node. (Usually c = 1. When c > 1, it is sometimes more convenient to split a single list into c "parallel" lists, so that the k-th word of node X[j] is stored a fixed distance from the location of the first word of X[j].) In general,
LOC(X[j]) = L0 + cj

where L0 is the base address, the location of an arbitrarily assumed node X[0].
STACK: An ordered (linear) list in which all accesses (insertions and deletions) are made at one end only, called the top.
QUEUE: An ordered (linear) list in which all insertions take place at one end, called the rear, while all deletions take place at the other end, called the front.
DEQUE: An ordered (linear) list in which insertions and deletions are made at both ends.
Structure STACK(item)
declare CREATE( ) → stack
        ADD(item, stack) → stack
        DELETE(stack) → stack
        TOP(stack) → item
        ISEMTS(stack) → boolean;
for all S ∈ stack, i ∈ item let
        ISEMTS(CREATE) ::= true
        ISEMTS(ADD(i, S)) ::= false
        DELETE(CREATE) ::= error
        DELETE(ADD(i, S)) ::= S
        TOP(CREATE) ::= error
        TOP(ADD(i, S)) ::= i
end STACK.
Note:
procedure ADD(item, STACK, n, top)
// insert item into the STACK of maximum size n; top is the number of elements currently in the STACK //
if top ≥ n then call STACK_FULL
top ← top + 1
STACK(top) ← item
end ADD

procedure DELETE(item, STACK, top)
// removes the top element from the STACK and stores it in item unless STACK is empty //
if top ≤ 0 then call STACK_EMPTY
item ← STACK(top)
top ← top − 1
end DELETE.
Structure QUEUE(item)
declare CREATEQ( ) → queue
        ADDQ(item, queue) → queue
        DELETEQ(queue) → queue
        FRONTQ(queue) → item
        ISEMTQ(queue) → boolean;
for all Q ∈ queue, i ∈ item let
        ISEMTQ(CREATEQ) ::= true
        ISEMTQ(ADDQ(i, Q)) ::= false
        DELETEQ(CREATEQ( )) ::= error
        DELETEQ(ADDQ(i, Q)) ::= if ISEMTQ(Q) then CREATEQ
                                else ADDQ(i, DELETEQ(Q))
        FRONTQ(CREATEQ( )) ::= error
        FRONTQ(ADDQ(i, Q)) ::= if ISEMTQ(Q) then i else FRONTQ(Q)
end QUEUE
Note:
procedure DELETEQ(item, Q, front, rear)
// removes the front element of the QUEUE and stores it in item unless the QUEUE is empty //
if front = rear then call QUEUE_EMPTY
front ← front + 1
item ← Q(front)
end DELETEQ.
(A) Stack:
Definition of a stack (array implementation in C):

#include <stdio.h>
#define TRUE 1
#define FALSE 0
#define SIZE 100

struct STACK { int elt[SIZE]; int top; };   /* top = number of elements currently in the stack */

void CreateStack(struct STACK *S)
{ S->top = 0; }

int Push(struct STACK *S, int item)
{ if (S->top >= SIZE)                       /* stack full */
      { printf("STACK OVERFLOWS"); return FALSE; }
  S->elt[S->top] = item;
  S->top += 1;
  return TRUE; }

int Pop(struct STACK *S, int *item)
{ if (S->top <= 0)                          /* stack empty */
      { printf("STACK UNDERFLOWS"); return FALSE; }
  S->top--;
  *item = S->elt[S->top];
  return TRUE; }

int ShowTop(struct STACK *S, int *item)
{ if (S->top == 0)
      { printf("STACK UNDERFLOWS"); return FALSE; }
  *item = S->elt[S->top - 1];
  return TRUE; }
(B) Queue:
Definition of a queue (array implementation in C; front = 0 marks an empty queue):

#include <stdio.h>
#define SIZE 100

struct QUEUE { int elt[SIZE + 1]; int front, rear; };

void Insert(struct QUEUE *Q, int item)
{ if (Q->rear >= SIZE)                      /* queue full */
      printf("QUEUE OVERFLOWS");
  else
      { Q->rear = Q->rear + 1;
        Q->elt[Q->rear] = item;
        if (Q->front == 0) Q->front = 1; }  /* first element inserted */
  return; }

int Delete(struct QUEUE *Q)
{ int item;
  if (Q->front == 0)                        /* queue empty */
      { printf("QUEUE UNDERFLOWS"); return -1; }
  else
      { item = Q->elt[Q->front];
        if (Q->front == Q->rear)            /* last element removed */
            Q->front = Q->rear = 0;
        else
            Q->front += 1; }
  return item; }
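A short driver (hypothetical values, assuming the definitions above) shows the front-and-rear behaviour:

int main(void)
{
    struct QUEUE Q = { {0}, 0, 0 };   /* empty queue: front = rear = 0 */
    Insert(&Q, 10);
    Insert(&Q, 20);
    printf("%d\n", Delete(&Q));       /* prints 10: the front element leaves first */
    printf("%d\n", Delete(&Q));       /* prints 20 */
    return 0;
}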
LINKED LISTS:
Instead of keeping a linear list in sequential memory locations, we can make use of
a much more flexible scheme in which each node contains a link to the next node of the list
as shown in the diagram below:
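(Schematically, with • standing for a link field and / for the null link that terminates the list:)

Head → [ item 1 | • ]─→[ item 2 | • ]─→[ item 3 | • ]─→[ item 4 | / ]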
There are several obvious comparisons we can make between these two basic forms
of storage:
1) Linked allocation takes up additional memory space for the links. This can be the
dominating factor in some situations. However, we frequently find that the information in a
node does not take up a whole word anyway, so there is already space for a link field
present. But even more importantly, there is often an implicit gain in storage by the linked
approach, since tables can overlap, sharing common parts, and in many cases, sequential
allocation will not be as efficient as linked unless a rather large number of additional memory
locations are left vacant anyway.
2) It is easy to delete an item from within a linked list. For example, to delete item 3 we need only change the link associated with item 2. But with sequential allocation such a deletion generally implies moving a large part of the list up into different locations and repacking it after the deletion.
3) It is easy to insert an item into the midst of a list when the linked scheme is being used. To insert an item 5 into the list, we need change only two pointers (the previous item's link and the new item's link).
4) References to random parts of the list are much faster in the sequential case. To gain
access to the k-th node in the list, when k is a variable, takes a fixed time in the sequential
case, but it takes k iterations to march down to the right place in the linked case. Thus the
usefulness of linked memory is predicated on the fact that in the large majority of
applications we want to walk through lists sequentially, not randomly; if items in the middle
or at the bottom of the list are needed, we try to keep an additional link variable or list of link
variables pointing to the proper places.
5) The linked scheme lends itself immediately to more intricate structures than simple linear lists. We can have a variable number of lists of variable size; any node of the list may be a starting point for another list; the nodes may simultaneously be linked together in several orders corresponding to different lists; and so on.
6) The linked scheme makes it easier to join two lists together or to break one apart.
7) Simple operations, like proceeding sequentially through a list, are slightly faster for sequential lists than for linked lists on many computers.
Note: Let us have a discussion on linked STACKs and linked QUEUEs, though it is meaningless to impose access constraints on them when the addresses of the nodes are known.
Linked STACK:

#include <stdio.h>
#include <stdlib.h>

typedef struct node { int elt; struct node *next; } NODE, *STACK;

void CreateStack(STACK *S)
{ *S = NULL; }

STACK Push(STACK S, int item)
{ NODE *new = (NODE *) malloc(sizeof(NODE));
  if (new == NULL)                          /* no free node available */
      return NULL;
  new->elt = item;
  new->next = S;
  S = new;
  return S; }

NODE *Pop(STACK S)
{ NODE *temp;
  if (S == NULL)
      { printf("STACK UNDERFLOW"); return NULL; }
  temp = S;
  S = S->next;
  free(temp);
  return S; }

int ShowTop(STACK S)
{ if (S == NULL)
      { printf("STACK EMPTY"); return -1; }
  return S->elt; }
LINKED QUEUE (using the NODE type defined above):

struct queue { NODE *front, *rear; };

void Create(struct queue *Q)
{ Q->front = Q->rear = NULL; }

void Insert(struct queue *Q, int item)
{ NODE *new = (NODE *) malloc(sizeof(NODE));
  new->elt = item;
  new->next = NULL;
  if (Q->rear == NULL)                      /* empty queue */
      Q->front = Q->rear = new;
  else
      { Q->rear->next = new; Q->rear = new; }
  return; }

int Delete(struct queue *Q)
{ NODE *temp; int item;
  if (Q->front == NULL)
      { printf("QUEUE UNDERFLOW"); return -1; }
  temp = Q->front;
  item = temp->elt;
  Q->front = temp->next;
  if (Q->front == NULL) Q->rear = NULL;     /* queue became empty */
  free(temp);
  return item; }
Now let us discuss linked lists and the various operations that we can perform on them.
struct Node
{ int data;
  struct Node *next; };
Insert a node with value Elt at the head of the list:
1. new ← getnode(NODE)
2. data(new) ← Elt
3. next(new) ← Head
4. Head ← new
5. return Head
Insert a node with value Elt at the tail of the list:
1. new ← getnode(NODE)
2. data(new) ← Elt
3. next(new) ← NULL
4. if (Head = NULL)
   then
       Head ← new
       return Head
   else
       temp ← Head
5. while (next(temp) ≠ NULL) do
       temp ← next(temp)
   endwhile
6. next(temp) ← new
7. return Head
Count the number of nodes in the list:
1. count ← 0
2. temp ← Head
3. while (temp ≠ NULL) do
       count ← count + 1
       temp ← next(temp)
   endwhile
4. return count
Traverse and print the list:
1. temp ← Head
2. while (temp ≠ NULL) do
       print data(temp)
       temp ← next(temp)
   endwhile
3. return
Insert a node with value Elt at the k-th position:
1. new ← getnode(NODE)
2. data(new) ← Elt
3. // If Head is empty //
   if (Head = NULL)
   then
       next(new) ← NULL
       Head ← new
       return Head
4. // Insert at the front //
   if (k = 1)
   then
       next(new) ← Head
       Head ← new
       return Head
5. temp ← Head
6. for i ← 1 to k − 2 do
       temp ← next(temp)
   endfor
7. next(new) ← next(temp)
8. next(temp) ← new
9. return Head
Copy the list L into a new list L1:
1. if (L = NULL) then return NULL
2. temp ← L
3. new ← getnode(NODE)
4. data(new) ← data(temp)
5. L1 ← new
6. endL1 ← new
7. while (next(temp) ≠ NULL) do
       temp ← next(temp)
       new ← getnode(NODE)
       data(new) ← data(temp)
       next(endL1) ← new
       endL1 ← new
   endwhile
8. next(endL1) ← NULL
9. return L1
Concatenate(L1, L2):
1. if (L1 = NULL) then return L2
2. if (L2 = NULL) then return L1
3. temp ← L1
4. while (next(temp) ≠ NULL) do
       temp ← next(temp)
   endwhile
5. next(temp) ← L2
6. return L1
Linked lists can be created with the pointer of the last node pointing to the head node of the list; thereby we get a circularly linked list (simply, a circular list), which is of immense use in some applications where we may need to traverse back to the front.
Head → [ 10 | • ]→[ 20 | • ]→[ 30 | • ]→[ 40 | • ]── back to the first node
For even greater flexibility in the manipulation of linear lists, we can include two links in each node, pointing to the items on either side of that node. Here prev (Leftlink) and next (Rightlink) are the two pointers to the left and right neighbours of a node. Each node of the list includes the two links along with the data part in it. Diagrammatically, a doubly linked list may be viewed as follows:

Head → [ / | A | • ]⇄[ • | B | • ]⇄[ • | C | • ]⇄[ • | D | • ]⇄[ • | E | / ]
struct DNode
{ int data;
  struct DNode *prev, *next; };
Insertion in DLL (insert a node with value 25 after the node M):

Before:  / 10 ⇄ 20 ⇄ 30 /        (M is the node holding 20)

1. new ← getnode(NODE)
2. data(new) ← 25
3. prev(new) ← M
4. next(new) ← next(M)
5. prev(next(M)) ← new
6. next(M) ← new
7. return L1

After:   / 10 ⇄ 20 ⇄ 25 ⇄ 30 /
Deletion in DLL (delete the node M, e.g. the node 20 from / 10 ⇄ 20 ⇄ 30 /):
1. prev(next(M)) ← prev(M)
2. next(prev(M)) ← next(M)
3. free(M)        // M is the freed node //
4. return L1      // the list is now / 10 ⇄ 30 / //
Example
A term node has three fields: Coefficient, Power and Next. The term 10X^5 is stored as:

[ Coefficient | Power | Next ]  =  [ 10 | 5 | • ]
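In C, such a term node could be sketched as follows (the field names mirror the Coefficient / Power / Next fields of the diagram):

struct term {
    int coeff;               /* Coefficient, e.g. 10  */
    int power;               /* Power, e.g. 5         */
    struct term *next;       /* link to the next term */
};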
Addition of two polynomials P1 and P2 (term lists ordered by decreasing power):
1. (a) temp1 ← P1
   (b) temp2 ← P2
   (c) result ← NULL
2. while (temp1 ≠ NULL and temp2 ≠ NULL) do
   (a) (i) A1 ← Power(temp1)
       (ii) A2 ← Power(temp2)
       (iii) B1 ← Coeff(temp1)
       (iv) B2 ← Coeff(temp2)
   (b) if (A1 = A2)
       then
           attach(B1 + B2, A1, result)
           temp1 ← next(temp1)
           temp2 ← next(temp2)
       else
           if (A1 > A2)
           then
               attach(B1, A1, result)
               temp1 ← next(temp1)
           else
               attach(B2, A2, result)
               temp2 ← next(temp2)
   endwhile
3. // copy the remaining terms of the longer polynomial //
4. if (temp1 ≠ NULL)
   then
       while (temp1 ≠ NULL) do
           attach(Coeff(temp1), Power(temp1), result)
           temp1 ← next(temp1)
       endwhile
   else
       if (temp2 ≠ NULL)
           while (temp2 ≠ NULL) do
               attach(Coeff(temp2), Power(temp2), result)
               temp2 ← next(temp2)
           endwhile
5. return result.
Here attach(c, p, result) appends a new term node with coefficient c and power p to result.
Conversion of the infix expression A+(B-C/D)-E to prefix, working outward from the innermost subexpression:

Subexpression     Operator   Operands        Prefix form
C/D               /          C, D            /CD
B-C/D             -          B, /CD          -B/CD
A+(B-C/D)         +          A, -B/CD        +A-B/CD
A+(B-C/D)-E       -          +A-B/CD, E      -+A-B/CDE
Evaluation of a postfix expression exp:
1. CreateStack(S)
2. X ← getnextchar(exp)
3. while (X ≠ end of string) do
   (a) if Isoperand(X) then (i) Push(S, X)
   (b) else (i) op2 ← Pop(S)  (ii) op1 ← Pop(S)  (iii) Push(S, op1 X op2)
   (c) X ← getnextchar(exp)
   endwhile
4. return Pop(S)    // the value of the expression //
Trace of evaluating ab*cd/+a- with a = 1, b = 2, c = 4, d = 2, i.e. the postfix string 12*42/+1-:

INPUT (remaining)    STACK (top at right)
12*42/+1-            (empty)          ← Initial
2*42/+1-             1
*42/+1-              1 2
42/+1-               2
2/+1-                2 4
/+1-                 2 4 2
+1-                  2 2
1-                   4
-                    4 1
(empty)              3                ← Final

Result: 3
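A compact C sketch of the same evaluation follows, restricted for simplicity to single-digit operands (that restriction is an assumption of this sketch, not part of the algorithm):

#include <stdio.h>
#include <ctype.h>

int eval_postfix(const char *exp)
{
    int stack[64], top = 0;          /* small fixed stack, enough for this sketch */
    int op1, op2;
    char x;
    while ((x = *exp++) != '\0') {
        if (isdigit((unsigned char)x))
            stack[top++] = x - '0';  /* push the operand's value */
        else {
            op2 = stack[--top];      /* right operand is popped first */
            op1 = stack[--top];
            switch (x) {
            case '+': stack[top++] = op1 + op2; break;
            case '-': stack[top++] = op1 - op2; break;
            case '*': stack[top++] = op1 * op2; break;
            case '/': stack[top++] = op1 / op2; break;
            }
        }
    }
    return stack[0];                 /* the final result */
}

int main(void)
{
    printf("%d\n", eval_postfix("12*42/+1-"));   /* prints 3, matching the trace */
    return 0;
}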
SPARSE MATRIX:
In numerical analysis, a sparse matrix is a matrix in which most of the elements are zero. By
contrast, if most of the elements are nonzero, then the matrix is considered dense. The
fraction of zero elements (non-zero elements) in a matrix is called the sparsity (density).
Conceptually, sparsity corresponds to systems which are loosely coupled. Consider a line of
balls connected by springs from one to the next: this is a sparse system as only adjacent balls
are coupled. By contrast, if the same line of balls had springs connecting each ball to all other
balls, the system would correspond to a dense matrix. The concept of sparsity is useful in
combinatorics and application areas such as network theory, which have a low density of
significant data or connections.
Large sparse matrices often appear in scientific or engineering applications when solving
partial differential equations.
When storing and manipulating sparse matrices on a computer, it is beneficial and often
necessary to use specialized algorithms and data structures that take advantage of the sparse
structure of the matrix. Operations using standard dense-matrix structures and algorithms are
slow and inefficient when applied to large sparse matrices as processing and memory are
wasted on the zeroes. Sparse data is by nature more easily compressed and thus requires significantly less storage. Some very large sparse matrices are infeasible to manipulate using standard dense-matrix algorithms.
A matrix is typically stored as a two-dimensional array. Each entry in the array represents an
element ai,j of the matrix and is accessed by the two indices i and j. Conventionally, i is the
row index, numbered from top to bottom, and j is the column index, numbered from left to
right. For an m × n matrix, the amount of memory required to store the matrix in this format
is proportional to m × n (disregarding the fact that the dimensions of the matrix also need to
be stored).
In the case of a sparse matrix, substantial memory requirement reductions can be realized by
storing only the non-zero entries. Depending on the number and distribution of the non-zero
entries, different data structures can be used and yield huge savings in memory when
compared to the basic approach. The caveat is that accessing the individual elements becomes more complex and additional structures are needed to be able to recover the original matrix unambiguously.
Storage formats for sparse matrices can be divided into two groups:
Those that support efficient modification, such as DOK (Dictionary of keys), LIL (List of lists), or COO (Coordinate list). These are typically used to construct the matrices.
Those that support efficient access and matrix operations, such as CSR (Compressed
Sparse Row) or CSC (Compressed Sparse Column).
DOK consists of a dictionary that maps (row, column)-pairs to the value of the elements.
Elements that are missing from the dictionary are taken to be zero. The format is good for
incrementally constructing a sparse matrix in random order, but poor for iterating over non-
zero values in lexicographical order. One typically constructs a matrix in this format and then
converts to another more efficient format for processing.
LIL stores one list per row, with each entry containing the column index and the value.
Typically, these entries are kept sorted by column index for faster lookup. This is another
format good for incremental matrix construction.
COO stores a list of (row, column, value) tuples. Ideally, the entries are sorted (by row
index, then column index) to improve random access times. This is another format which is
good for incremental matrix construction.
Yale
The Yale sparse matrix format stores an initial sparse m × n matrix, M, in row form using
three (one-dimensional) arrays (A, IA, JA). Let NNZ denote the number of nonzero entries
in M. (Note that unlike in ordinary mathematics, zero-based indices shall be used here.)
The array A is of length NNZ and holds all the nonzero entries of M in left-to-right
top-to-bottom ("row-major") order.
The array IA is of length m + 1 and contains the index in A of the first element in
each row, followed by the total number of nonzero elements NNZ. IA[i] contains
the index in A of the first nonzero element of row i. Row i of the original matrix
extends from A[IA[i]] to A[IA[i + 1] − 1], i.e. from the start of one row to the last
index before the start of the next. The last entry, IA[m], must be the number of
elements in A.
The third array, JA, contains the column index in M of each element of A and hence
is of length NNZ as well.
For example, consider the 4 × 4 matrix

0 0 0 0
5 8 0 0
0 0 3 0
0 6 0 0

with four nonzero elements (NNZ = 4). It is represented as

A  = [ 5 8 3 6 ]
IA = [ 0 0 2 3 4 ]
JA = [ 0 1 2 1 ]
So, in array JA, the element "5" from A has column index 0, "8" and "6" have index 1, and
element "3" has index 2.
In this case the Yale representation contains 13 entries (4 + 5 + 4), compared to 16 in the original matrix. The Yale format saves on memory only when NNZ < (m (n − 1) − 1) / 2. As another example, consider the 4 × 6 matrix

10 20  0  0  0  0
 0 30  0 40  0  0
 0  0 50 60 70  0
 0  0  0  0  0 80

with eight nonzero elements (NNZ = 8). It is represented as
A = [ 10 20 30 40 50 60 70 80 ]
IA = [ 0 2 4 7 8 ]
JA = [ 0 1 1 3 2 3 4 5 ]
IA splits the array A into rows: (10, 20) (30, 40) (50, 60, 70) (80);
Note that in this format, the first value of IA is always zero and the last is always NNZ, so
they are in some sense redundant. However, they can make accessing and traversing the array
easier for the programmer.
CSR is effectively identical to the Yale Sparse Matrix format, except that the column array is
normally stored ahead of the row index array. I.e. CSR is (val, col_ind, row_ptr), where
val is an array of the (left-to-right, then top-to-bottom) non-zero values of the matrix;
col_ind is the column indices corresponding to the values; and, row_ptr is the list of value
indexes where each row starts. The name is based on the fact that row index information is
compressed relative to the COO format. One typically uses another format (LIL, DOK, COO)
for construction. This format is efficient for arithmetic operations, row slicing, and matrix-
vector products. See scipy.sparse.csr_matrix.
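A small C sketch of the matrix-vector product y = M·x in CSR form follows, using the (val, col_ind, row_ptr) arrays named above; the function name is invented for illustration:

void csr_matvec(int m, const double *val, const int *col_ind,
                const int *row_ptr, const double *x, double *y)
{
    int i, k;
    for (i = 0; i < m; i++) {
        y[i] = 0.0;
        /* row i occupies val[row_ptr[i]] .. val[row_ptr[i+1] - 1] */
        for (k = row_ptr[i]; k < row_ptr[i + 1]; k++)
            y[i] += val[k] * x[col_ind[k]];
    }
}

Only the NNZ stored entries are touched, which is where the speed advantage over a dense loop comes from.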
CSC is similar to CSR except that values are read first by column, a row index is stored for
each value, and column pointers are stored. I.e. CSC is (val, row_ind, col_ptr), where val
is an array of the (top-to-bottom, then left-to-right) non-zero values of the matrix; row_ind
is the row indices corresponding to the values; and, col_ptr is the list of val indexes where
each column starts. The name is based on the fact that column index information is
compressed relative to the COO format. One typically uses another format (LIL, DOK, COO)
for construction. This format is efficient for arithmetic operations, column slicing, and
matrix-vector products. See scipy.sparse.csc_matrix. This is the traditional format for
specifying a sparse matrix in MATLAB (via the sparse function).
Special structure
Band Matrix:
An important special type of sparse matrix is the band matrix, defined as follows. The lower bandwidth of a matrix A is the smallest number p such that the entry ai,j vanishes whenever i > j + p. Similarly, the upper bandwidth is the smallest p such that ai,j = 0 whenever i < j − p (Golub & Van Loan 1996). For example, a tridiagonal matrix has lower bandwidth 1 and upper bandwidth 1. As another example, a matrix whose nonzero entries all lie within three diagonals above and below the main diagonal has lower and upper bandwidth both equal to 3.
Matrices with reasonably small upper and lower bandwidth are known as band matrices and
often lend themselves to simpler algorithms than general sparse matrices; or one can
sometimes apply dense matrix algorithms and gain efficiency simply by looping over a
reduced number of indices.
By rearranging the rows and columns of a matrix A it may be possible to obtain a matrix A′
with a lower bandwidth. A number of algorithms are designed for bandwidth minimization.
Diagonal
A very efficient structure for an extreme case of band matrices, the diagonal matrix, is to
store just the entries in the main diagonal as a one-dimensional array, so a diagonal n × n
matrix requires only n entries.
Symmetric
A symmetric sparse matrix arises as the adjacency matrix of an undirected graph; it can be
stored efficiently as an adjacency list.
Both iterative and direct methods exist for solving sparse systems. Iterative methods, such as the conjugate gradient method and GMRES, utilize fast computations of matrix-vector products A·x, where the matrix A is sparse. The use of preconditioners can significantly accelerate the convergence of such iterative methods.
Questions:
2 Marks:
5 Marks:
1) State the operations that we can perform over any linear list.
(a) Singly Linked List, (b) Doubly Linked List, (c) Circularly Linked List
8) If an array of 10 32-bit integer values, with indices 0 through 9, is given, then calculate the memory addresses of the integer values stored, with the starting address (i.e., the base address of the array) being 2000.
8 Marks:
1) Explain the address calculation of the elements in an array of n-dimension for the different
indexing schemes.
(a) 10X^8 − 3X^6 + 2X^5 − 9X + 10
(b) 15X^4Y + 4X^3Y^2 − 7XY + 6
4) What are the different formats by which a sparse matrix can be stored? Illustrate.
16 Marks:
8) Illustrate the division of one polynomial by another using singly linked list.