1 (a)
Define Latency and Bandwidth of memory. Briefly explain different ways to
minimize latency and to increase the bandwidth of memory.
7 M
1 (b)
Briefly explain different classifications of parallel computers.
7 M
2 (a)
Briefly explain static and dynamic interconnection networks for parallel
computers.
7 M
2 (b)
Define the following terms and explain their importance in parallel algorithm
design:
1. Decomposition.
2. Concurrency.
3. Granularity.
7 M
2 (c)
Briefly explain the usefulness of task dependency and task interaction graphs in
parallel algorithm design. Draw the task interaction graph for sparse matrix-
vector multiplication.
7 M
3 (a)
Briefly explain the following decomposition techniques used in parallel algorithm
design:
1. Data decomposition
2. Exploratory decomposition
3. Speculative decomposition.
7 M
3 (b)
Briefly explain one-to-all broadcast and all-to-one reduction on an eight-node
hypercube. Also find the cost of communication for one-to-all broadcast on an
eight-node hypercube.
7 M
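A hedged worked example for the cost part, using the standard startup/per-word model (startup time t_s, per-word transfer time t_w, message size m words, single-port links): one-to-all broadcast on a p-node hypercube takes \log p steps, each forwarding one m-word message along a new dimension, so

    T_{one-to-all} = (t_s + t_w m) \log p,

and for the eight-node hypercube (p = 8, \log p = 3) this is T = 3 (t_s + t_w m). All-to-one reduction traverses the same pattern in reverse and has the same communication cost, plus the local combine operations.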
3 (c)
Briefly explain all-to-all personalized communication and its applications. Briefly
explain an optimal algorithm for all-to-all personalized communication on an
eight-node hypercube.
7 M
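A hedged cost sketch under the same t_s/t_w model: the optimal hypercube algorithm runs p - 1 exchange steps, pairing node i with node (i XOR j) in step j and routing each m-word message along its E-cube path so that no two messages contend for a link, giving

    T_{all-to-all personalized} = (t_s + t_w m)(p - 1),

which for the eight-node hypercube is 7 (t_s + t_w m). The per-word term matches the traffic-based lower bound on a hypercube, which is why this procedure is regarded as optimal.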
3 (d)
Briefly explain loop splitting, self-scheduling and chunk scheduling for task
mapping to achieve load balancing among processes.
7 M
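A minimal sketch of self-scheduling in C with Pthreads, assuming a hypothetical per-iteration routine do_iteration() and iteration count N; loop splitting would instead statically give thread t iterations t, t+P, t+2P, ..., and chunk scheduling would claim a fixed-size block of iterations per fetch rather than one.

#include <pthread.h>
#include <stdio.h>

#define N 1000          /* total loop iterations (assumed) */
#define P 4             /* number of worker threads (assumed) */

static long next_iter = 0;                     /* shared work counter */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* hypothetical per-iteration work */
static void do_iteration(long i) { (void)i; }

/* self-scheduling: each thread repeatedly claims the next unclaimed
   iteration; claiming a block of iterations at a time instead of one
   turns this into chunk scheduling. */
static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        long i = next_iter++;                  /* claim one iteration */
        pthread_mutex_unlock(&lock);
        if (i >= N) break;
        do_iteration(i);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[P];
    for (int t = 0; t < P; t++) pthread_create(&tid[t], NULL, worker, NULL);
    for (int t = 0; t < P; t++) pthread_join(tid[t], NULL);
    return 0;
}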
4 (a)
Enlist various performance metrics for parallel systems. Explain Speedup,
Efficiency and Cost in brief.
7 M
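For reference, the usual definitions, with serial runtime T_s, parallel runtime T_p and p processing elements:

    Speedup    S = T_s / T_p
    Efficiency E = S / p
    Cost       C = p T_p   (the system is cost-optimal when C = \Theta(T_s))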
4 (b)
Define the Isoefficiency function and derive its equation.
7 M
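A hedged outline of the derivation, writing W for the problem size (serial work at unit cost per operation) and T_o(W, p) for the total overhead over all p processing elements:

    T_p = (W + T_o(W, p)) / p
    S   = W / T_p = W p / (W + T_o(W, p))
    E   = S / p = W / (W + T_o(W, p)) = 1 / (1 + T_o(W, p) / W)

Keeping E fixed requires T_o(W, p) / W to stay constant, i.e.

    W = K T_o(W, p),   where K = E / (1 - E);

solving this relation for W as a function of p yields the isoefficiency function.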
4 (c)
Briefly explain the relation between Speedup and Efficiency as functions of the
number of processing elements. Derive the equation which relates speedup and
efficiency to the number of processing elements.
7 M
4 (d)
Briefly explain four different implementations of Send and Receive operations.
Briefly explain the send and receive functions of MPI.
7 M
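A minimal C sketch of the blocking MPI send/receive pair (the non-blocking counterparts MPI_Isend/MPI_Irecv return immediately and are completed later with MPI_Wait or MPI_Test):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* blocking send: buf, count, datatype, dest, tag, communicator */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* blocking receive: buf, count, datatype, source, tag, comm, status */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}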
5 (a)
Explain the following MPI routines with their arguments:
MPI_Gather.
MPI_Scatter.
MPI_Reduce.
7 M
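A minimal C sketch showing the three routines together (assuming p processes and root rank 0); each call lists its arguments in the order the MPI standard defines them:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, p;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    int *all = NULL;
    if (rank == 0) {                 /* root prepares p values to distribute */
        all = malloc(p * sizeof(int));
        for (int i = 0; i < p; i++) all[i] = i + 1;
    }

    int mine, doubled, sum, *gathered = NULL;

    /* MPI_Scatter(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm) */
    MPI_Scatter(all, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);

    doubled = 2 * mine;              /* some local work on the scattered piece */

    /* MPI_Gather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm) */
    if (rank == 0) gathered = malloc(p * sizeof(int));
    MPI_Gather(&doubled, 1, MPI_INT, gathered, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* MPI_Reduce(sendbuf, recvbuf, count, datatype, op, root, comm) */
    MPI_Reduce(&doubled, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) printf("sum of doubled values = %d\n", sum);

    free(all); free(gathered);
    MPI_Finalize();
    return 0;
}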
5 (b)
Briefly explain Cannon's algorithm for matrix-matrix multiplication. What are
the advantages of this algorithm over other parallel algorithms for matrix
multiplication?
7 M
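A condensed sketch of Cannon's algorithm in C with MPI, assuming p = q × q processes arranged in a periodic 2-D grid, each process holding one b × b block of A, B and C in row-major order; matmul_add is a plain local kernel supplied only to keep the sketch self-contained.

#include <mpi.h>
#include <math.h>

/* local kernel: C += A * B for b x b row-major blocks */
static void matmul_add(const double *A, const double *B, double *C, int b)
{
    for (int i = 0; i < b; i++)
        for (int k = 0; k < b; k++)
            for (int j = 0; j < b; j++)
                C[i*b + j] += A[i*b + k] * B[k*b + j];
}

void cannon(double *A, double *B, double *C, int b, MPI_Comm comm)
{
    int p, rank, dims[2], periods[2] = {1, 1}, coords[2];
    int left, right, up, down, src, dst;
    MPI_Comm grid;

    MPI_Comm_size(comm, &p);
    int q = (int)sqrt((double)p);            /* q x q process grid */
    dims[0] = dims[1] = q;
    MPI_Cart_create(comm, 2, dims, periods, 0, &grid);
    MPI_Comm_rank(grid, &rank);
    MPI_Cart_coords(grid, rank, 2, coords);

    int n = b * b;                           /* words per block */

    /* initial alignment: shift A left by the row index, B up by the column index */
    MPI_Cart_shift(grid, 1, -coords[0], &src, &dst);
    MPI_Sendrecv_replace(A, n, MPI_DOUBLE, dst, 0, src, 0, grid, MPI_STATUS_IGNORE);
    MPI_Cart_shift(grid, 0, -coords[1], &src, &dst);
    MPI_Sendrecv_replace(B, n, MPI_DOUBLE, dst, 0, src, 0, grid, MPI_STATUS_IGNORE);

    MPI_Cart_shift(grid, 1, -1, &right, &left);   /* neighbours for shifting A */
    MPI_Cart_shift(grid, 0, -1, &down, &up);      /* neighbours for shifting B */

    for (int step = 0; step < q; step++) {
        matmul_add(A, B, C, b);              /* local block multiply */
        /* shift A one step left, B one step up */
        MPI_Sendrecv_replace(A, n, MPI_DOUBLE, left, 1, right, 1, grid, MPI_STATUS_IGNORE);
        MPI_Sendrecv_replace(B, n, MPI_DOUBLE, up, 2, down, 2, grid, MPI_STATUS_IGNORE);
    }
    MPI_Comm_free(&grid);
}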
5 (c)
Briefly explain the different synchronization primitives available in Pthreads.
Explain the three types of mutex (normal, recursive and error-check) in the
context of Pthreads.
7 M
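A minimal sketch in C showing how the mutex type is selected through a mutex attribute object; the behaviour noted in the comments is what POSIX specifies for each kind:

#define _XOPEN_SOURCE 700
#include <pthread.h>
#include <stdio.h>

int main(void)
{
    pthread_mutexattr_t attr;
    pthread_mutex_t m;

    pthread_mutexattr_init(&attr);

    /* pick one of the three mutex kinds:
       PTHREAD_MUTEX_NORMAL     - relocking by the owner deadlocks
       PTHREAD_MUTEX_RECURSIVE  - owner may relock; needs matching unlocks
       PTHREAD_MUTEX_ERRORCHECK - relock or foreign unlock returns an error */
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&m, &attr);

    pthread_mutex_lock(&m);
    int rc = pthread_mutex_lock(&m);     /* second lock by the same thread */
    printf("second lock returned %d (EDEADLK for an error-check mutex)\n", rc);

    pthread_mutex_unlock(&m);
    pthread_mutex_destroy(&m);
    pthread_mutexattr_destroy(&attr);
    return 0;
}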
5 (d)
Briefly explain a parallel algorithm for Quicksort, with an example, for a
shared-address-space parallel computer.
7 M
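One possible shared-address-space realization, sketched in C with OpenMP tasks (the cutoff value and the choice of OpenMP rather than raw Pthreads are assumptions made here for brevity): each recursive call partitions its sub-array serially and then sorts the two halves as independent tasks.

#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

/* serial partition around the last element as pivot */
static int partition(int *a, int lo, int hi)
{
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; j++)
        if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;
    return i;
}

static void quicksort(int *a, int lo, int hi)
{
    if (lo >= hi) return;
    int p = partition(a, lo, hi);
    /* spawn the two halves as tasks; small ranges run immediately in the
       encountering thread to avoid task overhead (cutoff of 1000 is arbitrary) */
    #pragma omp task shared(a) if (p - 1 - lo > 1000)
    quicksort(a, lo, p - 1);
    #pragma omp task shared(a) if (hi - (p + 1) > 1000)
    quicksort(a, p + 1, hi);
    #pragma omp taskwait
}

int main(void)
{
    enum { N = 100000 };
    static int a[N];
    for (int i = 0; i < N; i++) a[i] = rand() % 10000;

    #pragma omp parallel
    #pragma omp single            /* one thread starts the recursion */
    quicksort(a, 0, N - 1);

    printf("a[0]=%d a[N-1]=%d\n", a[0], a[N - 1]);
    return 0;
}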