Cs Important Questions by Ujjwal
Ans- Representation
1. Sign : A single bit that indicates whether the number is positive (0) or negative (1).
2. Exponent : Encodes the magnitude (scale) of the number. The exponent is stored in a biased format, which means a fixed value
(the bias) is added to the actual exponent to obtain the stored exponent, and subtracted again when the stored exponent is
interpreted. This allows both positive and negative exponents to be represented.
3. Mantissa (or significand) : Represents the significant digits of the number. The actual value represented is 1.mantissa for
normalized numbers, where the leading 1 (before the binary point) is implicit and not stored.
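Below is a minimal sketch (not part of the original notes) that unpacks the three fields of a 32-bit IEEE 754 value so the sign, biased exponent and mantissa described above can be inspected directly; the helper name float32_fields is made up for illustration.
```python
import struct

def float32_fields(x):
    # reinterpret the float's 32-bit pattern as an unsigned integer
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = bits >> 31                  # 1 bit
    exponent = (bits >> 23) & 0xFF     # 8 bits, stored with a bias of 127
    mantissa = bits & 0x7FFFFF         # 23 bits, implicit leading 1 when normalized
    return sign, exponent - 127, mantissa

print(float32_fields(6.5))   # (0, 2, 5242880): 6.5 = +1.625 * 2**2
```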
Usage in Computing
1. Arithmetic operations : Floating-point arithmetic operations like addition, subtraction, multiplication, and division are
performed using specialized hardware units in the CPU or GPU. These operations are more complex and slower than integer
operations due to the need to align exponents and normalize results.
2. Scientific computing : In fields like physics, engineering, and finance, where calculations involve very large or very small
numbers, floating point arithmetic is essential for maintaining precision over a wide range of values.
3. Graphics and gaming : Floating point calculations are crucial for rendering graphics, where precise calculations of light,
shadow, and textures require handling fractional values efficiently.
4. Machine learning and AI : Floating point numbers, especially in formats like half-precision (16-bit) or mixed precision, are
used in neural network computations to balance between computational efficiency and the precision required for learning.
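The rounding behaviour behind points 1 and 4 can be seen directly in Python; this is an illustrative sketch, not part of the notes, and the helper round_trip_half is a made-up name. It shows that 0.1 and 0.2 have no exact binary representation, and how little precision a 16-bit (half precision) value keeps.
```python
import struct

print(0.1 + 0.2)                 # 0.30000000000000004 -- results are rounded
print(0.1 + 0.2 == 0.3)          # False; compare with a tolerance instead

def round_trip_half(x):
    # pack as IEEE 754 half precision ('e' format), then read it back
    return struct.unpack('<e', struct.pack('<e', x))[0]

print(round_trip_half(3.14159))  # 3.140625 -- only ~3 decimal digits survive
print(round_trip_half(0.1))      # 0.0999755859375
```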
3. Write the symbol, Boolean equation and truth table for the AND, OR, NOT, NAND, NOR, X-OR and X-NOR gates.
Ans- The Boolean equations are: AND Y = A.B; OR Y = A + B; NOT Y = A'; NAND Y = (A.B)'; NOR Y = (A + B)'; X-OR Y = A'B + AB'; X-NOR Y = AB + A'B'. The corresponding symbols and truth tables follow directly from these equations (the truth tables are printed by the sketch below).
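A small sketch (not from the original answer) that prints the truth table of each two-input gate from its Boolean equation; NOT is handled separately as the only single-input gate.
```python
gates = {
    "AND  (Y = A.B)":      lambda a, b: a & b,
    "OR   (Y = A+B)":      lambda a, b: a | b,
    "NAND (Y = (A.B)')":   lambda a, b: 1 - (a & b),
    "NOR  (Y = (A+B)')":   lambda a, b: 1 - (a | b),
    "XOR  (Y = A'B+AB')":  lambda a, b: a ^ b,
    "XNOR (Y = AB+A'B')":  lambda a, b: 1 - (a ^ b),
}

for name, fn in gates.items():
    rows = [f"{a}{b}->{fn(a, b)}" for a in (0, 1) for b in (0, 1)]
    print(name, "  ".join(rows))

print("NOT  (Y = A')", "  ".join(f"{a}->{1 - a}" for a in (0, 1)))
```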
4. Explain the Sum of Products (SOP) and Product of Sums (POS) forms.
Ans- The Sum of Products (SOP) Form
In the sum-of-products form of representation, a product term is the logical AND of different input variables, where each
variable may appear in its true form or in its complemented form.
Example:
A.B, A.B̅.C (examples of product terms)
In SOP, the sum refers to the logical OR operation. In this form of expression we therefore perform a logical OR of the
different product terms, which is why it is known as the sum-of-products form.
Example:
F = A + B̅.C + A.C
As we can see in the above example, the product terms (A, B̅.C, A.C), which are created by ANDing input variables, are
summed (ORed) with each other; a sketch that evaluates such an expression is given below.
The Product of Sums (POS) Form
In the product-of-sums form, a sum term is the logical OR of input variables in true or complemented form, and the overall
expression is the logical AND of these sum terms, for example F = (A + B).(A + C̅).
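A minimal sketch (not from the notes) that evaluates the SOP expression F = A + B̅.C + A.C for every input combination, producing its truth table.
```python
def sop(a, b, c):
    # OR (sum) of AND (product) terms; "not b" is the complemented literal B'
    return a or ((not b) and c) or (a and c)

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, "->", int(sop(a, b, c)))
```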
[Figure: block diagram and circuit diagram]
Full Adder
A full adder is developed to overcome the drawback of the half adder circuit. It can add two one-bit numbers A and B
together with a carry input C. The full adder is a three-input, two-output combinational circuit.
[Figure: full adder block diagram and circuit diagram]
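A behavioural sketch of the full adder (not part of the notes), using the standard equations Sum = A XOR B XOR Cin and Cout = A.B + (A XOR B).Cin, and printing the full truth table.
```python
def full_adder(a, b, cin):
    s = a ^ b ^ cin                        # sum output
    cout = (a & b) | ((a ^ b) & cin)       # carry output
    return s, cout

for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            print(a, b, cin, "->", full_adder(a, b, cin))
```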
7. Explain the construction, working and types of a multiplexer.
Ans-A multiplexer, often abbreviated as MUX, is a digital device that selects one of the several input signals and
forwards the selected input into a single line. The selection of the input signal is controlled by a set of selection lines
(also known as control lines). Multiplexers are widely used in communication systems, data routing, and various
digital circuits for efficient data management and signal routing.
Construction and Working Principle
A multiplexer's construction involves a set of inputs, a set of control or selection lines, and a single output.
Depending on the number of inputs, a multiplexer is described as 2^n-to-1, where n is the number of control lines
needed to select among the inputs. For example, a 4-to-1 MUX has 4 input lines and requires 2 control lines to
select among these 4 inputs.
The core of a multiplexer circuit is a set of AND gates, where each gate corresponds to one input line. The output of
these AND gates is then fed into an OR gate, whose output is the output of the multiplexer. Each AND gate is
controlled by the selection lines such that only one AND gate is enabled at a time, allowing only the selected input to
pass through to the output.
Types of Multiplexers
1. 2-to-1 Multiplexer : The simplest form, selecting between two inputs with a single control line.
2. 4-to-1 Multiplexer : Has four input lines and two control lines to select among the inputs.
3. 8-to-1 Multiplexer : Features eight inputs and requires three control lines for selection.
4. 16-to-1 Multiplexer : Comes with sixteen inputs and needs four control lines.
These types can be expanded further as needed, with the number of control lines increasing with the number of
inputs: a multiplexer with n control lines can select among 2^n inputs.
Working
The operation of a multiplexer is governed by its selection lines. The binary value presented to the selection lines
determines which input line is connected to the output. For instance, in a 4-to-1 MUX with selection lines S1 and S0,
the combination S1S0 = 00 selects I0, 01 selects I1, 10 selects I2, and 11 selects I3.
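A behavioural sketch of a 4-to-1 MUX (not from the notes): the two select lines form a 2-bit index that picks one of the four inputs, mirroring the AND-OR gate structure described above.
```python
def mux4to1(i0, i1, i2, i3, s1, s0):
    inputs = [i0, i1, i2, i3]
    return inputs[(s1 << 1) | s0]   # S1S0 = 00->I0, 01->I1, 10->I2, 11->I3

print(mux4to1(1, 0, 1, 0, s1=1, s0=0))  # selects I2, prints 1
```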
In a flip flop, when the power is switched ON, the state of the circuit is uncertain: it may come up in the set (Q = 1)
or the reset (Q = 0) state. In many applications it is desirable to assign a known initial state to the flip flop, that is,
to initially set or reset it. This is accomplished with the help of the preset (PR) and clear (CLR) inputs.
J-K Flip Flop:
The diagram of the J-K flip flop represents the basic structure of the flip flop, which consists of the J and K inputs
together with Clock (CLK), Clear (CLR) and Preset (PR).
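A behavioural sketch of a clocked J-K flip flop (not from the notes); PR and CLR are modelled here simply as set/clear methods, which is an assumption for illustration.
```python
class JKFlipFlop:
    def __init__(self):
        self.q = 0                  # clearing gives the circuit a known initial state

    def preset(self):  self.q = 1   # PR forces Q = 1
    def clear(self):   self.q = 0   # CLR forces Q = 0

    def clock(self, j, k):
        # J=K=0 hold, J=1 K=0 set, J=0 K=1 reset, J=K=1 toggle
        if j and k:
            self.q ^= 1
        elif j:
            self.q = 1
        elif k:
            self.q = 0
        return self.q

ff = JKFlipFlop()
print([ff.clock(j, k) for j, k in [(1, 0), (0, 0), (1, 1), (0, 1)]])  # [1, 1, 0, 0]
```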
A processor register may hold an instruction, a storage address, or any kind of data (such as a bit sequence or
individual characters).
The computer needs processor registers for manipulating data and a register for holding a memory
address. The register holding the memory location is used to calculate the address of the next instruction
after the execution of the current instruction is completed.
Ans- A Counter is a device which stores (and sometimes displays) the number of times a particular
event or process has occurred, often in relationship to a clock signal. Counters are used in digital
electronics for counting purposes; they can count specific events happening in the circuit. For
example, an UP counter increases its count on every rising edge of the clock. Besides counting,
a counter can follow a particular sequence based on our design, such as an arbitrary sequence 0, 1, 3, 2, ...
Counters can be designed with the help of flip flops. They are also used as frequency dividers, where
the frequency of a given pulse waveform is divided. Counters are sequential circuits that count the
number of pulses, either in binary code or in BCD form. The main properties of a counter are
timing, sequencing and counting. A counter works in two modes:
Up counter
Down counter
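A minimal sketch (not from the notes) of a 3-bit (mod-8) counter that can count up or down on each clock pulse; rolling over every 8 pulses is what makes it usable as a frequency divider.
```python
class Counter:
    def __init__(self, bits=3):
        self.count = 0
        self.mod = 1 << bits          # a 3-bit counter rolls over at 8

    def pulse(self, up=True):
        step = 1 if up else -1
        self.count = (self.count + step) % self.mod
        return self.count

c = Counter()
print([c.pulse() for _ in range(10)])          # 1..7, 0, 1, 2 -- up counter
print([c.pulse(up=False) for _ in range(3)])   # 1, 0, 7      -- down counter
```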
Ans- Instruction Cycle
A program residing in the memory unit of a computer consists of a sequence of instructions. These
instructions are executed by the processor by going through a cycle for each instruction. Each instruction
cycle consists of the following phases: fetch the instruction from memory, decode the instruction, read the
effective address from memory (if the instruction has an indirect address), and execute the instruction.
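A toy sketch (not from the notes) of the fetch-decode-execute loop: each iteration fetches the instruction addressed by the program counter, decodes its opcode and executes it. The tiny (opcode, operand) instruction format and the opcode names are made up for illustration.
```python
memory = [("LOAD", 5), ("ADD", 3), ("STORE", 0), ("HALT", None)]
acc, pc, running = 0, 0, True

while running:
    opcode, operand = memory[pc]      # fetch
    pc += 1                           # PC now points to the next instruction
    if opcode == "LOAD":              # decode + execute
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "STORE":
        print("stored value:", acc)   # prints 8
    elif opcode == "HALT":
        running = False
```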
Input-Output Configuration
In computer architecture, input-output devices act as an interface between the machine and the user.
Instructions and data stored in the memory must come from some input device. The results are displayed to
the user through some output device.
12. Explain the instruction format in a computer system.
Ans- An instruction includes an operation code (opcode) and the operands on which the operation
is performed. The instruction format describes the layout of the bits in an instruction. It contains fields including
the opcode, operands, and addressing mode.
The instruction length is generally kept in multiples of the character length, which is 8 bits.
When the instruction length is fixed, a certain number of bits is assigned to the opcode, the operands, and the
addressing modes.
The allocation of bits within the instruction is determined by factors such as the number of addressing modes,
the number of operands, and the number of CPU registers available.
The figure shows the general IA-32 (Intel Architecture, 32-bit) instruction format. IA-32 is the
instruction format used by Intel's most prominent microprocessors. This instruction format
includes four fields: the opcode field, the addressing mode field, the displacement field, and the
immediate field.
The opcode field has 1 or 2 bytes. The addressing mode field also includes 1 or 2 bytes. In the
addressing mode field, an instruction needs only one byte if it uses only one register to generate
the effective address of an operand.
Implied Mode − In this mode, the operands are specified implicitly in the definition of the instruction. For
example, the instruction "complement accumulator" is an implied-mode instruction because the operand in
the accumulator register is implied in the definition of the instruction. All register reference instructions that
use an accumulator are implied-mode instructions.
Immediate Mode − In this mode, the operand is specified in the instruction itself. In other words, an
immediate-mode instruction has an operand field instead of an address field. The operand field includes the
actual operand to be used in conjunction with the operation determined in the instruction. Immediate-mode
instructions are beneficial for initializing registers to a constant value.
Register Mode − In this mode, the operands are in registers that reside within the CPU. The specific register
is selected from a register field in the instruction. A k-bit field can specify any one of the 2^k registers.
Register Indirect Mode − In this mode, the instruction defines a register in the CPU whose contents provide
the address of the operand in memory. In other words, the selected register includes the address of the
operand rather than the operand itself.
Direct Address Mode − In this mode, the effective address is equal to the address part of the instruction.
The operand resides in memory and its address is given directly by the address field of the instruction. In a
branch-type instruction, the address field specifies the actual branch address.
Indirect Address Mode − In this mode, the address field of the instruction gives the address where the
effective address is stored in memory. Control fetches the instruction from memory and uses its address
part to access memory again to read the effective address.
Indexed Addressing Mode − In this mode, the content of an index register is added to the address part of
the instruction to obtain the effective address. The index register is a special CPU register that contains an
index value. The address field of the instruction defines the beginning address of a data array in memory.
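A small sketch (not from the notes) contrasting three of these modes: the same address field ADDR yields a different effective address (EA) in each case. The memory contents and register values are made up.
```python
memory = [0] * 16
memory[6] = 9          # value at address 6 (also used as a pointer below)
memory[9] = 77
memory[8] = 99
index_register = 2
ADDR = 6               # the address field of the instruction

print("direct:  ", memory[ADDR])                   # EA = ADDR = 6      -> operand 9
print("indirect:", memory[memory[ADDR]])           # EA = M[ADDR] = 9   -> operand 77
print("indexed: ", memory[ADDR + index_register])  # EA = ADDR + XR = 8 -> operand 99
```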
Ans-Machine language is a low-level language made up of binary numbers or bits that a computer can
understand. It is also known as machine code or object code and is extremely tough to comprehend. The
only language that the computer understands is machine language. All programmes and programming
languages, such as Swift and C++, produce or run programmes in machine language before they are run on
a computer. Whenever a specific task, even the smallest process, is executed, machine language is handed to
the system processor. Being digital devices, computers are only able to understand binary data.
In the computer, all data like videos, programs, pictures are represented in binary. The CPU processes this
machine code or binary data as input. Then, an application or operating system gets the resulting output
from the CPU and displays it visually. For example, the ASCII code 01000001 represents the letter "A" in
machine language, yet it is shown on the screen as "A".
Different machine code is used by different processor architectures, although all machine code consists of 1s
and 0s. For example, a PowerPC processor, which has a RISC architecture, requires different machine code than
an Intel x86 processor, which has a CISC architecture. A compiler must therefore compile high-level source code
for the correct processor architecture in order for a program to run correctly.
Ans-Assembly language is a low-level language that helps to communicate directly with computer
hardware. It uses mnemonics to represent the operations that a processor has to perform. It is an
intermediate language between high-level languages like C++ and binary machine language. It uses
hexadecimal and binary values, and it is readable by humans.
Assembly languages contain mnemonic codes that specify what the processor should do. The
mnemonic code written by the programmer is converted into machine language (binary
language) for execution. An assembler is used to convert assembly code into machine language,
and that machine code is stored in an executable file for execution.
It enables the programmer to communicate directly with the hardware, such as registers, memory
locations, input/output devices or any other hardware components, which helps the
programmer to directly control hardware components and to manage resources in an efficient
manner.
16.Explain Register Transfer Language.
Ans- A digital computer system exhibits an interconnection of digital modules such as registers, decoders,
arithmetic elements, and control logic.
These digital modules are interconnected with some common data and control paths to form a complete
digital system.
Moreover, digital modules are best defined by the registers and the operations that are performed on the
data stored in them.
The operations performed on the data stored in registers are called Micro-operations.
The Register Transfer Language is the symbolic representation of notations used to specify the sequence
of micro-operations.
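An illustrative sketch (not from the notes) of RTL notation expressed in code: the statement P: R2 ← R1 means the contents of R1 are transferred into R2 only when the control condition P is 1. The register names and values are made up.
```python
registers = {"R1": 25, "R2": 0}
P = 1                                  # control condition from the control unit

if P:                                  # P: R2 <- R1
    registers["R2"] = registers["R1"]  # the transfer is a micro-operation

print(registers)                       # {'R1': 25, 'R2': 25}; R1 is unchanged
```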
Ans- The computers which use stack-based CPU organization are based on a data structure called a stack. The stack is a
list of data words. It uses the Last In First Out (LIFO) access method, which is the most popular access method in most
CPUs. A register called the stack pointer (SP) is used to store the address of the topmost element of the stack.
In this organization, ALU operations are performed on stack data, which means both operands are always taken from the
stack. After manipulation, the result is placed back on the stack. The two main operations performed on the stack are Push
and Pop, and both are performed from one end only (the top of the stack).
1. Push – inserts one operand at the top of the stack and increments the stack pointer register.
2. Pop – deletes one operand from the top of the stack and decrements the stack pointer register.
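A minimal sketch (not from the notes) of the push and pop micro-operations and of how a stack-organized CPU would evaluate an expression such as (3 + 4) * 2 entirely on the stack.
```python
stack, SP = [0] * 8, -1          # SP points to the top of the stack

def push(value):
    global SP
    SP += 1                      # increment the stack pointer, then write
    stack[SP] = value

def pop():
    global SP
    value = stack[SP]
    SP -= 1                      # read the top, then decrement the stack pointer
    return value

push(3); push(4); push(pop() + pop())   # ADD uses the two topmost elements
push(2); push(pop() * pop())            # MUL likewise
print(pop())                            # 14
```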
19.Explain RISC Architecture in computer.
Ans- The main idea behind RISC is to simplify the hardware by using an instruction set composed of a few basic steps for loading,
evaluating, and storing operations: a load command loads data and a store command stores data.
Characteristics of RISC
● A small, simple instruction set with fixed-length instructions and simple addressing modes.
● Most instructions execute in a single clock cycle.
● A load/store architecture: only load and store instructions access memory.
● A large number of general-purpose registers and heavy reliance on pipelining.
Advantages of RISC
● Simpler instructions: RISC processors use a smaller set of simple instructions, which makes them easier to decode and
execute quickly. This results in faster processing times.
● Faster execution: Because RISC processors have a simpler instruction set, they can execute instructions faster than CISC
processors.
● Lower power consumption: RISC processors consume less power than CISC processors, making them ideal for portable
devices.
Disadvantages of RISC
● More instructions required: RISC processors require more instructions to perform complex tasks than CISC processors.
● Increased memory usage: RISC processors require more memory to store the additional instructions needed to perform
complex tasks.
● Higher cost: Developing and manufacturing RISC processors can be more expensive than CISC processors.
In CISC architecture, the main idea is that a single instruction performs all loading, evaluating, and storing operations; for example,
a multiplication command will load the data, evaluate the product, and store the result, hence it is complex.
Characteristics of CISC
● Instructions may take more than a single clock cycle to execute.
Advantages of CISC
● Reduced code size: CISC processors use complex instructions that can perform multiple operations, reducing the amount of
code needed to perform a task.
● More memory efficient: Because CISC instructions are more complex, they require fewer instructions to perform complex
tasks, which can result in more memory-efficient code.
● Widely used: CISC processors have been in use for a longer time than RISC processors, so they have a larger user base and
more available software.
Disadvantages of CISC
● Slower execution: CISC processors take longer to execute instructions because they have more complex instructions and
need more time to decode them.
● More complex design: CISC processors have more complex instruction sets, which makes them more difficult to design and
manufacture.
● Higher power consumption: CISC processors consume more power than RISC processors because of their more complex
instruction sets.
Ans- RISC vs CISC
● RISC uses only a hardwired control unit; CISC uses both hardwired and microprogrammed control units.
● In RISC, transistors are used to provide more registers; in CISC, transistors are used for storing complex instructions.
● RISC can perform arithmetic operations only register to register; CISC can perform them register to register, register to memory, or memory to memory.
● A RISC instruction executes in a single clock cycle; a CISC instruction may take more than one clock cycle.
● A RISC instruction fits in one word; CISC instructions may be larger than one word.
● RISC has simple and limited addressing modes; CISC has complex and more numerous addressing modes.
Advantages and Disadvantages of Pipelining
● Each part of the processor is kept busy most of the time, leading to more efficient use of the processor's resources; however, while throughput increases, the time to complete a single instruction from start to finish (latency) may also increase due to the added pipeline stages.
● Pipelining improves the rate at which instructions are executed, potentially leading to performance improvements in overall program execution; however, the design and implementation of a pipelined processor are more complex than for a non-pipelined processor, requiring sophisticated control logic.
● By continuously feeding instructions into the pipeline, the CPU can achieve near-maximum utilization; however, the simultaneous operation of multiple stages can lead to increased power consumption compared to non-pipelined architectures.
Ans- Data Parallelism
Let's take an example: summing the contents of an array of size N. On a single-core system, one
thread would simply sum the elements [0] . . . [N − 1]. On a dual-core system, however, thread
A, running on core 0, could sum the elements [0] . . . [N/2 − 1] while thread B, running on
core 1, could sum the elements [N/2] . . . [N − 1]. The two threads would then be running in
parallel on separate computing cores, as in the sketch below.
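A minimal Python sketch of the split described above, using two threads to sum the two halves of the array. Note this only illustrates the idea: CPython's GIL prevents a real speed-up for CPU-bound work in threads, so multiprocessing would be used in practice.
```python
import threading

N = 1_000_000
data = list(range(N))
partial = [0, 0]

def worker(core, lo, hi):
    partial[core] = sum(data[lo:hi])     # each "core" sums its own half

a = threading.Thread(target=worker, args=(0, 0, N // 2))
b = threading.Thread(target=worker, args=(1, N // 2, N))
a.start(); b.start(); a.join(); b.join()

print(partial[0] + partial[1] == sum(data))   # True
```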
Task Parallelism
Task parallelism means concurrent execution of different tasks on multiple computing cores.
Considering again our example above, an example of task parallelism might involve two threads,
each performing a unique statistical operation on the array of elements. The threads again
operate in parallel on separate computing cores, but each performs a unique operation.
Bit-level parallelism
Bit-level parallelism is a form of parallel computing which is based on increasing the processor word
size. In this type of parallelism, increasing the word size reduces the number of instructions
the processor must execute in order to perform an operation on variables whose sizes are greater
than the length of the word.
For example, consider a case where an 8-bit processor must add two 16-bit integers. The processor
must first add the 8 lower-order bits of each integer and then add the 8 higher-order bits together
with the carry, so two instructions are needed to complete a single operation. A 16-bit processor would be
able to complete the operation with a single instruction.
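A worked sketch (not from the notes) of the 8-bit example above: adding two 16-bit integers in two 8-bit steps, low byte first and then the high byte plus the carry. The operand values are made up.
```python
x, y = 0x1234, 0x0FCD

low = (x & 0xFF) + (y & 0xFF)                    # step 1: low-order bytes
carry = low >> 8
high = (x >> 8) + (y >> 8) + carry               # step 2: high-order bytes + carry
result = ((high & 0xFF) << 8) | (low & 0xFF)

print(hex(result), result == (x + y) & 0xFFFF)   # 0x2201 True
```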
Instruction-level parallelism
Instruction-level parallelism means the simultaneous execution of multiple instructions from a
program. While pipelining is a form of ILP, we must exploit it to achieve parallel execution of the
instructions in the instruction stream.
Ans- M.J. Flynn proposed a classification for the organization of a computer system by the number of
instructions and data items that are manipulated simultaneously.
The sequence of instructions read from memory constitutes an instruction stream, and the operations
performed on the data in the processor constitute a data stream.
Parallel processing may occur in the instruction stream, in the data stream, or in both.
Flynn's classification divides computers into four major groups:
SISD (Single Instruction stream, Single Data stream)
SIMD (Single Instruction stream, Multiple Data streams)
MISD (Multiple Instruction streams, Single Data stream)
MIMD (Multiple Instruction streams, Multiple Data streams)
Ans- Advantages and Disadvantages of Multithreading
Advantages:
● Takes advantage of multi-core processors by executing multiple threads simultaneously, leading to better utilization of CPU resources.
● In user interface applications, multi-threading can allow the program to remain responsive to user input while performing other tasks in the background.
● Can lead to a cleaner separation of concerns, where different threads handle different aspects of the application logic, potentially simplifying the design.
● Scalability.
Disadvantages:
● Writing multi-threaded applications is inherently more complex than single-threaded ones. Debugging and ensuring thread safety can be challenging.
● Each thread consumes system resources, such as memory for stack space, which can lead to increased overall resource consumption.
● Sharing data between threads requires careful synchronization to avoid inconsistencies, which can complicate the code and affect performance.
● Overhead.
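A small sketch (not from the notes) of the synchronization point above: two threads updating a shared counter take a Lock so their read-modify-write steps cannot interleave and lose updates.
```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:                 # without the lock, some updates could be lost
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                     # 200000
```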
Ans-
1. Cooperative Multithreading
In cooperative multithreading, the thread currently controlling the CPU maintains possession of it until it voluntarily yields control
through an explicit action, such as making an I/O request or explicitly yielding control. This model simplifies the design and
implementation of the thread scheduler but can lead to problems if a thread does not yield control, potentially starving other
threads and affecting the responsiveness of the application.
2. Preemptive Multithreading
Preemptive multithreading allows the operating system to determine when a context switch should occur. This is based on
priorities or time slicing, where the scheduler forcibly suspends the currently running thread after it has run for a predetermined
slice of time and switches the CPU to another thread. This model is more complex but ensures that all threads receive some
amount of CPU time, improving responsiveness and reliability of the system.
4. Hardware Multithreading
Hardware multithreading involves the CPU itself being designed to support execution of multiple threads in parallel. This can be
seen in designs that include multiple execution cores (multi-core processors) where each core can execute a separate thread in
parallel, or in designs like SMT, where even within a single core, multiple threads can be advanced simultaneously.
Vinod Dham is an inventor, entrepreneur and venture capitalist. He is popularly known as the Father of the Pentium chip, for his
contribution to the development of highly successful Pentium processors from Intel. He is a mentor, advisor and investor; and sits
on the boards of many companies including promising startups funded through his India-based fund – Indo US Venture
Partners, where he is the founding Managing Director.
Vinod Dham’s accomplishment as the “Father of Pentium” and as an Indian-American technology pioneer from Silicon Valley, is
being celebrated at a first-ever exhibition on South Asians in the National Museum of Natural History at the storied Smithsonian in
Washington DC, highlighting Indian-Americans who have helped shape America.
Vinod Dham was born in 1950. His father was a member of the army civilian department who had moved from Rawalpindi to
India during the Partition of India. Dham completed his B.E. degree in Electrical Engineering from the prestigious Delhi College of
Engineering in 1971, at the age of 21.
After completing his B.E. degree in Electrical Engineering (with an emphasis on Electronics Engineering) from Delhi College of
Engineering (now known as Delhi Technological University) in 1971, he joined a Delhi-based semiconductor company called
Continental Devices as an engineer. In 1975, he left this job and went to the University of Cincinnati in Cincinnati, Ohio, USA, to
pursue an MS degree in Electrical Engineering, where he specialised in Solid State Electronics. After completing his MSEE degree
in 1977, he joined NCR Corporation at Dayton, Ohio, as an engineer, where he did cutting-edge work in developing advanced
non-volatile memories. He then joined Intel as an engineer, where he led the development of the world-famous Pentium processor.
He is called the “Father of Pentium” for his role in the development of the Pentium processor. He is also one of the co-inventors of
Intel's first Flash memory technology (ETOX). He rose to the position of vice-president of the Microprocessor Group at Intel.