Computer Architecture and Design Interview Questions And Answers
Download Computer Architecture Interview Questions and Answers PDF
Enhance your Computer Architecture interview preparation with our set of 30 carefully chosen questions. Each question is crafted to challenge your understanding and proficiency in Computer Architecture. Suitable for all skill levels, these questions are essential for effective preparation. Secure the free PDF to access all 30 questions and guarantee your preparation for your Computer Architecture interview. This guide is crucial for enhancing your readiness and self-assurance.
30 Computer Architecture Questions and Answers:
Computer Architecture Job Interview Questions Table of Contents:
1 :: What are the basic components in a Microprocessor?
1) Address lines to refer to the address of a memory block
2) Data lines for data transfer
3) IC chips for processing data
2 :: What is MESI?
The MESI protocol, also known as the Illinois protocol after its development at the University of Illinois at Urbana-Champaign, is a widely used cache coherency and memory coherence protocol.
MESI is the most common protocol which supports write-back cache. Its use in personal computers became widespread with the introduction of Intel's Pentium processor to "support the more efficient write-back cache in addition to the write-through cache previously used by the Intel 486 processor"
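The four MESI states and the main transitions between them can be sketched as a lookup table. The event names and the condensed transition set below are illustrative assumptions, not a full implementation of the protocol (no data transfer or write-back traffic is modeled):

```python
# A simplified sketch of MESI state transitions for a single cache line.
# "local" events come from this cache's own processor; "remote" events are
# transactions snooped from other caches on the shared bus.

TRANSITIONS = {
    # (current_state, event) -> next_state
    ("I", "local_read_exclusive"): "E",  # read miss, no other cache holds the line
    ("I", "local_read_shared"):    "S",  # read miss, another cache holds the line
    ("I", "local_write"):          "M",  # write miss: fetch the line and modify it
    ("E", "local_write"):          "M",  # silent upgrade, no bus traffic needed
    ("E", "remote_read"):          "S",  # another cache reads our clean line
    ("S", "local_write"):          "M",  # other sharers must be invalidated first
    ("S", "remote_write"):         "I",  # another cache writes: invalidate our copy
    ("M", "remote_read"):          "S",  # supply/write back the dirty data, then share
    ("M", "remote_write"):         "I",  # write back, then invalidate
}

def next_state(state, event):
    """Return the next MESI state; unlisted (state, event) pairs keep the state."""
    return TRANSITIONS.get((state, event), state)

state = "I"
for event in ["local_read_exclusive", "local_write", "remote_read"]:
    state = next_state(state, event)
    print(event, "->", state)  # ends in S after E -> M -> S
```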
3 :: What are the different hazards? How do you avoid them?
There are situations, called hazards, that prevent the next instruction in the instruction stream from executing during its designated clock cycle. Hazards reduce the performance from the ideal speedup gained by pipelining.
There are three classes of Hazards:
1. Structural Hazards: arise from resource conflicts when the hardware cannot support all possible combinations of instructions simultaneously in overlapped execution.
2. Data Hazards: arise when an instruction depends on the result of a previous instruction in a way that is exposed by the overlapping of instructions in the pipeline.
3. Control Hazards: arise from the pipelining of branches and other instructions that change the PC.
How to Avoid Hazards:
1. Structural Hazards: these appear when some functional unit is not fully pipelined, so a sequence of instructions using that unit cannot proceed at a rate of one per clock cycle. They also appear when a resource is not duplicated enough to let every combination of instructions in the pipeline execute. Fully pipelining the stages and duplicating resources avoids structural hazards.
4 :: What is the pipelining?
A technique used in advanced microprocessors where the microprocessor begins executing a second instruction before the first has been completed. That is, several instructions are in the pipeline simultaneously, each at a different processing stage.
5 :: Cache Size is 64KB, Block size is 32B and the cache is Two-Way Set Associative. For a 32-bit physical address, give the division between Block Offset, Index and Tag.
64KB / 32B = 2048 blocks
Two-way set associative: 2048 / 2 = 1024 sets -> 10 bits for index
32B block -> 5 bits for block offset
32 - 10 - 5 = 17 bits for tag
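The same division can be computed directly from the cache parameters; a small sketch (assuming power-of-two sizes so the logarithms are exact):

```python
import math

# Address-bit division for a set-associative cache.
# All parameters come from the question above.

cache_size = 64 * 1024   # 64 KB
block_size = 32          # 32 B
ways       = 2           # two-way set associative
addr_bits  = 32          # physical address width

num_blocks = cache_size // block_size               # 2048 blocks
num_sets   = num_blocks // ways                     # 1024 sets

offset_bits = int(math.log2(block_size))            # 5
index_bits  = int(math.log2(num_sets))              # 10
tag_bits    = addr_bits - index_bits - offset_bits  # 17

print(offset_bits, index_bits, tag_bits)  # 5 10 17
```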
6 :: What is a Snooping cache?
A snooping (or snoopy) cache is one whose controller monitors, or "snoops", the shared bus that connects the processors to memory. Whenever another processor issues a bus transaction for a memory block that this cache holds, the controller takes whatever action the coherence protocol requires: invalidating or updating its copy, or supplying a modified block. Snooping is the basis of bus-based coherence protocols such as MESI.
7 :: What is Cache Coherency?
Cache coherence refers to the integrity of data stored in the local caches of a shared resource; it is a special case of memory coherence. When clients in a system, particularly CPUs in a multiprocessing system, maintain caches of a common memory resource, inconsistent copies of the same data can arise; cache coherence protocols keep the caches consistent.
8 :: What is Virtual Memory?
Virtual memory is a way of extending a computer's memory by using a disk file to simulate additional memory space. The OS divides this space into fixed-size units called pages and keeps track of which pages reside on disk; a reference to a page that is not in physical memory triggers a page fault, and the page is then brought in from disk.
9 :: What are the five stages in a DLX pipeline?
The five stages are:
► IF: Instruction Fetch (from memory)
► ID: Instruction Decode and register read
► EX: Execution of the operation or address calculation
► MEM: Data memory access (i.e., accessing the operand)
► WB: Write Back (the result)
10 :: What is a cache?
It turns out that caching is an important computer-science process that appears on every computer in a variety of forms. There are memory caches, hardware and software disk caches, page caches and more. Virtual memory is even a form of caching.
Caching is a technology based on the memory subsystem of your computer. The main purpose of a cache is to accelerate your computer while keeping the price of the computer low. Caching allows you to do your computer tasks more rapidly. Cache technology is the use of a faster but smaller memory type to accelerate a slower but larger memory type. A cache has some maximum size that is much smaller than the larger storage area. It is possible to have multiple layers of cache.
A computer is a machine in which we measure time in very small increments. When the microprocessor accesses the main memory (RAM), it does it in about 60 nanoseconds (60 billionths of a second). That's pretty fast, but it is much slower than the typical microprocessor. Microprocessors can have cycle times as short as 2 nanoseconds, so to a microprocessor 60 nanoseconds seems like an eternity.
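The benefit a cache gives can be quantified with the standard average memory access time (AMAT) formula. A minimal sketch using the article's example latencies; the 95% hit rate is an assumed illustrative value, not a figure from the text:

```python
# AMAT = hit_time + miss_rate * miss_penalty.
# Latencies follow the article's example numbers (L2 cache ~30 ns,
# main memory ~60 ns); the hit rate is assumed for illustration.

hit_time     = 30e-9   # seconds: access to the faster cache
miss_penalty = 60e-9   # seconds: main-memory access on a miss
hit_rate     = 0.95

amat = hit_time + (1 - hit_rate) * miss_penalty
print(f"AMAT = {amat * 1e9:.1f} ns")  # 33.0 ns, versus 60 ns with no cache
```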
11 :: Convert 65(Hex) to Binary?
Expand each hex digit into four binary bits:
6 -> 0110
5 -> 0101
So 65 (hex) = 0110 0101 = 1100101 binary (101 decimal).
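As a quick check, the conversion can be done programmatically:

```python
# Converting 65 (hex) to binary by expanding each hex digit into four bits.

value = int("65", 16)          # 0x65 = 101 decimal
print(bin(value))              # 0b1100101

# Digit by digit: 6 -> 0110, 5 -> 0101, so 0x65 = 0110 0101.
nibbles = " ".join(f"{int(d, 16):04b}" for d in "65")
print(nibbles)                 # 0110 0101
```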
12 :: Convert a number to its two's complement and back?
First write the number in binary. Then, scanning from the least-significant bit, keep every bit up to and including the first 1 unchanged, and invert all the remaining higher-order bits.
E.g.: 1101001
2's complement: 0010111
Applying the same rule to 0010111 gives back 1101001, so the operation is its own inverse.
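The shortcut above is equivalent to the usual definition, negation modulo 2^n. A small sketch that verifies the example:

```python
# Two's complement of an n-bit value: (-x) mod 2**n. This matches the
# scan-from-the-right shortcut: keep bits through the first 1, invert the rest.

def twos_complement(bits: str) -> str:
    n = len(bits)
    value = int(bits, 2)
    return format((-value) % (1 << n), f"0{n}b")

print(twos_complement("1101001"))  # 0010111
print(twos_complement("0010111"))  # 1101001 (applying it twice restores the original)
```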
13 :: The CPU is busy but you want to stop and do some other task. How do you do it?
Raise a non-maskable interrupt (NMI).
The interrupt service routine can then jump to the required subroutine; when it finishes, control returns to the interrupted task.
14 :: What is the difference between interrupt service routine and subroutine?
A subroutine is part of the executing process: the process calls it explicitly to carry out a task, and control returns to the point of the call. An interrupt service routine is external to the process: it is invoked asynchronously by an interrupt rather than by an explicit call, so it must save and restore the processor state before returning.
15 :: For a pipeline with "n" stages, what is the ideal throughput? What prevents us from achieving this ideal throughput?
With an "n"-stage pipeline the ideal throughput is one instruction per clock cycle, i.e., an n-fold speedup over the unpipelined machine.
We cannot achieve this ideal because the pipe stages can't be perfectly balanced (the time to perform the task differs between stages), pipelining itself adds overhead such as pipeline-register delay, and hazards force stalls.
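The effect of imbalanced stages and register overhead can be shown numerically. The stage latencies and overhead below are assumed illustrative figures:

```python
# Ideal vs. realistic pipeline speedup for a 5-stage pipeline.
# Stage latencies (ps) and per-stage register overhead are assumed values.

stage_latencies = [200, 150, 250, 200, 150]  # one entry per stage, in picoseconds
overhead = 20                                # pipeline-register delay per stage

unpipelined_time = sum(stage_latencies)      # 950 ps per instruction
cycle_time = max(stage_latencies) + overhead # clock set by the slowest stage: 270 ps

ideal_speedup = len(stage_latencies)         # n, if stages were perfectly balanced
actual_speedup = unpipelined_time / cycle_time

print(ideal_speedup, round(actual_speedup, 2))  # 5 3.52
```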
16 :: How do you handle precise exceptions or interrupts?
An exception or interrupt is precise if the saved processor state corresponds to sequential execution: every instruction before the faulting one has completed, and no later instruction has modified state. The pipeline achieves this by letting earlier instructions finish, squashing the later ones, and saving the PC of the faulting instruction, so the handler can run and execution can resume cleanly afterwards.
17 :: What's the difference between Write-Through and Write-Back Caches? Explain advantages and disadvantages of each?
The comparison can be made on two factors:
1) Performance and
2) Integrity of data
Write-through is better for integrity, as every write is propagated to main memory immediately.
Write-back holds up the write until the cache line has to be reused, which puts data integrity in question when multiple processors access the same region of data through their own internal caches (unless a coherence protocol is used).
Write-back gives better performance, as it saves many memory write cycles.
Write-through does not match that performance, but main memory always holds an up-to-date copy.
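The performance difference comes from memory write traffic. A toy single-line simulation (the write sequence and the one-line direct-mapped model are assumptions for illustration) counts main-memory writes under each policy:

```python
# Memory write traffic under write-through vs. write-back for the same
# sequence of stores, modeling a single cache line: write-back touches
# memory only when a dirty line is displaced (or finally flushed).

writes = ["A", "A", "A", "B", "B", "A"]  # addresses written, all cache hits

write_through_mem_writes = len(writes)   # every store also goes to memory: 6

write_back_mem_writes = 0
dirty = None                             # address currently dirty in the line
for addr in writes:
    if dirty is not None and dirty != addr:
        write_back_mem_writes += 1       # evicting a dirty line -> one memory write
    dirty = addr
if dirty is not None:
    write_back_mem_writes += 1           # final flush of the dirty line

print(write_through_mem_writes, write_back_mem_writes)  # 6 3
```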
18 :: Explain What is Virtual Memory?
Virtual memory is a concept that, when implemented by a computer and its operating system, allows programmers to use a very large range of memory or storage addresses for stored data. The computing system maps the programmer's virtual addresses to real hardware storage addresses. Usually, the programmer is freed from having to be concerned about the availability of data storage.
In addition to managing the mapping of virtual storage addresses to real storage addresses, a computer implementing virtual memory also manages swapping between active storage (RAM) and hard disk or other high-volume storage devices. Data is read in units called "pages", with sizes ranging from about a thousand bytes (actually 1,024) up to several megabytes. This reduces the amount of physical storage access that is required and speeds up overall system performance.
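The virtual-to-physical mapping works on page granularity: the low-order address bits are a page offset, and the high-order bits are a virtual page number that the OS translates. A small sketch, assuming a 4 KB page size (a common value; the text only says pages range from about 1 KB upward):

```python
# Splitting a 32-bit virtual address into virtual page number (VPN) and
# page offset, assuming 4 KB pages. The VPN is what the page table maps
# to a physical frame; the offset passes through unchanged.

PAGE_SIZE = 4096                           # bytes (assumed)
OFFSET_BITS = PAGE_SIZE.bit_length() - 1   # 12 bits of offset

def split(vaddr: int):
    vpn = vaddr >> OFFSET_BITS
    offset = vaddr & (PAGE_SIZE - 1)
    return vpn, offset

vpn, offset = split(0x12345678)
print(hex(vpn), hex(offset))               # 0x12345 0x678
```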
19 :: Cache Size is 64KB, Block size is 32B and the cache is Two-Way Set Associative. For a 32-bit physical address, give the division between Block Offset, Index and Tag.
Block size is 32B, so block offset = log2(32) = 5 bits.
Number of sets = cache size / (block size × associativity) = 64K / (32 × 2) = 1K = 2^10, so index = 10 bits.
Tag = physical address bits − index − block offset = 32 − 10 − 5 = 17 bits.
Read More20 :: What is the difference between Write-Through and Write-Back Caches? Explain advantages and disadvantages of each?
Writing to a cache can be handled in two ways. 1. Write-through: every write updates the cache and main memory together, so memory always holds a current copy. 2. Write-back: writes update the cache only; the modified (dirty) line is copied back to main memory later, when it is evicted or explicitly flushed.
21 :: Explain What is a cache?
It turns out that caching is an important computer-science process that appears on every computer in a variety of forms. There are memory caches, hardware and software disk caches, page caches and more. Virtual memory is even a form of caching.
Caching is a technology based on the memory subsystem of your computer. The main purpose of a cache is to accelerate your computer while keeping the price of the computer low. Caching allows you to do your computer tasks more rapidly. Cache technology is the use of a faster but smaller memory type to accelerate a slower but larger memory type. A cache has some maximum size that is much smaller than the larger storage area. It is possible to have multiple layers of cache.
A computer is a machine in which we measure time in very small increments. When the microprocessor accesses the main memory (RAM), it does it in about 60 nanoseconds (60 billionths of a second). That's pretty fast, but it is much slower than the typical microprocessor. Microprocessors can have cycle times as short as 2 nanoseconds, so to a microprocessor 60 nanoseconds seems like an eternity.
What if we build a special memory bank in the motherboard, small but very fast (around 30 nanoseconds)? That's already two times faster than the main memory access. That's called a level 2 cache or an L2 cache. What if we build an even smaller but faster memory system directly into the microprocessor's chip? That way, this memory will be accessed at the speed of the microprocessor and not the speed of the memory bus. That's an L1 cache, which on a 233-megahertz (MHz) Pentium is 3.5 times faster than the L2 cache, which is two times faster than the access to main memory.
Some microprocessors have two levels of cache built right into the chip. In this case, the motherboard cache -- the cache that exists between the microprocessor and main system memory -- becomes level 3, or L3 cache.
23 :: Explain What are the different hazards? How do you avoid them?
There are situations, called hazards, that prevent the next instruction in the instruction stream from executing during its designated clock cycle. Hazards reduce the performance from the ideal speedup gained by pipelining. There are three classes of Hazards:
1. Structural Hazards: arise from resource conflicts when the hardware cannot support all possible combinations of instructions simultaneously in overlapped execution.
2. Data Hazards: arise when an instruction depends on the result of a previous instruction in a way that is exposed by the overlapping of instructions in the pipeline.
3. Control Hazards: arise from the pipelining of branches and other instructions that change the PC.
How to Avoid Hazards:
1. Structural Hazards: these appear when some functional unit is not fully pipelined, so a sequence of instructions using that unit cannot proceed at a rate of one per clock cycle. They also appear when a resource is not duplicated enough to let every combination of instructions in the pipeline execute. Fully pipelining the stages and duplicating resources avoids structural hazards.
2. Data Hazards: a major effect of pipelining is to change the relative timing of instructions by overlapping their execution. This overlap introduces the data and control hazards. Data hazards occur when the pipeline changes the order of read/write accesses to operands so that it differs from the order seen by sequentially executing instructions on an unpipelined processor. They can be minimized by a simple hardware technique called forwarding, or by adding stalls.
3. Control Hazards: also known as branch hazards. The simplest scheme is to freeze or flush the pipeline, holding or deleting any instructions after the branch until the branch destination is known; in this case the branch penalty is fixed and cannot be reduced by software. Other schemes are predicted-not-taken (predicted-untaken) and the delayed branch.
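The data hazards discussed above can be found mechanically by checking whether a later instruction reads a register an earlier one writes while both are still in the pipeline. A toy sketch; the (destination, sources) tuple encoding is an assumed format, not DLX syntax:

```python
# RAW (read-after-write) hazard detection between nearby instructions.
# A hazard exists when instruction j reads a register that an earlier
# instruction i writes, while both overlap in the pipeline; forwarding
# or stalls then resolve it.

def raw_hazards(program):
    """Yield (i, j) pairs where instruction j reads a register that i writes."""
    for i, (dest_i, _) in enumerate(program):
        # Only instructions close enough to overlap in the pipeline matter.
        for j in range(i + 1, min(i + 3, len(program))):
            _, srcs_j = program[j]
            if dest_i in srcs_j:
                yield (i, j)

program = [
    ("r1", ("r2", "r3")),   # 0: r1 = r2 + r3
    ("r4", ("r1", "r5")),   # 1: r4 = r1 - r5   (RAW on r1 with 0)
    ("r6", ("r4", "r1")),   # 2: r6 = r4 * r1   (RAW on r4 with 1, r1 with 0)
]

print(list(raw_hazards(program)))  # [(0, 1), (0, 2), (1, 2)]
```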
24 :: Explain What are the five stages in a DLX pipeline?
IF: Instruction Fetch (from memory)
ID: Instruction Decode and register read
EX: Execution of the operation or address calculation
MEM: Data memory access (i.e., accessing the operand)
WB: Write Back (the result)
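Under ideal conditions, instruction k occupies stage s in cycle k + s, so the five stages overlap like a staircase. A small sketch that prints this pipeline diagram:

```python
# Cycle-by-cycle diagram of the five DLX stages: each instruction enters
# IF one cycle after its predecessor, so the stages overlap diagonally.

STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def pipeline_diagram(n_instructions):
    rows = []
    for k in range(n_instructions):
        row = ["  "] * k + STAGES[:]          # instruction k starts in cycle k
        rows.append(" ".join(f"{c:>3}" for c in row))
    return "\n".join(rows)

print(pipeline_diagram(3))
```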
25 :: For a pipeline with n stages, what is the ideal throughput? What prevents us from achieving this ideal throughput?
With an "n"-stage pipeline the ideal throughput is one instruction per clock cycle, i.e., an n-fold speedup over the unpipelined machine.
We cannot achieve this ideal because the pipe stages can't be perfectly balanced (the time to perform the task differs between stages), pipelining itself adds overhead such as pipeline-register delay, and hazards force stalls.