Computer Architecture GATE 2022 Solved Questions

Ques 1: Which one of the following facilitates transfer of bulk data from hard disk to main memory with the highest throughput?

GATE 2022 Q no 17

(A) DMA based I/O transfer

(B) Interrupt driven I/O transfer

(C) Polling based I/O transfer

(D) Programmed I/O transfer

Ans: (A) DMA based I/O transfer

Solution: DMA (Direct Memory Access) is used for transferring bulk data from the hard disk to main memory with the highest throughput. A dedicated DMA controller moves the data directly between the device and memory without involving the CPU for every word, which also makes effective use of the CPU's time.

Programmed I/O can also transfer data in a similar fashion, but there the CPU copies every word itself, making it much slower.

Ques 2: Let WB and WT be two set associative cache organizations that use LRU algorithm for cache block replacement. WB is a write back cache and WT is a write through cache. Which of the following statements is/are FALSE?

GATE 2022 Q no 24

(A) Each cache block in WB and WT has a dirty bit.

(B) Every write hit in WB leads to a data transfer from cache to main memory.

(C) Eviction of a block from WT will not lead to data transfer from cache to main memory.

(D) A read miss in WB will never lead to eviction of a dirty block from WB.

Ans: (A), (B), (D)

Solution:

  • WB cache: a write updates only the cache block; main memory is updated later, when the block is evicted. This favors write performance, since multiple writes to the same block cost a single memory transfer.
  • WT cache: every write updates both the cache block and main memory, so memory always stays consistent with the cache.
  1. Only WB needs a dirty bit, to mark blocks that must be written back on eviction and avoid redundant writes to main memory. WT keeps memory up to date on every write, so it needs no dirty bit. FALSE
  2. In WB, a write hit updates only the cache block; the transfer to main memory is deferred until the block is evicted, avoiding unnecessary data transfer time. FALSE
  3. WT favors consistency over write performance: main memory already holds the latest data, so evicting a block requires no transfer from cache to main memory. TRUE
  4. LRU picks the victim by recency alone and does not distinguish dirty blocks from clean ones. Hence a read miss in WB can evict a dirty block. FALSE
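The contrast between the two write policies can be seen in a minimal sketch (the single-block "cache" classes below are hypothetical, purely for counting memory transfers):

```python
# Minimal sketch contrasting write-through and write-back behaviour for a
# single cache block, counting the data transfers to main memory.

class WriteThroughCache:
    def __init__(self):
        self.mem_writes = 0

    def write(self, value):
        self.value = value
        self.mem_writes += 1   # every write hit also updates main memory

    def evict(self):
        pass                   # block is always clean: nothing to write back


class WriteBackCache:
    def __init__(self):
        self.dirty = False
        self.mem_writes = 0

    def write(self, value):
        self.value = value
        self.dirty = True      # defer the memory update, just mark dirty

    def evict(self):
        if self.dirty:
            self.mem_writes += 1   # one transfer, regardless of write count
            self.dirty = False


wt, wb = WriteThroughCache(), WriteBackCache()
for v in range(5):             # five writes to the same block
    wt.write(v)
    wb.write(v)
wt.evict()
wb.evict()
print(wt.mem_writes, wb.mem_writes)  # → 5 1
```

Five writes to the same block cost five memory transfers under write-through but only one (at eviction) under write-back, which is exactly why WB needs the dirty bit and WT does not.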

Ques 3: A cache memory that has a hit rate of 0.8 has an access latency of 10 ns and a miss penalty of 100 ns. An optimization is done on the cache to reduce the miss rate. However, the optimization results in an increase of cache access latency to 15 ns, whereas the miss penalty is not affected. The minimum hit rate (rounded off to two decimal places) needed after the optimization such that it should not increase the average memory access time is _.

GATE 2022 Q no 33

Ans: 0.85

Solution:

For a given cache, the average memory access time is:

AMAT = Hit Time + Miss Rate × Miss Penalty

Initially, hit rate = 0.8, so miss rate = 0.2

Access latency = Hit Time = 10 ns

Miss penalty = 100 ns

AMAT(unoptimized) = 10 + 0.2 × 100 = 30 ns

For the optimized cache, let x be the new miss rate:

Access latency = Hit Time = 15 ns

AMAT(optimized) = 15 + x × 100

The optimization must not increase AMAT, so AMAT(optimized) ⩽ AMAT(unoptimized):

15 + 100x ⩽ 30

100x ⩽ 15

x ⩽ 0.15

Hit rate = 1 − x ⩾ 0.85

∴ The minimum required hit rate = 0.85
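The derivation above can be checked with a few lines of arithmetic:

```python
# Verify the AMAT derivation: find the maximum miss rate x that keeps the
# optimized cache's AMAT at or below the original 30 ns.

hit_time_old, hit_time_new = 10, 15   # ns
miss_penalty = 100                    # ns
miss_rate_old = 0.2

amat_old = hit_time_old + miss_rate_old * miss_penalty   # 10 + 20 = 30 ns
x_max = (amat_old - hit_time_new) / miss_penalty         # (30 - 15) / 100
min_hit_rate = 1 - x_max

print(amat_old, x_max, min_hit_rate)  # → 30 0.15 0.85
```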

Ques 4: Consider a system with 2 KB direct mapped data cache with a block size of 64 bytes. The system has a physical address space of 64 KB and a word length of 16 bits. During the execution of a program, four data words P, Q, R, and S are accessed in
that order 10 times (i.e., PQRSPQRS…). Hence, there are 40 accesses to data cache altogether. Assume that the data cache is initially empty and no other data words are accessed by the program. The addresses of the first bytes of P, Q, R, and S are 0xA248, 0xC28A, 0xCA8A, and 0xA262, respectively. For the execution of the above program, which of the following statements is/are TRUE with respect to the data cache?

GATE 2022 Q no 54

(A) Every access to S is a hit.

(B) Once  P is brought to the cache it is never evicted.

(C) At the end of the execution only R and S reside in the cache.

(D) Every access to R evicts Q from the cache.

Ans: (A), (B), (D)

Solution: Physical address space = 64 KB, so a physical address is 16 bits.

Cache size = 2 KB, so 11 bits address a byte within the cache.

Block size = 64 B = 32 words (of 16 bits each), so the block offset is 6 bits, since the system is byte addressable.

Tag = 16-11 = 5 bits

Cache index = 11-6 = 5 bits

Block offset = 6 bits

P = 0xA248 = 1010 0010 0100 1000 = 10100 01001 001000 (tag – cache index – block offset)

Q = 0xC28A = 1100 0010 1000 1010 = 11000 01010 001010

R = 0xCA8A = 1100 1010 1000 1010 = 11001 01010 001010

S = 0xA262 = 1010 0010 0110 0010 = 10100 01001 100010
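The field splits above can be reproduced mechanically. A short sketch using the bit widths derived for this cache (5-bit tag, 5-bit index, 6-bit offset):

```python
# Split each 16-bit address into tag / index / offset for the 2 KB
# direct-mapped cache with 64 B blocks described above.

TAG_BITS, INDEX_BITS, OFFSET_BITS = 5, 5, 6

def split(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

for name, addr in [("P", 0xA248), ("Q", 0xC28A), ("R", 0xCA8A), ("S", 0xA262)]:
    tag, index, offset = split(addr)
    print(f"{name}: tag={tag:05b} index={index:05b} offset={offset:06b}")
```

The output confirms that P and S share both tag and index (same block), while Q and R share only the index (same cache line, different blocks).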

Since the cache is direct mapped, observe that P and S belong to the same block (their tag and index bits are identical). Therefore every access to S is a hit, because neither Q nor R competes for that cache line, and once P is brought into the cache it is never evicted.

Q and R have the same index but different tags, so they compete for the same cache line: every access to R evicts Q, and every access to Q evicts R. Since R is accessed after Q in each round, only R is present at the end.

Therefore, at the end of execution the cache holds P, R, and S, so option (C) is false.

Options (A), (B) and (D) are true.
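The whole access pattern can also be simulated directly. A self-contained sketch of the 40 accesses on this direct-mapped cache (one tag per index, as in the analysis above):

```python
# Simulate the access sequence PQRS repeated 10 times on the 2 KB
# direct-mapped cache (6-bit offset, 5-bit index, 5-bit tag) and count hits.

def split(addr, offset_bits=6, index_bits=5):
    index = (addr >> offset_bits) & ((1 << index_bits) - 1)
    tag = addr >> (offset_bits + index_bits)
    return tag, index

accesses = [("P", 0xA248), ("Q", 0xC28A), ("R", 0xCA8A), ("S", 0xA262)]
cache = {}          # index -> tag currently stored in that line
hits = misses = 0

for _ in range(10):
    for name, addr in accesses:
        tag, index = split(addr)
        if cache.get(index) == tag:
            hits += 1
        else:
            misses += 1
            cache[index] = tag   # direct mapped: new block replaces the old

print(hits, misses)  # → 19 21
```

The simulation shows 19 hits (S on every round, P from the second round onward) and 21 misses (Q and R evict each other every round), and the cache finishes holding the P/S block and the R block, matching options (A), (B) and (D).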
