Cache memory
- Buffer between main memory & CPU
- Large access time of main memory wastes CPU time
- Small & fast – minimizes idle CPU time
- MMU (Memory Management Unit) – tracks which main memory (MM) locations are currently mapped into the cache
- Contents are loaded into the cache in advance so that the CPU fetches from the cache
- As a result, the CPU's effective memory access time is short
Address Mapping
- When an address is not mapped in the cache, a direct access to main memory is made
- The cache is updated by the OS & the referenced MM location is brought into the cache
- If the cache is full? -> Replace existing content of the cache with the new entry
- Update the mapping table in the MMU (a sketch of this flow follows)
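A minimal Python sketch of this miss-handling flow; the cache size, the dictionary-based cache, and the `read` helper are all illustrative assumptions, not a real OS/MMU interface:

```python
# Illustrative sketch only: CACHE_SIZE, cache and main_memory are assumptions.
CACHE_SIZE = 4
main_memory = {addr: f"data@{addr}" for addr in range(16)}
cache = {}                            # MM address -> cached value

def read(addr):
    if addr in cache:                 # mapped in cache: serve from cache
        return cache[addr]
    value = main_memory[addr]         # not mapped: direct access to MM
    if len(cache) >= CACHE_SIZE:      # cache full -> replace an existing entry
        victim = next(iter(cache))    # oldest insertion (placeholder policy;
        del cache[victim]             #   see replacement policies below)
    cache[addr] = value               # update cache & the MMU mapping table
    return value
```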
Replacement Policy / Algorithm
- i. FIFO (First In, First Out)
- Removes the entry that has been in the cache for the longest time
- ii. LRU (Least Recently Used)
- Followed by many operating systems
- The entry in the cache whose last reference/access occurred longest ago is picked for removal
- If an entry has not been used for a long time, the probability that it will be accessed soon is small
- LRU aims to keep the hit/miss ratio high (see the sketch after this list)
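A minimal LRU sketch in Python; the class, capacity, and method names are illustrative assumptions, not any OS's actual implementation:

```python
from collections import OrderedDict

class LRUCache:
    """Illustrative LRU replacement sketch (names are assumptions)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()       # least recently used first

    def get(self, key):
        if key not in self.entries:
            return None                    # miss
        self.entries.move_to_end(key)      # record this access as most recent
        return self.entries[key]           # hit

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict least recently used
        self.entries[key] = value
```

Dropping the `move_to_end` call in `get` turns this into FIFO: entries are then evicted purely in arrival order, regardless of how recently they were accessed.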
Hit:
- Refers to the availability of an item in the cache when the CPU tries to access it.
Miss:
- The item is not in the cache when the CPU tries to access it (the opposite of a hit).
- A miss leads to accessing main memory & increases the access time (a worked example follows).
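A worked example of how the hit ratio affects the average access time; the timings and ratio below are assumed numbers, only for illustration:

```python
# Assumed figures, for illustration only.
t_cache = 10     # ns, cache access time
t_mm    = 100    # ns, main memory access time
hit_ratio = 0.9

# On a hit only the cache is accessed; on a miss main memory is accessed too.
t_eff = hit_ratio * t_cache + (1 - hit_ratio) * (t_cache + t_mm)
print(t_eff)     # 20.0 ns -> a 10% miss rate doubles the average access time
```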
Hit/Miss ratio depends on
- Cache memory size
- Replacement algorithm used by the OS
- The program's memory access pattern
It is impossible to maximize the hit/miss ratio for all situations.
Storing results
- If there is no main memory-to-cache mapping for the location, the result is stored in MM only
- Else it is stored in both MM & the cache, according to the write policy
Two Policies
- Write-through policy
- Write the result to both main memory & the cache
- Write-back policy
- Write the result to the cache only, but set a flag (dirty bit) to remember that the MM copy is invalid
- Whenever the content in the cache is about to be replaced, it is written back to MM at that time (see the sketch below)
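A sketch contrasting the two policies, assuming dictionary-based cache and memory; the `dirty` set plays the role of the flag described above:

```python
# Illustrative structures only; not a real memory hierarchy.
cache = {}        # addr -> value
dirty = set()     # addrs whose MM copy is stale (write-back only)
main_memory = {}

def write_through(addr, value):
    cache[addr] = value
    main_memory[addr] = value        # MM is always kept consistent

def write_back(addr, value):
    cache[addr] = value
    dirty.add(addr)                  # flag: MM copy is now invalid

def evict(addr):
    # Write-back defers the MM update until replacement time.
    if addr in dirty:
        main_memory[addr] = cache[addr]
        dirty.discard(addr)
    del cache[addr]
```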
Principle of Operation
The address translation mechanism has to map the physical main memory space onto the cache
Using associative memory is one such technique (a lookup sketch follows)
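A rough sketch of an associative (content-addressed) lookup, where the incoming address's tag is compared against every cache line's tag; the field widths and sample lines are assumptions:

```python
# Assumed address layout: high bits are the tag, low bits the byte offset.
TAG_BITS, OFFSET_BITS = 12, 4

def split(addr):
    return addr >> OFFSET_BITS, addr & ((1 << OFFSET_BITS) - 1)

# A fully associative cache: a list of (tag, line_data) pairs.
lines = [(0x0A3, b"line-a"), (0x1F0, b"line-b")]

def lookup(addr):
    tag, offset = split(addr)
    for line_tag, data in lines:     # hardware compares all tags in parallel
        if line_tag == tag:
            return data[offset:offset + 1]   # hit
    return None                      # miss -> go to main memory
```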