
Cache size

Cache size refers to the amount of data that can be stored in a computer's cache memory. The cache is a small, high-speed memory that stores frequently accessed data to improve the performance of the computer. A larger cache can hold more of the frequently used data, reducing the number of slow main-memory accesses and generally improving overall performance.

Written by Perlego with AI-assistance

2 Key excerpts on "Cache size"

  • Computer Systems Architecture
    Since the cache is usually a smaller memory (in terms of capacity), it can be more expensive per byte, but the added cost has only a marginal effect on the overall system's price. This combination provides the best of both solutions: on the one hand, a fast memory that enhances performance; on the other hand, a negligible increase in price. Every time the processor needs data or instructions, it looks first in the cache memory. If the item is found, the access time is very fast; if it is not in the cache, it has to be brought from memory, and the access takes longer (Figure 6.6).
    To better understand the concept of cache memory, we will use a more realistic and familiar example. Let us assume that a student has to write a seminar paper and for that reason is working in the library, at a limited-sized desk on which only one book can be used. Every time he or she has to refer to or cite another bibliographic source, the current book has to be returned to the shelf and the new book brought to the desk. It should be noted that the library may be very large, and the books needed may be on a different floor; furthermore, the desk may be located in a special area far away from the shelves, sometimes even in a different building. Due to library rules that permit only one book on the desk, the current book has to be returned before a new one can be used. Even if the book needed next is the one used several minutes ago, this does not shorten the time: the student has to return the current book and bring out the previous one once again.
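The single-book desk in the analogy behaves exactly like a one-entry cache. A minimal Python sketch of this look-in-the-cache-first policy (the names `make_lookup` and `slow_fetch` are illustrative, not from the text):

```python
def make_lookup(slow_fetch):
    """Return a lookup function backed by a one-entry 'desk' cache."""
    desk = {}  # holds at most one (key, value) pair, like the one-book desk

    def lookup(key):
        if key in desk:            # cache hit: the book is already on the desk
            return desk[key]
        value = slow_fetch(key)    # cache miss: walk to the 'shelf' (memory)
        desk.clear()               # only one book allowed on the desk
        desk[key] = value
        return value

    return lookup
```

As in the analogy, re-requesting the item just evicted still pays the full fetch cost, because the single slot was overwritten in between.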
  • Computer System Design
    eBook - ePub
    • Michael J. Flynn, Wayne Luk (Authors)
    • 2011 (Publication Date)
    • Wiley
      (Publisher)
    The cache designer must deal with the processor’s accessing requirements on the one hand, and the memory system’s requirements on the other. Effective cache designs balance these within cost constraints.
    4.4 BASIC NOTIONS
    Processor references contained in the cache are called cache hits. References not found in the cache are called cache misses. On a cache miss, the cache fetches the missing data from memory and places it in the cache. Usually, the cache fetches an associated region of memory called the line. The line consists of one or more physical words accessed from a higher-level cache or main memory. The physical word is the basic unit of access to the memory.
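Because the cache fetches a whole line on a miss, a byte address decomposes into a line number and an offset within that line. A small sketch of this decomposition (the 64-byte line size is a common choice, assumed here, not taken from the text):

```python
LINE_SIZE = 64  # bytes per cache line; a common size, assumed for illustration

def split_address(addr):
    """Split a byte address into (line number, offset within the line)."""
    return addr // LINE_SIZE, addr % LINE_SIZE
```

For example, byte address 130 with 64-byte lines falls in line 2 at offset 2, so a miss on address 130 brings in all of bytes 128 through 191.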
    The processor–cache interface has a number of parameters. Those that directly affect processor performance (Figure 4.4) include the following:
    1. Physical word—unit of transfer between processor and cache.
    Typical physical word sizes: 2–4 bytes (the minimum, used in small core-type processors); 8 bytes and larger (multiple-instruction-issue, i.e., superscalar, processors).
    2. Block size (sometimes called line)—usually the basic unit of transfer between cache and memory. It consists of n physical words transferred from the main memory via the bus.
    3. Access time for a cache hit—this is a property of the cache size and organization.
    4. Access time for a cache miss—property of the memory and bus.
    5. Time to compute a real address given a virtual address when the translation is not in the translation lookaside buffer (the not-in-TLB time)—property of the address translation facility.
    6. Number of processor requests per cycle.
    Figure 4.4 Parameters affecting processor performance.
    Cache performance is measured by the miss rate or the probability that a reference made to the cache is not found. The miss rate times the miss time is the delay penalty due to the cache miss. In simple processors, the processor stalls on a cache miss.
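The miss-rate arithmetic in this passage can be made concrete. A short sketch for a simple stalling processor, with illustrative numbers that are not from the text:

```python
def miss_delay_penalty(miss_rate, miss_time):
    """Average delay per reference due to misses: miss rate times miss time."""
    return miss_rate * miss_time

def effective_access_time(hit_time, miss_rate, miss_time):
    """Average time per reference when the processor stalls on a miss."""
    return hit_time + miss_delay_penalty(miss_rate, miss_time)
```

For example, with a 1-cycle hit time, a 5% miss rate, and a 40-cycle miss time, the delay penalty is 0.05 × 40 = 2 cycles, so each reference effectively costs 3 cycles on average.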