Computer Science

Clock speed

Clock speed is the rate at which a computer's processor clock ticks, pacing the execution of instructions. It is typically measured in gigahertz (GHz). A higher clock speed generally results in faster performance, although the number of clock cycles each instruction requires also matters.

Written by Perlego with AI assistance

2 Key excerpts on "Clock speed"

  • Essentials of Computer Architecture
    One of the primary questions about processors concerns speed: how fast does the fetch-execute cycle operate? The answer depends on the processor, the technology used to store a program, and the time required to execute each instruction. On one hand, a processor used as a microcontroller to actuate a physical device (e.g., an electric door) can be relatively slow because a response time under one-tenth of a second seems fast to a human. On the other hand, a processor used in the highest-speed computers must be as fast as possible because the goal is maximum performance.
    As we saw in Chapter 2, most processors use a clock to control the rate at which the underlying digital logic operates. Anyone who has purchased a computer knows that sales personnel push customers to purchase a fast clock with the argument that a higher clock rate will result in higher performance. Although a higher clock rate usually means higher processing speed, it is important to realize that the clock rate does not give the rate at which the fetch-execute cycle proceeds. In particular, in most systems, the time required for the execute portion of the cycle depends on the instruction being executed. We will see later that operations involving memory access or I/O can require significantly more time (i.e., more clock cycles) than those that do not. The time also varies among basic arithmetic operations: integer multiplication or division requires more time than integer addition or subtraction. Floating point computation is especially costly because floating point operations usually require more clock cycles than equivalent integer operations; a single floating point division can require orders of magnitude more clock cycles than an integer addition. (A rough cycle-count sketch of this point appears after the excerpts below.)
    For now, it is sufficient to remember the general principle:
    The fetch-execute cycle may not proceed at a fixed rate because the time taken to execute an instruction depends on the operation being performed. An operation such as multiplication requires more time than an operation such as addition.

    4.14 Control: Getting Started And Stopping

    So far, we have discussed a processor running a fetch-execute cycle without giving details. We now need to answer two basic questions. How does the processor start running the fetch-execute cycle? What happens after the processor executes the last step in a program?
  • Computer Architecture: A Quantitative Approach
    John L. Hennessy and David A. Patterson (Morgan Kaufmann, 2011)
    In the examples above, we needed the fraction consumed by the new and improved version; often it is difficult to measure these times directly. In the next section, we will see another way of doing such comparisons based on the use of an equation that decomposes the CPU execution time into three separate components. If we know how an alternative affects these three components, we can determine its overall performance. Furthermore, it is often possible to build simulators that measure these components before the hardware is actually designed.

    The Processor Performance Equation

    Essentially all computers are constructed using a clock running at a constant rate. These discrete time events are called ticks, clock ticks, clock periods, clocks, cycles, or clock cycles. Computer designers refer to the time of a clock period by its duration (e.g., 1 ns) or by its rate (e.g., 1 GHz). CPU time for a program can then be expressed two ways:

    CPU time = CPU clock cycles for a program × Clock cycle time

    or

    CPU time = CPU clock cycles for a program / Clock rate
    In addition to the number of clock cycles needed to execute a program, we can also count the number of instructions executed—the instruction path length or instruction count (IC). If we know the number of clock cycles and the instruction count, we can calculate the average number of clock cycles per instruction (CPI). Because it is easier to work with, and because we will deal with simple processors in this chapter, we use CPI. Designers sometimes also use instructions per clock (IPC), which is the inverse of CPI.
    CPI is computed as:

    CPI = CPU clock cycles for a program / Instruction count

    This processor figure of merit provides insight into different styles of instruction sets and implementations, and we will use it extensively in the next four chapters. By transposing the instruction count in the above formula, clock cycles can be defined as IC × CPI. This allows us to use CPI in the execution time formula:

    CPU time = IC × CPI × Clock cycle time

    Expanding the first formula into the units of measurement shows how the pieces fit together:

    (Instructions / Program) × (Clock cycles / Instruction) × (Seconds / Clock cycle) = Seconds / Program = CPU time

    As this formula demonstrates, processor performance is dependent upon three characteristics: clock cycle time (or rate), clock cycles per instruction, and instruction count. Furthermore, CPU time is equally dependent on these three characteristics: a 10% improvement in any one of them leads to a 10% improvement in CPU time.
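To make the performance equation concrete, here is a minimal Python sketch. The instruction count, cycle count, and clock rate are made-up illustrative numbers, not measurements from any real program or processor.

```python
# A minimal sketch of the processor performance equation; all workload
# numbers here are assumptions chosen for illustration.
instruction_count = 5_000_000   # IC: instructions executed
cpu_clock_cycles = 12_500_000   # total clock cycles for the program
clock_rate_hz = 1e9             # 1 GHz clock

cpi = cpu_clock_cycles / instruction_count  # clock cycles per instruction
ipc = 1 / cpi                               # instructions per clock (inverse of CPI)
clock_cycle_time = 1 / clock_rate_hz        # seconds per cycle (1 ns here)

# CPU time = IC x CPI x clock cycle time
cpu_time = instruction_count * cpi * clock_cycle_time

print(f"CPI = {cpi:.2f}, IPC = {ipc:.2f}")    # CPI = 2.50, IPC = 0.40
print(f"CPU time = {cpu_time * 1e3:.1f} ms")  # 12.5 ms
```

Note that CPI here is an average over the entire program; individual instructions can cost very different numbers of cycles, as the first excerpt explains.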
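The closing claim, that CPU time depends equally on all three factors, can be checked numerically. Continuing with the same made-up workload, improving any one factor by 10% reduces CPU time by 10%:

```python
# Each variant below improves exactly one factor in
# CPU time = IC x CPI x clock cycle time (same assumed workload as above).
def cpu_time(ic, cpi, cycle_time_s):
    return ic * cpi * cycle_time_s

base = cpu_time(5_000_000, 2.5, 1e-9)
variants = {
    "10% fewer instructions": (4_500_000, 2.5, 1e-9),
    "10% lower CPI": (5_000_000, 2.25, 1e-9),
    "10% shorter clock cycle": (5_000_000, 2.5, 0.9e-9),
}
for name, args in variants.items():
    saving = (1 - cpu_time(*args) / base) * 100
    print(f"{name}: {saving:.0f}% less CPU time")  # 10% in every case
```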
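Returning to the first excerpt's point that the fetch-execute cycle has no fixed rate: a back-of-the-envelope sketch can show how an instruction mix translates into cycles and time. The per-operation cycle counts below are illustrative assumptions only; real costs vary widely across processors, but the ordering (addition cheapest, floating point division and memory access far more expensive) is typical.

```python
# Assumed cycle costs per operation class; not figures for any real CPU.
CYCLES_PER_OP = {
    "int_add": 1,     # integer add/subtract: cheapest
    "int_mul": 3,     # integer multiply: several cycles
    "fp_add": 4,      # floating point add
    "fp_div": 40,     # floating point divide: especially costly
    "mem_load": 100,  # memory access can dwarf arithmetic
}

def total_cycles(instruction_stream):
    """Sum the assumed cycle cost of each executed instruction."""
    return sum(CYCLES_PER_OP[op] for op in instruction_stream)

program = ["mem_load", "int_add", "int_mul", "fp_div", "int_add"]
cycles = total_cycles(program)
clock_rate_hz = 2e9  # assume a 2 GHz clock
print(f"{len(program)} instructions, {cycles} cycles")
print(f"~{cycles / clock_rate_hz * 1e9:.1f} ns at 2 GHz")
```

Under these assumptions, five instructions cost 145 cycles, so the effective fetch-execute rate is far below one instruction per clock tick.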