Computer Science

Complexity analysis

Complexity analysis is the study of the resources required to solve a problem as the input size grows. It helps in understanding the efficiency of algorithms and data structures. The goal is to design algorithms that can solve problems in a reasonable amount of time and space.

Written by Perlego with AI-assistance

3 Key excerpts on "Complexity analysis"

  • Consumer Optimization Problem Solving
    • Alfred L Norman(Author)
    • 2014(Publication Date)
    • WSPC
      (Publisher)
    Analyzing human algorithms requires a very different approach than analyzing computer algorithms. Computer algorithms are specified so that they can be mathematically analyzed. Human algorithms must be inferred from human behavior. The fact that computational complexity definitions are based on a fixed cost of elementary operations does not create a problem in analyzing human procedures. With learning, the cost of performing an elementary operation is likely to decline. This does not affect the computational complexity definitions as long as there is a lower bound representing the learning limit. The definitions strip constants in determining to which equivalence class an algorithm belongs.
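The point about stripping constants can be illustrated with a small sketch (the per-operation costs and the factor of 5 are hypothetical): two procedures whose step counts differ only by a constant factor belong to the same equivalence class, because the ratio of their costs stays bounded as the input size grows — exactly why a declining cost of elementary operations, bounded below by a learning limit, leaves the complexity class unchanged.

```python
# Hypothetical cost functions: a novice performs each elementary
# operation at cost 5, an expert (after learning) at the lower
# bound of 1. Both procedures execute n comparisons, so both
# are Theta(n).
def novice_cost(n: int) -> int:
    return 5 * n   # constant factor 5 per operation

def expert_cost(n: int) -> int:
    return 1 * n   # learning limit: lower bound of 1 per operation

# The ratio is bounded (here, exactly 5) for every input size,
# so stripping constants places both in the same class.
for n in (10, 1000, 100000):
    assert novice_cost(n) / expert_cost(n) == 5
print("same equivalence class: Theta(n)")
```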
To determine the difficulty of solving a problem with a computer algorithm, the researcher constructs an algorithm and shows that no more efficient algorithm exists, thus establishing the computational complexity of the problem. The approach is to use mathematical analysis of the specified algorithm, for example [Norman and Jung (1977)]. A traditional computational complexity approach can be used to study economic theory, such as providing an explanation for money, [Norman (1987)], and a Bayesian explanation of F. Knight’s uncertainty versus risk, [Norman and Shimer (1994)]. For a discussion of the relationship between computability, complexity and economics see [Velupillai (2010)] and [Velupillai (2005)].
In studying procedures used by humans, the procedure must be inferred from the subjects’ behavior. Human procedures can be analyzed by watching the hand motions of subjects handling the test objects, or by recording which buttons the subjects click on a computer screen. Asking the subjects what they are doing can also be illuminating. Algorithms that belong to different equivalence classes can be distinguished using statistical techniques such as regression analysis. We use these approaches to determine what procedures humans use in ordering alternatives in Chapter 3.
  • Theory of Computation Simplified
    eBook - ePub

    Theory of Computation Simplified

    Simulate Real-world Computing Machines and Problems with Strong Principles of Computation (English Edition)

    • Dr. Varsha H. Patil, Dr. Vaishali S. Pawar, Dr. Swati A. Bhavsar, Dr. Aboli H. Patil(Authors)
    • 2022(Publication Date)
    • BPB Publications
      (Publisher)
    Algorithmics is a field of computer science, defined as the study of algorithms. The overall goal of algorithmics is an understanding of the complexity of algorithms. The study includes the design and analysis of algorithms.
    Although computer scientists have studied algorithms and algorithm efficiency extensively, the field has not been given an official name. Brassard and Bratley coined the term algorithmics, which they defined as the systematic study of the fundamental techniques used to design and analyze efficient algorithms.
When we want to solve a problem, there may be a choice of algorithms available. In such a case, it is important to decide which one to use. Depending on our priorities and the limits of the equipment available to us, we may want to choose the algorithm that takes the least time, or uses the least storage, or is the easiest to program, and so on. The answer can depend on many factors, such as the numbers involved, the way the problem is presented, or the speed and storage capacity of the available computing equipment.
    It may be possible that none of the available algorithms is entirely suitable, and therefore, we have to design a new algorithm of our own. Algorithmics is the science that lets us evaluate the effect of these various external factors on the available algorithms, so that we can choose the one that best suits our particular circumstances; it is also the science that tells us how to design a new algorithm.
Algorithmics includes the following:
    • How to devise algorithms
  • Ant Colony Optimization and Constraint Programming
    • Christine Solnon(Author)
    • 2013(Publication Date)
    • Wiley-ISTE
      (Publisher)
    Chapter 2

    Computational Complexity

A problem is said to be combinatorial if it can be resolved by the review of a finite set of combinations. Most often, such a solving process runs into an explosion in the number of combinations to review. This is the case, for example, when a timetable has to be designed. If there are only a few courses to schedule, the number of combinations is rather small and the problem is quickly solved. However, adding a few more courses may result in such an increase in the number of combinations that it is no longer possible to find a solution within a reasonable amount of time.
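The explosion in the timetabling example can be made concrete with a small sketch (the course and slot counts are hypothetical, and all constraints are ignored): assigning each of n courses independently to one of k time slots yields k**n candidate timetables, so a handful of extra courses multiplies the search space enormously.

```python
# Hypothetical timetabling instance: each course is assigned
# independently to one of `slots` time slots, constraints ignored.
def candidate_timetables(courses: int, slots: int) -> int:
    return slots ** courses

# Doubling the number of courses squares the search space.
print(candidate_timetables(5, 8))   # 8**5  = 32768
print(candidate_timetables(10, 8))  # 8**10 = 1073741824
```

With 5 courses an exhaustive review is trivial; with 10 it already requires examining over a billion combinations, which is the combinatorial explosion the chapter formalizes.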
    This kind of combinatorial explosion is formally characterized by the theory of computational complexity, which classifies problems with respect to the difficulty of solving them. We introduce algorithm complexity in section 2.1 , which allows us to evaluate the amount of resources needed to run an algorithm. In section 2.2 , we introduce the main complexity classes and describe the problems we are interested in within this classification. We show that some instances of a problem may be more difficult to solve than others in section 2.3 or, in other words, that the input data may change the difficulty involved in finding a solution in practice. We introduce the concepts of phase transition and search landscape which may be used to characterize instance hardness. Finally, in section 2.4 , we provide an overview of the main approaches that may be used to solve combinatorial problems.

    2.1. Complexity of an algorithm

    Algorithmic complexity utilizes computational resources to characterize algorithm scalability. In particular, the time complexity of an algorithm gives an order of magnitude of the number of elementary instructions that are executed at run time. It is used to compare different algorithms independently of a given computer or programming language.
    Time complexity usually depends on the size of the input data of the algorithm. Indeed, given a problem, we usually want to solve different instances of this problem where each instance corresponds to different input data.
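As a sketch of this idea (the instruction counts are simplified to one comparison per loop iteration), one can count elementary comparisons rather than measure wall-clock time, which makes the comparison between two algorithms independent of any particular computer or programming language:

```python
# Count comparisons performed by linear search: O(n) in the worst case.
def linear_search_comparisons(xs, target):
    count = 0
    for x in xs:
        count += 1
        if x == target:
            break
    return count

# Binary search on sorted input halves the interval each step:
# O(log n) comparisons.
def binary_search_comparisons(xs, target):
    lo, hi, count = 0, len(xs) - 1, 0
    while lo <= hi:
        count += 1
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return count
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return count

xs = list(range(1024))
print(linear_search_comparisons(xs, 1023))  # 1024: grows linearly with n
print(binary_search_comparisons(xs, 1023))  # 11: roughly log2(n) + 1
```

Both counts depend only on the input size n, not on the hardware, which is exactly the order-of-magnitude view of run time that time complexity provides.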