Computer Science

Recursion Programming

Recursion programming is a technique in which a function calls itself repeatedly until a specific condition, known as the base case, is met. It is commonly used to solve problems that can be broken down into smaller sub-problems of the same form. Recursion can be a powerful tool for solving complex problems, but it requires careful design to avoid infinite recursion.

Written by Perlego with AI-assistance

4 Key excerpts on "Recursion Programming"

  • Learning and Awareness
    • Ference Marton, Shirley Booth (Authors)
    • 2013 (Publication Date)
    • Routledge (Publisher)
    Then there is a return to a normal state by reversing the process. Recursion is a construct which, as far as it occurs in programmed algorithms, allows the programmer to bring about repetition, in a rather special way compared to the more usual (or, one might say, in other older programming environments) iterative method of looping through instructions until a specified condition is met. For example, a number of cycles has been made, or a required state is reached, or a particular item has been found in the data, or time runs out. To write such a looping algorithm in a program it is necessary to define what should be done each time the cycle is performed and exactly what conditions should call a halt. Recursion, in contrast, is able to bring about repetition through the fundamental property of self-reference: A statement that is to be repeated makes use of itself in a more limited form—thereby getting repeated, until a previously specified terminating case is reached. The context in which the students in the study met recursion was as a way of writing ML functions that facilitate repetition, brought about by the defined function referring to itself, in an ever diminishing form, until a suitable criterion, or terminating case, is met (Booth, 1992b). As was mentioned in chapter 2, ML is grounded in mathematics, and recursion is directly equivalent to mathematical induction, an approach to proving a mathematical statement by assuming the truth of a similar statement and knowing that one instantiation of the statement is proven true. It should also be said that recursion, like mathematical induction, is a notoriously difficult concept for students to make sense of, and a good deal of research has gone into identifying ways of simplifying it for the learner (Anderson, Pirolli, & Farrell, 1988; Henderson & Romero, 1989).
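    To make the contrast concrete, here is a small sketch of the two styles of repetition (an illustration of mine, not from the excerpt; the course described here used ML, but Python is used below for readability):

        def sum_to_iterative(n):
            # Iterative: loop through instructions until a halting condition is met.
            total = 0
            while n > 0:
                total += n
                n -= 1
            return total

        def sum_to_recursive(n):
            # Recursive: the definition refers to itself in an ever diminishing form.
            if n <= 0:
                return 0                           # terminating case
            return n + sum_to_recursive(n - 1)     # self-reference on a smaller value

    Both compute 1 + 2 + ... + n; the recursive version repeats not by looping but by calling a more limited instance of itself until the terminating case is reached.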
  • Essential Algorithms
    A Practical Approach to Computer Algorithms Using Python and C#

    • Rod Stephens (Author)
    • 2019 (Publication Date)
    • Wiley (Publisher)
    CHAPTER 9 Recursion
    Recursion occurs when a method calls itself. The recursion can be direct (when the method calls itself) or indirect (when the method calls some other method that then calls the first method).
    Recursion can also be single (when the method calls itself once) or multiple (when the method calls itself multiple times).
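    These distinctions can be sketched quickly in Python (an illustration of mine, not the book's code): countdown shows direct, single recursion, while is_even and is_odd recurse indirectly through each other.

        def countdown(n):
            # Direct, single recursion: the method calls itself once.
            if n <= 0:
                return
            print(n)
            countdown(n - 1)

        def is_even(n):
            # Indirect recursion: is_even calls is_odd, which calls is_even again.
            return True if n == 0 else is_odd(n - 1)

        def is_odd(n):
            return False if n == 0 else is_even(n - 1)

    Multiple recursion, where one call spawns two or more recursive calls, appears in the Fibonacci and Tower of Hanoi algorithms discussed below under Basic Algorithms.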
    Recursive algorithms can be confusing because people don't naturally think recursively. For example, to paint a fence, you probably would start at one end and start painting until you reach the other end. It is less intuitive to think about breaking the fence into left and right halves and then solving the problem by recursively painting each half.
    However, some problems are naturally recursive. They have a structure that allows a recursive algorithm to easily keep track of its progress and find a solution. For example, trees are naturally recursive because branches divide into smaller branches that divide into still smaller branches and so on. For that reason, algorithms that build, draw, and search trees are often recursive.
    This chapter explains some useful algorithms that are naturally recursive. Some of these algorithms are useful by themselves, but learning how to use recursion in general is far more important than learning how to solve a single problem. Once you understand recursion, you can find it in many programming situations.
    Recursion is not always the best solution, however, so this chapter also explains how you can remove recursion from a program when recursion might cause poor performance.

    Basic Algorithms

    Some problems have naturally recursive solutions. The following sections describe several naturally recursive algorithms for calculating factorials, finding Fibonacci numbers, solving the rod-cutting problem, and moving disks in the Tower of Hanoi puzzle.
    These relatively straightforward algorithms demonstrate important concepts used by recursive algorithms. Once you understand them, you'll be ready to move on to the more complicated algorithms described in the rest of this chapter.
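    As one concrete instance of a naturally recursive algorithm of this kind, the following Python sketch (my illustration, not the book's code) solves the Tower of Hanoi puzzle by recursively moving the top n - 1 disks out of the way, moving the largest disk, and then moving the n - 1 disks back on top:

        def hanoi(n, source, target, spare):
            # Move n disks from the source peg to the target peg.
            if n == 0:
                return                                # base case: nothing to move
            hanoi(n - 1, source, spare, target)       # clear the top n - 1 disks
            print(f"move disk {n} from {source} to {target}")
            hanoi(n - 1, spare, target, source)       # put them back on top

        hanoi(3, "A", "C", "B")                       # prints the 7 moves for 3 disks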
  • Programming Interviews Exposed
    Coding Your Way Through the Interview

    • John Mongan, Noah Suojanen Kindler, Eric Giguere (Authors)
    • 2018 (Publication Date)
    • Wrox (Publisher)
    8 Recursion
    Recursion is a deceptively simple concept: any function that calls itself is recursive. Despite this apparent simplicity, understanding and applying recursion can be surprisingly complex. One of the major barriers to understanding recursion is that general descriptions tend to become highly theoretical, abstract, and mathematical. Although there is certainly value in that approach, this chapter instead follows a more pragmatic course, focusing on example, application, and comparison of recursive and iterative (nonrecursive) algorithms.

    UNDERSTANDING RECURSION

    Recursion is useful for tasks that can be defined in terms of similar subtasks. For example, sort, search, and traversal problems often have simple recursive solutions. A recursive function performs a task in part by calling itself to perform the subtasks. At some point, the function encounters a subtask that it can perform without calling itself. This case, in which the function does not recurse, is called the base case; the former, in which the function calls itself to perform a subtask, is referred to as the recursive case.

    NOTE

    Recursive algorithms have two cases: recursive cases and base cases.
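    As a quick illustration of the two cases in one of the search problems mentioned above, here is a recursive binary search (a sketch of mine, not the book's code); the comments mark the base cases and the recursive cases.

        def binary_search(items, target, low=0, high=None):
            # items must be sorted; returns an index of target, or None if absent.
            if high is None:
                high = len(items) - 1
            if low > high:
                return None                                           # base case: range is empty
            mid = (low + high) // 2
            if items[mid] == target:
                return mid                                            # base case: target found
            if items[mid] < target:
                return binary_search(items, target, mid + 1, high)    # recursive case: right half
            return binary_search(items, target, low, mid - 1)         # recursive case: left half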
    These concepts can be illustrated with a simple and commonly used example: the factorial operator. n! (pronounced “n factorial”) is the product of all integers between n and 1. For example, 4! = 4 · 3 · 2 · 1 = 24. n! can be more formally defined as follows:
    n! = n (n – 1)!
    0! = 1! = 1
    This definition leads easily to a recursive implementation of factorial. The task is to determine the value of n!, and the subtask is to determine the value of (n – 1)!. In the recursive case, when n is greater than 1, the function calls itself to determine the value of (n – 1)! and multiplies that by n. In the base case, when n is 0 or 1, the function simply returns 1.
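    The implementation described here can be sketched directly from the definition (a Python rendering of mine, not the book's own code):

        def factorial(n):
            if n <= 1:                        # base case: 0! = 1! = 1
                return 1
            return n * factorial(n - 1)       # recursive case: n! = n * (n - 1)!

        print(factorial(4))                   # 24, matching 4 * 3 * 2 * 1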
  • Algorithms: Design Techniques And Analysis (Revised Edition)
    • M H Alsuwaiyel (Author)
    • 2016 (Publication Date)
    • WSPC (Publisher)
    PART 2 Techniques Based on Recursion
    This part of the book is concerned with a particular class of algorithms, called recursive algorithms. These algorithms turn out to be of fundamental importance and indispensable in virtually every area of the field of computer science. The use of recursion makes it possible to solve complex problems using algorithms that are concise, easy to comprehend, and efficient (from an algorithmic point of view). In its simplest form, recursion is the process of dividing the problem into one or more subproblems, which are identical in structure to the original problem, and then combining the solutions of these subproblems to obtain the solution to the original problem. We identify three special cases of this general design technique: (1) Induction or tail-recursion. (2) Nonoverlapping subproblems. (3) Overlapping subproblems with redundant invocations of subproblems, allowing space to be traded for time. The higher numbered cases subsume the lower numbered ones. The first two cases do not require additional space for the maintenance of solutions for continued reuse. The third class, however, opens the possibility of efficient solutions for many problems that at first glance appear to be time-consuming to solve.
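    The third case is where storing subproblem solutions pays off. As a small illustrative sketch (mine, not the book's), the memoized Fibonacci function below solves each overlapping subproblem only once, trading space for time:

        from functools import lru_cache

        @lru_cache(maxsize=None)
        def fib(n):
            # Base cases: fib(0) = 0, fib(1) = 1.
            if n <= 1:
                return n
            # Without the cache, fib(n - 1) and fib(n - 2) would redundantly
            # recompute the same subproblems; caching each result avoids that.
            return fib(n - 1) + fib(n - 2)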
    Chapter 4 is devoted to the study of induction as a technique for the development of algorithms. In other words, the idea of induction in mathematical proofs is carried over to the design of efficient algorithms. In this chapter, several examples are presented to show how to use induction to solve increasingly sophisticated problems.
    Chapter 5 provides a general overview of one of the most important algorithm design techniques, namely divide and conquer. First, we derive divide-and-conquer algorithms for the search problem and sorting by merging. In particular, Algorithm MERGESORT is compared with Algorithm BOTTOMUPSORT presented in Chapter 1, which is an iterative version of the former. This comparison reveals the most appealing merits of divide-and-conquer algorithms: conciseness, ease of comprehension and implementation, and most importantly the simple inductive proofs for the correctness of divide-and-conquer algorithms. Next, some useful algorithms such as Algorithms quicksort and select for finding the kth smallest element are presented.
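    To give a concrete feel for the divide-and-conquer style discussed in that chapter, the following Python sketch (my illustration, not the book's MERGESORT pseudocode) sorts a list by recursively sorting each half and merging the results:

        def merge_sort(items):
            if len(items) <= 1:
                return items                      # base case: already sorted
            mid = len(items) // 2
            left = merge_sort(items[:mid])        # recursively sort each half
            right = merge_sort(items[mid:])
            merged, i, j = [], 0, 0               # merge the two sorted halves
            while i < len(left) and j < len(right):
                if left[i] <= right[j]:
                    merged.append(left[i])
                    i += 1
                else:
                    merged.append(right[j])
                    j += 1
            merged.extend(left[i:])
            merged.extend(right[j:])
            return merged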
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.