Computer Science

Petabyte

A petabyte is a unit of digital information storage that is equal to 1,000 terabytes or one quadrillion bytes. It is commonly used to describe the storage capacity of large computer systems, data centers, and cloud storage services. A petabyte can hold vast amounts of data, including text, images, videos, and other types of digital content.

Written by Perlego with AI-assistance

1 Key Excerpt on "Petabyte"

  • Digital Transformation

    Survive and Thrive in an Era of Mass Extinction

    • Thomas M. Siebel (Author)
    • 2019 (Publication Date)
    • Rodin Books (Publisher)
    Using base-2 arithmetic, we can represent any number. The ASCII encoding system, developed from telegraph code in the 1960s, enables the representation of any character or word as a sequence of zeros and ones.
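    As an illustration of the idea above, the short Python sketch below renders each character of a word as its eight-bit ASCII code (the helper name to_bits is ours, not from the text):

    ```python
    # Represent characters as sequences of zeros and ones via their ASCII codes.
    def to_bits(ch: str) -> str:
        # format(..., "08b") gives the 8-digit base-2 form of the character code
        return format(ord(ch), "08b")

    word_bits = " ".join(to_bits(c) for c in "Hi")
    print(word_bits)  # 01001000 01101001
    ```

    Each group of eight digits is one byte, which is the link to the storage units defined next.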
    As information theory developed and we began to amass increasingly large data sets, a language was developed to describe this phenomenon. The essential unit of information is a bit. A string of eight bits in a sequence is a byte. We measure computer storage capacity as multiples of bytes as follows:
    One byte is 8 bits. One thousand (1000) bytes is a kilobyte.
    One million (1000²) bytes is a megabyte.
    One billion (1000³) bytes is a gigabyte.
    One trillion (1000⁴) bytes is a terabyte.
    One quadrillion (1000⁵) bytes is a petabyte.
    One quintillion (1000⁶) bytes is an exabyte.
    One sextillion (1000⁷) bytes is a zettabyte.
    One septillion (1000⁸) bytes is a yottabyte.
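    The ladder of decimal units above can be expressed programmatically. This small Python sketch (the names UNITS and unit_bytes are illustrative) derives each unit as a power of 1000:

    ```python
    # Decimal (SI) byte units: each step up the ladder multiplies by 1000.
    UNITS = ["kilobyte", "megabyte", "gigabyte", "terabyte",
             "petabyte", "exabyte", "zettabyte", "yottabyte"]

    def unit_bytes(name: str) -> int:
        # kilobyte = 1000^1, megabyte = 1000^2, ..., petabyte = 1000^5, ...
        return 1000 ** (UNITS.index(name) + 1)

    print(unit_bytes("petabyte"))  # 1000000000000000 (one quadrillion bytes)
    ```

    Note that these are the decimal (base-10) definitions used in the excerpt; binary prefixes (kibibyte, mebibyte, and so on) use powers of 1024 instead.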
    To put this in perspective, all the information contained in the U.S. Library of Congress is on the order of 15 terabytes.[1] It is not uncommon for large corporations today to house scores of petabytes of data. Google, Facebook, Amazon, and Microsoft collectively house on the order of an exabyte of data.[2] As we think about big data in today's computing world, we are commonly addressing petabyte- and exabyte-scale problems.
    There are three essential constraints on computing capacity and the resulting complexity of the problem a computer can address. These relate to (1) the amount of available storage, (2) the size of the binary number the central processing unit (CPU) can add, and (3) the rate at which the CPU can execute addition. Over the past 70 years, the capacity of each has increased dramatically.
    As storage technology advanced from punch cards, in common use as recently as the 1970s, to today’s solid-state drive (SSD) non-volatile memory storage devices, the cost of storage has plummeted, and the capacity has expanded exponentially. A computer punch card can store 960 bits of information. A modern SSD array can access exabytes of data.
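    Using the figures in the passage (960 bits per punch card, a petabyte of 1000⁵ bytes), a quick back-of-the-envelope Python calculation shows roughly how many punch cards a single petabyte of storage would replace:

    ```python
    # Figures from the text: a punch card holds 960 bits; a petabyte is 1000^5 bytes.
    PUNCH_CARD_BITS = 960
    PETABYTE_BITS = 1000 ** 5 * 8  # 10^15 bytes, 8 bits each

    # Number of 960-bit punch cards needed to hold one petabyte (rounded down)
    cards = PETABYTE_BITS // PUNCH_CARD_BITS
    print(f"{cards:,}")  # 8,333,333,333,333 — over eight trillion cards
    ```

    The comparison makes the exponential growth in storage density concrete: data that would fill trillions of punch cards now fits in a single modern SSD array.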