Computer Science

Byte

A byte is a unit of digital information that consists of 8 bits. It is the basic unit for storing and processing data in computer systems. A single byte can represent a small amount of data, such as a single character or a small integer, and is fundamental to the operation of computers and digital devices.

Written by Perlego with AI-assistance

7 Key excerpts on "Byte"

  • Learn C Programming
    See the Appendix for texts that treat this subject more fully.
    The basic data value in C is a Byte, a sequence of 8 bits. The set of values a Byte can represent has 256, or 2⁸, members. Interpreted as an unsigned integer, these values range from 0 to 255, or 2⁸ − 1; 0 must be represented as one of the 256 values, so we can't leave it out, and the largest value is 255 rather than 256. Interpreted as a signed integer, a Byte ranges from −128 to 127. In either case, there are only 256 unique combinations of 1s and 0s.
    While most humans don't ordinarily count this high in their daily routine, for a computer this is a very narrow range of values. A Byte is the smallest chunk of data, since each Byte in memory can be addressed directly. A Byte is also commonly used for alphanumeric characters (such as those you are now reading) but is not large enough for Unicode characters. American Standard Code for Information Interchange (ASCII) characters, UTF-8, and Unicode characters will be explained in greater detail in Chapter 15, Working with Strings.
    Chunks, or Bytes, increase in powers of 2: 1 Byte, 2 Bytes, 4 Bytes, 8 Bytes, and 16 Bytes.
    In the history of computing, there have been various Byte ranges for basic computation. The earliest and simplest CPUs used 1-Byte integers. These very rapidly developed into 16-bit computers whose address space and largest integer value could be expressed in 2 Bytes. As integer sizes increased from 2 to 4 to 8 Bytes, so too did the range of possible memory addresses and the ranges of floating-point numbers. As the problems addressed by computers expanded, computers themselves expanded, resulting in more powerful machines with a 4-Byte address range and 4-Byte integer values. These machines were prevalent from the 1990s to the early part of the 21st century.
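    To make these ranges concrete, here is a minimal C sketch (ours, not from the book) that prints the limits of an 8-bit value using the standard limits.h constants:

        #include <stdio.h>
        #include <limits.h>

        int main(void) {
            /* An unsigned char holds 2^8 = 256 values: 0..255 */
            printf("unsigned char: 0 to %u\n", (unsigned)UCHAR_MAX);

            /* A signed char holds the same 256 bit patterns: -128..127 */
            printf("signed char: %d to %d\n", SCHAR_MIN, SCHAR_MAX);

            /* CHAR_BIT confirms the number of bits per Byte */
            printf("bits per Byte: %d\n", CHAR_BIT);
            return 0;
        }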
  • Information Technology
    An Introduction for Today's Digital World

    We represent these two states using 0s and 1s. The two available binary digits are 0 and 1, so we call a single 0 or 1 a bit. Humans are not set up to easily understand dozens, hundreds, thousands, millions, or trillions of bits. For our convenience, we group bits together into larger collections. The meaningful sizes that we deal with are Bytes and words. A Byte is 8 bits. One Byte of storage lets us store a small number or a single character of a string. The word is some number of Bytes that denotes the typical size of data within the computer. Modern computers have word sizes of 32 or 64 bits (4 or 8 Bytes). When discussing the size of memory and storage, we use abbreviations to help us cope with large sizes, as described in Table 3.1.
    Did You Know? When we talk about kiloBytes, megaBytes, gigaBytes, and teraBytes, the actual value differs depending on whether we are using the K/M/G prefix as a decimal term or a binary term. If binary, then a kiloByte is 2¹⁰ = 1024 Bytes, but if decimal, then a kiloByte is 10³ = 1000 Bytes.
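    To make the binary-versus-decimal prefix distinction concrete, here is a minimal C sketch (ours, not the book's) that computes both interpretations of the kilo and mega prefixes:

        #include <stdio.h>

        int main(void) {
            /* Binary prefix: 1 kiloByte (KiB) = 2^10 Bytes */
            unsigned long kib = 1UL << 10;   /* 1024 */
            /* Decimal prefix: 1 kiloByte (kB) = 10^3 Bytes */
            unsigned long kb  = 1000UL;

            printf("binary kiloByte:  %lu Bytes\n", kib);
            printf("decimal kiloByte: %lu Bytes\n", kb);

            /* The gap widens with each prefix step */
            printf("binary megaByte:  %lu Bytes\n", 1UL << 20);  /* 1048576 */
            printf("decimal megaByte: %lu Bytes\n", 1000000UL);
            return 0;
        }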
  • Foundations of Discrete Mathematics with Algorithms and Programming
    Algorithms exist independent of any computer. In the absence of a computer it is the human being who plays the role of a machine. Of course, humans are subject to errors, fatigue, etc., whereas computers can execute long sequences of instructions with reliability, and with thoughtless obedience.
    The modern digital computer was invented to facilitate difficult, long and tedious calculations. A computer is only a machine which manipulates a material called “information.” Information can be described as a finite sequence of letters, digits, and some special characters like *, +, :, etc.
    The information manipulated by a computer must be:
    1. Stored : This is the role of the memory.
    2. Manipulated : This is the role of the Central Processing Unit.
    3. Exchanged (with a programmer or with other machines): This is the role of the input/output peripheral units like the keyboard and monitor.
    In the figure below, a model of a classical computer is depicted (see [7 ]).
    Figure 5.2 A model of a classic computer
    Main memory
    Information is stored in memory as a sequence of bits (binary digits). A binary digit is either 0 or 1. In fact, the information manipulated by a computer is materialized by two different voltages in the computer's circuitry (for example, 0 as a high voltage and 1 as a low voltage). The smallest storage unit of memory is referred to as a cell or a Byte. We may treat a Byte as a box containing a small chunk of information. Each Byte possesses an address which distinguishes it from the other Bytes. In fact, the addresses of the Bytes are the non-negative integers 0, 1, ..., n − 1, where n is the number of Bytes in the main memory. Thus a Byte is the smallest storage unit which has its own address, that is, the smallest addressable unit of memory. Usually a Byte is a sequence of 8 bits.
    A memory cell generally holds 8, 16, 24, or 32 bits, each bit held at a high or low voltage. The set of all Bytes is referred to as the main memory. In the main memory, the computer stores not only the sequence of statements comprising a program, indicating the manipulation/computation to be carried out on the data, but also the data on which these statements will be executed. The following parameters are associated with the memory:
    1. The capacity or size of the memory.
    2. The speed with which the instructions can be stored and retrieved.
    The main memory is also called the Random Access Memory (RAM).
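    The fact that each Byte has its own address can be observed directly in C. Here is a minimal sketch (ours, not from the book) that prints the addresses of four consecutive Bytes, which differ by exactly one:

        #include <stdio.h>

        int main(void) {
            unsigned char cells[4] = {10, 20, 30, 40};

            /* Each Byte is individually addressable; consecutive Bytes in
               the array have consecutive addresses. */
            for (int i = 0; i < 4; i++) {
                printf("cells[%d] at address %p holds %u\n",
                       i, (void *)&cells[i], (unsigned)cells[i]);
            }
            return 0;
        }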
  • Cybercrime and Information Technology
    Theory and Practice: The Computer Network Infrastructure and Computer Security, Cybersecurity Laws, Internet of Things (IoT), and Mobile Devices

    • Alex Alexandrou (Author)
    • 2021(Publication Date)
    • CRC Press
      (Publisher)
    FIGURE 1.8 The difference between a Bit and a Byte.
    The binary system is the main language used by computers because it is simple and reliable. It is also efficient: it requires a minimal number of electric circuits, is low-cost, uses little energy, and occupies minimal space.
    Computers convert between decimal and binary much faster than a human could, but it is useful to know how to do the conversion by hand (see Table 1.3, Decimal vs Binary Systems, and Table 1.4, Computer Data Conversion).
    TABLE 1.3 Decimal vs Binary System
    Decimal System: uses 10 digits, 0 to 9, and the powers of 10 for calculation (10⁰ = 1, 10¹ = 10, 10² = 100, 10³ = 1000, 10⁴ = 10000, ...).
    Binary System: uses only two digits, 0 and 1, and the powers of 2 (2⁰ = 1, 2¹ = 2, 2² = 4, 2³ = 8, 2⁴ = 16, ...).
    TABLE 1.4 Computer Data Conversion
    Unit (Byte prefix) | Actual number of Bytes (Base-2) | Approximate size (Base-10)
    KiloByte (KB) | 1,024 = 2¹⁰ | one thousand Bytes (10³)
    MegaByte (MB) | 1,048,576 = 2²⁰ | one million Bytes (10⁶)
    GigaByte (GB) | 1,073,741,824 = 2³⁰ | one billion Bytes (10⁹)
    TeraByte (TB) | 1,099,511,627,776 = 2⁴⁰ | one trillion Bytes (10¹²)
    PetaByte (PB) | 1,125,899,906,842,624 = 2⁵⁰ | one quadrillion Bytes (10¹⁵)
    ExaByte (EB) | 1,152,921,504,606,846,976 = 2⁶⁰ | one quintillion Bytes (10¹⁸)
    ZettaByte (ZB) | 1,180,591,620,717,411,303,424 = 2⁷⁰ | one sextillion Bytes (10²¹)
    YottaByte (YB) | 1,208,925,819,614,629,174,706,176 = 2⁸⁰ | one septillion Bytes (10²⁴)

    Conversion from Binary to Decimal

    We would like to convert binary 1001 to a decimal number: 1001₂ = 1·2³ + 0·2² + 0·2¹ + 1·2⁰ = 9. In Rows 1 and 2 of Table 1.5
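    A small C sketch (ours, not from the book; the helper name bin_to_dec is illustrative) performs the same positional conversion for any string of binary digits:

        #include <stdio.h>

        /* Convert a string of '0'/'1' characters to its decimal value. */
        static unsigned long bin_to_dec(const char *bits) {
            unsigned long value = 0;
            for (const char *p = bits; *p == '0' || *p == '1'; p++) {
                value = value * 2 + (unsigned long)(*p - '0'); /* shift left, add bit */
            }
            return value;
        }

        int main(void) {
            printf("1001 in binary = %lu in decimal\n", bin_to_dec("1001"));         /* 9 */
            printf("11111111 in binary = %lu in decimal\n", bin_to_dec("11111111")); /* 255 */
            return 0;
        }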
  • The Most Complex Machine
    A Survey of Computers and Computing

    1.1.2. Text. If you are like most people, there is something that might be bothering you at this point. You might reasonably point out that you have been working quite happily with computers for years— typing papers, drawing pictures, or whatever—without ever having heard or thought of binary numbers. Although bits and binary numbers are an essential aspect of the internal workings of computers, it’s true that a person who simply wants to use a computer can do so without knowing anything about them. Nevertheless, as you sit there typing on your computer, everything that the computer does is in fact accomplished by manipulating bits. We need to understand how so much can be done with just the two values zero and one.
     
    Let’s start with the simple question of how a computer can represent the characters you type as binary numbers. The answer is also simple: each possible character is assigned a unique binary code number. Most computers use a code called ASCII (American Standard Code for Information Interchange). In this code, each character is represented by an eight-bit binary number. For example, the lowercase letter ‘a’ corresponds to 01100001₂, while the comma ‘,’ is represented by 00101100₂. As we saw above, there are 2⁸, or 256, different strings of eight bits, so the ASCII code allows for 256 different characters. Only the first 128 of these are assigned standard meanings; on a particular computer, the extra code numbers are either not used or are used for special characters such as the accented e, ‘é’. Of the 128 standard codes, not all of them stand for characters that might appear on your computer screen. Some are used for so-called “nonprintable” or “control” characters, such as a tab or carriage return (which have codes 00001001₂ and 00001101₂, respectively).
    An eight-bit binary number is also called a Byte, so that it takes exactly one Byte to specify one character in ASCII. Data is often measured in Bytes rather than bits. For example, a document stored on the computer might contain 10,000 Bytes. That is another way of saying that it contains 10,000 characters, or 80,000 bits.
    Now, any ASCII code number could just as easily be written as a decimal number somewhere in the range from 0 to 255. In base ten, the codes for ‘a’, comma, and tab are 97, 44, and 9, respectively. In some sense, though, the binary numbers are closer to reality. When you press the letter ‘a’ on your keyboard, the eight bits 0, 1, 1, 0, 0, 0, 0, 1 are transmitted to the computer. If the computer is storing the letter ‘a’, then somewhere inside it that sequence of bits is stored in some way. As a user of the computer, you don’t have to be aware of any of this; as far as you are concerned, the computer simply understands the letter you type. However, its “understanding” is all based on pushing bits around, and the people who design computers (or who try to understand them) must sometimes deal with things on that level.
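    The correspondence between characters and code numbers is easy to check in C, where a char is just a small integer. A minimal sketch (ours, not from the book):

        #include <stdio.h>

        int main(void) {
            /* Printing a char with %d reveals the ASCII code behind it. */
            char samples[] = {'a', ',', '\t'};
            for (int i = 0; i < 3; i++) {
                printf("code %d\n", samples[i]);  /* prints 97, 44, 9 */
            }
            return 0;
        }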
  • Exploring Computer Hardware
    The Illustrated Guide to Understanding Computer Hardware, Components, Peripherals & Networks

    Computer Fundamentals

    A computer is a machine that can store and process data according to a sequence of instructions called a program. At the most fundamental level, these machines can only process binary data: 1s and 0s. In this chapter, we’ll take a look at using binary code to encode data, as well as binary arithmetic and number bases. We’ll look at using logic gates to build simple circuits and how they form the building blocks for electronic devices, before moving on to the fetch-execute cycle and instruction sets. Have a look at the video demos to help you understand. Open your web browser and navigate to the following website:
    elluminetpress.com/funda

    Representing Data

    The computer uses 1s and 0s to encode computer instructions and data. RAM is essentially a bank of switches: ‘off’ represents a 0 and ‘on’ represents a 1. Using this idea, data can be encoded using either ASCII or Unicode and stored in RAM or on a disc drive.

    ASCII code

    The American Standard Code for Information Interchange (ASCII) originally used a 7-bit binary code to represent letters, numbers, and other characters. Each character is assigned a binary number between 0 and 127. For example:
    Capital A is 01000001₂ (65₁₀)
    Lowercase a is 01100001₂ (97₁₀)
    Codes 0–31 are reserved for control characters such as end of line, carriage return, end of text, and so on. Codes 32–126 cover symbols, the numbers 0–9, and all lowercase and uppercase letters. The ASCII code set was later extended to 8 bits, which allowed more characters to be encoded, including mathematical symbols, international characters, and other special characters.

    Unicode

    Unicode is a universal encoding standard for representing the characters of all the languages of the world, including those with larger character sets such as Chinese, Japanese, and Korean.
    UTF-8 is a variable-length encoding system that uses one to four Bytes to represent a character. UTF-8 is backwards compatible with ASCII and widely used in internet web pages. In your HTML code you might see something like this: <meta charset="utf-8">
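    A short C sketch (ours, not from the book; the helper name dump_bytes is illustrative) makes the variable length visible by printing the raw Bytes of three UTF-8 characters: the one-Byte ASCII 'A', the two-Byte 'é', and the three-Byte '中':

        #include <stdio.h>

        static void dump_bytes(const char *label, const unsigned char *s) {
            printf("%s:", label);
            for (; *s; s++) printf(" 0x%02X", *s);
            printf("\n");
        }

        int main(void) {
            /* UTF-8 encodings written out as explicit Byte values. */
            const unsigned char a[]   = {0x41, 0};              /* 'A' : 1 Byte  */
            const unsigned char e[]   = {0xC3, 0xA9, 0};        /* 'é' : 2 Bytes */
            const unsigned char han[] = {0xE4, 0xB8, 0xAD, 0};  /* '中': 3 Bytes */

            dump_bytes("A", a);
            dump_bytes("e-acute", e);
            dump_bytes("CJK zhong", han);
            return 0;
        }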
  • The Digital Document
    • Bruce Duyshart (Author)
    • 2013(Publication Date)
    • Routledge
      (Publisher)

    Chapter 5

    Digital Data Types

    BASIC PRINCIPLES

    In some of the earliest uses of computers, the work that was carried out was often referred to as electronic data processing, or EDP.¹ Even though the technologies have changed dramatically since then, the principles involved have largely remained the same.
    Today, practically all computers are binary based. The information contained within the data files they use is encoded as a series of binary numbers, made up of strings of 1s and 0s known as bits (derived from the term binary digits). A bit can travel at the speed of light and has no colour, size, or weight. It represents the smallest element possible in the composition of information. In almost all cases, information that is processed by a computer has to be in this binary format.
    Since each bit can store only two possible states, on or off, data such as numbers, text and graphics have to be described using a series of bits. Figure 5.1 illustrates a comparison between common decimal (base-ten) numbers and the binary (base-two) numbers that are used by computers.
    Figure 5.1 A comparison between decimal and binary numbers
    At the fundamental level of computing, a software application can be used to create integers, real numbers, and alphabetic characters from a series of bits. More complex types of digital data, such as audio, graphics, and text, can also be formed. Once they have been created, data can be stored in individual computer files for later retrieval. For example, a data file might contain a sound, a picture, or some text output from a numerical analysis application. These data files can then be used in the composition of digital documents, which exist at the highest level of this hierarchy, as illustrated in Figure 5.2.
    Figure 5.2 A representation of the flow of data
    When data is stored in a file, it is usually structured in a manner that is tailored to specific types of information and that allows recovery of the data with a reasonable degree of efficiency. By using these methods, data independence can be achieved, whereby information can be used and reused across a wide range of applications, operating systems, and computer types.
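    To illustrate how one series of bits can underlie different data types, here is a minimal C sketch (ours, not from the book; it assumes 4-Byte int and float, which is typical but not guaranteed) that reinterprets the same four Bytes as text, an integer, and a floating-point number:

        #include <stdio.h>
        #include <string.h>

        int main(void) {
            /* One 4-Byte pattern, viewed three ways; memcpy performs the
               reinterpretation, which is well defined in C. */
            unsigned char bytes[4] = {0x41, 0x42, 0x43, 0x00};  /* "ABC" + NUL */

            int as_int;
            float as_float;
            memcpy(&as_int, bytes, sizeof as_int);
            memcpy(&as_float, bytes, sizeof as_float);

            printf("as text:  %s\n", (char *)bytes);
            printf("as int:   %d\n", as_int);    /* value depends on Byte order */
            printf("as float: %g\n", as_float);  /* same bits, different meaning */
            return 0;
        }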