Click for the original lesson (TED-Ed)

*Transcription below!

The vocabulary and ideas in this video allow you to explore new conversations and pathways, using the same constructions you would use in your everyday conversations and with economic material.

Go ahead and see how much you understand without reading the transcript.

Remember, try answering the questions that you yourself come up with (who, what, where, when, why, how…?) before watching the video.

After watching for the first time, write down some questions about the material to listen for the next time you watch.

I’m quite sure you will be surprised by the results… you can do it!

Additional Resources for you to Explore

The binary system used in computers is a numeric system, like the decimal system we use in our day-to-day lives. The only difference between them is that decimal uses ten symbols to represent numbers (0 to 9), while binary uses only two (0 and 1). This page draws an interesting parallel between the two systems, including the representation of rational numbers. This lesson illustrates how the binary system is used, along with other numeric systems. In the decimal system, we know that any number can be represented: you just add digits as the represented number gets bigger. Hence, a system with “only” ten symbols is enough to represent virtually infinite possibilities. Binary works the exact same way: if you need to represent more information, you can add extra bits to your memory. However, using a system with fewer symbols comes at a cost: a number written in binary requires around three times more digits than its decimal representation.
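As a rough check of the “around three times more digits” claim, here is a small Python sketch comparing digit counts; the exact ratio tends toward log₂(10) ≈ 3.32:

```python
import math

# Digits needed in decimal vs. binary for a few sample numbers.
for n in [84, 1000, 10**6]:
    decimal_digits = len(str(n))
    binary_digits = n.bit_length()
    print(n, decimal_digits, binary_digits, round(binary_digits / decimal_digits, 2))

# In the limit the ratio is log2(10): about 3.32 binary digits per decimal digit.
print(round(math.log2(10), 2))  # 3.32
```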

Wouldn’t it be better to use the widespread decimal system, then? Actually, we use binary computers because binary devices are easier to implement than “decimal devices.” Currently, a computer’s central processing unit (CPU) is made of electrical components called “transistors.” Check out this lesson to see how transistors are made, how they operate, and how they have changed in the last century. Binary can also be implemented in multiple other ways. An optical fiber transmits data encoded in pulses of light, while hard disks store information using magnetic fields. You can check several examples of binary devices, even solutions that use more than one transistor to implement a single bit, here. The choice of technology depends on the application; the most important parameters are cost, speed, size, and robustness.

Now that we know how to build devices that can represent numbers, we can expand their scope by mapping those numbers to a set of interest. A set of interest can be literally anything, such as letters in the alphabet or colors in a palette.

The effort to encode letters with industry standards began in the 1960s, when the American Standard Code for Information Interchange (ASCII) was developed. This encoding used 7 bits, which were mapped to English letters and special characters. The values corresponding to each symbol are here. As computers became more popular worldwide, the need for tables containing more symbols emerged. Nowadays, the most widely used encoding is UTF-8, which was created in 1993 and maps more than one million symbols to binary, including characters in all known alphabets and even emojis. You can navigate through the UTF-8 table here. In this encoding, each character can take one to four bytes to represent, as shown in the first table here.
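You can see the one-to-four-byte behavior directly with Python’s built-in UTF-8 encoder, a quick sketch rather than anything specific to the lesson:

```python
# Each character's UTF-8 encoding takes one to four bytes;
# plain ASCII characters (like 'T') keep their original codes.
for ch in ["T", "é", "€", "😀"]:
    data = ch.encode("utf-8")
    print(ch, len(data), data.hex())

# 'T' has code point 84, i.e. the 8-bit binary string 01010100.
print(format(ord("T"), "08b"))  # 01010100
```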

Colors are also mapped to binary sequences. Besides the RGB system, other systems have been conceived to represent colors. The HSL system also uses three components: hue, which varies from 0 to 360, with “red” mapped to 0, “green” to 120, and “blue” to 240; saturation, the intensity of the colored component; and lightness, the “white level” of the color. The CMYK system uses four components, corresponding to the levels of cyan, magenta, yellow, and black in the pixel. This system is called “subtractive,” meaning that as a component gets larger, the pixel emits less light. It is very convenient for printers, where the ink acts as a “filter” over the white canvas of the paper. In this paragraph, the name of each color system links to an interactive panel that lets you see the colors corresponding to each possible encoding. If you want to see how these systems compare with each other, check this website.
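To make the “subtractive” idea concrete, here is a minimal sketch of the common naive RGB-to-CMYK conversion (this simple formula is an illustration; real printer profiles are more involved):

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB (0-255) to CMYK (0-1) conversion.

    CMYK is subtractive: larger components mean less emitted light.
    """
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0          # pure black
    r_, g_, b_ = r / 255, g / 255, b / 255
    k = 1 - max(r_, g_, b_)                # black level
    c = (1 - r_ - k) / (1 - k)
    m = (1 - g_ - k) / (1 - k)
    y = (1 - b_ - k) / (1 - k)
    return c, m, y, k

print(rgb_to_cmyk(255, 0, 0))  # pure red -> (0.0, 1.0, 1.0, 0.0)
```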

It is possible to reduce the number of bits without losing information, optimizing data transmission or storage. Strategies that perform such compression include run-length encoding, the Lempel-Ziv algorithm, and Huffman coding. The Lempel-Ziv algorithm replaces repeated patterns in the data with a token. Both the token and the pattern are added to the compressed file, so the decoder can accurately rebuild the original file. Although this post’s discussion is not related to binary, it illustrates the Lempel-Ziv algorithm. Huffman coding counts the number of occurrences of each symbol in the file and creates a new binary encoding for each of those symbols. Symbols that are more frequent receive a shorter binary sequence, reducing the size of the file. ZIP files are created using a combination of these algorithms.
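The Huffman idea fits in a few lines of Python. This is a minimal sketch (it returns the code table only, without serializing the tree as a real compressor would):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Assign shorter bit strings to more frequent symbols (Huffman coding)."""
    heap = [(count, i, {sym: ""})
            for i, (sym, count) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    next_id = len(heap)  # tie-breaker so dicts are never compared
    while len(heap) > 1:
        count1, _, codes1 = heapq.heappop(heap)
        count2, _, codes2 = heapq.heappop(heap)
        # One subtree's codes get prefix 0, the other's prefix 1.
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        heapq.heappush(heap, (count1 + count2, next_id, merged))
        next_id += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
print(codes)  # 'a' (5 occurrences) gets the shortest code
```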

Binary is a way to represent information, like any other language. Engineers use international standards to assign meaning to the states of binary devices; with them, they can represent letters, colors, and even sounds. Just as a painting of a pipe is not the pipe itself, the meaning of each bit is not embedded in the bit itself, but in the program that reads it.

About TED

Guided Discussion from TED Talks:

  1. Can you explain why a binary encoding of a number has around three times more digits than its decimal representation?
  2. What are the advantages and limitations of using the binary system?
  3. How can technology aid in the development of ethics?
  4. Are technological developments and ethics development independent?


Transcription English:

Imagine trying to use words to describe every scene in a film,

every note in your favorite song,

or every street in your town.

Now imagine trying to do it using only the numbers 1 and 0.

Every time you use the Internet to watch a movie,

listen to music,

or check directions,

that’s exactly what your device is doing,

using the language of binary code.

Computers use binary because it’s a reliable way of storing data.

For example, a computer’s main memory is made of transistors

that switch between either high or low voltage levels,

such as 5 volts and 0 volts.

Voltages sometimes oscillate, but since there are only two options,

a value of 1 volt would still be read as “low.”

That reading is done by the computer’s processor,

which uses the transistors’ states to control other computer devices

according to software instructions.

The genius of this system is that a given binary sequence

doesn’t have a pre-determined meaning on its own.

Instead, each type of data is encoded in binary

according to a separate set of rules.

Let’s take numbers.

In normal decimal notation,

each digit is multiplied by 10 raised to the value of its position,

starting from zero on the right.

So 84 in decimal form is 4×10⁰ + 8×10¹.

Binary number notation works similarly,

but with each position based on 2 raised to some power.

So 84 would be written as follows:
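(For reference, since the video shows this on screen: 84 in binary is 1010100, because 64 + 16 + 4 = 2⁶ + 2⁴ + 2² = 84. A quick Python check:)

```python
# 84 = 1*64 + 0*32 + 1*16 + 0*8 + 1*4 + 0*2 + 0*1  ->  1010100 in binary
print(bin(84))            # 0b1010100
print(int("1010100", 2))  # 84
```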

Meanwhile, letters are interpreted based on standard rules like UTF-8,

which assigns each character to a specific group of 8-digit binary strings.

In this case, 01010100 corresponds to the letter T.

So, how can you know whether a given instance of this sequence

is supposed to mean T or 84?

Well, you can’t from seeing the string alone

– just as you can’t tell what the sound “da” means from hearing it in isolation.

You need context to tell whether you’re hearing Russian, Spanish, or English.

And you need similar context

to tell whether you’re looking at binary numbers or binary text.
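(The two readings of the same eight bits can be sketched in Python, outside the video’s narration:)

```python
bits = "01010100"

# The same eight bits, read under two different sets of rules:
as_number = int(bits, 2)                      # as a binary number
as_text = bytes([as_number]).decode("utf-8")  # as a UTF-8 character
print(as_number, as_text)  # 84 T
```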

Binary code is also used for far more complex types of data.

Each frame of this video, for instance,

is made of hundreds of thousands of pixels.

In color images,

every pixel is represented by three binary sequences

that correspond to the primary colors.

Each sequence encodes a number

that determines the intensity of that particular color.
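(As an illustration of those three sequences, here is one hypothetical orange pixel written out in Python:)

```python
# One orange pixel: red, green, and blue intensities (0-255),
# each stored as its own 8-bit binary sequence.
r, g, b = 255, 165, 0
print(format(r, "08b"), format(g, "08b"), format(b, "08b"))
# 11111111 10100101 00000000
```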

Then, a video driver program transmits this information

to the millions of liquid crystals in your screen

to make all the different hues you see now.

The sound in this video is also stored in binary,

with the help of a technique called pulse code modulation.

Continuous sound waves are digitized

by taking “snapshots” of their amplitudes every few milliseconds.

These are recorded as numbers in the form of binary strings,

with as many as 44,000 for every second of sound.

When they’re read by your computer’s audio software,

the numbers determine how quickly the coils in your speakers should vibrate

to create sounds of different frequencies.
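(Pulse code modulation can be sketched in a few lines of Python; this uses the common CD rate of 44,100 samples per second, close to the “44,000” mentioned above, and a 440 Hz tone as an example:)

```python
import math

SAMPLE_RATE = 44_100   # snapshots per second, as on an audio CD
FREQUENCY = 440        # a 440 Hz tone

# Pulse code modulation: sample the wave's amplitude at regular
# intervals and store each sample as a 16-bit signed integer.
samples = [
    round(32767 * math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE))
    for n in range(SAMPLE_RATE)  # one second of sound
]
print(len(samples), samples[:4])
```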

All of this requires billions and billions of bits.

But that amount can be reduced through clever compression formats.

For example, if a picture has 30 adjacent pixels of green space,

they can be recorded as “30 green” instead of coding each pixel separately –

a process known as run-length encoding.
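(The “30 green” trick is easy to sketch in Python:)

```python
def run_length_encode(pixels):
    """Collapse runs of identical values into [count, value] pairs."""
    runs = []
    for value in pixels:
        if runs and runs[-1][1] == value:
            runs[-1][0] += 1
        else:
            runs.append([1, value])
    return runs

row = ["green"] * 30 + ["blue"] * 2
print(run_length_encode(row))  # [[30, 'green'], [2, 'blue']]
```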

These compressed formats are themselves written in binary code.

So is binary the end-all-be-all of computing?

Not necessarily.

There’s been research into ternary computers,

with circuits in three possible states,

and even quantum computers,

whose circuits can be in multiple states simultaneously.

But so far, none of these has provided

as much physical stability for data storage and transmission.

So for now, everything you see,


and read through your screen

comes to you as the result of a simple “true” or “false” choice,

made billions of times over.

