Understanding How Computers Work with 0s and 1s
1. ON/OFF
- When electricity flows through a light bulb, the bulb turns ON. When no electricity flows, the bulb turns OFF.
- In the same way, computers use the binary number system, representing ON as 1 and OFF as 0.
1.1. Bit and Byte
- The smallest unit used to represent values with 0 and 1 is called a bit. (0 or 1)
- 8 bits make up 1 byte, and 1 byte is the basic unit used to represent a character.
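The relationship between bits and bytes can be sketched in Python, which lets you write a value directly in binary with the `0b` prefix:

```python
# A byte is 8 bits; each bit is either 0 or 1.
byte_value = 0b01000001            # 8 bits written out in binary
print(byte_value)                  # the same value in decimal: 65
print(format(byte_value, "08b"))   # back to its 8-bit binary form: 01000001
```

Eight binary digits can represent 2^8 = 256 distinct values (0 to 255), which is why one byte suffices for a basic character set.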
1.2. Char
- A char uses 8 bits (1 byte) to represent an English character according to the ASCII encoding.
- Written in binary, values such as "00010001" are long and hard to read. For this reason, hexadecimal notation such as "0x11", which expresses 8 bits (1 byte) in just two digits, is commonly used.
[Source] Computer Architecture - The ASCII character representations
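The char/hex relationship above can be checked in Python with the built-in `ord`, `chr`, and `format` functions:

```python
ch = "A"
code = ord(ch)                  # ASCII code of 'A' is 65
print(format(code, "08b"))      # "01000001" - 8 bits, hard to read at a glance
print(format(code, "#04x"))     # "0x41" - the same byte in two hex digits
print(chr(0x41))                # "A" - hex code back to the character
```

Each hex digit covers exactly 4 bits, so one byte always fits in two hex digits.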
1.3. Code
- Just as characters are defined using binary code, executable code is also defined in binary form.
- The CPU executes machine language (with assembly as its human-readable form), and the operation codes (Instruction Codes) defined when a CPU is designed are structured differently depending on the CPU architecture, such as x86, x64, and ARM.
- An instruction set can be thought of as the fixed collection of musical notes a piano is able to play.
[Source] Stack Overflow - Intel x86(8086/8088) 16bit Opcode Instruction
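As a sketch of the idea that the same bytes mean different things to different interpreters: the bytes below are well-known x86 opcodes (0x90 = NOP, 0xC3 = RET, and on the 16-bit 8086, 0x41 = INC CX), yet 0x41 is also the ASCII character 'A'. Python can only display the bytes, not execute them; the opcode meanings are comments:

```python
code_bytes = bytes([0x90, 0x41, 0xC3])   # NOP; INC CX (8086); RET
for b in code_bytes:
    print(f"0x{b:02X} = {b:08b}")        # each byte in hex and binary

# To a text decoder, the middle byte is simply the letter 'A'.
print(bytes([0x41]).decode("ascii"))     # "A"
```

Whether a byte is data or an instruction depends entirely on how the CPU is told to interpret it.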
1.4. Process
- The CPU contains small, ultra-fast storage locations called registers, which hold the instructions and data being processed. Machine code is fetched from memory into the CPU step by step, decoded, and then executed.
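The fetch-decode-execute cycle above can be sketched as a toy register machine. Everything here is invented for illustration: the opcodes (LOAD_A, LOAD_B, ADD, HALT) and the two registers A and B are hypothetical, not any real CPU's instruction set:

```python
# Hypothetical opcodes for a toy 2-register machine (not a real ISA).
LOAD_A, LOAD_B, ADD, HALT = 0x01, 0x02, 0x03, 0xFF

# "Machine code": load 7 into A, load 5 into B, add B into A, stop.
program = [LOAD_A, 7, LOAD_B, 5, ADD, HALT]

registers = {"A": 0, "B": 0}
pc = 0                                # program counter: next byte to fetch
while True:
    opcode = program[pc]              # fetch
    if opcode == LOAD_A:              # decode + execute
        registers["A"] = program[pc + 1]; pc += 2
    elif opcode == LOAD_B:
        registers["B"] = program[pc + 1]; pc += 2
    elif opcode == ADD:
        registers["A"] += registers["B"]; pc += 1
    elif opcode == HALT:
        break

print(registers["A"])                 # 12
```

Real CPUs do the same loop in hardware: the program counter register tracks the next instruction, and arithmetic results land in general-purpose registers.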