Convert Numbers Between Binary, Decimal, Hex, and Octal
Working with color codes? Debugging bit flags? Reading memory addresses? Different number systems are everywhere in programming and digital electronics. Binary (base 2), decimal (base 10), hexadecimal (base 16), and octal (base 8) all represent the same values but look completely different.
This converter handles them all. Type a number in any format and see it in every other base instantly. No manual calculation needed.
How to Use
- Enter your number in any base (binary, decimal, hex, or octal).
- Select what base your input is using from the dropdown.
- All other bases update automatically—see the same value in every format.
- Copy whichever converted format you need.
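Under the hood, the core of a converter like this fits in a few lines. Here is a minimal JavaScript sketch built on parseInt and Number.prototype.toString; the function name `convert` is illustrative, not the tool's actual API:

```javascript
// Minimal sketch of the converter's core logic, built on JavaScript's
// parseInt and Number.prototype.toString. The function name is illustrative.
function convert(value, fromBase) {
  const n = parseInt(value, fromBase);
  if (Number.isNaN(n)) throw new Error("invalid number for base " + fromBase);
  return {
    binary: n.toString(2),
    octal: n.toString(8),
    decimal: n.toString(10),
    hex: n.toString(16).toUpperCase(),
  };
}

const out = convert("FF", 16);
console.log(out.binary);  // "11111111"
console.log(out.octal);   // "377"
console.log(out.decimal); // "255"
```

One caveat: parseInt stops at the first character that is not a valid digit in the given base, so real input validation would also need a pattern check.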
Where This Helps
Color code conversion: CSS hex colors like #FF5733 make more sense in decimal: RGB(255, 87, 51).
Network subnetting: IP addresses and subnet masks are clearer when you see them in binary.
File permissions: Unix file permissions (chmod) use octal. Understanding 755 or 644 is easier with binary breakdown.
Debugging: Comparing values in different formats helps spot bit patterns and alignment issues.
Quick Answers
What's the difference between hex and decimal?
Decimal counts 0-9, then starts over at 10. Hex goes 0-9, then A-F (where A=10, B=11, etc.), giving 16 symbols total. Programmers love it because it maps cleanly to groups of 4 binary bits.
Why do programmers use binary?
Computers store everything as 1s and 0s. Binary lets you see exactly what the hardware is working with—useful for bitwise operations and understanding memory.
When would I use octal?
Octal shows up mainly in file permissions and some older systems. Three binary digits compress into one octal digit, making it more compact than binary.
Understanding Number Bases
A number base (or radix) determines how many unique digits are used to represent numbers. Decimal (base 10) uses 0-9 because humans have 10 fingers. Computers use binary (base 2) with just 0 and 1 because transistors have two states: on or off. Hexadecimal (base 16) uses 0-9 and A-F, making it compact for representing large binary numbers - each hex digit represents exactly 4 binary bits.
Real-World Examples
Example 1: RGB Color Codes
Decimal: RGB(255, 0, 128)
Hexadecimal: #FF0080
Use case: Web designers use hex colors. Converting to decimal helps understand RGB values: 255 red, 0 green, 128 blue makes bright pink.
Example 2: File Permissions
Octal: 755
Binary: 111 101 101
Meaning: Owner: read(4)+write(2)+execute(1)=7, Group: read(4)+execute(1)=5, Others: read(4)+execute(1)=5
Use case: Understanding Linux chmod commands becomes clear when you see the binary breakdown.
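The read/write/execute arithmetic above can be checked in code. This is a hypothetical helper for illustration, not part of the tool:

```javascript
// Hypothetical helper: expand an octal mode such as 755 into rwx triplets.
function modeToString(octalMode) {
  // 755 octal -> 493 decimal -> "111101101" binary (9 permission bits).
  const bits = parseInt(String(octalMode), 8).toString(2).padStart(9, "0");
  let out = "";
  for (let i = 0; i < 9; i++) {
    out += bits[i] === "1" ? "rwx"[i % 3] : "-";
  }
  return out;
}

console.log(modeToString(755)); // "rwxr-xr-x"
console.log(modeToString(644)); // "rw-r--r--"
```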
Example 3: Memory Addresses
Hexadecimal: 0x1A2B
Decimal: 6699
Binary: 0001101000101011
Use case: Debugging programs that show memory addresses in hex. Converting to decimal helps calculate offsets and sizes.
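As a small illustration of offset arithmetic between hex addresses (the second address, 0x1A3F, is a made-up example):

```javascript
// Hex address arithmetic; the second address is hypothetical.
const base = 0x1A2B;   // 6699 in decimal, as in the example above
const field = 0x1A3F;  // a made-up nearby address
console.log(field - base); // 20
```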
Conversion Table Reference
| Decimal | Binary | Octal | Hexadecimal |
|---------|--------|-------|-------------|
| 0 | 0000 | 0 | 0 |
| 1 | 0001 | 1 | 1 |
| 8 | 1000 | 10 | 8 |
| 15 | 1111 | 17 | F |
| 16 | 10000 | 20 | 10 |
| 255 | 11111111 | 377 | FF |
Common Questions
Can I convert negative numbers?
This tool handles positive integers. For negative numbers in programming, you'll need to understand signed number representation (like two's complement in binary).
What about decimal fractions?
This converter works with whole numbers (integers). Converting fractional numbers between bases requires different algorithms involving the fractional part separately.
Why does hex use letters?
Base 16 needs 16 unique symbols. We use 0-9 (10 symbols) plus A-F (6 more letters) to represent values 10-15. This keeps each digit single-character while allowing compact representation of large numbers.
Is this conversion accurate?
Yes, for any integer in JavaScript's safe range. Mathematical base conversion is deterministic - the same input always produces the same output across all bases. The tool uses JavaScript's built-in parseInt and toString functions, which are exact up to Number.MAX_SAFE_INTEGER (2^53 - 1); larger values can lose precision.
Tips for Programmers
- Quick check: Use this tool to verify your manual calculations when working with different number systems.
- Bit patterns: Convert to binary to see exactly which bits are set in flags or permissions.
- Color debugging: Convert CSS hex colors to decimal RGB to understand color mixing better.
- Memory addresses: Convert hex memory addresses to decimal for offset calculations.
- Learn patterns: Practice with small numbers (0-15) to memorize common conversions.
Computer Science Fundamentals: Why Different Bases Matter
Understanding number bases is fundamental to computer science because digital systems operate at the binary level - transistors are either on (1) or off (0). While humans naturally think in decimal (base 10) due to having ten fingers, computers process everything as binary (base 2). However, writing long binary numbers is tedious and error-prone: 11111111 is hard to read at a glance. Hexadecimal (base 16) provides a compact representation where each hex digit represents exactly four binary bits. The binary number 11111111 becomes FF in hex - much cleaner. This alignment makes hex ideal for representing memory addresses, color codes, and debugging low-level code.
Octal (base 8) fell out of favor but remains relevant for Unix file permissions. Each octal digit represents exactly three binary bits, creating a natural mapping for permission groups. The permission 755 in octal translates to 111 101 101 in binary, where each group of three bits represents read-write-execute permissions for owner, group, and others respectively. Understanding this relationship makes chmod commands intuitive rather than mysterious. Different bases aren't just academic curiosities - they provide the most natural representation for different problem domains, improving human comprehension of machine operations.
Hexadecimal in Web Development and Design
Web developers encounter hexadecimal constantly when working with colors. CSS color codes like #FF5733 represent RGB values in hexadecimal. The format is #RRGGBB where each two-digit hex pair specifies red, green, or blue intensity from 00 (0 in decimal, none of that color) to FF (255 in decimal, maximum intensity). Breaking down #FF5733: FF red (255), 57 green (87), and 33 blue (51) produces a vivid orange. Converting between hex and decimal RGB helps designers understand color mixing and calculate intermediate shades. Tools like this converter reveal that designers often choose round numbers in hex (such as #FF0000 for pure red) even though those values read as 255 in decimal.
Shortened hex codes use three digits instead of six when each pair has identical digits: #FFFFFF becomes #FFF (white), #000000 becomes #000 (black), and #FFAA00 becomes #FA0 (orange). This shorthand only works when both digits in each pair match. Understanding hex color codes enables manual color adjustments - decreasing the red channel in #FF5733 by one hex value (FF to FE) produces #FE5733, a barely perceptible shift. Developers debugging color issues benefit from seeing both hex and decimal representations simultaneously to spot patterns or calculation errors that might be obvious in one base but hidden in another.
Binary Arithmetic and Bitwise Operations
Binary representation reveals how bitwise operations work. AND, OR, XOR, and bit shifting operate directly on binary digits, making these operations mysterious when thinking in decimal but obvious in binary. For example, 12 AND 10 seems arbitrary until you see the binary: 1100 AND 1010 equals 1000 (8 in decimal). Each bit position ANDs independently: 1 AND 1 gives 1, 1 AND 0 gives 0, resulting in 1000. Bitwise operations enable efficient manipulation of flags, permissions, and packed data structures without expensive division or multiplication.
Bit shifting in binary is simple multiplication or division by powers of two. Shifting 1100 (12) left by one position produces 11000 (24) - exactly doubling the value. Shifting right by one position gives 110 (6) - halving the value. Programmers use bit shifts for fast multiplication and division by powers of two since shifting executes faster than general multiplication on most processors. Understanding binary makes these optimizations intuitive rather than magical compiler tricks. Bit masks, which selectively enable or disable specific bits, become trivial in binary: mask 00001111 extracts the lower four bits of any value while zeroing the upper bits.
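The AND, shift, and mask examples from the last two paragraphs, written out with JavaScript's binary literals:

```javascript
// 12 AND 10: each bit position ANDs independently.
console.log((0b1100 & 0b1010).toString(2)); // "1000" (8 in decimal)

// Shifting left by one doubles; shifting right by one halves.
console.log(0b1100 << 1); // 24
console.log(0b1100 >> 1); // 6

// Mask 00001111 extracts the lower four bits of any value.
console.log((0b10110101 & 0b00001111).toString(2).padStart(4, "0")); // "0101"
```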
Network Engineering and IP Address Calculations
Network engineers routinely convert between decimal and binary when working with IP addresses and subnet masks. An IPv4 address like 192.168.1.100 appears as four decimal numbers, but the computer processes it as a 32-bit binary number: 11000000.10101000.00000001.01100100. Subnet masks determine network and host portions of addresses. The common subnet mask 255.255.255.0 is 11111111.11111111.11111111.00000000 in binary, clearly showing the first 24 bits identify the network and the last 8 bits identify individual hosts within that network.
CIDR notation like 192.168.1.0/24 specifies that 24 bits represent the network portion. Converting the decimal address to binary and applying the subnet mask reveals which addresses belong to the same network. This binary perspective makes subnet calculations straightforward: a /25 network (255.255.255.128 mask) splits a /24 network in half, providing 128 addresses per subnet instead of 256. Network administrators debugging routing issues or configuring subnets benefit enormously from fluency in binary-to-decimal conversion, quickly determining if two addresses share the same network without specialized calculators.
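The mask-and-compare logic described above can be sketched in a few lines; the helper names (`toUint32`, `sameNetwork`) are illustrative:

```javascript
// Convert a dotted-quad IPv4 address to a 32-bit number.
function toUint32(ip) {
  return ip.split(".").reduce((acc, octet) => acc * 256 + Number(octet), 0);
}

// Two addresses share a network if their masked network bits match.
function sameNetwork(ipA, ipB, prefix) {
  // Build the mask: /24 -> 0xFFFFFF00. >>> 0 keeps results unsigned.
  const mask = prefix === 0 ? 0 : (0xFFFFFFFF << (32 - prefix)) >>> 0;
  return ((toUint32(ipA) & mask) >>> 0) === ((toUint32(ipB) & mask) >>> 0);
}

console.log(sameNetwork("192.168.1.100", "192.168.1.7", 24)); // true
console.log(sameNetwork("192.168.1.100", "192.168.2.7", 24)); // false
```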
Historical Context and Evolution of Number Systems
Early computers used octal extensively because their word sizes were multiples of three bits: 12-, 18-, and 36-bit words were common, and each divides evenly into octal digits. The PDP-8, influential in computing history, featured 12-bit words naturally represented as four octal digits. As 8-bit bytes became standard, hexadecimal displaced octal for most applications since 8 bits equals two hex digits exactly. Octal requires three digits (000 to 377) for 8 bits, making hex more compact. This historical shift explains why older Unix documentation and systems prefer octal while modern systems favor hexadecimal for memory dumps, debugging output, and low-level programming.
Different cultures developed various number systems throughout history. Babylonians used base 60 (sexagesimal), whose legacy survives in time measurement: 60 seconds per minute, 60 minutes per hour. Mayans used base 20 (vigesimal), likely from counting fingers and toes. These alternative bases aren't arbitrary - they reflect what was convenient for their context. Similarly, computer number systems evolved from hardware constraints. Binary matches transistor states, octal matched early hardware word sizes, hexadecimal matches modern byte-oriented architectures. Understanding this historical context helps programmers appreciate why certain representations persist despite seeming arbitrary.
Advanced Programming Applications
Low-level programming and embedded systems require constant base conversion. Reading microcontroller datasheets presents register values in hexadecimal: setting bits in a control register to 0x3F enables specific hardware features. Converting 0x3F to binary (00111111) reveals exactly which six bits are set, corresponding to six features in the datasheet. Debuggers display memory contents in hex because it's compact and aligns with byte boundaries. A single byte ranges from 0x00 to 0xFF (0 to 255 decimal), fitting neatly into two hex digits. Memory addresses like 0x08048000 indicate precise locations without decimal's awkward large numbers.
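A quick check of the 0x3F register value mentioned above:

```javascript
// 0x3F in binary: exactly the six low bits are set.
console.log((0x3F).toString(2).padStart(8, "0")); // "00111111"
// A full byte spans 0x00 to 0xFF (0 to 255 decimal).
console.log(0xFF); // 255
```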
Graphics programming manipulates individual color channels represented in hex. Bit packing stores multiple values in single integers using specific bit ranges: a 32-bit integer might hold a red channel in bits 24-31, green in bits 16-23, blue in bits 8-15, and alpha in bits 0-7. Extracting the green channel requires shifting and masking operations obvious in binary/hex but obscure in decimal. Cryptography and hashing algorithms output hexadecimal strings because hash values are raw binary data without meaningful decimal interpretation. A SHA-256 hash is 256 bits (32 bytes) displayed as 64 hex characters - attempting to show it as a decimal number would produce an incomprehensibly large integer.
Additional Frequently Asked Questions
How do I convert manually without a calculator?
To convert decimal to binary, repeatedly divide by 2 and track remainders. 13 divided by 2 is 6 remainder 1, 6 divided by 2 is 3 remainder 0, continue until you reach 0, then read remainders backwards: 1101. For binary to decimal, multiply each digit by its position's power of 2 and sum: 1101 equals (1×8) + (1×4) + (0×2) + (1×1) = 13. Hex to decimal uses the same principle with powers of 16: 2F equals (2×16) + (15×1) = 47. Practice with small numbers builds intuition.
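The repeated-division method above reads directly as code; `decimalToBase` is an illustrative name:

```javascript
// Repeatedly divide by the base and collect remainders (read backwards).
function decimalToBase(n, base) {
  if (n === 0) return "0";
  const digits = "0123456789ABCDEF";
  let out = "";
  while (n > 0) {
    out = digits[n % base] + out; // prepend, so remainders read backwards
    n = Math.floor(n / base);
  }
  return out;
}

console.log(decimalToBase(13, 2));  // "1101"
console.log(decimalToBase(47, 16)); // "2F"
```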
Why don't we use base 12 or other alternatives?
Base 12 (dozenal) advocates argue it has superior divisibility - 12 divides evenly by 2, 3, 4, and 6, while 10 only divides by 2 and 5. However, changing established systems requires enormous societal effort. In computing, we're locked into binary because transistors have two states. The bases we use (binary, octal, hex, decimal) emerged from hardware constraints and human physiology, not mathematical optimization. Cultural inertia keeps them standard despite theoretical alternatives.
What are common mistakes when converting bases?
The most frequent error is forgetting that hex uses letters A-F for values 10-15. New programmers sometimes treat A as 1 instead of 10, producing incorrect results. Another mistake is mixing up bit grouping: hex groups four bits, octal groups three. Converting binary 11111111 to hex requires grouping as 1111 1111 (FF), not 111 111 11. Also remember that leading zeros matter in fixed-width representations but not in value: 0x0F and 0xF represent the same number (15), but in byte-oriented contexts, 0x0F explicitly shows it's a single byte.
How do signed numbers work in binary?
Most systems use two's complement representation for signed integers. In 8-bit two's complement, the leftmost bit indicates sign: 0 for positive, 1 for negative. Positive numbers work normally: 00000101 equals 5. Negative numbers complement all bits and add 1: to represent -5, start with 5 (00000101), flip bits (11111010), add 1 (11111011). This system enables simple arithmetic - addition and subtraction work identically for signed and unsigned numbers, simplifying hardware. This converter handles unsigned integers; signed conversion requires understanding two's complement separately.
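The flip-and-add-one steps can be verified with a masking trick: ANDing with 0xFF keeps a number's low 8 bits, which for a negative value is exactly its 8-bit two's complement pattern. The helper name is illustrative:

```javascript
// 8-bit two's complement pattern of a small signed integer.
function toTwosComplement8(n) {
  // n & 0xFF keeps the low 8 bits: the two's complement bit pattern.
  return (n & 0xFF).toString(2).padStart(8, "0");
}

console.log(toTwosComplement8(5));  // "00000101"
console.log(toTwosComplement8(-5)); // "11111011"
```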
Can I use this for floating-point numbers?
This tool converts integers only. Floating-point numbers use completely different representation (IEEE 754 format) with sign bit, exponent, and mantissa components. Converting 3.14159 between bases requires separate handling of integer and fractional parts, then expressing in scientific notation. Floating-point to binary conversion is complex and typically requires specialized tools. For integer portions of mixed numbers, you can use this converter on the whole number part separately.
Practical Learning Exercises
- Memorize key conversions: Learn hex digits 0-F (0-15 decimal) and powers of 2 (1, 2, 4, 8, 16, 32, 64, 128, 256) by heart.
- Practice color codes: Pick random colors in design tools and manually convert hex codes to decimal RGB to build intuition.
- Decode permissions: When you see Unix file permissions like 644 or 755, convert to binary to understand what they mean.
- Analyze bit patterns: Look at memory dumps or network packets in hex and practice converting specific bytes to binary and decimal.
- Compare representations: Take the same number and write it in all four bases to see patterns and develop mental shortcuts.
- Check your work: Convert manually, then verify with this tool until you build confidence in your conversion skills.
Related Tools
✓ Base64 Encoder/Decoder - Encode binary data as text
✓ Hash Generator - Generate MD5, SHA1, SHA256 hashes
✓ Color Picker - Pick colors and get hex/RGB values
✓ ASCII Table - View ASCII character codes in decimal, hex, binary