DevConverter

IEEE 754 Float Converter

Visualize 32-bit and 64-bit IEEE 754 floating-point numbers as hex and individual bits.

About this tool

IEEE 754 is the technical standard for floating-point arithmetic used by virtually all modern CPUs and programming languages. It defines how real numbers are approximated in binary using a sign bit, an exponent, and a significand (also called the mantissa). A 32-bit single-precision float has 1 sign bit, 8 exponent bits, and 23 mantissa bits. A 64-bit double-precision float (the default in most languages) has 1 sign bit, 11 exponent bits, and 52 mantissa bits — giving about 15–17 significant decimal digits of precision.
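The field layout above can be sketched in a few lines of Python using only the standard library; the helper name `decompose32` is ours, chosen for illustration, not part of any particular API:

```python
import struct

def decompose32(x: float) -> tuple[int, int, int]:
    """Split a 32-bit float into its sign, exponent, and mantissa fields."""
    # Reinterpret the float's 4 bytes as an unsigned 32-bit integer.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31                # 1 bit
    exponent = (bits >> 23) & 0xFF   # 8 bits, stored with a bias of 127
    mantissa = bits & 0x7FFFFF       # 23 bits; normals have an implicit leading 1
    return sign, exponent, mantissa

# 1.5 = (-1)^0 * 1.01b * 2^0, so the biased exponent field is 127
print(decompose32(1.5))   # (0, 127, 0x400000)
print(decompose32(-2.0))  # (1, 128, 0)
```

The same decomposition works for 64-bit doubles with `">d"`/`">Q"`, an 11-bit exponent (bias 1023), and a 52-bit mantissa.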

Many decimal fractions cannot be represented exactly in binary floating-point, which leads to the well-known result that 0.1 + 0.2 is not exactly 0.3 in most programming languages. This is not a bug — it is an inherent property of binary representation. The result is the closest representable value to the true answer. For financial calculations requiring exact decimal arithmetic, use a decimal type or integer arithmetic (e.g., store currency as cents).
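You can see both the rounding error and the exact-decimal workaround directly in Python; `float.hex()` prints the underlying binary value, which makes the one-ulp difference visible:

```python
from decimal import Decimal

# Binary floating point: the sum rounds to the nearest representable double.
print(0.1 + 0.2)             # 0.30000000000000004
print(0.1 + 0.2 == 0.3)      # False
print((0.1 + 0.2).hex())     # 0x1.3333333333334p-2
print((0.3).hex())           # 0x1.3333333333333p-2  (one ulp smaller)

# Exact decimal arithmetic (construct from strings, not from floats):
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# Or integer arithmetic: store currency as cents.
print(10 + 20 == 30)  # True
```

Note that `Decimal(0.1)` (from a float, not a string) would inherit the binary rounding error, which is why the string constructor matters here.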

IEEE 754 also defines special values: positive and negative infinity (the result of dividing a nonzero finite number by zero), negative zero (-0, which compares equal to +0 but behaves differently in some operations, such as division and copysign), and NaN (Not a Number, the result of 0/0 or sqrt(-1)). Understanding the hexadecimal representation of floats is essential when debugging binary protocols, working with GPU shader code, analyzing binary file formats, or reading memory dumps from a debugger.
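These special values and their raw hex patterns can be inspected the same way; `double_hex` is an illustrative helper, and the exact NaN payload shown is the quiet NaN CPython produces on common platforms, not a guarantee of the standard:

```python
import math
import struct

def double_hex(x: float) -> str:
    """Raw 64-bit pattern of a double, as big-endian hex."""
    return struct.pack(">d", x).hex()

print(double_hex(math.inf))    # 7ff0000000000000: max exponent, zero mantissa
print(double_hex(-0.0))        # 8000000000000000: only the sign bit is set
print(-0.0 == 0.0)             # True: -0 compares equal to +0 ...
print(math.copysign(1, -0.0))  # -1.0: ... but the sign still propagates
print(math.nan == math.nan)    # False: NaN never compares equal, even to itself
print(double_hex(math.nan))    # 7ff8000000000000 (a quiet NaN)
```

Any pattern with an all-ones exponent field and a nonzero mantissa is a NaN, which is why a memory dump can contain many distinct NaN bit patterns.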