Hexadecimal, or as it is mostly referred to, hex, is a numbering system with a base of 16. Numbers are usually written using the alphanumeric symbols 0–9 and A–F (or a–f).

As an example, the decimal number 20 is expressed as 14x, h14 or H14. Similarly, HFF represents decimal 255. For those that know a bit about computers: a bit is a '0' or '1' in binary, and four bits are called a nibble. These days computers use groups of eight bits for data storage, called a byte. Two bytes make a word (16 bits) and two words make a double (32 bits).
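A quick check of those values, sketched in Python (the article itself is language-neutral; Python is just my choice of calculator here):

```python
# Decimal 20 written in hex and binary, using Python's built-ins.
print(hex(20))           # prints 0x14 -- the "14" matches the article
print(format(20, "b"))   # prints 10100

# The bit groupings mentioned above, as bit counts:
NIBBLE, BYTE, WORD, DOUBLE = 4, 8, 16, 32
assert WORD == 2 * BYTE and DOUBLE == 2 * WORD
```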

Some of my older readers will remember the arrival of the new 8-bit computers, and then the new 16-bit computers. Younger readers will be more familiar with the new 32-bit and then the new 64-bit computers.

So 8 bits in a byte, all at '1' (11111111), is a binary coded representation of 255, or FF in hex. Can you see the connection here? Two raised to the number of bits gives the number of values a field can hold, and subtracting 1 gives the highest value. This is referred to as binary resolution.
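That rule is easy to verify; here is a small Python sketch (again, my choice of language, not something from the article):

```python
# The highest value an n-bit field can hold is 2**n - 1.
for bits in (4, 8, 16, 32):
    top = 2**bits - 1
    print(f"{bits:2} bits -> {top} ({hex(top)})")

# Eight bits all set to '1' is 255, i.e. FF in hex:
assert 0b11111111 == 0xFF == 255
```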

OK you've got it, hexadecimal is first and foremost used in computing to represent a byte, whose 256 possible values can be represented with only two digits in hexadecimal notation.

As a point of interest, it was IBM that introduced the current hexadecimal system, using the digits 0–9 and A–F, into the computing world as we know it today.
A little-known fact is that an earlier base-16 system, using the digits 0–9 and u–z, was introduced in 1956 and used by the Bendix G-15 computer.

So how do you tell a decimal 10 from a hex 10?
Well, some hexadecimal representations are indistinguishable from decimal representations (to humans and computers alike), so some convention is needed to mark them.

In typeset text, hexadecimal is quite often indicated by a subscripted suffix, such as:

1B3C₁₆, 2FD5SIXTEEN, F7B3H or 9F48HEX (where the suffix is set as a subscript)

However, computer programming languages are nearly always typed as plain text, without typographical distinctions such as subscript and superscript, so a wide variety of other ways of indicating hexadecimal representations have arisen; these are even seen in typeset text, especially text that relates to a programming language.

As with most protocols made for computers, there's not a single standard agreed upon (big surprise), so several different conventions are in use, and they are sometimes even mixed within the same piece of text. But as they are all quite unambiguous, difficulty seldom arises from this.

The most commonly used (an unofficial standard, if you like) and most often encountered conventions are the prefix "0x" and the subscript 16 (for hex numbers, of course). As an example, both 0xFF and FF₁₆ represent the decimal number 255 (or 255₁₀).

The leading 0 is used so that the parser can recognize that a number follows, and the x stands for hexadecimal (o for octal and b for binary). The x in 0x can be in either upper or lower case but is almost always written in lower case.
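Python happens to use exactly these prefixes, so it makes a handy demonstration (my choice of language, not part of the conventions themselves):

```python
# Python literals use the 0x / 0o / 0b prefixes described above.
assert 0xFF == 255        # hexadecimal literal
assert 0o377 == 255       # octal literal
assert 0b11111111 == 255  # binary literal

# int() with base 0 accepts any of these prefixes in a string,
# and int(s, 16) parses bare hex digits:
assert int("0xff", 0) == 255
assert int("ff", 16) == 255
```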

The following are some other examples you may well come across:

- 16#FF#

- #FF

- FFh

- #$FF

- $FF

- &HFF

- &FF

- 0hFF

All of these (and quite a few more not shown here) represent the same thing: they are all ways of writing the decimal number 255 in hex. I have not listed every convention that could be in use (I don't want to bore you).
But why are there so many? Because each computer language has to define its own notation for identifying numbers of different number bases, and languages are invented by different people, sometimes simultaneously. Other than that, who knows!
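To show that the conventions really do all mean the same number, here is a little Python sketch of a normalizer. The function name and the exact set of prefixes handled are my own invention for illustration, not any standard library routine:

```python
def parse_hex(text: str) -> int:
    """Parse a hex string written in several of the common notations
    listed above (a hypothetical helper, not a standard function)."""
    t = text.strip()
    # 16#FF# style, as seen in some languages:
    if t.startswith("16#") and t.endswith("#"):
        return int(t[3:-1], 16)
    # Strip a leading marker, longest candidates first:
    for prefix in ("0x", "0X", "0h", "&H", "&", "#$", "$", "#"):
        if t.startswith(prefix):
            t = t[len(prefix):]
            break
    # Strip a trailing 'h' marker (FFh style):
    if t.lower().endswith("h"):
        t = t[:-1]
    return int(t, 16)

# Every notation from the list yields the same decimal value, 255:
for s in ("0xFF", "16#FF#", "#FF", "FFh", "#$FF", "$FF", "&HFF", "&FF", "0hFF"):
    assert parse_hex(s) == 255
```

Note the prefix list is checked longest-first, so "&H" is tried before the bare "&" and "#$" before "#"; otherwise the shorter marker would swallow part of the longer one.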

There we are then, hexadecimal in a nutshell. I could go on and on but I'm sure you've got the hang of it, or certainly enough to have an input when "hex" is mentioned in conversation!