Wednesday, January 6, 2010

How does binary code distinguish between letters and numbers?

I understand binary numbers, but how does the computer know to read the code as letters rather than one long number?
ASCII is used for this. It works by associating each letter or control code (such as start/end of text) with a binary value. Here is a list of ASCII codes:





http://www.asciitable.com/
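
For a quick illustration (a minimal Python sketch added here, not part of the original answer), the built-in ord() and chr() functions expose exactly this mapping between characters and their ASCII codes:

# Each character is stored as a number from the ASCII table;
# ord() goes from character to code, chr() goes back.
for ch in "Hi!":
    code = ord(ch)
    print(ch, code, bin(code), hex(code))   # e.g. H 72 0b1001000 0x48

print(chr(72) + chr(105) + chr(33))         # the codes 72 105 33 spell "Hi!"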
It doesn't. A computer is essentially an adding machine: the processor can move data, compare data and add. If you ask it to subtract 1 from 1000, it actually adds the two's-complement representation of -1 to get the answer. Inside the machine there are only 0's and 1's, and in a signed number the leading bit tells you the sign (1 for negative, 0 for positive).

The first layer on top of this is the ASCII character table: each character has a unique binary number associated with it, so an 8-bit pattern such as 01000001 is read as the character it corresponds to ('A' in this case). Once characters can be expressed as binary numbers, things can be simplified further. Machine code, which your BIOS uses to control the computer's parts (video chip, hard drive, etc.), is binary too, but it is conventionally written in hexadecimal, a base-16 notation. The first sequence of hex codes is 00, 01, 02, 03, 04, 05, 06, 07, 08, 09, 0A, 0B, 0C, 0D, 0E and 0F; the next is 10, 11 and so on, up through F0, F1 and finally FF. Because FF is the largest byte value, it is often used as a marker or sentinel. From here you may be able to see how programming languages developed.

Hexadecimal is so convenient in computing because two hex digits cover 16 x 16 = 256 values, exactly the range of one byte. Notice the pattern: 2 x 256 = 512, 2 x 512 = 1024. How many bytes in a kilobyte of memory? Not 1000, but 1024. Binary and hex are the progression that led to the modern computer, and machine code is still written in hex today. I hope this helps.
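
To make the "subtraction is really addition" point concrete, here is a short Python sketch (my own illustration, assuming a 16-bit word size) that subtracts 1 from 1000 by adding the two's complement of 1:

BITS = 16
MASK = (1 << BITS) - 1              # 0xFFFF: keep results within 16 bits

def twos_complement(n, bits=BITS):
    # Invert the bits and add 1, staying inside the fixed width.
    return (~n + 1) & ((1 << bits) - 1)

minus_one = twos_complement(1)      # 0xFFFF, the 16-bit pattern for -1
result = (1000 + minus_one) & MASK
print(result)                       # 999 -- subtraction done with only an adder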
Context.





As far as the computer is concerned, there is no difference. It's up to the programmer to determine how the program is going to interpret the data.





Most programmers work in high-level languages where this is managed by declaring the "type" of the data in question (character/string or integer/floating-point), so we don't really think about it all that much. But at the nitty-gritty lowest level of the machine, everything is just bits, and it gets treated the same way regardless of what it is.
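
As a small added illustration (a Python sketch assuming nothing beyond the standard library), the very same two bytes can be read as text or as a number, depending purely on how the program chooses to interpret them:

raw = bytes([0x48, 0x69])                # the bit pattern 01001000 01101001

as_text = raw.decode("ascii")            # interpret the bytes as characters
as_number = int.from_bytes(raw, "big")   # interpret the same bytes as one integer

print(as_text)     # Hi
print(as_number)   # 18537 -- same bits, different interpretation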
The computer itself has software (its operating system) that it uses to decode all the 1's and 0's into instructions.
