
Literals - the data in itself

Let's start with a simple experiment - take a look at the snippet in the editor. 
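The snippet itself lives in the course editor, so it isn't reproduced here; a minimal stand-in (assuming the classic pair of lines, which is a guess on our part) might look like this:

```python
# A string literal (note the quotes) and an integer literal (no quotes):
print("2")   # the string "2"
print(2)     # the integer 2
```

Running these two lines produces two identical lines of output.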

The first line looks familiar. The second seems to be erroneous due to the visible lack of quotes. 

If everything went okay, you should now see two identical lines. 

What happened? What does it mean? 

Through this example, you encounter two different types of literals: 

  • string, which you already know. 
  • and an integer number, something completely new. 

The print() function presents them both in exactly the same way - this example is obvious, as their human-readable representation is also the same. Internally, in the computer's memory, these two values are stored in completely different ways - the string exists as just a string, a series of letters. 

The number is converted into machine representation (a set of bits). The print() function is able to show them both in a form readable to humans. 
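One way to peek at this internal difference from within Python is the built-in type() function. It hasn't been introduced in the lesson yet, so treat this as a small preview:

```python
# Identical on screen, different in memory:
print("2", 2)        # both appear as: 2
print(type("2"))     # <class 'str'> - a string
print(type(2))       # <class 'int'> - an integer
```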

We're now going to be spending some time discussing numeric literals and their internal life. 

 

Integers

You may already know a little about how computers perform calculations on numbers. Perhaps you've heard of the binary system, and know that it's the system computers use for storing numbers, and that they can perform any operation on them. 

We won't explore the intricacies of positional numeral systems here, but we'll say that the numbers handled by modern computers are of two types: 

  • integers, that is, those which are devoid of the fractional part; 
  • and floating-point numbers (or simply floats), which contain (or are able to contain) a fractional part. 

This definition is not entirely accurate, but quite sufficient for now. The distinction is very important, and the boundary between these two types of numbers is very strict. Both of these kinds of numbers differ significantly in how they're stored in a computer's memory and in the range of acceptable values. 

The characteristic of the numeric value which determines its kind, range, and application, is called the type.

If you encode a literal and place it inside Python code, the form of the literal determines the representation (type) Python will use to store it in the memory. 
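A short preview of this rule in action (again using the type() function, which the lesson hasn't formally introduced yet): the form of the literal alone selects the type.

```python
# The literal's form determines the type Python assigns in memory:
print(type(11))     # <class 'int'>   - no fractional part
print(type(11.0))   # <class 'float'> - a fractional part is present
```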

For now, let's leave the floating-point numbers aside (we'll come back to them soon) and consider the question of how Python recognizes integers. 

The process is almost like how you would write them with a pencil on paper - it's simply a string of digits that make up the number. But there's one reservation: you must not insert any characters that are not digits into the number.

Take, for example, the number eleven million one hundred and eleven thousand one hundred and eleven. If you took a pencil in your hand right now, you would write the number like this: 11,111,111, or like this: 11.111.111, or even like this: 11 111 111. 

It's clear that such grouping makes the number easier to read, especially when it consists of many digits. However, Python doesn't accept any of these separators - they're prohibited. What Python does allow, though, is the use of underscores in numeric literals. * 

Therefore, you can write this number either like this: 11111111, or like that: 11_111_111. 

NOTE   *Python 3.6 has introduced underscores in numeric literals, allowing for placing single underscores between digits and after base specifiers for improved readability. This feature is not available in older versions of Python. 
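You can verify for yourself that the underscores are purely cosmetic - both spellings denote exactly the same integer:

```python
# Underscores in a numeric literal don't change its value (Python 3.6+):
print(11111111)                   # 11111111
print(11_111_111)                 # 11111111
print(11111111 == 11_111_111)     # True
```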

And how do we code negative numbers in Python? As usual - by adding a minus. You can write -11111111 or -11_111_111.

Positive numbers do not need to be preceded by the plus sign, but it's permissible, if you wish to do it. The following lines describe the same number: +11111111 and 11111111.
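The two sign rules can be checked in the same way - a leading minus makes the literal negative, while a leading plus is allowed but changes nothing:

```python
# A minus sign negates the literal; a plus sign is permitted but redundant:
print(-11_111_111)                # -11111111
print(+11111111 == 11111111)      # True
```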