What’s New in Stable Python 3.9 | 10 Best Features in Python 3.9

The latest Python 3.9.0 stable version came out on October 5th, 2020. It is great news for all Python programmers, as it further stabilizes the Python standard library and removes a number of deprecated features, such as the Py_UNICODE_MATCH macro. Like all of you, we are excited to explore the new features of Python 3.9. Let's look at the top 10 features that are noteworthy in this new release.

1. Dictionary Update and Merge Operators

Dictionary update and merge operators have been added to the built-in dict class in this release: | and |=.

The | operator merges two dictionaries into a new one, and the |= operator updates a dictionary in place.

Below is an example code for better understanding:

To Merge Dictionaries: |

a = {'TechAffinity': 1, 'IT Services': 2, 'Tampa': 3}

b = {'TechAffinity': 'IT Services', 'Location': 'Tampa'}

a | b

{'TechAffinity': 'IT Services', 'IT Services': 2, 'Tampa': 3, 'Location': 'Tampa'}

b | a

{'TechAffinity': 1, 'Location': 'Tampa', 'IT Services': 2, 'Tampa': 3}

For Update: |=

a |= b

a

{'TechAffinity': 'IT Services', 'IT Services': 2, 'Tampa': 3, 'Location': 'Tampa'}

Remember that whenever the two dictionaries share a key, the rightmost value is kept. In other words, the last seen value always wins. Other dict operations follow the same behavior.

A Deep Look into the Feature:

Here, you can think of the | operator as analogous to concatenation (+) on lists, and |= as analogous to extend (+=) on lists.

In earlier versions (up to Python 3.8), there were already a few ways to merge and update dictionaries.

For example, in Python 3.8 you can use first_dict.update(second_dict). However, this modifies first_dict in place. To avoid that, you have to copy first_dict into a temporary variable and then run the update on the copy. Hence, you end up writing extra code to get the same merge behavior.
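As a quick sketch of the difference (the dictionary contents here are illustrative):

```python
# Pre-3.9: merging without mutating the originals needs a copy first.
first_dict = {"TechAffinity": 1, "Tampa": 3}
second_dict = {"TechAffinity": "IT Services", "Location": "Tampa"}

merged_old = first_dict.copy()
merged_old.update(second_dict)   # two steps, easy to get wrong

# Python 3.9: one expression, and the originals stay untouched.
merged_new = first_dict | second_dict

assert merged_old == merged_new
```

In both versions, values from the right-hand dictionary win on conflicting keys.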

You can also use {**first_dict, **second_dict}. However, this idiom is not easily discoverable, and it is harder to comprehend the intention behind the code. In addition, it ignores the types of the mappings and always produces a plain dict: for example, if first_dict is a defaultdict and second_dict is a plain dict, the result silently loses the defaultdict behavior.

Lastly, the collections module provides the ChainMap class, which can take two dictionaries as shown below:

ChainMap(first_dict, second_dict)

The result behaves like a merged dictionary, though with the opposite conflict rule: the first mapping takes priority on duplicate keys. Although this method is easy to use compared with the above two, it is not widely known or used. Additionally, it fails for subclasses of dict that have an incompatible __init__ method.
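A small sketch of ChainMap in action (the keys here are illustrative); note how the first mapping wins on conflicts, unlike | where the last value wins:

```python
from collections import ChainMap

first_dict = {"name": "TechAffinity", "city": "Tampa"}
second_dict = {"name": "Other", "country": "USA"}

merged = ChainMap(first_dict, second_dict)

# Lookups search the maps left to right, so the FIRST mapping wins.
assert merged["name"] == "TechAffinity"
# Keys missing from the first mapping fall through to the second.
assert merged["country"] == "USA"
```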

For Further Information: https://www.python.org/dev/peps/pep-0584

2. New Flexible, High-Performance PEG-Based Parser

Python 3.9 introduces a new PEG-based parser for CPython, replacing the previous LL(1)-based parser. The PEG parser offers more flexibility while maintaining performance and stability.

A Deep Look into the Feature:

The old CPython parser was LL(1) based: a top-down parser that reads input from left to right with a single token of lookahead. Its grammar is context-free, so the context of the tokens is not taken into account.

In other words, the PEG parser lifts the LL(1) restrictions on the Python grammar. Moreover, the old parser required a set of customizations and workarounds; the new parser eliminates them, which makes it easier to maintain and reduces maintenance cost in the long run.

Though parsers and grammars are simple to implement in LL(1), the restrictions inhibit expressing common constructs in a natural way, because the parser looks only one token ahead to distinguish possibilities.

The choice operator | is evaluated in order. Let's look at an example for better understanding.

A | B | C

With a context-free grammar, an LL(1) parser must deduce from the input string which alternative (A, B, or C) to expand, and ambiguity is possible. A PEG parser instead tries the first alternative and, only if it fails, moves on to the second, and so on. Hence, a PEG parser generates exactly one valid parse tree for a given string and is never ambiguous, unlike an LL(1) parser.

Also, the PEG parser directly generates the AST (Abstract Syntax Tree) nodes for a rule via grammar actions. Hence, the generation of intermediate steps is avoided.

Note that the PEG parser has been extensively tested, validated, and fine-tuned. As a result, it performs within roughly 10% of the old parser in both speed and memory consumption, largely because no intermediate parse tree is constructed.

For Further Information: https://www.python.org/dev/peps/pep-0617

3. New String Functions to Remove Prefix and Suffix

Two new methods have been added to the str object.

The first, str.removeprefix(prefix), removes a prefix from a string.

The second, str.removesuffix(suffix), removes a suffix from a string.

'TechAffinity_IT Services Company'.removeprefix('TechAffinity_')

# returns 'IT Services Company'

'TechAffinity_IT Services Company'.removesuffix('_IT Services Company')

# returns 'TechAffinity'

A Deep Look into the Feature:

Many tasks in a data science application involve manipulating text, such as removing a suffix or prefix from a string. The two new methods remove an unwanted prefix or suffix in a single call.

As we all know, a string is a sequence of characters, and each character has an index in the string. We can leverage these indexes along with the colon : to return a subset of the string, popularly termed slicing.

Internally, these methods check whether the string starts with the prefix (or ends with the suffix) and, if it does, return the string without it using str[:] slicing.
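A rough sketch of the pre-3.9 equivalent, written with startswith and slicing (the helper name is ours, not part of the standard library):

```python
def remove_prefix(s: str, prefix: str) -> str:
    # Pre-3.9 equivalent of str.removeprefix: check, then slice.
    if s.startswith(prefix):
        return s[len(prefix):]
    return s  # unchanged when the prefix is absent

assert remove_prefix("TechAffinity_IT Services", "TechAffinity_") == "IT Services"
assert remove_prefix("Tampa", "TechAffinity_") == "Tampa"
```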

Besides being included in the standard library, these methods give you an API that is consistent, performant, and less fragile than hand-rolled slicing.

For Further Information: https://www.python.org/dev/peps/pep-0616

4. Type Hinting For Built-in Generic Types

The parallel type hierarchy in the typing module is no longer needed, making it simpler to annotate programs.

Generic syntax is now supported by all standard collections that previously required the typing module. Instead of typing.List or typing.Dict in the signature of a function, you can use the built-in list or dict collection types directly as generic types. As a result, the code looks cleaner and is easier to comprehend and explain.

A Deep Look into the Feature:

Since Python is a dynamically typed language, annotating types in a Python program enables introspection of those types. Annotations can then be used, for example, to build APIs for runtime type checking.

Usually, a generic type is a container, say, a list. It is a type that can be parameterized. A parameterized generic is an instance of a generic with the expected types for container elements e.g. list[str].

In previous releases, static typing features were added to Python incrementally, constrained by the existing syntax and runtime behavior. As a result, generics forced a duplicate collection hierarchy to live in the typing module.
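A small sketch of the new syntax (the function and data are illustrative):

```python
# Python 3.9: built-in collection types work directly as generics.
def count_words(lines: list[str]) -> dict[str, int]:
    counts: dict[str, int] = {}
    for line in lines:
        for word in line.split():
            counts[word] = counts.get(word, 0) + 1
    return counts

# Before 3.9 this signature needed: from typing import List, Dict
assert count_words(["a b a"]) == {"a": 2, "b": 1}
```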

For Further Information: https://www.python.org/dev/peps/pep-0585

5. Support For IANA timezone In DateTime

The new zoneinfo module gives the standard library access to the IANA time zone database.

IANA time zones are often referred to as tz or zoneinfo. The database contains a large number of time zones, each identified by a key, typically of the form Continent/City, which you can pass to ZoneInfo to set a datetime object's tzinfo.

dt = datetime(2000, 1, 25, 1, tzinfo=ZoneInfo("Europe/London"))

When you pass in an invalid key, zoneinfo.ZoneInfoNotFoundError is raised.

A Deep Look into the Feature:

You can use the datetime library to create a datetime object and specify its time zone by setting the tzinfo attribute. However, building time zone rules directly on the datetime.tzinfo base class quickly becomes complex.

Generally, it is safest to work with datetimes in UTC, in the system local time, or in an IANA time zone.

You can create a zoneinfo.ZoneInfo(key) object, where key is a string naming the zone file in the system time zone database, and set it as the tzinfo attribute of a datetime object:

from zoneinfo import ZoneInfo

from datetime import datetime

dt = datetime(2000, 1, 25, 1, tzinfo=ZoneInfo("America/Los_Angeles"))
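As a sketch of why this is useful, an aware datetime can be converted between IANA zones with astimezone (the example assumes the system's tz database contains these zones):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Construct an aware datetime and convert the same instant to London time.
dt = datetime(2020, 10, 5, 12, 0, tzinfo=ZoneInfo("America/New_York"))
dt_london = dt.astimezone(ZoneInfo("Europe/London"))

# 12:00 EDT (UTC-4) is 17:00 BST (UTC+1) on that date.
print(dt_london.isoformat())
```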

For Further Information: https://www.python.org/dev/peps/pep-0615

6. Ability to Cancel Concurrent Futures

A new cancel_futures parameter has been added to concurrent.futures.Executor.shutdown().

This new parameter cancels all pending futures that have not started running. In previous versions, the process would wait for all pending futures to complete before shutting down the executor.

A Deep Look into the Feature:

The new parameter is available in both ThreadPoolExecutor and ProcessPoolExecutor. When its value is True, all pending futures are cancelled when shutdown() is called; once there are no pending work items, the workers are shut down.
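A minimal sketch of the behavior (the sleep durations are arbitrary):

```python
import time
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=1)
running = executor.submit(time.sleep, 0.5)                 # starts immediately
pending = [executor.submit(time.sleep, 0.5) for _ in range(5)]

# Python 3.9: cancel everything that has not started yet.
executor.shutdown(wait=True, cancel_futures=True)

assert all(f.cancelled() for f in pending)  # queued work was cancelled
assert running.done()                        # the running task still completed
```

Without cancel_futures=True, shutdown(wait=True) would block until all six sleeps had run to completion.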

Link: https://bugs.python.org/issue30966

7. AsyncIO and Multiprocessing Improvements

This release brings a number of improvements to the asyncio and multiprocessing libraries.

For instance, the reuse_address parameter of asyncio.loop.create_datagram_endpoint() is no longer supported, due to security concerns.

Also, the new coroutine functions shutdown_default_executor() and asyncio.to_thread() have been added. shutdown_default_executor() schedules a shutdown of the default executor and waits for its ThreadPoolExecutor to finish closing. asyncio.to_thread() is predominantly used to run IO-bound functions in a separate thread so they don't block the event loop.

On the multiprocessing side, a new close() method has been added to the multiprocessing.SimpleQueue class. It explicitly closes the queue, ensuring it doesn't stay open longer than expected. Remember that get(), put(), and empty() must not be called once the queue is closed.
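A short sketch of asyncio.to_thread (the blocking function here just sleeps to stand in for real IO):

```python
import asyncio
import time

def blocking_io() -> str:
    time.sleep(0.1)   # stands in for a blocking call (file, socket, etc.)
    return "done"

async def main() -> str:
    # Runs blocking_io in a separate thread; the event loop stays responsive.
    return await asyncio.to_thread(blocking_io)

result = asyncio.run(main())
assert result == "done"
```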

Link: https://bugs.python.org/issue30966

8. Consistent Package Import Errors

In earlier versions of Python, one notable issue with importing was the inconsistent behavior when a relative import went past its top-level package.

In that case, builtins.__import__() raised ValueError, whereas importlib.__import__() raised ImportError.

Now, __import__() raises ImportError instead of ValueError.

For Further Information: https://bugs.python.org/issue37444

9. Random Bytes Generation

Another new feature in Python 3.9 is the random.Random.randbytes() method, which you can use to generate random bytes.

You could already generate random numbers, but what do you do when you need random bytes? Before Python 3.9 you had to get creative: os.getrandom(), secrets.token_bytes(), and os.urandom() can all produce random bytes, but none of them can generate reproducible pseudo-random sequences.

For example, when you want random data to be generated with reproducible behavior, you seed a random.Random instance. Hence, the random.Random.randbytes() method was introduced.
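A quick sketch of the reproducibility this enables (the seed value is arbitrary):

```python
import random

# Two generators with the same seed produce the same byte sequence.
rng1 = random.Random(42)
rng2 = random.Random(42)

sample1 = rng1.randbytes(8)
sample2 = rng2.randbytes(8)

assert sample1 == sample2  # reproducible, unlike os.urandom()
assert len(sample1) == 8
```

Note that this is pseudo-random output intended for simulations and tests; for security-sensitive tokens, secrets.token_bytes() remains the right tool.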

For Further Information: https://bugs.python.org/issue40286

10. String Replace Function Fix

In previous versions of Python, "".replace("", s, n) returned an empty string instead of s for all non-zero n. Since the bug confused the developer community and caused inconsistent behavior in applications, it is fixed in the Python 3.9 release.

The new release makes the behavior consistent with "".replace("", s), which already returned s.

Given an optional maximum replace count, the replace method substitutes occurrences of an old substring with a new one.

s.replace(old, new[, count])

The above call returns a copy of the string s with occurrences of old replaced by new. When the optional count argument is given, only the first count occurrences are replaced.
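A quick sketch of the method with and without the count argument (the strings are illustrative):

```python
s = "a-b-c-d"

assert s.replace("-", "+") == "a+b+c+d"      # all occurrences replaced
assert s.replace("-", "+", 2) == "a+b+c-d"   # only the first two replaced

# The edge case fixed in Python 3.9:
assert "".replace("", "x", 1) == "x"         # previously returned ''
```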

To further explain the issue, prior to version 3.9 the replace method behaved inconsistently:

"".replace("", "TechAffinity", 3)

# returned '', while one would expect 'TechAffinity'

"".replace("", "Tampa", 5)

# returned '', while one would expect 'Tampa'

"".replace("", "IT Services")

# with no count argument, this already returned 'IT Services', hence the inconsistency

Therefore, in Python 3.9, "".replace("", s, n) returns s instead of an empty string for all non-zero n.

For Further Information: https://bugs.python.org/issue28029

We, at TechAffinity, handle complex websites and web apps with proficient Python developers who have hands-on experience implementing new Python features in web projects. If you want to build a new web project or improve your existing web projects with Python development, you can shoot an email to media@techaffinity.com or schedule a call with our expert team.

Originally published at https://techaffinity.com on October 6, 2020.
