Data Science, ML and Analytics Engineering

Python logging HOWTO: logging vs loguru

In this post we will try to choose a library for logging in Python. Logs help you record and understand what went wrong in your service. Informational messages are also often written to logs: for example, parameters, quality metrics, and model training progress. Here is an example of a snippet of a model training log:

An example of a piece of model training log

logging vs loguru

The logging library is the standard Python logging solution. It is often criticized for the complexity of its configuration and the inconvenience of setting different logging levels and rotating log files.

Loguru, on the contrary, positions itself as the simplest logging library for Python.

Let’s try to sort out the pros and cons of each library and choose the best solution for our tasks.

Basic usage

Let's compare the code for the most basic logging. We will write the log to a file.

import logging
logging.basicConfig(filename='logs/logs.log', level=logging.DEBUG)
logging.debug('Error')
logging.info('Information message')
logging.warning('Warning')

Let's analyze the code a little. basicConfig creates the basic configuration for our logging: filename is the path to the log file and level is the logging level. With level=logging.DEBUG, records of every level are written; nothing is filtered out.
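To make the level threshold concrete, here is a small sketch (a hypothetical logger writing to an in-memory buffer instead of a file) showing that records below the configured level are dropped:

```python
import io
import logging

# hypothetical logger writing to an in-memory buffer instead of a file
buffer = io.StringIO()
demo = logging.getLogger('level_demo')
demo.setLevel(logging.INFO)               # threshold: INFO and above pass
demo.addHandler(logging.StreamHandler(buffer))

demo.debug('debug detail')                # below INFO, filtered out
demo.info('Information message')          # INFO, written
demo.warning('Warning')                   # WARNING, written

print(buffer.getvalue())
```

With level=logging.DEBUG instead, all three messages would appear in the buffer.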

from loguru import logger
logger.add('logs/logs.log', level='DEBUG')
logger.debug('Error')
logger.info('Information message')
logger.warning('Warning')

I think everything is clear here and the code looks very similar. But let’s look at the result in the file and console.

INFO:root:Information message
2021-02-04 17:44:10.914 | DEBUG    | __main__:<module>:14 - Error
2021-02-04 17:44:10.915 | INFO     | __main__:<module>:15 - Information message
2021-02-04 17:44:10.915 | WARNING  | __main__:<module>:16 - Warning
Loguru result

Both in the console and in the file, Loguru's output looks more informative right out of the box.


Let's now try to do some formatting in logging. For this there is the setFormatter method. We will display the time of the event, its level and the message itself, as Loguru does.

import logging
import sys

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
fileHandler = logging.FileHandler('logs/logs.log')
fileHandler.setFormatter(logging.Formatter(fmt='[%(asctime)s: %(levelname)s] %(message)s'))
logger.addHandler(fileHandler)
streamHandler = logging.StreamHandler(stream=sys.stdout)
streamHandler.setFormatter(logging.Formatter(fmt='[%(asctime)s: %(levelname)s] %(message)s'))
logger.addHandler(streamHandler)
logger.debug('Error')
logger.info('Information message')
logger.warning('Warning')

Wow! That is a lot more code. Let's take a look at the new classes and methods. First, there are Handler classes: FileHandler to write to a file and StreamHandler to write to the console. Then you need to attach them to our logger with the addHandler method. You will find a few more handlers in the documentation.
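Since each handler is attached separately, handlers can also filter at different levels. Here is a sketch (with hypothetical in-memory buffers standing in for the file and the console) where the "file" records everything and the "console" shows only warnings:

```python
import io
import logging

log = logging.getLogger('split_demo')
log.setLevel(logging.DEBUG)                    # let everything reach the handlers

file_buffer = io.StringIO()                    # stands in for a FileHandler
file_handler = logging.StreamHandler(file_buffer)
file_handler.setLevel(logging.DEBUG)           # the "file" records everything
log.addHandler(file_handler)

console_buffer = io.StringIO()                 # stands in for the console stream
console_handler = logging.StreamHandler(console_buffer)
console_handler.setLevel(logging.WARNING)      # the "console" shows only WARNING and up
log.addHandler(console_handler)

log.debug('debug detail')
log.warning('Warning')
```

This pattern is handy when you want verbose files for debugging but a quiet console.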

Now let’s take a look at the Formatter class. From the name it is clear that this class is responsible for the format of the recording of our log. In our example, in addition to the message itself, we added the recording time and its type. Now our log looks like this:

[2021-02-04 21:28:28,283: DEBUG] Error
[2021-02-04 21:28:28,283: INFO] Information message
[2021-02-04 21:28:28,283: WARNING] Warning

Loguru also has formatting. This is done like this: logger.add('logs/logs.log', level='DEBUG', format="{time} {level} {message}")

Rotation / retention / compression

It is often necessary to rotate the log, that is, to archive, clear, or start a new log file at a specified frequency. For example, this is needed when the logs become too large. Let's compare the code again.

import logging
import time
from logging.handlers import RotatingFileHandler

def create_rotating_log(path):
    logger = logging.getLogger("Rotating Log")
    logger.setLevel(logging.INFO)
    handler = RotatingFileHandler(path, maxBytes=20, backupCount=5)
    logger.addHandler(handler)
    for i in range(10):
        logger.info("This is test log line %s" % i)
        time.sleep(1.5)

if __name__ == "__main__":
    log_file = "test.log"
    create_rotating_log(log_file)

This uses RotatingFileHandler: maxBytes is the maximum file size and backupCount is how many backup files to keep.
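For rotation by time rather than by size, the standard library also provides TimedRotatingFileHandler. A sketch with hypothetical settings (start a new file at midnight, keep 7 old files):

```python
import logging
from logging.handlers import TimedRotatingFileHandler

# hypothetical file name; a new file is started at midnight, 7 backups are kept
handler = TimedRotatingFileHandler('timed.log', when='midnight', backupCount=7)
handler.setFormatter(logging.Formatter('[%(asctime)s: %(levelname)s] %(message)s'))

timed_logger = logging.getLogger('timed_demo')
timed_logger.setLevel(logging.INFO)
timed_logger.addHandler(handler)
timed_logger.info('This line goes to timed.log until midnight')
```

The when parameter also accepts values like 'S', 'M', 'H' and 'D' for second-, minute-, hour- and day-based rotation.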

Let's see how it can be done in Loguru:

logger.add("file_1.log", rotation="500 MB")    # Automatically rotate too big file 
logger.add("file_2.log", rotation="12:00")     # New file is created each day at noon 
logger.add("file_3.log", rotation="1 week")    # Once the file is too old, it's rotated 
logger.add("file_X.log", retention="10 days")  # Cleanup after some time 
logger.add("file_Y.log", compression="zip")    # Save some loved space

Again, everything looks simpler and more convenient.

Exception Handling

Logging has a separate method for this: exception. It is quite convenient, and you need to call it inside the except part of a try/except block.

import logging

def my_function(x, y, z):
    return x / (y * z)

try:
    my_function(1, 2, 0)
except ZeroDivisionError:
    logging.exception("ZeroDivisionError")

The log will look like this:

Traceback (most recent call last):
   File "", line 5, in 
     my_function(1, 2, 0)
   File "", line 3, in my_function
     return x / (y * z)
ZeroDivisionError: division by zero

In Loguru, we will need to use the @logger.catch decorator:

from loguru import logger

@logger.catch
def my_function(x, y, z):
    return x / (y * z)

my_function(1, 2, 0)

Wow! The log looks great. It even shows the values of the variables:

Loguru exception handling


In this post, I took a quick look at two libraries for logging in Python. Both libraries have a number of other features and capabilities; I recommend reading more about them in the documentation. I love Loguru: it is great for machine learning pipelines and for training neural networks, and it works well in small services too. The only downside to Loguru that I found is one more extra dependency in your project.


Loguru –

Logging –
