Python Logging: The Complete Guide to Logging in Python
You scatter print() statements across your code during development. It works for a quick debug. Then the project grows. You have 50 files, a production server, and no idea which print statement produced that cryptic line in the terminal. You cannot turn them off without deleting them. You cannot filter by severity. You cannot write to a file and the console at the same time. This is exactly the problem Python's built-in logging module solves. It gives you leveled, configurable, production-ready logging out of the box -- no third-party packages required.
Why Logging Beats Print Statements
Before diving into the logging module, it helps to understand why print() falls short in any project that grows beyond a single script.
| Feature | print() | logging |
|---|---|---|
| Severity levels | No | DEBUG, INFO, WARNING, ERROR, CRITICAL |
| Output to file | Manual redirect only | Built-in FileHandler |
| Timestamps | Must add manually | Built-in via formatters |
| Toggle on/off | Delete or comment out | Change log level |
| Multiple destinations | No | Multiple handlers |
| Thread safety | No | Yes |
| Module/source info | No | Automatic logger name, line number |
| Production ready | No | Yes |
The logging module ships with Python's standard library. There is nothing to install. You import it and start using it.
Basic Logging Setup with logging.basicConfig
The fastest way to start logging in Python is logging.basicConfig(). It configures the root logger with sensible defaults.
import logging
logging.basicConfig(level=logging.DEBUG)
logging.debug("This is a debug message")
logging.info("Application started")
logging.warning("Disk space is running low")
logging.error("Failed to connect to database")
logging.critical("System is shutting down")
Output:
DEBUG:root:This is a debug message
INFO:root:Application started
WARNING:root:Disk space is running low
ERROR:root:Failed to connect to database
CRITICAL:root:System is shutting down
By default (without basicConfig), the root logger only shows WARNING and above. Calling basicConfig(level=logging.DEBUG) lowers the threshold so you see everything.
Logging to a File
To write logs to a file instead of the console, pass the filename parameter:
import logging
logging.basicConfig(
    filename="app.log",
    level=logging.INFO,
    format="%(asctime)s - %(levelname)s - %(message)s",
)
logging.info("Application started")
logging.error("Something went wrong")
This creates app.log with timestamped entries:
2026-02-11 10:30:45,123 - INFO - Application started
2026-02-11 10:30:45,124 - ERROR - Something went wrong
Important: basicConfig() only works if the root logger has no handlers yet. If you call it again, the later call silently does nothing (unless you pass force=True, available since Python 3.8). This is the most common "logging does not work" issue beginners encounter.
Python Logging Levels Explained
The logging module defines five standard levels. Each level has a numeric value, and the logger only processes messages at or above its configured threshold.
| Level | Numeric Value | When to Use |
|---|---|---|
| DEBUG | 10 | Detailed diagnostic information. Used during development to trace program flow. |
| INFO | 20 | Confirmation that things are working as expected. Startup messages, successful operations. |
| WARNING | 30 | Something unexpected happened, or a potential problem. The application still works. |
| ERROR | 40 | A serious problem. The application could not perform a specific function. |
| CRITICAL | 50 | A fatal error. The application is about to crash or has crashed. |
The hierarchy works like a filter. If you set the level to WARNING, only WARNING, ERROR, and CRITICAL messages pass through. DEBUG and INFO are silently discarded.
import logging
logging.basicConfig(level=logging.WARNING)
logging.debug("This will NOT appear")
logging.info("This will NOT appear either")
logging.warning("This WILL appear")
logging.error("This WILL appear too")
Output:
WARNING:root:This WILL appear
ERROR:root:This WILL appear too
Choosing the Right Level
A common mistake is logging everything at INFO or DEBUG. Follow this rule of thumb:
- DEBUG: Variable values, function entry/exit, loop iterations. Turned off in production.
- INFO: "User logged in", "File processed", "Server started on port 8080".
- WARNING: "Config file not found, using defaults", "Deprecated function called".
- ERROR: "Database query failed", "API returned 500", caught exceptions you handle.
- CRITICAL: "Out of memory", "Cannot bind to port", unrecoverable failures.
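The level names above are plain integer constants on the logging module, which is what makes the threshold comparison work:

```python
import logging

# Each standard level name is an integer; a record is emitted only when
# its level number is >= the logger's effective level.
for name in ("DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"):
    print(name, getattr(logging, name))
# DEBUG 10
# INFO 20
# WARNING 30
# ERROR 40
# CRITICAL 50
```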
Formatters and Format Strings
The default format (LEVEL:name:message) is bare. Formatters let you control exactly what appears in each log line.
import logging
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s [%(levelname)-8s] %(name)s - %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
)
logging.info("Server started")
logging.error("Connection timeout")
Output:
2026-02-11 10:30:45 [INFO    ] root - Server started
2026-02-11 10:30:45 [ERROR   ] root - Connection timeout
Common Format Variables
| Variable | Description | Example Output |
|---|---|---|
| %(asctime)s | Timestamp | 2026-02-11 10:30:45,123 |
| %(levelname)s | Log level name | WARNING |
| %(name)s | Logger name | myapp.database |
| %(module)s | Module (filename without extension) | database |
| %(funcName)s | Function name | connect |
| %(lineno)d | Line number | 42 |
| %(message)s | The log message | Connection failed |
| %(filename)s | Filename | database.py |
| %(pathname)s | Full file path | /app/src/database.py |
| %(process)d | Process ID | 12345 |
| %(thread)d | Thread ID | 140735195286272 |
A production-ready format string might look like this:
FORMAT = "%(asctime)s [%(levelname)s] %(name)s (%(filename)s:%(lineno)d) - %(message)s"
Handlers: Where Your Logs Go
Handlers determine the destination of log messages. You can attach multiple handlers to a single logger -- write to a file and the console simultaneously.
StreamHandler (Console Output)
import logging
logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)
formatter = logging.Formatter("%(asctime)s [%(levelname)s] %(message)s")
console_handler.setFormatter(formatter)
logger.addHandler(console_handler)
logger.info("Visible on console")
logger.debug("NOT visible -- console handler level is INFO")
FileHandler (Write to File)
import logging
logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)
file_handler = logging.FileHandler("app.log")
file_handler.setLevel(logging.DEBUG)
formatter = logging.Formatter("%(asctime)s [%(levelname)s] %(name)s - %(message)s")
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
logger.debug("This goes to app.log")
RotatingFileHandler (Prevent Giant Log Files)
In production, log files can grow to gigabytes. RotatingFileHandler automatically rotates files when they hit a size limit.
import logging
from logging.handlers import RotatingFileHandler
logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)
handler = RotatingFileHandler(
    "app.log",
    maxBytes=5 * 1024 * 1024,  # 5 MB per file
    backupCount=3,             # Keep 3 old files
)
handler.setFormatter(logging.Formatter("%(asctime)s [%(levelname)s] %(message)s"))
logger.addHandler(handler)
# When app.log reaches 5 MB, it becomes app.log.1,
# the previous app.log.1 becomes app.log.2, and so on.
# app.log.3 is deleted when a new rotation happens.
TimedRotatingFileHandler (Rotate by Time)
import logging
from logging.handlers import TimedRotatingFileHandler
handler = TimedRotatingFileHandler(
    "app.log",
    when="midnight",  # Rotate at midnight
    interval=1,       # Every 1 day
    backupCount=30,   # Keep 30 days of logs
)
Combining Multiple Handlers
A common pattern: DEBUG-level logs go to a file, WARNING-level and above go to the console.
import logging
logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)
# Console: only warnings and above
console = logging.StreamHandler()
console.setLevel(logging.WARNING)
console.setFormatter(logging.Formatter("[%(levelname)s] %(message)s"))
# File: everything
file_handler = logging.FileHandler("debug.log")
file_handler.setLevel(logging.DEBUG)
file_handler.setFormatter(
    logging.Formatter("%(asctime)s [%(levelname)s] %(name)s:%(lineno)d - %(message)s")
)
logger.addHandler(console)
logger.addHandler(file_handler)
logger.debug("Only in the file")
logger.info("Only in the file")
logger.warning("In the file AND on the console")
logger.error("In the file AND on the console")
Logger Hierarchy and getLogger
Calling logging.getLogger("myapp.database") does not create an isolated logger. It creates a logger that is a child of myapp, which is itself a child of the root logger. This hierarchy is the single most important concept in Python logging.
import logging
# Parent logger
parent = logging.getLogger("myapp")
parent.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("[%(name)s] %(levelname)s: %(message)s"))
parent.addHandler(handler)
# Child logger -- inherits handler from "myapp"
child = logging.getLogger("myapp.database")
child.warning("Connection pool exhausted")
# Output: [myapp.database] WARNING: Connection pool exhausted
The child logger myapp.database has no handlers of its own, but the message still appears. Why? Log messages propagate upward through the hierarchy. The child passes its message to the parent myapp, which has a handler attached.
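Levels inherit the same way. A logger whose own level is unset (NOTSET) defers to its nearest configured ancestor; the names below ("shop", "shop.cart") are illustrative:

```python
import logging

parent = logging.getLogger("shop")
parent.setLevel(logging.INFO)

child = logging.getLogger("shop.cart")    # own level stays NOTSET (0)
print(child.level)                         # 0
print(child.getEffectiveLevel())           # 20 -- inherited from "shop"
print(child.isEnabledFor(logging.DEBUG))   # False: 10 < 20
```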
The __name__ Pattern
The standard practice is to create one logger per module using __name__:
# In file: myapp/database.py
import logging
logger = logging.getLogger(__name__)
# __name__ is "myapp.database" when imported as part of the myapp package
def connect():
    logger.info("Connecting to database...")
# In file: myapp/api.py
import logging
logger = logging.getLogger(__name__)
# __name__ is "myapp.api"
def handle_request():
    logger.info("Handling request...")
This gives you automatic module-level granularity. You can turn on DEBUG logging for just myapp.database without flooding your console with messages from every other module.
logging.getLogger("myapp.database").setLevel(logging.DEBUG)
logging.getLogger("myapp.api").setLevel(logging.WARNING)
Propagation
By default, propagate is True, meaning messages travel up to parent loggers. If you add a handler to both a child and parent logger, you get duplicate messages. To fix this:
child_logger = logging.getLogger("myapp.database")
child_logger.addHandler(special_handler)
child_logger.propagate = False  # Stop messages from reaching the parent
Configuring Logging with dictConfig
For anything beyond trivial scripts, configure logging declaratively using dictConfig. It is cleaner, easier to maintain, and can be loaded from a YAML or JSON file.
import logging
import logging.config
LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "standard": {
            "format": "%(asctime)s [%(levelname)s] %(name)s: %(message)s",
        },
        "detailed": {
            "format": "%(asctime)s [%(levelname)s] %(name)s (%(filename)s:%(lineno)d) - %(message)s",
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "level": "INFO",
            "formatter": "standard",
            "stream": "ext://sys.stdout",
        },
        "file": {
            "class": "logging.handlers.RotatingFileHandler",
            "level": "DEBUG",
            "formatter": "detailed",
            "filename": "app.log",
            "maxBytes": 10485760,  # 10 MB
            "backupCount": 5,
        },
    },
    "loggers": {
        "myapp": {
            "level": "DEBUG",
            "handlers": ["console", "file"],
            "propagate": False,
        },
        "myapp.database": {
            "level": "WARNING",
            "handlers": ["file"],
            "propagate": False,
        },
    },
    "root": {
        "level": "WARNING",
        "handlers": ["console"],
    },
}
logging.config.dictConfig(LOGGING_CONFIG)
logger = logging.getLogger("myapp")
logger.info("Application started with dictConfig")
Loading Config from a YAML File
import logging
import logging.config
import yaml
with open("logging.yaml", "r") as f:
    config = yaml.safe_load(f)
logging.config.dictConfig(config)
Example logging.yaml:
version: 1
disable_existing_loggers: false
formatters:
  standard:
    format: "%(asctime)s [%(levelname)s] %(name)s: %(message)s"
handlers:
  console:
    class: logging.StreamHandler
    level: INFO
    formatter: standard
    stream: ext://sys.stdout
  file:
    class: logging.handlers.RotatingFileHandler
    level: DEBUG
    formatter: standard
    filename: app.log
    maxBytes: 10485760
    backupCount: 5
root:
  level: DEBUG
  handlers: [console, file]
Logging in Multi-Module Projects
In a real project, you configure logging once at the entry point and let every other module use getLogger(__name__).
myapp/
    __init__.py
    main.py        # Configure logging here
    database.py    # getLogger(__name__)
    api.py         # getLogger(__name__)
    utils.py       # getLogger(__name__)
# myapp/main.py
import logging
import logging.config
from myapp import database, api
LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "standard": {
            "format": "%(asctime)s [%(levelname)s] %(name)s: %(message)s"
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "standard",
        },
    },
    "root": {
        "level": "INFO",
        "handlers": ["console"],
    },
}
def main():
    logging.config.dictConfig(LOGGING_CONFIG)
    logger = logging.getLogger(__name__)
    logger.info("Starting application")
    database.connect()
    api.start_server()
if __name__ == "__main__":
    main()
# myapp/database.py
import logging
logger = logging.getLogger(__name__)
def connect():
    logger.info("Connecting to PostgreSQL on localhost:5432")
    # ... connection logic
    logger.debug("Connection pool initialized with 10 connections")
Each module logs with its own name (myapp.database, myapp.api), and the configuration in main.py controls what gets printed and where.
Logging Exceptions with Tracebacks
When you catch an exception, log it with exc_info=True or use the logger.exception() shortcut to include the full traceback.
import logging
logger = logging.getLogger(__name__)
def divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        logger.exception("Division failed for a=%s, b=%s", a, b)
        return None
divide(10, 0)
Output:
ERROR:__main__:Division failed for a=10, b=0
Traceback (most recent call last):
  File "example.py", line 7, in divide
    return a / b
ZeroDivisionError: division by zero
logger.exception() always logs at ERROR level and includes the traceback. If you need a different level, use exc_info=True explicitly:
logger.warning("Non-critical failure", exc_info=True)
Structured Logging and JSON Output
Plain text logs are hard to search and parse at scale. Structured logging outputs each log entry as a JSON object, which tools like Elasticsearch, Datadog, and CloudWatch can ingest directly.
Using python-json-logger
import logging
from pythonjsonlogger import jsonlogger
logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
formatter = jsonlogger.JsonFormatter(
    "%(asctime)s %(levelname)s %(name)s %(message)s",
    rename_fields={"asctime": "timestamp", "levelname": "level"},
)
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.info("User logged in", extra={"user_id": 42, "ip": "192.168.1.10"})
Output:
{"timestamp": "2026-02-11 10:30:45,123", "level": "INFO", "name": "myapp", "message": "User logged in", "user_id": 42, "ip": "192.168.1.10"}
Rolling Your Own JSON Formatter
If you want to avoid external dependencies:
import logging
import json
from datetime import datetime, timezone
class JSONFormatter(logging.Formatter):
    def format(self, record):
        log_entry = {
            "timestamp": datetime.fromtimestamp(
                record.created, tz=timezone.utc
            ).isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "module": record.module,
            "function": record.funcName,
            "line": record.lineno,
        }
        if record.exc_info and record.exc_info[0]:
            log_entry["exception"] = self.formatException(record.exc_info)
        # Merge any extra fields
        for key, value in record.__dict__.items():
            if key not in logging.LogRecord(
                "", 0, "", 0, "", (), None
            ).__dict__ and key not in log_entry:
                log_entry[key] = value
        return json.dumps(log_entry)
# Usage
handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
logger.info("Order processed", extra={"order_id": "ORD-12345", "amount": 99.99})
Performance Considerations
Logging is not free. In hot loops or latency-sensitive code, the overhead matters.
1. Use Lazy String Formatting
# BAD -- string is formatted even if DEBUG is disabled
logger.debug(f"Processing item {item.id} with data {item.to_dict()}")
# GOOD -- string is only formatted if DEBUG is enabled
logger.debug("Processing item %s with data %s", item.id, item.to_dict())
The %s style defers string formatting until the message is actually emitted. If the logger's level is higher than DEBUG, the formatting never happens.
2. Guard Expensive Operations
If computing the log message itself is expensive, check the level first:
if logger.isEnabledFor(logging.DEBUG):
    # This expensive computation only runs when DEBUG is on
    stats = compute_detailed_stats(data)
    logger.debug("Detailed stats: %s", stats)
3. Avoid Logging in Tight Loops
# BAD -- logs 1 million times
for row in rows:  # 1,000,000 rows
    logger.debug("Processing row: %s", row)
# GOOD -- log summary instead
logger.info("Processing %d rows", len(rows))
for row in rows:
    process(row)
logger.info("Finished processing all rows")
4. Use QueueHandler for High-Throughput Applications
In multi-threaded applications, logging can become a bottleneck because handlers acquire locks. QueueHandler sends log records to a queue, and a separate thread processes them.
import logging
import logging.handlers
import queue
log_queue = queue.Queue()
queue_handler = logging.handlers.QueueHandler(log_queue)
# Actual handlers that do the I/O
console_handler = logging.StreamHandler()
file_handler = logging.FileHandler("app.log")
# QueueListener processes records from the queue in a background thread
listener = logging.handlers.QueueListener(
    log_queue, console_handler, file_handler
)
listener.start()
logger = logging.getLogger("myapp")
logger.addHandler(queue_handler)
logger.setLevel(logging.DEBUG)
logger.info("This is non-blocking")
# When shutting down
listener.stop()
Real-World Patterns
Pattern 1: Application-Wide Logger Setup
# config/logging_setup.py
import logging
import logging.config
import os
def setup_logging(log_level=None):
    """Configure logging for the entire application."""
    level = log_level or os.environ.get("LOG_LEVEL", "INFO")
    config = {
        "version": 1,
        "disable_existing_loggers": False,
        "formatters": {
            "standard": {
                "format": "%(asctime)s [%(levelname)s] %(name)s: %(message)s",
            },
        },
        "handlers": {
            "console": {
                "class": "logging.StreamHandler",
                "formatter": "standard",
                "level": level,
            },
            "file": {
                "class": "logging.handlers.RotatingFileHandler",
                "formatter": "standard",
                "filename": os.environ.get("LOG_FILE", "app.log"),
                "maxBytes": 10 * 1024 * 1024,
                "backupCount": 5,
                "level": "DEBUG",
            },
        },
        "root": {
            "level": "DEBUG",
            "handlers": ["console", "file"],
        },
    }
    logging.config.dictConfig(config)
Pattern 2: Request Context Logging (Web Applications)
import logging
import uuid
import threading
# Thread-local storage for request context
_request_context = threading.local()
class RequestContextFilter(logging.Filter):
    """Adds request_id to every log record."""
    def filter(self, record):
        record.request_id = getattr(_request_context, "request_id", "no-request")
        return True

def setup_request_logging():
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(
        "%(asctime)s [%(levelname)s] [%(request_id)s] %(name)s: %(message)s"
    ))
    handler.addFilter(RequestContextFilter())
    logger = logging.getLogger("webapp")
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)
    return logger

# In your request handler
def handle_request(request):
    _request_context.request_id = str(uuid.uuid4())[:8]
    logger = logging.getLogger("webapp")
    logger.info("Request received: %s %s", request.method, request.path)
    # ... process request
    logger.info("Request completed")
Pattern 3: Logging Decorator for Function Calls
import logging
import functools
import time
def log_calls(logger=None, level=logging.DEBUG):
    """Decorator that logs function entry, exit, and duration."""
    def decorator(func):
        nonlocal logger
        if logger is None:
            logger = logging.getLogger(func.__module__)
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            func_name = func.__qualname__
            logger.log(level, "Calling %s", func_name)
            start = time.perf_counter()
            try:
                result = func(*args, **kwargs)
                elapsed = time.perf_counter() - start
                logger.log(level, "%s completed in %.3fs", func_name, elapsed)
                return result
            except Exception:
                elapsed = time.perf_counter() - start
                logger.exception("%s failed after %.3fs", func_name, elapsed)
                raise
        return wrapper
    return decorator

# Usage
@log_calls()
def fetch_data(url):
    import urllib.request
    with urllib.request.urlopen(url) as response:
        return response.read()
Common Mistakes and How to Fix Them
1. Calling basicConfig After Importing a Library That Logs
Some libraries and frameworks attach a handler to the root logger, or call basicConfig themselves, as a side effect of being imported. (Well-behaved libraries such as requests and urllib3 only add a NullHandler to their own loggers, but not every package is well behaved.) If any code touches the root logger's handlers before your basicConfig call, your configuration is silently ignored.
# BAD -- library may configure root logger before your basicConfig
import boto3
import logging
logging.basicConfig(level=logging.INFO) # May do nothing
# GOOD -- configure logging FIRST
import logging
logging.basicConfig(level=logging.INFO)
import boto3
Even better, use dictConfig with disable_existing_loggers: False instead of basicConfig.
2. Adding Handlers Multiple Times
If your setup code runs more than once (e.g., in a test suite), you get duplicate log messages.
# BAD -- each call adds another handler
def setup():
    logger = logging.getLogger("myapp")
    logger.addHandler(logging.StreamHandler())
# GOOD -- check before adding
def setup():
    logger = logging.getLogger("myapp")
    if not logger.handlers:
        logger.addHandler(logging.StreamHandler())
3. Using the Root Logger Directly
# BAD -- everything uses the root logger, no way to filter
logging.info("Database connected")
logging.info("API request received")
logging.info("Cache miss")
# GOOD -- each module has its own logger
logger = logging.getLogger(__name__)
logger.info("Database connected")
4. Silencing All Logging Accidentally
# BAD -- this silences EVERY logger in the application
logging.disable(logging.CRITICAL)
# GOOD -- silence only a specific noisy library
logging.getLogger("urllib3").setLevel(logging.WARNING)
logging.getLogger("boto3").setLevel(logging.WARNING)
5. Forgetting to Set the Logger Level
A logger has its own level independent of its handlers. If the logger level is higher than the handler level, messages are discarded before reaching the handler.
logger = logging.getLogger("myapp")
# logger.setLevel() not called -- defaults to WARNING
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG) # Handler accepts DEBUG
logger.addHandler(handler)
logger.debug("This will NOT appear")
# The logger itself filters out DEBUG before the handler ever sees it
# Fix: set the logger level too
logger.setLevel(logging.DEBUG)
Debugging and Logging in Jupyter Notebooks with RunCell
When working in Jupyter notebooks, logging has a specific quirk: the root logger often already has a handler attached by IPython. This means basicConfig may not work as expected, and logs can appear in unexpected places or not at all.
RunCell (opens in a new tab) is an AI agent designed for Jupyter environments. It understands the notebook execution context and can help you set up logging correctly inside notebooks, diagnose why log messages are not appearing, and trace issues across cells. Since RunCell operates inside your Jupyter session with access to your actual runtime state, it can inspect your logger hierarchy and pinpoint configuration issues that are invisible from a single cell's output.
For data science workflows where you need to track data pipeline steps -- which transformations ran, how many rows were filtered, what warnings the model raised -- proper logging combined with an AI assistant like RunCell keeps your analysis reproducible and debuggable.
Quick Reference: Logging Setup Cheat Sheet
# Minimal setup for scripts
import logging
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s",
)
# Per-module logger (recommended for packages)
import logging
logger = logging.getLogger(__name__)
# Log an exception with traceback
try:
    risky_operation()
except Exception:
    logger.exception("Operation failed")
# Silence noisy third-party libraries
logging.getLogger("urllib3").setLevel(logging.WARNING)
# Lazy formatting (always use this)
logger.info("Processed %d items in %.2fs", count, elapsed)
FAQ
What is the default logging level in Python?
The root logger defaults to WARNING (level 30). This means DEBUG and INFO messages are discarded unless you explicitly lower the threshold with basicConfig(level=logging.DEBUG) or logger.setLevel(logging.DEBUG).
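A quick check you can run in a fresh interpreter:

```python
import logging

# Before basicConfig or any setLevel call, the root logger's
# threshold is WARNING (numeric value 30).
root = logging.getLogger()
print(root.getEffectiveLevel())                     # 30
print(root.getEffectiveLevel() == logging.WARNING)  # True
```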
How do I log to both a file and the console in Python?
Create two handlers -- a StreamHandler for the console and a FileHandler for the file -- and add both to your logger. Each handler can have its own level and formatter, so you can show only warnings on screen while writing everything to the file.
What is the difference between logging.warning() and logger.warning()?
logging.warning() uses the root logger. logger.warning() uses a named logger you created with getLogger(). Always prefer named loggers because they give you granular control over which modules produce output.
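A related detail that makes named loggers practical: getLogger returns the same object every time it is called with the same name, so configuration done in one module applies everywhere that name is used:

```python
import logging

a = logging.getLogger("myapp")
b = logging.getLogger("myapp")
print(a is b)  # True -- getLogger caches loggers by name

# Configuring it once therefore affects every call site.
a.setLevel(logging.DEBUG)
print(b.level == logging.DEBUG)  # True
```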
Should I use logging or print for debugging?
Use logging. Even for quick debugging, logging.debug() is better because you can turn it off by changing the level instead of deleting print statements. In production code, print() should never be used for diagnostic output.
How do I configure logging from a file?
Use logging.config.dictConfig() with a dictionary loaded from YAML or JSON. This separates logging configuration from code and makes it easy to change behavior across environments (development, staging, production) without modifying source files.
What is structured logging and when should I use it?
Structured logging outputs log entries as JSON objects instead of plain text lines. Use it when your logs are consumed by centralized logging systems like Elasticsearch, Splunk, or CloudWatch. JSON logs are searchable, filterable, and can carry arbitrary metadata (user IDs, request IDs, durations) as key-value pairs.
Conclusion
Python's logging module is one of the most underused tools in the standard library. It replaces scattered print() statements with a system that scales from a single script to a multi-service application. The five severity levels give you precise control over what appears in your output. Handlers let you write to files, consoles, syslog, HTTP endpoints, and email simultaneously. Formatters standardize your log output. And the logger hierarchy means you configure logging once and every module in your project inherits the behavior automatically.
Start simple: basicConfig with a format string and a level. As your project grows, move to dictConfig for clean, declarative configuration. Use getLogger(__name__) in every module. Add RotatingFileHandler so your disk does not fill up. Switch to JSON output when you need machine-readable logs.
The investment pays off the first time you debug a production issue at 3 AM. Instead of guessing what happened, you open the log file and see exactly which function failed, what the input was, and what the error message said. That is the difference between logging and printing.