The Complete Guide to Logging for Python Developers


# Introduction
Most Python developers treat logging as an afterthought. They toss around print() statements during development, maybe switch to basic logging later, and assume that's enough. But when problems arise in production, they discover they lack the context needed to diagnose issues properly.
Appropriate logging gives you visibility into application behavior, performance patterns, and error conditions. With the right approach, you can track user actions, identify issues, and debug problems without reproducing them locally. Good logging turns troubleshooting from guesswork into systematic problem solving.
This article covers the essential logging patterns Python developers should know. You'll learn how to structure log messages for searchability, handle exceptions without losing context, and configure logging differently across environments. We'll start with the basics and work up to more advanced techniques you can apply to your projects right away. We will only use the standard library's logging module.
You can find the code on GitHub.
# Setting Up Your First Logger
Instead of jumping straight into complex configurations, let's understand what a logger actually does. We will create a basic logger that writes to both the console and a file.
import logging
logger = logging.getLogger('my_app')
logger.setLevel(logging.DEBUG)
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)
file_handler = logging.FileHandler('app.log')
file_handler.setLevel(logging.DEBUG)
formatter = logging.Formatter(
'%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
console_handler.setFormatter(formatter)
file_handler.setFormatter(formatter)
logger.addHandler(console_handler)
logger.addHandler(file_handler)
logger.debug('This is a debug message')
logger.info('Application started')
logger.warning('Disk space running low')
logger.error('Failed to connect to database')
logger.critical('System shutting down')
Here's what each piece of code does.
The getLogger() function creates a logger instance named 'my_app'. Think of it as creating a named channel for your logs. The name helps you see where logs come from in larger applications.
We set the logger's level to DEBUG, which means it will process all messages. Then we create two handlers: one for console output and one for file output. Handlers control where your logs go.
The console handler only displays INFO level and above, while the file handler captures everything, including DEBUG messages. This is useful because you want detailed logs in the file but clean output on the screen.
The formatter determines what your log messages look like. The format string uses placeholders such as %(asctime)s for the timestamp and %(levelname)s for the severity level.
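The formatter supports many more placeholders than the two used above. Here's a minimal sketch (the 'format_demo' logger name is illustrative) showing %(module)s, %(funcName)s, and %(lineno)d, plus the datefmt argument that controls how the timestamp is rendered:
import logging

logger = logging.getLogger('format_demo')
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler()
# %(module)s, %(funcName)s, and %(lineno)d pinpoint where the call happened
detailed = logging.Formatter(
    '%(asctime)s %(levelname)s %(module)s %(funcName)s:%(lineno)d - %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S'  # datefmt controls how %(asctime)s is rendered
)
handler.setFormatter(detailed)
logger.addHandler(handler)

logger.info('Formatter placeholders in action')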
# Understanding Logging Levels and When to Use Each
Python's logging module has five standard levels, and knowing when to use each one is essential for producing useful logs.
Here is an example:
import logging

logger = logging.getLogger('payment_processor')
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
logger.addHandler(handler)

def process_payment(user_id, amount):
    logger.debug(f'Starting payment processing for user {user_id}')
    if amount <= 0:
        logger.error(f'Invalid payment amount: {amount}')
        return False
    logger.info(f'Processing ${amount} payment for user {user_id}')
    if amount > 10000:
        logger.warning(f'Large transaction detected: ${amount}')
    try:
        # Simulate payment processing
        success = charge_card(user_id, amount)
        if success:
            logger.info(f'Payment successful for user {user_id}')
            return True
        else:
            logger.error(f'Payment failed for user {user_id}')
            return False
    except Exception as e:
        logger.critical(f'Payment system crashed: {e}', exc_info=True)
        return False

def charge_card(user_id, amount):
    # Simulated payment logic
    return True

process_payment(12345, 150.00)
process_payment(12345, 15000.00)
Let's break down when to use each level:
- DEBUG is for detailed information useful during development. Use it for variable values, loop iterations, or step-by-step execution tracking. It is usually disabled in production.
- INFO marks normal activities you want a record of. Starting a server, completing a task, or a successful transaction goes here. It confirms your application is working as expected.
- WARNING flags something unexpected but not yet breaking. This includes low disk space, deprecated API usage, or unusual but recoverable situations. The app keeps working, but someone should investigate.
- ERROR means something failed but the application can continue. Failed database queries, authentication failures, or network timeouts belong here. An operation failed, but the application keeps running.
- CRITICAL indicates serious problems that may crash the application or lose data. Use it sparingly, for catastrophic failures that require immediate attention.
If you run the above code, you will get:
DEBUG: Starting payment processing for user 12345
INFO: Processing $150.0 payment for user 12345
INFO: Payment successful for user 12345
DEBUG: Starting payment processing for user 12345
INFO: Processing $15000.0 payment for user 12345
WARNING: Large transaction detected: $15000.0
INFO: Payment successful for user 12345
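Each level also maps to a number: DEBUG is 10, INFO is 20, WARNING is 30, ERROR is 40, and CRITICAL is 50. Both loggers and handlers drop any record below their threshold. Here's a quick sketch, with a throwaway logger name, to see that filtering in action:
import logging

demo = logging.getLogger('level_demo')
demo.setLevel(logging.DEBUG)          # the logger passes everything through

handler = logging.StreamHandler()
handler.setLevel(logging.WARNING)     # the handler drops anything below WARNING (30)
demo.addHandler(handler)

demo.info('filtered out by the handler')   # INFO (20) < WARNING (30): not shown
demo.warning('this one gets through')      # WARNING (30): shown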
Next, let's look at how to capture exceptions without losing information.
# Logging Exceptions Properly
When an exception occurs, you need more than just an error message; you need the full stack trace. Here's how to capture exceptions effectively.
import json
import logging

logger = logging.getLogger('api_handler')
logger.setLevel(logging.DEBUG)
handler = logging.FileHandler('errors.log')
formatter = logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
handler.setFormatter(formatter)
logger.addHandler(handler)

def fetch_user_data(user_id):
    logger.info(f'Fetching data for user {user_id}')
    try:
        # Simulate API call
        response = call_external_api(user_id)
        data = json.loads(response)
        logger.debug(f'Received data: {data}')
        return data
    except json.JSONDecodeError as e:
        logger.error(
            f'Failed to parse JSON for user {user_id}: {e}',
            exc_info=True
        )
        return None
    except ConnectionError as e:
        logger.error(
            f'Network error while fetching user {user_id}',
            exc_info=True
        )
        return None
    except Exception as e:
        logger.critical(
            f'Unexpected error in fetch_user_data: {e}',
            exc_info=True
        )
        raise

def call_external_api(user_id):
    # Simulated API response
    return '{"id": ' + str(user_id) + ', "name": "John"}'

fetch_user_data(123)
The key here is the exc_info=True parameter. It tells the logger to include the exception traceback in the log output. Without it, you only get the error message, which is usually not enough to fix the problem.
Notice how we catch specific exceptions first, then a general Exception handler. The specific handlers let us write context-appropriate error messages. The general handler catches anything unexpected and re-raises it, because we don't know how to handle it safely.
Note also that we log at ERROR for expected exceptions (such as network errors) but at CRITICAL for unexpected ones. This distinction helps you prioritize when reviewing logs.
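As a shorthand, the logging module also provides logger.exception(), which logs at ERROR level and attaches the traceback automatically; it's intended to be called from inside an except block. A minimal sketch:
import logging

logger = logging.getLogger('shorthand_demo')
logger.addHandler(logging.StreamHandler())

try:
    1 / 0
except ZeroDivisionError:
    # Equivalent to logger.error('Division failed', exc_info=True)
    logger.exception('Division failed')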
# Creating a Reusable Log Configuration
Copying logging setup code into every file is tedious and error prone. Let's create a configuration function that you can import anywhere in your project.
# logger_config.py
import logging
import os
from datetime import datetime

def setup_logger(name, log_dir="logs", level=logging.INFO):
    """
    Create a configured logger instance

    Args:
        name: Logger name (usually __name__ from calling module)
        log_dir: Directory to store log files
        level: Minimum logging level

    Returns:
        Configured logger instance
    """
    # Create logs directory if it doesn't exist
    if not os.path.exists(log_dir):
        os.makedirs(log_dir)

    logger = logging.getLogger(name)

    # Avoid adding handlers multiple times
    if logger.handlers:
        return logger

    logger.setLevel(level)

    # Console handler - INFO and above
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.INFO)
    console_format = logging.Formatter("%(levelname)s - %(name)s - %(message)s")
    console_handler.setFormatter(console_format)

    # File handler - everything
    log_filename = os.path.join(
        log_dir, f"{name.replace('.', '_')}_{datetime.now().strftime('%Y%m%d')}.log"
    )
    file_handler = logging.FileHandler(log_filename)
    file_handler.setLevel(logging.DEBUG)
    file_format = logging.Formatter(
        "%(asctime)s - %(name)s - %(levelname)s - %(funcName)s:%(lineno)d - %(message)s"
    )
    file_handler.setFormatter(file_format)

    logger.addHandler(console_handler)
    logger.addHandler(file_handler)

    return logger
Now that you have logger_config set up, you can use it in your Python scripts like this:
from logger_config import setup_logger

logger = setup_logger(__name__)

def calculate_discount(price, discount_percent):
    logger.debug(f'Calculating discount: {price} * {discount_percent}%')
    if discount_percent < 0 or discount_percent > 100:
        logger.warning(f'Invalid discount percentage: {discount_percent}')
        discount_percent = max(0, min(100, discount_percent))
    discount = price * (discount_percent / 100)
    final_price = price - discount
    logger.info(f'Applied {discount_percent}% discount: ${price} -> ${final_price}')
    return final_price

calculate_discount(100, 20)
calculate_discount(100, 150)
This setup function handles a few important things. First, it creates the log directory when needed, preventing crashes from missing directories.
The function checks whether handlers already exist before adding new ones. Without this check, calling setup_logger multiple times would create duplicate log entries.
Log file names include the current date. This keeps individual files from growing indefinitely and makes it easy to find logs from a specific day.
The file handler records more detail than the console handler, including function names and line numbers. This is especially useful when debugging but would clutter console output.
Using __name__ as the logger name creates a hierarchy that mirrors your module structure. This lets you control logging for specific parts of your application independently.
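If you'd rather configure logging declaratively, the standard library's logging.config.dictConfig accepts a dictionary schema. Here's a minimal sketch that mirrors the console half of setup_logger (the 'my_app' logger name is illustrative):
import logging
import logging.config

LOGGING_CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'console': {'format': '%(levelname)s - %(name)s - %(message)s'},
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'INFO',
            'formatter': 'console',
        },
    },
    'loggers': {
        'my_app': {'level': 'DEBUG', 'handlers': ['console']},
    },
}

logging.config.dictConfig(LOGGING_CONFIG)
logging.getLogger('my_app').info('Configured via dictConfig')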
# Adding Context to Your Logs
Plain text logs are good for simple applications, but structured logs with context make debugging much easier. Let's add contextual information to our logs.
import json
import logging
from datetime import datetime, timezone

class ContextLogger:
    """Logger wrapper that adds contextual information to all log messages"""

    def __init__(self, name, context=None):
        self.logger = logging.getLogger(name)
        self.context = context or {}
        handler = logging.StreamHandler()
        formatter = logging.Formatter('%(message)s')
        handler.setFormatter(formatter)
        # Check if handler already exists to avoid duplicate handlers
        if not any(
            isinstance(h, logging.StreamHandler)
            and h.formatter is not None
            and h.formatter._fmt == '%(message)s'
            for h in self.logger.handlers
        ):
            self.logger.addHandler(handler)
        self.logger.setLevel(logging.DEBUG)

    def _format_message(self, message, level, extra_context=None):
        """Format message with context as JSON"""
        log_data = {
            'timestamp': datetime.now(timezone.utc).isoformat(),
            'level': level,
            'message': message,
            'context': {**self.context, **(extra_context or {})}
        }
        return json.dumps(log_data)

    def debug(self, message, **kwargs):
        self.logger.debug(self._format_message(message, 'DEBUG', kwargs))

    def info(self, message, **kwargs):
        self.logger.info(self._format_message(message, 'INFO', kwargs))

    def warning(self, message, **kwargs):
        self.logger.warning(self._format_message(message, 'WARNING', kwargs))

    def error(self, message, **kwargs):
        self.logger.error(self._format_message(message, 'ERROR', kwargs))
You can use the ContextLogger like this:
def process_order(order_id, user_id):
    logger = ContextLogger(__name__, context={
        'order_id': order_id,
        'user_id': user_id
    })
    logger.info('Order processing started')
    try:
        items = fetch_order_items(order_id)
        logger.info('Items fetched', item_count=len(items))
        total = calculate_total(items)
        logger.info('Total calculated', total=total)
        if total > 1000:
            logger.warning('High value order', total=total, flagged=True)
        return True
    except Exception as e:
        logger.error('Order processing failed', error=str(e))
        return False

def fetch_order_items(order_id):
    return [{'id': 1, 'price': 50}, {'id': 2, 'price': 75}]

def calculate_total(items):
    return sum(item['price'] for item in items)

process_order('ORD-12345', 'USER-789')
The ContextLogger wrapper does something useful: it automatically includes context in every log message. The order_id and user_id appear in every log line without being repeated in each call.
The JSON format makes these logs easy to parse and search.
The **kwargs on each method lets you attach extra context to individual messages. The global context (order_id, user_id) is merged with the local context (item_count, total) automatically.
This pattern is especially useful in web applications, where you want request IDs, user IDs, or session IDs attached to every log message a request produces.
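If you prefer to stay within the standard library, logging.LoggerAdapter achieves something similar by attaching an extra dict to every record, which the formatter can then reference. A small sketch (the request_id field and logger name are illustrative):
import logging

base = logging.getLogger('adapter_demo')
handler = logging.StreamHandler()
# The formatter can reference keys supplied via the adapter's extra dict
handler.setFormatter(logging.Formatter('%(levelname)s [%(request_id)s] %(message)s'))
base.addHandler(handler)
base.setLevel(logging.INFO)

adapter = logging.LoggerAdapter(base, {'request_id': 'REQ-42'})
adapter.info('Handling request')  # INFO [REQ-42] Handling request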
# Rotating Log Files to Prevent Disk Space Problems
Log files grow rapidly in production. Without rotation, they will eventually fill up your disk. Here's how to use automatic log rotation.
import logging
from logging.handlers import RotatingFileHandler, TimedRotatingFileHandler

def setup_rotating_logger(name):
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)

    # Size-based rotation: rotate when file reaches 10MB
    size_handler = RotatingFileHandler(
        'app_size_rotation.log',
        maxBytes=10 * 1024 * 1024,  # 10 MB
        backupCount=5  # Keep 5 old files
    )
    size_handler.setLevel(logging.DEBUG)

    # Time-based rotation: rotate daily at midnight
    time_handler = TimedRotatingFileHandler(
        'app_time_rotation.log',
        when='midnight',
        interval=1,
        backupCount=7  # Keep 7 days
    )
    time_handler.setLevel(logging.INFO)

    formatter = logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    )
    size_handler.setFormatter(formatter)
    time_handler.setFormatter(formatter)

    logger.addHandler(size_handler)
    logger.addHandler(time_handler)
    return logger

logger = setup_rotating_logger('rotating_app')
Now let's generate enough log messages to exercise the rotation:
for i in range(1000):
    logger.info(f'Processing record {i}')
    logger.debug(f'Record {i} details: completed in {i * 0.1}ms')
RotatingFileHandler rotates logs based on file size. When the log file reaches 10 MB (specified in bytes), it is renamed to app_size_rotation.log.1 and a fresh app_size_rotation.log starts. A backupCount of 5 means the five most recent backup files are kept; anything older is deleted.
TimedRotatingFileHandler rotates based on time periods. The 'midnight' setting creates a new log file every day at midnight. You can also use 'H' for hourly, 'D' for daily (measured from when logging started), or 'W0' for weekly rotation on Monday.
The interval parameter works together with when. With when='H' and interval=6, logs rotate every 6 hours.
These handlers are essential in production environments. Without them, your application can fail when the disk fills up with logs.
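To make the when/interval combination concrete, here's a sketch of a handler that rotates every six hours and keeps four backups, roughly a day of logs (the file name is illustrative):
import logging
from logging.handlers import TimedRotatingFileHandler

logger = logging.getLogger('six_hour_app')
handler = TimedRotatingFileHandler(
    'app_six_hourly.log',
    when='H',        # rotate on an hourly schedule...
    interval=6,      # ...every 6 hours
    backupCount=4,   # keep 4 old files (about one day)
)
handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info('Timed rotation configured')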
# Logging in Different Environments
Your logging needs differ between development, staging, and production. Here's how to set up logging that suits each environment.
import logging
import logging.handlers
import os

def configure_environment_logger(app_name):
    """Configure logger based on environment"""
    environment = os.getenv('APP_ENV', 'development')
    logger = logging.getLogger(app_name)

    # Clear existing handlers
    logger.handlers = []

    if environment == 'development':
        # Development: verbose console output
        logger.setLevel(logging.DEBUG)
        handler = logging.StreamHandler()
        handler.setLevel(logging.DEBUG)
        formatter = logging.Formatter(
            '%(levelname)s - %(name)s - %(funcName)s:%(lineno)d - %(message)s'
        )
        handler.setFormatter(formatter)
        logger.addHandler(handler)
    elif environment == 'staging':
        # Staging: detailed file logs + important console messages
        logger.setLevel(logging.DEBUG)
        file_handler = logging.FileHandler('staging.log')
        file_handler.setLevel(logging.DEBUG)
        file_formatter = logging.Formatter(
            '%(asctime)s - %(name)s - %(levelname)s - %(funcName)s - %(message)s'
        )
        file_handler.setFormatter(file_formatter)
        console_handler = logging.StreamHandler()
        console_handler.setLevel(logging.WARNING)
        console_formatter = logging.Formatter('%(levelname)s: %(message)s')
        console_handler.setFormatter(console_formatter)
        logger.addHandler(file_handler)
        logger.addHandler(console_handler)
    elif environment == 'production':
        # Production: structured logs, errors only to console
        logger.setLevel(logging.INFO)
        file_handler = logging.handlers.RotatingFileHandler(
            'production.log',
            maxBytes=50 * 1024 * 1024,  # 50 MB
            backupCount=10
        )
        file_handler.setLevel(logging.INFO)
        file_formatter = logging.Formatter(
            '{"timestamp": "%(asctime)s", "level": "%(levelname)s", '
            '"logger": "%(name)s", "message": "%(message)s"}'
        )
        file_handler.setFormatter(file_formatter)
        console_handler = logging.StreamHandler()
        console_handler.setLevel(logging.ERROR)
        console_formatter = logging.Formatter('%(levelname)s: %(message)s')
        console_handler.setFormatter(console_formatter)
        logger.addHandler(file_handler)
        logger.addHandler(console_handler)

    return logger
This environment-based configuration treats each environment differently. Development prints everything to the console with detailed information, including function names and line numbers, which makes debugging faster.
Staging balances the two. It writes detailed logs to a file for investigation but only shows warnings and errors on the console to cut down on noise.
Production focuses on performance and structure. It logs only INFO level and above to files, uses JSON formatting for easy parsing, and relies on rotation to manage disk space. Console output is limited to errors.
# Set environment variable (normally done by deployment system)
os.environ['APP_ENV'] = 'production'

logger = configure_environment_logger('my_application')
logger.debug("This debug message won't appear in production")
logger.info('User logged in successfully')
logger.error('Failed to process payment')
The environment is selected by the APP_ENV environment variable. Your deployment system (Docker, Kubernetes, or another cloud platform) typically sets this variable for you.
Note how we clear existing handlers before configuring new ones. This prevents duplicate handlers if the function is called more than once during the application's life cycle.
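To see why clearing matters, here's a quick sketch of the duplicate output you get when a logger accumulates handlers (the logger name is illustrative):
import logging

dup = logging.getLogger('dup_demo')
dup.addHandler(logging.StreamHandler())
dup.addHandler(logging.StreamHandler())  # configured twice by mistake
dup.warning('appears twice')             # each handler emits the record once

dup.handlers = []                        # same reset used above
dup.addHandler(logging.StreamHandler())
dup.warning('appears once')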
# Wrapping up
Proper logging makes the difference between identifying problems quickly and spending hours guessing what went wrong. Start with basic logging at appropriate severity levels, add structured context for searchable logs, and configure rotation to prevent disk space issues.
The patterns shown here work for applications of any size. Start simple with basic logging, add structured logging when you need better searchability, and use environment-specific configuration when deploying to production.
Happy logging!
Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she's working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.



