Python Collections Module: Counter, defaultdict, deque, namedtuple Guide
Python's built-in data structures -- lists, dicts, tuples, sets -- handle most tasks. But once your code grows beyond toy examples, you start hitting their limits. Counting elements requires manual dictionary loops. Grouping data means littering your code with if key not in dict checks. Using a list as a queue punishes you with O(n) pops from the front. Representing structured records with plain tuples turns field access into an unreadable index guessing game. Each workaround is small on its own, but they pile up fast, making code harder to read, slower to run, and more likely to break.
The collections module in Python's standard library solves these problems with purpose-built container types. Counter counts elements in one call. defaultdict eliminates KeyError with automatic default values. deque gives you O(1) operations on both ends of a sequence. namedtuple adds field names to tuples without the overhead of a full class. OrderedDict and ChainMap handle ordering and layered lookup patterns that plain dicts cannot express cleanly.
This guide covers every major class in the collections module with working code, performance analysis, and real-world patterns. Whether you are processing log files, building caches, managing configuration layers, or structuring data pipelines, these containers will make your code shorter, faster, and more correct.
Overview of the collections Module
The collections module provides specialized container datatypes that extend Python's general-purpose built-in containers.
import collections
# See all available classes
print([name for name in dir(collections) if not name.startswith('_')])
# ['ChainMap', 'Counter', 'OrderedDict', 'UserDict', 'UserList',
# 'UserString', 'abc', 'defaultdict', 'deque', 'namedtuple']

| Class | Purpose | Replaces |
|---|---|---|
| Counter | Count hashable objects | Manual dict counting loops |
| defaultdict | Dict with automatic default values | dict.setdefault(), if key not in checks |
| deque | Double-ended queue with O(1) ends | list used as queue/stack |
| namedtuple | Tuple with named fields | Plain tuples, simple data classes |
| OrderedDict | Dict that remembers insertion order | dict (pre-3.7), ordered operations |
| ChainMap | Layered dictionary lookups | Manual dict merging |
Counter: Counting Elements
Counter is a dict subclass for counting hashable objects. It maps elements to their counts and provides methods for frequency analysis.
Creating a Counter
from collections import Counter
# From an iterable
words = ['apple', 'banana', 'apple', 'cherry', 'banana', 'apple']
word_count = Counter(words)
print(word_count)
# Counter({'apple': 3, 'banana': 2, 'cherry': 1})
# From a string
letter_count = Counter('mississippi')
print(letter_count)
# Counter({'i': 4, 's': 4, 'p': 2, 'm': 1})
# From a dictionary
inventory = Counter({'shirts': 25, 'pants': 15, 'hats': 10})
# From keyword arguments
stock = Counter(laptops=5, monitors=12)
most_common() and Frequency Ranking
from collections import Counter
text = "to be or not to be that is the question"
words = Counter(text.split())
# Get the 3 most common words
print(words.most_common(3))
# [('to', 2), ('be', 2), ('or', 1)]
# Get all elements sorted by frequency
print(words.most_common())
# [('to', 2), ('be', 2), ('or', 1), ('not', 1), ('that', 1), ('is', 1), ('the', 1), ('question', 1)]
# Least common: reverse the list or slice from the end
print(words.most_common()[-3:])
# [('is', 1), ('the', 1), ('question', 1)]
Counter Arithmetic
Counters support addition, subtraction, intersection, and union -- treating them as multisets.
from collections import Counter
a = Counter(x=4, y=2, z=1)
b = Counter(x=1, y=3, z=5)
# Addition: combine counts
print(a + b) # Counter({'z': 6, 'x': 5, 'y': 5})
# Subtraction: drops zero and negative results
print(a - b) # Counter({'x': 3})
# Intersection (min of each)
print(a & b) # Counter({'y': 2, 'x': 1, 'z': 1})
# Union (max of each)
print(a | b) # Counter({'z': 5, 'x': 4, 'y': 3})
Practical Counter Patterns
from collections import Counter
# Word frequency analysis
log_entries = [
"ERROR: disk full",
"WARNING: high memory",
"ERROR: disk full",
"ERROR: timeout",
"WARNING: high memory",
"ERROR: disk full",
"INFO: backup complete",
]
error_types = Counter(entry.split(":")[0].strip() for entry in log_entries)
print(error_types)
# Counter({'ERROR': 4, 'WARNING': 2, 'INFO': 1})
# Find unique elements (count == 1)
data = [1, 2, 3, 2, 1, 4, 5, 4]
unique = [item for item, count in Counter(data).items() if count == 1]
print(unique) # [3, 5]
# Check if one collection is a subset of another (anagram check)
def is_anagram(word1, word2):
return Counter(word1.lower()) == Counter(word2.lower())
print(is_anagram("listen", "silent")) # True
print(is_anagram("hello", "world")) # False
For a deep dive into Counter, see our dedicated Python Counter guide.
defaultdict: Automatic Default Values
defaultdict is a dict subclass that calls a factory function to supply default values for missing keys, eliminating KeyError and the need for defensive checks.
Factory Functions
from collections import defaultdict
# int factory: default is 0
counter = defaultdict(int)
counter['apples'] += 1
counter['oranges'] += 3
print(dict(counter)) # {'apples': 1, 'oranges': 3}
# list factory: default is []
groups = defaultdict(list)
pairs = [('fruit', 'apple'), ('veggie', 'carrot'), ('fruit', 'banana'), ('veggie', 'pea')]
for category, item in pairs:
groups[category].append(item)
print(dict(groups))
# {'fruit': ['apple', 'banana'], 'veggie': ['carrot', 'pea']}
# set factory: default is set()
index = defaultdict(set)
words = [('file1', 'python'), ('file2', 'python'), ('file1', 'java'), ('file3', 'python')]
for filename, lang in words:
index[lang].add(filename)
print(dict(index))
# {'python': {'file1', 'file2', 'file3'}, 'java': {'file1'}} (set element order may vary)
The Grouping Pattern
Grouping related data is the single most common use of defaultdict(list). Compare the manual approach:
from collections import defaultdict
students = [
('Math', 'Alice'), ('Science', 'Bob'), ('Math', 'Charlie'),
('Science', 'Diana'), ('Math', 'Eve'), ('History', 'Frank'),
]
# Without defaultdict -- verbose and error-prone
groups_manual = {}
for subject, name in students:
if subject not in groups_manual:
groups_manual[subject] = []
groups_manual[subject].append(name)
# With defaultdict -- clean and direct
groups = defaultdict(list)
for subject, name in students:
groups[subject].append(name)
print(dict(groups))
# {'Math': ['Alice', 'Charlie', 'Eve'], 'Science': ['Bob', 'Diana'], 'History': ['Frank']}
Nested defaultdict
Build multi-level data structures without initializing each level manually.
from collections import defaultdict
# Two-level nested defaultdict
def nested_dict():
return defaultdict(int)
sales = defaultdict(nested_dict)
sales['2025']['Q1'] = 150000
sales['2025']['Q2'] = 175000
sales['2026']['Q1'] = 200000
print(sales['2025']['Q1']) # 150000
print(sales['2024']['Q3']) # 0 (auto-created, no KeyError)
# Arbitrary depth nesting with a recursive factory
def deep_dict():
return defaultdict(deep_dict)
config = deep_dict()
config['database']['primary']['host'] = 'localhost'
config['database']['primary']['port'] = 5432
config['database']['replica']['host'] = 'replica.local'
print(config['database']['primary']['host']) # localhost
Custom Factory Functions
from collections import defaultdict
# Lambda for custom defaults
scores = defaultdict(lambda: 100) # Every student starts with 100
scores['Alice'] -= 5
scores['Bob'] -= 10
print(scores['Charlie']) # 100 (new student gets default)
print(dict(scores)) # {'Alice': 95, 'Bob': 90, 'Charlie': 100}
# Named function for complex defaults
def default_user():
return {'role': 'viewer', 'active': True, 'login_count': 0}
users = defaultdict(default_user)
users['alice']['role'] = 'admin'
print(users['bob']) # {'role': 'viewer', 'active': True, 'login_count': 0}
For more patterns, see our Python defaultdict guide.
deque: Double-Ended Queue
deque (pronounced "deck") provides O(1) append and pop operations from both ends. Lists are O(n) for pop(0) and insert(0, x) because all elements must shift. For any workload that touches both ends of a sequence, deque is the correct choice.
Core Operations
from collections import deque
d = deque([1, 2, 3, 4, 5])
# O(1) operations on both ends
d.append(6) # Add to right: [1, 2, 3, 4, 5, 6]
d.appendleft(0) # Add to left: [0, 1, 2, 3, 4, 5, 6]
right = d.pop() # Remove from right: 6
left = d.popleft() # Remove from left: 0
print(d) # deque([1, 2, 3, 4, 5])
# Extend from both sides
d.extend([6, 7]) # Right extend: [1, 2, 3, 4, 5, 6, 7]
d.extendleft([-1, 0]) # Left extend (reversed): [0, -1, 1, 2, 3, 4, 5, 6, 7]
Bounded Deques with maxlen
When maxlen is set, adding elements beyond the limit automatically discards items from the opposite end. This is perfect for sliding windows and caches.
from collections import deque
# Keep only the last 5 items
recent = deque(maxlen=5)
for i in range(10):
recent.append(i)
print(recent) # deque([5, 6, 7, 8, 9], maxlen=5)
# Sliding window average
def moving_average(iterable, window_size):
window = deque(maxlen=window_size)
for value in iterable:
window.append(value)
if len(window) == window_size:
yield sum(window) / window_size
data = [10, 20, 30, 40, 50, 60, 70]
print(list(moving_average(data, 3)))
# [20.0, 30.0, 40.0, 50.0, 60.0]
Rotation
rotate(n) shifts elements n steps to the right. Negative values rotate left.
from collections import deque
d = deque([1, 2, 3, 4, 5])
d.rotate(2) # Rotate right by 2
print(d) # deque([4, 5, 1, 2, 3])
d.rotate(-3) # Rotate left by 3
print(d) # deque([2, 3, 4, 5, 1])
deque vs list Performance
from collections import deque
import time
# Benchmark: append/pop from left side
n = 100_000
# List: O(n) for each insert at position 0
start = time.perf_counter()
lst = []
for i in range(n):
lst.insert(0, i)
list_time = time.perf_counter() - start
# Deque: O(1) for appendleft
start = time.perf_counter()
dq = deque()
for i in range(n):
dq.appendleft(i)
deque_time = time.perf_counter() - start
print(f"List insert(0, x): {list_time:.4f}s")
print(f"Deque appendleft: {deque_time:.4f}s")
print(f"Deque is {list_time / deque_time:.0f}x faster")
# Typical output:
# List insert(0, x): 1.2340s
# Deque appendleft: 0.0065s
# Deque is 190x faster

| Operation | list | deque |
|---|---|---|
| append(x) (right) | O(1) amortized | O(1) |
| pop() (right) | O(1) | O(1) |
| insert(0, x) / appendleft(x) | O(n) | O(1) |
| pop(0) / popleft() | O(n) | O(1) |
| access by index [i] | O(1) | O(n) in the middle, O(1) at the ends |
| Memory per element | Lower | Slightly higher |
Use deque when you need fast operations on both ends. Use list when you need fast random access by index.
For the full guide, see Python deque.
namedtuple: Tuples with Named Fields
namedtuple creates tuple subclasses with named fields, making code self-documenting without the overhead of defining a full class.
Creating namedtuples
from collections import namedtuple
# Define a type
Point = namedtuple('Point', ['x', 'y'])
p = Point(3, 4)
# Access by name or index
print(p.x) # 3
print(p[1]) # 4
print(p) # Point(x=3, y=4)
# Alternative field definition styles
Color = namedtuple('Color', 'red green blue') # Space-separated string
Config = namedtuple('Config', 'host, port, database') # Comma-separated string
Why Use namedtuple Over Plain Tuples?
from collections import namedtuple
# Plain tuple: which index is what?
employee_tuple = ('Alice', 'Engineering', 95000, True)
print(employee_tuple[2]) # 95000 -- but what does index 2 mean?
# namedtuple: self-documenting
Employee = namedtuple('Employee', 'name department salary active')
employee = Employee('Alice', 'Engineering', 95000, True)
print(employee.salary) # 95000 -- immediately clear
print(employee.department) # Engineering
Key Methods
from collections import namedtuple
Employee = namedtuple('Employee', 'name department salary')
emp = Employee('Alice', 'Engineering', 95000)
# _replace: create a new instance with some fields changed (immutable)
promoted = emp._replace(salary=110000)
print(promoted) # Employee(name='Alice', department='Engineering', salary=110000)
print(emp) # Employee(name='Alice', department='Engineering', salary=95000) -- unchanged
# _asdict: convert to a regular dict (an OrderedDict before Python 3.8)
print(emp._asdict())
# {'name': 'Alice', 'department': 'Engineering', 'salary': 95000}
# _fields: get field names
print(Employee._fields) # ('name', 'department', 'salary')
# _make: create from an iterable
data = ['Bob', 'Marketing', 85000]
emp2 = Employee._make(data)
print(emp2) # Employee(name='Bob', department='Marketing', salary=85000)
Default Values
from collections import namedtuple
# defaults parameter (Python 3.7+); defaults apply to the rightmost fields
Connection = namedtuple('Connection', 'host port timeout', defaults=[5432, 30])
conn1 = Connection('localhost') # port=5432, timeout=30
conn2 = Connection('db.example.com', 3306) # timeout=30
conn3 = Connection('db.example.com', 3306, 60)
print(conn1) # Connection(host='localhost', port=5432, timeout=30)
print(conn2) # Connection(host='db.example.com', port=3306, timeout=30)
typing.NamedTuple Alternative
For type annotations and more class-like syntax, use typing.NamedTuple:
from typing import NamedTuple
class Point(NamedTuple):
x: float
y: float
label: str = "origin"
p = Point(3.0, 4.0, "A")
print(p.x, p.label) # 3.0 A
# Still a tuple -- supports unpacking, indexing, iteration
x, y, label = p
print(f"({x}, {y})") # (3.0, 4.0)
namedtuple vs dataclass
For a full guide on the @dataclass decorator, see Python dataclasses.
| Feature | namedtuple | dataclass |
|---|---|---|
| Immutable by default | Yes | No (frozen=True required) |
| Memory footprint | Same as tuple (small) | Larger (regular class) |
| Iteration/unpacking | Yes (it is a tuple) | No (unless you add methods) |
| Type annotations | Via typing.NamedTuple | Built-in |
| Methods/properties | Requires subclassing | Direct support |
| Inheritance | Limited | Full class inheritance |
| Best for | Lightweight data records | Complex mutable objects |
OrderedDict: Ordered Dictionary Operations
Since Python 3.7, regular dict preserves insertion order. So when do you still need OrderedDict?
When OrderedDict Still Matters
from collections import OrderedDict
# 1. Equality considers order
d1 = {'a': 1, 'b': 2}
d2 = {'b': 2, 'a': 1}
print(d1 == d2) # True -- regular dicts ignore order in comparison
od1 = OrderedDict([('a', 1), ('b', 2)])
od2 = OrderedDict([('b', 2), ('a', 1)])
print(od1 == od2) # False -- OrderedDict considers order
# 2. move_to_end() for reordering
od = OrderedDict([('a', 1), ('b', 2), ('c', 3)])
od.move_to_end('a') # Move 'a' to the end
print(list(od.keys())) # ['b', 'c', 'a']
od.move_to_end('c', last=False) # Move 'c' to the beginning
print(list(od.keys())) # ['c', 'b', 'a']
Building an LRU Cache with OrderedDict
from collections import OrderedDict
class LRUCache:
def __init__(self, capacity):
self.cache = OrderedDict()
self.capacity = capacity
def get(self, key):
if key not in self.cache:
return -1
self.cache.move_to_end(key) # Mark as recently used
return self.cache[key]
def put(self, key, value):
if key in self.cache:
self.cache.move_to_end(key)
self.cache[key] = value
if len(self.cache) > self.capacity:
self.cache.popitem(last=False) # Remove oldest
cache = LRUCache(3)
cache.put('a', 1)
cache.put('b', 2)
cache.put('c', 3)
cache.get('a') # Access 'a', moves it to end
cache.put('d', 4) # Evicts 'b' (least recently used)
print(list(cache.cache.keys())) # ['c', 'a', 'd']
ChainMap: Layered Dictionary Lookups
ChainMap groups multiple dictionaries into a single view for lookups. It searches each dictionary in order, returning the first match. This is ideal for configuration layering, scoped variable lookups, and context management.
Basic Usage
from collections import ChainMap
defaults = {'theme': 'light', 'language': 'en', 'timeout': 30}
user_prefs = {'theme': 'dark'}
session = {'language': 'fr'}
config = ChainMap(session, user_prefs, defaults)
# Lookup searches session -> user_prefs -> defaults
print(config['theme']) # 'dark' (from user_prefs)
print(config['language']) # 'fr' (from session)
print(config['timeout']) # 30 (from defaults)
Configuration Layering
from collections import ChainMap
import os
# Real-world config pattern: CLI args > env vars > config file > defaults
defaults = {
'debug': False,
'log_level': 'WARNING',
'port': 8080,
'host': '0.0.0.0',
}
config_file = {
'log_level': 'INFO',
'port': 9090,
}
env_vars = {
k.lower(): v for k, v in os.environ.items()
if k.lower() in defaults
}
cli_args = {'debug': True} # Parsed from argparse
config = ChainMap(cli_args, env_vars, config_file, defaults)
print(config['debug']) # True (from cli_args)
print(config['log_level']) # 'INFO' (from config_file)
print(config['host']) # '0.0.0.0' (from defaults)
Scoped Contexts with new_child()
from collections import ChainMap
# Simulating variable scoping (like nested function scopes)
global_scope = {'x': 1, 'y': 2}
local_scope = ChainMap(global_scope)
# Enter a new scope
inner_scope = local_scope.new_child()
inner_scope['x'] = 10 # Shadows global x
inner_scope['z'] = 30 # New local variable
print(inner_scope['x']) # 10 (local)
print(inner_scope['y']) # 2 (falls through to global)
print(inner_scope['z']) # 30 (local)
# Exit scope -- original is unchanged
print(local_scope['x']) # 1 (global still intact)
Comparison of All Collection Types
| Type | Base Class | Mutable | Use Case | Key Advantage |
|---|---|---|---|---|
| Counter | dict | Yes | Counting elements | most_common(), multiset arithmetic |
| defaultdict | dict | Yes | Auto-initialize missing keys | No KeyError, factory functions |
| deque | -- | Yes | Double-ended queue | O(1) on both ends, maxlen |
| namedtuple | tuple | No | Structured data records | Named field access, lightweight |
| OrderedDict | dict | Yes | Order-sensitive dicts | move_to_end(), order equality |
| ChainMap | -- | Yes | Layered lookups | Config layering, scoped contexts |
Performance Benchmarks
Counter vs Manual Counting
from collections import Counter, defaultdict
import time
data = list(range(1000)) * 1000 # 1 million items, 1000 unique
# Method 1: Counter
start = time.perf_counter()
c = Counter(data)
counter_time = time.perf_counter() - start
# Method 2: defaultdict(int)
start = time.perf_counter()
dd = defaultdict(int)
for item in data:
dd[item] += 1
dd_time = time.perf_counter() - start
# Method 3: Manual dict
start = time.perf_counter()
manual = {}
for item in data:
manual[item] = manual.get(item, 0) + 1
manual_time = time.perf_counter() - start
print(f"Counter: {counter_time:.4f}s")
print(f"defaultdict(int):{dd_time:.4f}s")
print(f"dict.get(): {manual_time:.4f}s")
# Typical: Counter ~0.03s, defaultdict ~0.07s, dict.get() ~0.09s
deque vs list for Queue Operations
from collections import deque
import time
n = 100_000
# Simulate a FIFO queue: append right, pop left
# List
start = time.perf_counter()
q = list(range(n))
while q:
q.pop(0)
list_queue_time = time.perf_counter() - start
# Deque
start = time.perf_counter()
q = deque(range(n))
while q:
q.popleft()
deque_queue_time = time.perf_counter() - start
print(f"List pop(0): {list_queue_time:.4f}s")
print(f"Deque popleft(): {deque_queue_time:.4f}s")
print(f"Deque is {list_queue_time / deque_queue_time:.0f}x faster")
# Typical: List ~2.5s, Deque ~0.004s -> ~600x faster
Real-World Examples
Log Analysis with Counter
from collections import Counter
from datetime import datetime
# Parse and analyze server logs
log_lines = [
"2026-02-18 10:15:03 GET /api/users 200",
"2026-02-18 10:15:04 POST /api/login 401",
"2026-02-18 10:15:05 GET /api/users 200",
"2026-02-18 10:15:06 GET /api/products 500",
"2026-02-18 10:15:07 POST /api/login 200",
"2026-02-18 10:15:08 GET /api/users 200",
"2026-02-18 10:15:09 GET /api/products 500",
"2026-02-18 10:15:10 POST /api/login 401",
]
# Count status codes
status_codes = Counter(line.split()[-1] for line in log_lines)
print("Status codes:", status_codes.most_common())
# [('200', 4), ('401', 2), ('500', 2)]
# Count endpoints
endpoints = Counter(line.split()[3] for line in log_lines)
print("Top endpoints:", endpoints.most_common(2))
# [('/api/users', 3), ('/api/login', 3)]
# Count error endpoints (status >= 400)
errors = Counter(
line.split()[3] for line in log_lines
if int(line.split()[-1]) >= 400
)
print("Error endpoints:", errors)
# Counter({'/api/login': 2, '/api/products': 2})
Configuration Management with ChainMap
from collections import ChainMap
import json
# Multi-layer config system for a web application
def load_config(config_path=None, cli_overrides=None):
# Layer 1: Hard-coded defaults
defaults = {
'host': '127.0.0.1',
'port': 8000,
'debug': False,
'db_pool_size': 5,
'log_level': 'WARNING',
'cors_origins': ['http://localhost:3000'],
}
# Layer 2: Config file
file_config = {}
if config_path:
with open(config_path) as f:
file_config = json.load(f)
# Layer 3: CLI overrides (highest priority)
cli = cli_overrides or {}
# ChainMap searches cli -> file_config -> defaults
return ChainMap(cli, file_config, defaults)
# Usage
config = load_config(cli_overrides={'debug': True, 'port': 9000})
print(config['debug']) # True (CLI override)
print(config['port']) # 9000 (CLI override)
print(config['db_pool_size']) # 5 (default)
print(config['log_level']) # WARNING (default)
Recent Items Cache with deque
from collections import deque
class RecentItemsTracker:
"""Track the N most recent unique items."""
def __init__(self, max_items=10):
self.items = deque(maxlen=max_items)
self.seen = set()
def add(self, item):
if item in self.seen:
# Move to front by removing and re-adding
self.items.remove(item)
self.items.append(item)
else:
if len(self.items) == self.items.maxlen:
# Remove the oldest item from the set too
oldest = self.items[0]
self.seen.discard(oldest)
self.items.append(item)
self.seen.add(item)
def get_recent(self):
return list(reversed(self.items))
# Track recently viewed products
tracker = RecentItemsTracker(max_items=5)
for product in ['shoes', 'shirt', 'hat', 'shoes', 'jacket', 'belt', 'hat']:
tracker.add(product)
print(tracker.get_recent())
# ['hat', 'belt', 'jacket', 'shoes', 'shirt']
Data Pipeline with namedtuple
from collections import namedtuple, Counter, defaultdict
# Define structured records
Transaction = namedtuple('Transaction', 'id customer product amount date')
transactions = [
Transaction(1, 'Alice', 'Widget', 29.99, '2026-02-01'),
Transaction(2, 'Bob', 'Gadget', 49.99, '2026-02-01'),
Transaction(3, 'Alice', 'Widget', 29.99, '2026-02-03'),
Transaction(4, 'Charlie', 'Gadget', 49.99, '2026-02-05'),
Transaction(5, 'Alice', 'Gizmo', 19.99, '2026-02-07'),
Transaction(6, 'Bob', 'Widget', 29.99, '2026-02-08'),
]
# Most popular products
product_count = Counter(t.product for t in transactions)
print("Popular products:", product_count.most_common())
# [('Widget', 3), ('Gadget', 2), ('Gizmo', 1)]
# Revenue by customer
revenue = defaultdict(float)
for t in transactions:
revenue[t.customer] += t.amount
print("Revenue:", dict(revenue))
# {'Alice': 79.97, 'Bob': 79.98, 'Charlie': 49.99}
# Convert to DataFrame for visualization
import pandas as pd
df = pd.DataFrame(transactions, columns=Transaction._fields)
print(df.groupby('customer')['amount'].sum())
Visualizing Collection Data with PyGWalker
After processing data with Counter, defaultdict, or namedtuple, you often want to visualize the results. PyGWalker turns any pandas DataFrame into a Tableau-style interactive visualization interface directly in Jupyter notebooks:
from collections import Counter
import pandas as pd
import pygwalker as pyg
# Process data with collections
log_data = ["ERROR", "WARNING", "ERROR", "INFO", "ERROR", "WARNING", "INFO", "INFO"]
counts = Counter(log_data)
# Convert to DataFrame
df = pd.DataFrame(counts.items(), columns=['Level', 'Count'])
# Launch interactive visualization
walker = pyg.walk(df)
This lets you drag and drop fields, create charts, filter data, and explore patterns interactively -- without writing any visualization code. It is especially useful when you have large datasets processed through Counter or defaultdict grouping and want to explore the distributions visually.
For running these collection experiments interactively, RunCell provides an AI-powered Jupyter environment where you can iterate on data processing pipelines with instant feedback.
Combining Multiple Collection Types
The real power of collections shows when you combine types in a single pipeline.
from collections import Counter, defaultdict, namedtuple, deque
# Named record type
LogEntry = namedtuple('LogEntry', 'timestamp level message')
# Simulated log stream
log_stream = deque([
LogEntry('10:01', 'ERROR', 'Connection timeout'),
LogEntry('10:02', 'INFO', 'Request processed'),
LogEntry('10:03', 'ERROR', 'Connection timeout'),
LogEntry('10:04', 'WARNING', 'High memory'),
LogEntry('10:05', 'ERROR', 'Disk full'),
LogEntry('10:06', 'INFO', 'Request processed'),
LogEntry('10:07', 'ERROR', 'Connection timeout'),
], maxlen=100)
# Count error types
error_counts = Counter(
entry.message for entry in log_stream if entry.level == 'ERROR'
)
print("Error types:", error_counts.most_common())
# [('Connection timeout', 3), ('Disk full', 1)]
# Group entries by level
by_level = defaultdict(list)
for entry in log_stream:
by_level[entry.level].append(entry)
for level, entries in by_level.items():
print(f"{level}: {len(entries)} entries")
# ERROR: 4 entries
# INFO: 2 entries
# WARNING: 1 entries
FAQ
What is the Python collections module?
The collections module is part of Python's standard library. It provides specialized container datatypes that extend the built-in types (dict, list, tuple, set) with additional functionality. The main classes are Counter, defaultdict, deque, namedtuple, OrderedDict, and ChainMap. Each one solves a specific category of data handling problems more efficiently than the built-in types alone.
When should I use Counter vs defaultdict(int)?
Use Counter when your primary goal is counting elements or comparing frequency distributions. It provides most_common(), arithmetic operators (+, -, &, |), and can count an entire iterable in a single constructor call. Use defaultdict(int) when counting is incidental to a broader data structure pattern, or when you need a general-purpose dictionary with integer defaults.
Is deque thread-safe in Python?
Yes. In CPython, deque.append(), deque.appendleft(), deque.pop(), and deque.popleft() are atomic operations due to the GIL (Global Interpreter Lock). This makes deque safe to use as a thread-safe queue without additional locking. However, compound operations (like check-then-act sequences) still need explicit synchronization. For thread-safe patterns using deque and queue.Queue, see our Python threading guide.
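A small demonstration of that guarantee, assuming CPython (the thread and item counts are arbitrary):

```python
from collections import deque
import threading

q = deque()

def producer(n):
    for i in range(n):
        q.append(i)  # atomic in CPython: no explicit lock needed

# Four threads appending concurrently to the same deque
threads = [threading.Thread(target=producer, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(q))  # 40000 -- no items lost despite concurrent appends
```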
What is the difference between namedtuple and dataclass?
namedtuple creates immutable tuple subclasses with named fields. It is lightweight, supports iteration and unpacking, and uses minimal memory. dataclass (from the dataclasses module, Python 3.7+) creates full classes with mutable attributes by default, supports methods, properties, and inheritance. Use namedtuple for simple, immutable data records. Use dataclass when you need mutability, complex behavior, or extensive type annotations.
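A quick contrast of the two (the type names here are invented for illustration):

```python
from collections import namedtuple
from dataclasses import dataclass, FrozenInstanceError

PointNT = namedtuple('PointNT', 'x y')

@dataclass(frozen=True)
class PointDC:
    x: float
    y: float

pn = PointNT(1.0, 2.0)
x, y = pn        # namedtuple supports tuple unpacking
print(x + y)     # 3.0

pd_point = PointDC(1.0, 2.0)
try:
    pd_point.x = 5.0  # frozen dataclass rejects mutation
except FrozenInstanceError:
    print("immutable")
```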
Does OrderedDict still matter in Python 3.7+?
Yes, in two specific cases. First, OrderedDict equality comparisons consider element order (OrderedDict(a=1, b=2) != OrderedDict(b=2, a=1)), while regular dicts do not. Second, OrderedDict provides move_to_end() for reordering elements, which is useful for implementing LRU caches and priority-based data structures. For all other use cases, regular dict is sufficient and more performant.
How does ChainMap differ from merging dictionaries?
ChainMap creates a view over multiple dictionaries without copying data. Lookups search each dictionary in order. Changes to the underlying dictionaries are reflected immediately in the ChainMap. In contrast, merging with {**d1, **d2} or d1 | d2 creates a new dictionary, duplicating all data. ChainMap is more memory-efficient for large dictionaries and preserves the layered structure for configuration and scoping patterns.
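A short sketch of the live-view difference (the dictionary contents are illustrative):

```python
from collections import ChainMap

overrides = {'theme': 'dark'}
defaults = {'theme': 'light', 'lang': 'en'}

view = ChainMap(overrides, defaults)  # no copying: a layered view
merged = {**defaults, **overrides}    # full copy taken at merge time

defaults['lang'] = 'fr'               # mutate an underlying dict

print(view['lang'])    # 'fr' -- ChainMap sees the change
print(merged['lang'])  # 'en' -- the merged snapshot is stale
```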
Can I use collections types with type hints?
Yes. Use collections.Counter[str] for typed counters, collections.defaultdict[str, list[int]] for typed defaultdicts, and collections.deque[int] for typed deques. For namedtuple, prefer typing.NamedTuple which supports type annotations directly in the class definition. All collections types are fully compatible with mypy and other type checkers. See our Python type hints guide for more on typing patterns.
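For example, using the built-in generic syntax available since Python 3.9 (the variable names are illustrative):

```python
from collections import Counter, defaultdict, deque

# Collections classes support subscripting for type annotations
word_counts: Counter[str] = Counter(['a', 'b', 'a'])
groups: defaultdict[str, list[int]] = defaultdict(list)
window: deque[int] = deque(maxlen=3)

groups['evens'].append(2)
window.extend([1, 2, 3, 4])  # maxlen=3 drops the oldest element

print(word_counts['a'], groups['evens'], list(window))  # 2 [2] [2, 3, 4]
```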
Conclusion
Python's collections module provides six specialized container types that eliminate common boilerplate patterns. Counter replaces manual counting loops. defaultdict removes KeyError handling. deque gives you fast double-ended operations. namedtuple adds readable field names to tuples. OrderedDict handles order-sensitive comparisons and reordering. ChainMap manages layered dictionary lookups without data duplication.
Each type solves a specific problem better than the built-in containers. Learning when to use each one will make your Python code shorter, faster, and easier to maintain. The key is matching the data structure to the operation pattern: counting (Counter), grouping (defaultdict), queue/stack (deque), structured records (namedtuple), ordered operations (OrderedDict), and layered lookups (ChainMap). When you need to sort lists of these data structures, Python's sorted() function works seamlessly with collection types.
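As a quick illustration of that last point, sorted() accepts namedtuples like any other sequence elements (the record type here is illustrative):

```python
from collections import namedtuple

Employee = namedtuple('Employee', 'name salary')
staff = [Employee('Bob', 85000), Employee('Alice', 95000), Employee('Eve', 72000)]

# Sort records by a named field instead of a cryptic index
by_salary = sorted(staff, key=lambda e: e.salary)
print([e.name for e in by_salary])  # ['Eve', 'Bob', 'Alice']
```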
Related Guides
- Python dataclasses -- Modern alternative to namedtuple for data records
- Python type hints -- Type annotations for collection types
- Python sort list -- Sorting techniques for lists and collection data
- Python generators -- Memory-efficient iteration over collections