Python unittest: Write and Run Unit Tests (Complete Guide)
You write a function that processes user data. It works when you test it manually with a few inputs. Then a coworker changes a helper function it depends on, and your code silently returns wrong results for three weeks before anyone notices. The bug makes it to production. Customer records are corrupted. The root cause was a one-line change that nobody verified against existing behavior. This is the exact failure mode that unit testing prevents. Python ships with unittest in the standard library -- a full-featured testing framework that catches regressions before they reach production, with zero third-party dependencies.
What Is unittest and Why Use It?
unittest is Python's built-in testing framework, modeled after Java's JUnit. It has been part of the standard library since Python 2.1, which means every Python installation already has it. No pip install. No dependency management. You write test classes, define test methods, and run them from the command line.
Unit tests verify that individual pieces of code (functions, methods, classes) behave correctly in isolation. When every unit works on its own, integrating them is far less likely to produce hidden bugs.
Here is what unittest provides out of the box:
- Test case classes with automatic test discovery
- Rich assertion methods (equality, truthiness, exceptions, warnings)
- setUp/tearDown hooks at both the method and class level
- Test suites for organizing and grouping tests
- Mocking via unittest.mock (added in Python 3.3)
- Command-line test runner with verbosity controls
Your First Unit Test
Start with a function to test and a corresponding test class.
```python
# calculator.py
def add(a, b):
    return a + b

def divide(a, b):
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b
```

```python
# test_calculator.py
import unittest
from calculator import add, divide

class TestCalculator(unittest.TestCase):
    def test_add_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

    def test_add_zero(self):
        self.assertEqual(add(0, 0), 0)

    def test_divide_normal(self):
        self.assertEqual(divide(10, 2), 5.0)

    def test_divide_by_zero_raises(self):
        with self.assertRaises(ValueError):
            divide(10, 0)

if __name__ == "__main__":
    unittest.main()
```

Run it:
```bash
python -m unittest test_calculator.py -v
```

Output:

```
test_add_negative_numbers (test_calculator.TestCalculator) ... ok
test_add_positive_numbers (test_calculator.TestCalculator) ... ok
test_add_zero (test_calculator.TestCalculator) ... ok
test_divide_by_zero_raises (test_calculator.TestCalculator) ... ok
test_divide_normal (test_calculator.TestCalculator) ... ok

----------------------------------------------------------------------
Ran 5 tests in 0.001s

OK
```

Every method whose name starts with test_ is automatically detected and executed. The class inherits from unittest.TestCase, which provides all the assertion methods.
Assertion Methods Reference
unittest.TestCase includes a comprehensive set of assertion methods. Each produces a clear failure message when the check does not pass.
Equality and Identity Assertions
| Method | Checks | Example |
|---|---|---|
| assertEqual(a, b) | a == b | self.assertEqual(result, 42) |
| assertNotEqual(a, b) | a != b | self.assertNotEqual(result, 0) |
| assertIs(a, b) | a is b | self.assertIs(singleton, instance) |
| assertIsNot(a, b) | a is not b | self.assertIsNot(obj1, obj2) |
| assertIsNone(x) | x is None | self.assertIsNone(result) |
| assertIsNotNone(x) | x is not None | self.assertIsNotNone(user) |
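The equality-versus-identity distinction is easy to demonstrate: two lists with the same contents are equal but are not the same object. A short sketch (the test class is invented for illustration):

```python
import unittest

class TestIdentityVsEquality(unittest.TestCase):
    def test_equal_but_not_identical(self):
        a = [1, 2, 3]
        b = [1, 2, 3]
        self.assertEqual(a, b)   # same value
        self.assertIsNot(a, b)   # different objects in memory

    def test_missing_key_is_none(self):
        # dict.get returns None for absent keys
        self.assertIsNone({}.get("missing"))

suite = unittest.TestLoader().loadTestsFromTestCase(TestIdentityVsEquality)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```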
Boolean and Membership Assertions
| Method | Checks | Example |
|---|---|---|
| assertTrue(x) | bool(x) is True | self.assertTrue(is_valid) |
| assertFalse(x) | bool(x) is False | self.assertFalse(has_errors) |
| assertIn(a, b) | a in b | self.assertIn("admin", roles) |
| assertNotIn(a, b) | a not in b | self.assertNotIn("deleted", status) |
| assertIsInstance(a, b) | isinstance(a, b) | self.assertIsInstance(result, dict) |
Numeric and Collection Assertions
| Method | Checks | Example |
|---|---|---|
| assertAlmostEqual(a, b) | round(a-b, 7) == 0 | self.assertAlmostEqual(0.1 + 0.2, 0.3) |
| assertGreater(a, b) | a > b | self.assertGreater(len(results), 0) |
| assertLess(a, b) | a < b | self.assertLess(latency, 1.0) |
| assertCountEqual(a, b) | Same elements, any order | self.assertCountEqual([3,1,2], [1,2,3]) |
| assertListEqual(a, b) | Lists are equal | self.assertListEqual(result, expected) |
| assertDictEqual(a, b) | Dicts are equal | self.assertDictEqual(config, defaults) |
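assertAlmostEqual also accepts places and delta keyword arguments to control the tolerance. A sketch (the values are chosen purely for illustration):

```python
import unittest

class TestFloatTolerance(unittest.TestCase):
    def test_default_seven_places(self):
        # 0.1 + 0.2 is 0.30000000000000004, equal to 0.3 at 7 places
        self.assertAlmostEqual(0.1 + 0.2, 0.3)

    def test_custom_places(self):
        # round(3.1416 - 3.14, 2) == 0, so this passes
        self.assertAlmostEqual(3.1416, 3.14, places=2)

    def test_absolute_delta(self):
        # delta replaces rounding with an absolute tolerance
        self.assertAlmostEqual(100.0, 100.4, delta=0.5)

suite = unittest.TestLoader().loadTestsFromTestCase(TestFloatTolerance)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```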
Exception and Warning Assertions
```python
import unittest
import warnings

class TestExceptions(unittest.TestCase):
    def test_raises_value_error(self):
        """assertRaises checks that the exception is raised."""
        with self.assertRaises(ValueError):
            int("not_a_number")

    def test_raises_with_message(self):
        """assertRaisesRegex checks both the exception and its message."""
        with self.assertRaisesRegex(ValueError, "invalid literal"):
            int("not_a_number")

    def test_warns_deprecation(self):
        """assertWarns checks that a warning is issued."""
        with self.assertWarns(DeprecationWarning):
            warnings.warn("old function", DeprecationWarning)
```

Always prefer specific assertions over assertTrue. Instead of self.assertTrue(result == 42), use self.assertEqual(result, 42). The specific version produces a clear failure message like 41 != 42, while assertTrue only reports False is not true.
setUp and tearDown: Test Fixtures
Most tests need some initial state -- a database connection, a temporary file, or a pre-configured object. The setUp and tearDown methods run before and after each test method, giving every test a fresh starting point.
```python
import unittest
import os
import tempfile

class TestFileProcessor(unittest.TestCase):
    def setUp(self):
        """Runs before EACH test method."""
        self.test_dir = tempfile.mkdtemp()
        self.test_file = os.path.join(self.test_dir, "data.txt")
        with open(self.test_file, "w") as f:
            f.write("line1\nline2\nline3\n")

    def tearDown(self):
        """Runs after EACH test method."""
        os.remove(self.test_file)
        os.rmdir(self.test_dir)

    def test_read_lines(self):
        with open(self.test_file, "r") as f:
            lines = f.readlines()
        self.assertEqual(len(lines), 3)

    def test_file_exists(self):
        self.assertTrue(os.path.exists(self.test_file))

    def test_first_line_content(self):
        with open(self.test_file, "r") as f:
            first_line = f.readline().strip()
        self.assertEqual(first_line, "line1")
```

Each test method gets its own setUp call. If test_read_lines modifies the file, test_first_line_content still sees the original content because setUp recreates it.
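An alternative to tearDown is addCleanup, which registers a callback the moment a resource is created; registered cleanups run after each test even if the rest of setUp raises. A sketch of the same fixture using it (the module-level created dict exists only so the demonstration can verify the cleanup afterwards):

```python
import unittest
import os
import shutil
import tempfile

created = {}  # records the path so we can check it was removed

class TestWithAddCleanup(unittest.TestCase):
    def setUp(self):
        self.test_dir = tempfile.mkdtemp()
        created["dir"] = self.test_dir
        # Registered immediately; cleanups run in reverse order
        self.addCleanup(shutil.rmtree, self.test_dir)
        self.test_file = os.path.join(self.test_dir, "data.txt")
        with open(self.test_file, "w") as f:
            f.write("line1\n")

    def test_file_created(self):
        self.assertTrue(os.path.exists(self.test_file))

suite = unittest.TestLoader().loadTestsFromTestCase(TestWithAddCleanup)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(os.path.exists(created["dir"]))  # False -- the cleanup already ran
```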
setUpClass and tearDownClass: One-Time Setup
Some resources are expensive to create -- database connections, large data fixtures, server processes. Use setUpClass and tearDownClass to create them once for the entire test class.
```python
import unittest
import sqlite3

class TestDatabase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        """Runs ONCE before all tests in this class."""
        cls.conn = sqlite3.connect(":memory:")
        cls.cursor = cls.conn.cursor()
        cls.cursor.execute("""
            CREATE TABLE users (
                id INTEGER PRIMARY KEY,
                name TEXT NOT NULL,
                email TEXT UNIQUE NOT NULL
            )
        """)
        cls.cursor.executemany(
            "INSERT INTO users (name, email) VALUES (?, ?)",
            [
                ("Alice", "alice@example.com"),
                ("Bob", "bob@example.com"),
                ("Charlie", "charlie@example.com"),
            ],
        )
        cls.conn.commit()

    @classmethod
    def tearDownClass(cls):
        """Runs ONCE after all tests in this class."""
        cls.conn.close()

    def test_user_count(self):
        self.cursor.execute("SELECT COUNT(*) FROM users")
        count = self.cursor.fetchone()[0]
        self.assertEqual(count, 3)

    def test_find_user_by_email(self):
        self.cursor.execute(
            "SELECT name FROM users WHERE email = ?",
            ("bob@example.com",),
        )
        name = self.cursor.fetchone()[0]
        self.assertEqual(name, "Bob")

    def test_unique_emails(self):
        with self.assertRaises(sqlite3.IntegrityError):
            self.cursor.execute(
                "INSERT INTO users (name, email) VALUES (?, ?)",
                ("Duplicate", "alice@example.com"),
            )
```

| Hook | Runs | Decorator | Use Case |
|---|---|---|---|
| setUp | Before each test method | None | Create fresh objects, reset state |
| tearDown | After each test method | None | Clean up files, reset mocks |
| setUpClass | Once before all tests in the class | @classmethod | Database connections, expensive fixtures |
| tearDownClass | Once after all tests in the class | @classmethod | Close connections, delete shared resources |
Mocking with unittest.mock
Real applications depend on databases, APIs, file systems, and network services. You do not want your unit tests hitting a production API or requiring a running database. unittest.mock replaces those dependencies with controlled substitutes.
Basic Mock Object
```python
from unittest.mock import Mock

# Create a mock object
api_client = Mock()

# Configure return values
api_client.get_user.return_value = {"id": 1, "name": "Alice"}

# Use it like a real object
user = api_client.get_user(user_id=1)
print(user)  # {'id': 1, 'name': 'Alice'}

# Verify it was called correctly
api_client.get_user.assert_called_once_with(user_id=1)
```

Patching with @patch
The @patch decorator replaces an object in a specific module for the duration of a test. This is the primary tool for isolating units from their dependencies.
```python
# user_service.py
import requests

def get_user_name(user_id):
    response = requests.get(f"https://api.example.com/users/{user_id}")
    response.raise_for_status()
    return response.json()["name"]
```

```python
# test_user_service.py
import unittest
from unittest.mock import patch, Mock
from user_service import get_user_name

class TestUserService(unittest.TestCase):
    @patch("user_service.requests.get")
    def test_get_user_name_success(self, mock_get):
        """Mock the requests.get call to avoid hitting the real API."""
        mock_response = Mock()
        mock_response.json.return_value = {"id": 1, "name": "Alice"}
        mock_response.raise_for_status.return_value = None
        mock_get.return_value = mock_response

        result = get_user_name(1)

        self.assertEqual(result, "Alice")
        mock_get.assert_called_once_with("https://api.example.com/users/1")

    @patch("user_service.requests.get")
    def test_get_user_name_http_error(self, mock_get):
        """Verify that HTTP errors propagate correctly."""
        import requests
        mock_get.return_value.raise_for_status.side_effect = (
            requests.exceptions.HTTPError("404 Not Found")
        )
        with self.assertRaises(requests.exceptions.HTTPError):
            get_user_name(999)
```

Patching as Context Manager
```python
import unittest
from unittest.mock import patch

class TestConfig(unittest.TestCase):
    def test_reads_environment_variable(self):
        with patch.dict("os.environ", {"DATABASE_URL": "sqlite:///test.db"}):
            import os
            self.assertEqual(os.environ["DATABASE_URL"], "sqlite:///test.db")
```

MagicMock vs Mock
MagicMock is a subclass of Mock that pre-configures magic methods (__len__, __iter__, __getitem__, etc.). Use MagicMock when the code under test calls dunder methods on the mocked object.
```python
from unittest.mock import MagicMock

mock_list = MagicMock()
mock_list.__len__.return_value = 5
mock_list.__getitem__.return_value = "item"

print(len(mock_list))  # 5
print(mock_list[0])    # item
```

side_effect for Complex Behavior
side_effect lets a mock raise exceptions, return different values on successive calls, or run a custom function.
```python
from unittest.mock import Mock

# Raise an exception
mock_db = Mock()
mock_db.connect.side_effect = ConnectionError("Database unreachable")

# Return different values on successive calls
mock_api = Mock()
mock_api.fetch.side_effect = [{"page": 1}, {"page": 2}, StopIteration]

print(mock_api.fetch())  # {'page': 1}
print(mock_api.fetch())  # {'page': 2}

# Custom function
def validate_input(x):
    if x < 0:
        raise ValueError("Negative input")
    return x * 2

mock_fn = Mock(side_effect=validate_input)
print(mock_fn(5))  # 10
```

Test Discovery
You do not need to manually list every test file. Python's test discovery finds and runs all tests that follow the naming convention.
```bash
# Discover and run all tests in the current directory tree
python -m unittest discover

# Specify a start directory and pattern
python -m unittest discover -s tests -p "test_*.py"

# Verbose output
python -m unittest discover -v
```

Test discovery searches for files matching test_*.py (the default pattern), imports them, and runs every class that inherits from unittest.TestCase.
Recommended Project Structure
```
my_project/
    src/
        calculator.py
        user_service.py
        utils.py
    tests/
        __init__.py
        test_calculator.py
        test_user_service.py
        test_utils.py
    setup.py
```

Run all tests from the project root:

```bash
python -m unittest discover -s tests -v
```

Organizing Tests with Test Suites
For fine-grained control over which tests run, build test suites manually.
```python
import unittest
from test_calculator import TestCalculator
from test_user_service import TestUserService

def fast_tests():
    """Suite of tests that run quickly (no I/O, no network)."""
    suite = unittest.TestSuite()
    suite.addTest(TestCalculator("test_add_positive_numbers"))
    suite.addTest(TestCalculator("test_add_negative_numbers"))
    suite.addTest(TestCalculator("test_divide_normal"))
    return suite

def all_tests():
    """Full test suite."""
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(TestCalculator))
    suite.addTests(loader.loadTestsFromTestCase(TestUserService))
    return suite

if __name__ == "__main__":
    runner = unittest.TextTestRunner(verbosity=2)
    runner.run(fast_tests())
```

Skipping Tests and Expected Failures
Sometimes a test should only run under certain conditions -- a specific OS, a particular Python version, or when an external service is available.
```python
import unittest
import sys

class TestPlatformSpecific(unittest.TestCase):
    @unittest.skip("Temporarily disabled while refactoring")
    def test_feature_under_construction(self):
        pass

    @unittest.skipIf(sys.platform == "win32", "Not supported on Windows")
    def test_unix_permissions(self):
        import os
        self.assertTrue(os.access("/tmp", os.W_OK))

    @unittest.skipUnless(sys.platform.startswith("linux"), "Linux only")
    def test_proc_filesystem(self):
        import os
        self.assertTrue(os.path.exists("/proc"))

    @unittest.expectedFailure
    def test_known_bug(self):
        """This test documents a known bug. It SHOULD fail."""
        self.assertEqual(1 + 1, 3)
```

| Decorator | Effect |
|---|---|
| @unittest.skip(reason) | Always skip this test |
| @unittest.skipIf(condition, reason) | Skip if condition is True |
| @unittest.skipUnless(condition, reason) | Skip unless condition is True |
| @unittest.expectedFailure | Mark as expected to fail; passing is reported as an unexpected success |
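Besides the decorators, a test can decide to skip itself at runtime by calling self.skipTest(reason), which is useful when the condition is only known mid-test. A sketch (the availability check is a made-up stand-in, hard-coded to False):

```python
import unittest

def service_available():
    # Hypothetical check; always False for this demonstration
    return False

class TestRuntimeSkip(unittest.TestCase):
    def test_needs_external_service(self):
        if not service_available():
            self.skipTest("external service is not reachable")
        self.fail("unreachable: would exercise the service here")

suite = unittest.TestLoader().loadTestsFromTestCase(TestRuntimeSkip)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(len(result.skipped))  # 1 -- skipped tests do not count as failures
```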
Parameterized Tests with subTest
Testing the same logic with different inputs is common. Instead of writing separate test methods for each case, use subTest to run parameterized assertions within a single method.
```python
import unittest

def is_palindrome(s):
    cleaned = s.lower().replace(" ", "")
    return cleaned == cleaned[::-1]

class TestPalindrome(unittest.TestCase):
    def test_palindromes(self):
        test_cases = [
            ("racecar", True),
            ("hello", False),
            ("A man a plan a canal Panama", True),
            ("", True),
            ("ab", False),
            ("madam", True),
            ("Nurses Run!", True),  # fails: punctuation is not stripped
        ]
        for text, expected in test_cases:
            with self.subTest(text=text):
                self.assertEqual(is_palindrome(text), expected)
```

With subTest, a failure in one case does not stop the others from running. The output identifies exactly which sub-case failed:

```
FAIL: test_palindromes (test_palindrome.TestPalindrome) (text='Nurses Run!')
AssertionError: False != True
```

Without subTest, the first failure would abort the entire method and you would not know which other cases also fail.
unittest vs pytest vs doctest
Python has three built-in or commonly used testing tools. Each serves a different purpose.
| Feature | unittest | pytest | doctest |
|---|---|---|---|
| Included in stdlib | Yes | No (pip install) | Yes |
| Test style | Class-based (TestCase) | Function-based (plain def test_) | Embedded in docstrings |
| Assertions | self.assertEqual, self.assertTrue, etc. | Plain assert statement | Expected output matching |
| Fixtures | setUp/tearDown, setUpClass | @pytest.fixture with dependency injection | None |
| Parameterized tests | subTest (limited) | @pytest.mark.parametrize (powerful) | One example per docstring |
| Mocking | unittest.mock (built-in) | unittest.mock + monkeypatch | Not applicable |
| Test discovery | python -m unittest discover | pytest (auto-discovers) | python -m doctest module.py |
| Output on failure | Basic diff | Detailed diff with context | Shows expected vs actual output |
| Plugins | None | 1000+ plugins (coverage, fixtures, etc.) | None |
| Learning curve | Moderate (OOP patterns) | Low (plain functions) | Very low |
| Best for | Standard library projects, no extra dependencies | Most Python projects, complex test setups | Simple examples in documentation |
When to choose unittest:
- You want zero external dependencies
- Your organization or project already uses unittest
- You need class-based test organization
- You want unittest.mock without any extra setup
When to choose pytest:
- You want simpler syntax and better failure output
- You need powerful parameterization or fixtures
- You rely on the pytest plugin ecosystem (coverage, async, Django, etc.)
When to choose doctest:
- You want to verify that code examples in documentation still work
- The tests are simple input/output pairs
Note that pytest can run unittest-style tests without modification. Many teams start with unittest and switch to pytest as the runner while keeping their existing test classes.
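As a sketch of what that migration looks like, here is the same check in both styles; the pytest version is a plain function with a bare assert (invoked directly here, since pytest would normally collect and run it by name):

```python
import unittest

def add(a, b):
    return a + b

# unittest style: class-based, explicit assertion method
class TestAddUnittest(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

# pytest style: plain function, plain assert -- ordinary Python
def test_add():
    assert add(2, 3) == 5

test_add()  # raises AssertionError on failure

suite = unittest.TestLoader().loadTestsFromTestCase(TestAddUnittest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```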
Testing a Real-World Class: Complete Example
Here is a complete example testing a shopping cart implementation.
```python
# cart.py
class Product:
    def __init__(self, name, price):
        if price < 0:
            raise ValueError("Price cannot be negative")
        self.name = name
        self.price = price

class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, product, quantity=1):
        if quantity <= 0:
            raise ValueError("Quantity must be positive")
        self.items.append({"product": product, "quantity": quantity})

    def total(self):
        return sum(
            item["product"].price * item["quantity"]
            for item in self.items
        )

    def remove(self, product_name):
        self.items = [
            item for item in self.items
            if item["product"].name != product_name
        ]

    def item_count(self):
        return sum(item["quantity"] for item in self.items)
```

```python
# test_cart.py
import unittest
from cart import Product, ShoppingCart

class TestProduct(unittest.TestCase):
    def test_create_product(self):
        p = Product("Widget", 9.99)
        self.assertEqual(p.name, "Widget")
        self.assertAlmostEqual(p.price, 9.99)

    def test_negative_price_raises(self):
        with self.assertRaises(ValueError):
            Product("Bad", -5.00)

class TestShoppingCart(unittest.TestCase):
    def setUp(self):
        self.cart = ShoppingCart()
        self.apple = Product("Apple", 1.50)
        self.bread = Product("Bread", 3.00)

    def test_empty_cart_total(self):
        self.assertEqual(self.cart.total(), 0)

    def test_add_single_item(self):
        self.cart.add(self.apple)
        self.assertEqual(self.cart.item_count(), 1)
        self.assertAlmostEqual(self.cart.total(), 1.50)

    def test_add_multiple_items(self):
        self.cart.add(self.apple, quantity=3)
        self.cart.add(self.bread, quantity=2)
        self.assertEqual(self.cart.item_count(), 5)
        self.assertAlmostEqual(self.cart.total(), 10.50)

    def test_remove_item(self):
        self.cart.add(self.apple, quantity=2)
        self.cart.add(self.bread)
        self.cart.remove("Apple")
        self.assertEqual(self.cart.item_count(), 1)
        self.assertAlmostEqual(self.cart.total(), 3.00)

    def test_remove_nonexistent_item(self):
        self.cart.add(self.apple)
        self.cart.remove("Nonexistent")
        self.assertEqual(self.cart.item_count(), 1)

    def test_add_zero_quantity_raises(self):
        with self.assertRaises(ValueError):
            self.cart.add(self.apple, quantity=0)

    def test_add_negative_quantity_raises(self):
        with self.assertRaises(ValueError):
            self.cart.add(self.apple, quantity=-1)

if __name__ == "__main__":
    unittest.main()
```

Best Practices for unittest
1. One Assertion Per Concept
Each test method should verify one logical concept. Multiple assertions are fine if they all check different aspects of the same operation.
```python
# GOOD -- multiple assertions about the same operation
def test_user_creation(self):
    user = create_user("alice", "alice@example.com")
    self.assertEqual(user.name, "alice")
    self.assertEqual(user.email, "alice@example.com")
    self.assertIsNotNone(user.id)

# BAD -- testing unrelated things in one method
def test_everything(self):
    user = create_user("alice", "alice@example.com")
    self.assertEqual(user.name, "alice")
    users = list_all_users()
    self.assertGreater(len(users), 0)  # Unrelated assertion
```

2. Use Descriptive Test Names
Test names should describe the scenario and expected outcome. When a test fails in CI, the name alone should tell you what went wrong.
```python
# GOOD
def test_divide_by_zero_raises_value_error(self):
    ...

def test_empty_cart_returns_zero_total(self):
    ...

# BAD
def test_divide(self):
    ...

def test1(self):
    ...
```

3. Tests Must Be Independent
No test should depend on another test's output or execution order. Each test should set up its own state and clean up after itself.
4. Keep Tests Fast
Unit tests should run in milliseconds. If a test needs a database, mock it. If it needs an API, mock it. Save slow integration tests for a separate suite.
5. Test Edge Cases
Always test boundary conditions: empty inputs, zero values, None, very large inputs, and invalid types.
```python
def test_edge_cases(self):
    test_cases = [
        ([], 0),                   # empty list
        ([0], 0),                  # single zero
        ([-1, -2], -3),            # all negative
        ([999999999], 999999999),  # large number
    ]
    for inputs, expected in test_cases:
        with self.subTest(inputs=inputs):
            self.assertEqual(sum(inputs), expected)
```

6. Do Not Test Implementation Details
Test the public interface and behavior, not internal state or private methods. If you refactor internals, your tests should still pass.
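A sketch of the distinction, using an invented Counter class: the good test exercises observable behavior, while the commented-out bad assertion pins an internal attribute that a refactor could legitimately change.

```python
import unittest

class Counter:
    """Hypothetical class; the list-based storage is an internal detail."""
    def __init__(self):
        self._events = []  # could later become a plain integer

    def increment(self):
        self._events.append(1)

    def value(self):
        return len(self._events)

class TestCounterBehavior(unittest.TestCase):
    def test_two_increments_give_two(self):
        # GOOD: exercises only the public interface
        counter = Counter()
        counter.increment()
        counter.increment()
        self.assertEqual(counter.value(), 2)

    # BAD (avoid): self.assertEqual(counter._events, [1, 1]) would
    # break the moment _events is replaced by an integer counter,
    # even though the class still behaves correctly.

suite = unittest.TestLoader().loadTestsFromTestCase(TestCounterBehavior)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```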
Running Tests from the Command Line
```bash
# Run a specific test file
python -m unittest test_calculator.py

# Run a specific test class
python -m unittest test_calculator.TestCalculator

# Run a specific test method
python -m unittest test_calculator.TestCalculator.test_add_positive_numbers

# Verbose output (shows each test name and result)
python -m unittest -v

# Discover all tests in a directory
python -m unittest discover -s tests -p "test_*.py" -v

# Stop on first failure (fail-fast mode)
python -m unittest -f
```

Writing and Debugging Tests with RunCell
When you develop in Jupyter notebooks -- common in data science and exploratory programming -- writing and running unit tests feels awkward. Notebooks execute cells interactively, but unittest expects modules and test runners. You end up copying code between notebooks and test files, losing the interactive feedback loop.
RunCell is an AI agent designed for Jupyter that bridges this gap. It can generate unittest-compatible test cases for functions you define in notebook cells, run them inside the notebook environment, and explain failures in context. If a mock is set up incorrectly or an assertion fails, RunCell inspects the live variables and shows you what the actual values were, not just the assertion error message. For data pipelines where you need to verify that DataFrame transformations produce the right output shape and values, RunCell can scaffold the test structure and assertions so you focus on the logic rather than the boilerplate.
FAQ
What is the difference between unittest and pytest?
unittest is Python's built-in testing framework with a class-based API. pytest is a third-party framework that uses plain functions and assert statements. pytest has a richer plugin ecosystem and better failure output, but requires installation. unittest works everywhere Python runs with no extra dependencies.
How do I run a single test method in unittest?
Use the command python -m unittest test_module.TestClass.test_method. For example: python -m unittest test_calculator.TestCalculator.test_add_positive_numbers. This runs only the specified method without executing other tests in the file.
What is the difference between setUp and setUpClass?
setUp runs before every individual test method, creating a fresh state each time. setUpClass runs once before all tests in the class and is a @classmethod. Use setUpClass for expensive setup like database connections. Use setUp for lightweight per-test state.
How do I mock an external API in unittest?
Use unittest.mock.patch to replace the function that calls the API. Patch the import path where the function is used, not where it is defined. For example, if user_service.py imports requests.get, patch user_service.requests.get, not requests.get.
Can pytest run unittest tests?
Yes. pytest is fully compatible with unittest-style test classes. You can run pytest in a project that uses unittest.TestCase classes without any modifications. This makes migration gradual -- you can write new tests in pytest style while keeping existing unittest tests.
How do I test that a function raises an exception?
Use self.assertRaises(ExceptionType) as a context manager. The test passes if the code inside the with block raises the specified exception, and fails if no exception or a different exception is raised. Use assertRaisesRegex to also check the exception message.
Conclusion
Python's unittest framework is a complete testing toolkit that ships with every Python installation. It provides test case classes, rich assertion methods, setup and teardown hooks at both the method and class level, mocking capabilities via unittest.mock, and built-in test discovery. You do not need to install anything to start writing reliable tests.
The fundamentals are straightforward: inherit from TestCase, name your methods with a test_ prefix, use specific assertion methods, and run with python -m unittest. As your project grows, add setUp/tearDown for test isolation, @patch for mocking external dependencies, and subTest for parameterized testing. Organize tests in a tests/ directory and let test discovery handle the rest.
Writing tests takes time upfront. It saves far more time downstream. Every regression caught by a unit test is a production incident that never happened, a customer complaint that never arrived, and a debugging session that never started. Whether you stick with unittest or eventually move to pytest, the testing habits you build around unittest's patterns -- isolation, clear assertions, mocked dependencies, and comprehensive edge case coverage -- apply universally across Python testing.