Writing Tests for Mastoscore
Each module in Mastoscore should have a corresponding test file in the tests/ directory. For example:
- mastoscore/config.py → tests/test_config.py
- mastoscore/fetch.py → tests/test_fetch.py
Test functions should follow this pattern:
- Arrange: Set up the test data and environment
- Act: Call the function being tested
- Assert: Verify the results
Example:
import tempfile
import textwrap

from mastoscore.config import read_config

def test_read_config_with_valid_file():
    """Test reading a valid config file."""
    # Arrange: write a minimal config to a temporary file.
    # textwrap.dedent strips the indentation, which would otherwise
    # confuse configparser (indented lines are parsed as continuations).
    with tempfile.NamedTemporaryFile(mode='w+', delete=False) as temp_file:
        temp_file.write(textwrap.dedent("""\
            [mastoscore]
            hashtag = testhashtag
            """))
        temp_file_path = temp_file.name
    # Act
    config = read_config(temp_file_path)
    # Assert
    assert config is not None
    assert config.get('mastoscore', 'hashtag') == 'testhashtag'
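Note that delete=False leaves the temporary file on disk after the test. pytest's built-in tmp_path fixture sidesteps the cleanup problem entirely; a minimal sketch of the same test using it (assuming read_config accepts any path string):

from mastoscore.config import read_config

def test_read_config_with_valid_file_tmp_path(tmp_path):
    """Variant of the test above using pytest's tmp_path fixture."""
    # tmp_path is a per-test pathlib.Path that pytest removes afterwards
    config_file = tmp_path / "mastoscore.ini"
    config_file.write_text("[mastoscore]\nhashtag = testhashtag\n")
    config = read_config(str(config_file))
    assert config is not None
    assert config.get('mastoscore', 'hashtag') == 'testhashtag'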
Using Fixtures
Fixtures provide a way to set up preconditions for tests. They are defined in conftest.py and can be used in any test file.
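The exact fixtures are project-specific, but a sample_config fixture in conftest.py might look something like this (an illustrative sketch, not the actual definition):

# tests/conftest.py (sketch)
import configparser
import pytest

@pytest.fixture
def sample_config():
    """Return a ConfigParser pre-populated with minimal test settings."""
    config = configparser.ConfigParser()
    config['mastoscore'] = {'hashtag': 'testhashtag'}
    return config

A test requests fixtures simply by naming them as parameters: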
from mastoscore.analyse import analyse

def test_analyse_function(sample_config, sample_journal_files, monkeypatch):
    """Test the analyse function."""
    # Point the config at the journal files created by the fixture
    sample_config.set('mastoscore', 'journaldir', sample_journal_files)
    # Use monkeypatch to replace get_dates with a stub
    def mock_get_dates(config):
        return []  # stand-in result; adjust to whatever get_dates returns
    monkeypatch.setattr('mastoscore.analyse.get_dates', mock_get_dates)
    # Call the function and assert results
    results = analyse(sample_config)
    assert results is not None
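monkeypatch can also set environment variables or attributes for the duration of a single test. A small sketch (the MASTOSCORE_DEBUG variable is hypothetical, purely to show the mechanics):

import os

def test_env_override(monkeypatch):
    """Sketch: monkeypatch.setenv is undone automatically after the test."""
    monkeypatch.setenv("MASTOSCORE_DEBUG", "1")  # hypothetical variable
    assert os.environ["MASTOSCORE_DEBUG"] == "1"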
Mocking
Use the mocker fixture provided by the pytest-mock plugin to mock external dependencies:
from mastoscore.fetch import fetch

def test_fetch_dry_run(mocker, sample_config):
    """Test the fetch function in dry run mode."""
    # Mock the Tooter class so the test makes no network calls
    mock_tooter = mocker.patch('mastoscore.fetch.Tooter')
    # Configure the instance the code under test will receive
    mock_tooter.return_value.timeline_hashtag.return_value = [
        {"id": "1", "content": "Test toot"}
    ]
    # Call the function
    fetch(sample_config)
    # Assert the mock was called correctly
    mock_tooter.assert_called_once()
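Because mocker.patch wraps unittest.mock.patch, the same mock can also simulate failures via side_effect. A sketch, assuming fetch lets exceptions from the Tooter client propagate:

import pytest

from mastoscore.fetch import fetch

def test_fetch_api_error(mocker, sample_config):
    """Sketch: simulate the Mastodon API failing mid-fetch."""
    mock_tooter = mocker.patch('mastoscore.fetch.Tooter')
    # side_effect makes the mocked method raise instead of returning
    mock_tooter.return_value.timeline_hashtag.side_effect = RuntimeError("API down")
    with pytest.raises(RuntimeError):
        fetch(sample_config)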
Testing File Operations
When testing functions that read or write files, use the temp_directory fixture:
import os

# write_to_file is the function under test; import it from its Mastoscore module

def test_write_file(temp_directory):
    """Test writing a file."""
    # Create a path in the temporary directory
    file_path = os.path.join(temp_directory, "test_file.txt")
    # Call the function that writes to the file
    write_to_file(file_path, "test content")
    # Verify the file was written correctly
    with open(file_path, 'r') as f:
        content = f.read()
    assert content == "test content"
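The temp_directory fixture itself would be defined in conftest.py. One plausible definition (a sketch; the project could equally rely on pytest's built-in tmp_path):

# tests/conftest.py (sketch)
import shutil
import tempfile
import pytest

@pytest.fixture
def temp_directory():
    """Yield a fresh directory and remove it once the test finishes."""
    path = tempfile.mkdtemp()
    yield path
    shutil.rmtree(path)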
Testing Error Conditions
Don't forget to test error conditions and edge cases:
import tempfile
import textwrap

from mastoscore.config import read_config

def test_read_config_with_missing_required_option():
    """Test reading a config file with missing required options."""
    # Create a config file that omits hashtag and the other required options
    with tempfile.NamedTemporaryFile(mode='w+', delete=False) as temp_file:
        temp_file.write(textwrap.dedent("""\
            [mastoscore]
            # missing hashtag and other required options
            """))
        temp_file_path = temp_file.name
    # The function should return None for an invalid config
    config = read_config(temp_file_path)
    assert config is None
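When a function reports failure by raising instead of returning None, assert that explicitly with pytest.raises. A sketch only: it assumes, hypothetically, that read_config raises FileNotFoundError for a path that does not exist; adapt it to the function's real behaviour:

import pytest

from mastoscore.config import read_config

def test_read_config_with_missing_file():
    """Sketch: expect an exception for a nonexistent config path."""
    # pytest.raises fails the test unless the block raises the named error
    with pytest.raises(FileNotFoundError):
        read_config("/nonexistent/path/mastoscore.ini")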
Parameterized Tests
Use pytest's @pytest.mark.parametrize decorator to run a single test function against multiple scenarios:
import pytest

# import validate_hashtag from the module where it is defined

@pytest.mark.parametrize("hashtag,expected", [
    ("test", "test"),
    ("test-hashtag", "test-hashtag"),
    ("", None),  # Empty hashtag should return None
])
def test_validate_hashtag(hashtag, expected):
    """Test hashtag validation with different inputs."""
    result = validate_hashtag(hashtag)
    assert result == expected
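Each tuple becomes a separate test case in pytest's report. When the raw values make failures hard to read, the decorator's standard ids argument names each case:

import pytest

@pytest.mark.parametrize("hashtag,expected", [
    ("test", "test"),
    ("test-hashtag", "test-hashtag"),
    ("", None),
], ids=["plain", "with-dash", "empty"])
def test_validate_hashtag_named(hashtag, expected):
    """Same cases as above, with readable test IDs."""
    assert validate_hashtag(hashtag) == expected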