E2E Workflow Tests Plan

Created: 2025-12-23
Target: Comprehensive Playwright regression tests for complex user workflows


Executive Summary

This plan outlines a series of Playwright E2E tests that validate complex, multi-step workflows in the CampaignBrain App. Unlike the existing surface-level tests that verify page loads and UI elements, these tests simulate real user scenarios end-to-end.


Current Test Coverage

Existing E2E Tests

Test File                     | Coverage                        | Type
test_page_loads.py            | All pages load without errors   | Surface
test_work_queue_workflow.py   | Work queue UI elements present  | Element
test_work_queue_tags.py       | TagInput component functions    | Component
test_segments_ui.py           | Segment CRUD modals             | CRUD
test_surveys_ui.py            | Survey CRUD and picker          | CRUD
test_chat_segment_creation.py | AI chat creates segments        | Integration
test_screenshots.py           | Screenshot capture              | Visual

Gaps Identified

  1. No multi-user scenarios - All tests use admin; no field user tests
  2. No data flow validation - Tests check UI but not persisted data
  3. No complete workflows - Individual operations tested, not full journeys
  4. No cleanup/isolation - Tests may affect each other
  5. No negative testing - Only happy paths tested

Proposed Test Scenarios

Scenario 1: Field Worker Onboarding

Goal: Verify admin can create a field user who can then log in and access the Work Queue.

┌─────────────────────────────────────────────────────────────────┐
│ Admin Login → Create User → Field Login → Access Work Queue     │
└─────────────────────────────────────────────────────────────────┘

Steps:
  1. Admin logs in
  2. Navigate to Users page
  3. Click "New User"
  4. Fill form: username, email, password, role=field
  5. Save user
  6. Verify user appears in list
  7. Log out
  8. Log in as new field user
  9. Verify dashboard loads
  10. Navigate to Work Queue
  11. Verify appropriate access (not admin pages)
  12. Cleanup: Delete test user

Assertions:
  • User creation succeeds (no errors)
  • User appears in user list with correct role
  • Field user can log in
  • Field user sees Work Queue
  • Field user cannot access /settings, /users (admin-only)
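
A minimal sketch of this flow, assuming the admin-authenticated page fixture the existing tests rely on (called authenticated_page here) plus the login selectors already used in conftest; the /users, /logout, and /work-queue routes and the form/button locators are placeholders to be matched against the real markup:

import re

import pytest
from playwright.sync_api import Page, expect


@pytest.mark.workflow
def test_field_user_onboarding(authenticated_page: Page, base_url: str):
    """Admin creates a field user; the new user can log in and reach the Work Queue only."""
    page = authenticated_page
    username, password = "e2e_field_demo", "testpass123"

    # Admin creates the field user (route and locators are placeholders)
    page.goto(f"{base_url}/users")
    page.get_by_role("button", name="New User").click()
    page.fill('input[name="username"]', username)
    page.fill('input[name="email"]', f"{username}@example.com")
    page.fill('input[name="password"]', password)
    page.select_option('select[name="role"]', "field")
    page.get_by_role("button", name="Save").click()
    expect(page.get_by_text(username).first).to_be_visible()   # user appears in the list

    # Re-authenticate as the new field user (same selectors as the login fixture)
    page.goto(f"{base_url}/logout")                            # placeholder logout route
    page.goto(f"{base_url}/login")
    page.fill('input[name="username"]', username)
    page.fill('input[name="password"]', password)
    page.click('button[type="submit"]')
    page.wait_for_url(re.compile(r"^(?!.*login).*$"), timeout=60000)

    # Field user reaches the Work Queue but not admin-only pages
    page.goto(f"{base_url}/work-queue")
    expect(page).not_to_have_url(re.compile("login"))
    page.goto(f"{base_url}/users")
    expect(page).not_to_have_url(re.compile(r"/users$"))       # expect a redirect away
    # Cleanup (deleting the user) would go through the api_client fixture described below.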


Scenario 2: Segment Creation and Population

Goal: Verify admin can create a segment with contacts.

┌──────────────────────────────────────────────────────────────────────┐
│ Admin Login → Create Contacts → Create Segment → Verify Population   │
└──────────────────────────────────────────────────────────────────────┘

Steps:
  1. Admin logs in
  2. Create 5-10 test contacts via API (faster than UI)
  3. Navigate to Segments page
  4. Click "New Segment"
  5. Choose "Manual List" option
  6. Select test contacts
  7. Name segment "E2E Test Segment [timestamp]"
  8. Save segment
  9. View segment details
  10. Verify contact count matches
  11. Cleanup: Delete segment, delete test contacts

Assertions:
  • Contacts created successfully
  • Segment appears in list
  • Segment shows correct member count
  • Viewing segment shows correct contacts
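
A sketch that leans on the test_contacts API fixture described under Test Infrastructure below for setup; the Segments-page locators ("New Segment", "Manual List", the contact picker rows) are placeholders, and authenticated_page is again the assumed admin page fixture:

import uuid

import pytest
from playwright.sync_api import Page, expect


@pytest.mark.workflow
def test_manual_segment_creation(authenticated_page: Page, base_url: str, test_contacts: list):
    """Admin builds a manual-list segment from contacts created through the API."""
    page = authenticated_page
    segment_name = f"E2E Test Segment {uuid.uuid4().hex[:8]}"

    page.goto(f"{base_url}/segments")
    page.get_by_role("button", name="New Segment").click()     # placeholder locator
    page.get_by_text("Manual List").click()                     # placeholder option
    page.fill('input[name="name"]', segment_name)
    for contact in test_contacts:
        page.get_by_text(contact["last_name"]).first.click()    # placeholder picker row
    page.get_by_role("button", name="Save").click()

    # Segment is listed and shows the expected member count
    expect(page.get_by_text(segment_name).first).to_be_visible()
    page.get_by_text(segment_name).first.click()
    expect(page.get_by_text(str(len(test_contacts))).first).to_be_visible()
    # Contacts are cleaned up by the fixture; delete the segment via api_client in teardown.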


Scenario 3: Survey Creation and Linking

Goal: Verify admin can create a survey and link it to a segment.

┌─────────────────────────────────────────────────────────────────────────┐
│ Admin Login → Create Survey → Create Segment → Link Survey to Segment   │
└─────────────────────────────────────────────────────────────────────────┘

Steps:
  1. Admin logs in
  2. Navigate to Surveys page
  3. Click "Create Survey"
  4. Fill: name, description
  5. Add questions (optional - depends on YASP)
  6. Save survey (in Draft status)
  7. Publish survey
  8. Navigate to Segments page
  9. Edit existing segment (or create new one)
  10. Open Survey Picker
  11. Select the new survey
  12. Save segment
  13. Verify survey linked in segment view
  14. Cleanup: Unlink survey, archive survey

Assertions:
  • Survey created in Draft status
  • Survey can be published
  • Survey appears in segment edit picker
  • Survey link persists after save
  • Survey visible in segment details
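
A sketch of the create-and-link flow, reusing the test_segment fixture below for the segment half and skipping the optional YASP question editing; the Surveys-page locators and the Survey Picker trigger ("Add Survey") are placeholders:

import uuid

import pytest
from playwright.sync_api import Page, expect


@pytest.mark.workflow
def test_survey_create_and_link(authenticated_page: Page, base_url: str, test_segment: dict):
    """Admin creates and publishes a survey, then links it to an existing segment."""
    page = authenticated_page
    survey_name = f"E2E Survey {uuid.uuid4().hex[:8]}"

    # Create and publish the survey (locators are placeholders)
    page.goto(f"{base_url}/surveys")
    page.get_by_role("button", name="Create Survey").click()
    page.fill('input[name="name"]', survey_name)
    page.fill('textarea[name="description"]', "Created by E2E test")
    page.get_by_role("button", name="Save").click()
    expect(page.get_by_text("Draft").first).to_be_visible()
    page.get_by_role("button", name="Publish").click()

    # Link it to the test segment through the Survey Picker
    page.goto(f"{base_url}/segments")
    page.get_by_text(test_segment["name"]).first.click()
    page.get_by_role("button", name="Edit").click()             # placeholder
    page.get_by_role("button", name="Add Survey").click()       # placeholder: opens Survey Picker
    page.get_by_text(survey_name).first.click()
    page.get_by_role("button", name="Save").click()

    # Link persists on the segment detail view
    expect(page.get_by_text(survey_name).first).to_be_visible()
    # Cleanup: unlink and archive the survey (via UI or api_client) in teardown.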


Scenario 4: Work Queue Assignment

Goal: Verify admin can assign a segment to a field user, creating a work queue.

┌─────────────────────────────────────────────────────────────────────────────┐
│ Setup → Assign Segment to User → Field User Sees Assignment in Work Queue   │
└─────────────────────────────────────────────────────────────────────────────┘

Prerequisites:
  • Field user exists
  • Segment with contacts exists

Steps:
  1. Admin logs in
  2. Navigate to Segments page
  3. Edit segment with contacts
  4. Open User Picker
  5. Select field user
  6. Save segment
  7. Log out
  8. Log in as field user
  9. Navigate to Work Queue
  10. Verify assignment appears
  11. Click assignment
  12. Verify first contact loads

Assertions:
  • User assignment saved successfully
  • Work queue item created for field user
  • Assignment shows in field user's Work Queue
  • Contact card displays correctly
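
A sketch that creates the assignment through the test_assignment API fixture below (rather than the User Picker UI described in the steps) and then verifies the field user's view; the /work-queue route and the contact-card text checks are placeholders:

import re

import pytest
from playwright.sync_api import Page, expect


@pytest.mark.workflow
def test_assignment_visible_in_work_queue(field_authenticated_page: Page, base_url: str,
                                          test_segment: dict, test_assignment: dict):
    """With the assignment created via the API, the field user sees it in the Work Queue."""
    page = field_authenticated_page
    page.goto(f"{base_url}/work-queue")                          # placeholder route

    # The assignment is listed by its segment name
    expect(page.get_by_text(test_segment["name"]).first).to_be_visible()

    # Opening it loads a contact card for one of the API-created contacts
    page.get_by_text(test_segment["name"]).first.click()
    expect(page.get_by_text(re.compile("E2E Test")).first).to_be_visible()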


Scenario 5: Complete Work Queue Contact Flow

Goal: Verify field user can work through contacts, logging actions and completing items.

┌────────────────────────────────────────────────────────────────────────────────┐
│ Field Login → Select Assignment → Work Contact → Log Action → Complete → Next  │
└────────────────────────────────────────────────────────────────────────────────┘

Prerequisites:
  • Assignment exists for field user with 3+ contacts

Steps:
  1. Field user logs in
  2. Navigate to Work Queue
  3. Select assignment
  4. Verify contact #1 loads
  5. Test Call action:
     • Click "Call" action button
     • Verify result options appear
     • Select "Answered"
     • Select "Supportive" outcome
     • Add notes
     • Click "Complete"
  6. Verify progress updates (1/N completed)
  7. Verify next contact loads automatically
  8. Test Skip action:
     • Click "Skip"
     • Enter skip reason
     • Confirm skip
  9. Verify next contact loads
  10. Test Door Knock action:
     • Click "Door" button
     • Complete with "Not Home"
  11. Verify stats update (calls, doors, completed)

Assertions:
  • Action type buttons work and show correct results
  • Contact actions logged correctly
  • Progress bar updates
  • Stats counters update
  • Skip reason persists
  • Notes saved with action
  • Next contact loads after completion
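
A sketch of the Call branch only, assuming the action and result labels quoted in the steps render as clickable buttons or options and the progress indicator reads roughly "1/N"; the notes field name is a placeholder:

import re

import pytest
from playwright.sync_api import Page, expect


@pytest.mark.workflow
@pytest.mark.slow
def test_work_queue_call_action(field_authenticated_page: Page, base_url: str,
                                test_segment: dict, test_assignment: dict):
    """Field user logs an Answered/Supportive call on the first contact and advances."""
    page = field_authenticated_page
    page.goto(f"{base_url}/work-queue")
    page.get_by_text(test_segment["name"]).first.click()

    # Log a "Call" action with an "Answered" / "Supportive" result (labels from the steps above)
    page.get_by_role("button", name="Call").click()
    page.get_by_text("Answered").click()
    page.get_by_text("Supportive").click()
    page.fill('textarea[name="notes"]', "E2E call note")         # placeholder field name
    page.get_by_role("button", name="Complete").click()

    # Progress advances to 1/N and the next contact loads automatically
    expect(page.get_by_text(re.compile(r"1\s*/\s*\d+")).first).to_be_visible()
    # The Skip and Door branches would follow the same pattern with their own labels.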


Scenario 6: Tag Management in Work Queue

Goal: Verify field user can add/remove tags on contacts while working the queue.

┌───────────────────────────────────────────────────────────────────────┐
│ Field Login → Work Contact → Add Tag → Remove Tag → Verify Persisted  │
└───────────────────────────────────────────────────────────────────────┘

Steps:
  1. Field user logs in
  2. Navigate to Work Queue with assignment
  3. Contact loads
  4. Add existing tag:
     • Click tag input
     • Type tag name
     • Select from suggestions
     • Verify tag chip appears
  5. Add new tag:
     • Type new tag name
     • Press Enter or click "Create"
     • Verify new tag appears
  6. Remove tag:
     • Click X on tag chip
     • Verify tag removed
  7. Complete contact and move to next
  8. Go back to original contact (via Audience page)
  9. Verify tags persisted

Assertions:
  • Tag search shows suggestions
  • Tags can be added from suggestions
  • New tags can be created inline
  • Tags display as chips
  • Tag removal works
  • Tags persist after leaving page
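
A sketch of the add/remove half, assuming the TagInput exposes a text box (located here via a hypothetical "Add tag" placeholder attribute) and renders removable chips; persistence after navigating away would be asserted the same way from the Audience page:

import pytest
from playwright.sync_api import Page, expect


@pytest.mark.workflow
def test_work_queue_inline_tagging(field_authenticated_page: Page, base_url: str,
                                   test_segment: dict, test_assignment: dict):
    """Field user creates a tag inline on the current contact, then removes it."""
    page = field_authenticated_page
    page.goto(f"{base_url}/work-queue")
    page.get_by_text(test_segment["name"]).first.click()

    # Create a new tag inline via the TagInput
    tag_name = "e2e-test-tag"
    tag_input = page.get_by_placeholder("Add tag")               # placeholder locator
    tag_input.fill(tag_name)
    tag_input.press("Enter")
    expect(page.get_by_text(tag_name).first).to_be_visible()     # chip appears

    # Remove the tag again via the chip's close control (selector is a placeholder)
    page.locator(f".tag-chip:has-text('{tag_name}') button").click()
    expect(page.get_by_text(tag_name).first).not_to_be_visible()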


Scenario 7: Survey Completion in Work Queue

Goal: Verify field user can complete linked surveys while working the queue.

┌────────────────────────────────────────────────────────────────────────────────┐
│ Setup → Field Works Queue → Survey Button Active → Complete Survey → Validated │
└────────────────────────────────────────────────────────────────────────────────┘

Prerequisites:
  • Segment with survey linked
  • Assignment exists for field user

Steps:
  1. Field user logs in
  2. Navigate to Work Queue
  3. Select assignment (segment with survey)
  4. Contact loads
  5. Verify Surveys section visible
  6. Click survey link/button
  7. Survey form opens (YASP integration)
  8. Complete survey questions
  9. Submit survey
  10. Return to Work Queue
  11. Verify survey marked as completed for this contact

Assertions:
  • Survey section visible when survey linked
  • Survey opens correctly
  • Survey submission works
  • Completion status tracked per contact
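
A deliberately thin sketch, since the survey form itself is rendered by YASP; every locator here (the Surveys section text, the survey link, the Submit button, the completion indicator) is a placeholder to be replaced once the real markup is known:

import re

import pytest
from playwright.sync_api import Page, expect


@pytest.mark.workflow
@pytest.mark.integration
def test_work_queue_survey_completion(field_authenticated_page: Page, base_url: str,
                                      test_segment: dict, test_assignment: dict):
    """Field user opens and submits the linked survey for the current contact.
    Assumes the segment already has a published survey linked (e.g. via Scenario 3 setup)."""
    page = field_authenticated_page
    page.goto(f"{base_url}/work-queue")
    page.get_by_text(test_segment["name"]).first.click()

    # Surveys section is visible and the linked survey opens
    expect(page.get_by_text("Surveys").first).to_be_visible()
    page.get_by_role("link", name=re.compile("Survey")).first.click()

    # Submit the YASP-rendered form and confirm completion is tracked for this contact
    page.get_by_role("button", name="Submit").click()
    page.go_back()
    expect(page.get_by_text("Completed").first).to_be_visible()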


Scenario 8: Full Campaign Workflow (Integration Test)

Goal: End-to-end test of the entire campaign workflow.

┌────────────────────────────────────────────────────────────────────────────────┐
│ Admin Setup → Field User Created → Contacts Imported → Segment Created →      │
│ Survey Created → Survey Linked → User Assigned → Field Works Queue →          │
│ Actions Logged → Survey Completed → Results Verified                           │
└────────────────────────────────────────────────────────────────────────────────┘

Duration: ~5-10 minutes

This is the "master" test that validates the entire system works together.
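
Most of the setup can come from the API fixtures described under Test Infrastructure below, so the master test largely declares its dependencies and then walks the queue; a skeleton, assuming those fixtures:

import pytest
from playwright.sync_api import Page


@pytest.mark.workflow
@pytest.mark.slow
@pytest.mark.integration
def test_full_campaign_workflow(
    api_client,                       # admin API session for setup, teardown, and result checks
    test_field_user: dict,            # field user created for this run
    test_contacts: list,              # contacts "imported" via the API
    test_segment: dict,               # segment populated with those contacts
    test_assignment: dict,            # segment assigned to the field user
    field_authenticated_page: Page,   # browser session logged in as that user
    base_url: str,
):
    """Exercise the whole campaign flow end to end using the shared fixtures."""
    page = field_authenticated_page
    page.goto(f"{base_url}/work-queue")
    # ...work the queue as in Scenarios 5-7, then verify logged actions and survey
    # responses through api_client before the fixtures tear everything down.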


Test Infrastructure

New Fixtures Needed

# tests/e2e/conftest.py additions

import re
import uuid

import pytest
import requests
from playwright.sync_api import Page


@pytest.fixture(scope="module")
def api_client(base_url: str) -> requests.Session:
    """Authenticated API client for data setup/teardown."""
    session = requests.Session()
    # Login and store token
    response = session.post(f"{base_url}/api/auth/token", data={
        "username": "admin",
        "password": "admin123"
    })
    token = response.json()["access_token"]
    session.headers["Authorization"] = f"Bearer {token}"
    return session


@pytest.fixture(scope="function")
def test_contacts(api_client, base_url):
    """Create test contacts and clean up after test."""
    contacts = []
    for i in range(5):
        response = api_client.post(f"{base_url}/api/persons", json={
            "first_name": f"E2E Test {i}",
            "last_name": f"Contact {uuid.uuid4().hex[:6]}",
            "email": f"e2e.test.{uuid.uuid4().hex[:8]}@example.com",
            "city": "Austin",
            "state": "TX"
        })
        contacts.append(response.json())

    yield contacts

    # Cleanup
    for contact in contacts:
        api_client.delete(f"{base_url}/api/persons/{contact['id']}")


@pytest.fixture(scope="function")
def test_segment(api_client, base_url, test_contacts):
    """Create test segment with contacts."""
    response = api_client.post(f"{base_url}/api/segments", json={
        "name": f"E2E Test Segment {uuid.uuid4().hex[:8]}",
        "description": "Created by E2E test",
        "is_dynamic": False
    })
    segment = response.json()

    # Add contacts to segment
    for contact in test_contacts:
        api_client.post(
            f"{base_url}/api/segments/{segment['id']}/persons/{contact['id']}"
        )

    yield segment

    # Cleanup
    api_client.delete(f"{base_url}/api/segments/{segment['id']}")


@pytest.fixture(scope="function")
def test_field_user(api_client, base_url):
    """Create test field user and clean up after test."""
    user_data = {
        "username": f"e2e_field_{uuid.uuid4().hex[:6]}",
        "email": f"e2e.field.{uuid.uuid4().hex[:8]}@example.com",
        "password": "testpass123",
        "first_name": "E2E",
        "last_name": "Field User",
        "role": "field"
    }
    response = api_client.post(f"{base_url}/api/users", json=user_data)
    user = response.json()
    user["password"] = user_data["password"]  # Keep password for login

    yield user

    # Cleanup
    api_client.delete(f"{base_url}/api/users/{user['id']}")


@pytest.fixture(scope="function")
def field_authenticated_page(page: Page, base_url: str, test_field_user: dict):
    """Return a page logged in as the test field user."""
    page.goto(f"{base_url}/login")
    page.fill('input[name="username"]', test_field_user["username"])
    page.fill('input[name="password"]', test_field_user["password"])
    page.click('button[type="submit"]')
    page.wait_for_url(re.compile(r"^(?!.*login).*$"), timeout=60000)
    return page


@pytest.fixture(scope="function")
def test_assignment(api_client, base_url, test_segment, test_field_user):
    """Create assignment linking segment to field user."""
    response = api_client.post(
        f"{base_url}/api/assignments",
        json={
            "segment_id": test_segment["id"],
            "user_id": test_field_user["id"],
            "priority": 1
        }
    )
    assignment = response.json()

    yield assignment

    # Cleanup handled by segment deletion

Test Data Management

# tests/e2e/utils.py

import uuid
from datetime import datetime

from playwright.sync_api import Page


def unique_name(prefix: str) -> str:
    """Generate unique name for test entities."""
    return f"{prefix}_{datetime.now().strftime('%H%M%S')}_{uuid.uuid4().hex[:4]}"


def wait_for_toast(page: Page, text: str, timeout: int = 5000):
    """Wait for toast notification with specific text."""
    toast = page.locator(f".toast:has-text('{text}'), [role='alert']:has-text('{text}')")
    toast.wait_for(state="visible", timeout=timeout)


def dismiss_toast(page: Page):
    """Dismiss any visible toast notifications."""
    toasts = page.locator(".toast-close, [role='alert'] button")
    if toasts.count() > 0:
        toasts.first.click()
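
As an illustration of how a workflow test step might combine these helpers (the import path, the Save locator, and the toast wording are assumptions):

from playwright.sync_api import Page

from utils import dismiss_toast, unique_name, wait_for_toast  # adjust to the test package layout


def save_named_entity(page: Page, prefix: str) -> str:
    """Fill a name field, save, and wait for the success toast (illustrative helper)."""
    name = unique_name(prefix)
    page.fill('input[name="name"]', name)
    page.get_by_role("button", name="Save").click()   # placeholder locator
    wait_for_toast(page, "saved")                      # assumed toast wording
    dismiss_toast(page)
    return name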

Test Organization

File Structure

tests/e2e/
├── conftest.py                      # Fixtures (enhanced)
├── utils.py                         # Helper functions
├── test_page_loads.py               # [existing] Surface tests
├── test_work_queue_workflow.py      # [existing] Element tests
├── test_work_queue_tags.py          # [existing] Component tests
├── test_segments_ui.py              # [existing] CRUD tests
├── test_surveys_ui.py               # [existing] CRUD tests
├── workflows/                       # NEW: Complex workflow tests
│   ├── __init__.py
│   ├── test_user_onboarding.py      # Scenario 1
│   ├── test_segment_creation.py     # Scenario 2
│   ├── test_survey_linking.py       # Scenario 3
│   ├── test_queue_assignment.py     # Scenario 4
│   ├── test_queue_completion.py     # Scenario 5
│   ├── test_queue_tagging.py        # Scenario 6
│   ├── test_survey_completion.py    # Scenario 7
│   └── test_full_campaign.py        # Scenario 8 (integration)
└── screenshots/                     # Test screenshots on failure

Pytest Markers

# pytest.ini additions
markers =
    workflow: Complex multi-step workflow tests
    slow: Tests that take > 30 seconds
    integration: Full system integration tests
    cleanup: Tests that modify data (need cleanup)
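
Workflow tests then opt in with the standard decorator syntax, which also lets CI slice the suite with -m expressions; a sketch:

import pytest


@pytest.mark.workflow
@pytest.mark.slow
@pytest.mark.cleanup
def test_example_workflow(field_authenticated_page, base_url):
    ...  # select with: pytest tests/e2e/ -m "workflow and not slow"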

Running Tests

Quick Validation (CI)

# Run surface-level tests only (~1 minute)
pytest tests/e2e/test_page_loads.py -v --base-url=https://testsite.nominate.ai

Component Tests

# Run UI component tests (~2-3 minutes)
pytest tests/e2e/ -v --ignore=tests/e2e/workflows/ --base-url=https://testsite.nominate.ai

Full Workflow Tests

# Run all workflow tests (~10-15 minutes)
pytest tests/e2e/workflows/ -v --base-url=https://testsite.nominate.ai

Full Regression Suite

# Run everything (~15-20 minutes)
pytest tests/e2e/ -v --base-url=https://testsite.nominate.ai

# With HTML report
pytest tests/e2e/ -v --html=report.html --base-url=https://testsite.nominate.ai

Headed Mode (Debugging)

# Watch tests run in browser
pytest tests/e2e/workflows/test_queue_completion.py -v --headed --base-url=https://testsite.nominate.ai

Implementation Priority

Priority | Scenario               | Complexity | Value
1        | Queue Completion (5)   | Medium     | High - Core functionality
2        | User Onboarding (1)    | Low        | High - Auth flows
3        | Queue Assignment (4)   | Medium     | High - Data flow
4        | Tag Management (6)     | Low        | Medium - Common action
5        | Segment Creation (2)   | Medium     | Medium - Admin workflow
6        | Survey Linking (3)     | High       | Medium - Integration
7        | Survey Completion (7)  | High       | Medium - YASP dependency
8        | Full Campaign (8)      | Very High  | High - Regression

Dependencies and Prerequisites

Test Environment

  • testsite.nominate.ai must be accessible
  • Admin user (admin/admin123) must exist
  • YASP integration configured (for survey tests)
  • At least one segment with contacts (for assignment tests)

Python Dependencies

playwright>=1.40.0
pytest>=7.0.0
pytest-playwright>=0.4.0
pytest-html>=4.0.0  # For reports
requests>=2.28.0    # For API setup

Browser Setup

# Install Playwright browsers
playwright install chromium

Success Criteria

  1. All workflow tests pass on clean testsite
  2. No data leakage - Tests clean up after themselves
  3. Idempotent - Tests can be run multiple times
  4. Isolated - Tests don't depend on each other
  5. Documented - Each test has clear docstrings
  6. Fast feedback - Quick tests run first in CI

Integration with CI/CD

# .github/workflows/e2e.yml (example)
name: E2E Tests

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  e2e:
    runs-on: ubuntu-latest
    env:
      # Shared by the install and test steps so browsers are installed and found in the same place
      PLAYWRIGHT_BROWSERS_PATH: 0
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.12'

      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          playwright install chromium

      - name: Run E2E tests
        run: |
          pytest tests/e2e/ -v --base-url=https://testsite.nominate.ai

Notes

  • Tests use testsite.nominate.ai (live environment)
  • Consider adding test data reset script for clean slate
  • May need to coordinate with infra for test windows
  • Screenshots on failure help debugging (see the conftest hook sketch below)
  • Consider video recording for complex failures
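
For the screenshots-on-failure item: pytest-playwright ships a --screenshot=only-on-failure option; if finer control is wanted (naming files after the failing test and writing them into tests/e2e/screenshots/), a small conftest hook along these lines could be added. A sketch, assuming the fixtures defined earlier:

# tests/e2e/conftest.py - save a full-page screenshot when a test using a Page fails
from pathlib import Path

import pytest


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        # Pick up whichever fixture handed the test a Playwright page
        page = item.funcargs.get("page") or item.funcargs.get("field_authenticated_page")
        if page is not None:
            out_dir = Path("tests/e2e/screenshots")
            out_dir.mkdir(parents=True, exist_ok=True)
            try:
                page.screenshot(path=str(out_dir / f"{item.name}.png"), full_page=True)
            except Exception:
                pass  # the page or browser may already be closed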