consolidate playwright tests
Continuous Integration / backend-tests (push) Successful in 25s
Continuous Integration / frontend-check (push) Successful in 15s
Continuous Integration / e2e-tests (push) Successful in 13m15s

2026-05-04 18:42:01 -04:00
parent b92d0e7e63
commit adb036a2f4
6 changed files with 214 additions and 273 deletions
+206
View File
@@ -0,0 +1,206 @@
# Agent Development Guide
This file contains architectural context and conventions for AI agents (and humans) working on the TapeHoard codebase.
## Project Structure
```
tapehoard/
├── backend/ # FastAPI + SQLAlchemy + SQLite
│ ├── app/
│ │ ├── api/ # API routers
│ │ │ ├── common.py # Shared helpers & schemas
│ │ │ ├── system/ # System endpoints (13 modules)
│ │ │ ├── archive.py # Archive file index endpoints
│ │ │ ├── backups.py # Backup job endpoints
│ │ │ ├── inventory.py # Media fleet endpoints
│ │ │ ├── restores.py # Restore queue endpoints
│ │ │ └── schemas.py # Shared Pydantic schemas
│ │ ├── db/
│ │ ├── services/ # Business logic (scanner, archiver, scheduler)
│ │ └── main.py # FastAPI app factory + router registration
│ └── tests/ # Pytest suite (77 tests)
├── frontend/ # SvelteKit + TypeScript
│ ├── src/lib/api/ # Auto-generated OpenAPI SDK
│ └── tests/ # Playwright E2E suite (34 tests)
└── docs/ # Additional documentation
```
## Backend Architecture
### API Router Organization
All API routes live under `app/api/`. The `system` endpoints are split into a package (`app/api/system/`) with focused submodules:
| Module | Endpoints |
|--------|-----------|
| `system/jobs.py` | `/system/jobs/*`, `/system/jobs/{id}/cancel`, `/system/jobs/{id}/retry`, `/system/jobs/stream` |
| `system/scan.py` | `/system/scan`, `/system/index/hash`, `/system/scan/status` |
| `system/filesystem.py` | `/system/browse`, `/system/search` |
| `system/tree.py` | `/system/tree` |
| `system/dashboard.py` | `/system/dashboard/stats` |
| `system/settings.py` | `/system/settings` |
| `system/hardware.py` | `/system/hardware/discover`, `/system/hardware/ignore` |
| `system/discrepancies.py` | `/system/discrepancies/*`, batch ops, tree, browse |
| `system/database.py` | `/system/database/export`, `/system/database/import` |
| `system/tracking.py` | `/system/track/batch` |
| `system/notifications.py` | `/system/notifications/test` |
| `system/host.py` | `/system/ls` |
| `system/test.py` | `/system/test/reset` |
Each module defines its own `APIRouter` with `tags=["System"]` and is registered in `main.py` with `prefix="/system"`.
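A registration sketch (module names come from the table above; the exact wiring in `main.py` may differ):
```python
# app/main.py (sketch) -- the /system prefix is applied once at registration
from fastapi import FastAPI

from app.api.system import dashboard, jobs, scan  # ...plus the other submodules

app = FastAPI()
for module in (dashboard, jobs, scan):
    app.include_router(module.router, prefix="/system")
```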
### Shared Helpers (`app/api/common.py`)
Cross-cutting helpers and schemas that must not create circular imports:
- `get_source_roots(db_session)` → `List[str]`
- `get_exclusion_spec(db_session)` → `Optional[pathspec.PathSpec]`
- `get_ignored_status(path, tracking_map, exclusion_spec)` → `bool`
- `_validate_path_within_roots(path, roots)` → `bool`
- `_active_job_exists(db_session, job_type)` → `bool`
- `_get_last_scan_time(db_session)` → `Optional[datetime]`
- Shared Pydantic schemas: `DashboardStatsSchema`, `JobSchema`, `JobLogSchema`, `FileItemSchema`, `BrowseResponseSchema`, `ScanStatusSchema`, `SettingSchema`, `TestNotificationRequest`, `IgnoreHardwareRequest`, `BatchTrackRequest`
**Rule:** `common.py` must NEVER import from any API module (no `app.api.system`, `app.api.archive`, etc.). Only models, database, and standard libraries.
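A hedged illustration of what the top of `common.py` may import (exact module paths are assumptions):
```python
# app/api/common.py (sketch) -- models, database, and stdlib only
from datetime import datetime
from typing import List, Optional

import pathspec
from pydantic import BaseModel

from app.db import models  # assumed path; DB/model imports are allowed here
# NEVER: from app.api.system import ...  (circular import)
```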
### Endpoint Naming Convention
All FastAPI route handlers must declare explicit `operation_id` to control the generated TypeScript SDK names.
| Pattern | Example Handler | `operation_id` | Generated TS |
|---------|-----------------|----------------|--------------|
| GET list | `list_jobs` | `list_jobs` | `listJobs` |
| GET one | `get_job` | `get_job` | `getJob` |
| POST create | `create_media` | `create_media` | `createMedia` |
| POST action | `trigger_scan` | `trigger_scan` | `triggerScan` |
| PATCH update | `update_media` | `update_media` | `updateMedia` |
| DELETE | `delete_media` | `delete_media` | `deleteMedia` |
| Batch actions | `batch_track` | `batch_track` | `batchTrack` |
**Never** let FastAPI auto-generate `operationId`. The old auto-generated names looked like `getDashboardStatsSystemDashboardStatsGet` — verbose and brittle.
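A minimal handler following the convention (a sketch; the schema is one of the shared ones listed above):
```python
# app/api/system/dashboard.py (sketch)
from fastapi import APIRouter

from app.api.common import DashboardStatsSchema

router = APIRouter(tags=["System"])

@router.get(
    "/dashboard/stats",
    operation_id="get_dashboard_stats",  # -> getDashboardStats in the TS SDK
    response_model=DashboardStatsSchema,
)
def get_dashboard_stats() -> DashboardStatsSchema:
    ...
```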
### Router Prefix Rules
- Top-level domain routers (`archive`, `backups`, `inventory`, `restores`) define their own prefix in the router constructor (e.g., `APIRouter(prefix="/archive")`).
- `system` submodules use **no prefix** in the router constructor; `main.py` applies `prefix="/system"` when calling `app.include_router()`.
## Frontend Architecture
### TypeScript SDK (`frontend/src/lib/api/`)
Generated from the backend OpenAPI spec using `@hey-api/openapi-ts`:
```bash
cd backend && uv run python -c "import json; from app.main import app; json.dump(app.openapi(), open('openapi.json', 'w'))"
cd ../frontend && npx @hey-api/openapi-ts -i ../backend/openapi.json -o src/lib/api
```
The generated SDK exports clean camelCase functions (e.g., `getDashboardStats`, `listJobs`, `triggerScan`).
**Rule:** After renaming any backend handler or changing an `operation_id`, regenerate the SDK and update all frontend imports. The old verbose names will cause TypeScript errors.
### Frontend Imports to Avoid Shadowing
Some Svelte components define local functions with the same name as SDK imports (e.g., `cancelJob`, `retryJob` in `jobs/+page.svelte`). When this happens, alias the SDK import:
```typescript
import { cancelJob as cancelJobApi, retryJob as retryJobApi } from '$lib/api';
```
## Testing
### Backend Tests
```bash
cd backend && uv run pytest tests/ -v
```
- 77 tests covering API endpoints, providers, services
- Uses pytest-mock for mocking filesystem/hardware
- **Important:** Mocks that patch `get_source_roots` or `get_exclusion_spec` must target `app.api.common` (not `app.api.system`), since those helpers moved to `common.py`. A patching sketch follows.
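A sketch using pytest-mock's `mocker` fixture (return values and the test body are illustrative):
```python
def test_browse_respects_configured_roots(mocker):
    # Helpers now live in app.api.common, so patches must target that module
    mocker.patch("app.api.common.get_source_roots", return_value=["/data/source"])
    mocker.patch("app.api.common.get_exclusion_spec", return_value=None)
    # ...exercise the endpoint with FastAPI's TestClient and assert on the response
```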
### Frontend E2E Tests
```bash
cd frontend && npx playwright test
```
- 34 Playwright tests using Chromium
- Backend test server auto-starts via `playwright.config.ts` webServer config
- Tests use `requestContext` for direct API calls + `page` for UI interactions (minimal shape sketched below)
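A minimal test shape (a sketch: the helper imports mirror `tests/helpers.ts`, and the UI assertion is illustrative):
```typescript
import { test, expect } from '@playwright/test';
import { API_URL, setupRequestContext, configureBackend } from './helpers';

test('jobs page shows a triggered scan', async ({ page }) => {
  // Direct API calls through a request context...
  const requestContext = await setupRequestContext();
  await configureBackend(requestContext);
  const scanResp = await requestContext.post(`${API_URL}/system/scan`);
  expect(scanResp.ok()).toBe(true);

  // ...then UI interactions through the page
  await page.goto('/jobs');
  await expect(page.getByText('SCAN')).toBeVisible();

  await requestContext.dispose();
});
```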
### macOS IPv6 Gotcha
On macOS, `localhost` can resolve to `::1` (IPv6) first, while uvicorn may bind to IPv4 only. The result is `ECONNREFUSED ::1:8001` in Playwright tests.
**Fix:** Always use `127.0.0.1` instead of `localhost` for backend URLs (config sketch below):
- `frontend/tests/helpers.ts`: `API_URL = 'http://127.0.0.1:8001'`
- `frontend/playwright.config.ts`: `webServer.url = 'http://127.0.0.1:8001'`
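A `webServer` sketch reflecting this (the command and readiness details are assumptions; only the `127.0.0.1:8001` URL is prescribed):
```typescript
// playwright.config.ts (sketch) -- note 127.0.0.1, never localhost
import { defineConfig } from '@playwright/test';

export default defineConfig({
  webServer: {
    command: 'just e2e-server',      // starts the mock backend on port 8001
    url: 'http://127.0.0.1:8001',    // readiness check against IPv4 explicitly
    reuseExistingServer: true,
  },
});
```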
## Common Tasks
### Adding a New System Endpoint
1. Choose the appropriate `app/api/system/<module>.py` file (or create a new one if it doesn't fit existing categories).
2. Add the route handler with an explicit `operation_id` (see the sketch after this list).
3. Import shared helpers from `app.api.common` if needed.
4. Register the new router in `app/main.py` with `prefix="/system"`.
5. Regenerate the TypeScript SDK.
6. Update frontend imports if using the new endpoint.
7. Add backend tests in `backend/tests/test_api_system.py` (or a new test file if it's a new domain).
8. Run `just lint` before finishing.
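A condensed sketch of steps 1–4 using a hypothetical `/system/ping` endpoint (all names here are illustrative):
```python
# app/api/system/ping.py (hypothetical module)
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter(tags=["System"])  # no prefix; main.py adds /system

class PingSchema(BaseModel):
    ok: bool

@router.get("/ping", operation_id="ping_system", response_model=PingSchema)
def ping_system() -> PingSchema:
    return PingSchema(ok=True)

# app/main.py
# from app.api.system import ping
# app.include_router(ping.router, prefix="/system")
```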
### Regenerating the OpenAPI Spec
```bash
cd backend && uv run python -c "import json; from app.main import app; json.dump(app.openapi(), open('openapi.json', 'w'), indent=2)"
```
### Verifying No Auto-Generated operationIds
```bash
cd backend && uv run python -c "
from app.main import app
import re
for path, methods in app.openapi()['paths'].items():
for method, info in methods.items():
op_id = info.get('operationId', '')
if re.search(r'_(get|post|put|patch|delete)$', op_id):
print(f'DIRTY: {method.upper()} {path} = {op_id}')
print('Check complete')
"
```
## Lint & Format
```bash
just lint # Runs ruff (Python) + svelte-check (TypeScript/Svelte)
```
Pre-commit hooks are configured but may stash unstaged changes.
## Environment
- **Backend:** Python 3.13, FastAPI, SQLAlchemy 2.x, SQLite, uv for package management
- **Frontend:** SvelteKit, TypeScript, Tailwind CSS, shadcn-svelte components
- **Test server:** `TAPEHOARD_TEST_MODE=true` enables `/system/test/reset` and mock providers
## Documentation Files
| File | Contents |
|------|----------|
| `README.md` | Human-facing project overview |
| `DOCS.md` | Feature documentation |
| `E2E.md` | End-to-end testing notes |
| `ENDPOINT_REFACTOR.md` | Batch plan for endpoint renaming (completed) |
| `ISSUES.md` | Known issues and backlog |
| `MEDIA_MANAGEMENT.md` | Media lifecycle documentation |
| `NOTES.md` | Development notes |
| `OPTIMIZATIONS.md` | Performance optimization notes |
| `PLAN.md` | Project roadmap |
| `UX.md` | UX conventions |
| `GEMINI.md` | Gemini-specific context |
| `REVIEW_2.md` | Code review notes |
| `SOURCEMAP.md` | Frontend source map |
| `AGENTS.md` | This file — agent development guide |
-127
View File
@@ -1,127 +0,0 @@
# TapeHoard - Developer & AI Assistant Guide
This document (`GEMINI.md`) contains critical, contextual information about the TapeHoard project. **It takes absolute precedence over generic workflows.** Always refer to the architecture constraints in `PLAN.md` before implementing new features.
## 1. Tooling & Ecosystem
### Backend (Python)
* **Package Manager:** `uv`. Never use `pip` directly. Use `uv add <pkg>` and `uv sync` to manage dependencies.
* **Framework:** FastAPI.
* **Database:** SQLite via SQLAlchemy ORM. Migrations are strictly managed by `alembic`.
* *To generate migrations:* `cd backend && uv run alembic revision --autogenerate -m "message"`
* *To apply migrations:* `cd backend && uv run alembic upgrade head`
* **Logging:** `loguru`. Do not use standard `logging` or print statements.
* **Type Safety:** `ty`. All Python code must be fully type-hinted and pass `uv run ty` without errors.
* **Configuration:** `pydantic-settings`. Define environment variables and constants in a settings schema.
### Frontend (Svelte 5 / SvelteKit)
* **Framework:** Svelte 5 Runes (using `$props()`, `$state()`, etc.).
* **Styling:** Tailwind CSS. All new components must use Tailwind utility classes.
* **Component Library:** Custom library based on **shadcn-svelte** and **bits-ui**. Use existing components in `src/lib/components/ui/` or add new ones following the shadcn pattern.
* **Package Manager:** `npm`.
* **API Client Generation:** `@hey-api/openapi-ts`. Never manually fetch or type API responses. Ensure the backend is running, then run `just generate-client` to auto-generate the strictly typed TypeScript client from the FastAPI OpenAPI spec.
* **Icons:** `lucide-svelte`.
* **Notifications:** `svelte-sonner`.
### Global Task Runner
* **`just`:** Use the `justfile` in the root directory for executing common tasks.
* `just dev`: Starts both backend and frontend servers.
* `just lint`: Runs Ruff, ty, and Svelte Check.
* `just format`: Auto-formats code with Ruff.
## 2. Code Quality & Pre-commit
* **PEP 8 Compliance:** All Python code must strictly adhere to PEP 8 standards. Use explicit, idiomatic language features.
* **Descriptive Naming:** Always use very descriptive variable and function names. Avoid abbreviations (e.g., use `file_state` instead of `fs`) to maintain high readability.
* **Pre-commit:** All code must pass `pre-commit` hooks (ruff, ruff-format, etc.).
* **Validation:** Fulfill the user's request thoroughly, including adding tests when adding features or fixing bugs. You must empirically reproduce failures with new test cases before applying fixes.
## 3. Core Architectural Rules
### Storage Providers & Media Lifecycle
* **Plugin Architecture:** All storage destinations are treated as plugins implementing `AbstractStorageProvider` (surface sketched after this list). Avoid hardcoding hardware logic (`tape`, `hdd`) in the API or UI.
* **Dynamic UI:** The frontend dynamically renders registration and edit forms based on a provider's `config_schema` (fetched from `GET /inventory/providers`).
* **Standardized Telemetry:** Providers must implement `get_live_info(force: bool)` to return unified telemetry (e.g., drive status, capacity).
* **Sanitization:** Initializing media performs a full purge of existing TapeHoard data if the `force` flag is set.
* **Hardware Failure:** Marking media as "Failed" triggers an automatic atomic purge of all associated `file_versions` to surface those files as "Pending" on the dashboard.
* **Tape Registration is Discovery-Only:** Tape media (`lto_tape`, `mock_lto`) cannot be registered through the manual "Register media" dialog. Tapes are only registered via the hardware discovery section (`/inventory` → "Discovered unregistered drives") where the system auto-captures `device_path`, barcode, and serial number from the connected drive's MAM. The `device_path` is excluded from `LTOProvider.config_schema` because it is a per-drive setting configured globally in `tape_drives`, not a per-media attribute. The archiver resolves the drive at runtime when instantiating the provider.
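A sketch of the provider surface named in this section (the real ABC carries more methods and richer types):
```python
from abc import ABC, abstractmethod
from typing import Any, Dict

class AbstractStorageProvider(ABC):
    # Drives the dynamically rendered registration/edit forms
    config_schema: Dict[str, Any] = {}

    @abstractmethod
    def get_live_info(self, force: bool) -> Dict[str, Any]:
        """Unified telemetry: drive status, capacity, etc."""

    @abstractmethod
    def get_utilization(self) -> Dict[str, Any]:
        """Hardware-reported utilization (e.g., tape MAM data)."""
```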
### Database & Performance
* **High Concurrency:** SQLite must always run in **WAL (Write-Ahead Logging)** mode with a 30s busy timeout and a larger page cache (see the sketch after this list).
* **Archival Intent:** `is_ignored` in `filesystem_state` is the single source of truth. The scanner indexes all files but lazily marks excluded ones as `is_ignored = 1`. Explicit user tracking policies override global exclusions.
* **Aggregate Intelligence:** Use Raw SQL Aggregates for dashboard stats and directory protection status to avoid N+1 query patterns.
* **FTS5 Search:** Full-text search is managed via triggers. Ensure searches filter for `has_version = 1` when browsing the Archive Index, regardless of current `is_ignored` state on disk.
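A hedged sketch of enforcing those pragmas via a SQLAlchemy connect hook (engine URL and cache size are illustrative):
```python
from sqlalchemy import create_engine, event

engine = create_engine("sqlite:///tapehoard.db")

@event.listens_for(engine, "connect")
def set_sqlite_pragmas(dbapi_connection, _connection_record):
    cursor = dbapi_connection.cursor()
    cursor.execute("PRAGMA journal_mode=WAL")
    cursor.execute("PRAGMA busy_timeout=30000")  # 30s, per the rule above
    cursor.execute("PRAGMA cache_size=-64000")   # ~64 MB page cache (illustrative)
    cursor.close()
```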
### Scanning & Hashing Architecture
* **Concurrent Phasing:** Decoupled into `SCAN` (Metadata, Normal priority) and `HASH` (Content, Idle priority with dynamic `iowait` throttling).
* **Thread-Safe Metrics:** All counters (files processed, bytes hashed) must be protected by a `threading.Lock`.
* **Hashing Progress:** Hashing jobs calculate progress against a dynamically updating snapshot of total `sha256_hash IS NULL AND is_ignored = 0` files.
* **Streaming Subprocess I/O:** Both `_discover_files_fast` (`find -printf`) and `_hash_file_batch_fast` (`sha256sum`/`shasum`) use `subprocess.Popen` with line-by-line `readline` streaming — never `subprocess.run(capture_output=True)`. This enables incremental progress updates as each file is discovered or hashed (simplified sketch after this list).
* **Streaming Callback Pattern:** The hashing sub-batch workers accept an `on_result(file_path, hex_digest)` callback (created via `_make_hash_callback`) that assigns hashes to DB records and reports job progress with throughput every 5 files, providing responsive UI updates during large batches.
* **Partial Batch Results:** `sha256sum`/`shasum` may return non-zero exit codes when some files in a batch are missing. Output is always parsed regardless of returncode to capture partial results.
* **Missing File Guard:** Files that cannot be hashed (deleted or inaccessible) are detected via `os.path.exists` fallback and marked `is_deleted = True` to prevent infinite re-query loops in the hashing worker.
* **Provider Temp Dir Lifecycle:** `MockLTOProvider` auto-creates temp dirs when no `device_path` is configured. These are tracked in a module-level `_auto_temp_dirs` set and cleaned up via `atexit` on server shutdown. The `device_path` is persisted to `StorageMedia.extra_config` on `/initialize` so background threads can locate the correct directory.
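A minimal illustration of the streaming discovery pattern (simplified; the real workers batch files and report progress through the job system):
```python
import subprocess
from typing import Iterator

def discover_files_streaming(root: str) -> Iterator[str]:
    # Stream `find -printf` output line by line -- never capture_output=True
    proc = subprocess.Popen(
        ["find", root, "-type", "f", "-printf", "%p\n"],
        stdout=subprocess.PIPE,
        text=True,
    )
    assert proc.stdout is not None
    for line in iter(proc.stdout.readline, ""):
        yield line.rstrip("\n")  # incremental: progress can update per file
    proc.wait()
```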
### Archival & Recovery
* **Format Negotiation:** The Archiver adapts formats based on provider capabilities (`supports_random_access`).
* *Sequential (Tape):* Uses `.tar` streams to maintain drive streaming.
* *Random Access (HDD/Cloud):* Uses native direct file copying/objects to enable instant seekless restores without unpacking gigabytes of data.
* **High-Speed Hybrid Archival:**
* The system prioritizes the **system `tar` binary** for whole-file chunks, delivering a 10x-20x performance boost over pure Python and ensuring optimal buffer saturation for LTO drives.
* It transparently falls back to the **Python `RangeFile` logic** only for chunks containing split fragments, maintaining bit-perfect alignment for multi-tape files.
* **Industrial Tar Chunking:**
* Large backup sets are automatically split into multiple independent archives. The system dynamically aims for at least **100 archives per tape** (calculated based on generational capacity, e.g., ~15GB for LTO-5) to provide high seek granularity during restoration.
* **Exception:** Single large files are allowed to occupy their own archives even if they exceed the target chunk size, preventing unnecessary fragmentation while keeping them as independent, seekable objects.
* **Refined Splitting Philosophy:**
* Files are **only split** if they are physically larger than the media's entire capacity (multi-tape spanning).
* **Skip-and-Defer:** If a file is larger than the remaining space on a tape but smaller than its total capacity, it is deferred to the next fresh medium to minimize fragmentation.
* **Hardware-First Utilization:** The system trusts **Physical Hardware Feedback (MAM)** over logical byte counts. Tapes are only marked as "Full" when the drive's own reporting (via `get_utilization`) confirms saturation, maximizing utilization when hardware compression is active.
* **Bitstream Integrity:** `RangeFile` must guarantee exact byte counts for tar alignment.
* **Metadata Fidelity:** The restorer must preserve original **permissions (chmod)**, **timestamps (utime)**, and **ownership (chown)** when recovering files natively or via tar.
* **Independence:** Force all tar archive members to be **Regular Files** to break fragile hard-link dependencies. Symlinks are preserved as `SYMTYPE` (or `.symlink` stub objects for native format).
### Deployment & Testing
* **Temporal Standard:** Backend uses **UTC**. Frontend uses `parseUTCDate` to convert to browser **Local Time**.
* **Unsaved Changes Guard:** UI must use `beforeNavigate` and `beforeunload` listeners to warn users if they leave the Settings or Media registration forms with uncommitted changes.
* **Backend Testing:** Use **Alembic-driven file-based SQLite** for tests to ensure 100% schema fidelity (including FTS5 and triggers) and reliable cross-thread data visibility. Atomic truncation must occur between tests. Run `just pytest` to execute backend tests.
* **End-to-End (E2E) Testing:** Playwright is used for E2E testing (`frontend/tests/`).
* **Mock Hardware:** To simulate LTO drives in CI, the backend supports a `TAPEHOARD_TEST_MODE=true` flag. This registers a `MockLTOProvider` that uses local directories instead of physical SCSI devices.
* **Running E2E:** Use `just e2e-server` to start the mock backend (on port 8001), and then `just playwright` to execute the Playwright test suite against it.
### UI & UX Philosophy
* **Direct Terminology:** Use technical terms like "Backup Manager", "System Status", "Archive Index". Avoid marketing fluff.
* **Layout:** Natural page scrolling only. No sticky headers.
* **Navigation:** The FileBrowser must maintain internal back/forward history separate from browser page navigation.
* **Refined Industrial Design Paradigm:**
* **Scale:** Standard root font size is **16px**.
* **Typography:** Transition from aggressive all-caps and heavy weights to **Sentence case** and **font-medium** for general UI text. Reserve `font-bold` for primary headers and high-impact dashboard metrics.
* **Modular Components:** Use standardized layout components to maintain visual consistency:
* `PageHeader`: Centralized logic for page titles, descriptions, and action buttons.
* `SectionHeader`: Standardized "Industrial" divider (Icon + Title + Gradient Line).
* `StatCard`: Modular metric tiles with consistent scaling and alignment for big numbers.
* `ProgressBar`: Unified utilization and task indicators with industrial glow effects.
* `StatusBadge`: Centralized state indicators (Success, Error, Warning, Neutral, Blue) with consistent padding.
* `Dialog`: Standardized modal/dialog system with backdrop blurring and consistent ARIA roles.
* `EmptyState`: Unified visual pattern for empty views with consistent icons and typography.
* `IconButton`: Standardized boilerplate for icon-only buttons with fixed SVG scaling and consistent sizes.
* `Card`: Unified **p-5** padding, **rounded-xl** borders, and **shadow-xl** for all content containers.
* `Button`: Standardized high-density **h-9 px-4** sizing (or **h-11** for primary CTAs) with `font-medium` sentence-case labels.
* **High Density:** Maintain maximum information density without sacrificing legibility by utilizing high-density typography classes (`text-4xs` to `text-6xs`) for metadata and technical labels.
* **Color Strategy:** Use low-opacity backgrounds (e.g., `bg-blue-500/10`) and subtle borders (`border-blue-500/20`) for interactive elements and badges to preserve the "professional terminal" aesthetic.
### API & Type Safety
* **Explicit Response Models:** All FastAPI endpoints MUST explicitly declare a `response_model`. This is critical for generating accurate OpenAPI specs and strictly typed TypeScript SDKs for the frontend.
* **Centralized Schemas:** Define shared Pydantic models in `app.api.schemas` to avoid circular dependencies when importing across different routers.
### Hardware Polling & Stability
* **Non-Intrusive Polling:** Hardware status checks must prioritize non-intrusive methods (e.g., reading MAM via `sg_read_attr`). Intrusive operations (`mt rewind`) are strictly fallbacks. Always verify device path existence (`os.path.exists`) before issuing SCSI/CLI commands to prevent log spam on disconnected drives.
* **Last Known Good (LKG) Caching:** Implement LKG caching in both backend hardware providers and frontend UI state. If a status poll fails or returns empty because a device is temporarily busy with an archival job, preserve and return the LKG state to prevent UI flickering (sketch after this list).
* **Forced Refreshes:** Hardware polling defaults to throttled (e.g., 2 seconds) intervals. Use `force=True` on provider calls and `?refresh=true` on API endpoints to bypass throttling when the user explicitly requests a live update or upon initial page loads.
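A hedged sketch of the LKG pattern on the provider side (class and field names are illustrative):
```python
import time

class LkgStatusCache:
    """Throttled hardware poll that falls back to last-known-good state."""

    def __init__(self, poll_fn, throttle_seconds: float = 2.0):
        self._poll_fn = poll_fn
        self._throttle = throttle_seconds
        self._last_good = None
        self._last_poll = 0.0

    def get(self, force: bool = False):
        if not force and time.monotonic() - self._last_poll < self._throttle:
            return self._last_good              # throttled: serve cached state
        self._last_poll = time.monotonic()
        try:
            status = self._poll_fn()
            if status:                          # empty: device busy mid-job
                self._last_good = status
        except OSError:
            pass                                # poll failed: preserve LKG
        return self._last_good
```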
### Frontend Reactivity
* **Svelte 5 State:** When mutating complex data structures like `Map` or `Set` in Svelte 5 `$state`, always explicitly reassign the variable (e.g., `myMap = new Map(myMap)`) after mutation to trigger the reactivity engine (sketch below).
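A reassignment sketch (variable names are illustrative):
```typescript
let selected = $state(new Set<string>());

function toggleSelection(id: string) {
  if (selected.has(id)) selected.delete(id);
  else selected.add(id);
  selected = new Set(selected); // reassign so Svelte 5 registers the mutation
}
```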
## 4. Pending Feature Implementations
* **Media Pools & Sets:** Transition from targeting individual media to targeting logical `MediaPool` entities. Archiver logic should resolve a pool to its active appendable member. Requires a new DB model and UI management.
* **Location & Custody Tracking:** Implement a formalized check-in/out ledger (`MediaCustodyLog`) for physical offline media.
* **Barcode & Label Generation:** Add a feature using `reportlab` or `weasyprint` to generate printable Avery-format PDF sheets containing Code 39 barcodes for tapes and QR codes for HDDs.
* **Lifecycle Policies:** Implement background tasks in `scheduler.py` to flag expired data for pruning based on user-defined retention rules. Add physical wear alerts to the dashboard based on tape `load_count` and `lifetime_mib_written`.
-48
View File
@@ -12,54 +12,6 @@ test.describe('Backup & Restore', () => {
fs.writeFileSync(path.join(SOURCE_ROOT, 'subdir', 'nested.txt'), 'nested content');
});
test('auto-backup to registered media completes', async ({}) => {
const requestContext = await setupRequestContext();
await configureBackend(requestContext);
const registerResp = await requestContext.post(`${API_URL}/inventory/media`, {
data: {
identifier: 'BACKUP_TAPE_001',
media_type: 'mock_lto',
generation_tier: 'LTO-8',
capacity: 12000,
location: 'Test Lab',
config: {}
}
});
expect(registerResp.ok()).toBe(true);
const media = await registerResp.json();
const initResp = await requestContext.post(`${API_URL}/inventory/media/${media.id}/initialize`);
expect(initResp.ok()).toBe(true);
const scanResp = await requestContext.post(`${API_URL}/system/scan`);
expect(scanResp.ok()).toBe(true);
await waitForScanComplete(requestContext);
const backupResp = await requestContext.post(`${API_URL}/backups/trigger/auto`);
expect(backupResp.ok()).toBe(true);
const backupResult = await backupResp.json();
expect(backupResult.job_id).toBeDefined();
await expect(async () => {
const jobsResp = await requestContext.get(`${API_URL}/system/jobs`);
const jobs = await jobsResp.json();
const backupJob = (jobs as Array<any>).find((j: any) => j.job_type === 'BACKUP');
expect(backupJob).toBeDefined();
expect(backupJob.status).toBe('COMPLETED');
}).toPass({ timeout: 30000 });
const metaResp = await requestContext.get(`${API_URL}/archive/metadata`, {
params: { path: path.join(SOURCE_ROOT, 'backup_test.txt') }
});
expect(metaResp.ok()).toBe(true);
const meta = await metaResp.json();
expect(meta.versions.length).toBeGreaterThan(0);
await requestContext.delete(`${API_URL}/inventory/media/${media.id}`);
await requestContext.dispose();
});
test('backup to specific media works', async ({}) => {
const requestContext = await setupRequestContext();
await configureBackend(requestContext);
-19
View File
@@ -79,25 +79,6 @@ test.describe('Discrepancies', () => {
// No automatic cleanup needed - tests use separate files
});
test('missing files are detected and can be confirmed', async ({}) => {
const requestContext = await request.newContext();
const fileId = fileIds['confirm_missing.txt'];
expect(fileId).toBeDefined();
console.log('Step 1: Confirm the file as deleted');
const confirmResp = await requestContext.post(`${API_URL}/system/discrepancies/${fileId}/confirm`);
expect(confirmResp.ok()).toBe(true);
console.log('Step 2: Verify item appears in discrepancies as deleted');
const discResp = await requestContext.get(`${API_URL}/system/discrepancies`);
const discrepancies = await discResp.json();
const found = (discrepancies as Array<any>).find((d: any) => d.path === path.join(SOURCE_ROOT, 'confirm_missing.txt'));
expect(found).toBeDefined();
expect(found.is_deleted).toBe(true);
await requestContext.dispose();
});
test('dismiss discrepancy', async ({}) => {
const requestContext = await request.newContext();
const fileId = fileIds['dismiss_test.txt'];
-79
View File
@@ -35,20 +35,6 @@ test.describe('Settings & System', () => {
await requestContext.dispose();
});
test('source roots setting configures browse endpoint', async ({ page }) => {
const requestContext = await setupRequestContext();
await configureBackend(requestContext);
const browseResp = await requestContext.get(`${API_URL}/system/browse?path=ROOT`);
expect(browseResp.ok()).toBe(true);
const browseData = await browseResp.json();
const roots = (browseData as any).files;
const sourceRoot = (roots as Array<any>).find((r: any) => r.path === SOURCE_ROOT);
expect(sourceRoot).toBeDefined();
await requestContext.dispose();
});
test('dashboard stats reflect system state', async ({ page }) => {
const requestContext = await setupRequestContext();
await configureBackend(requestContext);
@@ -81,71 +67,6 @@ test.describe('Settings & System', () => {
await requestContext.dispose();
});
test('hardware discovery returns empty when nothing configured', async ({ page }) => {
const requestContext = await setupRequestContext();
const discoverResp = await requestContext.get(`${API_URL}/system/hardware/discover`);
expect(discoverResp.ok()).toBe(true);
const devices = await discoverResp.json();
expect(Array.isArray(devices)).toBe(true);
await requestContext.dispose();
});
test('scan and indexing workflow', async ({ page }) => {
const requestContext = await setupRequestContext();
await configureBackend(requestContext);
// Trigger scan
const scanResp = await requestContext.post(`${API_URL}/system/scan`);
expect(scanResp.ok()).toBe(true);
// Wait for scan to complete
await new Promise(r => setTimeout(r, 3000));
const statusResp = await requestContext.get(`${API_URL}/system/scan/status`);
const status = await statusResp.json();
expect(status.is_running).toBe(false);
// Trigger indexing
const indexResp = await requestContext.post(`${API_URL}/system/index/hash`);
expect(indexResp.ok()).toBe(true);
await new Promise(r => setTimeout(r, 2000));
const afterIndex = await requestContext.get(`${API_URL}/system/scan/status`);
const indexStatus = await afterIndex.json();
// Hashing runs in background, just verify it started without error
expect(indexStatus.is_throttled).toBeDefined();
await requestContext.dispose();
});
test('search returns results after scan and hash', async ({ page }) => {
const requestContext = await setupRequestContext();
await configureBackend(requestContext);
// Create a searchable file
const fs = await import('fs');
const path = await import('path');
fs.writeFileSync(path.join(SOURCE_ROOT, 'searchable_file.txt'), 'searchable content here');
const scanResp = await requestContext.post(`${API_URL}/system/scan`);
expect(scanResp.ok()).toBe(true);
// Wait for scan and hashing
await new Promise(r => setTimeout(r, 5000));
// Search requires at least 3 chars and hashed files
const searchResp = await requestContext.get(`${API_URL}/system/search?q=searchable`);
expect(searchResp.ok()).toBe(true);
const results = await searchResp.json();
expect(Array.isArray(results)).toBe(true);
// Results may be empty if hashing hasn't completed; API is functional either way
await requestContext.dispose();
});
test('tree endpoint returns source roots', async ({ page }) => {
const requestContext = await setupRequestContext();
await configureBackend(requestContext);
+8
View File
@@ -38,6 +38,14 @@ test.describe('Source Management', () => {
expect(status.is_running).toBe(false);
expect(status.total_files_found).toBeGreaterThan(0);
console.log('Step 4: Verify browse endpoint reflects source roots');
const browseResp = await requestContext.get(`${API_URL}/system/browse?path=ROOT`);
expect(browseResp.ok()).toBe(true);
const browseData = await browseResp.json();
const roots = (browseData as any).files;
const sourceRoot = (roots as Array<any>).find((r: any) => r.path === SOURCE_ROOT);
expect(sourceRoot).toBeDefined();
await requestContext.dispose();
});