---
name: rw-reflect
worker-type: hook
sidecar-path: _reflections/
blocking: true
requires: []
capabilities: [no-web-access]
eval-signals: [coverage-improved]
trigger: on-case-close
---
# rw-reflect: Post-Case Reflection Worker
## Input
- Completed case: Q-NNN (answered or deferred) or D-NNN (proposed)
- All artifacts produced during the case lifecycle (discovery logs, corpus files, notes, comparisons, findings, critiques)
## Output
`{domain}/_reflections/YYYY-MM-DD-{slug}.md`
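A minimal sketch of the path construction, assuming Python's standard library; the `reflection_path` helper name and the example values are illustrative, not part of the worker contract:

```python
from datetime import date
from pathlib import Path

def reflection_path(domain: str, slug: str) -> Path:
    """Build the sidecar path per the {domain}/_reflections/YYYY-MM-DD-{slug}.md convention."""
    # slug is assumed to be pre-sanitised: lowercase, hyphen-separated
    return Path(domain) / "_reflections" / f"{date.today():%Y-%m-%d}-{slug}.md"

# e.g. reflection_path("visa-sponsorship", "q-012-sponsor-data")
# -> visa-sponsorship/_reflections/2025-06-01-q-012-sponsor-data.md
```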
## Responsibilities
- Compare planned vs actual: Count artifacts planned (from Q-NNN sub-questions and evidence needed) vs artifacts actually produced. Report specific numbers: sources discovered, sources captured, notes written, comparisons written, findings produced, open questions promoted.
- Identify surprises: Document plan-vs-actual deltas. What took longer than expected? What was easier? What failed? What unexpected results appeared? Reference specific file paths — no vague assessments like "went well" or "was challenging."
- Extract lessons: Concrete, actionable lessons. Each lesson must state: what happened, why it matters, and what to do differently next time. Categorise as: Process, Planning, Accuracy, Infrastructure, Quality, or Design.
- Promote unanswered threads: Review `synthesis/open-questions/` for threads that emerged during this case. List each as an OQ-NNN candidate with priority and suggested next steps. These feed back into Phase 0 (Question Framing) for future cases.
- Quantitative metrics table: Include a table with columns: Metric, Planned, Actual. Cover at minimum: sources discovered, sources captured, notes written, comparisons written, findings produced, open questions promoted, self-correction iterations triggered, challenge catch rate (if rw-challenge ran). A rendering sketch follows this list.
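As a sketch only, here is one way the metrics table could be rendered as markdown. The metric names mirror this spec; the `render_metrics_table` helper and the sample counts are hypothetical:

```python
def render_metrics_table(planned: dict[str, int], actual: dict[str, int]) -> str:
    """Render the Metric / Planned / Actual table in markdown."""
    rows = ["| Metric | Planned | Actual |", "| --- | --- | --- |"]
    for metric, plan in planned.items():
        rows.append(f"| {metric} | {plan} | {actual.get(metric, 0)} |")
    return "\n".join(rows)

# Hypothetical counts for illustration:
print(render_metrics_table(
    {"sources discovered": 12, "notes written": 8},
    {"sources discovered": 9, "notes written": 8},
))
```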
## Constraints
- Read-only on all research artifacts. This worker reads and summarises — it never modifies research outputs.
- Reference specific file paths and counts. Every claim in the reflection must point to a concrete artifact.
- No vague assessments. "Quality was good" is not acceptable. "3/4 findings have high confidence, 1/4 medium (F-007)" is acceptable.
## Job-Hunter Reflection Focus Areas
Beyond generic process metrics, note findings relevant to the UK SaaS context:
- Regulatory surprises: Any ICO guidance, case law, or GDPR interpretation that was more restrictive or permissive than expected
- Platform instability signals: Any evidence of platform API changes, new bot-detection, or ToS updates during the research
- UK data quality issues: Any gaps or inconsistencies found in Home Office CSV, ONS data, or salary survey data
- Scope creep flags: Any research question that expanded significantly beyond its original scope — flag for future budget calibration
## Self-Check (Level 1 Self-Correction)
Before completing, verify:
- Quantitative metrics table is present with Planned vs Actual columns
- Every surprise references a specific file path or artifact
- Every lesson has a category and an actionable recommendation
- Unanswered threads section lists OQ-NNN candidates (or states "none")
- Reflection file name follows the `YYYY-MM-DD-{slug}.md` convention
Max 2 self-correction iterations. If the self-check still fails after 2 retries, emit `status-recommendation: blocked` with a description of what failed.
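A minimal sketch of the bounded retry loop, with hypothetical `check` and `revise` callables supplied by the caller; only the two-retry cap and the blocked status come from this spec:

```python
from typing import Callable

MAX_RETRIES = 2  # per spec: at most 2 self-correction iterations

def self_correct(
    reflection: str,
    check: Callable[[str], list[str]],        # returns failed checklist items
    revise: Callable[[str, list[str]], str],  # attempts to fix the failures
) -> dict:
    """Run the self-check, revising up to MAX_RETRIES times before blocking."""
    failures = check(reflection)
    for _ in range(MAX_RETRIES):
        if not failures:
            break
        reflection = revise(reflection, failures)
        failures = check(reflection)
    if failures:
        return {"status-recommendation": "blocked",
                "description": f"Self-check still failing after {MAX_RETRIES} retries: {failures}"}
    return {"status-recommendation": "ok", "reflection": reflection}
```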
## Reflection Format
See `references/reflection-format.md` for:
- Section structure (What We Planned, What We Got, Surprises, Lessons, Promoted Open Questions)
- Quantitative metrics table format
- File naming convention
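An illustrative skeleton, inferred from the section names listed above; `references/reflection-format.md` remains the authoritative source:

```markdown
# Reflection: {slug} (YYYY-MM-DD)

## What We Planned
## What We Got

| Metric | Planned | Actual |
| --- | --- | --- |
| sources discovered | ... | ... |

## Surprises
## Lessons
## Promoted Open Questions
```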