# Spec Retrieval (Prior Work)

When you scaffold a new spec, Monday Morning automatically surfaces the most-similar prior completed specs as “Prior Work”: a structured artifact of accumulated knowledge, delivered at the moment you need it most. The completed work in your .mm/ folder compounds: each shipped spec becomes a retrievable artifact that primes future related specs.

You don’t have to do anything to make it happen. The system is part of the spec lifecycle.

In the desktop app, every spec detail view that has matches shows a Prior Work panel below the spec body:

  • Match cards — title, similarity score, tags, and a 1-2 line goal preview
  • One-click navigation — click any card to jump to that prior spec’s detail view
  • Refresh button — regenerate matches against the current spec.md (idempotent — re-runs produce a byte-stable file)
  • Stale hint — a soft “may be stale — refresh?” pill appears when files in the spec folder have been modified since the matches were generated
  • Discoverability badge — kanban cards and spec-list rows show an “N prior work” chip when matches exist

In Claude Code, the spec-writer and task-list-creator agents read prior-work.md automatically when authoring a new spec, so prior patterns and verify-spec lessons inform the spec itself, not just the human reading it after.

Three events drive the system end-to-end without any manual step:

| Event | What happens |
| --- | --- |
| Spec completed (`mm_complete_task` finishes the last task) | The spec is added to `.mm/index/specs.json` with extracted goal, requirements, verify-spec convergence, T1–T4 severities, and affected file regions. |
| Spec scaffolded (`/mm:spec` or `/mm:new-spec`) | Retrieval runs against the new spec’s content; the top matches are written to `prior-work.md` in the new spec folder. |
| Spec authored (spec-writer / task-list-creator subagents) | The agents read `prior-work.md` so the spec they draft inherits patterns and avoids prior pitfalls. |

You don’t run anything. The system maintains the index, generates the artifact, and consumes it. Your job is simply to read what’s surfaced.

Most of the time, the artifact is fresh and accurate. Two cases call for a manual step:

  1. The “stale” pill appears — files in the spec folder were modified after prior-work.md was generated. Click the pill (or the Refresh button) to regenerate.
  2. You created a spec outside /mm:spec — direct mm_create_spec calls don’t include the mid-authoring refresh that the slash command does, so the matches were computed against a stub. Click Refresh in the panel, or run /mm:retrieve-prior-work {spec-folder} from a Claude Code session.

Both produce a byte-stable file when content hasn’t changed; safe to re-run.
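The staleness check behind the pill amounts to a modification-time comparison. A minimal sketch, assuming a plain mtime scan of the spec folder (function name and mechanism are illustrative, not the actual implementation):

```python
from pathlib import Path

def prior_work_is_stale(spec_dir: str) -> bool:
    """True when any file in the spec folder was modified after
    prior-work.md was generated -- the condition behind the stale pill."""
    pw = Path(spec_dir) / "prior-work.md"
    if not pw.exists():
        return False  # nothing generated yet, so nothing can be stale
    generated_at = pw.stat().st_mtime
    return any(
        f.stat().st_mtime > generated_at
        for f in Path(spec_dir).rglob("*")
        if f.is_file() and f.name != "prior-work.md"
    )
```

Because regeneration is byte-stable, acting on a false-positive stale hint costs nothing: refreshing an already-fresh artifact rewrites the same bytes.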

Similarity is keyword-based with structured-field boosting — fast, dependency-free, and runs locally:

  • Base score — Jaccard token overlap on goal (weight 0.4), requirements (0.4), and title (0.2). Tokens are lowercased, split on non-alphanumeric, stopword-filtered.
  • Boost layer:
    • +0.10 per shared tag, capped at +0.20
    • +0.05 when requirement counts are within ±2 of each other
    • +0.05 when the candidate spec’s verify-spec convergence is converged
  • Final score — `min(1.0, base + boosts)`

Top N matches above a threshold are returned, sorted by score descending.
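Under those rules, the scorer fits in a few lines of Python. This is a sketch, not the shipped implementation — the field names, record shapes, and stopword list are assumptions for illustration:

```python
import re

# Illustrative stopword list; the real one is an implementation detail.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for"}

def tokens(text: str) -> set[str]:
    # Lowercase, split on non-alphanumeric, stopword-filter.
    words = re.split(r"[^a-z0-9]+", text.lower())
    return {w for w in words if w and w not in STOPWORDS}

def jaccard(a: set[str], b: set[str]) -> float:
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def similarity(new: dict, prior: dict) -> float:
    # Base score: weighted Jaccard overlap on goal, requirements, title.
    base = (
        0.4 * jaccard(tokens(new["goal"]), tokens(prior["goal"]))
        + 0.4 * jaccard(tokens(new["requirements"]), tokens(prior["requirements"]))
        + 0.2 * jaccard(tokens(new["title"]), tokens(prior["title"]))
    )
    # Boost layer: shared tags (capped), similar size, converged candidate.
    boost = min(0.10 * len(set(new["tags"]) & set(prior["tags"])), 0.20)
    if abs(new["req_count"] - prior["req_count"]) <= 2:
        boost += 0.05
    if prior.get("convergence") == "converged":
        boost += 0.05
    return min(1.0, base + boost)

def top_matches(new: dict, candidates: list, threshold: float = 0.15, top_n: int = 3):
    scored = [(similarity(new, c), c) for c in candidates]
    scored = [(s, c) for s, c in scored if s >= threshold]
    scored.sort(key=lambda sc: sc[0], reverse=True)
    return scored[:top_n]
```

Note why the approach is explainable: every point of score traces to a visible overlap (a shared token, a shared tag) rather than an opaque embedding distance.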

Two knobs in .mm/config.json:

```json
{
  "spec_retrieval": {
    "similarity_threshold": 0.15,
    "top_n": 3
  }
}
```

| Key | Default | What it does |
| --- | --- | --- |
| `similarity_threshold` | 0.15 | Minimum combined score to surface. Tune empirically — start permissive, raise as your index grows. |
| `top_n` | 3 | Maximum matches per `prior-work.md`. |

Missing keys, missing file, or malformed JSON all degrade silently to defaults — retrieval will not block spec creation under any circumstance.
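That degrade-silently contract can be sketched as follows (function name and structure are hypothetical; only the fallback behavior is taken from the text above):

```python
import json
from pathlib import Path

DEFAULTS = {"similarity_threshold": 0.15, "top_n": 3}

def load_retrieval_config(config_path: str = ".mm/config.json") -> dict:
    """Missing file, missing keys, or malformed JSON all fall back to
    defaults, so retrieval never blocks spec creation."""
    cfg = {}
    try:
        raw = json.loads(Path(config_path).read_text())
        block = raw.get("spec_retrieval", {})
        if isinstance(block, dict):
            cfg = block
    except (OSError, json.JSONDecodeError):
        pass  # degrade silently to defaults
    # Keep only known keys; defaults fill whatever is absent.
    return {**DEFAULTS, **{k: cfg[k] for k in DEFAULTS if k in cfg}}
```

The design choice is deliberate: a tuning knob should never be able to turn into a crash in the scaffolding path.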

| Path | Purpose |
| --- | --- |
| `.mm/index/specs.json` | The spec index. Auto-generated. Don’t edit manually. |
| `.mm/specs/{slug}/prior-work.md` | The surfaced matches for a given spec. Auto-generated. Read selectively — these are pointers, not requirements. |
| `.mm/config.json` (`spec_retrieval` block) | Threshold and top-N tuning. |

When prior-work.md exists but has zero matches, the panel renders an explanatory state keyed off the reason:

  • no_completed_specs — The workspace has no fully-completed specs yet. As specs ship, they’ll be indexed and surface here.
  • below_threshold — Specs exist in the index but none scored above your similarity_threshold. Lower the threshold to widen the net.
  • index_missing — The index could not be built (parse errors, etc.). Run /mm:retrieve-prior-work from Claude Code to diagnose.

The empty state is intentional — its presence vs. absence is unambiguous. An empty prior-work.md means “we ran retrieval and found nothing” rather than “retrieval didn’t run.”
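In code terms, the panel's empty state reduces to a lookup keyed off the reason code. A minimal sketch (the mapping structure and message wording are hypothetical; only the three reason codes come from the list above):

```python
# Reason codes written into an empty prior-work.md, mapped to panel copy.
EMPTY_STATE_MESSAGES = {
    "no_completed_specs": "No fully-completed specs yet. As specs ship, they'll be indexed and surface here.",
    "below_threshold": "Specs exist in the index, but none scored above similarity_threshold. Lower it to widen the net.",
    "index_missing": "The index could not be built. Run /mm:retrieve-prior-work from Claude Code to diagnose.",
}

def empty_state_message(reason: str) -> str:
    return EMPTY_STATE_MESSAGES.get(reason, "No prior work found.")
```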

## Why structured fields, not LLM distillation?


A common alternative is to have an LLM read each completed spec and write abstract “skills” or “lessons learned.” Monday Morning explicitly does not do this. Reasons:

  • The spec is the distillation. Goal, requirements, verify-spec convergence, T1–T4 severities — these are already structured artifacts. Re-distilling them with an LLM adds cost without adding signal.
  • Skills decay. Abstract lessons drift as the underlying code changes; the originating spec doesn’t.
  • Retrievable provenance. Every match links back to the spec that produced it, so you can verify the context, not just trust a summary.

A future iteration may add vector embeddings for richer semantic matching (the keyword approach surfaces overlapping vocabulary; embeddings surface overlapping concepts). The structured-field foundation is intentionally simple and explainable, and it shipped first.

  • Your First Workflow — covers the /mm:spec lifecycle end-to-end
  • The .mm/ Data Model — where prior-work.md and specs.json live
  • mm_retrieve_prior_work — the MCP tool the Refresh button wraps