Retrospectives on real experience
Use local outputs from vendor reviews, audits, policies, controls, and stakeholder conversations as learning material without asking the website to store them.
Learn from real GRC work + GRC Engineering context
Turn vendor reviews, audit walkthroughs, questionnaires, policy work, control discussions, and terminal output into learning loops. The Companion runs where you already use Claude or Codex, so your local work becomes the curriculum.
Generated setup
This website does not upload your artefacts. The actual Companion runs inside the AI workspace you choose and can learn from the local files, transcripts, diffs, notes, and outputs you intentionally provide there.
The Companion is not a generic prompt wrapper. It uses the same things you already bring to Claude or Codex: review notes, questionnaire drafts, policy changes, control conversations, terminal output, project files, and learning gaps. It extracts the patterns, connects them to GRC Engineering thinking, and turns the next rep into practice.
When real work is unavailable or too sensitive, the Companion creates synthetic labs, scenarios, explain-backs, and drills to build the same judgment.
The brain, primitives, newsletters, examples, and cross-domain patterns give your experience a sharper learning frame.
Behaviour proof
The adapter should behave like this transcript: use the learner's real local work as material, hold the approval/audit boundary, infer the learning move, and produce a practical rep.
$ grc-companion
Learner:
Use my vendor-review-notes.md to help me learn from the review.
Do not decide whether the vendor should be approved.
Companion:
I can use that local file as learning material.
I will not make an approval recommendation.
Learning move:
Task retrospective on the review you already performed.
Active recall:
What signal actually helped you move the review forward?
Output:
A learning note, a reusable evidence-quality checklist,
and one profile update about where you are getting stuck.
Invisible skills
The router decides whether the moment needs a real-work retrospective, a lab, tutoring, an explain-back, recall, reflection, or a cross-domain lens.
Routing signals include goal, timing, files, outputs, stuckness, and boundary risk.
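As a sketch only, the routing described above could be modelled like this. Every name here (`Signals`, `route`, the move labels) is hypothetical and for illustration; it is not the Companion's actual contract:

```python
from dataclasses import dataclass

# Hypothetical sketch of the invisible router. Field and move names are
# assumptions drawn loosely from the signals listed above.
@dataclass
class Signals:
    goal: str
    has_real_files: bool   # did the learner bring local work?
    is_stuck: bool
    boundary_risk: bool    # e.g. the request edges toward an approval decision

def route(s: Signals) -> str:
    """Pick a learning move from the observed signals."""
    if s.boundary_risk:
        return "reflection"     # hold the approval/audit boundary first
    if s.has_real_files:
        return "retrospective"  # real local work beats synthetic material
    if s.is_stuck:
        return "tutoring"
    return "lab"                # fall back to a synthetic scenario
```

The point of the sketch is the priority order: boundary safety first, real work second, synthetic material last.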
Skills stay as files. Skill buttons disappear.
Real-work retrospectives, active recall, small artefacts, and explain-backs.
Retrospectives and adjacent-domain lenses compound judgment.
The intelligence lives in versioned companion contracts: skill routing, local work learning, task extraction, and transfer.
Real-work retrospectives sit beside synthetic labs, concept tutoring, recall, reflection, and cross-domain translation.
`/retro` and `/translate` exist for explicit use, while default routing stays invisible.
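A minimal sketch of how explicit commands could coexist with invisible default routing. Only `/retro` and `/translate` come from the text; the dispatch wiring and the stand-in `infer_move` are illustrative assumptions:

```python
# `/retro` and `/translate` are the only command names taken from the
# product description; everything else here is a hypothetical sketch.
EXPLICIT = {
    "/retro": "retrospective",
    "/translate": "cross-domain translation",
}

def infer_move(message: str) -> str:
    # Stand-in for the invisible router; real signal handling is richer.
    return "retrospective" if "review" in message else "lab"

def dispatch(message: str) -> str:
    """Explicit commands win; otherwise routing stays invisible."""
    cmd = message.split(maxsplit=1)[0] if message else ""
    if cmd in EXPLICIT:
        return EXPLICIT[cmd]
    return infer_move(message)
```

Keeping explicit commands as a thin layer over the same router means learners who never type a slash command get identical behaviour.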
The GitHub Pages site helps the learner choose a package and generate a system prompt. Your real artefacts stay in the local AI workspace you choose.
Public page at grc.engineering/companion with package routing and system prompt generation. It does not upload your work.
Generated bundles for Claude Code, Claude Projects, Cursor, and Codex.
A public roadmap will follow once adapter behaviour and learner signal stabilise.