Experimental demo

AI drafts the first release note. Humans decide what ships.

This page shows the workflow we want to validate: pull context from shipped work, generate a user-facing draft, and keep publish and email approval in human hands. The live product now has a testable GitHub-to-draft path in the release composer.

This is a validation demo, not a live MCP integration yet. The point is to test whether AI-first drafting plus human approval feels meaningfully better than starting from a blank editor. For local testing, the composer also includes a demo GitHub source so you can run import and draft generation without wiring a real account first.

Workflow

The value is not auto-send. The value is a faster first draft from real context.

The AI should start from context that already exists: merged PRs, issues, and launch notes. The team should still decide what belongs in a release note, what tone to use, and whether the update should be published or emailed.

1. Source context

What the AI reads before drafting

GitHub PR #482 (merged 3 hours ago)

Adds role-based release visibility controls so workspace admins can hide internal-only updates from external subscribers.

Issue RH-219 (product note)

Users said they were confused when internal fixes showed up in customer update emails. Need a cleaner way to mark audience visibility before publish.

Launch note (support input)

Top customer-facing benefit: teams can now share fewer noisy updates and make release emails easier to trust.
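The three context sources above can be thought of as one small structure the drafting step reads. A minimal sketch, assuming hypothetical field names (the real composer's schema is not shown on this page):

```python
from dataclasses import dataclass

@dataclass
class ContextItem:
    """One piece of shipped-work context the AI reads before drafting."""
    kind: str  # "pr" | "issue" | "launch_note" (hypothetical labels)
    ref: str   # e.g. "GitHub PR #482" or "Issue RH-219"
    text: str  # the human-written content the draft is grounded in

# The demo's three sources, collected for the drafting step.
context = [
    ContextItem("pr", "GitHub PR #482",
                "Adds role-based release visibility controls."),
    ContextItem("issue", "Issue RH-219",
                "Users were confused when internal fixes showed up "
                "in customer update emails."),
    ContextItem("launch_note", "Launch note",
                "Teams can now share fewer noisy updates."),
]

kinds = [c.kind for c in context]
```

The point of the shape is that every draft sentence can be traced back to a `ref`, which is what makes the human review step in the next sections practical.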

2. User-facing draft

What the AI produces for review

Draft title

More control over which updates customers actually see

Summary

You can now keep internal-only changes out of public release notes and subscriber emails, so customers only receive updates that matter to them.

What shipped

Added audience visibility controls for each release, made publish settings clearer before send, and reduced accidental noise in update emails.

Email intro

This update helps teams send cleaner release emails by making it easier to decide which changes belong in customer-facing announcements.

3. Human review gate

What must still be reviewed by a person

  • Rewrite internal implementation language into customer language
  • Remove low-signal fixes that do not belong in the announcement
  • Confirm whether this release should be emailed or only published to the changelog
  • Keep final approval with the human before anything goes live
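Taken together, the steps above amount to a draft-then-gate pipeline: AI output stays in a pending state until a person approves it, and the email decision is separate from publishing. A minimal sketch of that gate, with hypothetical names throughout:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """An AI-generated release note awaiting human review."""
    title: str
    summary: str
    status: str = "pending_review"  # nothing ships in this state

def human_review(draft: Draft, approve: bool, send_email: bool) -> Draft:
    """The human gate: only an explicit approval moves a draft forward,
    and emailing is a separate decision from publishing."""
    if not approve:
        draft.status = "rejected"
        return draft
    draft.status = "published_and_emailed" if send_email else "published"
    return draft

draft = Draft("More control over which updates customers actually see",
              "Keep internal-only changes out of public release notes.")
assert draft.status == "pending_review"  # the AI draft never auto-sends
reviewed = human_review(draft, approve=True, send_email=False)
```

The design choice this sketch illustrates is that there is no code path from draft generation to a live update that skips `human_review`.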

What we want to learn

Questions this demo is meant to validate

  • Would small teams trust AI to write the first draft if final approval stayed human?
  • Which context sources matter most: PRs, issues, docs, or support feedback?
  • Is the pain mostly writing from scratch, or deciding what is worth announcing?
  • Does this feel more useful than a blank editor or generic AI prompt?