Todd Sinclair

Project: AI content pipeline

Below is a case study of the work I did to improve the workflow and shorten the delivery timeline for a writing team that managed content for the real estate division of a large corporation. It documents the challenges the team faced, how I used AI to improve their processes, and the results of the project.


CONTEXT

The client maintained over thirty help centers, each supporting a different process or tool. These systems evolved continuously, requiring frequent documentation updates. A team of seven writers was responsible for maintaining all the help centers, which ranged in size from 30 to 250+ articles.

The existing workflow was entirely manual:

  • Gather information about the requested update
  • Identify any existing content affected by the update (potentially hours)
  • Draft new content (typically one to three days)
  • Update existing content (up to a week, depending on the number of affected articles)
  • Send each new or updated item for peer review (up to half a day per item)
  • Revise based on peer feedback (up to a day for large projects)
  • Send the article for Subject Matter Expert (SME) review (three days)
  • Revise based on SME feedback (up to a few days for large projects)
  • Stage the new content for publication (up to half a day)
  • QA review and final revision (up to a full day)

Many of the issues flagged during peer and QA reviews were avoidable mistakes or significant deviations from the client’s style guide, forcing the team to spend unnecessary time reworking content. The other major bottleneck was gathering and ingesting information from SMEs and scoping the work required. Even moderate updates could take several days end to end, and larger updates could stretch beyond a week. This created a persistent lag between product releases and accurate documentation.

OBJECTIVES

  • Reduce time required to scope, draft, and quality-check updates
  • Improve consistency in voice and style across all deliverables
  • Decrease peer review overhead
  • Establish a repeatable, scalable workflow for a distributed team of writers

APPROACH

I broke the problems down by how each could be addressed with the AI tools available to us (Gemini and NotebookLM).

1. Building a Shadow Knowledge Layer

The help centers we were working on were not exposed in a way that would let Gemini or NotebookLM access them directly. To address this, I made a one-time effort to create a duplicate "shadow" version of each help center in NotebookLM. This gave us a dataset we could use with the AI for scoping changes and grounding new content.

I also added a post-publication step to the workflow: after each release, the writer updated the shadow help center in NotebookLM so it stayed in sync with the live content.
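The one-time duplication step can be sketched as a small export script. This is a minimal illustration, not the actual process: the function names, the article format (title plus HTML body), and the one-file-per-article convention for NotebookLM sources are all assumptions for the example.

```python
# Hypothetical export sketch: convert help-center articles (title + HTML body)
# into plain-text files, one per article, suitable for upload as NotebookLM
# sources. All names here are illustrative.
from html.parser import HTMLParser
from pathlib import Path


class _TextExtractor(HTMLParser):
    """Collects the text content of an HTML fragment, ignoring tags."""

    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)


def html_to_text(html: str) -> str:
    """Strip tags and normalize whitespace."""
    extractor = _TextExtractor()
    extractor.feed(html)
    return " ".join("".join(extractor.parts).split())


def export_articles(articles: list[tuple[str, str]], out_dir: str) -> list[Path]:
    """Write one .txt file per article so each can become a NotebookLM source."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for title, body_html in articles:
        # Sanitize the title so it is safe to use as a filename.
        safe_name = "".join(c if c.isalnum() or c in " -_" else "_" for c in title)
        path = out / f"{safe_name}.txt"
        path.write_text(f"{title}\n\n{html_to_text(body_html)}", encoding="utf-8")
        written.append(path)
    return written
```

Keeping one file per article makes the later sync step cheap: when an article changes, only its corresponding source file needs to be replaced.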

2. AI-Assisted Drafting Workflow

I designed a workflow that allowed writers to feed source materials such as meeting recordings, transcripts, PRDs, Figma designs, and other reference documents into Gemini and NotebookLM. The system could then:

  • Generate first-pass drafts for new articles
  • Ground new articles against existing articles in a help center
  • Scope the help center for articles in need of updating based on the feature change

This replaced a manual scoping and drafting process that previously took up to ten days depending on the depth of the feature change.
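The hand-off to the AI amounted to bundling labeled source materials into a single grounded request. The sketch below shows the idea; the labels and prompt wording are illustrative, and in practice writers pasted these materials into the Gemini and NotebookLM interfaces rather than calling an API.

```python
# Illustrative sketch of assembling a grounded first-draft request from
# labeled source materials (transcripts, PRD excerpts, design notes, etc.).
# The prompt wording is an example, not the team's actual prompt.

def build_draft_prompt(feature_summary: str, materials: dict[str, str]) -> str:
    """Combine labeled source materials into one first-draft request."""
    sections = "\n\n".join(
        f"--- {label} ---\n{text}" for label, text in materials.items()
    )
    return (
        "Draft a help-center article for the feature described below. "
        "Ground every claim in the source materials; flag anything you "
        "cannot verify from them.\n\n"
        f"Feature: {feature_summary}\n\n"
        f"Source materials:\n{sections}"
    )
```

The "flag anything you cannot verify" instruction is the key design choice: it pushes hallucination checks to the top of the writer's review pass instead of leaving them for peer review.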

3. Prompt Library for Style and QA Enforcement

To address inconsistency and reduce review overhead, I worked with a subset of my team to create a prompt library aligned to the client’s internal style guide. We used these prompts to:

  • Validate the tone, structure, and style of a draft before sending it to peer review
  • Perform an automated QA review prior to publication

This improved our quality control and reduced the time spent both reviewing and revising content.
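The prompt library was essentially a set of reusable templates keyed to style-guide rules. Here is a minimal sketch of the pattern; the rules listed are generic examples, not the client's actual style guide.

```python
# Sketch of a reusable QA-review prompt template. The rule list below is
# illustrative; the real library encoded the client's internal style guide.

STYLE_RULES = [
    "Use sentence case for headings.",
    "Write in second person ('you'), never 'the user'.",
    "Keep sentences under 25 words.",
    "Use numbered steps for procedures.",
]


def build_qa_prompt(draft: str, rules: list[str] = STYLE_RULES) -> str:
    """Assemble a QA prompt that checks a draft against each style rule."""
    rule_lines = "\n".join(f"{i}. {rule}" for i, rule in enumerate(rules, start=1))
    return (
        "You are reviewing a help-center draft against our style guide.\n"
        "For each rule below, list any violations with the offending sentence "
        "and a suggested fix. If a rule passes, say PASS.\n\n"
        f"Style rules:\n{rule_lines}\n\n"
        f"Draft:\n{draft}"
    )
```

Because every writer ran the same template, the drafts arriving at peer review had already been checked against the same rules in the same order, which is what made the review-time savings repeatable.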

WORKFLOW: BEFORE AND AFTER

The original workflow followed a linear, fully manual process. Each step had an assigned time budget, and even a straightforward update could take a week or more from kickoff to publication.

SME Intake
  Old:   Product walkthrough, PRD discussion, screenshots
  New:   No change
  Saved: No change

Scoping
  Old:   Manually review the help center to identify every affected article and determine level of effort
  New:   Feed change details to NotebookLM, which identifies affected articles across the help center
  Saved: One full day down to less than an hour

Drafting
  Old:   Draft new or update existing content manually (new articles: 2 days; updates: 1 day each)
  New:   Writer provides materials to Gemini with style guide prompts; AI produces a first draft, which the writer reviews and edits
  Saved: Draft within minutes; no more than half a day for review and revision

Peer Review
  Old:   Reviewer works from a checklist covering style guide, grammar, spelling, and accuracy
  New:   Reviewers receive cleaner drafts with fewer issues to correct
  Saved: Half a day down to less than one hour

Revision
  Old:   Writer addresses peer review feedback
  New:   Usually minor at this stage; done by the writer without AI
  Saved: Half a day down to less than one hour

SME Review
  Old:   SME reviews and approves or requests changes
  New:   No change
  Saved: No change (still ~3 days)

SME Revision
  Old:   Writer addresses SME feedback
  New:   Writer addresses feedback, then passes the draft through the QA prompts once more
  Saved: No change

Staging
  Old:   Article staged in the help center
  New:   No change
  Saved: No change

QA Review
  Old:   Final review from a checklist, similar to peer review
  New:   No change, but reviewers receive cleaner drafts with fewer issues to correct
  Saved: A quarter of a day down to less than one hour

Final Revision & Publication
  Old:   Manually make any final changes based on QA review
  New:   Still manual; Gemini helped catch XML errors when needed
  Saved: No change

Update Shadow Help Center
  Old:   This step didn’t exist
  New:   Writer updates the shadow help center in NotebookLM
  Saved: Less than one hour (new step)

RESULTS

Metric              | Before                        | After
Scoping time        | Hours of manual search        | Under one hour
Drafting time       | 1–3 days                      | Half a day
Peer review effort  | Half-day reviews were common  | Under one hour in many cases
End-to-end updates  | Up to 10 days                 | Less than a week
Style consistency   | Inconsistent across writers   | Standardized via prompt-based QA

Bonus Benefit

Beyond the time savings and quality improvements, the system also enabled faster onboarding of new writers to the team. Because NotebookLM can generate audio and video summaries of a help center, and because new writers could ask it questions directly, they got up to speed much faster on the products they were writing for.

WHAT I LEARNED

  • AI delivers the most value when embedded in a structured, repeatable workflow
  • Good prompt design can function as first-line governance for editorial standards
  • Assistance from an AI grounded in domain-specific content dramatically improves both the speed and accuracy of human writers
  • Human oversight remains essential
  • A well-designed process continues to hold its value even as the underlying tools evolve