Todd Sinclair

AI-powered editing with Gemini Gems

Below is a case study of a training session I led for project managers. The topic was how to use agentic AI to be more efficient.


Role: Instructional Designer & Presenter
Format: Live virtual session (Microsoft Teams), 45 minutes
Audience: ~50 project leads at a digital engineering consultancy
Context: Breakout session within a larger event on agentic AI tools
Deliverables: Slide presentation, live demo, hands-on exercise, supporting materials

CONTEXT

As part of a larger event exploring agentic AI tools, I was invited to lead a breakout session for approximately 50 project leads on the topic of creating a Gemini Gem to automate repeatable tasks. The audience was primarily people who managed teams and projects. Their familiarity with generative AI varied widely.

The organizer reached out to my manager looking for someone to lead a breakout session. My manager knew I had been developing AI-assisted workflows for my own content work and suggested me for the task.

The request was to show this audience how AI could be applied to a practical, everyday task they could relate to. I had 45 minutes in a virtual group setting with an audience I didn’t normally interact with.

THE CHALLENGE

The session needed to accomplish several things within a tight window:

  • Introduce a new AI capability (Gemini Gems) to people who may not have used it before
  • Make the concept relevant for the audience
  • Move from concept to hands-on practice within a single session
  • Leave participants with something they could immediately apply to their own work

DESIGN APPROACH

I structured the session as a progression from passive learning to active practice, moving from a demonstration of an AI agent solving a common task, to having the participants make an agent of their own that addressed a problem or repeatable task relevant to their work.

LIVE DEMO

I started by giving the audience an example of a problem in my daily workflow that I use agentic AI to solve. First-round editing passes on content drafts took too long and pulled me away from other tasks. The drafts frequently shared the same common problems, which were easy but time-consuming to fix. I demonstrated how I automated the process with a Gemini Gem, configured with a few instructions and grounded in a specific style guide, that does the work of reviewing and revising content in a matter of moments. Then I walked the audience through how I built that Gem in real time, using a fictional scenario I'd created for the session. Rather than using confidential client materials, I had built a self-contained content package for a fictional client from scratch for use in my demonstration.

The client

HomeBot Inc. is a fictional smart home robotics company with a short, clear style guide covering tone, terminology, typography, and writing conventions.

The “Auto Kitchen” article

I created a deliberately flawed product description for HomeBot’s automatic kitchen. The article was seeded with specific violations of the HomeBot style guide and broke several writing conventions. This gave the demo Gem something concrete and obvious to catch.

The style guide prompt

I wrote instructions for the Gem to review any submitted text against the HomeBot style guide and return a three-column table showing each sentence in need of correction, a brief description of the violation, and a suggested revision. By structuring the output like this, the Gem’s feedback was specific and immediately actionable.
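The instructions took roughly this shape. This is a simplified sketch for illustration, not the exact prompt used in the session:

```text
You are an editorial assistant for HomeBot Inc. Review any text the
user submits against the attached HomeBot style guide.

Return your findings as a three-column table:
| Sentence needing correction | Style guide violation | Suggested revision |

- Quote each flagged sentence exactly as it appears in the text.
- Keep each violation description to one short phrase.
- If the text contains no violations, say so instead of returning a table.
```

Pinning the output to a fixed table format is what makes the feedback scannable and easy to act on sentence by sentence.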

This scenario served multiple purposes. It was easy to understand without domain knowledge, it made the Gem’s value visible within seconds of testing, and it gave participants a ready-made exercise they could follow step-by-step.

HANDS-ON EXERCISE

After the demo, participants opened Gemini and built their own Gems. They could follow the demo scenario I'd presented or tackle a problem they encounter frequently. A common task for this audience was extracting key data from a weekly report, and many chose to build Gems that would automate it for them.

Most participants were able to build a Gem within the session that produced usable results, and they left with an understanding of how to keep refining their prompts to get the best output.

Additional Supporting Materials

I also created a set of supplemental materials for participants to use during or after the session. These mirrored the examples from the demo, so participants could follow along during the presentation or turn to them as a reference later.

A passive voice detection prompt

A carefully constructed prompt with specific inclusion and exclusion rules for identifying passive voice. This demonstrated that prompt design is a craft, not just a simple instruction, and that precision in prompt writing directly affects output quality.
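The inclusion and exclusion rules in a prompt like this might look something like the following. This is a hedged sketch of the approach, not the prompt I distributed:

```text
Identify every sentence written in the passive voice.

Include:
- A form of "to be" plus a past participle where the actor is absent
  or demoted to a "by" phrase
  (e.g., "The report was approved by the director.")

Exclude:
- Adjectival participles describing a state ("The door was locked all day.")
- Active sentences that merely contain a past participle
- Questions and imperatives unless they are themselves passive

For each match, return the sentence, the passive construction, and an
active-voice rewrite.
```

Spelling out the exclusions matters as much as the inclusions: without them, a model tends to over-flag stative constructions that merely resemble passive voice.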

A second sample text
Another flawed product description sample that participants could use to test a Gem using the passive voice prompt.

DESIGN DECISIONS AND TRADE-OFFS

Why a fictional scenario instead of a real one

Using real client materials would have required context-setting that would eat into the 45-minute window, and confidentiality would have limited what I could show. A purpose-built scenario let me control exactly what the audience saw and ensured the demo would work cleanly every time.

Why structured output (the three-column table)

A vague prompt that simply directs the Gem to "review this text" produces inconsistent, narrative-style feedback. Defining the exact output format made the Gem's responses specific and actionable. It demonstrated to the audience that how you design the prompt directly determines how useful the output is.

What I’d change

If I'd had a better sense of the audience and their common pain points, I'd have prepared more relevant examples. I also would have spent more time covering prompt iteration, since real skill with prompting develops over time, not in a single session.

OUTCOME

The session was well received by the audience. Given that this was a one-time breakout room within a larger event, and the participants were not people I worked with regularly, I don’t have data on long-term adoption. But the session accomplished what it was designed to do. It took an audience largely unfamiliar with agentic AI or Gemini Gems and walked them from concept to building their own in under 45 minutes.

WHAT THIS PROJECT DEMONSTRATES

Rapid instructional design
I went from concept to a complete 45-minute session, including all supporting materials built from scratch, in less than a week.

Audience-aware design
The presentation was structured for an audience with varying AI familiarity and grounded in a task everyone could understand.

Scenario design
I created a self-contained fictional scenario that made the learning objective visible and testable within the session.

MATERIALS

Slide presentation (HTML/PDF): Session content covering problem framing, Gems introduction, use cases, demo walkthrough, and hands-on instructions
HomeBot style guide: Fictional style guide used as Gem context during the demo and exercise
"Auto Kitchen" sample text: Deliberately flawed article for testing the style guide Gem
"Auto-Scrub Shower" sample text: Second flawed product description for additional practice
Passive voice detection prompt: Example of precision prompt engineering for a specific editorial task
Step-by-step Gem creation instructions: Reference guide for the hands-on exercise