Bouncing Bomb Overview

🔁 Hybrid Testing: AI-Informed, Human-Validated
As a board game enthusiast, I set out to explore how far AI tools like ChatGPT, Lovable, and Replit could take a simple card game idea, from first sketch to playable prototype.

With the AI-generated game rules and card lists as a foundation, I ran targeted human playtesting sessions with my board game group. Feedback was immediate and emotional — players flagged pacing issues, confusion over reverse cards, and unclear win conditions, none of which AI alone had caught.

Outcome: A faster, hybrid testing workflow that cut iteration time and produced a more engaging, balanced game experience.

  1. 🧢 Role

    Designer

  2. 🙌 Collaborator

    Board Game Friends

  3. 🗓️ Date

    2025

Bouncing Bomb card game design workflow

I mapped out a high-level workflow to track how ideas moved from sketch → AI generation → digital prototype → physical cards → playtesting.

Figure 1: Design workflow diagram

Sketch

To kick things off, I sketched the core mechanics of how the game might play. These rough ideas became the starting point for AI-powered iteration.

Figure 2: Sketch and initial prompt

Game rules

Using ChatGPT, I ran multiple iterations to refine the game loop, scale the card counts for 3–8 players, and balance the mechanics. The result was Version 1 of the rules.

Figure 3: Version 1 rules

Figure 4: ChatGPT testplay result

Figure 5: ChatGPT testplay feedback
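Alongside prompting ChatGPT, a quick simulation is one way to sanity-check deck balance across the 3–8 player range before printing anything. The sketch below is hypothetical: the deck model (a handful of bomb cards shuffled into safe cards, round ends on the first bomb drawn) and the per-player card counts are placeholder assumptions for illustration, not the actual Bouncing Bomb rules.

```python
import random

def simulate_round(bombs, safe_cards, seed=None):
    """Count draws until the first bomb surfaces.

    Hypothetical deck model: `bombs` bomb cards shuffled into
    `safe_cards` harmless cards; play continues draw by draw until
    a bomb appears. Placeholder rules, not the real game.
    """
    rng = random.Random(seed)
    deck = ["bomb"] * bombs + ["safe"] * safe_cards
    rng.shuffle(deck)
    for turn, card in enumerate(deck, start=1):
        if card == "bomb":
            return turn  # round length, measured in draws
    return len(deck)  # no bomb in deck: round runs the full deck

def average_round_length(players, bombs, safe_per_player=6, trials=2000):
    """Estimate average round length for a given table size."""
    safe = safe_per_player * players
    total = sum(simulate_round(bombs, safe, seed=i) for i in range(trials))
    return total / trials

# Compare pacing across the 3-8 player range the rules target.
for n in range(3, 9):
    avg = average_round_length(players=n, bombs=n - 1)
    print(f"{n} players: ~{avg:.1f} draws per round")
```

Running a loop like this makes pacing problems visible as numbers (e.g. rounds ballooning at higher player counts), which is exactly the kind of issue human playtesters later flagged by feel.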

AI Prototype (Online)

I translated the rules into digital prototypes using AI-driven platforms like Replit and Lovable. Replit came closest to what I envisioned, enabling me to playtest online.

Physical card prototype

I designed simple card layouts in Figma, printed them, and created a physical deck for in-person playtesting.

Figure 6: Version 1 physical card design

User feedback

Early feedback from my board game group:
💣 Too many “fluffy bomb” cards caused rounds to drag
💣 Reverse cards confused new players
💣 Overall potential felt strong, but tweaks were needed before the next round of testing

Figure 7: Version 1 testplay session

Figure 8: Comparing real players' testplay feedback with ChatGPT's observations

Next steps

My aim is to reduce confusion and streamline gameplay. I'll create Version 2 and run further testplay sessions to gather more feedback.

What I Learned

Instead of treating AI and human testing separately, I combined them in a feedback loop: player insights informed new AI prompts, which refined rules, rebalanced probabilities, and tweaked mechanics more efficiently. This hybrid process reduced iteration time while keeping the game fun, intuitive, and grounded in real-world interaction.