AI in Exhibition Design: What’s Improving and What Still Needs Work
Welcome to Part 1 of “AI as a Visual Design Tool for Exhibitions” Week.
MtM usually focuses on timeless topics, but this week is about right now. This article might only be up-to-date for about 45 seconds.
Here’s the current situation in our studio:
Visual AI is improving at showing what things look like, but it still struggles with how things actually work.
Let’s break that down.
For 3D Design:
Text-to-image tools like Google’s Nano Banana or Midjourney are now sometimes good at creating atmospheric images, such as how a gallery might feel, its lighting, and its scenery. But they still can’t handle core deliverables like floor plans, ADA clearances, or construction details.
For Graphic Design:
AI is now sometimes good at handling style, like color palettes, illustration styles, and large wall graphics. But it still struggles with organizing information, such as text hierarchy, wayfinding systems, and timelines.
For Media Design:
AI is now sometimes good at generating images and short videos, like motion graphics, background loops, and storyboard frames. But it still can’t handle making media actually work, such as AV system design, projection setups, or interactive systems.
Here’s the thing:
AI is now a powerful assistant for one-off visual tasks. But it’s not yet ready to replace architects, graphics experts, or media designers.
(Check back in 45 seconds.)
Next time: using AI in the concept phase.
Warmly,
Jonathan
- - - - - - - - - - - -
MtM Word of the Day:
Text-to-image. Using written descriptions to create images with AI visual tools. In exhibition design, these tools help quickly produce concept visuals, gallery atmospheres, exhibit ideas, and scenic environments. Right now, they’re mostly used in early design phases, before detailed layouts, models, or production drawings are made.