AI for Pre-Rendering
Welcome to Part 3 of “AI as Visual Design Tool for Exhibitions” Week.
These posts are looong because of the topic. We’ll be back to normal length next week.
Part 1 covered what AI text-to-image tools can now do well, and where they still fall short. Then Part 2 covered AI visualization during the early concept phase. (Find all past articles at makingthemuseum.com.)
Today, the second moment when text-to-image AI can now help: mid-design pre-rendering.
After the early concept phase, something important happens for any team using 3D software: the digital white model.
We model walls, doors, and windows first. Then maybe cases, scenic, and media surfaces. Eventually, smaller elements. Everything is white or gray. No color. No graphics. No lighting. Just shapes and space.
Traditionally, if we wanted a colorful, realistic-looking rendering after that (and in my studio, we always do), we had to open rendering software, add graphic textures, assign materials, set up lights, place realistic people, render, adjust, and render again.
It takes a long time just to get the first decent result. And if you later need to change what you just added, or explore very different ideas, you have to build a whole new set of rendering assets each time.
Enter AI.
Now we can use AI to pre-render options for the final rendering whenever we need them. By exporting a simple view from the white model, we can test materials, lighting, graphics, people, and atmosphere in minutes. We can see several directions for the finished rendering quickly, and explore ideas we wouldn't have had time for before.
This means we can see more views, more options, and more visuals, faster and earlier. That’s helpful. With the right explanation, these pre-renderings can even be shown to collaborators and clients.
Some teams, especially those that don’t do many renderings, might even stop at this stage. We don’t, but that’s us.
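For the technically curious: under the hood, this kind of pre-rendering is essentially image-to-image generation, where the exported white-model view guides the AI rather than a text prompt alone. Here's a minimal sketch using the open-source diffusers library. The model choice, file names, prompt, and strength value are my own illustrative assumptions, not a recommendation of any specific tool.

# A minimal sketch of AI pre-rendering as image-to-image generation,
# using the open-source Hugging Face "diffusers" library.
# Model name, file paths, prompt, and strength are illustrative assumptions.
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

# Load an image-to-image pipeline (hypothetical model choice).
pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# A view exported from the white model (hypothetical path).
white_model_view = load_image("white_model_view.png").resize((1024, 1024))

# "strength" controls how far the AI departs from the white model:
# lower values preserve the geometry, higher values invent more.
result = pipe(
    prompt=(
        "museum exhibition gallery, warm wood cases, dramatic spot lighting, "
        "large graphic wall, visitors, photorealistic"
    ),
    image=white_model_view,
    strength=0.6,
).images[0]

result.save("pre_render_option_1.png")

Rerunning with a different prompt or strength gives you another option in minutes, which is exactly the fast exploration described above.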
Truly Important Note:
Even though these pre-renders look realistic, they are often hazy and not always repeatable. If you need a fully rendered model later, you'll still have to do all the usual work. AI pre-rendering helps you get useful results sooner; it doesn't replace the real work that still needs to be done.
Here’s the thing:
Rendering and modeling are here to stay, but the long wait just to see one idea might not be. The emerging workflow could look like this: start with the white model, generate AI pre-renders to explore options, and finish with the final rendering.
===
That wraps up our AI visual design week. How did we do? Are you using AI visual tools at these stages yet? Do you use them in other ways? Or does AI make you break out in a cold sweat?
Hit reply and let me know!
Warmly,
Jonathan
P.S. Bonus tip: If you’re an advanced modeler working on case layouts, check out the photo-to-object AI tool Meshy.
- - - - - - - - - - - -
MtM Word of the Day:
White model. An all-white or gray 3D model of an exhibition space showing walls, cases, scenic, and media surfaces without materials, graphics, or lighting. It is the first step in creating rendered views, and is used to study layout, circulation, sightlines, and spatial relationships before materials, graphics, and color are applied.