Designing with VR

by Sarah Bonser

Visualizing architecture has become increasingly critical to delivering great projects. For us at HEWITT, finding ways to use information to empower our design thinking is paramount. In our office, we’ve developed a workflow that uses visualization and virtual reality to iterate through design ideas more rapidly.

222 Dexter (Early Design): Screenshot out of Revit

222 Dexter (Early Design): Screenshot out of Enscape

Visualization builds confidence with clients, communicates design goals to the full project team, and functions as one more check for designers. Perspective renderings have become much more available and prevalent. A typical pipeline is to use a rendering engine like V-Ray or Revit Render to create a still image.

A major innovation in visualization is the physically based render engine. These engines render images nearly instantaneously. As a result, cameras can be controlled within the program through keyboards and VR headsets. If you are thinking “This seems complicated,” you are correct. There’s a reason the virtual reality of the early ’90s didn’t take off. Recently, though, many of the previously overwhelming barriers have become manageable. If we look at what is literally required to view geometry in virtual reality, we can boil it down to three things: it has to be accurate, look good, and be easy to use.

  • Accurate: Precise materials applied to up-to-date geometry. We’re seeing what we designed, not something similar to our design.
  • Look Good: Rendering two 1920 × 1080 frames at 70 fps is a minimum requirement for comfortable viewing. This means switching from rendering ahead of time and providing a still image or movie to “just in time,” or real-time, rendering.
  • Easy to Use: Software-to-hardware communication must be consumer friendly. We shouldn’t have to call a rep or specialist every time we want to update the design.

As designers, we have an additional goal: iteration with this tool. That introduces a fourth constraint: limit post-processing time. We shouldn’t have to do a lot to get a model from its “working” state to its “viewing” state.
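To put the “look good” requirement in numbers, here is a quick back-of-envelope sketch (using the per-eye resolution and the 70 fps minimum from the list above) of how much pixel throughput real-time stereo rendering demands compared with a single still image:

```python
# Back-of-envelope pixel throughput for stereo VR rendering.
# Figures come from the requirement above: two 1920 x 1080 frames
# (one per eye) at a minimum of 70 fps.

WIDTH, HEIGHT = 1920, 1080
EYES = 2
MIN_FPS = 70

pixels_per_frame = WIDTH * HEIGHT                      # one eye's image
pixels_per_second = pixels_per_frame * EYES * MIN_FPS  # sustained load

print(f"Pixels per eye per frame: {pixels_per_frame:,}")
print(f"Pixels per second (both eyes): {pixels_per_second:,}")
```

A pre-rendered still needs the per-frame amount computed once, offline; a real-time engine must sustain the per-second figure continuously, which is why a dedicated graphics card is non-negotiable.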


You can build a VR-ready computer for about $900; the key is a dedicated graphics card. As far as VR headsets go, the Vive is about $500 and the Oculus is $400 at time of writing. Because the Vive and Oculus headsets are both consumer-facing, the setup is less complicated than installing a printer. These take care of the “easy to use” problem.


Revit’s built-in and cloud render services do not support an interactive camera. A game engine is often used to gain this feature, and the most common choice is Unity. Unity is free software originally meant for game development that has recently shifted to include the AEC industries. Architects have typically created geometry in a program like Revit, then brought it into Unity for its render engine. This split workflow works very well for documentation and proof-of-concept tools. Unfortunately, it does not work as well for an iterative workflow. Keeping two files up to date with every change is stressful for the team and at times can require specialists for certain programs. Limiting post-processing, material reassignment, and model exports is essential to using visualization as an iterative tool.

This workflow risks a lag between what is current in the working file and what is visible in the most recently exported file. Visualizations can quickly become a burden as designs change and the team shifts to documentation. Testing the impact of changes or value-engineering efforts late in design requires another export and another round of material assignment. While there is a time and a place for this type of visualization, it does not lead to the iterative workflow we were looking for.

We are currently using Enscape, Revit, and a Vive headset. Enscape is an approximately $600 render-engine plug-in for Revit that uses native Revit materials and live geometry. This solves the additional constraint we identified: limit post-processing time. Materials in Revit can be tagged and used for graphics throughout the construction set, and visual changes can be made live while Enscape is running. Using Enscape requires a dedicated graphics card and a single button press in Revit. Tools are shown as a transparent overlay, and favorite views are assigned in a tab and displayed in the viewer. If a designer knows how to use Revit, they know how to use Enscape. The out-of-the-box settings are quite good and allow for a variety of “capture” sizes. Updates can be pushed live, and it takes only moments to swap out materials or move elements. This is a departure from previous V-Ray and Revit Render constraints. From there, getting into VR is simply plugging in two or three cables and pressing a button. There is no export; Revit will also send live updates to the headset if the option is toggled on.

At HEWITT, we strive to balance creativity with constructability, and we are finding that this tool helps immensely. There are plenty of combinations of software and hardware that work; these are just the tools that currently work best for us. The first two are used almost every day by a variety of teams. The headset itself is used about once every other week, sometimes more frequently around team charrettes. Because it is live, designers have the freedom to roam into less curated spaces, identifying problem areas earlier. We’ve found that visualizations at room scale and eye level are incredibly helpful for critiquing our own work. It also gives clients a better view of what we’re imagining. In my experience, that has led to more empathy for design goals and more attention to detail when looking for solutions. Virtual reality has become not only a visualization tool but also an iterative design tool.

Sarah is an architect at HEWITT currently working on 222 Dexter, a mixed-use high-rise in South Lake Union. Sarah graduated from the Ohio State University with a Master’s degree in Architecture. She embraces art, architecture, and technology. In her spare time, Sarah enjoys making digital art and mentors for FIRST, helping to inspire young people’s interest and participation in science and technology.