Virtual production and in-camera VFX the Lux Machina way
By 2018, virtual production had been in use for a while in the video games world for virtual scouting and lining up shots for cutscenes, but The Mandalorian took it to a new level. The show’s production team pioneered the combination of huge LED screens with camera tracking to produce in-camera VFX, changing the way many films and TV shows would be made. The studio behind this system was Lux Machina, a Vicon customer that’s one of the leaders in the field of motion capture for the entertainment industry.
At GDC 2024, India Vadher-Lowe, Lead Motion Capture Specialist for Lux Machina, which has worked on other blockbuster projects including House of the Dragon and Barbie, explained what the studio does and how it does it for an audience at Vicon’s booth.
“We first developed our pipeline for The Mandalorian, a full production that was built on reflections in armor,” said Vadher-Lowe. Lighting is one of the key reasons for using an LED volume rather than traditional green screen. By wrapping stages in huge, high-definition LED walls, studios are able to not only give actors something to react to, but also to make sure their footage is correctly lit by the virtual environments on the screens. On productions such as The Mandalorian, with its highly reflective armor, or subsequent jobs such as a Chemical Brothers music video and Apple’s Masters of the Air series, which feature a reflective Airstream trailer and airplane fuselages respectively, getting those lighting effects right is critical.
To make in-camera VFX (ICVFX) work, having rock-solid data on the camera’s position is essential.
“The primary focus is to define the exact positional data of the camera,” Vadher-Lowe explained. “You want the position and orientation of the camera within 3D space in relation to that virtual environment. That way, you can seamlessly integrate your real-world footage with your virtual elements. But we’re not just capturing and tracking what the camera is doing. We want to know what the camera is seeing. And that is the camera frustum.
“The frustum is the virtual viewpoint of the camera. Essentially, it’s a truncated cone shape that refers to the area in 3D space that is visible to the camera. And that’s affected by certain parameters, whether that’s the camera’s field of view, the aspect ratio, near and far clipping planes, or the position and orientation of the camera in space.”
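The parameters Vadher-Lowe lists are exactly the inputs to a standard perspective projection. As a rough illustration (a generic OpenGL-style matrix, not Lux Machina’s or Unreal’s actual implementation), the frustum can be encoded from the field of view, aspect ratio, and clipping planes like this:

```python
import math

def perspective_matrix(fov_y_deg, aspect, near, far):
    """Build a standard perspective projection matrix from frustum
    parameters: vertical field of view (degrees), aspect ratio,
    and near/far clipping planes. Illustrative only."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), (2.0 * far * near) / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]
```

Everything the camera can see falls inside the volume this matrix maps to the screen; anything outside it never reaches the final image.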
As Vadher-Lowe put it, it’s not only about the actors’ performance—it’s also about rendering performance.
“If you have a very weighty Unreal scene, you need a considerable amount of processing power to be able to render it out at a high resolution. By tracking the camera, you’re able to do what we call frustum culling and render out the viewpoint of the camera. You’re essentially rendering only what the camera can see in real-time, because anything outside of that is not contributing to the final image.”
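The idea behind frustum culling can be sketched in a few lines. This toy test (a conservative bounding-sphere check, not the engine’s actual culling code) skips any object that lies entirely behind one of the frustum’s planes:

```python
def sphere_outside_frustum(center, radius, planes):
    """Conservative frustum-culling test: a bounding sphere is
    culled if it lies entirely behind any one frustum plane.
    Each plane is (nx, ny, nz, d) with the normal pointing inward."""
    x, y, z = center
    for nx, ny, nz, d in planes:
        if nx * x + ny * y + nz * z + d < -radius:
            return True  # fully outside this plane: skip rendering
    return False
```

Objects that pass every plane test are submitted to the renderer; everything else is discarded before it costs any GPU time.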
To facilitate camera tracking, Lux Machina has developed its own active tracking crown. “We’ve designed it to counteract the difficulties that you have working with motion capture and with traditional filming techniques. Your key rule when working with motion capture is that you want to minimize occlusion. Because the cameras are working by line-of-sight, they need to be able to see the object to capture data.

“You also want an environment that has minimal reflections, because any reflections can cause added noise. When you’re on a film set, everything is reflective and everything causes occlusion, so we’ve built a tracking crown which counteracts that, because we want to be visible to as many cameras as possible in that area.
“So, how do we attach it? We calibrate the nodal point of the lens in relation to where that tracking crown is. It’s really important that we have a really accurate calibration of that nodal point, because then you’re able to get the correct parallax between your foreground and your background.”
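The nodal-point calibration Vadher-Lowe describes amounts to a fixed rigid-body offset from the tracked crown to the lens. A minimal sketch (yaw-only rotation for simplicity; the real calibration solves a full 6-DoF transform) of applying that calibrated offset each frame:

```python
import math

def nodal_point_world(crown_pos, crown_yaw_deg, nodal_offset_local):
    """Given the tracked crown's world position and heading (yaw only,
    for this toy example), apply the calibrated crown-to-nodal-point
    offset to recover the lens nodal point in world space."""
    yaw = math.radians(crown_yaw_deg)
    ox, oy, oz = nodal_offset_local
    # Rotate the local offset by the crown's yaw, then translate.
    wx = crown_pos[0] + ox * math.cos(yaw) - oy * math.sin(yaw)
    wy = crown_pos[1] + ox * math.sin(yaw) + oy * math.cos(yaw)
    wz = crown_pos[2] + oz
    return (wx, wy, wz)
```

If that offset is even slightly wrong, the virtual environment pivots around the wrong point as the camera moves, and the foreground/background parallax breaks.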
Vicon’s suite of VFX motion capture tools is an essential part of the process.
Getting the right tool for the job
“My job is really to choose the correct tracking system for the job,” said Vadher-Lowe. “There are so many ways that you can track a camera and you really need to take into account the positives and negatives. With Vicon, you’ve got quick, low-latency mode. You’ve got Object Tracker, which is really quickly processing grayscale data and throwing it down a low-latency port, and that ensures that we are able to get the data from Shōgun into Unreal as quickly as possible.
“We’ve also got flexibility. I spoke earlier about lens calibration—you have the ability to make changes to that, you can de-rig and re-rig the camera. You also have accuracy. With Vicon you have submillimeter accuracy, which equates to sub-pixel accuracy with LEDs.”
The calibration process is also key to Lux Machina’s workflow.
“We use the Shōgun video camera calibration, and we calibrate it as if we’re calibrating motion capture cameras,” said Vadher-Lowe. “You’re collecting wand samples, and as you collect them it’s able to determine the radial distortion, your principal points, the entrance pupil threshold, and the skew of the camera—the position and the orientation. It’s a quick process that can take anywhere between 10 and 20 minutes per lens. When you’re working on a traditional film set where you may have a heavy lens package, knowing that you have the ease of a quick lens calibration means that you can get through that very quickly. You’re not holding a film set up.
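Radial distortion, one of the quantities the wand calibration solves for, is commonly modeled as a polynomial in the squared distance from the image center. This is a generic textbook sketch with illustrative coefficients, not Shōgun’s actual lens model:

```python
def apply_radial_distortion(xn, yn, k1, k2):
    """Apply a simple polynomial radial-distortion model (the kind a
    lens calibration estimates) to normalized image coordinates.
    k1 and k2 are illustrative coefficients, not real calibration data."""
    r2 = xn * xn + yn * yn
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return xn * scale, yn * scale
```

Once the coefficients are known, the engine can apply the same distortion to the rendered background so that real and virtual imagery line up through the actual lens.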
“We also build our own servers, which are heavily optimized for real-time tracking, especially in Unreal Engine.
“These are some of the great ways that you can really maximize your camera tracking.”
For more on the work of Lux Machina, see our recent spotlight in The Standard. To learn more about Virtual Production, click here.
Watch the full library of GDC presentations from the Vicon booth here:
FAQs
What is in-camera VFX in virtual production?
In-camera VFX in virtual production refers to the real-time creation of visual effects directly within the camera’s view during shooting. Instead of adding effects in post-production, in-camera VFX merges live action with digital backgrounds, environments, and effects using LED walls, real-time rendering, and camera tracking.
This approach allows filmmakers to see final-quality visuals while filming, delivering a more immersive experience for talent and creative teams and reducing the need for extensive post-production compositing.
How does camera tracking enable in-camera VFX workflows?
Camera tracking enables in-camera VFX workflows by capturing the precise position, orientation, and movement of the physical camera in real time. This tracking data is synchronized with virtual camera movement inside the real-time rendering engine, ensuring that virtual backgrounds and CG elements stay perfectly aligned with the live camera view.
Accurate camera tracking is essential for maintaining perspective, parallax, and lighting consistency between real and virtual elements — making in-camera VFX look seamless on set.
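Why parallax depends so heavily on tracking accuracy can be seen with a toy pinhole projection (a 1D simplification, purely for illustration): the same small camera move shifts a near object on screen far more than a distant one, and the rendered background must reproduce exactly that difference.

```python
def screen_x(point, cam_x, focal=1.0):
    """Project a point (x, depth) onto a 1D image plane for a camera
    at cam_x looking down +z (toy pinhole model, not engine code)."""
    x, z = point
    return focal * (x - cam_x) / z

# A 10 cm camera move: compare the on-screen shift of a prop 2 m away
# with a backdrop element 50 m away.
near_shift = screen_x((0.0, 2.0), 0.1) - screen_x((0.0, 2.0), 0.0)
far_shift = screen_x((0.0, 50.0), 0.1) - screen_x((0.0, 50.0), 0.0)
```

If the tracked camera position is off by even a small amount, near and far layers shift by the wrong relative amounts and the composite visibly slides.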
What role does motion capture play in virtual production?
Motion capture plays a key role in virtual production by capturing human movement and performance data that can be used to animate digital characters, creatures, and avatars in real time. Motion capture systems record body, face, and hand motion, enabling live performances to be translated into digital assets during filming.
By integrating motion capture with camera tracking and real-time rendering, virtual production teams can direct and preview character animation, interactions, and effects during the shoot rather than in post.
What are the benefits of in-camera VFX for film and broadcast?
In-camera VFX offers several benefits for film and broadcast production:
- Real-time visualization: Directors and talent see finished visuals on set.
- Faster creative decisions: Changes can be made instantly without waiting for post.
- Improved performance: Actors interact naturally with virtual environments.
- Reduced post workloads: Many tasks typically done in compositing are handled live.
- Consistent lighting and reflections: LED volumes provide realistic lighting cues that match the virtual scene.
These advantages help production teams deliver higher-quality visuals more efficiently and with greater confidence.
What challenges are involved in implementing in-camera VFX systems?
Implementing in-camera VFX systems involves several challenges, including:
- Technical complexity: Synchronizing LED volumes, camera tracking, real-time rendering, and live capture systems requires expert integration.
- Calibration and accuracy: Camera tracking and motion capture must be precisely calibrated to maintain visual consistency.
- Cost and infrastructure: LED walls, real-time engines, and high-performance hardware represent a significant investment.
- Skilled operators: Production teams need specialized training to operate and troubleshoot virtual production systems.
Despite these challenges, the creative and production benefits of in-camera VFX are driving wider adoption in film, broadcast, and commercial workflows.