The eye-sore magenta of Netflix’s AI-assisted green screen bathes actors on set.

Compositing, or putting actors in front of a background that doesn’t exist, is as old as filmmaking itself and has always been difficult. Netflix has a new technique that relies on AI to do part of the hard work, but it requires lighting actors in a garish fuchsia.

Chroma keying, in which actors stand in front of a brightly colored background (originally blue, later green), was the simplest method of compositing for decades. This background was easy to identify and could be replaced with anything from a weather map to a fight with Thanos. The isolated foreground is referred to as the “matte,” and the background becomes a transparent “alpha” channel that can be manipulated alongside the red, green, and blue channels.
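The basic idea can be sketched in a few lines. This is a minimal, illustrative chroma-key implementation, not anything from Netflix’s pipeline: the key color and distance threshold are arbitrary assumptions, and real keyers handle spill, soft edges, and partial transparency far more carefully.

```python
import numpy as np

# Minimal chroma-key sketch: pixels close to the key color become
# background (alpha = 0); everything else is kept as foreground.
# Key color and threshold are illustrative, not from any real tool.

def chroma_key(image: np.ndarray, key=(0.0, 1.0, 0.0), threshold=0.3) -> np.ndarray:
    """Return a hard alpha matte (H, W) for an RGB image in [0, 1]."""
    dist = np.linalg.norm(image - np.asarray(key), axis=-1)
    return (dist > threshold).astype(np.float32)  # 1 = foreground, 0 = background

def composite(fg: np.ndarray, alpha: np.ndarray, bg: np.ndarray) -> np.ndarray:
    """Layer the matted foreground over a new background."""
    a = alpha[..., None]
    return fg * a + bg * (1.0 - a)
```

Anything green enough gets keyed out, which is exactly why green props, costumes, and reflections cause trouble.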

Although it is simple and inexpensive, there are a few drawbacks, including issues with transparent objects, fine details like hair, and, of course, anything else that is the same color as the background. But because it is usually good enough, attempts to replace it with more expensive, sophisticated methods like a light field camera have stalled.

However, Netflix researchers are attempting to improve on it with a mix of old and new techniques that could result in simple, near-flawless compositing, albeit at the expense of a terrible lighting setup on set.

Their “Magenta Green Screen,” described in a recent paper, basically puts the actors in a lighting sandwich and produces impressive results: bright green behind them (actively lit, not just a backdrop) and a mix of red and blue in front, creating a striking contrast between the colors. The resulting on-set appearance will probably offend even the most experienced post-production artist. Conventionally, you light your actors brightly with reasonably natural light, so although the footage may need a little punching up here and there, their in-camera appearance is more or less normal. Lighting them solely with red and blue, however, completely distorts that look, since normal light obviously doesn’t have a huge chunk of its spectrum carved out.

However, the method is also clever in that using only green and red/blue light makes separating the foreground and background far easier. A standard camera that would ordinarily capture red, green, and blue instead effectively captures red, blue, and alpha. Because there is no need to separate a full-spectrum foreground from a limited-spectrum key background, the mattes that come out of this are extremely accurate.
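In other words, since only the background emits green, the green channel can be read almost directly as an (inverted) alpha matte. A toy sketch of that observation, with the channel layout as an assumption:

```python
import numpy as np

# Sketch of why the magenta/green split makes matting near-trivial:
# the foreground is lit only with red and blue, so any green a pixel
# records came from the background. Foreground opacity is therefore
# simply 1 minus the green channel. Channel order (R, G, B) assumed.

def magenta_green_matte(image: np.ndarray) -> np.ndarray:
    """Return foreground opacity (H, W) from an RGB frame in [0, 1]."""
    return 1.0 - image[..., 1]
```

No thresholds, no color-distance heuristics: the matte falls out of the lighting itself, which is where the accuracy comes from.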

Naturally, they appear to have just added another problem: Compositing is now simple, but restoring the green channel to subjects lit in magenta is difficult.

A “naive” linear approach to injecting green results in a washed-out, yellowish appearance, and the right correction varies with the subject and composition. The restoration therefore needs to be done in a systematic and adaptive manner. How do you automate that? AI to the rescue!
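The naive approach amounts to synthesizing green as a fixed linear mix of the captured red and blue. The coefficients below are illustrative guesses; the point is that any single global mix applies the same green everywhere, which is why the result reads as flat and yellowish rather than matching each material in the scene.

```python
import numpy as np

# "Naive" green injection: reconstruct G as a fixed linear blend of
# the captured R and B channels. Weights are illustrative assumptions;
# no global pair of weights suits every subject and composition.

def naive_green(image_rb: np.ndarray, w_r=0.5, w_b=0.5) -> np.ndarray:
    """Expand an (H, W, 2) red/blue capture into an (H, W, 3) RGB guess."""
    r, b = image_rb[..., 0], image_rb[..., 1]
    g = w_r * r + w_b * b
    return np.stack([r, g, b], axis=-1)
```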

The team trained a machine learning model using their own “rehearsal” takes of similar scenes that were lit normally. The convolutional neural network is fed patches of the full-spectrum image to compare with the magenta-lit ones, and learns a process for quickly restoring the missing green channel in a more intelligent way than a simple calculation can.
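As a heavily simplified stand-in for that idea, the sketch below applies a single 3×3 convolution that maps a red/blue neighborhood to a predicted green value. The real system is a full trained network; here the weights are random placeholders, purely to show the patch-in, green-out structure.

```python
import numpy as np

# Stand-in for the paper's CNN: one 3x3 learned filter mapping local
# red/blue patches to a green estimate. In the real system the weights
# come from training on normally lit "rehearsal" takes; here they are
# random placeholders just to illustrate the data flow.

rng = np.random.default_rng(0)
weights = rng.normal(size=(3, 3, 2))  # 3x3 patch, 2 input channels (R, B)
bias = 0.0

def predict_green(image_rb: np.ndarray) -> np.ndarray:
    """Slide the 3x3 filter over the image (zero padding) to predict G."""
    h, w, _ = image_rb.shape
    padded = np.pad(image_rb, ((1, 1), (1, 1), (0, 0)))
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 3, x:x + 3, :]
            out[y, x] = np.sum(patch * weights) + bias
    return out
```

The key property, even in this toy form, is that the predicted green depends on the local neighborhood rather than a single global formula, which is what lets a trained model adapt to different materials and regions.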

So, although the color can be restored in post surprisingly well (the result is “virtually indistinguishable” from an in-camera ground truth), the actors and the set still need to be lit in this terrible way. Many actors already complain that working in front of a green screen feels unnatural; imagine doing it under this harsh, inhuman lighting as well.

However, the paper discusses the possibility of “time-multiplexing” the lighting, essentially switching between the magenta and green lighting many times per second, to address this issue. Doing this 24 times per second, the framerate at which most movies and television shows are shot, would be distracting and even dangerous; but if the lights cycle faster, at 144 times per second, the alternation appears “nearly constant.”
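The arithmetic behind those numbers is simple. This back-of-the-envelope sketch assumes a straight alternation between magenta and green intervals, which is an illustrative simplification rather than the paper’s exact schedule:

```python
# Back-of-the-envelope numbers for the time-multiplexing idea:
# at 144 Hz there are six lighting intervals per 24 fps film frame,
# so the camera can expose during the magenta intervals while viewers
# perceive the alternation as near-constant. A simple 50/50
# magenta/green alternation is assumed here for illustration.

LIGHT_HZ = 144
FILM_FPS = 24

intervals_per_frame = LIGHT_HZ // FILM_FPS    # 6 lighting intervals per frame
interval_ms = 1000 / LIGHT_HZ                 # ~6.94 ms each
magenta_exposure_ms = (intervals_per_frame // 2) * interval_ms  # usable exposure
```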

This, however, requires complex synchronization with the camera, which must capture light only during the brief moments the scene is lit magenta. They also have to account for the missing frames when reconstructing motion… As you can see, this is still very experimental. But it’s a fascinating way of taking on a long-established problem in media production with a new, cutting-edge approach. It wouldn’t have been possible a few years ago, and whether or not it is ever embraced on set, it’s clearly worth experimenting with.