Decoding A Fake NTSC Signal

Previous: Generating a Fake NTSC Signal


In the previous step, we took an RGB input image:

and turned it into a set of reference phases (left) and an image texture (right):

Now we want to take this fake NTSC signal and turn it (back) into an RGB signal. Let's get started!

A quick aside: the decoder works with information from a real composite or S-Video signal as well; it doesn't have to be a fake signal! A true signal would just need to be pre-processed to get the colorburst phase for each scanline, and trimmed down to just the visible portion of each scanline.
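As an illustration of what that pre-processing might look like, here is a minimal sketch of estimating a scanline's colorburst phase by correlating the burst samples against a quadrature pair at the carrier frequency. This is not Cathode Retro's actual preprocessing; the function name, the burst window parameters, and the cosine-based burst convention are all assumptions for the sake of the example.

    import numpy as np

    def estimate_colorburst_phase(scanline, burst_start, burst_len, samples_per_cycle=4):
        """Estimate one scanline's colorburst phase as a fraction of a wavelength.

        scanline: 1D array of composite samples.
        burst_start, burst_len: sample range covering the colorburst
            (ideally a whole number of carrier cycles).
        samples_per_cycle: samples per color carrier wavelength
            (4 in this article's example).
        """
        burst = scanline[burst_start:burst_start + burst_len]
        t = np.arange(burst_len) * (2.0 * np.pi / samples_per_cycle)

        # Correlate the burst against a cosine/sine pair at the carrier frequency.
        in_phase = np.sum(burst * np.cos(t))
        quadrature = np.sum(burst * np.sin(t))

        # For a burst of the form cos(w*n + phase), this recovers "phase".
        phase = np.arctan2(-quadrature, in_phase)
        return (phase / (2.0 * np.pi)) % 1.0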

Luma/Chroma Separation

This stage can be skipped for S-Video signals, but if we have a composite signal (like in our example), we need to first separate out the luma and chroma information.

Over the years, TVs used many different methods to separate the chroma and luma information. As electronics got better, TVs got smarter about the separation, reducing the artifacts it introduced (even going as far as using neighboring scanlines and successive frames as 2D or 3D comb filters). However, Cathode Retro is explicitly designed to have these artifacts, so instead it uses one of the simplest possible filters: a box filter.

A box filter simply averages, for each output texel, the N input texels centered on that texel (where N is the width of the filter). If we choose a filter width equal to one wavelength of our color carrier (4 texels in our example), running the box filter over our signal gives us the luma (note that this looks effectively like the composite image, but without the chroma stripes):

Why does this work?

The above diagram has a width of a single wavelength. If we divide it up into 8 equal sections, take the sine value at each, and average them, the result is 0. In this example, we have:

-0.3826 - 0.9238 - 0.9238 - 0.3826 + 0.3826 + 0.9238 + 0.9238 + 0.3826 == 0

This works no matter how the sine wave is positioned:

This is entirely because the samples in the second half of the wavelength are the exact negatives of the corresponding samples in the first half, no matter how the wave is positioned.
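A quick numeric check of that claim (a throwaway Python snippet, not part of Cathode Retro): sampling at the centers of the 8 sections gives exactly the ±0.3826/±0.9238 values listed above when the offset is zero, and the average stays at zero for any other offset as well.

    import numpy as np

    # Eight samples evenly spaced across one sine wavelength, tried at several
    # arbitrary phase offsets: the average is always zero.
    for phase in np.linspace(0.0, 2.0 * np.pi, 9):
        samples = np.sin(2.0 * np.pi * (np.arange(8) + 0.5) / 8.0 + phase)
        print(f"{samples.mean():+.12f}")  # prints 0 to 12 decimal places every time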

So now that we've pulled the luma out, we can subtract it from the original signal to get the chroma (note that the example image has been biased so that 0 sits at 0.5, making the wave visible at both positive and negative values):

This image is pretty subtle, but if you look closely you can see the chroma wave ripples in places where there is color, and none where there is not.
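For readers who prefer code to prose, here is a rough CPU-side sketch of this separation stage. Cathode Retro actually does this in a shader; the numpy version below assumes a single 1D composite scanline and the 4-texel carrier wavelength from the example.

    import numpy as np

    def separate_luma_chroma(composite, samples_per_cycle=4):
        """Split one composite scanline into luma and chroma with a box filter."""
        # Box filter whose width is one color carrier wavelength.
        kernel = np.ones(samples_per_cycle) / samples_per_cycle

        # The centered average cancels the carrier wave, leaving the luma.
        # (With an even-width kernel the "center" is off by half a texel, and
        # edge texels are averaged against zero padding; fine for a sketch.)
        luma = np.convolve(composite, kernel, mode="same")

        # Whatever the box filter removed is the chroma.
        chroma = composite - luma
        return luma, chroma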

In practice, these live in the same texture, with luma in the red channel and chroma in the green channel, so the actual output of this process looks more like this (again, with chroma biased by 0.5):

YIQ Returns

Now that we have separate luma and chroma information, it's time to go back into the YIQ color space. We already have Y (it's the luma channel), but we need to recover the I and Q channels.

We'll return once again to quadrature amplitude modulation to do this, this time pulling the image back apart.

The first step is to modulate the chroma with the carrier and quadrature waves we already have (these are exactly the same waves that were used in the generation process, since we build them from the same per-scanline reference phases).

If we multiply the carrier and quadrature waves by our chroma (the "modulation" part of "quadrature amplitude modulation") and then run another box filter (average) over those modulated values (this time with a width of twice the color carrier wavelength, which helps reduce artifacts at color changes), the average of chroma times carrier becomes the I component, and the average of chroma times quadrature becomes the Q component. This is a rough inversion of the process that generated the original wave.
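Continuing the sketch from the separation stage, this is roughly what that demodulation looks like in numpy. The function name, the sine/cosine convention for the carrier and quadrature waves, and the factor of 2 that undoes the halving introduced by the averaging are assumptions here, not Cathode Retro's exact shader math.

    import numpy as np

    def chroma_to_iq(chroma, ref_phase, samples_per_cycle=4):
        """Demodulate one scanline of chroma back into I and Q."""
        # The same carrier/quadrature pair that was used during generation,
        # built from this scanline's reference phase (fraction of a wavelength).
        t = (np.arange(len(chroma)) / samples_per_cycle + ref_phase) * 2.0 * np.pi
        carrier = np.sin(t)
        quadrature = np.cos(t)

        # Box filter twice the carrier wavelength wide, to soften artifacts
        # at color changes.
        width = 2 * samples_per_cycle
        kernel = np.ones(width) / width

        # Averaging sin^2 (or cos^2) over whole wavelengths yields 1/2, so
        # multiply by 2 to recover the original I and Q amplitudes.
        # (Edge texels are attenuated by the zero padding; ignored here.)
        i = 2.0 * np.convolve(chroma * carrier, kernel, mode="same")
        q = 2.0 * np.convolve(chroma * quadrature, kernel, mode="same")
        return i, q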

Now that we have our I and Q components (and Y, of course), it's time to get the RGB output!

RGB Output

Converting from YIQ back into RGB is as well-documented as going the other way:

            R = Y + 0.9469*I + 0.6236*Q
            G = Y - 0.2748*I - 0.6357*Q
            B = Y - 1.1000*I + 1.7000*Q
          

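Applied per pixel, that's just a small matrix multiply. Here's a minimal numpy sketch of it; the final clamp to [0, 1] is an addition of mine, on the assumption that strongly saturated colors landing slightly outside the displayable range are simply clipped.

    import numpy as np

    def yiq_to_rgb(y, i, q):
        """Convert Y, I, Q planes (same-shaped arrays) into an RGB image."""
        r = y + 0.9469 * i + 0.6236 * q
        g = y - 0.2748 * i - 0.6357 * q
        b = y - 1.1000 * i + 1.7000 * q

        # Clamp to the displayable range (assumption: out-of-range values
        # are clipped rather than handled some other way).
        return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)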
Once we've done that and clipped off the extra padding that was added in the generation phase, we end up with our decoded RGB image:

If you ignore the clearly (and intentionally) wrong aspect ratio, this is very representative of the input, just roughed up a little bit through the process.

Temporal Aliasing

This is great for a single frame, but for moving images there is more to consider: some systems (like the NES and SNES) jittered their per-scanline reference phases every frame to alleviate some of the harshness of the signal.

This jitter looked great on a CRT running at a locked ~60fps, but on a modern LCD, in an emulator that isn't always running at a rock-solid framerate, it can be beneficial to reduce that aliasing by leveraging the fact that we're emulating these signals on a GPU.

For more information on the specifics of how the shaders are set up for this step, check out the Decoder Shaders page.

Next: Reducing Emulated Temporal Aliasing