Generating a Fake NTSC Signal

A key component of Cathode Retro is running an input image through an emulation of an NTSC signal (so that we can decode it back to RGB later, with all of the NTSC artifacts applied).

We do not need to generate the entire color NTSC scanline, since we are not emulating a full NTSC signal decode. Instead, we need just two things:

  1. The visible portion of each scanline (a conversion of the input image)
  2. A per-scanline value that represents the reference phase for the given scanline (the equivalent of the colorburst in the back porch of an NTSC scanline)

To help illustrate this process, we are going to take the following image through the generation pipeline:

Reference Phase

The process starts by generating the per-scanline phase values. These are generated based on timing values of the hypothetical machine that is being used to generate this faux-NTSC picture (for instance, the timing values of a SNES and a Sega Genesis are completely different and result in different artifacts).

Each phase value is a value in the range of [0..1), measured in wavelengths of the color carrier frequency. This phase value is used as the baseline for a color carrier wave that is generated along every output scanline, with a wavelength (in output texels) chosen so that the horizontal output resolution is some multiple of it.

That is, for a given reference phase p, output texel x coordinate x, and color carrier wavelength c (in output texels), the carrier wave for that texel is:

              sin(2*π * (p + x / c))
            

Ultimately, you get a texture that looks something like this (this texture has been rotated 90° and then expanded vertically for visibility; normally it's a 1xN texture with one entry per output scanline):

Every element of that texture is a phase value, and is used as a lookup during generation and decode. In this example, the reference phase changes every scanline, in an emulation of the way the SNES output its color. These phases would look different on a Genesis, an Apple II, etc.
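
To make that concrete, here's a minimal C++ sketch of filling the per-scanline phase texture and evaluating the carrier wave for a texel. The names and timing parameters are illustrative only; the actual implementation lives in the generator shaders described later.

            #include <cmath>
            #include <vector>

            // Fill a 1xN "phase texture": one reference phase per scanline,
            // each in [0..1), measured in color carrier wavelengths.
            std::vector<float> GeneratePhaseTexture(
              int scanlineCount,
              float initialPhase,           // starting phase, in wavelengths
              float phaseIncrementPerLine)  // machine-dependent; e.g. 0.5 for
                                            // a standard NTSC line, which is
                                            // 227.5 color cycles long
            {
              std::vector<float> phases(scanlineCount);
              float phase = initialPhase;
              for (int y = 0; y < scanlineCount; y++)
              {
                phases[y] = phase;
                // Keep only the fractional part so the value stays in [0..1).
                phase = std::fmod(phase + phaseIncrementPerLine, 1.0f);
              }
              return phases;
            }

            // The carrier wave at output texel x, for reference phase p and
            // color carrier wavelength c (in output texels).
            float Carrier(float p, int x, float c)
            {
              constexpr float pi = 3.14159265358979f;
              return std::sin(2.0f * pi * (p + x / c));
            }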

The Image Output

Once we have our reference phase texture, we can convert the input image into a scanline texture (a texture where every row of texels is a row of the input encoded as if it were the visible part of an NTSC scanline), which will end up looking like this (you may need to zoom in to see details):

So what's going on in this image? Let's walk through how it gets generated!

YIQ

The first step when converting an RGB image is to convert it into the YIQ color space.

YIQ is the color space used by NTSC: Y is the luma component (corresponding to the perceptual brightness of the image, i.e. "how would this color look on a black & white TV?"), and I and Q together make up the chroma component of the color space.

The good news is that going from RGB to (FCC NTSC Standard SMPTE C) YIQ is well documented:

              Y =  0.30*R + 0.59*G + 0.11*B
              I = -0.27*(B - Y) + 0.74*(R - Y)
              Q =  0.41*(B - Y) + 0.48*(R - Y)
            

Doing that for each texel gives us a YIQ color which can then be turned into a composite (or S-Video) signal.
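
As a sketch, that per-texel conversion is just the three weighted sums above; here it is in C++ (the struct and function names are illustrative, and R, G, and B are assumed to be in [0..1]):

            // RGB -> YIQ for a single texel, using the weights above.
            struct YIQ { float y, i, q; };

            YIQ RGBToYIQ(float r, float g, float b)
            {
              float y = 0.30f * r + 0.59f * g + 0.11f * b;
              return
              {
                y,
                -0.27f * (b - y) + 0.74f * (r - y),  // I
                 0.41f * (b - y) + 0.48f * (r - y),  // Q
              };
            }

As a quick sanity check, pure white (1, 1, 1) yields Y = 1 and I = Q = 0: full brightness, no chroma.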

The Signal

Composite and S-Video signals both work with a luma baseline and a chroma carrier wave. In the composite case, they're simply added together to create the final signal, but they're left separate in S-Video.

Once the color is in YIQ form, we already have the luma component: it's just Y. But we need to take the IQ components and convert them into the chroma signal. We'll use a process called quadrature amplitude modulation. This is done by taking the carrier wave sine equation from above (using reference phase p, texel x coordinate x, and color carrier wavelength c), and also calculating the (negative) cosine:

            carrier = sin(2*π * (p + x / c))
            quadrature = -cos(2*π * (p + x / c))
          

This generates the carrier wave and its quadrature (which is really just a fancy way of saying "the same wave as the carrier, delayed by 90°"). The generated carrier wave for the output image looks like this (scaled and biased to ensure visibility; the quadrature is not displayed):

The carrier looks somewhat uneven in this example, but that's mostly a moiré-like effect owing to the way the phase shifts by a fractional offset per scanline; the wave itself is correct.

To encode the IQ color as a wave, we multiply the I at each texel by the carrier (pictured above) and add that to Q multiplied by the quadrature:

            chroma = carrier * I + quadrature * Q
          

Now we have our luma (Y) signal and our chroma signal. If we're generating an S-Video output, we write those out as two separate texture channels, otherwise we add them together and write out a single channel texture (which is what was done for the generated signal example image above).
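
Putting the modulation together, here's a minimal C++ sketch of encoding one texel (reusing the YIQ struct and illustrative names from the earlier sketch; this is not Cathode Retro's actual API):

            #include <cmath>

            struct SVideoSample { float luma, chroma; };

            // Encode one texel's YIQ color at output texel x, given the
            // scanline's reference phase p and carrier wavelength c.
            SVideoSample EncodeTexel(YIQ color, float p, int x, float c)
            {
              constexpr float pi = 3.14159265358979f;
              float t = 2.0f * pi * (p + x / c);
              float carrier = std::sin(t);
              float quadrature = -std::cos(t);  // the carrier, delayed 90°

              // Quadrature amplitude modulation: I rides the carrier and Q
              // rides the quadrature.
              float chroma = carrier * color.i + quadrature * color.q;
              return { color.y, chroma };
            }

            // For a composite signal, the two channels are simply summed:
            //   float composite = sample.luma + sample.chroma;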

Artifacts

Optionally, at this point, we can decide that this signal is just too clean and add some noise to it. This is straightforward: we add a little bit of static to each signal texel (you may need to zoom in to see details):
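
A minimal sketch of this step might look like the following, assuming a single-channel composite signal stored as a flat array of floats (the strength parameter and function name are illustrative):

            #include <cstdint>
            #include <random>
            #include <vector>

            // Add a little uniform static to every texel of the signal.
            // "strength" is the maximum deviation; seeding per frame keeps
            // the noise animated but reproducible.
            void AddStatic(
              std::vector<float> &signal, float strength, std::uint32_t seed)
            {
              std::mt19937 rng{seed};
              std::uniform_real_distribution<float> dist{-strength, strength};
              for (float &sample : signal)
              {
                sample += dist(rng);
              }
            }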

Now What?

For more information on the specifics of how the shaders are set up for this step, check out the Generator Shaders page.

Now that we have our two outputs from the generator (the reference phase texture and the generated signal texture), we can run them through the decoder!

Next: Decoding A Fake NTSC Signal