Decoder Shaders
Overview
The Decoder starts with a Signal Texture and a Phases Texture (which may have come from the Generator, but could also be from a preprocessed, real NTSC signal) and converts them into an RGB Texture.
(That's right, running the Generator followed by the Decoder is an RGB -> RGB process, but the RGB that comes out of the Decoder will have the artifacts of a decoded NTSC signal -- since that's basically what it is).
For more specifics on how these pieces all work, read the Decoding A Fake NTSC Signal page.
Optional: Composite to S-Video
If the Signal Texture is a composite signal (as opposed to an S-Video signal), the Decoder first separates the luma (brightness) and chroma (color) information with the composite-to-svideo shader. The output of that is an S-Video Texture.
(If the Signal Texture was already S-Video, it is used directly as the S-Video Texture).
See the composite-to-svideo shader documentation for all of the shader inputs.
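The core of this separation can be sketched in Python rather than shader code. This is a minimal sketch, not the actual shader: it assumes the common approach of averaging the composite signal over one colorburst cycle to remove the chroma wave, and the function name, the 4-samples-per-cycle default, and the 1D-scanline framing are all illustrative assumptions.

```python
import numpy as np

def composite_to_svideo(composite, samples_per_cycle=4):
    """Split one composite scanline into luma and chroma (sketch).

    Averaging over exactly one colorburst cycle cancels the chroma sine
    wave, leaving luma; subtracting that luma from the composite signal
    leaves the chroma, so luma + chroma reconstructs the input exactly.
    """
    kernel = np.ones(samples_per_cycle) / samples_per_cycle
    luma = np.convolve(composite, kernel, mode="same")
    chroma = composite - luma
    return luma, chroma
```

By construction the two outputs sum back to the composite input, which mirrors how composite is just luma with chroma added on top.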
Modulated Chroma
For performance reasons (to reduce a bunch of sine and cosine evaluations per texel down to a single sine and cosine per texel), the chroma channel from the S-Video Texture needs to be pre-processed: it and the Phases Texture are run through the svideo-to-modulated-chroma shader, which performs the first part of the QAM process to get the actual color channels out as the Chroma Texture.
See the svideo-to-modulated-chroma shader documentation for all of the shader inputs.
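The "first part of the QAM process" amounts to multiplying the chroma signal by the quadrature carriers at each texel's colorburst phase. The sketch below is an assumption about that step (the function name and the phase-in-fractions-of-a-cycle convention are illustrative, not taken from the shader): the products it returns still need a lowpass filter, in a later step, to become the I and Q channels.

```python
import numpy as np

def modulate_chroma(chroma, phases):
    """First half of QAM demodulation (sketch).

    `phases` holds the colorburst phase at each texel, expressed in
    fractions of a cycle. Multiplying chroma by the sine and cosine
    carriers shifts the I and Q components down to baseband, where a
    subsequent lowpass filter can separate them from the 2x-frequency
    terms.
    """
    angle = 2.0 * np.pi * phases
    return chroma * np.sin(angle), chroma * np.cos(angle)
```

Note that only one sine and one cosine are evaluated per texel here, which is the performance point made above.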
S-Video to RGB
After that, both the S-Video Texture (for the luma information) and the preprocessed Chroma Texture (from the previous step) are sent to the svideo-to-rgb shader, which turns the luma and chroma information into a set of YIQ color space values; those are then converted to RGB and output as the RGB Texture.
This shader takes information about the various signal levels in the input. For a signal that came from the Generator, these are consistent values: a black level of 0 and a white level of 1. However, the same parameters can be used to decode a real NTSC signal (after it has been broken up into scanlines) by setting the level values to the measured voltages of black and white (additionally, the user saturation value would need to be multiplied by the colorburst amplitude measured by the code that processes the scanlines).
See the svideo-to-rgb shader documentation for all of the shader inputs.
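The level handling and the final color conversion can be sketched as follows. The black/white rescale matches the description above; the YIQ-to-RGB matrix uses the standard FCC NTSC coefficients, which may differ slightly from the exact constants in the shader, and the function name and signature are illustrative assumptions.

```python
import numpy as np

# Standard FCC NTSC YIQ -> RGB conversion matrix.
YIQ_TO_RGB = np.array([
    [1.0,  0.956,  0.619],
    [1.0, -0.272, -0.647],
    [1.0, -1.106,  1.703],
])

def svideo_to_rgb(luma, i, q, black_level=0.0, white_level=1.0):
    """Rescale luma by the signal levels, then convert YIQ to RGB (sketch).

    For a Generator signal, black_level=0 and white_level=1 make the
    rescale a no-op; for a real captured signal, they would be the
    measured black and white voltages.
    """
    y = (luma - black_level) / (white_level - black_level)
    yiq = np.stack([y, i, q], axis=-1)
    return yiq @ YIQ_TO_RGB.T
```

With zero chroma (I = Q = 0), the output is a neutral gray whose value is just the rescaled luma, which is a quick sanity check on the level parameters.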
Output & Next Step
The output of this whole process is the RGB Texture, which can be used as-is (without the CRT emulation), or it can be fed into the next (and final) main step: the CRT Emulator.