Generator Shaders
Overview
The generator's purpose is to convert an RGB Input Texture into an emulated set of NTSC scanlines, so that the Decoder can then convert it back into an RGB texture, but with all of the artifacts that come from the NTSC encoding.
An NTSC scanline has many parts, but for the purposes of decoding, only two matter (and these are the outputs of the Generator):
- The Signal Texture: the visible part of each scanline's signal, containing the luma (brightness) and chroma (color) information for the scanline
- The Phases Texture: the phase of the colorburst wave, which tells us the reference hue for the colors encoded in the scanline
For much more detail on how the steps of this process actually work, read the Generating A Fake NTSC Signal page.
Generating the Phases Texture
The first step of the generator is to run the gen-phase shader, which, based on timing information fed to the shader, generates the second of those two parts (the colorburst phase for each scanline). The result is the Phases Texture, one of the two outputs of the Generator.
The state fed to the gen-phase shader includes timing information about the hypothetical machine that is generating the emulated signal. This includes the phase of the colorburst at the start of the frame as well as the change in that phase per scanline.
See the gen-phase shader documentation for all of the shader inputs.
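Conceptually, the gen-phase step amounts to accumulating a per-scanline phase from the frame's starting phase. Here is a minimal CPU-side sketch of that idea; the function and parameter names are illustrative, not the shader's actual inputs, and phases are expressed in fractional colorburst wavelengths:

```python
def generate_phases_texture(scanline_count, initial_frame_phase, phase_increment_per_scanline):
    """Return one colorburst phase value (in fractional wavelengths) per scanline."""
    phases = []
    phase = initial_frame_phase
    for _ in range(scanline_count):
        # Keep the phase in [0, 1) - one full colorburst wavelength.
        phases.append(phase % 1.0)
        phase += phase_increment_per_scanline
    return phases

# Example: a hypothetical machine whose colorburst phase advances by half a
# wavelength per scanline, so the phase alternates every line.
phases = generate_phases_texture(4, 0.0, 0.5)
# phases == [0.0, 0.5, 0.0, 0.5]
```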
Generating a Signal Texture
The next step is to take the RGB Input Texture (the texture that the generator is converting) and the just-generated Phases Texture, and pass them both to the rgb-to-svideo-or-composite shader. This shader turns the RGB pixel information into an emulated NTSC scanline texture (equivalent to item #1 in the above list), which we'll call the Signal Texture.
This shader uses the Phases Texture to determine how to generate the color carrier waves to represent the colors from the input texture.
This shader can also pad out the generated signal on the sides (generating extra signal on the left and right sides of each scanline) to reduce visible artifacting at the edges of the screen where clamped addressing would do the wrong thing without extra texels to sample from.
Depending on the parameters sent to the shader, the Signal Texture can be either an emulation of a composite video signal (a classic NTSC signal, where the luma and chroma are carried in the same channel) or an S-Video signal (where the chroma and luma are in separate channels, greatly increasing the fidelity of the decoded signal).
See the rgb-to-svideo-or-composite shader documentation for all of the shader inputs.
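To make the encoding concrete, here is a simplified sketch (not the actual shader code) of what happens per scanline: each RGB pixel is converted to YIQ, and I and Q are modulated onto the color carrier at an angle offset by that scanline's colorburst phase from the Phases Texture. All names, the samples-per-wave parameter, and the exact modulation convention are assumptions for illustration:

```python
import math

def rgb_to_yiq(r, g, b):
    # Standard NTSC RGB -> YIQ conversion matrix.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.5959 * r - 0.2746 * g - 0.3213 * b
    q = 0.2115 * r - 0.5227 * g + 0.3112 * b
    return y, i, q

def encode_scanline(rgb_pixels, colorburst_phase, samples_per_wave=4, svideo=False):
    """Encode one scanline as composite (single channel) or S-Video (luma, chroma)."""
    luma, chroma = [], []
    for x, (r, g, b) in enumerate(rgb_pixels):
        y, i, q = rgb_to_yiq(r, g, b)
        # Carrier angle at this sample, offset by this scanline's colorburst phase.
        angle = 2.0 * math.pi * (x / samples_per_wave + colorburst_phase)
        c = -i * math.sin(angle) + q * math.cos(angle)
        luma.append(y)
        chroma.append(c)
    if svideo:
        return luma, chroma  # separate channels: higher decode fidelity
    return [y + c for y, c in zip(luma, chroma)]  # summed into one channel: composite
```

Note how a neutral gray pixel produces zero chroma (I and Q are both zero), so its composite signal is just the flat luma level; only saturated colors put energy into the carrier wave.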
Optional: Applying Artifacts
The Signal Texture could be sent directly out of the generator, but if we want to add any artifacts (noise and ghosting to emulate a less-than-perfect signal), the generator will finish by running the apply-artifacts shader and using the output of that as the actual Signal Texture.
This shader takes parameters that are predominantly in two categories: controls for the ghosting being applied, and controls for the noise (snow) being applied.
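An illustrative sketch of those two categories, applied to a single scanline's signal: ghosting is modeled here as a delayed, attenuated copy of the signal summed back in, and noise as a small random offset per sample. The parameter names are hypothetical, not the apply-artifacts shader's actual inputs:

```python
import random

def apply_artifacts(signal, ghost_delay=8, ghost_strength=0.2, noise_strength=0.05, seed=0):
    """Return a copy of the scanline signal with ghosting and noise mixed in."""
    rng = random.Random(seed)
    out = []
    for x, s in enumerate(signal):
        # Ghosting: mix in an earlier, attenuated sample of the same scanline.
        ghost = signal[x - ghost_delay] if x >= ghost_delay else 0.0
        v = s + ghost * ghost_strength
        # Noise (snow): a small random offset per sample.
        v += (rng.random() * 2.0 - 1.0) * noise_strength
        out.append(v)
    return out
```

With both strengths set to zero this is an identity pass, which matches the idea that the apply-artifacts stage is optional and the unmodified Signal Texture can be sent out of the generator directly.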