NTSC and PAL are very fun. Using quadrature amplitude modulation to add color to the monochrome signal in a backward-compatible way was ingenious. PAL added things to make it better after some lessons learnt with NTSC, foremost the 180° phase shift with each line, which cancels out phase errors.
(Fun facts: Because of that, countries with PAL instead of NTSC never knew what a "tint control" on the TV was, and the nostalgic 80s VHS look is also missing that blueish/purple color shift there.)
I wrote both an NTSC+PAL decoder in Matlab, and a PAL encoder in an FPGA (also fixed point), which I use in an actual project as composite video output. Decoders are harder and less straightforward than encoders, with lots of optimizing potential, because you need to filter the color signal out (I look forward to seeing what this code did). If you really, truly understand how NTSC and PAL work in their mathematical details, you may get your mind blown away by reading about this decoder, which is likely the best PAL decoder that ever existed: http://www.jim-easterbrook.me.uk/pal/
Writing my own encoder and decoder was super enlightening, and there are tons of details to nerd out on, e.g. when you get into data that's encoded into the sync gap for Videotext, closed captions, and other stuff.
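For anyone curious, here's a toy sketch (Python/NumPy, not my actual Matlab or FPGA code) of the QAM chroma encoding and the per-line phase flip that gives PAL its name; the sample rate and constant-color scanline are simplifying assumptions:

```python
import numpy as np

FSC = 4_433_618.75   # PAL colour subcarrier (Hz)
FS = 4 * FSC         # sample rate, assumed 4x subcarrier for simplicity

def encode_chroma(u, v, line_number, n_samples=1024):
    """QAM-encode constant U/V for one scanline onto the subcarrier.
    PAL flips the sign of the V component on every other line, which is
    what lets phase errors cancel between adjacent lines."""
    t = np.arange(n_samples) / FS
    v_sign = -1.0 if line_number % 2 else 1.0
    return (u * np.sin(2 * np.pi * FSC * t)
            + v_sign * v * np.cos(2 * np.pi * FSC * t))
```

Summing the V-only chroma of two adjacent lines gives exactly zero, which is the phase alternation at work.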
In my dabblings with SDR, playing around with analog video was one of the highlights. Something about this technology that was such a prominent yet opaque feature of my youth and now being able to control it from the inside out felt extremely empowering.
I wrote my own decoder in python (nowhere near realtime) because at the time I couldn't find one that handled color. The deep dive on the design tradeoffs and signal processing hacks for NTSC was fascinating.
I can see just by the artifacts gif that you've definitely been down the same rabbit hole. Kudos for actually pulling through on also adding some scrambling, I only read about that (and since I don't have any descrambling hardware, I would have had to do that part, too).
I share the sadness about those signals being effectively gone. How fun it would have been to work with real live broadcast ones at their heyday. There still are a few ones, but they're mostly pretty basic now.
I know what NTSC is, and what Matlab is, what would be interesting to hear about now is how you got an NTSC signal turned into Matlab data. Was there some kind of video input hardware to digitise composite video for you, or were you just working from example data? Sounds like fun either way.
It's a good question! I simply sampled the composite output of my Blu-ray player using my oscilloscope, and exported it as CSV. The bandwidth for NTSC or PAL is 6MHz at best, so even the most crummy digital oscilloscope should do (provided it has enough memory to hold at least a full frame's worth of data).
The only Blu-ray I had lying around was Back to the Future II, so that's what I used. Here is one of the very first pictures from my decoder, you can see dot crawl artifacts all over the place from my shoddy filtering (if I filtered at all at that stage): https://i.imgur.com/k2ZiCrH.png
You can also see the insanely overkill sample rate I used (whatever my oscilloscope was set to, it could do 300MHz) by the numbers on the bottom. That's horizontal resolution, as sampled. But you won't get more than at most ~700 really discernible pixels with SD composite video's bandwidth.
Finally, this picture shows everything: The (green) color burst on the left, the data in the sync gap at the top, and the vsync pulses at the bottom.
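The very first processing step on a capture like that is just finding the sync pulses. A minimal sketch (the threshold and the toy waveform are made up for illustration, not taken from my decoder):

```python
import numpy as np

def find_hsync(samples, sync_threshold=0.1):
    """Find hsync pulse starts in normalised composite samples (0..1).
    Sync tips sit below blanking level, so threshold the signal and
    look for falling edges. (The fixed threshold is an assumption; a
    real decoder would clamp/AGC the signal first.)"""
    below = samples < sync_threshold
    return np.flatnonzero(~below[:-1] & below[1:]) + 1

# Toy scanline: blanking at 0.3, a sync pulse at 0.0, picture at 0.7
line = np.concatenate([np.full(100, 0.3), np.zeros(40), np.full(400, 0.7)])
print(find_hsync(np.tile(line, 2)))  # one detected edge per line
```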
Heh yeah, I know what you mean. It's also just really interesting to me how analog video degrades over bad transmissions[1]. With digital video, you usually don't get much more than just a disappearing stream, but analog video has/had all sorts of different failure modes, and they revealed a lot about the underlying encoding. And they all had their particular "look and feel" to them.
There was also a third color modulation scheme, SECAM, which was very different from NTSC/PAL (PAL is really just "NTSC 2.0"). It added the two color signals using plain FM. When degrading, it gave rise to something sometimes called "SECAM fire", and anyone who spent some time in France back then, me included, might subconsciously remember it: https://www.scheida.at/scheida/TV_SEITE/SECAM_Screenshot_Sec... and http://www.scheida.at/scheida/TV_SEITE/SECAM_Screenshot_Seca... (Those pictures are not from me, just found on the Internet.)
To the casual onlooker that probably just looks like noisy pictures, what's the big deal... But to me it's interesting because the noise is different.
[1] Of course, my picture is not a bad transmission, just bad decoding.
Yep, that's the reason! The ingenious idea of using quadrature amplitude modulation unfortunately also meant that slight phase shifts--which happen easily in broadcast TV--appear as color errors.
PAL fixed it by flipping the phase around every line. By combining the current line with the previous line, the phase error cancels out (but it's also more expensive, since you need to remember the last line, in a delay line or memory[1]). So, always the same color. (But not ATSC, that's something else entirely...)
[1] Well, there were supposedly "simple PAL" receivers in the very beginning, where no processing of the phase flip was done and your eye was supposed to average the difference out. Probably looked terrible unless you were sitting really far away...
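The cancellation is easy to show at baseband if you model chroma as the complex phasor U + jV (a toy model, not actual decoder code):

```python
import numpy as np

def pal_two_line_average(u, v, phase_error):
    """Toy model of PAL's two-line averaging. A static phase error
    rotates the chroma phasor U + jV; on the flipped line the receiver
    un-flips V (a complex conjugate), which reverses the rotation, so
    averaging two adjacent lines keeps the hue and only costs a little
    saturation (a factor of cos(error))."""
    c = u + 1j * v
    line_a = c * np.exp(1j * phase_error)                    # normal line
    line_b = np.conj(np.conj(c) * np.exp(1j * phase_error))  # flipped, re-flipped
    return 0.5 * (line_a + line_b)

avg = pal_two_line_average(0.3, 0.4, np.deg2rad(20))
# the angle (hue) of avg comes out unchanged despite the 20° error
```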
Old PAL receivers typically used a quartz glass block that served as a folded bulk acoustic wave delay line, driven by an ultrasonic transducer and keeping to the low ultrasonic range of the composite signal to get by with cheap transducer electronics.
I guess you could take a massive amount of them, and some capacitors, and use them for sample and hold. But you already covered insanity like that by saying that there is no good way...
This is one of those things I always found funny about PAL vs NTSC arguments.
There simply wasn’t a better option at the time. Both were engineered with limited resources to best fit their respective needs. NTSC had a better refresh rate, easier hardware integration, and 70+ years of backwards/forwards compatibility (a 1941 NTSC B+W television set could be used to watch the series finale of Friends in 2004 with a bog-standard antenna for over-the-air signals); PAL had almost 30 years of additional experience, fixed some of NTSC's major issues (primarily the phase errors), and traded refresh rate for increased resolution.
Both were pushing analog transmission bandwidth to the limits and did admirably until digital options were available and became the norm.
That was a problem early on, maybe, but probably by the 1980s and certainly by the 1990s, it was a dead issue in terms of consumer TVs that weren't antiques or damaged somehow.
I find it hilarious (in a mildly sad way) that NTSC (or any other analog system) is a far better system for final delivery when conditions aren't effectively perfect. Guarantee you that a digital broadcast will break down far before any analog broadcast will.
If only digital could be used for 99% of distribution, with OTA still being at least some sort of hybrid system for better resilience. Digital broadcasting has very low resilience to interference. Speaking from a US perspective here, where we got saddled with 8VSB (from my understanding, COFDM is much more resilient).
That's not right. Digital transmission systems will cope with the errors and regenerate a perfect signal up to a point called the digital cliff. With that amount of noise, an analog signal will be a very snowy, staticky picture.
In an analog signal, a bright picture will be 700mV, a less bright picture will be 500mV, a dark one at 200mV and a very dark one at 50mV.
Imagine that you get a 300mV error: that loses a lot of data - the dark one will be sort-of bright, or vice versa, or the sort-of bright will be really bright, etc.
Now let's say you have a 1 represented by an amplitude of 700mV, and a 0 represented by 50mV. A 300mV error will put them at 400mV and 350mV. Your system will go "hey, this is 350mV, it's a zero", and reclock it to 50mV, and vice versa. The recovered signal will be identical up to the point where the errors flip the bits - the digital cliff.
(In reality of course the signal isn't just high/low, but uses various techniques to encode the ones and zeros to prevent DC voltages accruing - Manchester encoding, for example.)
Push an analog video signal down a cable and it will degrade the longer the cable is. The same happens in the digital domain -- you can see this on a waveform monitor by looking at the digital eye, but as long as the eye is open enough, the signal can be restored to its original state. With an analog signal, whatever is lost is lost and can't be retrieved.
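The reclocking described above is easy to demonstrate with a toy simulation (the 700mV/50mV levels are from the example above; the additive Gaussian noise model is my own assumption):

```python
import numpy as np

# Toy model of the "digital cliff": a 1 is sent as 700mV, a 0 as 50mV,
# and the receiver reclocks by comparing against the midpoint (375mV).
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 10_000)
levels = np.where(bits == 1, 700.0, 50.0)

def bit_error_rate(noise_mv):
    noisy = levels + rng.normal(0.0, noise_mv, levels.size)
    recovered = (noisy > 375.0).astype(int)  # reclock: snap back to 0/1
    return np.mean(recovered != bits)

# Moderate noise: every bit reclocks perfectly. Past the cliff: collapse.
print(bit_error_rate(50.0), bit_error_rate(400.0))
```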
Like I said, part of my gripe is really with 8VSB, not digital OTA in general. At least in the US, digital OTA signals break down into an unwatchable state far quicker than analog did (analog might be snowy and staticky, but our brains are wonderful at poking through that and understanding the content).
This sounds amazing! It's great to hear that you've created software for decoding analog NTSC video entirely in the digital domain. I'm a member of the Domesday project and community, where developers are currently trying to improve decoding of RF signals captured directly from the head of a LaserDisc player. Have you heard of this?
The same author, LMP88959/EMMIR, writes this really cool indie game called King's Crook. It uses a lot of the rendering, modeling, texturing, and sound design style of Jagex from back when the brothers Gower first worked on RuneScape 2.
Really cool, mind-blowing stuff. I think this guy is one of the few people in the world creating the software for that specific content creation pipeline.
He's got the exact MIDI style down and everything.
Wow, thanks for mentioning that. I was hugely into RuneScape back in the early 2000s, but stopped around RS2. I still run a classic server from time to time so I can relive that. King's Crook looks really cool, will definitely check that out.
Amigas could use a CRT as a display (I had to do that on more than one occasion). Amigas also had a few different expansion cards that made doing TV work possible. The Toaster was the biggie for doing actual TV work and not just displaying a signal. There were other cards whose names I'm losing right now, like the card that let you move your animation frame sequences to a hard drive built onto the card and play them back in real time as component video. Then there was the Kitchen Sync, which allowed a couple of non-genlocked inputs to serve as usable/stable sources for the Toaster.
Edit: By CRT I meant just any monitor with RCA composite video input compared to a monitor with a VGA type input.
Genlock support in the chipset was quite neat. The genlock contained a Phase Locked Loop that provided the 28.63636 MHz clock input on the video connector so that the entire system ran off that clock which was synchronized to the video stream. One of the bit planes then controlled if the Amiga's video would overlay the external video source. That was incredibly forward thinking of Jay Miner compared to the offerings of other platforms at the time.
Are you saying Jay Miner created Genlock or just that it was forward thinking to include on the Amiga?
I'm not sure I'd agree with either, as I don't recall the Amiga having genlock abilities at all. Only 3rd-party expansion cards gave the Amiga the ability to genlock to external video sources so that it was in sync with the rest of the house. For proper genlock, you have to have some sort of input (typically a black video signal) to use as a sync source, and I just don't recall an Amiga natively having any kind of input like that.
The horizontal and vertical syncs[1] (along with the system clock) can be inputs on the video connector. That's what enabled all of the 3rd party genlock hardware. It was designed into the Agnus from the start.
Without that you'd need an external framebuffer like an infinite window TBC, which is much more expensive.
No I get what's meant though. The fact that the Amiga apparently had the native ability to sync its PLL to incoming sync signals (and presumably other clocks are derived from that?) is, for the time and for a general purpose micro at least, nothing short of amazing to me.
That may be true for the author, but it's pretty common to use fixed point integers for DSP. It's less expensive in hardware, and very convenient. You don't need an FPU or a floating point DSP, it's easy to implement in FPGAs (which often have integer adders and multipliers as extra blocks also), and fixed point integer algorithms are pretty easy to characterize in terms of SNR and other tradeoffs (MATLAB has an entire "Fixed Point" toolbox for that, and it's great).
Often in DSP, there is just no good reason to use floating point instead of fixed point integers.
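For example, a minimal Q15 fixed-point sketch in Python (the names and the plain truncating shift are my own simplification, not MATLAB's Fixed Point toolbox; a real DSP path would also saturate and round):

```python
# Q15 fixed point: reals in [-1, 1) stored in 16-bit integers.
Q = 15
ONE = 1 << Q

def to_q15(x: float) -> int:
    """Convert a real in [-1, 1) to its Q15 integer representation."""
    return int(round(x * ONE))

def q15_mul(a: int, b: int) -> int:
    """Multiply two Q15 values: full-precision integer product,
    then shift back down into Q15 (truncating)."""
    return (a * b) >> Q

a, b = to_q15(0.5), to_q15(0.25)
print(q15_mul(a, b) / ONE)  # prints 0.125
```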
Also, fixed-point math can be more accurate than floating-point math on computers, and it is easy to accomplish. Even 32-bit numbers can get very big/small when a struct of two 32-bit integers is used, one for the number and one for the exponent.
It's true that that's not an exponent, though, and that was what they were replying to. Exponents make floating point. I think the poster before that meant to say "fractional part" instead.
Either you've got a 64-bit fixed-point number, 64Q32, or you've just invented your own floating-point format, and it's not clear which you're trying to describe.
Back in the day, this was the case for the ARMv5 CPUs used by a major phone manufacturer and the software JPEG decoder couldn’t use floats. IIRC you only really needed them when doing YUV -> RGB conversions, so that was all done through integer arithmetic instead.
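The integer trick looks roughly like this (a hedged sketch using BT.601-style coefficients pre-scaled by 256; the exact constants that firmware used are unknown to me):

```python
# Integer-only YUV -> RGB: each float coefficient is pre-scaled by 256,
# so the conversion needs multiplies and shifts but no floats.
def yuv_to_rgb_int(y, u, v):
    u -= 128
    v -= 128
    r = y + ((359 * v) >> 8)            # ~1.402 * 256
    g = y - ((88 * u + 183 * v) >> 8)   # ~0.344 * 256, ~0.714 * 256
    b = y + ((454 * u) >> 8)            # ~1.772 * 256
    clamp = lambda c: max(0, min(255, c))
    return clamp(r), clamp(g), clamp(b)

print(yuv_to_rgb_int(128, 128, 128))  # grey stays grey: (128, 128, 128)
```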
Those types of emulations are missing one little touch - real-time emissive lighting and screen-space reflections that affect the bezel (unless I'm mistaken, it doesn't do that, right?)
I notice these effects in videos of real old CRTs and my own old CRT systems.
Stuff like this is one argument I have against language version deprecation. Python 3.* is great, for example, but in the year 2199 people should still be able to write and use Python 2.7. That doesn't mean someone has to keep adding features or fixing non-security interpreter bugs. It just means the last release version is made available by python.org and supported by OSes forever. Same with Perl, Rust, Go, etc...
The first step for any NTSC decoder is to go to YIQ. From there, you go to any other color space of your choice.
This is mainly for decoding the composite video signal generated by a computer or console. For example, the NES generates a 12-step square wave for its video output, and you'd decode that to YIQ then convert that to RGB.
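For reference, the YIQ-to-RGB step is just a 3x3 matrix (these are the commonly cited rounded coefficients, not necessarily the exact values this code uses):

```python
import numpy as np

# Standard NTSC YIQ -> RGB conversion matrix (rounded coefficients).
YIQ_TO_RGB = np.array([
    [1.0,  0.956,  0.621],   # R
    [1.0, -0.272, -0.647],   # G
    [1.0, -1.106,  1.703],   # B
])

def yiq_to_rgb(yiq):
    """Convert a [Y, I, Q] triple to [R, G, B]."""
    return YIQ_TO_RGB @ np.asarray(yiq, dtype=float)

print(yiq_to_rgb([0.5, 0.0, 0.0]))  # zero chroma decodes to neutral grey
```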
I am confused about this myself. On the surface it seems to just be NTSC-izing (adding artifacts to) an existing image in PPM format.
Cool I guess.
usage: ./a.out -m|o|f|p|h outwidth outheight noise infile outfile
sample usage: ./a.out -op 640 480 24 in.ppm out.ppm
sample usage: ./a.out - 832 624 0 in.ppm out.ppm
-- NOTE: the - after the program name is required
------------------------------------------------------------
m : monochrome
o : do not prompt when overwriting files
f : odd field (only meaningful in progressive mode)
p : progressive scan (rather than interlaced)
h : print help
It appears to encode and decode in one go, optionally adding noise. Should be easy to pull it apart and do either, in which case it looks like it would be able to produce sampled composite video, or process sampled data respectively.