So, you've got an audio signal contaminated with a continuous tone at about 233 Hz, with a really strong second harmonic at about 465 Hz and some others throughout the audio spectrum. It sounds like a swarm of angry bees and makes the main signal hard to listen to. You can notch it out - that means applying a filter that simply removes the frequencies in question - but since 465 Hz is right in the important part of the speech band, the result is going to sound really bad. Any simple frequency-notch filter that blocks the interference is going to destroy things you want to keep, too. Look around the Net these past few days and you can read a lot of broadcast audio people complaining about this issue.
Well, with all due respect to Pafnuty Chebyshev, this isn't the 19th Century anymore, it's the 21st, and we have DSPs now. How hard is it to implement the following?
Digitize your audio and block it into 100 ms windows, that is, ten of them per second. Either do an FFT on it, or just multiply it by a few precomputed sines and cosines, whichever is faster, the point being that you want to get the complex amplitude of the signal in each frequency bucket that covers a harmonic of the interfering tone.
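That first step might look like the following sketch. The sample rate, the number of harmonics, and all the names here are my own illustrative assumptions; the multiply-by-precomputed-exponentials approach is just the "sines and cosines" option from above, written with complex exponentials for brevity.

```python
import numpy as np

SAMPLE_RATE = 48_000          # samples per second (assumed)
WINDOW = SAMPLE_RATE // 10    # 100 ms windows, ten per second
FUNDAMENTAL = 233.0           # Hz, the interfering tone
N_HARMONICS = 8               # how many harmonics to track (assumed)

# Precompute one complex exponential per harmonic; multiplying a window
# by these and summing projects out the complex amplitude in each
# frequency bucket that covers a harmonic of the interfering tone.
t = np.arange(WINDOW) / SAMPLE_RATE
harmonic_freqs = FUNDAMENTAL * np.arange(1, N_HARMONICS + 1)
probes = np.exp(-2j * np.pi * np.outer(harmonic_freqs, t))

def harmonic_amplitudes(window: np.ndarray) -> np.ndarray:
    """Complex amplitude of each tracked harmonic over one 100 ms window."""
    return probes @ window * (2.0 / WINDOW)
```

Because 233 Hz doesn't fit an integer number of cycles into 100 ms, there is a little spectral leakage, but for a tone this steady the amplitude estimate is still close.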
Now, you could simply take those amplitudes, set them to zero, and un-FFT. The result would be a digital version of the aforesaid notch filter, and it would sound bad. So here's what you do instead: for each frequency band under consideration, you keep an eye on the magnitude of that amplitude in successive windows, covering a period of the last five seconds. Then for your output, you subtract not the current amplitude, but the minimum amplitude you saw in the last five seconds.
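The minimum-tracking step above can be sketched like this. With 100 ms windows, five seconds is 50 windows. One detail the description leaves open is what phase to subtract at; using the current window's phase with the five-second minimum magnitude is one reasonable choice, and an assumption of mine.

```python
from collections import deque
import numpy as np

HISTORY = 50  # 100 ms windows in five seconds

class MinimumTracker:
    """Track, per frequency band, the minimum magnitude seen in
    the last five seconds' worth of windows."""

    def __init__(self):
        self.history = deque(maxlen=HISTORY)

    def update(self, amplitudes: np.ndarray) -> np.ndarray:
        """Record this window's amplitudes; return what to subtract.

        We subtract a signal with the *current* phase but the *minimum*
        magnitude from the last five seconds, so a constant tone is
        removed while brief speech energy in the same band survives.
        """
        self.history.append(np.abs(amplitudes))
        floor = np.min(np.array(self.history), axis=0)
        phase = np.exp(1j * np.angle(amplitudes))
        return floor * phase
```

Each window, you'd subtract the returned amounts from the corresponding frequency buckets before resynthesizing the audio.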
The interfering signal is more or less constant. It gets subtracted. But your desired speech signal consists of rapidly changing pulses and silences. If the desired signal happens to contain something that sounds like a 465 Hz tone briefly, it won't contain that all the time, so it will stand out above the five-second floor and it won't get subtracted.
This kind of technique - gather a baseline and subtract it - is something for which you pretty much need to use digital processing, though no doubt the old-time analog wizards found ways to implement it in the bad old days too. The usual kind of analog filter is multiplicative rather than additive: the output at any given frequency is a constant multiple of the input (a very small multiple, in the case of things it's trying to block), so it can't increase the proportional difference between the peaks and troughs of amplitude. An additive transformation is different because it can make a bigger proportional difference in smaller interfering signals than in larger desired signals.
People sometimes apply this style of noise removal across the board, to all frequencies, to remove generic "noise." The result doesn't sound all that good. However, I'm proposing to apply it just to the harmonics of a known interfering signal, and I think it'd work pretty well under the circumstances.
There is a hidden gotcha: the interfering background signal had better be really constant. If its volume increases, then you'll hear it until five seconds after it stops increasing. If its volume decreases, then you get some notching of the desired signal until five seconds after the decrease stops. I suggest that that may actually be a desirable feature: if the interfering signal carries any information (for instance, about sports fans' excitement level) then maybe you want its dynamic changes to be audible because they add to the atmosphere; you just don't want that constant background drone that drives away television viewers.
Any broadcast networks that want to pay large licensing fees for this idea, well, you know where to email.