Quack
Jan 1, 1970
Hi,
I am thinking about ways to make a video switch that switches more
cleanly than just cutting at a random point in the signal.
After searching around, it appears to be quite a complex thing to
achieve.
The main method I have seen is to keep some kind of buffer of each
incoming signal, used as slack so that all the inputs can be synced
together; then when the output is switched, the signal doesn't break up.
In my application (4 inputs, 1 output) I don't see the need to have
every stream synced together. Why not instead switch 'just before' a
sync? Would that not work?
For example, suppose a PIC (or some faster processor) receives commands
to switch, and it is asked to switch to input 4.
It would set itself up to monitor input 4 and wait for a SYNC.
Once it sees a sync it starts counting, and just BEFORE the next
sync is due (a time-based calculation) it switches input 4 ON
(and the original input, whatever it was, OFF). This would make
input 4's SYNC the first thing out of the output once switched
(**even though the last frame of the old input may not have been complete**).
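The timing I have in mind can be sketched like this (a rough Python model, not PIC code; the function names are made up, and the two-scan-line guard interval before the next sync is an assumption):

```python
# Sketch of "switch just before sync" timing for PAL.
# All times in microseconds.

PAL_LINE_US = 64.0                              # one PAL scan line = 64 us
PAL_FIELD_LINES = 312.5                         # 625 lines interlaced -> 312.5 per field
PAL_FIELD_US = PAL_LINE_US * PAL_FIELD_LINES    # 20,000 us = 20 ms per field

GUARD_US = 2 * PAL_LINE_US   # assumed: flip the mux ~2 lines early

def switch_deadline(last_vsync_us):
    """Having seen a vertical sync on the NEW input at last_vsync_us,
    return the time to actually flip the mux: just before the next
    vertical sync is due (time-based, counted from the last one)."""
    return last_vsync_us + PAL_FIELD_US - GUARD_US

def run_switch(vsync_times_us, command_time_us):
    """Wait for the first vsync on the new input after the switch
    command arrives, then count down to just before the next vsync."""
    first_sync = next(t for t in vsync_times_us if t >= command_time_us)
    return switch_deadline(first_sync)
```

With syncs 20 ms apart and a command arriving mid-field, the model waits for the next sync on the new input and flips the mux 128 us (two scan lines) before the following sync, so the new input's sync is the first thing out.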
Would this 'incomplete frame' of the previous signal cause much
disturbance in the video capture process?
It seems logical that it would be less of a problem than getting half
of one frame, then possibly half (or more) of another frame, and so
on, due to random switching times.
Would the improvement be noticeable/worth the effort?
Currently the technique is to switch at the moment the command is
received and decoded, wherever that falls in the signal. This
creates a SYNC problem in the video capture process, and the first few
frames after a switch are usually white/black/all over the place, as
you would expect.
To get around this, we insert a pause after each switch command in the
encoding process, so that these bad frames don't end up in the
final encoded stream.
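As a rough back-of-envelope for that pause (the bad-frame counts below are assumptions; the real number depends on how quickly the capture device re-locks):

```python
# One full PAL frame: 625 lines x 64 us = 40,000 us = 40 ms.
PAL_FRAME_MS = 625 * 64 / 1000.0

def pause_ms(bad_frames):
    """Minimum encoder pause needed to skip `bad_frames` corrupted frames."""
    return bad_frames * PAL_FRAME_MS

# Random-point switching: if ~3 frames come out garbage, pause >= 120 ms.
# Sync-timed switching: at worst one partial frame, so pause ~40 ms.
```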
Using this 'switch just before sync' technique, would the pause at
least be shorter?
Perhaps I'm way off here; any opinions?
After all that, I think my question boils down to something simpler:
which is better during a switch, a video signal with a frame that is
too long (more than 625 scan lines) or one that is too short (fewer
than 625 scan lines)?
Alex.
PS: this is only in regards to PAL signals.