Maker Pro

Comparing phase of physically distant signals

Don Y

Hi,

Cisco has IEEE 1588 gear, but you need both host cards and a switch. There
are easily half a dozen companies making the chips, but the hardware
hasn't become mainstream yet.

Mine isn't a "compliant" implementation. Much less expensive for
the same level of performance (because I have other constraints that
I can impose on The System -- that Cisco et al. can't!)
For GPS timing, you need to ignore satellites at the horizon. Using a
GPS for timing is a bit different than using one for position. Some of
the GPS timing antennas are designed to ignore the horizon, but this
isn't universal.

You can also synchronize from cellular signals, especially LTE.

I'm not sure how well those external sources would work in an arbitrary
structure, etc. E.g., I can't operate any of my GPS units indoors,
here (residence) -- all of them need a clear view of the sky. I
suspect they wouldn't work in a conference room deep inside some
generic building -- undoubtedly fabricated out of lots of METAL!

[OTOH, a radio *designed* for this (my) purpose would have to take
that into account]
 
George Herold

Hi,

[posted to S.E.D in the hope that a hardware analog might exist]

I synchronize the "clocks" on physically distributed processors
such that two or more different machines can have a very finely
defined sense of "synchronized time" between themselves.

During development, I would measure this time skew (among other
factors) by locating these devices side-by-side on a workbench
interconnected by "unquantified" cable. Then, measuring the
time difference between two "pulse outputs" that I artificially
generate on each board.

So, I could introduce a disturbance to the system and watch to
see how quickly -- and accurately -- the "clocks" (think FLL and
PLL) come back into sync.

How do I practically do this when the devices are *deployed*
and physically distant (vs. "electrically distant" as in my
test case)?

Two ideas come to mind:
1) two equal length cables to connect the "pulse outputs"
from their respective originating devices to the test gear.
2) two *radios* to do the same thing -- after accounting
for different flight times

[Though I wonder how hard it is to qualify two different
radios to have the same delay, etc. Far easier to trim
two long lengths of wire to the same length!]

Of course, I would like to minimize the recurring cost of
any solution as it is just present for system qualification
(and troubleshooting) and offers no run-time advantage to
the design.

Thx,
--don

Don, if you don't find an answer here you might try the "time nuts" forum.

George H.
 
Paul E. Bennett

josephkk said:
On Sat, 03 Aug 2013 00:45:41 -0700, Don Y wrote:
[%X]
Of course, I would like to minimize the recurring cost of any solution
as it is just present for system qualification (and troubleshooting) and
offers no run-time advantage to the design.

Thx,
--don

Sounds like a use case for NTP (network time protocol) which already
keeps millions of computers synchronized to a few milliseconds across the
whole planet.

Don didn't mention any numbers with specific required tolerances, but the NTP
solution may work to an extent. If Don wants finer time-difference
granularity then he should look at LXI as a time protocol method (used in
finely timed instrumentation) which can, I believe, attain better than 40 ns.

--
********************************************************************
Paul E. Bennett...............<email://[email protected]>
Forth based HIDECS Consultancy
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-510979
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
 
Joerg

Don said:
Hi Joerg,

[...]
With calibrated radios that's a piece of cake to find out. Pulse-echo.
Or let the whole link oscillate.

With calibrated network interconnects, the problem wouldn't exist! :>
I'm trying to reduce the complexity of the measurement (verification?)
system to *a* device -- preferably something non-esoteric (so folks
don't have to buy a special piece of gear to verify proper operation
and diagnose problems).

The interconnects would calibrate themselves during the test. It doesn't
matter whether it's a radio link or a wired link.

I.e., if the distance to device 1 is A and the distance to device 2
is B, then there will be a difference between the signals arriving
that corresponds to the difference between A and B. Even after you
compensate for any differences in the radios themselves.

[The same is true with a single radio at device 1 "received" at
device 2 -- how much time did the RF signal take to get to device
2 cuz its arrival will occur after device 2's notion of the
device 1 event with which it is intended to correlate.]

That's the appeal of "two equal lengths of wire" -- the
delay through each can be made "identical".

But radio can measure them. How much precision do you need?

As I said (elsewhere), I can currently verify performance at the
~100-200ns level "with a scope". [I am trying to improve that
by an order of magnitude.] But, that's with stuff sitting on
a bench, within arm's reach (physically if not electrically).

So, I need a (external) measurement system that gives me at least
that level of performance. Without necessitating developing
yet another gizmo.

I am afraid you will have to develop a gizmo. Because there ain't enough
of a market for what you are trying to do and I doubt there is
off-the-shelf stuff. Except for very expensive test equipment that can
probably be programmed to do this job (the stuff that costs as much as a
decent car).

[E.g., the 'scope approach is great: set timebase accordingly;
trigger off pulse from device 1; monitor pulse from device 2; count
graduations on the reticule! This is something that any tech
should be able to relate to...]

But you need a link to each unit. And this is also easy if a scope
approach is acceptable:

a. Provide a full-duplex radio link to each unit, meaning two different
frequencies each. Can be purchased. Send a tone burst out, measure the phase
difference when it comes back.

b. Do the same with unit B.

c. Listen for timer tick and measure delta.

d. Subtract difference found in steps a and b.
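To make steps a-d concrete, here is a minimal sketch of the arithmetic
(Python; the round-trip readings are hypothetical, and halving the round
trip assumes a symmetric path -- neither is from any real instrument):

# Sketch of steps a-d. Times in nanoseconds; readings are made up.

def one_way_delay(round_trip_ns):
    # Half the measured round trip, assuming a symmetric path.
    return round_trip_ns / 2.0

rt_a = 310.0      # step a: tone-burst round trip to unit A
rt_b = 250.0      # step b: tone-burst round trip to unit B
raw_delta = 42.0  # step c: raw arrival difference of the two timer ticks

# Step d: remove the path asymmetry to get the actual clock skew.
skew = raw_delta - (one_way_delay(rt_a) - one_way_delay(rt_b))
print(f"estimated clock skew: {skew:.1f} ns")  # -> 12.0 ns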

You might want to take a look at Daqarta. I found it to be very precise
when it comes to phase delays:

http://daqarta.com/dw_0o0p.htm

Not sure how far down in precision this app would go but it'll even give
you a histogram of timing errors:

http://daqarta.com/dw_psth.htm

The author is very responsive and knowledgeable when it comes to
specialty applications you want to pursue. Speaking from experience here.
 
Don Y

Hi,

You need to run NTP and a GPSDO (GPS disciplined oscillator). You can
buy GPSDOs on ebay. Symetricom, Meinberg, etc. These days around $100 to
$300 a box. Note the wireless companies have tossed thousands of these
in the crusher. I got two from a cellular tech and have kicked myself
for not wiping the guy out once I found out what they go for on ebay.

My protocol is essentially PTP (much finer-grained control than NTP).
A GPS-based time reference *could* work -- if the signal was always
available inside the building (steel construction, etc.)

But, I would still need some way of interconnecting the reference
with the UUT -- i.e., a low jitter PPS output that I could use as
a reference against which to compare the corresponding reference from
the UUT.

Then, repeat the experiment at the other node and hope nothing in The
System has changed in the intervening time.

Check out ebay item 300943155756

I can tell from the text of the ad, this person knows what they are
doing. Lady Heather is the free program you can use to monitor your GPSDO.

You will also have to tune up NTP. If you are using Windows, you need to
know that Windows time is NOT NTP. Meinberg has a free Windows NTP program.

Again, I'm using PTP (sort of). And, have no particular concern
as to how well the "System Time" (as experienced on each of the
nodes in question) relates to "Wall (Clock) Time". The point is
to have a common timebase and reference throughout the system
(then, worry about how this relates to "Human" time)
 
Don Y

Hi George,

Don, if you don't find an answer here you might try the "time nuts" forum.

Thanks! I'll dig through their archives and see what sorts
of subjects they address (along with getting a feel for the
SNR, zealotry, etc.)!

--don
 
TunnelRat

Hi,

[posted to S.E.D in the hope that a hardware analog might exist]

I synchronize the "clocks" on physically distributed processors
such that two or more different machines can have a very finely
defined sense of "synchronized time" between themselves.

During development, I would measure this time skew (among other
factors) by locating these devices side-by-side on a workbench
interconnected by "unquantified" cable. Then, measuring the
time difference between two "pulse outputs" that I artificially
generate on each board.

So, I could introduce a disturbance to the system and watch to
see how quickly -- and accurately -- the "clocks" (think FLL and
PLL) come back into sync.

How do I practically do this when the devices are *deployed*
and physically distant (vs. "electrically distant" as in my
test case)?

Two ideas come to mind:
1) two equal length cables to connect the "pulse outputs"
from their respective originating devices to the test gear.
2) two *radios* to do the same thing -- after accounting
for different flight times

[Though I wonder how hard it is to qualify two different
radios to have the same delay, etc. Far easier to trim
two long lengths of wire to the same length!]

Of course, I would like to minimize the recurring cost of
any solution as it is just present for system qualification
(and troubleshooting) and offers no run-time advantage to
the design.

Thx,
--don

You might want to look into how NTP (the Network Time Protocol) works.
It is based on having a fairly consistent delay (or making multiple
measurements to average out the variations).

As I remember, the basic technique is to start with the two units A and
B having their own idea of what time it is; time advances at the same
rate for the two units, but there may be a skew between them.

Unit A sends a message to B, with the time stamp of when it sent the
message.
Unit B then receives the message, marks it with the received time stamp,
and then resends it to A, again adding the time stamp of the
transmission, and when A gets the answer back, it adds this time as the
fourth and final time stamp to the message.

From the 4 time stamps, you can calculate the time it took for a message
to get between A and B, and the difference in their clocks, allowing A
to adjust its clock to be in synchrony with B. The math assumes that the
flight time in each direction is the same; any difference contributes
an increased resultant skew between the clocks.
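A minimal sketch of that calculation (Python; this is the standard
NTP-style four-timestamp math, with made-up timestamps and the symmetric
flight-time assumption described above):

def offset_and_delay(t1, t2, t3, t4):
    # t1: A sends, t2: B receives, t3: B replies, t4: A receives.
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # B's clock relative to A's
    delay = (t4 - t1) - (t3 - t2)            # round-trip flight time
    return offset, delay

# Example: B runs 5 units ahead; one-way flight time is 10 units.
t1 = 100.0
t2 = t1 + 10 + 5   # arrives 10 later, stamped by a clock reading +5
t3 = t2 + 2        # B takes 2 units to turn the message around
t4 = t3 - 5 + 10   # back on A's clock: remove the +5 skew, add flight

print(offset_and_delay(t1, t2, t3, t4))  # -> (5.0, 20.0)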


Just examine the way the NIST used to do it in the POTS modem days.

They account for your machine delays and everything, all taken from
the send/receive pings and returns and timing those delays, etc.

We used to use an app called "Timeset" on our modem, and they could
resolve and set your time accurately to less than 0.1 second with that,
consistently.

Doing it over the net is likely a bit more difficult, but the same
process, nonetheless.
 
Don Y

Hi Joerg,

Tooth settled down, yet?
The interconnects would calibrate themselves during the test. It doesn't
matter whether it's a radio link or a wired link.

You'd have to locate a piece of kit at each end of each link.
I.e., three devices (assuming a common reference point) to
test two links. (Or, hope nothing changes in the minutes
or fractional hours that it takes you to redeploy for the
"other" link)

[E.g., the timing protocol updates itself at ~1Hz just to deal
with things like drift in the local oscillator]
I.e., if the distance to device 1 is A and the distance to device 2
is B, then there will be a difference between the signals arriving
that corresponds to the difference between A and B. Even after you
compensate for any differences in the radios themselves.

[The same is true with a single radio at device 1 "received" at
device 2 -- how much time did the RF signal take to get to device
2 cuz its arrival will occur after device 2's notion of the
device 1 event with which it is intended to correlate.]

That's the appeal of "two equal lengths of wire" -- the
delay through each can be made "identical".

But radio can measure them. How much precision do you need?

As I said (elsewhere), I can currently verify performance at the
~100-200ns level "with a scope". [I am trying to improve that
by an order of magnitude.] But, that's with stuff sitting on
a bench, within arm's reach (physically if not electrically).

So, I need a (external) measurement system that gives me at least
that level of performance. Without necessitating developing
yet another gizmo.

I am afraid you will have to develop a gizmo. Because there ain't enough
of a market for what you are trying to do and I doubt there is
off-the-shelf stuff.

That's what I've feared. Once you start having to support "test kit"
your TCO goes up, fast! ("authorized service personnel", etc.)

Part of the appeal of legacy devices (60's) was that a "tech"
could troubleshoot problems with common bits of test equipment.
E.g., the original ECU's could be troubleshot (?) with a VOM...
(no longer true though that market is large enough that OBD
tools are relatively inexpensive)
Except for very expensive test equipment that can
probably be programmed to do this job (the stuff that costs as much as a
decent car).
[E.g., the 'scope approach is great: set timebase accordingly;
trigger off pulse from device 1; monitor pulse from device 2; count
graduations on the reticule! This is something that any tech
should be able to relate to...]

But you need a link to each unit.

Yes. And, relies on a *wired* link (or equivalent hooks in a wireless
implementation)
And this is also easy if a scope
approach is acceptable:

a. Provide a full-duplex radio link to each unit, meaning two different
frequencies each. Can be purchased. Send a tone burst out, measure the phase
difference when it comes back.

b. Do the same with unit B.

This has to happen "nearly coincident" with "a)." to be able to
ignore short term variations in the signals. E.g., if you
were troubleshooting an analog PLL, you wouldn't measure the
phase of the input signal against some "reference"; then, some
seconds later, measure the phase of the synthesized signal against
that same reference. Rather, you would measure the synthesized
signal against the input signal "directly" so the measurements
appear to be concurrent.
c. Listen for timer tick and measure delta.

Also coincident with a & b. I.e., you want to deploy two pairs
of radios and this "tick measurement" device to take/watch
an instantaneous measurement free of jitter, and short term
uncertainty, etc.

I'm not claiming it can't be done. Rather, I'm trying to show
the height at which it sets the bar for a "generic technician".

It may turn out that this becomes a specialized diagnostic
procedure. I would just like to avoid that, where possible,
as it makes such a system "less ubiquitous" in practical
terms.

(Alternatively, *guarantee* that the system always knows when
things are not in sync and have *it* report this problem)
d. Subtract difference found in steps a and b.
You might want to take a look at Daqarta. I found it to be very precise
when it comes to phase delays:

http://daqarta.com/dw_0o0p.htm

Not sure how far down in precision this app would go but it'll even give
you a histogram of timing errors:

http://daqarta.com/dw_psth.htm

The author is very responsive and knowledgeable when it comes to
specialty applications you want to pursue. Speaking from experience here.

At 256KHz sample rates, I think it is probably too slow to get the
sort of resolution I would need (I'll examine the site more carefully
to see if there is some "extended precision" scheme that could be
employed)

--don
 
Joerg

Don said:
Hi Joerg,

Tooth settled down, yet?
The interconnects would calibrate themselves during the test. It doesn't
matter whether it's a radio link or a wired link.

You'd have to locate a piece of kit at each end of each link.
I.e., three devices (assuming a common reference point) to
test two links. (Or, hope nothing changes in the minutes
or fractional hours that it takes you to redeploy for the
"other" link)

[E.g., the timing protocol updates itself at ~1Hz just to deal
with things like drift in the local oscillator]

Yes, either that or you have to make it part of the standard on-board
equipment (which I understand you don't want). There is no other way.
The locations have to broadcast their timer ticks and that requires
hardware.

I.e., if the distance to device 1 is A and the distance to device 2
is B, then there will be a difference between the signals arriving
that corresponds to the difference between A and B. Even after you
compensate for any differences in the radios themselves.

[The same is true with a single radio at device 1 "received" at
device 2 -- how much time did the RF signal take to get to device
2 cuz its arrival will occur after device 2's notion of the
device 1 event with which it is intended to correlate.]

That's the appeal of "two equal lengths of wire" -- the
delay through each can be made "identical".

But radio can measure them. How much precision do you need?

As I said (elsewhere), I can currently verify performance at the
~100-200ns level "with a scope". [I am trying to improve that
by an order of magnitude.] But, that's with stuff sitting on
a bench, within arm's reach (physically if not electrically).

So, I need a (external) measurement system that gives me at least
that level of performance. Without necessitating developing
yet another gizmo.

I am afraid you will have to develop a gizmo. Because there ain't enough
of a market for what you are trying to do and I doubt there is
off-the-shelf stuff.

That's what I've feared. Once you start having to support "test kit"
your TCO goes up, fast! ("authorized service personnel", etc.)

Part of the appeal of legacy devices (60's) was that a "tech"
could troubleshoot problems with common bits of test equipment.
E.g., the original ECU's could be troubleshot (?) with a VOM...
(no longer true though that market is large enough that OBD
tools are relatively inexpensive)

I find it's even better today. Back in the 60's, just the thought of
schlepping a Tektronix "portable" (as in "has a handle but take a Motrin
before lifting") could make you cringe. Then you had to find a wall
outlet, plug in, push the power button ... TUNGGGG ... thwock ... "Ahm,
Sir, sorry to bug you but where is the breaker panel?"

Nowadays they already have laptops for reporting purposes and all you
need to do is provide a little USB box and software. TI has a whole
series of ready-to-go radio modules, maybe one of those can be pressed
into service here. The good old days are ... today. Doing this
wirelessly was totally out of the question in the 60's. The wrath of the FCC
alone was reason enough not to.

Except for very expensive test equipment that can
probably be programmed to do this job (the stuff that costs as much as a
decent car).
[E.g., the 'scope approach is great: set timebase accordingly;
trigger off pulse from device 1; monitor pulse from device 2; count
graduations on the reticule! This is something that any tech
should be able to relate to...]

But you need a link to each unit.

Yes. And, relies on a *wired* link (or equivalent hooks in a wireless
implementation)

Wired works, as long as the infrastructure in the wiring won't get in
the way too much (Hubs? Switches? Routers?).

This has to happen "nearly coincident" with "a)." to be able to
ignore short term variations in the signals. E.g., if you
were troubleshooting an analog PLL, you wouldn't measure the
phase of the input signal against some "reference"; then, some
seconds later, measure the phase of the synthesized signal against
that same reference. Rather, you would measure the synthesized
signal against the input signal "directly" so the measurements
appear to be concurrent.

Not really. Why would an RF path change much in just a few minutes?
Unless someone moves a massive metal file cabinet near the antennas, but
you'd see that.

Also coincident with a & b. I.e., you want to deploy to pairs
of radios and this "tick measurement" device to take/watch
an instantaneous measurement free of jitter, and short term
uncertainty, etc.

Since it seems you are not after picoseconds I don't see where the
problem is. You can do that simultaneously, it's no problem, but it
isn't necessary.

I'm not claiming it can't be done. Rather, I'm trying to show
the height at which it sets the bar for a "generic technician".

It may turn out that this becomes a specialized diagnostic
procedure. I would just like to avoid that, where possible,
as it makes such a system "less ubiquitous" in practical
terms.

Take a look at SCADA software. Something like this could be pieced
together and then all the tech would need to be told is "Hook all this
stuff to the units, plug this gray box into a USB port on your laptop,
click that icon over here, wait until a big green DONE logo appears,
then retrieve all the stuff".

(Alternatively, *guarantee* that the system always knows when
things are not in sync and have *it* report this problem)

That would be by far the best solution and if I were to make the
decision that's how it would be done.

At 256KHz sample rates, I think it is probably too slow to get the
sort of resolution I would need (I'll examine the site more carefully
to see if there is some "extended precision" scheme that could be
employed)

Mostly this only samples at 44.1kHz, 48kHz or 96kHz, depending on the
sound hardware you use. Unless I misunderstand your problem at hand,
that isn't an issue. AFAIU all you are after is a time difference
between two (or maybe some day more) events, not the exact occurrence of
an event in absolute time. So if each event triggers a sine wave at a
precise time you can measure the phase difference between two such sine
waves transmitted by two different units. Combining it with a complex
FFT of sufficient granularity you can calculate the phase difference
down to milli-degrees. 1/10th of a degree at 10kHz is less than 30nsec
and to measure that is a piece of cake.
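As a sanity check on those numbers, here is a minimal sketch of the
approach (Python/NumPy; the sample rate, tone frequency, and buffer
length are assumptions chosen so the tone lands exactly on an FFT bin --
this is not Daqarta's internal method):

import numpy as np

fs = 96_000        # assumed sound-card sample rate (Hz)
f0 = 10_000        # assumed tone frequency (Hz)
N = 9_600          # buffer length chosen so f0 falls exactly on a bin
true_skew = 30e-9  # 30 ns of skew to recover

t = np.arange(N) / fs
a = np.sin(2 * np.pi * f0 * t)                # burst from unit A
b = np.sin(2 * np.pi * f0 * (t - true_skew))  # unit B, delayed 30 ns

k = round(f0 * N / fs)                 # FFT bin holding the tone
phase_a = np.angle(np.fft.rfft(a)[k])
phase_b = np.angle(np.fft.rfft(b)[k])

skew = (phase_a - phase_b) / (2 * np.pi * f0)
print(f"recovered skew: {skew * 1e9:.1f} ns")  # ~30 ns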

You can get much more accurate than that. In fact, one of the big
projects I am involved in right now (totally different from yours) fully
banks on that and we have our first demo behind us. Since the system
aced it so well I tried to push it, measuring the phase shift in a
filter when a capacitor goes from 100000pF to 100001pF. It worked.
 
mpm

Bad news. Any measurement between two points is going to involve some
medium in between. The trick is to not have the medium characteristics
change during the measurement. That makes copper and dielectric a
rather bad choice, and shoveling bits through repeaters, hubs and
switches a good source of additional errors. Going through the air
seems to offer the least drift, error, and jitter.

Please note that CAT5 or 6 is not much of an improvement over coax
cable. The major source of drift is the elongation of the copper
conductors with temperature. Whether that error is significant
depends on the lengths involved, which you haven't disclosed.
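For a sense of scale, a back-of-envelope sketch (Python; the velocity
factor and copper expansion coefficient are typical assumed values, not
measurements of any particular cable):

C = 299_792_458    # speed of light, m/s
vf = 0.66          # assumed velocity factor (typical solid-PE coax)
alpha_cu = 17e-6   # assumed copper expansion coefficient, per deg C

length_m = 100.0
delay_ns = length_m / (C * vf) * 1e9           # one-way cable delay
drift_ps_per_degC = delay_ns * alpha_cu * 1e3  # elongation term only

print(f"{delay_ns:.0f} ns delay, ~{drift_ps_per_degC:.1f} ps/degC drift")
# -> 505 ns delay, ~8.6 ps/degC drift (dielectric changes add more)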



I like Joerg's idea. Two reference pulses, sent by each end, measured
at some known central location. You can also do it backwards. Have
the known central location send a single reference pulse, and then
have the two end points store and return the pulses after a stable
and fixed delay.

Incidentally, that works nicely for playing radio location via
hyperbolic navigation (same as LORAN). I used it to create a line of
position for a VHF/UHF mobile radio talking through a repeater. I
used two identical scanner receivers to listen to the repeater input
and the repeater output. The delay through the repeater is fairly
stable and easily measured. Therefore, the mobile transmitter was
somewhere along a line of position defined by a constant time
difference between the repeater and my receivers. Today, I would have
a computer calculate the hyperbolic line of position. In 1979(?), I
used a road map, two push pins, a loop of string, and a pencil. I
still can't decode what you're trying to accomplish, but this might
give you some ideas.
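A minimal sketch of the time-difference-to-distance conversion behind
that line of position (Python; the delays are invented for
illustration):

C = 299_792_458  # m/s

def range_difference_m(delta_t_s, repeater_delay_s):
    # Range difference implied by the measured arrival-time
    # difference, once the fixed repeater delay is removed. A constant
    # range difference defines one branch of a hyperbola.
    return (delta_t_s - repeater_delay_s) * C

# e.g. output heard 2.5 us after input; repeater adds a fixed 1.8 us:
print(f"{range_difference_m(2.5e-6, 1.8e-6):.0f} m")  # -> ~210 m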

Yep. If you figure out how to reflect the injected signal, you can
probably live without having to transmit back any timing information.

What I'm not seeing are any distances involved or accuracy
requirements. Also, what equipment or devices you have to work with.
I'm floundering with bad guesses as to your intentions.

--
Jeff Liebermann [email protected]
150 Felker St #D http://www.LearnByDestroying.com
Santa Cruz CA 95060 http://802.11junk.com
Skype: JeffLiebermann AE6KS 831-336-2558

It's late, so pardon that I've not really thought this idea through completely...
But I'm just wondering if one could cobble-up a phase lock loop (or loops) using one or more broadcast FM stations as the "timebase". FM receivers are cheap. Of course, the solution relies upon 3rd party actions you can't control so this might not be the optimum solution.
 
It's late, so pardon that I've not really thought
this idea through completely...

I do my best work under cover of darkness.
But I'm just wondering if one could cobble-up a phase lock
loop (or loops) using one or more broadcast FM stations
as the "timebase". FM receivers are cheap. Of course, the
solution relies upon 3rd party actions you can't control
so this might not be the optimum solution.

Sure. One doesn't really need any manner of specialized signal from
the FM, TV, WWV, paging, VHF WX, or whatever radio signal. All that's
needed is a common reference signal (i.e. start timer) that can be
heard by both ends of the measuring system. The reference doesn't
even need to be accurately timed. A random pulse or carrier shift will
suffice. That's the nice thing about using an independent third
reference... there's no accuracy requirement. However, some care must
still be paid to avoid noise induced jitter and trigger drift.

As you suggest, one could also phase lock to an FM broadcast carrier.
With a narrow BW PLL, that will also produce a suitable reference that
can be heard indoors. However, there's no guarantee two such
receivers with identical delay through the IF filter. It can probably
be done with a narrow IF filter with a Gaussian filter response
(minimum group delay) and an AFC to center the FM signal. Keeping FM
modulation artifacts out of the IF bandpass may be a problem[1]. It's
also more complicated than just a simple pulse trigger system and
timer, but it can be made to work.

The analog TV signal was a good source of timing reference: it was
locked to a high-quality network rubidium oscillator and had a complex
structure from which to find easy reference points. For TV reception,
an outdoor antenna was often used, reducing the problem with
multipath.

However, with indoor reception there are often quite deep multipath
nulls, some so narrow they take out only a few of the broadcast FM
sidebands, causing severe distortion in audio or taking out a whole
communication-quality FM channel (12.5-25 kHz).

At the null point, two multipath signals cancel each other at 180
degrees of path difference. Only a few kHz below this null, one of the
multipath components is stronger, while at a frequency above the null,
the other component is stronger, and it has a different path length.

A practical, but not so accurate, way of using the FM signal is to
monitor the 19 kHz pilot tone, but you still need some method of
finding out which cycle is which. Some data service at 57 kHz might be
usable to identify cycles. Frequent phase disruptions in the pilot
tone are an indication of multipath nulls, and a better antenna
placement may be needed.
 
Don Y

Hi Joerg,

Don Y wrote:
With calibrated radios that's a piece of cake to find out. Pulse-echo.
Or let the whole link oscillate.

With calibrated network interconnects, the problem wouldn't exist! :>
I'm trying to reduce the complexity of the measurement (verification?)
system to *a* device -- preferably something non-esoteric (so folks
don't have to buy a special piece of gear to verify proper operation
and diagnose problems).

The interconnects would calibrate themselves during the test. It doesn't
matter whether it's a radio link or a wired link.

You'd have to locate a piece of kit at each end of each link.
I.e., three devices (assuming a common reference point) to
test two links. (Or, hope nothing changes in the minutes
or fractional hours that it takes you to redeploy for the
"other" link)

[E.g., the timing protocol updates itself at ~1Hz just to deal
with things like drift in the local oscillator]

Yes, either that or you have to make it part of the standard on-board
equipment (which I understand you don't want). There is no other way.
The locations have to broadcast their timer ticks and that requires
hardware.

I think that last sentence is the key. The devices already have that
capability. And, already do it -- in a cryptic way (i.e., if you
examined the control packets for the protocol, you could infer
this information just like the "resident hardware/software" does
in each node).

So, maybe that's the way to approach it!

I.e., the "pulse" output that I mentioned is a software manifestation.
It's not a test point in a dedicated hardware circuit. I already
*implicitly* trust that it is generated at the right temporal
relationship to the "internal timepiece" for the device.

[This is also true of multi-kilobuck pieces of COTS 1588-enabled
network kit: the "hardware" that generates these references is
implemented as a "finite state machine" (i.e., a piece of code
executing on a CPU!)]

So, just treat this subsystem the same way I would treat a medical
device, pharmaceutical device, gaming device, etc. -- *validate*
it, formally. Then, rely on that "process" to attest to the
device's ability to deliver *whatever* backchannel signals I deem
important for testing!

At any time, a user can verify continued compliance (to the same
specifications used in the validation). Just like testing for
characteristic impedance of a cable, continuity, line/load
regulation for a power supply, etc.

Then, instead of delivering a "pulse" to an "unused" output pin
on the board, I can just send that "event" down the same network
cable that I am using for messages! (i.e., it's a specialized
message, of sorts)
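A minimal sketch of what such a specialized message might look like
(Python; the port number and packet layout are invented for
illustration, not part of any protocol described here):

import socket
import struct

SYNC_CHECK_PORT = 31588  # hypothetical port
FMT = "!IQ"              # node id (u32), event time in ns (u64)

def send_sync_check(node_id, event_time_ns, dest_ip):
    # Emit the "pulse" as a small timestamped datagram instead of
    # toggling an unused output pin.
    payload = struct.pack(FMT, node_id, event_time_ns)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, (dest_ip, SYNC_CHECK_PORT))

# An observer collects one datagram from each node and differences the
# embedded timestamps; this is only as trustworthy as the validated
# stamping path discussed above.
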
Except for very expensive test equipment that can
probably be programmed to do this job (the stuff that costs as much as a
decent car).

[E.g., the 'scope approach is great: set timebase accordingly;
trigger off pulse from device 1; monitor pulse from device 2; count
graduations on the reticule! This is something that any tech
should be able to relate to...]

But you need a link to each unit.

Yes. And, relies on a *wired* link (or equivalent hooks in a wireless
implementation)

Wired works, as long as the infrastructure in the wiring won't get in
the way too much (Hubs? Switches? Routers?).

These are almost always in "accessible" locations (the actual
*nodes* that are being tested may be far less accessible: e.g.,
an RFID badge reader *protected* in a wall).

And, for high performance deployments they are "special" devices
that are part of the system itself. E.g., the equivalent of
"transparent switches" and "boundary switches" -- to propagate the
timing information *across* the nondeterministic behavior of the
switch (hubs are friendlier in this sort of environment!).
Not really. How should an RF path change much in just a few minutes?
Unless someone moves a massive metal file cabinet near the antennas, but
you'd see that.

Can you be sure that you can get from point A to point B in
"just a few minutes"? (This was why I was saying you have to
deploy *all* the test kit simultaneously -- what if A is in
one building and B in another)

What if an elevator happens to move up/down/stop in its shaft
(will you be aware of it?)
Since it seems you are not after picoseconds I don't see where the
problem is. You can do that simultaneously, it's not problem, but it
isn't necessary.


Take a look at SCADA softare. Something like this could be pieced
together and then all the tech would need to be told is "Hook all this
stuff to the units, plug this gray box into a USB port your laptop,
click that icon over here, wait until a big green DONE logo appears,
then retrieve all the stuff".

I'd *prefer* an approach where the tech could just use the
kit he's already got on hand instead of having to specially
accommodate the needs of *this* system (does every vendor impose
its own set of test equipment requirements on the customer?
Does a vendor who *doesn't* add value to the customer?)
That would be by far the best solution and if I were to make the
decision that's how it would be done.

See above (validation). I think that's the way to go.

I.e., how do you *know* that your DSO is actually showing you
what's happening on the probes into the UUT? :>

[I've got a logic analyzer here that I can configure to
"trigger (unconditionally) after 100ms" -- and, come back
to it 2 weeks later and see it sitting there still saying
"ARMED" (yet NOT triggered!)]

Define, *specifically*, how this aspect of the device *must*
work AS AN INVARIANT. Then, let people verify this performance
on the unit itself (if they suspect that it isn't performing
correctly)
Mostly this only samples at 44.1kHz, 48kHz or 96kHz, depending on the
sound hardware you use. Unless I misunderstand your problem at hand,
that isn't an issue. AFAIU all you are after is a time difference
between two (or maybe some day more) events, not the exact occurrence of
an event in absolute time. So if each event triggers a sine wave at a
precise time you can measure the phase difference between two such sine
waves transmitted by two different units. Combining it with a complex
FFT of sufficient granularity you can calculate the phase difference
down to milli-degrees. 1/10th of a degree at 10kHz is less than 30nsec
and to measure that is a piece of cake.

The "pulses" are currently just that: pulses (I am only interested in
the edge). Currently, these are really *infrequent* (hertz) -- though
I could change that.
You can get much more accurate than that. In fact, one of the big
projects I am involved in right now (totally different from yours) fully
banks on that and we have our first demo behind us. Since the system
aced it so well I tried to push it, measuring the phase shift in a
filter when a capacitor goes from 100000pF to 100001pF. It worked.

Yeah, I worked with a sensor array that was capable of detecting a few
microliters (i.e., a very small drop) of liquid (blood) in any of 60
test tubes for a few dollars (in very low quantities). It had to
handle 60 such sensors simultaneously as you never knew where the blood
might "appear".

Interesting when you think of unconventional approaches to problems
that would otherwise appear "difficult" and suggest "expensive"
solutions!
 
Joerg

Don said:
Hi Joerg,

Don Y wrote:
With calibrated radios that's a piece of cake to find out. Pulse-echo.
Or let the whole link oscillate.

With calibrated network interconnects, the problem wouldn't exist! :>
I'm trying to reduce the complexity of the measurement (verification?)
system to *a* device -- preferably something non-esoteric (so folks
don't have to buy a special piece of gear to verify proper operation
and diagnose problems).

The interconnects would calibrate themselves during the test. It doesn't
matter whether it's a radio link or a wired link.

You'd have to locate a piece of kit at each end of each link.
I.e., three devices (assuming a common reference point) to
test two links. (Or, hope nothing changes in the minutes
or fractional hours that it takes you to redeploy for the
"other" link)

[E.g., the timing protocol updates itself at ~1Hz just to deal
with things like drift in the local oscillator]

Yes, either that or you have to make it part of the standard on-board
equipment (which I understand you don't want). There is no other way.
The locations have to broadcast their timer ticks and that requires
hardware.

I think that last sentence is the key. The devices already have that
capability. And, already do it -- in a cryptic way (i.e., if you
examined the control packets for the protocol, you could infer
this information just like the "resident hardware/software" does
in each node).

So, maybe that's the way to approach it!

I.e., the "pulse" output that I mentioned is a software manifestation.
It's not a test point in a dedicated hardware circuit. I already
*implicitly* trust that it is generated at the right temporal
relationship to the "internal timepiece" for the device.

Then you may be home already, almost for free.

[This is also true of multi-kilobuck pieces of COTS 1588-enabled
network kit: the "hardware" that generates these references is
implemented as a "finite state machine" (i.e., a piece of code
executing on a CPU!)]

So, just treat this subsystem the same way I would treat a medical
device, pharmaceutical device, gaming device, etc. -- *validate*
it, formally. Then, rely on that "process" to attest to the
device's ability to deliver *whatever* backchannel signals I deem
important for testing!

At any time, a user can verify continued compliance (to the same
specifications used in the validation). Just like testing for
characteristic impedance of a cable, continuity, line/load
regulation for a power supply, etc.

Then, instead of delivering a "pulse" to an "unused" output pin
on the board, I can just send that "event" down the same network
cable that I am using for messages! (i.e., it's a specialized
message, of sorts)

A way to do something like this is signature analysis. You watch each
unit while it sends out a certain pattern that's always the same. The
pattern has to come in response to some other pattern that gets sent to
it, and the response time (turn-around time) must be well-determined and
validated. Now you can correlate the heck out of it and get almost any
precision you want.
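A minimal sketch of that correlation step (Python/NumPy; the pattern,
sample rate, and noise level are invented for illustration):

import numpy as np

fs = 1_000_000  # assumed 1 MS/s capture
pattern = np.array([1., 1., -1., 1., -1., -1., 1., -1.])  # known signature

# Simulated capture: the pattern arrives 137 samples in, plus noise.
rng = np.random.default_rng(0)
capture = 0.1 * rng.standard_normal(4096)
capture[137:137 + len(pattern)] += pattern

corr = np.correlate(capture, pattern, mode="valid")
arrival = int(np.argmax(corr))
print(f"pattern found at sample {arrival} = {arrival / fs * 1e6:.1f} us")
# -> sample 137; a parabolic fit on the correlation peak can then
#    refine the estimate to a fraction of a sample.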

Except for very expensive test equipment that can
probably be programmed to do this job (the stuff that costs as much as a
decent car).

[E.g., the 'scope approach is great: set timebase accordingly;
trigger off pulse from device 1; monitor pulse from device 2; count
graduations on the reticule! This is something that any tech
should be able to relate to...]

But you need a link to each unit.

Yes. And, relies on a *wired* link (or equivalent hooks in a wireless
implementation)

Wired works, as long as the infrastructure in the wiring won't get in
the way too much (Hubs? Switches? Routers?).

These are almost always in "accessible" locations (the actual
*nodes* that are being tested may be far less accessible: e.g.,
an RFID badge reader *protected* in a wall).

Yeah, but you don't want to have to temporarily re-shuffle a customer's
wiring installation. It'll be labor-intensive and also interrupt their
normal business.

And, for high performance deployments they are "special" devices
that are part of the system itself. E.g., the equivalent of
"transparent switches" and "boundary switches" -- to propagate the
timing information *across* the nondeterministic behavior of the
switch (hubs are friendlier in this sort of environment!).


Can you be sure that you can get from point A to point B in
"just a few minutes"? (This was why I was saying you have to
deploy *all* the test kit simultaneously -- what if A is in
one building and B in another)

What if an elevator happens to move up/down/stop in its shaft
(will you be aware of it?)

Tricky but not impossible. That's why the test should run for a longer
time, to see if anything changes. You also have another tool, RSSI. If
the RSSI is markedly different from when the system was installed then
something in the path has changed.

[...]

I'd *prefer* an approach where the tech could just use the
kit he's already got on hand instead of having to specially
accommodate the needs of *this* system (does every vendor impose
its own set of test equipment requirements on the customer?
Does a vendor who *doesn't* add value to the customer?)

If this doesn't add value to the customer, why do it in the first place?
If calibration is required then that is of value to the customer.

See above (validation). I think that's the way to go.

Yup. Didn't know that it was legally or technically possible in your case.

[...]

The "pulses" are currently just that: pulses (I am only interested in
the edge). Currently, these are really *infrequent* (hertz) -- though
I could change that.

The method above (sine wave trains) is usually better. Lower bandwidth,
more SNR, better accuracy, much cheaper hardware.

Yeah, I worked with a sensor array that was capable of detecting a few
microliters (i.e., a very small drop) of liquid (blood) in any of 60
test tubes for a few dollars (in very low quantities). It had to
handle 60 such sensors simultaneously as you never knew where the blood
might "appear".

Interesting when you think of unconventional approaches to problems
that would otherwise appear "difficult" and suggest "expensive"
solutions!


That's where engineering begins to be fun :)

The best comment I ever got after finishing a prototype that then did
exactly what the client wanted, after one of their engineers looked into
the rather sparse collection of parts: "You mean, THAT's IT?"
 
Joerg

Joerg said:
Don said:
Hi Joerg,
[...]

Can you be sure that you can get from point A to point B in
"just a few minutes"? (This was why I was saying you have to
deploy *all* the test kit simultaneously -- what if A is in
one building and B in another)

What if an elevator happens to move up/down/stop in its shaft
(will you be aware of it?)

Tricky but not impossible. That's why the test should run for a longer
time, to see if anything changes. You also have another tool, RSSI. If
the RSSI is markedly different from when the system was installed then
something in the path has changed.

It is actually easier than I thought this morning:

You can perform round-trip calibration and timing in rapid succession,
for both units. Unless the elevator has a rocket motor and goes at
Mach-2 it should be accurate enough.

Ok, if an F-16 roars through the path at full throttle you'd have a
problem :)
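
Rough numbers behind that claim (Python; the elevator speed and the
calibration-to-measurement gap are assumptions):

C = 299_792_458          # m/s
elevator_speed = 10.0    # m/s, a fast elevator (assumed)
gap_s = 0.01             # calibration-to-measurement gap (assumed 10 ms)

path_change_m = elevator_speed * gap_s  # 0.1 m of path change
error_ns = path_change_m / C * 1e9
print(f"{error_ns:.2f} ns")  # ~0.33 ns -- well under a 10-20 ns goal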
 
mpm

Rarely was the TV studio feed & the network in perfect sync & phase.


We (I'm probably dating myself now...) used to switch everything on Line-10, in the blanking interval.
Perfect hiding space for slop.

To your point, even more rarely were the test patterns 6 MHz wide at your bench, having gone through a dozen D/A's and 500 feet of coax. For that matter, I'm not even sure it was 6 MHz at the generator's output.
 
Paul E Bennett

Don said:
Hi,

[posted to S.E.D in the hope that a hardware analog might exist]

I synchronize the "clocks" on physically distributed processors
such that two or more different machines can have a very finely
defined sense of "synchronized time" between themselves.

During development, I would measure this time skew (among other
factors) by locating these devices side-by-side on a workbench
interconnected by "unquantified" cable. Then, measuring the
time difference between two "pulse outputs" that I artificially
generate on each board.

So, I could introduce a disturbance to the system and watch to
see how quickly -- and accurately -- the "clocks" (think FLL and
PLL) come back into sync.

How do I practically do this when the devices are *deployed*
and physically distant (vs. "electrically distant" as in my
test case)?

Two ideas come to mind:
1) two equal length cables to connect the "pulse outputs"
from their respective originating devices to the test gear.
2) two *radios* to do the same thing -- after accounting
for different flight times

[Though I wonder how hard it is to qualify two different
radios to have the same delay, etc. Far easier to trim
two long lengths of wire to the same length!]

Of course, I would like to minimize the recurring cost of
any solution as it is just present for system qualification
(and troubleshooting) and offers no run-time advantage to
the design.

You might find this of interest, and useful if you had a couple of fibres in
your installed bundle. I was chatting to one of the guys visiting JET from
CERN today, and he stated that the timing accuracy is 1 ps on phase and 1 ns on
time synchronicity.

<http://www.ohwr.org/projects/white-rabbit/documents>

--
********************************************************************
Paul E. Bennett IEng MIET.....<email://[email protected]>
Forth based HIDECS Consultancy.............<http://www.hidecs.co.uk>
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-510979
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
 
mpm

It never was 6 MHz wide, even at the camera. There were guard bands,
and the aural channel(s). 4.2 MHz was normal for the video, depending on
how much slop the FCC let a station get away with on the unused
sideband in the filtering & diplexer.

I was at Intergroup, and all our D/A's and input matrix designs (switching) looked the same as GV or Utah. I think we all used the same design engineers at one point or another.

But back to my earlier 6 MHz comment - I left out the fact that the sales and marketing folks claimed linearity out to 20 MHz (which of course, we could never verify on the bench - even though it probably met that spec.)
 