Mark - Jan 1, 1970
I'm using a LEM current transducer to measure pulsed currents. The
current sensor produces a current output which I feed to a resistor
producing a voltage that represents the current flow. The voltage feeds
an opamp for offset correction and buffering. The sensor has good
accuracy (+/- 0.5%), but suffers from an initial offset
(+/- 0.15 mA) and, worse, a thermally induced offset (+/- 0.35 mA over temperature).
Since I know when the current pulses are coming, I want to force the
offset to zero before taking a measurement. I've seen circuits to do
this using the following techniques:
1. An A/D followed by a D/A. With zero current input, the offset is
driven to zero, and the offset correction is latched in the D/A until
the next "calibration" cycle.
2. An A/D and a digital pot controlled by a processor, which works
similarly to the above.
What are the merits of each method? I prefer not to have a processor
involved, but I do have access to FPGA resources.
Is there a simple way to do this?
Mark