OK, weird title.
What I want to do is to make a variable power supply controllable by a microcontroller. This is a switching (DC-DC) power supply, and the generic circuit is as follows:

See the datasheet here. The Adj pin feeds into a comparator which compares the voltage on this pin to an internal 1.25V reference. The traditional method of making this supply variable is to change R1 into a variable resistor.
And while I imagine I could use a chip which generates a variable resistance in place of R1, the output voltage could never be reduced below the reference voltage of 1.25V.
So... My idea is to use a D2A to generate a voltage which is added to the voltage at the junction of R1 and R2. As the added voltage increases, the required voltage across R2 will fall. This will enable the output voltage to be varied all the way down to zero.
Here's my idea:

For simplicity I'm not showing any absolute values.
- The voltage divider R1/R2 is set in a ratio that gives a normal output voltage of 10x the reference voltage (12.5V in this case)
- The voltage divider R3/R4 has equal-value resistors, giving a 2.5V split supply for the op-amps (which happen to operate from 5V, not shown)
- R5=R6=R7=R8=R9 so that the gain is 1
Summing amplifiers are inverting, so the output of the first op-amp (U3) is 3.75V: 1.25V is 1.25V below the 2.5V reference, so the output sits 1.25V above that reference. The voltage at the other summing input is the same as the reference voltage, so it has no effect on the output.
With an input voltage of 3.75V into the second op-amp (U2), the output is 1.25V.
When the output is regulated, the output of U2 will always be 1.25V, therefore the output of U3 will always be 3.75V.
This means that the input voltages must always sum to 3.75V.
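As a quick sanity check of that chain, here's a small numeric sketch (assuming ideal op-amps, the unity gains from R5=R6=R7=R8=R9, and the 2.5V mid-rail as the summing reference):

```python
# Model of an inverting summer referenced to VREF: each input's
# deviation from VREF appears inverted at the output.
VREF = 2.5  # mid-rail reference from R3/R4

def inverting_summer(ref, inputs, gains):
    """Output of an ideal inverting summing amp referenced to `ref`."""
    return ref - sum(g * (v - ref) for v, g in zip(inputs, gains))

d = 1.25   # divider tap voltage when the output is in regulation
a = 2.5    # D2A sitting at the mid-rail, i.e. "no effect"

u3 = inverting_summer(VREF, [d, a], [1, 1])   # first summer
u2 = inverting_summer(VREF, [u3], [1])        # second, re-inverting stage

print(u3)  # 3.75
print(u2)  # 1.25 -- matches the regulator's internal reference
```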
So, for the ratios shown, if I call the voltage from the voltage divider d, and the voltage from the D2A a, the output equations are:
V = 10d, and 1.25 = (d + a) - 2.5
so, d = 3.75 - a, and by substitution V = 37.5 - 10a, or a = (37.5 - V) / 10
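A quick check of those equations (same ideal-op-amp sketch as above):

```python
# Confirm d + a = 3.75 in regulation, hence V = 37.5 - 10*a.
for a in (0.0, 2.5, 3.75):
    d = 3.75 - a
    print(f"a={a}V -> d={d}V, V={10 * d}V")
# a=0.0V -> d=3.75V, V=37.5V
# a=2.5V -> d=1.25V, V=12.5V
# a=3.75V -> d=0.0V, V=0.0V
```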
If I want the output to vary from 0 to 12V, then the default output voltage (with a at its 2.5V midpoint) must be 6V, and the gain from the D2A must be changed (by altering R8) so that a change in a of 2.5V results in the same change in output as a change of d of 1.25V. Simply stated, that means R8 needs to be 2xR9.
Thus, with R1 = 3.8xR2, and R8 = 2xR9, the equations become:
V = 4.8d, 1.25 = d + (a - 2.5)/2
so d = 2.5 - a/2, and by substitution V = 12 - 2.4a
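Same check for the rescaled version (a sketch; it just plugs the regulation condition into the divider relation):

```python
# V = 4.8*d with R1 = 3.8*R2; regulation gives d = 2.5 - a/2,
# so V = 12 - 2.4*a.
for a in (0.0, 2.5, 5.0):
    d = 2.5 - a / 2
    print(f"a={a}V -> d={d}V, V={4.8 * d}V")
# a=0.0V -> d=2.5V, V=12.0V
# a=2.5V -> d=1.25V, V=6.0V
# a=5.0V -> d=0.0V, V=0.0V
```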
If a has a range of 0 to 5V, the output can vary from 12V down to 0V. In my case the D2A is 12-bit, so I can do this in steps of about 3mV.
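On the microcontroller side, the mapping from a target output voltage to a D2A code is then straightforward. A hypothetical helper (names and the 0-5V-over-4096-codes scaling are my assumptions):

```python
# Hypothetical helper: turn a desired output voltage into a 12-bit D2A code.
# Assumes the D2A output spans 0-5V over codes 0-4095 and V = 12 - 2.4*a.
DAC_FULL_SCALE_V = 5.0
DAC_MAX_CODE = 4095

def dac_code_for_output(v_out):
    a = (12.0 - v_out) / 2.4                  # invert V = 12 - 2.4*a
    a = min(max(a, 0.0), DAC_FULL_SCALE_V)    # clamp to the D2A's range
    return round(a / DAC_FULL_SCALE_V * DAC_MAX_CODE)

print(dac_code_for_output(12.0))  # 0
print(dac_code_for_output(0.0))   # 4095
print(dac_code_for_output(6.0))   # 2048 (mid-scale)
print(12.0 / 4096)                # ~0.00293 V per step, i.e. the ~3mV above
```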
So... 2 questions:
- Do you think the circuit is viable?
- Is my math right?
And why do I want to do that? It's for an optical chopper to go with my lock-in amplifiers.