
### Resistance reference

#### (*steve*)

##### ¡sǝpodᴉʇuɐ ǝɥʇ ɹɐǝɥd
Moderator
Jan 21, 2010
25,505
Let's say I wanted to create a reference resistor and I have access to plenty of 1k 0.1% 25ppm resistors.

I realise that the math says the tolerance will remain the same no matter how I connect them (since they may all be 0.1% high, for example).

But if they are assumed to have a value which is normally distributed around the nominal value, will the frequency distribution change (i.e. the likelihood of a value closer to nominal will rise)?

I am contemplating building a reference resistor from 100 0.1% resistors, and I am hoping for a value that is well within 0.1% of nominal. The resistors are 25 ppm/°C, so that's a 0.0025% variation per °C. Should I place the reference in an oven to maintain a constant temperature? That may be going over the top (or perhaps not: a 20°C change is 0.05%, which is half of the resistor tolerance).
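A quick Monte Carlo sketch of the question, assuming the resistor values really are normally distributed around nominal and (an assumption, not from the datasheet) that the 0.1% tolerance corresponds to roughly 3 sigma:

```python
import random
import statistics

random.seed(42)

NOMINAL = 1000.0              # 1 kohm
SIGMA = NOMINAL * 0.001 / 3   # assume the 0.1% tolerance spans ~3 sigma
N_RESISTORS = 100
N_TRIALS = 2000

# |fractional error| of one resistor vs. the mean of 100 in series
single = [abs(random.gauss(NOMINAL, SIGMA) - NOMINAL) / NOMINAL
          for _ in range(N_TRIALS)]
combined = []
for _ in range(N_TRIALS):
    total = sum(random.gauss(NOMINAL, SIGMA) for _ in range(N_RESISTORS))
    combined.append(abs(total / N_RESISTORS - NOMINAL) / NOMINAL)

print(f"mean |error|, single resistor:   {statistics.mean(single):.5%}")
print(f"mean |error|, 100-resistor mean: {statistics.mean(combined):.5%}")
```

Under these assumptions the typical error of the 100-resistor combination comes out about a factor of 10 (i.e. sqrt(100)) smaller than a single resistor's, which is the effect being asked about.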

#### GreenGiant

Feb 9, 2012
842
The problem that I see with this is the 0.1% tolerance: nine times out of ten I see resistors reading slightly below their rated value, so putting 100 of them together is going to get you a very low value (most likely outside of the 0.1% tolerance).

Unless you measure each one exactly the same way, pair a low with a high, and make sure that everything is balanced.

#### (*steve*)

##### ¡sǝpodᴉʇuɐ ǝɥʇ ɹɐǝɥd
Moderator
Jan 21, 2010
25,505
Even if all of the resistors are 0.1% low, the result can only ever be 0.1% low.

#### GreenGiant

Feb 9, 2012
842
This is true... I was not thinking about it clearly. BUT...

You will still never get closer than that 0.1% unless you offset them high with low, and low with high...

Maybe I'm not getting what you mean

#### (*steve*)

##### ¡sǝpodᴉʇuɐ ǝɥʇ ɹɐǝɥd
Moderator
Jan 21, 2010
25,505
Maybe I'm not getting what you mean

Well, I'm basing this on the assumption that the population average resistance is the nominal value.

If I take random samples from that population, the distribution of the averages of these samples will be normally distributed around the population average.

The sample-to-sample variation of those averages will fall as the sample size increases, while the variance measured within each sample settles at the population variance.

What is more interesting is the variance of the average value as the sample size increases and whether or not we can say anything about the "sample" of (say) 100 consecutive resistors from a reel.

If you take a look at some actual data, you can see the issue.

400 1k 1% resistors averaged 999.72462 ohms; that's within 0.0275% of the nominal value. Now, who can say whether the population value is 1000 ohms, 999.72 ohms, or some other value, since we can't measure the entire population. However, this is a very good estimate (arguably, if there is long-term drift, it isn't).

2 values were outside +/- 0.5%, being 0.55% low.

If we take samples of these resistors, each of 100 resistors (for simplicity, the first, second, third and fourth hundred resistors form the 4 samples), we find that their average is also a good predictor of the population value, and an even better predictor of the value for the sample of 400.

For these resistors, a sample of 100 looks like it would give me a value within 0.04% of the nominal population value.

If we were to assume that these results could be used to predict the value of other samples of different resistors, then 100 0.1% resistors might be expected to be within 0.004% of the nominal value.
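The scaling in that prediction is simple proportionality, worth making explicit: it assumes 0.1% parts have a spread 10 times tighter than the measured 1% parts, which is an assumption rather than a measurement.

```python
# Scaling check: a sample of 100 of the measured 1% resistors landed within
# ~0.04% of nominal; if 0.1% parts have a 10x tighter spread (assumption),
# the same sample size scales to ~0.004%.
observed_spread_1pct = 0.04      # percent, from the 1% resistor data
tolerance_ratio = 0.1 / 1.0      # 0.1% parts relative to 1% parts
predicted = observed_spread_1pct * tolerance_ratio
print(f"predicted spread for 0.1% parts: {predicted:.3f}%")
```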

So that's an observation, but lacking (currently) the ability to measure 0.1% resistors this accurately, I can't duplicate the measurements.

What I'm interested in is the mathematical prediction based on assumptions about the distribution and nominal value. My stats are not what they used to be.

#### timothy48342

Nov 28, 2011
218
...
I realise that the math says the tolerance will remain the same no matter how I connect them (since they may all be 0.1% high, for example).

But if they are assumed to have a value which is normally distributed around the nominal value, will the frequency distribution change (i.e. the likelihood of a value closer to nominal will rise)?
...

Your math is correct, but allow me to put it in my own words.
math:
If you have different groups, each with a specific number of data points normally distributed around some value, the average of the data points will vary from one group to the next.

The average for an unknown group will have a probability distribution that is normal and centered at the same value.

If you increase the number of data points for the groups, then the probability distribution for the average will become weighted more toward the center. (Standard deviation decreases.)
:endMath
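The three statements in the math block can be demonstrated numerically. This sketch (arbitrary mean, unit standard deviation) shows the spread of the group average shrinking as 1/sqrt(n):

```python
import random
import statistics

random.seed(7)

def se_of_mean(n, trials=2000, mu=1000.0, sd=1.0):
    """Empirical standard deviation of the mean of n normal draws."""
    means = [statistics.mean(random.gauss(mu, sd) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

# More data points per group => group averages cluster tighter at the center
for n in (1, 4, 16, 64):
    print(f"n={n:3d}: empirical SE = {se_of_mean(n):.4f}, "
          f"theory sd/sqrt(n) = {1.0 / n ** 0.5:.4f}")
```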

opinion:
I think the problem with applying it to groups of resistors is in assuming that the distribution of values is centered around the nominal value. Of course you won't find a batch of resistors that are all high by 0.1%, but you could get a batch that generally has more high than low.

What about instead using just 1 resistor of whatever value, say 1k, measuring it accurately, then adding additional 1 ohm resistors in series or parallel as necessary to bring it as close to nominal as you need, or as close as you are able to measure?

And possibly do it all in an oven, as well.
:endOpinion
--tim
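The trimming idea above can be sketched as follows; the measured value here is made up for illustration, and the series/parallel formulas are just the usual resistor combination rules:

```python
# Sketch of trimming one measured resistor toward a target value.
TARGET = 1000.0
measured = 997.4          # hypothetical: assume the base 1k measured low

if measured < TARGET:
    # Add 1-ohm resistors in series; each unit adds ~1 ohm.
    n_series = round(TARGET - measured)
    trimmed = measured + n_series
else:
    # A large parallel resistor pulls the value down: R_p = R*T / (R - T)
    r_par = measured * TARGET / (measured - TARGET)
    trimmed = measured * r_par / (measured + r_par)

print(f"trimmed value: {trimmed:.2f} ohms")
```

The residual error is set by the 1 ohm trim granularity (0.1% of 1k) and by how accurately the base resistor was measured in the first place.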

Ah, shoot! You posted while I was typing.

I think the problem is that, for a specific manufacturer, their resistors might come out more on one side of nominal than the other.

And not just by manufacturer. It could be that all resistors made on Tuesday tend to be higher than ones made on Monday. Or ones made in the winter, or made with the air conditioners running, or made on the night shift, or whatever.

#### (*steve*)

##### ¡sǝpodᴉʇuɐ ǝɥʇ ɹɐǝɥd
Moderator
Jan 21, 2010
25,505
Yes, that is certainly true.

In some respects you could say that a sample taken at a point in time will have an average determined by the sum of two factors: the variation of the process over all time, plus any additional variation that may be effectively constant for this batch.

The example above illustrates that. The batch average appears to be about 0.0275% low. Samples within this batch average this value, but have an additional variation (in the case of 100-unit sub-samples) of around 0.0163%, which is close to what is predicted (1/10 of the estimated population standard deviation).

I guess this effectively demonstrates that the mean of a sample will vary by an amount determined by three factors:

1) the population variance
2) the sample size
3) any short-term systematic variance.

For a sample size of 100, I would expect the average resistance to be the short-term average of the process, with a standard deviation 1/10th of the population standard deviation.

In the case of these resistors, the estimate of the population variance is around 2.9 ohms², therefore the standard deviation is around 1.7 ohms.

For a sample of size 100, about 68% of samples should be within 0.17 ohms of the short-term average, and indeed 3 of the 4 sub-samples are.

I guess statistics DO work!
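A simulation of that final check, using the thread's estimates (batch average 999.72 ohms, per-resistor standard deviation 1.7 ohms); about 68% of a normal distribution falls within one standard deviation, so about that fraction of 100-resistor sample means should land within one standard error of the batch average:

```python
import random
import statistics

random.seed(3)

SHORT_TERM_MEAN = 999.72   # batch average from the data above, ohms
POP_SD = 1.7               # estimated per-resistor standard deviation, ohms
N = 100                    # resistors per sample

means = [statistics.mean(random.gauss(SHORT_TERM_MEAN, POP_SD) for _ in range(N))
         for _ in range(5000)]

se = POP_SD / N ** 0.5     # standard error of the mean: 0.17 ohms
within = sum(abs(m - SHORT_TERM_MEAN) <= se for m in means) / len(means)
print(f"fraction of sample means within one standard error: {within:.1%}")
```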
