
Showing posts with label Specifications. Show all posts

Monday, August 31, 2015

What is meant by a “fast” power supply?

We regularly get requests for a power supply with a “fast” output. This means different things to different people, so we always have to ask clarifying questions. Not only do we need to find out what change needs to happen quickly, but we need to quantify the need and find out how quickly it needs to change. For example, recently, a customer testing power amplifiers wanted to know how quickly a particular power supply could attain its output voltage. Two ways to look at this are:

1. How long does it take for a power supply output voltage to change from one value to another value?
2. How long does it take for a power supply output voltage to recover to its original value following a load current change?

This customer wanted to know the answer to question 1. Luckily, both of these answers can be found in our specifications and supplemental characteristic tables.

Question 1 is referring to a supplemental characteristic that has a variety of similar names: programming speed, settling time, output response time, output response characteristic, and programming response time. This is typically described with rise time and fall time values, or settling time values, or occasionally with a time constant. Rise (and fall) time values are what you would expect: the time it takes for the output voltage to go from 10% of its final value to 90% of its final value. Settling time (labeled “Output response time” in the graph below) is the time from when the output voltage begins to change until it settles within a specified settling band around the final value, such as 1% or even 0.1%, or sometimes within an LSB (least significant bit) of the final value. My fellow blogger, Ed, posted about how this affects throughput (click here) back in September of 2013.

Question 2 is referring to a specification called transient response, or load transient recovery time. Whenever the load current changes from a low current to a higher current, the output voltage temporarily dips down slightly and then quickly recovers back to the original value (or close to it).
The feedback loop design inside the power supply determines how quickly the voltage recovers from this load current change. Higher bandwidth designs recover more quickly but are less stable; conversely, lower bandwidth designs recover more slowly but are more stable. Ed posted about optimizing the output response back in April of this year (click here).

So the transient response recovery time is the time from when the load current begins to increase (coincident with the output voltage beginning to drop) to when the output voltage settles within a specified settling band around the final voltage value.
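Both rise time and settling time boil down to finding threshold crossings in a digitized step response. Here is a rough Python sketch of that idea; the waveform, sample period, and settling band below are made-up illustrations, not data from any particular supply:

```python
# Hypothetical sampled step response: 0 V -> 10 V, 10 us per sample.
dt = 1e-5
wave = [0.0, 1.0, 3.0, 6.0, 8.5, 9.5, 9.9, 10.05, 10.0, 10.0]

def rise_time(samples, dt, v_start, v_final):
    """Time from 10% to 90% of the total transition."""
    lo = v_start + 0.10 * (v_final - v_start)
    hi = v_start + 0.90 * (v_final - v_start)
    t10 = next(i for i, v in enumerate(samples) if v >= lo) * dt
    t90 = next(i for i, v in enumerate(samples) if v >= hi) * dt
    return t90 - t10

def settling_time(samples, dt, v_final, band=0.01):
    """Time until the output stays within +/-band (1% here) of the final value."""
    tol = band * v_final
    last_outside = max((i for i, v in enumerate(samples)
                        if abs(v - v_final) > tol), default=-1)
    return (last_outside + 1) * dt

print(rise_time(wave, dt, 0.0, 10.0))    # 40 us for this made-up waveform
print(settling_time(wave, dt, 10.0))     # 60 us for this made-up waveform
```

A real measurement would of course capture the waveform with a scope or digitizer, but the threshold logic is the same.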

Our customer was interested in a “fast” power supply, meaning one with a settling time to meet his needs. Once we understood what he needed, we directed him to a power supply that could easily meet his requirements!

Friday, May 29, 2015

How to calculate the accuracy of a power measurement

Electrical power in watts is never directly measured by any instrument; it is always calculated from voltage and current measurements. The simplest example of this is with DC (unchanging) voltage and current, where power in watts is simply the product of the DC voltage and DC current:

P = V x I

So the accuracy of the power measurement (which is calculated from the individual voltage and current measurements) depends on the accuracy of the individual V and I measurements.

For example, you might use a multimeter to make V and I measurements and calculate power. The accuracy of these individual measurements is typically specified as a percent of the reading plus a percent of the range; the range portion acts as a fixed offset. (Note that “accuracy” here really means “inaccuracy” since we are calculating the error associated with the measurement.)

Let’s use an example of measuring 20 Vdc and 0.5 Adc from which we calculate the power to be 10 W. We want to know the error associated with this 10 W measurement. Looking up the specs for a typical multimeter (for example, the popular Keysight 34401A), we find the following 1-year specifications:

DC voltage accuracy (100 V range): 0.0045 % of reading + 0.0006 % of range
DC current accuracy (1 A range): 0.1 % of reading + 0.01% of range

The error (±) associated with the voltage measurement (20 V) is:

(0.0045 % x 20 V) + (0.0006 % x 100 V) = 0.9 mV + 0.6 mV = 1.5 mV

So when the measurement reading is 20.0000 V, the actual voltage could be any value between 19.9985 V and 20.0015 V since there is a 1.5 mV error associated with this reading.

The error (±) associated with the current measurement (0.5 A) is:

(0.1 % x 0.5 A) + (0.01 % x 1 A) = 0.5 mA + 0.1 mA = 0.6 mA

So when the measurement reading is 0.5 A, the actual current could be any value between 0.4994 A and 0.5006 A since there is a 0.6 mA error associated with this reading.

We can now do a worst-case calculation of the error associated with the calculated power measurement, which is the product of the voltage and current. The lowest possible power value is the product of the lowest V and I values: 19.9985 V x 0.4994 A = 9.98725 W. The highest possible power value is the product of the highest V and I values: 20.0015 V x 0.5006 A = 10.01275 W. So the error (±) associated with the 10 W power measurement is ± 12.75 mW.

The above is the brute-force method to determine the worst-case values. It can be shown that the percent of reading part of the power measurement error is very closely approximated by the sum of the percent of reading errors for V and I. Likewise, the offset part of the power measurement error is very closely approximated by the sum of the voltage reading times the current offset error and the current reading times the voltage offset error:

Perror ≈ (%Vreading + %Ireading) x Preading + (Vreading x Ioffset + Ireading x Voffset)

Applying this equation to the example above for the 100 V and 1 A ranges at 20 V, 0.5 A:

% of reading part: 0.0045 % + 0.1 % = 0.1045 %
Offset part: (20 V x 0.1 mA) + (0.5 A x 0.6 mV) = 2.0 mW + 0.3 mW = 2.3 mW

So for 10 W, we get:

(0.1045 % x 10 W) + 2.3 mW = 10.45 mW + 2.3 mW = 12.75 mW
As you can see, this is the same result as produced by the brute-force approach. Isn’t it great when math works out the way you expect?!?!

In summary, the error associated with a power measurement calculated as the product of a voltage and current measurement has two parts just like the V and I errors: a % of reading part and an offset part. The % of reading part is closely approximated by adding the % of reading parts for the V and I measurements. The offset part is closely approximated by adding two products together: the voltage reading times the current offset error and the current reading times the voltage offset error. It’s as simple as that!
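If you would rather let a script do the arithmetic, here is a short Python sketch of both the brute-force and approximate calculations above, using the same 34401A spec numbers (the variable names are my own, not anything from the datasheet):

```python
# 34401A-style 1-year specs used in the example above.
v_pct, v_rng, v_rng_pct = 0.0045, 100.0, 0.0006   # DCV: % reading, range, % range
i_pct, i_rng, i_rng_pct = 0.1, 1.0, 0.01          # DCI: % reading, range, % range

v, i = 20.0, 0.5                                  # measurement readings
ev = v * v_pct / 100 + v_rng * v_rng_pct / 100    # voltage error: 1.5 mV
ei = i * i_pct / 100 + i_rng * i_rng_pct / 100    # current error: 0.6 mA

# Brute force: worst-case products at the corners of the error band.
p = v * i
p_err_brute = max(abs((v + ev) * (i + ei) - p),
                  abs((v - ev) * (i - ei) - p))

# Approximation: add the %-of-reading parts, cross-multiply readings and offsets.
v_off = v_rng * v_rng_pct / 100
i_off = i_rng * i_rng_pct / 100
p_err_approx = p * (v_pct + i_pct) / 100 + v * i_off + i * v_off

print(p_err_brute, p_err_approx)   # both come out at about 12.75 mW
```

The two results differ only by the tiny ev x ei cross term, which is why the approximation is so close.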

Wednesday, December 31, 2014

Why is the Programming Resolution Supplemental Characteristic Listed as an Average?

Hello everyone!

Happy New Year!  This is our last post of 2014 so we wanted to wish all of our readers a Happy New Year.  Today I am going to talk about a question that I have been asked a few times lately.  In many of our power supplies, we list our Programming Resolution as an average number.  Many people want to know why we do it this way.

Look at the below snippet from our 664xA DC Power Supplies Supplemental Characteristics:

You can see that it is clearly stated as an average.

The simple answer to this question is that this is because of calibration.

The more complex answer is that we use a DAC to control the output setting of the power supply.  A certain number of DAC counts represents zero to full scale on the output of the supply.  For simplicity's sake, let's assume that we are using a 12-bit DAC for a power supply that goes to fifty volts.

In an ideal world where calibration is not necessary:
A 12-bit DAC gives us 2^12 or 4096 total counts.
The step size (programming resolution) of the 50 volt power supply would be 50/4096 or 0.0122 volts.

We do not live in an ideal world, though, so we have to disregard some DAC counts because of how the unit calibrates. We also generally let you program a little bit above the maximum settings (usually something like 2%).  Zero volts is not going to be zero DAC counts and 50 V is not going to be 4096 DAC counts.  For our example, let's say that at minimum we disregard 20 counts each at the top and bottom (40 total counts), and at maximum we disregard 120 counts each at the top and bottom (240 total counts).  In this scenario:

Minimum step size = 50/(4096-40) = 0.0123 V
Maximum step size = 50/(4096-240) = 0.0130 V

For our Supplemental Characteristic, we take the average of those two numbers, which gives us 0.01265 V.
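The step-size arithmetic above is easy to script. This Python sketch reuses the same hypothetical 12-bit, 50 V example; note that it keeps full precision, so its average lands near 0.01265 V rather than exactly matching the rounded hand calculation:

```python
def avg_step_size(v_full_scale, dac_bits, counts_lost_min, counts_lost_max):
    """Average of the best-case and worst-case programming step sizes."""
    counts = 2 ** dac_bits
    step_min = v_full_scale / (counts - counts_lost_min)   # fewest counts lost
    step_max = v_full_scale / (counts - counts_lost_max)   # most counts lost
    return (step_min + step_max) / 2

ideal = 50 / 2 ** 12                      # ~0.0122 V, the no-calibration case
avg = avg_step_size(50, 12, 40, 240)      # ~0.01265 V, as in the text
print(ideal, avg)
```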

The big question is: how would I know what the programming resolution is for my particular unit?  I spent about half of yesterday trying to figure that out and am still working on it.  The best solution that I have right now is to hook a DMM to the output and slowly increment my output to see when it flips to a new setting.  I need to experiment with this, though.  If any readers have a better idea, please let us know in the comments.  The fact of the matter is that the error is pretty small, and to be safe, any error due to being in between DAC counts is included in our Programming Accuracy specification.

Well that is all for 2014. I hope that everyone has a safe and happy 2015.  See you next year!

Matt

Wednesday, May 21, 2014

DC Source Measurement Accuracy and Resolution – With Shorter Measurement Intervals

I got a customer support request a while ago inquiring about the measurement resolution on our new family of N6900A and N7900A Advanced Power System (APS) DC sources.  Like many of our newer products, they utilize a high-speed digitizing measurement system.

“I cannot find anything about measurement resolution in the user’s guide; it must have been overlooked!” I was told. Indeed, we have included the measurement resolution in the past on our previous products. This time around we did not include it as a single fixed value, not because of an oversight, but for good reason.

Perhaps the most correct response to the inquiry is “it depends”. Depends on what? The effective measurement resolution depends on the measurement interval that is being used. Why is that? Simply put, there is noise in any measurement system. With older and more basic products that provide low-speed measurements and inherently integrate the voltage or current signal over a long measurement interval, measurement system noise is usually not a big factor. However, with the higher-speed digitizing measurement systems we now employ in our performance DC sources, factoring in noise based on the measurement interval provides a much more realistic and meaningful answer.

For the N6900A and N7900A APS products, we include Table 1, shown below, in our user’s guide to help customers ascertain what the measurement accuracy and resolution are, based on the measurement interval (i.e., measurement integration period) being used.

Table 1: N6900A/N7900A measurement accuracy and resolution vs. Measurement interval

This table is meant to provide an added error term when using shorter measurement intervals. We use 1 power line cycle (1 NPLC) as the reference point at the top of the table, for the measurement accuracy provided in our specifications. This is a result of averaging 3,255 single samples together. By doing this we have effectively spread the measurement system noise over a greater band and filtered it out by the averaging. For voltage measurements the effective resolution is over 20 bits.

Note now at the bottom of the table there is the row for one point averaged. It is for 0.003 NPLCs, which is 5 microseconds, the sampling period of the digitizer in our DC source. For a single sample the effective measurement resolution is now 12.3 bits for voltage. Note also we provide an accuracy error adder term of 0.02%. This is taking into account the measurement repeatability affecting the accuracy.

A convenient expression for converting from number of bits to dB of signal to noise (SNR) for a digitizer is given by:

SNR (dB) = 6.02 x n (# of bits) + 1.76

The 12.3 bits of effective resolution equates to 75.8 dB of SNR, which is very much in line with what to expect from a wide band, high speed digitizing measurement system like what is provided in this product family.
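The conversion (and its inverse, going from a measured SNR back to effective bits) is a one-liner; here it is as a small Python sketch:

```python
def bits_to_snr_db(n_bits):
    """Ideal SNR in dB for a given number of effective bits."""
    return 6.02 * n_bits + 1.76

def snr_db_to_bits(snr_db):
    """Effective number of bits implied by a measured SNR in dB."""
    return (snr_db - 1.76) / 6.02

print(bits_to_snr_db(12.3))   # ~75.8 dB, matching the value above
```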

As previously mentioned the effective measurement resolution is over 20 bits for a 1 NPLC measurement interval. This actually happens to be greater than the actual ADC used. While there is less resolution when using shorter measurement intervals, conversely greater resolution can be achieved by using longer measurement intervals, which I expect to talk more about in a future posting here on “Watt’s Up?”!

In the meantime this is just one more example of how we’re trying to do a better job specifying our products to make them more useful and applicable in ascertaining what their true performance will be in one’s end application.

Sunday, June 30, 2013

What is Command Processing Time?

Hello everybody,

We have a new intern here at Agilent Power & Energy HQ named Patrick.  Gary, Patrick, and I have been having a philosophical debate on what the term command processing time means.  This is a very important number for many of our customers since it tells them what kind of throughput they can get out of our test equipment.  A fast command processing time allows you to reduce your test times and therefore increase your throughput.  The question that we have been debating is:  what is command processing time and how can we measure it?  We have been discussing three scenarios.   Let’s go through them.

The first option is the amount of time that it takes the processor to take one command off the bus so that it can get to the next command.  This tells you how quickly you can send commands to the instrument.  The only issue with this is that some instruments have a buffer so it is not actually “processing” the command, just bringing it into the buffer and letting you send the next command.  Obviously this is useful but it really does not address the throughput question.  This is pretty easy to test by sending a command in a loop and timing it.  You record the time before the command is sent and the time after the loop and then divide by the number of loops you executed.  This would yield a pretty good approximation of the time.
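Here is a minimal Python sketch of that loop-timing approach. The send_command argument stands in for your actual instrument I/O (for example, a VISA write); to keep the sketch self-contained, the example below just appends to a list instead of talking to real hardware:

```python
import time

def avg_command_time(send_command, command, n=1000):
    """Time n sends of the same command and return the average per command."""
    start = time.perf_counter()
    for _ in range(n):
        send_command(command)
    return (time.perf_counter() - start) / n

# Stand-in "instrument": with real hardware you would pass the write method
# of your instrument session here instead.
sent = []
t = avg_command_time(sent.append, "VOLT 5", n=100)
print(len(sent), t)
```

Note this measures send time plus whatever buffering the instrument does, which is exactly the caveat raised above.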

The second option is the amount of time from when the instrument receives a command until it starts performing the action.  I believe that this is what we list in our manuals for the Command Processing Time Supplemental Characteristic.  This does address the throughput issue.  This is also easy for us at Agilent to measure.  We have a breakout for GPIB that allows us to monitor the attention line.  The test that we did was send a VOLT 5 to the instrument.  We looked at the GPIB attention line.  The time from when the attention line toggles until the power supply starts slewing the voltage up would be our command processing time (measured with an always awesome Agilent Oscilloscope).  This is what I consider to be the command processing time.

The third option includes what I spoke about in the last paragraph but also includes the slewing of the voltage.  The processing time would be the time that it takes to take the command and complete all the actions associated with it (for example settling at 5 volts after being sent a VOLT 5 command).  I do not think that this is a bad option but we have a Supplemental Characteristic for voltage rise time that addresses the slewing of the voltage.   The test method would be the same as above using an oscilloscope but watching for where the voltage settles at five volts.

What do you, our readers and customers think the correct interpretation of command processing time is?  Also, please stay tuned for a future installment where we try to figure out what the quickest interface is: LAN, USB, or GPIB.

Thursday, June 20, 2013

How can I measure output impedance of a DC power supply?

In my last posting “DC power supply output impedance characteristics”, I explained what the output impedance characteristics of a DC power supply were like for both its constant voltage (CV) and constant current (CC) modes of operation. I also shared an example of what power supply output impedance is useful for. But how does one go about measuring the output impedance of a DC power supply over frequency, if and when needed?

There are a number of different approaches that can be taken, but these days perhaps the most practical is to use a good network analyzer that will operate at low frequencies, ranging from 10 Hz up to 1 MHz, or greater, depending on your needs. Even when using a network analyzer as your starting point there are still quite a few different variations that can be taken.

Measuring the output impedance requires injecting a disturbance at the particular frequency the network analyzer is measuring at. This signal is furnished by the network analyzer but virtually always needs some amount of transformation to be useful. Measuring the output impedance of a voltage source favors driving a current signal disturbance into the output. Conversely, measuring the output impedance of a current source favors driving a voltage signal disturbance into the output. The two setup examples later in this post use different methods for injecting the disturbance.

The reference input “R” of the network analyzer is then used to measure the current while the second input “A” or “T” is used to measure the voltage on the output of the power supply being characterized. Thus the relative gain being measured by the network analyzer is the impedance, based on:
Zout = Vout/Iout = (A or T)/R
The output voltage and current signals need to be compatible with the measurement inputs on the network analyzer. This means a voltage divider probe may be needed for the voltage measurement, depending on the voltage level, and a resistor or current probe will be needed to convert the current into an appropriate voltage signal. A key consideration here is appropriate scaling constants need to be factored in, based on the gain or attenuation of the voltage and current probes being used, so that the impedance reading is correct.
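To make the scaling-constant point concrete, here is a small Python sketch. The 10:1 voltage probe and 1 ohm current-sense resistor are hypothetical values chosen for illustration:

```python
def zout_from_ratio(ratio, v_probe_attenuation, i_sense_ohms):
    """Convert the analyzer's (A or T)/R reading into an impedance.

    The voltage channel sees Vout / v_probe_attenuation and the current
    channel sees Iout * i_sense_ohms, so the raw ratio must be scaled by
    both factors to recover Zout = Vout / Iout.
    """
    return ratio * v_probe_attenuation * i_sense_ohms

# Example: 10:1 probe, 1 ohm sense resistor, raw analyzer ratio of 0.005
print(zout_from_ratio(0.005, 10.0, 1.0))   # 0.05 ohm
```

Many analyzers can apply this scaling internally, but it is worth sanity-checking the math either way.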

Figure 1: DC power supply output impedance measurement with the Agilent E5061B

One example setup using the Agilent E5061B network analyzer is shown in Figure 1, taken from page 15 of an Agilent E5061B application note on testing DC-DC converters, referenced below. Here the disturbance is injected through an isolation transformer coupled across the power supply output through a DC blocking capacitor and a 1 ohm resistor. The 1 ohm resistor does double duty: it changes the voltage disturbance into a current disturbance, and it provides a means for the “R” input to measure the current. The “T” input then directly measures the DC/DC converter’s (or power supply’s) output voltage.

A second, somewhat more elaborate variation of this arrangement, based on a 4395A network analyzer (now discontinued), has been posted by a colleague here on our Agilent Power Supply forum: “Output Impedance Measurement on Agilent Power Supplies”. In this setup the disturbance signal from the network analyzer is instead fed into the analog input of an Agilent N3306A electronic load. The N3306A in turn creates the current disturbance on the output of the DC power supply under test as well as providing any desired DC loading on the power supply’s output. The N3306A can also be used to further boost the level of disturbance if needed. Finally, an N278xB active current probe and matching N2779A probe amplifier are used to easily measure the current signal.

Hopefully this will get you on your way if the need for measuring power supply output impedance ever arises!

Reference: “Evaluating DC-DC Converters and PDN with the E5061B LF-RF Network Analyzer” Application Note, publication number 5990-5902EN (click here to access)

Sunday, March 31, 2013

Remote sensing can affect load regulation performance

Back in September of 2011, I posted about what load effect was (also known as load regulation) and how it affected testing (see http://powersupplyblog.tm.agilent.com/2011/09/what-is-load-effect-and-how-does-it.html). The voltage load effect specification tells you the maximum amount you can expect the output voltage to change when you change the load current. In addition to the voltage load effect specification, some power supplies have an additional statement in the remote sensing capabilities section about changes to the voltage load effect spec when using remote sensing. These changes are sometimes referred to as load regulation degradation.

For example, the Agilent 6642A power supply (20 V, 10 A, 200 W) has a voltage load regulation specification of 2 mV. This means that for any load current change between 0 A and 10 A, the output voltage will change by no more than 2 mV. The 6642A also has a remote sensing capability spec (really, a “supplemental characteristic”). It says that each load lead is allowed to drop up to half the rated output voltage. The rated output voltage for the 6642A is 20 V, so half is 10 V meaning when remote sensing, you can drop up to 10 V on each load lead. Also included in the 6642A remote sensing capability spec is a statement about load regulation. It says that for each 1 volt change in the + output lead, you must add 3 mV to the load regulation spec. For example, if you were remote sensing and you had 0.1 ohms of resistance in your + output load lead (this could be due to the total resistance of the wire, connectors, and any relays you may have in series with the + output terminal) and you were running 10 A through the 0.1 ohms, you would have a voltage drop of 10 A x 0.1 ohms = 1 V on the + output lead. This would add 3 mV to the load regulation spec of 2 mV for a total of 5 mV.
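That arithmetic generalizes easily. Here is a short Python sketch using the same 6642A-style numbers from the example; the function and argument names are my own illustration, not anything from the datasheet:

```python
def total_load_reg(base_spec_v, plus_lead_ohms, current_a, adder_v_per_volt):
    """Base load regulation spec plus the remote-sense degradation adder."""
    lead_drop = current_a * plus_lead_ohms        # volts dropped in the + lead
    return base_spec_v + lead_drop * adder_v_per_volt

# 2 mV base spec, 0.1 ohm + lead, 10 A, 3 mV added per volt of lead drop:
print(total_load_reg(0.002, 0.1, 10.0, 0.003))   # ~0.005 V, i.e. 5 mV total
```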

There are other ways in which this effect can be shown in specifications. For example, when remote sensing, the Agilent 667xA Series of power supplies expresses the load regulation degradation as a formula that includes the voltage drop in the load leads, the resistance in the sense leads, and the voltage rating of the power supply. Output voltage regulation is affected by these parameters because the sense leads are part of the power supply’s feedback circuit, and these formulas describe that effect:

One more example of a way in which this effect can be shown in specifications is illustrated by the Agilent N6752A. Its load effect specification is 2 mV and goes on to say “Applies for any output load change, with a maximum load-lead drop of 1 V/lead”. So the effect of load-lead drop is already included in the load effect spec. Then, the remote sense capability section simply says that the outputs can maintain specifications with up to a 1 V drop per load lead.

When you are choosing a power supply, if you want the output voltage to be very well regulated at your load, be sure to consider all of the specifications that will affect the voltage. Be aware that as your load current  changes, the voltage can change as described by the load effect spec. Additionally, if you use remote sensing, the load effect could be more pronounced as described in the remote sensing capability section (or elsewhere). Be sure to choose a power supply that is fully specified so you are not surprised by these effects when they occur.

Monday, January 23, 2012

Six of seven new Agilent power supplies are autorangers, but what is an autoranger, anyway?

In this blog, I avoid writing posts that are heavily product focused since my intention is generally to provide education and interesting information about power products instead of simply promoting our products. However, when we (Agilent) come out with new power products, I think it is appropriate for me to announce them here. So I will tell you about the latest products announced last week, but I also can’t resist writing about some technical aspect related to these products, so I chose to write about autorangers. But first…..a word from our sponsor….

From last week’s press release, Agilent Technologies “introduced seven high-power modules for its popular N6700 modular power system. The new modules expand the ability of test-system integrators and R&D engineers to deliver multiple channels of high power (up to 500 watts) to devices under test.” Here is a link to the entire press release:

I honestly think these new power modules are really great additions to the family of N6700 power products we continue to build upon. We have several mainframes in which these power modules can be installed and now offer 34 different power modules that address applications in R&D and in integrated test systems. Oooooppps, I slipped into product promotion mode there for just a short time, but it was because I really believe in this family of products….I hope you will forgive me!

OK, now on to the more fun stuff! Since six of these seven new power modules are autorangers, let’s explore what an autoranger is. Agilent has been designing and selling autorangers since the 1970s (we were Hewlett-Packard back then) starting with the HP 6002A. To understand what an autoranger is, it will be useful to start with an understanding of what a power supply output characteristic is.

Power supply output characteristic
A power supply output characteristic shows the borders of an area containing all valid voltage and current combinations for that particular output. Any voltage-current combination that is inside the output characteristic is a valid operating point for that power supply.

There are three main types of power supply output characteristics: rectangular, multiple-range, and autoranging. The rectangular output characteristic is the most common.

Rectangular output characteristic
When shown on a voltage-current graph, it should be no surprise that a rectangular output characteristic is shaped like a rectangle. See Figure 1. Maximum power is produced at a single point coincident with the maximum voltage and maximum current values. For example, a 20 V, 5 A, 100 W power supply has a rectangular output characteristic. The voltage can be set to any value from 0 to 20 V, and the current can be set to any value from 0 to 5 A. Since 20 V x 5 A = 100 W, there is a singular maximum power point that occurs at the maximum voltage and current settings.

Multiple-range output characteristic
When shown on a voltage-current graph, a multiple-range output characteristic looks like several overlapping rectangular output characteristics. Consequently, its maximum power point occurs at multiple voltage-current combinations. Figure 2 shows an example of a multiple-range output characteristic with two ranges also known as a dual-range output characteristic. A power supply with this type of output characteristic has extended output range capabilities when compared to a power supply with a rectangular output characteristic; it can cover more voltage-current combinations without the additional expense, size, and weight of a power supply of higher power. So, even though you can set voltages up to Vmax and currents up to Imax, the combination Vmax/Imax is not a valid operating point. That point is beyond the power capability of the power supply and it is outside the operating characteristic.

Autoranging output characteristic
When shown on a voltage-current graph, an autoranging output characteristic looks like an infinite number of overlapping rectangular output characteristics. A constant power curve (V = P / I = K / I, a hyperbola) connects Pmax occurring at (I1, Vmax) with Pmax occurring at (Imax, V1). See Figure 3.

An autoranger is a power supply that has an autoranging output characteristic. While an autoranger can produce voltage Vmax and current Imax, it cannot produce them at the same time. For example, one of the new power supplies just released by Agilent is the N6755A with maximum ratings of 20 V, 50 A, 500 W. You can tell it does not have a rectangular output characteristic since Vmax x Imax (= 1000 W) is not equal to Pmax (500 W). So you can’t get 20 V and 50 A out at the same time. You can’t tell just from the ratings if the output characteristic is multiple-range or autoranging, but a quick look at the documentation reveals that the N6755A is an autoranger. Figure 4 shows its output characteristic.
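One way to make the autoranging characteristic concrete is a simple validity check on a proposed (V, I) operating point. This Python sketch uses the N6755A-style ratings quoted above purely as illustrative numbers:

```python
def point_is_valid(v, i, v_max, i_max, p_max):
    """True if (v, i) lies inside an autoranging output characteristic."""
    return 0 <= v <= v_max and 0 <= i <= i_max and v * i <= p_max

# N6755A-style ratings: 20 V, 50 A, 500 W
print(point_is_valid(20, 25, 20, 50, 500))   # True: on the constant-power curve
print(point_is_valid(20, 50, 20, 50, 500))   # False: 20 V x 50 A = 1000 W > 500 W
```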

For applications that require a large range of output voltages and currents without a corresponding increase in power, an autoranger is a great choice. Here are some example applications where using an autoranger provides an advantage:
• The device under test (DUT) requires a wide range of input voltages and currents, all at roughly the same power level. For example, at maximum power out, a DC/DC converter with a nominal input voltage of 24 V consumes a relatively constant power even though its input voltage can vary from 14 V to 40 V. During testing, this wide range of input voltages creates a correspondingly wide range of input currents even though the power is not changing much.
• There are a variety of different DUTs of similar power consumption, but different voltage and current requirements. Again, different DC/DC converters in the same power family can have nominal input voltages of 12 V, 24 V, or 48 V, resulting in input voltages as low as 9 V (requires a large current), and as high as 72 V (requires a small current). The large voltage and current are both needed, but not at the same time.
• A known change is coming for the DC input requirements without a corresponding change in input power. For example, the input voltage on automotive accessories could be changing from 12 V nominal to 42 V nominal, but the input power requirements will not necessarily change.
• Extra margin on input voltage and current is needed, especially if future test changes are anticipated, but the details are not presently known.

Friday, October 7, 2011

What is line effect and how does it affect my testing?

Line effect is a power supply specification (also known as line regulation or source effect) that describes how well the power supply can maintain its steady-state output setting when the AC input line voltage changes. More formally, it specifies the maximum change in steady-state DC output voltage (or current) resulting from a specified change in the AC input line voltage with all other influence quantities maintained constant. So, when a power supply is regulating its output voltage in CV (constant voltage) mode, this specification tells you how much the voltage can change when the AC input voltage changes. Here is an example:

Let’s say the voltage line effect specification for a 20 V, 5 A power supply is 1 mV and is specified for any line change within ratings. And let’s say that the AC input line voltage range for this power supply for a nominal 120 Vac line is -13% to +6% (104.4 Vac to 127.2 Vac). This means for any AC input line voltage change within the rating of the supply, the output voltage will not change by more than 1 mV. For example, if the power supply is set to 10 V, the actual output may measure 9.999 V at low line (104.4 Vac). (Note that the difference between the setting and the actual output voltage is a different specification called programming accuracy.) If you then increase the AC input line voltage from low line (104.4 Vac) to high line (127.2 Vac), the line effect specification guarantees that the output voltage will not change by more than 1 mV, so it will be somewhere between 9.998 V and 10.000 V. So if the actual output voltage started at 9.999 V at low line and measured 9.9994 V at high line, the line effect for this output when set for 10 V measures 0.4 mV (9.9994 – 9.999), well within the 1 mV specification. You must make the second voltage measurement immediately following the line voltage change to avoid capturing any short-term drift effects.

And what does “with all other influence quantities maintained constant” mean? Things like temperature and output loading can affect the output parameter, so these things must be held constant in order to see only the effect of the line change. The effects on the power supply output of changes in each of these influencing quantities (temperature, output load) are described in different specifications.

Most performance power supplies have line effect specifications of about 1 mV or less. A lower performance model may have a line effect specification of up to 10 mV or more. Power supplies with higher maximum voltage ratings and higher maximum power ratings typically have higher line effect specifications.

If you have an application where maintaining an exact voltage at your DUT is critical and your AC input line can vary throughout the day, you will want to use a power supply with a low line effect specification. If changes in the voltage at your DUT are less critical to you, most power supplies will perform well for your application regardless of line voltage behavior.

Wednesday, September 21, 2011

What is load effect and how does it affect my testing?

Load effect is a power supply specification (also known as load regulation) that describes how well the power supply can maintain its steady-state output setting when the load changes. More formally, it specifies the maximum change in steady-state DC output voltage (or current) resulting from a specified change in the load current (or voltage), with all other influence quantities maintained constant. So, when a power supply is regulating its output voltage in CV (constant voltage) mode, this specification tells you how much the voltage can change when the current changes. Here is an example:

Let’s say the voltage load effect specification for a 20 V, 5 A power supply is 2 mV and is specified for any load change. This means for any current change within the rating of the supply (in this case, up to 5 A), the output voltage will not change by more than 2 mV. For example, if the power supply is set to 10 V, the actual output may measure 9.999 V with no load (0 A). (Note that the difference between the setting and the actual output voltage is a different specification called programming accuracy.) If you then increase the current from 0 A to a full load condition of 5 A, the load effect specification guarantees that the output voltage will not change by more than 2 mV, so it will be somewhere between 9.997 V and 10.001 V. So if the actual output voltage started at 9.999 V with a 0 A load and measured 9.9982 V with a 5 A load, the load effect for this output when set for 10 V measures 0.8 mV (9.999 – 9.9982), well within the 2 mV specification. You must make the second voltage measurement immediately following the load current change to avoid capturing any short-term drift effects.
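The same kind of check applies to the load-effect example above. Here is a minimal sketch, written as a small helper; the readings are the illustrative numbers from the text, not data from a real supply:

```python
# Load-effect check from the example above (illustrative values only).

def load_effect(v_no_load, v_full_load, spec):
    """Return the measured load effect in volts and whether it is within spec."""
    effect = abs(v_no_load - v_full_load)
    return effect, effect <= spec

SPEC = 0.002  # 2 mV load effect specification (V)

effect, ok = load_effect(v_no_load=9.999, v_full_load=9.9982, spec=SPEC)
print(f"Measured load effect: {effect * 1000:.1f} mV (within spec: {ok})")
```

This reproduces the 0.8 mV result from the example, comfortably within the 2 mV specification.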

In the above example, the specified change in load current was “any load change”. Of course, it is implied that the load change is within the output ratings of the supply. You cannot change the output current from 0 A to 100 A on a 5 A power supply. Some load effect specifications state that the load change is a 50% change (e.g., 2.5 A to 5 A) while others may say 10% to 90% of full load (e.g., 0.5 A to 4.5 A).
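The three load-step conventions mentioned above translate into different current steps depending on the output's rating. This sketch computes the steps for the 5 A example; the conventions come from the text, while the code itself is illustrative:

```python
# Load-step endpoints for the conventions described above, for a 5 A output.

I_RATED = 5.0  # rated output current (A)

conventions = {
    "any load change (no load to full load)": (0.0, I_RATED),
    "50% change (half load to full load)": (0.5 * I_RATED, I_RATED),
    "10% to 90% of full load": (0.1 * I_RATED, 0.9 * I_RATED),
}

for name, (i_lo, i_hi) in conventions.items():
    print(f"{name}: {i_lo:.1f} A -> {i_hi:.1f} A")
```

For the 5 A supply this yields the steps given in the text: 0.0 A to 5.0 A, 2.5 A to 5.0 A, and 0.5 A to 4.5 A.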

And what does “with all other influence quantities maintained constant” mean? Things like temperature and the AC line input voltage can affect the output parameter, so these things must be held constant in order to see only the effect of the load change. The effects on the power supply output of changes in each of these influencing quantities (temperature, AC line input voltage) are described in different specifications.

Most performance power supplies have load effect specifications in the range of a few hundred µV up to a few mV. A lower performance model may have a load effect specification of between 10 mV and 100 mV. Power supplies with higher maximum voltage ratings and higher maximum power ratings typically have higher load effect specifications.

If you have an application where maintaining an exact voltage at your DUT is critical and your DUT draws different amounts of current at different times, you will want to use a power supply with a low load effect specification. If you can tolerate small changes in the voltage at your DUT as its current changes, most power supplies will perform well for your application.