Friday, August 31, 2012

To autoscale, or not to autoscale: that is the question!

While the primary focus of this blog is power, this post is about a topic that applies beyond just power: autoscale. I want our readers to comment on this topic:

 Should a test equipment user use the autoscale button or not? If so, why? If not, why not?

How is autoscale related to power, you ask? One of our Agilent power products, the N6705B DC Power Analyzer, has a built-in scope-like function with an autoscale button. The built-in scope is useful for measuring things like dynamic current flowing into a device, or for looking at the response to an arbitrary waveform that can also be generated with the same product. To autoscale in Scope View mode, you push the Autoscale button on the N6705B front panel to automatically scale the vertical and horizontal axes so the waveforms appear on the scope screen.

While this may seem like a convenient feature, there are times when using autoscale on any instrument (like an oscilloscope) does not produce the display you want. Some signals cannot be captured with an autoscale feature at all: the signal must be repetitive and typically must meet certain minimum values of voltage, frequency, and duty cycle. But more importantly, using autoscale eliminates having to think about the signals you are trying to observe. While this may seem like an advantage, I think it makes us lazy and less likely to understand what we are doing.

These days, we have grown too accustomed to just pushing a button to accomplish a task. We push a button to heat our food in a microwave. We push a button to cool our homes with the central air conditioning thermostat. We push a button to turn on our computers, get food, candy, and drinks from vending machines, get cash from an ATM, and start our (modern) cars. We push buttons all day long! But when it comes to test and measurement equipment, we are trying to gain insight into the circuit or device we are analyzing. And I believe that insight starts with thinking about the waveforms we are trying to display: what the waveshape is supposed to look like… how to trigger on the signal… what the approximate maximum voltage (or current or power) is… whether this is a repetitive waveform or a single event. Thinking about these things brings us closer to the insight we want to glean from the signals we examine. And ultimately, it is that insight that we seek. So just pushing a button to get a signal on a scope screen provides little insight; in fact, it could bias our thinking into mistakenly believing that what we see is correct, simply because we did not bother to think about what the waveforms were supposed to look like ahead of time!

So I say “no”, a test equipment user should not use the autoscale button for the reasons stated above. In fact, for years, I have trained new engineers and some of our sales people, and I have been known to say on more than one occasion, “No self-respecting engineer would ever hit the autoscale button!” Of course, I am only half-serious about this statement, but I think it supports my view that it is useful to think about what you expect on the scope before just viewing the waveforms. Of course, you should ALWAYS think about whether or not what you see on the display is expected and makes sense. After all, why else would you look at the signals?

What do YOU think???

Please comment below.

Power Supply Resolution versus Accuracy



One of the questions we have received on the support team quite a few times, and something that confused me when I started at Agilent, is the difference between our resolution supplemental characteristic and our accuracy specification. I sat down with my colleague Russell to put together a simple explanation of the differences.

If you look at our power supply offering, there is always an accuracy specification and a resolution supplemental characteristic for both programming and measurement. For the purposes of this blog post, we are going to look at the programming accuracy (0.06% + 19 mV) and programming resolution (3.5 mV) of the N6752A High Performance DC Power Module. Please note that the same explanations apply to the measurement side as well, but for the sake of brevity we will stick to programming in our example.

Let’s start by talking about resolution. Our power supplies use Digital-to-Analog Converters (DACs) to convert the user-entered settings into an analog programming voltage that interacts with the control loop of the power supply to set the output. The resolution supplemental characteristic represents one single count of the DAC, also known as the Least Significant Bit (LSB). What this means for our end user is that the smallest step that can be made between two settings on the unit is the programming resolution. In our example, the N6752A can be set to 0.9975 V, 1.001 V, 1.0045 V, etc. These are all multiples of 3.5 mV, and any setting that falls between two DAC counts will be rounded to the nearest count. If the user tries to set the N6752A to 1 V, the power supply will actually be set to 1.001 V, since that is the nearest count. This is known as quantization error.
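
To make the quantization concrete, here is a minimal sketch in Python of the rounding described above. It assumes an idealized DAC whose counts are exact multiples of the 3.5 mV resolution; it is an illustration, not the instrument’s actual firmware:

```python
RESOLUTION = 0.0035  # N6752A programming resolution: volts per DAC count

def quantize_setting(requested_volts):
    """Round a requested setting to the nearest DAC count."""
    counts = round(requested_volts / RESOLUTION)
    return counts * RESOLUTION

# A request for exactly 1 V lands on the nearest count, 286 x 3.5 mV:
print(f"{quantize_setting(1.0):.4f} V")  # -> 1.0010 V
```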

The accuracy specification always includes an error term for the quantization error, typically half of the resolution supplemental characteristic. The accuracy specification also includes many other factors, such as DAC accuracy, DAC linearity, offset error of operational amplifiers, gain errors of the feedback loops, and temperature drift of components. The accuracy will therefore always be worse than the resolution, since it includes all of the factors listed above on top of the quantization error term. You can see this clearly in the N6752A, where the resolution is 3.5 mV while the offset term of the accuracy specification alone, not counting the gain term, is 19 mV, more than 5 times the resolution.
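
As a back-of-the-envelope illustration of how the two numbers compare, here is the worst-case error bound for a 1 V setting computed from the spec above; this is a sketch of the arithmetic only, not a formal uncertainty analysis:

```python
GAIN_ERROR = 0.0006   # 0.06% of the programmed setting
OFFSET_ERROR = 0.019  # 19 mV offset term of the accuracy spec
RESOLUTION = 0.0035   # 3.5 mV per DAC count

def worst_case_error(setting_volts):
    """Worst-case programming error allowed by the accuracy specification."""
    return GAIN_ERROR * setting_volts + OFFSET_ERROR

# At a 1 V setting, the accuracy bound dwarfs the half-LSB quantization error:
print(f"accuracy bound: +/- {worst_case_error(1.0) * 1e3:.1f} mV")  # 19.6 mV
print(f"quantization:   +/- {RESOLUTION / 2 * 1e3:.2f} mV")         # 1.75 mV
```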

I hope that this was helpful. If there are any questions, please leave comments here or on our forum at the Agilent Discussion Forums.

Thursday, August 23, 2012

Early Power Transistor Evolution, Part 1, Germanium


We recently completed our “Test of Time” power supply contest. Contestants told us about how they were using their Harrison Labs/HP/Agilent DC power supplies, and the older the power supply, the better. It was pretty fascinating to see the many innovative ways these power supplies were being used. It was also fascinating to see so many “vintage” power supplies still functional and in regular use after many decades, several of them even vacuum tube based!

One key component found in almost all power supplies from the mid-1950s on is, no surprise, the power transistor. Shortly after manufacturers were able to make reliable and reasonably rugged transistors in the mid-1950s, they also developed transistors that could handle higher currents and power. Along with higher power came the need to dissipate that power, which led to some interesting packaging, some familiar and some less so. Hunting through my “archives,” I managed to locate some early power transistors. Reviewing their characteristics, it was quite enlightening to see how they evolved to become better, faster, and cheaper! I also found it quite challenging to find good, detailed, and, most especially, non-conflicting information on these early devices.

Germanium was the first semiconducting material widely adopted for transistors, power and otherwise. One early power transistor I came across was the 2N174, shown in Figure 1.



Figure 1: 2N174 Power Transistor

Following are some key maximum ratings on the 2N174 power transistor:

  • VCEO = -55 V
  • VCBO = -80 V
  • VEBO = -60 V
  • IC = 15 A
  • PD = 150 W
  • hfe = 25
  • fT = 10 kHz
  • Thermal resistance = 0.35 °C/W
  • TJ = 100 °C
  • Package: TO-36
  • Polarity: PNP
  • Material/process: Germanium alloy junction

The alloy junction process provided a reliable means to mass-produce transistors. Most of these early transistors are PNP, with P-type “pellets” or “dots,” typically of indium, alloyed to an N-type germanium base wafer. This process favored PNP production, as the indium melted at a far lower temperature than the germanium base. Still, it was a relatively slow and expensive process, as the transistors were basically manufactured one at a time. These early alloy junction transistors were not passivated and therefore needed to be hermetically packaged to prevent contamination and degradation. Often referred to as a “door knob” package, the TO-36 stud-mount package was quite a piece of work and was no doubt expensive as a result. It had a pretty impressive junction-to-case thermal resistance, but given the maximum junction temperature of just 100 °C, low thermal resistance was necessary to operate the transistor at a reasonable power level. The low maximum operating temperature of germanium was one of its most limiting attributes, especially for power applications. The transition frequency, fT, of just 10 kHz was also extremely low. This is the frequency at which the current gain, hfe, drops to 1, so the device ceases to be an effective amplifier. The 2N174 appears to have originated in the late 1950s.
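
To see why that low thermal resistance mattered so much with a 100 °C ceiling, here is a rough headroom calculation; the 60 °C case temperature is my own assumed operating condition for illustration, not a figure from the 2N174 datasheet:

```python
THETA_JC = 0.35  # junction-to-case thermal resistance, deg C per watt
TJ_MAX = 100.0   # germanium's maximum junction temperature, deg C

def max_dissipation(case_temp_c):
    """Dissipation that raises the junction from case temperature to TJ_MAX."""
    return (TJ_MAX - case_temp_c) / THETA_JC

# With the case held at an assumed 60 deg C by the heat sink:
print(f"{max_dissipation(60.0):.0f} W of headroom")  # ~114 W, below the 150 W rating
```

Even with that impressively low thermal resistance, a warm case eats into the 150 W rating quickly; a silicon device with a higher junction temperature limit would have far more margin in the same package.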

Another early power transistor we used in our HP 855B bench power supplies is the 2N1532, as shown in Figure 2.



Figure 2: 2N1532 power transistors used in a Harrison Labs Model 855B power supply.

Following are some key maximum ratings on the 2N1532 power transistor:

  • VCEO = -50 V
  • VCBO = -100 V
  • VEBO = -50 V
  • IC = 5 A
  • PD = 94 W
  • hfe = 20 to 40
  • fT = 200 kHz
  • Thermal resistance = 0.8 °C/W
  • TJ = 100 °C
  • Package: TO-3
  • Polarity: PNP
  • Material/process: Germanium alloy junction

The 2N1532 is also a germanium PNP alloy junction power transistor, similar to a number of other power transistors of the time. It is packaged in the widely recognizable TO-3 diamond-shaped hermetic package. Being a much less complex case design, it must have been considerably less costly than the TO-36 package in Figure 1, and it has become one of the most ubiquitous hermetic power semiconductor packages of all time. To keep junction temperature rise down, the Harrison Labs Model 855B power supply used three 2N1532 transistors in its series regulator to deliver just 18 volts and 1.5 amps of output. It is no wonder these power supplies have stood the “Test of Time”: the transistors run significantly de-rated, at just a fraction of their maximum power, as sketched below. It is also noteworthy that the transition frequency of 200 kHz is 20 times that of the 2N174. This is one of the more questionable data points I found, but if it is accurate, then design and process improvements clearly contributed to the performance improvement. While date codes on some of the capacitors in this Model 855B place its manufacture in 1962, early germanium PNP power transistors in TO-3 packages like these also typically date back to the late 1950s.
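
Here is a rough sketch of that derating; the 26 V unregulated input rail is purely my assumption for illustration, as only the 18 V, 1.5 A output rating comes from the 855B:

```python
V_RAIL = 26.0   # assumed (hypothetical) unregulated input rail, volts
V_OUT = 18.0    # 855B rated output voltage
I_OUT = 1.5     # 855B rated output current, amps
PD_MAX = 94.0   # 2N1532 maximum dissipation, watts
N_PASS = 3      # series-pass transistors sharing the dissipation

# Power dropped across the series regulator, split among the transistors:
pd_each = (V_RAIL - V_OUT) * I_OUT / N_PASS
print(f"{pd_each:.1f} W each, about {pd_each / PD_MAX:.0%} of the 94 W rating")
```

Even if the real rail sat several volts higher, each transistor would still be loafing along at well under a tenth of its rating.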

While germanium transistors have much greater conductivity and lower forward and saturation voltage drops than silicon transistors, silicon ultimately won out, especially for power transistor applications. Stay tuned for part 2 in an upcoming posting, and discover how silicon evolved to rule the day for power transistors!

Tuesday, August 7, 2012

How Does an Electronic Load Regulate Its Input Voltage, Current, and Resistance?


In a sense, electronic loads are the antithesis of power supplies: they sink or absorb power while power supplies source it. In another sense they are very similar, in the way they regulate constant voltage (CV) or constant current (CC). When loading a DUT, which is inevitably some form of power source, conventional practice is to use CC loading for devices that are by nature voltage sources and, conversely, CV loading for devices that are by nature current sources. However, almost all electronic loads also feature constant resistance (CR) operation. Many real-world loads are resistive by nature, so it is often useful to test power sources meant to drive such devices with an electronic load operating in CR mode.

To understand how CC and CV modes work in an electronic load it is useful to first review a previous posting I wrote here, entitled “How Does a Power Supply Regulate It’s Output Voltage and Current?”. Again, the CC and CV modes are very similar in operation for both a power supply and an electronic load. An electronic load CC mode operation is depicted in Figure 1.



Figure 1: Electronic load circuit, constant current (CC) operation

The load, operating in CC mode, is loading the output of an external voltage source. The current amplifier regulates the electronic load’s input current by comparing the voltage on the current shunt against a reference voltage, which in turn regulates how hard the load FET is turned on. The corresponding I-V diagram for CC mode operation is shown in Figure 2. The operating point is where the output characteristic of the DUT voltage source intersects the constant current load line of the electronic load.



Figure 2: Electronic load I-V diagram, constant current (CC) operation
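
The intersection in Figure 2 is easy to compute for an idealized Thevenin source; the numbers below are hypothetical, chosen only to illustrate the operating point:

```python
def cc_operating_point(v_open, r_out, i_set):
    """Intersection of a Thevenin source line with a CC load line."""
    v = v_open - i_set * r_out  # the source droops by I x R under the fixed current
    return v, i_set

# A hypothetical 12 V source with 0.5 ohm output resistance, loaded at 2 A:
v, i = cc_operating_point(v_open=12.0, r_out=0.5, i_set=2.0)
print(f"{v:.1f} V at {i:.1f} A")  # 11.0 V at 2.0 A
```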

CV mode is very similar to CC mode in operation, as depicted in Figure 3. However, instead of monitoring the input current via a shunt voltage, a voltage control amplifier compares the load’s input voltage, usually through a voltage divider, against a reference voltage. When the input voltage signal reaches the reference value, the voltage amplifier turns the load FET on as hard as needed to clamp the voltage at the set level.



Figure 3: Electronic load circuit, constant voltage (CV) operation

A battery being charged is a real-world example of a CV load, typically charged from a constant current source. The corresponding I-V diagram for CV mode operation is depicted in Figure 4.




Figure 4: Electronic load I-V diagram, constant voltage (CV) operation
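
A minimal sketch of the battery-charging case above; the 4.2 V clamp and 2 A source are hypothetical values, typical of a single lithium-ion cell, used only for illustration:

```python
def cv_operating_point(i_source, v_set):
    """An ideal current source driving a CV load: the load clamps the
    voltage while the source fixes the current."""
    return v_set, i_source

v, i = cv_operating_point(i_source=2.0, v_set=4.2)
print(f"{v:.1f} V at {i:.1f} A, absorbing {v * i:.1f} W")  # 4.2 V, 2.0 A, 8.4 W
```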

But how does an electronic load’s CR mode work? This requires yet another configuration, as depicted in Figure 5. While CC and CV modes compare the current or voltage against a fixed reference value, in CR mode the control amplifier compares a scaled version of the input current against a scaled version of the input voltage, regulating the input so that their ratio, and hence the resistance, stays constant. With current sensing at 1 V/A and voltage sensing at 0.2 V/V, the amplifier forces 1 V/A × I to equal 0.2 V/V × V, so V/I = (1 V/A)/(0.2 V/V) = 5 ohms, the electronic load’s input resistance for the CR mode operation in Figure 5.



Figure 5: Electronic load circuit, constant resistance (CR) operation

An electronic load’s CR mode is well suited for loading a power source that is either a voltage or current source by nature. The corresponding I-V diagram for this CR mode for loading a voltage source is shown in Figure 6. Here the operating point is where the output voltage characteristic of the DUT voltage source intersects the input constant resistance characteristic of the load.



Figure 6: Electronic load I-V diagram, constant resistance (CR) operation
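
A short sketch tying Figures 5 and 6 together: the resistance value falls out of the two sense gains, and the operating point is the intersection with the source line; the 12 V, 0.5 ohm source is again a hypothetical example:

```python
K_I = 1.0  # current sense gain from Figure 5, volts per amp
K_V = 0.2  # voltage sense gain from Figure 5, volts per volt

# The amplifier forces K_I * I == K_V * V, so R = V / I = K_I / K_V:
r_set = K_I / K_V  # 5.0 ohms

def cr_operating_point(v_open, r_out, r_load):
    """Intersection of a Thevenin source line with a CR load line."""
    i = v_open / (r_out + r_load)
    return i * r_load, i

v, i = cr_operating_point(v_open=12.0, r_out=0.5, r_load=r_set)
print(f"R = {r_set:.1f} ohm -> {v:.2f} V at {i:.2f} A")  # 10.91 V at 2.18 A
```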

As we have seen, an electronic load is very similar in operation to a power supply in the way it regulates to maintain constant voltage or constant current at its input. However, many real-world loads exhibit other characteristics, with resistive being the most prevalent. As a result, almost all electronic loads can alternately regulate their input to maintain a constant resistance value, in addition to constant voltage and constant current.