Sunday, September 30, 2012

Power Supply Programming Examples

Hello everyone!

One of my responsibilities at Agilent is to oversee programming example generation for our new product introductions.  Programming examples are an area that we wish to improve upon.  Our goal is to provide a selection of programming examples that allow our customers to use the exciting new features of our products at introduction.

The first thing we want to do is make sure we provide examples our customers can actually use, so I researched the most popular programming languages that customers use with power supplies.  The list we came up with is VB.NET, LabVIEW, C#, and in some cases MATLAB.  In terms of IO libraries, we will use direct IO for everything; we will not be including any driver examples in this plan.  We will also provide a text file with the SCPI programs and an Agilent Command Expert sequence file.
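
The plan above targets VB.NET, LabVIEW, C#, and MATLAB, but as a quick illustration of the direct-IO SCPI style, here is a minimal Python sketch. The VOLT/CURR/OUTP commands follow common SCPI power supply syntax, and the (@1) channel-list notation is typical of Agilent modular supplies; the exact commands for a given model should always be checked against its programming guide:

```python
# Sketch of a direct-IO SCPI sequence for a power supply.
# Command syntax is illustrative; verify against your
# instrument's programming guide before use.

def scpi_setup(voltage, current_limit, channel=1):
    """Return the SCPI commands to program and enable one output."""
    ch = f"(@{channel})"
    return [
        f"VOLT {voltage},{ch}",        # set the output voltage
        f"CURR {current_limit},{ch}",  # set the current limit
        f"OUTP ON,{ch}",               # enable the output
    ]

commands = scpi_setup(5.0, 1.0)
# Each command would then be written to the instrument over
# direct IO (for example a VISA write or a raw socket send).
for cmd in commands:
    print(cmd)
```

A Command Expert sequence or a VB.NET/C# example would send the same command strings; only the IO plumbing changes.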
I would like to solicit some feedback on this plan.  What do you think?  Can we improve it?  Are any programming languages missing?  What do you look for in a programming example?

Please leave comments.

Wednesday, September 26, 2012

Battery-killing cell phone apps? – Part 2

Back on May 25, 2012, I posted about mobile device users avoiding security apps because they think the apps run down their batteries too quickly (read that post here). I also mentioned that a member of the Anti-Malware Testing Standards Organization (AMTSO) is using Agilent’s N6705B DC Power Analyzer to evaluate just how much the security apps affect battery run time and that the results would not be available for a few months. Well, the results are in and guess what? Which security app you choose does not make much difference in your battery run time.

On average, they reported that using a security app reduces battery run time by only about 2%, which translates into less than 30 minutes of lost battery life per day. The study went on to explain that the differences in performance from one mobile security product to another were small (they tested 13 products, each from a different vendor). I was amused by the author’s comment that they were “not providing a ranking” because it “could get misused by marketing departments”. Indeed!

Here is a link to the report:

The report shows a picture of Agilent’s N6705B DC Power Analyzer as the measuring device. They used this product because “This high-precision instrument can measure battery drain exactly”. A screen shot of Agilent’s 14585A Control and Analysis Software for the DC Power Analyzer was also shown in the report. The software allowed them to evaluate power consumption while performing various mobile phone tasks, such as making phone calls, viewing pictures, browsing websites, watching YouTube (I wonder if they watched any of the DC Power Analyzer videos we have posted!), watching locally stored videos, receiving and sending mails, and opening documents.

If the N6705B DC Power Analyzer and 14585A Control and Analysis Software can evaluate power consumption for all of those things, just think of what it could do for you! Check out Ed’s post from earlier this week for some of those things:

Monday, September 24, 2012

Optimizing Mobile Device Battery Run-time Seminars

On many occasions in the past, both I and my colleague Gary have written here about measuring, evaluating, and optimizing the battery life of mobile wireless battery-powered devices. There is no question that, as all kinds of new and innovative capabilities and devices are introduced, battery life continues to become an even greater challenge.

I recently gave a two-part webcast entitled “Optimize Wireless Device Battery Run-time”. In the first part, “Innovative Measurements for Greater Insights,” a variety of measurement techniques are employed on a number of different wireless devices to illustrate how these devices operate and draw power from their batteries over time, and in turn how to go about making and analyzing the measurements to improve a device’s battery run-time. Some key points brought out in this first part include:
  • Mobile devices operate in short bursts of activity to conserve power. The resulting current drain is pulsed and spans a wide dynamic range, which can be challenging for much traditional equipment to measure accurately.
  • A high dynamic range of measurement is needed not only for amplitude but also on the time axis, for gaining deeper insight into optimizing a device’s battery run-time.
  • Over long periods of time, a wireless device’s activity tends to be random in nature. Analyzing long-term current drain in distribution plots can quickly and concisely quantify currents related to specific activities and sub-circuits that would otherwise be difficult to observe directly in a data log.
  • The battery’s characteristics influence the current and power drawn by the device. When powering the device from something other than its battery, that source can be a significant source of error in testing if it does not produce results comparable to those obtained with the actual battery.
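
As a back-of-the-envelope illustration of the pulsed-drain behavior described above, here is a minimal sketch; the battery capacity, current levels, and duty cycle are made-up values for illustration, not measurements from any device:

```python
# Toy run-time estimate for a two-level pulsed current drain.
# All numbers below are illustrative assumptions, not measured data.

def average_current(peak_a, sleep_a, duty_cycle):
    """Time-weighted average of a two-level pulsed drain."""
    return peak_a * duty_cycle + sleep_a * (1.0 - duty_cycle)

def runtime_hours(battery_mah, avg_a):
    """Ideal run-time, ignoring battery voltage sag and efficiency."""
    return (battery_mah / 1000.0) / avg_a

# Assume 0.8 A bursts at a 2% duty cycle over a 5 mA sleep floor
avg = average_current(peak_a=0.8, sleep_a=0.005, duty_cycle=0.02)
print(f"average drain: {avg * 1000:.1f} mA")           # 20.9 mA
print(f"run-time: {runtime_hours(1500, avg):.1f} h")   # 71.8 h
```

The point of the sketch: the short bursts dominate the average even at a small duty cycle, which is why accurately capturing both the peaks and the sleep floor matters so much.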

Going beyond evaluating and optimizing how efficiently the device uses its battery power, the second part, “The Battery, Its End Use, and Its Management,” brings out the importance of making certain you are getting the most out of the limited battery power available to you, and how to go about doing so. Some key points from this second part include:
  • Validating the battery’s stated capacity is a crucial first step, both for being certain you are getting what is expected from the battery and for establishing a starting reference point that you can correlate back to the manufacturer’s data.
  • Evaluating the battery under actual end-use conditions is important, as the dynamic loading a wireless device places on the battery often adversely affects the capacity obtained from it.
  • Charging, for rechargeable batteries, must be carefully performed under stated conditions in order to be certain of getting the correct amount of capacity back out of the battery. Even very small differences in charging conditions can lead to significant differences in the charge delivered during discharge.
  • The wireless device’s battery management system (or BMS) needs to be validated for proper charging of the battery as well as suitability for addressing the particular performance needs of the device.

In Figure 1, the actual charging regimen of a mobile phone battery being charged by its BMS was captured. There turned out to be a number of notable differences in comparison to charging the battery with a standard charging regimen.

Figure 1: Validating the BMS charge regimen on a GSM/GPRS mobile phone

If you are interested in learning more about optimizing wireless device battery run-time this two part seminar is now available on-demand at:

I think you will enjoy them!

Wednesday, September 5, 2012

Early Power Transistor Evolution, Part 2, Silicon

As discussed in Part 1 of this two-part posting on early power transistor evolution, by the early 1960s germanium power transistors were in widespread use in DC power supplies, audio amplifiers, and other relatively low-frequency power applications. Although fairly expensive at that time, manufacturers had established processes to produce them reliably in volume. To learn more about early germanium power transistors, click here to review Part 1.

As with most things, manufacturers continued to investigate ways of making things better, faster, and cheaper. Transistors were still relatively new and ripe for further innovation. Next to germanium, silicon was the other semiconductor in widespread use, and with new and different processes developed for transistor manufacturing, silicon quickly displaced germanium as the semiconductor of choice for power transistors. One real workhorse of a power transistor that has truly stood the “Test of Time” is the 2N3055, pictured in Figure 1. Also pictured is its smaller sibling, the 2N3054.

Figure 1: 2N3055 and 2N3054 power transistors

Following are some key maximum ratings of the 2N3055 power transistor:
  • VCEO = 60 V
  • VCBO = 100 V
  • VEBO = 7 V
  • IC = 15 A
  • PD = 115 W
  • hfe = 45 typical
  • fT = 1.5 MHz
  • Thermal resistance = 1.5 °C/W
  • TJ = 200 °C
  • Package: TO-3 (now TO-204AA)
  • Polarity: NPN
  • Material/process: Silicon diffused-junction, hometaxial-base structure

Diffused-junction silicon transistors made major inroads in the early 1960s, ultimately making germanium power transistors obsolete.  One huge improvement with silicon, especially for power transistors, is the junction temperature, which is generally rated at 200 °C.  This allowed operation at much higher ambient temperatures and at higher power levels compared to germanium.

While the alloy-junction process used for the early germanium transistors favored making PNP transistors, the diffused-junction process on silicon somewhat favored making NPN transistors. Silicon diffused-junction NPN transistors are much more prevalent than PNP devices, and the PNP complements to NPN devices, where available, are more costly.

The diffusion process made a giant leap in transistor mass production possible. Many transistors could now be made at once on a larger silicon wafer, greatly reducing cost. The more precise nature of the diffused junction over the alloy junction also improved performance. As one example, for the 2N3055 the transition frequency increased roughly another order of magnitude over the 2N1532 germanium alloy-junction transistor in Part 1, to 1.5 MHz.

The hometaxial-base structure is formed by a single simultaneous diffusion into both sides of a homogeneously doped base wafer, one side forming the collector and the other side the emitter. A pattern on the emitter side is etched away around the emitter, down to the P-type layer, to form the base. The emitter is left standing as a plateau, or “mesa,” above the base.

The 2N3054 was electrically identical to the 2N3055 except for its lower current and power capabilities. Its smaller TO-66 package, however, was never very popular and was quietly phased out in the early 1980s, sometimes along with some of the devices that were packaged in it!

Process improvements beyond the single-diffused hometaxial-base structure continued through the 1960s with silicon transistors, including double-diffused, and double- and triple-diffused planar and epitaxial structures. The epitaxial structure is a departure from the diffused structures in that features are grown onto the top of the base wafer. With greater control of doping levels and gradients, and more precise and complex geometries, the performance of silicon power transistors continued to improve in almost all aspects.

Plastic-packaged power transistors have for the most part come to displace hermetic metal packages like the TO-3 (TO-204AA), first because of the lower cost of the part itself, and second because simpler mounting reduces the cost and labor of the products they are incorporated into. One drawback of most plastic-packaged power devices is that the maximum temperature rating is typically reduced to 150 °C, taking back quite a bit of the temperature headroom provided by the same devices in hermetic metal packages. Sometimes there is a price to be paid for progress! Pictured in Figure 2 are two (of many) popular power device packages, the smaller TO-220AB and the larger TO-247.

Figure 2: TO-220AB and TO-247 power device plastic packages

It’s pretty fascinating to see how transistors and the various processes used to manufacture them evolved over time. In these two posts I’ve hardly scratched the surface of the world of power transistors and power devices. For one, there is a variety of other transistor types not touched upon, including power FETs, which have made major inroads in all kinds of power supply applications. Work also continues on providing higher-power devices in surface-mount packages. These are just a couple of numerous examples, possibly something to write about at a future date!

References: “RCA Transistor Thyristor & Diode Manual” Technical Series SC-14, RCA Electronic Components, Harrison, NJ 

Friday, August 31, 2012

To autoscale, or not to autoscale: that is the question!

While the primary focus of this blog is power, this post is about a topic that applies beyond just power: autoscale. I want our readers to comment on this topic:

Should a test equipment user use the autoscale button or not? If so, why? If not, why not?

How is autoscale related to power, you ask? One of our Agilent power products, the N6705B DC Power Analyzer, has a built-in scope-like function with an autoscale button. The built-in scope is useful for measuring things like the dynamic current flowing into a device, or for looking at the response to an arbitrary waveform that can also be generated with the same product. To autoscale in Scope View mode, you push the N6705B front panel Autoscale button to automatically scale the vertical and horizontal axes to show the waveforms on the scope screen.

While this may seem like a convenient feature, there are times when using autoscale on any instrument (like an oscilloscope) does not result in the display you want. And some signals cannot be captured with an autoscale feature at all: the signal must be repetitive and typically must meet certain minimum values of voltage, frequency, and duty cycle. But more importantly, using autoscale eliminates having to think about the signals you are trying to observe. While this may seem like an advantage, I think it makes us lazy and less likely to understand what we are doing.

These days, we have grown too accustomed to just pushing a button to accomplish a task. We push a button to heat our food using a microwave. We push a button to cool our homes using our central air conditioning thermostat. We push a button to turn on our computers, get food, candy, and drinks from vending machines, get cash from an ATM, and start our (modern) cars. We push buttons all day long! But when it comes to test and measurement equipment, we are trying to gain insight into the circuit or device we are analyzing. And I believe that insight starts with thinking about the waveforms we are trying to display: what the waveshape is supposed to look like, how to trigger on the signal, what the approximate maximum voltage (or current or power) is, and whether this is a repetitive waveform or a single event. Thinking about these things brings us closer to the insight we want to glean from the signals we examine. And ultimately, it is that insight that we seek. So just pushing a button to get a signal on a scope screen provides us with little insight; in fact, it could bias our thinking into mistakenly believing that what we are seeing is correct because we did not bother to think about what the waveforms are supposed to look like ahead of time!

So I say “no”, a test equipment user should not use the autoscale button for the reasons stated above. In fact, for years, I have trained new engineers and some of our sales people, and I have been known to say on more than one occasion, “No self-respecting engineer would ever hit the autoscale button!” Of course, I am only half-serious about this statement, but I think it supports my view that it is useful to think about what you expect on the scope before just viewing the waveforms. Of course, you should ALWAYS think about whether or not what you see on the display is expected and makes sense. After all, why else would you look at the signals?

What do YOU think???

Please comment below.

Power Supply Resolution versus Accuracy

One of the questions we have received quite a few times on the support team, and something that confused me when I started at Agilent, is the concept of our resolution supplemental characteristic versus our accuracy specification.  I sat down with my colleague Russell to put together a simple explanation of the differences.

If you look at our power supply offering, there is always an accuracy specification and a resolution supplemental characteristic for both programming and measurement.  For the purposes of this blog post, we are going to look at the programming accuracy (0.06% + 19 mV) and programming resolution (3.5 mV) of the N6752A High Performance DC Power Module.  Please note that the same explanations apply to the measurement side as well, but for the sake of brevity we will stick to programming in our example.

Let’s start by talking about resolution.  Our power supplies use Digital-to-Analog Converters (DACs) to take the user-entered settings and convert them to analog signals that set a programming voltage, which interacts with the control loop of the power supply to set the output.  The resolution supplemental characteristic represents one single count of the DAC, also known as the Least Significant Bit (LSB).  What this means for our end user is that the smallest step they can make between two settings on the unit is the programming resolution.  In our example, the N6752A can be set to 0.9975 V, 1.001 V, 1.0045 V, and so on.  These are all multiples of 3.5 mV, and any setting that falls between two DAC counts will be rounded to the nearest count.  If the user tries to set the N6752A to 1 V, the power supply will actually be set to 1.001 V, since that is the nearest count.  This is known as quantization error.
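
As a small sketch of the quantization just described, using the N6752A’s 3.5 mV programming resolution from the example (the helper function is illustrative, not instrument code):

```python
# Snap a requested setting to the nearest DAC count, using the
# N6752A's 3.5 mV programming resolution from the text.

RESOLUTION = 0.0035  # volts per DAC count (one LSB)

def quantize(setting, resolution=RESOLUTION):
    """Round a requested value to the nearest multiple of one LSB."""
    counts = round(setting / resolution)
    return counts * resolution

# Requesting 1 V lands on the nearest count, 286 x 3.5 mV = 1.001 V
print(f"{quantize(1.0):.4f} V")   # 1.0010 V
print(f"{quantize(0.9975):.4f} V")  # 0.9975 V (exactly on a count)
```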

The accuracy specification always includes an error term for the quantization error, typically half of the resolution supplemental characteristic.  It also includes many other factors, such as DAC accuracy, DAC linearity, offset error of operational amplifiers, gain errors of the feedback loops, and temperature drift of components.  The accuracy will therefore always be worse than the resolution, since it includes all of the factors listed above as well as the quantization error term.  You can definitely see this in the N6752A, where the resolution is 3.5 mV while just the offset term of the accuracy specification, not including the gain term, is 19 mV, more than five times the resolution.
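
To make the comparison concrete, here is a small sketch that evaluates the (0.06% + 19 mV) programming accuracy specification at a given setting; the function name is illustrative:

```python
# Worst-case programming error band for the N6752A, using the
# (0.06% of setting + 19 mV) accuracy specification from the text.

def worst_case_error(setting_v, gain_pct=0.06, offset_v=0.019):
    """Return the worst-case programming error in volts."""
    return setting_v * gain_pct / 100.0 + offset_v

err = worst_case_error(1.0)
print(f"1 V setting: +/- {err * 1000:.1f} mV")  # +/- 19.6 mV
```

Even at a 1 V setting, the error band is 19.6 mV against a 3.5 mV resolution, which is why the resolution figure should never be read as an accuracy claim.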

I hope that this was helpful.  If you have any questions, please leave comments here or on our forum at Agilent Discussion Forums

Thursday, August 23, 2012

Early Power Transistor Evolution, Part 1, Germanium

We recently completed our “Test of Time” power supply contest. Contestants told us how they were using their Harrison Labs/HP/Agilent DC power supplies, and the older the power supply, the better. It was pretty fascinating to see the many innovative ways these power supplies were being used. It was also fascinating to see so many “vintage” power supplies still functional and in regular use after many decades, several of them even vacuum tube based!

One key component found in almost all power supplies from the mid-1950s on is, no surprise, power transistors. Shortly after manufacturers were able to make reliable and reasonably rugged transistors in the mid-1950s, they also developed transistors that could handle higher currents and power. Along with higher power came the need to dissipate that power, which led to some interesting packaging, some familiar and some not. Hunting through my “archives,” I managed to locate some early power transistors. Reviewing their characteristics, it was quite enlightening to see how they evolved to become better, faster, and cheaper! I also found it quite challenging to find good, detailed, and most especially, non-conflicting information on these early devices.

Germanium was the first semiconducting material widely adopted for transistors, power and otherwise. One early power transistor I came across was the 2N174, shown in Figure 1.

Figure 1: 2N174 Power Transistor

Following are some key maximum ratings of the 2N174 power transistor:

  • VCEO = -55 V
  • VCBO = -80 V
  • VEBO = -60 V
  • IC = 15 A
  • PD = 150 W
  • hfe = 25
  • fT = 10 kHz
  • Thermal resistance = 0.35 °C/W
  • TJ = 100 °C
  • Package: TO-36
  • Polarity: PNP
  • Material/process: Germanium alloy junction

The alloy-junction process provided a reliable means to mass-produce transistors. Most of these earlier transistors are PNP, with “pellets” or “dots” of indium alloyed into an N-type germanium wafer to form the P-type emitter and collector regions. The process favored PNP production, as indium has a lower melting point than the N-type germanium base. Still, this was a relatively slow and expensive process, as the transistors were basically manufactured one at a time. These early alloy-junction transistors were not passivated and therefore needed to be hermetically packaged to prevent contamination and degradation. Often referred to as a “door knob” package, the TO-36 stud-mount package was quite a piece of work and was no doubt expensive as a result. It had a pretty impressive junction-to-case thermal resistance, but given the maximum junction temperature of just 100 °C, low thermal resistance was necessary in order to operate the transistor at a reasonable power level. The low maximum operating temperature of germanium was one of its most limiting attributes, especially for power applications. The transition frequency, fT, of just 10 kHz was also extremely low; this is the frequency at which the current gain, hfe, drops to 1 and the device ceases to be an effective amplifier. The 2N174 appears to have originated in the late 1950s.

Another early power transistor, one we used in our HP 855B bench power supplies, is the 2N1532, shown in Figure 2.

Figure 2: 2N1532 power transistors used in a Harrison Labs Model 855B power supply.

Following are some key maximum ratings of the 2N1532 power transistor:

  • VCEO = -50 V
  • VCBO = -100 V
  • VEBO = -50 V
  • IC = 5 A
  • PD = 94 W
  • hfe = 20 to 40
  • fT = 200 kHz
  • Thermal resistance = 0.8 °C/W
  • TJ = 100 °C
  • Package: TO-3
  • Polarity: PNP
  • Material/process: Germanium alloy junction

The 2N1532 is also a germanium PNP power transistor, similar to a number of other power transistors of the time. It is packaged in the widely recognizable TO-3 diamond-shaped hermetic package.  Being a much less complex case design, it must have been considerably less costly than the TO-36 package in Figure 1, and it has become one of the most ubiquitous hermetic power semiconductor packages of all time. To keep the junction temperature rise down, the Harrison Labs Model 855B power supply used three 2N1532 transistors in its series regulator to deliver just 18 volts and 1.5 amps of output. It’s no wonder these power supplies have stood the “Test of Time,” as the transistors are running significantly de-rated, at just a fraction of their maximum power.  It is also noteworthy that the transition frequency of 200 kHz is 20 times that of the 2N174. This is one of the more questionable data points I found, but if it is accurate, then clearly design and process improvements contributed to this performance improvement.  While date codes on some of the capacitors in this Model 855B power supply place its manufacture in 1962, early germanium PNP power transistors in TO-3 packages like these also typically date back to the late 1950s.
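
As a rough sketch of why this de-rating keeps the germanium junctions safe, here is a junction-temperature estimate for one of the three 2N1532s. The 0.8 °C/W thermal resistance and 100 °C limit come from the ratings above, while the unregulated rail voltage and case temperature are assumed values for illustration only:

```python
# Rough junction-temperature check for one 2N1532 in the 855B's
# series regulator. THETA_JC and TJ_MAX come from the ratings
# above; the raw rail voltage and case temperature are ASSUMED.

THETA_JC = 0.8   # junction-to-case thermal resistance, degC/W
TJ_MAX = 100.0   # maximum germanium junction temperature, degC

raw_supply_v = 25.0   # assumed unregulated rail (not from the text)
output_v = 18.0
output_a = 1.5
n_transistors = 3

# Power dropped across the series pass stage, shared three ways
pd_total = (raw_supply_v - output_v) * output_a
pd_each = pd_total / n_transistors

case_temp = 55.0  # assumed case temperature with heat sinking, degC
tj = case_temp + pd_each * THETA_JC

print(f"{pd_each:.1f} W per transistor, Tj ~ {tj:.1f} C "
      f"(limit {TJ_MAX:.0f} C)")
```

With each transistor dissipating only a few watts against a 94 W rating, the junctions sit far below the 100 °C germanium limit, consistent with the longevity these supplies have shown.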

While germanium transistors have much greater conductivity and lower forward and saturation voltage drops than silicon transistors, silicon ultimately won out in the end, especially for power transistor applications. Stay tuned for my second part in an upcoming posting, and discover how silicon evolved to rule the day for power transistors!