
Tuesday, May 8, 2012

Establishing Measurement Integration Time for Leakage Currents

The proliferation of mobile wireless devices drives a corresponding demand for the components going into these devices. A key attribute of these components is the need for low levels of leakage current during off and standby mode operation, to extend the battery run-time of the host device. I brought up the importance of making accurate leakage current measurements quickly in an earlier posting, “Pay Attention to the Impact of the Bypass Capacitor on Leakage Current Value and Test Time” (click here to review). Another key aspect of making accurate leakage current measurements quickly is establishing the proper minimum required measurement integration time. Here I will go into the factors that govern establishing this time.

Assuming the leakage current being drawn by the DUT, as well as by any bypass capacitors on the fixture, has fully stabilized, the key consideration in selecting the correct measurement integration time is achieving an acceptable level of measurement repeatability. Some experimentation is useful in determining the minimum required amount of time. The primary problem with leakage current measurement is the AC noise sources present in the test setup. With the DC leakage current being just a few microamps or less, these noises are significant. Higher-level currents can usually be measured much more quickly, as the AC noises are relatively negligible in comparison. There are a variety of potential noise sources, including noise radiated and conducted from external sources, such as the AC line, and internal noise sources, such as the AC ripple voltage on the DC source’s output. This is illustrated in Figure 1 below. Noise currents add directly to the DC leakage current, while noise voltages become corresponding noise currents related by the DUT and test fixture load impedance.


Figure 1: Some noise sources affecting DUT current measurement time
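To put rough numbers on how a noise voltage becomes a noise current, here is a back-of-the-envelope Python sketch. All of the values (ripple level, dominant ripple frequency, DUT resistance, bypass capacitance) are assumed purely for illustration, not taken from any particular setup; the takeaway is simply that even sub-millivolt ripple can produce a noise current comparable to the microamp-level DC leakage being measured.

```python
# Rough, illustrative calculation of how a noise voltage becomes a noise
# current through the load impedance. All values here are assumptions.
import math

v_ripple_rms = 500e-6   # assumed rms ripple voltage from the DC source
f_ripple     = 20e3     # assumed dominant ripple frequency, Hz
r_dut        = 2.5e6    # DUT drawing ~2 uA of leakage at 5 V
c_bypass     = 0.1e-6   # assumed fixture bypass capacitor

# Impedance the ripple sees: DUT resistance in parallel with the bypass
# capacitor's reactance at the ripple frequency (the capacitor dominates).
x_c    = 1.0 / (2 * math.pi * f_ripple * c_bypass)
z_load = 1.0 / math.hypot(1.0 / r_dut, 1.0 / x_c)

i_noise_rms = v_ripple_rms / z_load
print(f"load impedance at {f_ripple / 1e3:.0f} kHz : {z_load:.1f} ohm")
print(f"resulting noise current    : {i_noise_rms * 1e6:.1f} uA rms (vs. ~2 uA of DC leakage)")
```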

Using a longer measurement time integrates out the peak-to-peak random deviations in the DC leakage current to provide a consistently more repeatable DC measurement result, but at the expense of increasing overall device test time. Measurement repeatability should be based on a statistical confidence level, which I will go into in more detail further on. Using a measurement integration time of exactly one power line cycle (1 PLC) of 20 milliseconds (for 50 Hz) or 16.7 milliseconds (for 60 Hz) cancels out AC line frequency noises. Many times a default time of 100 milliseconds is used, as it is an integer multiple of both 20 and 16.7 milliseconds. This is fine if the overall DUT test time is relatively long, but it is generally not acceptable when the total test time is just a couple of seconds, as is the case with most components. At a minimum, setting the measurement integration time to 1 PLC is usually the prudent thing to do when a short overall DUT test time is paramount.
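As a quick illustration, here is a minimal Python sketch of picking the aperture as a whole number of power line cycles; it also shows why the 100 millisecond default rejects both line frequencies (5 cycles at 50 Hz, 6 cycles at 60 Hz).

```python
# Minimal sketch: choosing the measurement aperture as an integer number of
# power line cycles (PLCs) so that AC line frequency pickup integrates out.
def aperture_seconds(nplc: int, line_freq_hz: float) -> float:
    """Integration time corresponding to a given number of power line cycles."""
    return nplc / line_freq_hz

for line_freq in (50.0, 60.0):
    print(f"1 PLC at {line_freq:.0f} Hz = {aperture_seconds(1, line_freq) * 1e3:.1f} ms")

# The common 100 ms default is a whole number of cycles at either line
# frequency, so it cancels line-related pickup in both cases.
for line_freq in (50.0, 60.0):
    print(f"100 ms spans {0.1 * line_freq:.0f} cycles at {line_freq:.0f} Hz")
```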

Reducing leakage current test time below 1 PLC means reducing any AC line frequency noises to a sufficiently low level that they are relatively negligible compared with higher-frequency noises, such as the DC source’s wideband output ripple noise voltage and current. Proper grounding, shielding, and cancellation techniques can greatly reduce noise pickup. Paying attention to the choice and size of bypass capacitors used on the test fixture is also important. A larger-than-necessary bypass capacitor can increase the measured noise current when the measurement is taken upstream of the capacitor, which is often the case. Establishing the required minimum integration time is done by setting an acceptable statistical confidence level and then running a trial with a large number of measurements plotted in a histogram to verify that they fall within this confidence level for a given measurement integration time. If they do not, the measurement integration time needs to be increased. As an example, I ran a series of trials to determine the minimum integration time required to achieve 10% repeatability with 95% confidence for a 2 microamp leakage current. AC line noises were relatively negligible. As shown in Figure 2, when a large series of measurements was taken and plotted in a histogram, 95% of the values fell within +/- 9.5% of the mean for a measurement integration time of 1.06 milliseconds.


Figure 2: Leakage current measurement repeatability histogram example
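Here is a Python sketch of the kind of trial behind Figure 2. The readings are simulated rather than coming from an instrument, and the noise level and its scaling with aperture are assumptions chosen only to illustrate the procedure of checking a 10% / 95% repeatability spec against candidate integration times.

```python
# Simulated repeatability trial: take many readings at a candidate aperture,
# then find the half-width (as a fraction of the mean) that contains 95% of
# them and compare it against the 10% repeatability target.
import numpy as np

rng = np.random.default_rng(0)

I_LEAK = 2e-6             # nominal DC leakage, 2 uA
NOISE_RMS_1MS = 0.095e-6  # assumed rms noise for a 1 ms aperture (illustrative)

def take_readings(t_aperture_s: float, n: int = 2000) -> np.ndarray:
    # Wideband noise averages down roughly as 1/sqrt(integration time).
    sigma = NOISE_RMS_1MS * np.sqrt(1e-3 / t_aperture_s)
    return I_LEAK + rng.normal(0.0, sigma, size=n)

def repeatability(readings: np.ndarray, conf: float = 0.95) -> float:
    """Half-width, as a fraction of the mean, containing `conf` of the readings."""
    mean = readings.mean()
    return np.percentile(np.abs(readings - mean), conf * 100) / abs(mean)

for t in (0.25e-3, 0.5e-3, 1.06e-3, 2e-3):
    spread = repeatability(take_readings(t))
    verdict = "meets" if spread <= 0.10 else "fails"
    print(f"aperture {t * 1e3:5.2f} ms -> 95% within +/-{spread * 100:4.1f}% ({verdict} 10% spec)")
```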

Leakage currents by nature take longer to measure due to their extremely low levels. Careful attention to minimizing noise and to establishing the minimum required measurement integration time goes a long way toward improving the test throughput of components that take just seconds to test.

Thursday, April 12, 2012

Pay Attention to the Impact of the Bypass Capacitor on Leakage Current Value and Test Time

It is no secret that there is big demand for all kinds of wireless battery-powered devices and, likewise, for the components that go into these devices. These devices and their components need to be very efficient in order to get the most operating and standby time out of the limited amount of power available from the battery. Keeping the off-mode and leakage currents of these devices and components to a minimum is an important part of maximizing battery run and standby time. Levels are typically in the range of tens of microamps for devices and just a microamp or less for a component. Off-mode and leakage currents are routinely tested in production to assure they meet specified requirements. The markets for wireless battery-powered devices and their components are intensely competitive, so test times need to be kept to a minimum, especially for the components. It turns out the choice of input power bypass capacitor, either within the DUT or on the DUT’s test fixture, can have a large impact on the leakage current value and especially on the test time for making an accurate leakage current measurement.

Good things come in small packages?
A lot has been done to provide greater capacitance in smaller packages for ceramic and electrolytic capacitors used in bypass applications. It is worth noting that electrolytic and ceramic capacitors exhibit appreciable dielectric absorption, or DA. This is a non-linear behavior that causes the capacitor to draw a large, time-dependent charging or discharging current when a voltage or a short is applied. It is usually modeled as a number of series R-C pairs of differing values connected in parallel with the main capacitor. This causes the capacitor to take considerable time to reach its final, near-zero steady-state current when a voltage is applied or changed. When trying to test the true leakage current of a DUT it may be necessary to wait until the current in any bypass capacitors has reached steady state before a current measurement is taken. Depending on the test time and the capacitor being used, this could result in an unacceptably long wait.
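To make that equivalent-circuit picture concrete, here is a small Python sketch of it: a fixed leakage term in parallel with several series R-C branches, each relaxing with its own time constant after a voltage step. The branch and leakage values are illustrative guesses, not parameters of any particular capacitor.

```python
# Dielectric absorption model: main capacitance plus several series R-C
# branches in parallel, each charging with its own time constant.
import numpy as np

V_STEP = 5.0          # applied step voltage
C_MAIN = 100e-6       # main capacitance, 100 uF
R_LEAK = 25e6         # capacitor's own leakage resistance (assumed)

# (series resistance, capacitance) pairs giving time constants from
# roughly 0.1 s out to ~30 s (assumed values).
DA_BRANCHES = [(100e3, 1e-6), (1e6, 2e-6), (5e6, 4e-6), (10e6, 3e-6)]

def capacitor_current(t: np.ndarray) -> np.ndarray:
    """Extra current drawn by the capacitor model after a voltage step at t = 0,
    once the main capacitance itself has charged."""
    i = np.full_like(t, V_STEP / R_LEAK)        # steady-state leakage term
    for r, c in DA_BRANCHES:
        tau = r * c
        i += (V_STEP / r) * np.exp(-t / tau)    # each branch relaxes with its own tau
    return i

t = np.array([0.5, 1.0, 3.0, 10.0, 40.0])
for ti, ii in zip(t, capacitor_current(t)):
    print(f"t = {ti:5.1f} s  extra current = {ii * 1e9:8.1f} nA")
```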

So how do they compare?
In Figure 1 I captured the time-dependent current response waveforms for a 5.1 megohm resistor, a 5.1 megohm resistor in parallel with a 100 microfarad electrolytic capacitor, and finally a 5.1 megohm resistor in parallel with a 100 microfarad film capacitor, when a 5 volt step stimulus was applied.

Figure 1: Current response of different R-C loads to 5 volt step

The 5.1 megohm resistor (i.e. “no capacitor”) serves as a baseline for comparing the effect the two different bypass capacitors have on the leakage current measurement. The film capacitor has relatively ideal electrical characteristics in comparison with an equivalent electrolytic or ceramic capacitor. It settles down to near steady-state conditions within 0.5 to 1 second. At 3 to 3.5 seconds out (marker placement in Figure 1) the film capacitor is contributing a fairly negligible 42 nanoamps of additional leakage. In comparison, the electrolytic capacitor current is still four times the resistor current and nowhere near settled out. If you ever wondered why audio equipment producers insist on high-performance film capacitors in critical applications, DA is one of the reasons!
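For reference, here is the quick arithmetic behind that comparison, using the figures quoted above (the factor of four for the electrolytic is approximate).

```python
# Quick arithmetic for the Figure 1 comparison at the 3 to 3.5 second marker.
I_RESISTOR = 5.0 / 5.1e6          # baseline: 5 V across 5.1 Mohm ~= 0.98 uA
I_FILM_EXTRA = 42e-9              # film capacitor's extra current at the marker

print(f"baseline resistor current : {I_RESISTOR * 1e6:.2f} uA")
print(f"film cap extra leakage    : {I_FILM_EXTRA / I_RESISTOR * 100:.1f}% of baseline")

# Electrolytic at the same marker: still about 4x the resistor current,
# i.e. roughly 3 uA of extra, still-settling current on top of the baseline.
i_electrolytic = 4 * I_RESISTOR
print(f"electrolytic extra current: {(i_electrolytic - I_RESISTOR) * 1e6:.1f} uA (approx.)")
```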

So how long did it take for the electrolytic capacitor to reach steady state? I set up a longer-term capture for the electrolytic capacitor, shown in Figure 2. After a whopping 40 seconds or so it finally seemed to be fully settled out, but it was still contributing a substantial 893 nanoamps of additional steady-state leakage current.

Figure 2: 100 microfarad electrolytic capacitor settling time

Where do I go from here?
So what should one do when needing to test leakage current? When testing a wireless device, be aware of what kind and value of bypass capacitor has been incorporated into it. Nowadays it is most likely a ceramic capacitor; film capacitors are too large and too cost-prohibitive here. Find out how long it takes to settle to its steady-state value. Also, off-state current measurements are generally left until the end of testing so that no time is wasted waiting for the capacitor to reach steady state. If testing a component and a bypass capacitor is being used on the test fixture, consider using a film capacitor. With test times of just seconds and microamp-level leakage currents, the wrong bypass capacitor can be a huge problem!

Wednesday, March 21, 2012

Using Current Drain Measurements to Optimize Battery Run-time of Mobile Devices

One power-related application area I do a great deal of work on is current drain measurement and analysis for optimizing the battery run-time of mobile devices. In the past most of the focus was on mobile phones. Currently 3G, 4G and many other wireless technologies like ZigBee continue to make major inroads, spurring a plethora of new smart phones, wireless appliances, and all kinds of ubiquitous wireless sensors and devices. Regardless of whether the device is power-hungry due to running data-intensive applications or power-constrained due to its ubiquitous nature, there is a need to optimize its thirst for power in order to get the most run-time from its battery. The right kind of measurements and analysis of the device’s current drain can yield a lot of insight into the device’s operation and the efficiency of its activities, which is useful to the designer in optimizing its battery run-time. I recently completed an article that appeared in Test & Measurement World, online back in November and then in print in their Dec 2011 - Jan 2012 issue. Here is a link to the article:
http://www.tmworld.com/article/520045-Measurements_optimize_battery_run_time.php

A key factor in getting current drain measurements to yield the deeper insights that really help optimize battery run-time is the dynamic range of the measurement, both in amplitude and in time, along with the ability to analyze the details of these measurements. The need for a wide dynamic range of measurement stems from the power-saving nature of today’s wireless battery-powered devices. To save power, it is much more efficient for the device to operate in short bursts of activity, getting as much done as possible in the shortest period of time, and then go into a low-power idle or sleep state for an extended period between these bursts. Of course, the challenge for the designer of getting the device to quickly wake up, stabilize, do its thing, and then just as quickly go back to sleep again is no small feat! As one example, the current drain of a wireless temperature transmitter operating in this power-saving fashion is shown in Figure 1.


Figure 1: Wireless temperature transmitter power-savings current drain

The resulting current drain is pulsed. The amplitude scale has been increased to 20 µA/div to show details of the signal’s base. This particular device’s current drain has the following characteristics:
• Period of ~4 seconds
• Duty cycle of 0.17%
• Currents of 21.8 mA peak and 53.7 µA average for a crest factor of ~400
• Sleep current of 7 µA
This extremely wide dynamic range of amplitude is challenging to measure, as it spans about 3 ½ decades. Both the DC offset error and the noise floor of the measurement equipment must be extremely low so as not to limit the needed accuracy or obscure details.
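Working through the numbers above in a short Python sketch shows where the ~400 crest factor and the roughly 3 ½ decades come from. The battery capacity used for the run-time estimate at the end is an assumed figure, not something taken from the measurement.

```python
# Arithmetic on the wireless temperature transmitter's current drain profile.
import math

PERIOD_S   = 4.0        # burst repetition period
DUTY_CYCLE = 0.0017     # 0.17 %
I_PEAK_A   = 21.8e-3
I_AVG_A    = 53.7e-6
I_SLEEP_A  = 7e-6

burst_on_time_s = DUTY_CYCLE * PERIOD_S
crest_factor = I_PEAK_A / I_AVG_A
decades = math.log10(I_PEAK_A / I_SLEEP_A)   # sleep floor up to peak

print(f"burst on-time per period : {burst_on_time_s * 1e3:.1f} ms")
print(f"crest factor             : {crest_factor:.0f}")
print(f"amplitude dynamic range  : {decades:.1f} decades")

# Hedged run-time estimate: assumes a 1000 mAh battery and ignores
# self-discharge and capacity derating.
BATTERY_MAH = 1000.0
run_time_h = BATTERY_MAH / (I_AVG_A * 1e3)   # mAh / mA -> hours
print(f"ideal run-time at 53.7 uA average: {run_time_h:.0f} h (~{run_time_h / (24 * 30):.0f} months)")
```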

Likewise, being able to examine details of the current drain during the bursts of activity provides insight into the duration and current drain level of specific operations within the burst. From this you can make determinations about the efficiency of the operations and whether there is opportunity to further optimize them. As an example, in standby operation a mobile phone receives in short bursts about every 0.25 to 1 second to check for incoming pages and drops back into a sleep state in between the receive (RX) bursts. An expanded view of one of the RX current drain bursts is shown in Figure 2.


Figure 2: GPRS mobile phone RX burst details

There are a number of activities taking place during the RX burst. Having sufficient measurement bandwidth and sampling time resolution down to tens of microseconds provides the deeper insight needed for optimizing these activities. The basic time period for the mobile phone’s standby operation is on the order of a second, but it is usually important to look at the current drain signal over an extended period of time due to the variation in activities that can occur during each of the RX bursts. Having either very deep memory or, even better, high-speed data logging provides the dynamic range in time to get tens of microseconds of resolution over an extended period, so that you can determine the overall average current drain while also being able to “count the coulombs” it takes for individual, minute operations and optimize their efficiency.
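Here is a rough Python sketch of that kind of analysis on a logged current waveform: find the bursts with a simple threshold, integrate the charge in each one, and compute the overall average. The sample rate, threshold, and the synthetic log used to exercise it are all assumptions for illustration; with real hardware the samples would come from the data logger.

```python
# "Count the coulombs": burst detection and per-burst charge from a current log.
import numpy as np

def analyze_log(samples: np.ndarray, sample_period_s: float, burst_threshold_a: float):
    avg_current = samples.mean()

    # Locate contiguous runs of samples above the threshold (the bursts).
    active = samples > burst_threshold_a
    edges = np.flatnonzero(np.diff(active.astype(int)))
    starts, ends = edges[::2] + 1, edges[1::2] + 1

    burst_charges = [samples[s:e].sum() * sample_period_s for s, e in zip(starts, ends)]
    return avg_current, burst_charges

# Example with a synthetic 10 us-resolution log: a 7 uA sleep floor with a few
# 5 ms, 20 mA bursts dropped in (purely illustrative numbers).
dt = 10e-6
log = np.full(1_000_000, 7e-6)               # 10 s of sleep current
for start in (100_000, 500_000, 900_000):
    log[start:start + 500] = 20e-3           # 5 ms burst at 20 mA

avg, charges = analyze_log(log, dt, burst_threshold_a=1e-3)
print(f"average current: {avg * 1e6:.1f} uA")
for i, q in enumerate(charges):
    print(f"burst {i}: {q * 1e6:.1f} uC")
```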

Anticipate seeing more here in future posts about mobile wireless battery-powered devices as it relates to the “DC” end of the spectrum. In the meantime, while you are using your smart phone or tablet and battery life isn’t quite meeting your expectations (or maybe it is!), you should also marvel at how capable and compact your device is and how far it has come compared with what was state-of-the-art 5 and 10 years ago!