Wednesday, July 15, 2015

Optimizing the performance of the zero-burden battery run-down test setup

Two years ago I added a post here to “Watt’s Up?” titled “Zero-burden ammeter improves battery run-down and charge management testing of battery-powered devices” (click here to review). In that post I talk about how our N6781A 20 V, 3 A, 20 W SMU (and now our N6785A 20 V, 8 A, 80 W as well) can be used in a zero-burden ammeter mode to provide accurate current measurement without introducing any voltage drop. Together with the independent DVM voltage measurement input, it can simultaneously log the voltage and current when performing a battery run-down test on a battery-powered device. This is a very useful test for gaining valuable insights when evaluating and optimizing battery life, and it can also be used to evaluate the charging process when using rechargeable batteries. The key point is that zero-burden current measurement is critical for obtaining accurate results, because the impedance and corresponding voltage drop of a current shunt influence the test results. For reference, the N678xA SMUs are used in either the N6705B DC Power Analyzer mainframe or the N6700 series Modular Power System mainframe.
There are a few considerations for getting optimum performance when using the N678xA SMUs in zero-burden current measurement mode. The primary one is the way the wiring is set up between the DUT, its battery, and the N678xA SMU. In Figure 1 below I have rearranged the diagram from my original blog posting to better illustrate the actual physical setup for optimum performance.

Figure 1: Battery run-down setup for optimum performance
Note that this setup is practical in that the DUT and its battery do not have to be located right at the N678xA SMU. However, the DUT and battery do need to be kept close together in order to minimize the wiring length and associated impedance between them. Not only does the wiring contribute resistance, but its inductance can prevent operating the N678xA at a higher bandwidth setting for improved transient voltage response. The reason for this is illustrated in Figure 2.


Figure 2: Load impedance seen across N678xA SMU output for battery run-down setup
The load impedance the N678xA SMU sees across its output is the series combination of the DUT’s battery input port (primarily capacitive), the battery (series resistance and capacitance), and the jumper wire between the DUT and battery (inductive). The N678xA SMUs have multiple bandwidth compensation modes. They can be operated in their default low bandwidth mode, which provides stable operation for almost any load impedance. However, to get the best voltage transient response it is better to operate the N678xA SMUs in one of their higher bandwidth settings. To do so, the SMU needs to see primarily capacitive loading across its remote sense point for fast and stable operation. This means the jumper wire between the DUT and battery must be kept short to minimize its inductance. Often this is all that is needed. If it is not, adding a small capacitor of around 10 microfarads across the remote sense point will provide sufficient capacitive loading for fast and stable operation. Additional things that should be done include:
  • Place remote sense connections as close to the DUT and battery as practical
  • Use twisted-pair wiring, one pair for the force leads and a second pair for the remote sense leads, for the connections from the N678xA SMU to the DUT and its battery


By following these best practices you will get the optimum performance from your battery run-down test setup!
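
To give an idea of what the measurement side of this looks like in software, here is a minimal logging sketch using Python and PyVISA. It assumes an N6781A installed in channel 1 of an N6700/N6705B mainframe at a hypothetical VISA address, and the SCPI commands shown (voltage-priority mode programmed to 0 V so the SMU behaves as a zero-burden ammeter, followed by periodic voltage and current readback) are assumptions that should be checked against the N678xA programming guide:

# Minimal sketch: zero-burden current logging with an N6781A SMU (see assumptions above)
import time
import pyvisa

rm = pyvisa.ResourceManager()
smu = rm.open_resource("USB0::0x2A8D::0x0102::MY00000000::INSTR")  # hypothetical VISA address
CH = "(@1)"  # N6781A installed in channel 1

smu.write("*RST")
smu.write(f"FUNC VOLT,{CH}")    # voltage priority mode (command name assumed; see programming guide)
smu.write(f"VOLT 0,{CH}")       # program 0 V so the SMU acts as a zero-burden ammeter in series with the battery
smu.write(f"CURR:LIM 3,{CH}")   # allow the full rated current to flow
# smu.write(f"VOLT:BWID HIGH1,{CH}")  # optional higher-bandwidth compensation setting; name assumed, check the manual
smu.write(f"OUTP ON,{CH}")

# Log current (and the SMU's own near-zero terminal voltage) once per second; the battery
# terminal voltage itself would come from the independent DVM input (query name is in the manual).
with open("rundown_log.csv", "w") as log:
    log.write("time_s,volts,amps\n")
    t0 = time.time()
    for _ in range(600):  # ten minutes of 1 s samples
        v = float(smu.query(f"MEAS:VOLT? {CH}"))
        i = float(smu.query(f"MEAS:CURR? {CH}"))
        log.write(f"{time.time() - t0:.1f},{v:.6f},{i:.6f}\n")
        time.sleep(1)

smu.write(f"OUTP OFF,{CH}")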

Tuesday, June 30, 2015

Using User Defined Statuses on the APS

Hi Everyone,

I wanted to talk about a feature in our Advanced Power Supply family (APS from here on out) that not too many people know about.  The APS features two user-defined statuses in the Operation Status group.  Here is a rundown of all the entries in the group:


You can see that bits 7 and 8 are User1 and User2.

Using the advanced triggering system for the APS, you can define what conditions will trigger a change in these two statuses.  The N7906A Power Assistant Software (download link) has a very handy graphical way to set up the trigger.  As an example, let's say that I want to change the user-defined status when the voltage exceeds 1 V and the unit goes into positive current limit.  Using the Power Assistant Software I would whip up the following:


After I draw out my trigger expression, I can either download it to my APS or I can click the "SCPI to Clipboard" button on the top of the page.  If I hit that button now and then hit paste here, I get:

:SENSe:THReshold1:FUNCtion VOLTage
:SENSe:THReshold1:VOLTage 1
:SENSe:THReshold1:OPERation GT
:SYSTem:SIGNal:DEFine EXPRession1,"Thr1 AND CL+"
:STATus:OPERation:USER1:SOURce EXPRession1

I can just copy this code into my program.  It's pretty convenient.

I think the big question is: what can you do with this?  The answer is: whatever you want.  It's user defined, so you can use it in whatever way you see fit.  For example, if you want to check whether the current exceeds a certain threshold but don't want to run a bunch of measurement commands in a loop, you can define that as your trigger and then just check the Operation Status group (using the STAT:OPER? or STAT:OPER:COND? queries).
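
As a quick illustration, here is a small Python/PyVISA sketch (with a hypothetical VISA address) that polls the Operation Status condition register and tests the User1 bit. Since User1 is bit 7, its weight in the returned value is 128; User2 at bit 8 would be 256:

# Poll the APS Operation Status condition register and test the User1 bit (bit 7, weight 128)
import time
import pyvisa

USER1 = 1 << 7  # bit 7 = User1
USER2 = 1 << 8  # bit 8 = User2

rm = pyvisa.ResourceManager()
aps = rm.open_resource("TCPIP0::192.168.1.100::inst0::INSTR")  # hypothetical address

while True:
    cond = int(aps.query("STAT:OPER:COND?"))
    if cond & USER1:
        print("User1 condition is true (e.g. V > 1 V AND positive current limit)")
        break
    time.sleep(0.1)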

I think that the most powerful thing you can do with this is set up an SRQ handler to act when the user statuses change.  This is actually a project I am working on presently, so I have not implemented it just yet (but I will in the near future).  When I do, I will definitely write a blog post about it.  I wanted to get the word out about this feature because even I did not immediately think of it when faced with an issue that just screamed for it.

Thanks for reading and stay tuned for a future installment on this topic! 

  




Friday, June 19, 2015

How does your product react to a power line disturbance?

Power line disturbances can occur anywhere at any time. Your product can be exposed to disturbances such as voltage surges, sags, brownouts, cycle dropouts, or transients. If you are involved in the design, manufacture, or analysis of a power conversion product or circuit, you care about how your product reacts to these disturbances, because that reaction has a direct impact on how satisfied your customers are with its performance. It is therefore critical to know how your product will react, and this knowledge comes only from direct measurement of the power line disturbance and the resulting behavior of your product.
Keysight’s IntegraVision power analyzer model PA2201A lets you gain quick insight into your product’s power consumption and dynamic behavior when it is exposed to power disturbances.
Next week, on Thursday, June 25, 2015, at 1:00 pm EDT, I will be presenting a live webinar on the topic “Successfully Make Power and AC Line Disturbance Measurements”. To get more information and to register to attend, please click this link: http://electronicdesign.com/webinar/successfully-make-power-and-ac-line-disturbance-measurements

If you are reading this BEFORE the webinar date, I hope you will attend the live presentation next week. If you are reading this AFTER the webinar date, the above link should bring you to a recording of the webinar.

Enjoy!

Tuesday, June 16, 2015

When is it best to use a battery or a power supply for testing my battery powered device?

As I do quite a bit of work with mobile battery-powered devices, I regularly post articles here on our “Watt’s Up?” blog about testing and optimizing battery life for these devices. As a matter of fact, my posting from two weeks ago is about the webcast I will be doing this Thursday, June 18th: “Optimizing Battery Run and Charge Times of Today’s Mobile Wireless Devices”. That’s just two days away now!

With battery-powered devices there are times when it makes sense to use the device’s actual battery when performing testing and evaluation work to validate performance and gain insights for optimizing it. In particular, you will use the battery when performing a battery run-down test to validate run-time. Provided you have a suitable test setup, you can learn quite a few useful things beyond run-time that will give insights on how to better optimize your device’s performance and run-time. I go into a number of details about this in a previous posting of mine: “Zero-burden ammeter improves battery run-down and charge management testing of battery-powered devices”. If you are performing this kind of work you should find that posting useful.

However, there are other times when it makes sense to use a power supply in place of the device’s battery to power up the device for additional types of testing and evaluation work. One major factor is that the power supply can be set directly to specific levels which remain fixed for the desired duration, eliminating the variability and difficulty of trying to do likewise with a battery, if that is even possible. In almost all instances it is important that the power supply provides the correct characteristics to properly emulate the battery. This includes:
  • Full two-quadrant operation for sourcing and sinking current and power
  • Programmable series resistance to simulate the battery’s ESR

These characteristics are depicted in the V-I graph in figure 1.


Figure 1: Battery emulator power supply output characteristics

Note that quadrant 1 operation emulates the battery providing power to the device, while quadrant 2 emulates the battery being charged by the device.
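
To make this concrete, below is a minimal Python/PyVISA sketch of configuring an N6781A-class SMU as a battery emulator: voltage priority with both a positive and a negative current limit for two-quadrant operation, plus programmed output resistance standing in for the battery’s ESR. The VISA address and battery values are hypothetical, and the SCPI commands (particularly the negative current limit and output resistance commands) are assumptions to verify against the programming guide for your module:

# Sketch: emulate a single-cell Li-ion battery with an SMU (assumed commands; see note above)
import pyvisa

rm = pyvisa.ResourceManager()
smu = rm.open_resource("USB0::0x2A8D::0x0102::MY00000000::INSTR")  # hypothetical VISA address
CH = "(@1)"

smu.write("*RST")
smu.write(f"FUNC VOLT,{CH}")        # voltage priority: the emulated battery sets the bus voltage
smu.write(f"VOLT 3.8,{CH}")         # open-circuit voltage to emulate
smu.write(f"CURR:LIM 3,{CH}")       # quadrant 1: maximum discharge current sourced to the DUT
smu.write(f"CURR:LIM:NEG -2,{CH}")  # quadrant 2: maximum charge current sunk from the DUT (assumed command)
smu.write(f"RES 0.15,{CH}")         # output resistance ~ battery ESR (assumed command)
smu.write(f"RES:STAT ON,{CH}")
smu.write(f"OUTP ON,{CH}")

# The DUT now sees a 3.8 V source with roughly 150 mohm of series resistance that can both
# source (discharge) and sink (charge) current within the programmed limits.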


A colleague here recently had an article published that goes into a number of excellent reasons why and when it is advantageous to use a power supply in place of the actual battery: “Simulating a Battery with a Power Supply Reaps Benefits”. I believe you will find this to be a useful reference as well.

Wednesday, June 3, 2015

Webcast this June 18th: Optimizing Battery Run and Charge Times of Today’s Mobile Wireless Devices

One thing for certain: Technological progress does not stand still for a moment and there is no place where this is any truer than for mobile wireless devices! Smart phones, tablets, and phablets have all but totally replaced yesterday’s mobile phones and other personal portable devices. They provide virtually unlimited information, connectivity, assistance, and all kinds of other capabilities anywhere and at any time.

However, a consequence of all these greater capabilities, and of the additional time spent actively using them, is battery run-time limitations. Battery run time is one of the top dissatisfiers of mobile device users. To help offset this, manufacturers are incorporating considerably larger capacity batteries to get users through their day. I touched upon this several weeks ago with my earlier posting “Two New Keysight Source Measure Units (SMUs) for Battery Powered Device and Functional Test”. We developed higher power versions of our N678xA series SMUs in support of testing and development of these higher power mobile devices.

Ironically, higher capacity batteries worsen another top user dissatisfier: battery charging time. Again, technological progress does not stand still! New specifications define higher power delivery over USB, which can be used to charge these mobile devices in less time. I also touched upon this just a few weeks ago with my posting “Updates to USB provide higher power and faster charging”. The power available over USB will no longer be the limiting factor on how long it takes to recharge a mobile device.

I have been doing a good amount of investigative work on these fronts, which has led me to put together a webcast, “Optimizing Battery Run and Charge Times of Today’s Mobile Wireless Devices”. Here I will go into details about the operation of these mobile devices during use and charging, and the subsequent testing to validate and optimize their performance.  If you do development work on mobile devices, or even just have a high level of curiosity, you may want to attend my webinar on June 18. Additional details about the webcast and registration are available here: “Click here for accessing webcast registration”. I hope you can make it!


Friday, May 29, 2015

How to calculate the accuracy of a power measurement

Electrical power in watts is never directly measured by any instrument; it is always calculated based on voltage and current measurements. The simplest example of this is with DC (unchanging) voltage and current: power in watts is simply the product of the DC voltage and DC current:

P = V x I

So the accuracy of the power measurement (which is calculated from the individual voltage and current measurements) is dependent on the accuracy of the individual V and I measurements.

For example, you might use a multimeter to make V and I measurements and calculate power. The accuracy of these individual measurements is typically specified as a percent of the reading plus a percent of the range (the range term being an offset). (Note that “accuracy” here really means “inaccuracy” since we are calculating the error associated with the measurement.)

Let’s use an example of measuring 20 Vdc and 0.5 Adc from which we calculate the power to be 10 W. We want to know the error associated with this 10 W measurement. Looking up the specs for a typical multimeter (for example, the popular Keysight 34401A), we find the following 1-year specifications:

DC voltage accuracy (100 V range): 0.0045 % of reading + 0.0006 % of range
DC current accuracy (1 A range): 0.1 % of reading + 0.01 % of range

The error (±) associated with the voltage measurement (20 V) is:

(0.0045 % of 20 V) + (0.0006 % of 100 V) = 0.9 mV + 0.6 mV = ±1.5 mV

So when the measurement reading is 20.0000 V, the actual voltage could be any value between 19.9985 V and 20.0015 V since there is a 1.5 mV error associated with this reading.

The error (±) associated with the current measurement (0.5 A) is:

(0.1 % of 0.5 A) + (0.01 % of 1 A) = 0.5 mA + 0.1 mA = ±0.6 mA

So when the measurement reading is 0.5 A, the actual current could be any value between 0.4994 A and 0.5006 A since there is a 0.6 mA error associated with this reading.

We can now do a worst-case calculation of the error associated with the calculated power measurement which is the product of the voltage and current. The lowest possible power value is the product of the lowest V and I values: 19.9985 V x 0.4994 A = 9.98725 W. The highest possible power value is product of the highest V and I values: 20.0015 V x 0.5006 A = 10.01275 W. So the error (±) associated with the 10 W power measurement is ± 12.75 mW.

The above is the brute-force method to determine the worst-case values. It can be shown that the percent of reading part of the power measurement error is very closely approximated by the sum of the percent of reading errors for the V and I. Likewise, the offset part of the power measurement error is very closely approximated by the sum of the voltage reading times the current offset error and the current reading times the voltage offset error:

Power error ≈ (V % of reading + I % of reading) x P + (V reading x I offset error + I reading x V offset error)

Applying this equation to the example above for the 100 V and 1 A ranges at 20 V, 0.5 A:

% of reading part: 0.0045 % + 0.1 % = 0.1045 %
Offset part: (20 V x 0.1 mA) + (0.5 A x 0.6 mV) = 2 mW + 0.3 mW = 2.3 mW

So for 10 W, we get:

(0.1045 % of 10 W) + 2.3 mW = 10.45 mW + 2.3 mW = ±12.75 mW
As you can see, this is the same result as produced by the brute-force approach. Isn’t it great when math works out the way you expect?!?!

In summary, the error associated with a power measurement calculated as the product of a voltage and current measurement has two parts just like the V and I errors: a % of reading part and an offset part. The % of reading part is closely approximated by adding the % of reading parts for the V and I measurements. The offset part is closely approximated by adding two products together: the voltage reading times the current offset error and the current reading times the voltage offset error. It’s as simple as that!
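
For readers who want to apply this to their own numbers, here is a small self-contained Python sketch of the approximation above, using the example values from this post (the 34401A 1-year specs quoted earlier):

# Power measurement error from V and I accuracy specs (% of reading + % of range)
def power_error_w(v_rdg, i_rdg, v_pct_rdg, v_pct_rng, v_range, i_pct_rdg, i_pct_rng, i_range):
    """Return the approximate +/- error (in watts) of P = V x I."""
    pct_of_reading = (v_pct_rdg + i_pct_rdg) / 100.0  # combined % of reading terms
    v_offset = (v_pct_rng / 100.0) * v_range          # voltage offset error in volts
    i_offset = (i_pct_rng / 100.0) * i_range          # current offset error in amps
    return pct_of_reading * (v_rdg * i_rdg) + (v_rdg * i_offset + i_rdg * v_offset)

# Example from the post: 20 V on the 100 V range, 0.5 A on the 1 A range
err = power_error_w(20.0, 0.5, 0.0045, 0.0006, 100.0, 0.1, 0.01, 1.0)
print(f"Power error: +/- {err * 1000:.2f} mW")  # prints +/- 12.75 mW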

Should I use RS-232 or GPIB to communicate with my instrument?

Hi everyone,

I am writing this as I am preparing to go to the beach for a week.  My topic today will be short but hopefully useful.  We are going to talk about a subject that has been near and dear to my heart for the past 15 years: serial versus GPIB communication on our instruments.

Back in the days before LAN and USB became standard instrument interfaces, many of our products were designed with RS-232 serial ports in addition to GPIB.  RS-232 is standard on the 681xB AC Source/Analyzers, the E36xxA bench power supplies, and the N330xA electronic loads, as well as a few other products.

RS-232 is an interesting option for communication because it is essentially free: most computers have a serial port as standard, and you only need to buy a reasonably priced cable.  The main drawbacks are that you need to put the instrument in remote mode yourself using the "SYST:REM" command, that reasonably priced cable has to be properly configured, and it is slower than GPIB.  The main drawback of GPIB is that it costs more, since you need to purchase interface hardware.

I did some benchmarking this morning using my trusty 6811B AC Source/Analyzer.  I used the proper RS-232 cable and my Keysight 82357B USB to GPIB converter to connect to the 6811B.  I wrote a small program that measures the time to send a "*IDN?" command and receive a response.  The program looped 100 times and calculated the average time.  With GPIB, the average time to send the command and read back the response was about 7 ms.  With RS-232, sending the same command and reading back the response took about 50 ms.
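
For reference, here is a sketch of roughly what such a benchmark looks like in Python with PyVISA. The VISA resource strings are placeholders for your own GPIB and serial addresses, and the serial settings are assumptions to match against the instrument's RS-232 configuration:

# Rough benchmark: average round-trip time of "*IDN?" over a VISA connection
import time
import pyvisa

def average_idn_time(resource, setup=None, loops=100):
    rm = pyvisa.ResourceManager()
    inst = rm.open_resource(resource)
    if setup:
        setup(inst)
    start = time.perf_counter()
    for _ in range(loops):
        inst.query("*IDN?")  # send the command and read back the response
    return (time.perf_counter() - start) / loops

def serial_setup(inst):
    inst.baud_rate = 9600                                   # assumed; match the instrument's settings
    inst.read_termination = inst.write_termination = "\n"
    inst.write("SYST:REM")                                  # RS-232 requires putting the instrument in remote mode yourself

gpib_avg = average_idn_time("GPIB0::5::INSTR")                    # placeholder GPIB address
rs232_avg = average_idn_time("ASRL1::INSTR", setup=serial_setup)  # placeholder serial port
print(f"GPIB:   {gpib_avg * 1000:.1f} ms per query")
print(f"RS-232: {rs232_avg * 1000:.1f} ms per query")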

So to answer my titular question, "Should I use RS-232 or GPIB to communicate with my instrument?", my answer in every instance would be to use GPIB.  I know it is more expensive, but you really get what you pay for in this instance.  GPIB is a much faster, more reliable way to communicate with your instruments.

Thanks for reading.  Let us know if you have any questions.