How can primary batteries achieve their expected lifetime?
Primary batteries play an important role in IoT applications. Designed for longevity, they have a high energy capacity and are often used in standalone applications where charging is impractical or impossible, such as smart meters, animal or asset tracking devices, parking availability sensors, environmental monitoring or health devices.
Battery longevity is key to optimizing the total cost of ownership (TCO) of an application. It is directly related to the level and duration of the stress inflicted on the battery by the application and is therefore specific to a given project. If the “when” and “what” of that stress are not well defined and taken into account by the cell manufacturer, maintenance and replacement of failing batteries can dramatically increase the operating costs (OPEX) and degrade the return on investment (ROI) of the application for its end user.
An understanding of the factors affecting battery life is therefore vitally important for an IoT designer in managing and optimizing product performance. So let’s take a look at the various elements to take into consideration…
What influences the battery operating time?
The amount of energy a battery can hold (energy storage), also called “capacity”, is measured in ampere-hours or amp-hours (Ah). The capacity of the battery determines its average runtime (operating time) and helps predict the end of the battery’s life.
The nominal or rated battery capacity is calculated for a specific current or current range called “nominal”. The average operating time of a battery for a given application is generally taken as the average capacity indicated on the battery datasheet for its nominal current range, divided by the consumption required by the application. For example, if an IoT device consumes 1 mA and has a 1,200 mAh battery, the operating time will be equal to the capacity divided by the consumption: 1,200 mAh / 1 mA = 1,200 h. A common mistake is to look at the nominal capacity of the battery without checking whether the average current drained by the application corresponds to the nominal current range on the datasheet.
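As a rough illustration, here is a minimal sketch in Python of that first-order estimate, using the illustrative figures above; it only holds if the application’s average current falls within the nominal current range for which the capacity is rated.

```python
# Back-of-envelope operating time estimate (illustrative values, not a
# substitute for a full lifetime calculation by an application engineer).

def operating_time_hours(capacity_mah: float, avg_current_ma: float) -> float:
    """Ideal operating time = rated capacity / average current drain.

    Only valid if avg_current_ma falls within the nominal current range
    for which the rated capacity is specified on the datasheet.
    """
    return capacity_mah / avg_current_ma

hours = operating_time_hours(capacity_mah=1200, avg_current_ma=1.0)
print(f"{hours:.0f} h (about {hours / 24 / 365:.2f} years)")  # 1200 h, ~0.14 years
```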
Considering the capacity alone is not sufficient to evaluate the battery’s lifetime. The life cycle of a battery depends on several factors:
- The battery type and electrochemistry
- The temperature in the field that has an effect on electrochemical efficiency
- The rate of discharge, which depends on the power needs of the application and the pulse intensity, duration and frequency.
- The voltage range of the device (maximum, nominal and cut-off voltage). If the voltage falls below the cut-off voltage (especially at the end of the battery’s life or when passivation is significant), the device will need to reboot, which consumes energy, may cause major drawbacks for the application such as data loss, and could even cause its premature end of life.
- The leakage currents of the various components of the device that consume energy from the battery.
- The battery’s shelf life (storage conditions and duration before the battery is integrated into the device)
- The device’s shelf life (storage conditions and duration once the battery is integrated into the device)
How do we calculate the battery’s lifetime?
Calculating the battery lifetime for a given application is a tricky yet essential mission for an Application Engineer. Working as a bridge between the customer and the engineering teams, our role is to analyze the application’s operational profile and battery requirements and recommend the best, longest-lasting solution. We also provide technical support to application designers and share our expertise on how best to optimize their applications.
Our 100 years of experience and the millions of batteries deployed in the field have allowed us to develop calculation tools, based on statistics and historical data, for the two main chemistries used in our primary battery ranges for the Internet of Things: lithium-manganese dioxide (Li-MnO2, LM/M ranges) and lithium-thionyl chloride (Li-SOCl2, LS, LSH and LSP ranges). These tools allow us to explore, and sometimes challenge, the device’s parameters and constraints to recommend the best possible battery option and calculate the expected battery lifetime for the specific device.
Exploring the application’s environmental conditions
Knowing the temperature conditions of the IoT device during storage and once it’s deployed —whether it’s indoor or outdoor or both, or in temperate or warm countries— will allow us to determine the temperature profile of the application, and estimate the energy loss linked to self-discharge, and the passivation risk (leading to possible voltage drop). (See FAQ)
Self-discharge is a phenomenon in which internal chemical reactions reduce the energy stored in the battery without any connection between the electrodes or any external circuit. Self-discharge can be very complex to model and depends on several parameters, such as the peak current and consumption profile, the temperature, the cell’s age, etc.
There are two self-discharge phenomena that will need to be taken into account when calculating the life expectancy of the battery: self-discharge in storage and self-discharge in use.
The storage period of a battery can be significant, from the moment it is manufactured to the moment it is integrated into the IoT device, and up to the actual operation of the IoT application. Self-discharge tends to occur more quickly at higher temperatures. By contrast, lower temperatures tend to reduce the rate of self-discharge and preserve the initial energy stored in the battery.
In our evaluations, we take into account each step of the device’s storage, up until it is put into service, to make sure our estimations are as close to reality as possible.
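As a simplified illustration of that multi-stage reasoning, the Python sketch below applies a per-year self-discharge rate that depends on the storage temperature band; the rates and stages are placeholders chosen for illustration, not datasheet values, and the real calculation relies on our statistical tools.

```python
# Illustrative sketch of capacity lost in storage, assuming a simple
# per-year self-discharge rate per temperature band. The rates below
# are placeholders, not datasheet values.

ANNUAL_SELF_DISCHARGE = {      # fraction of remaining capacity lost per year
    "cool (<= 20 C)": 0.01,
    "warm (20-40 C)": 0.02,
    "hot (> 40 C)":   0.05,
}

def capacity_after_storage(capacity_mah, storage_steps):
    """storage_steps: list of (temperature_band, duration_years) tuples,
    one per stage (warehouse, integration, transport, pre-deployment)."""
    remaining = capacity_mah
    for band, years in storage_steps:
        remaining *= (1.0 - ANNUAL_SELF_DISCHARGE[band]) ** years
    return remaining

# Example: one year in a cool warehouse, then six months in a hot container.
print(f"{capacity_after_storage(1200, [('cool (<= 20 C)', 1.0), ('hot (> 40 C)', 0.5)]):.0f} mAh")
```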
As explained earlier, knowing the temperatures to which the device is exposed in normal operating mode is equally important for determining the self-discharge in use. A low temperature can protect the battery from self-discharge, but the electrochemical and diffusion reactions are slowed down and the electrolyte viscosity is higher, which makes the battery less able to provide energy and can cause the voltage to drop. And for constant-power applications, as the impedance increases, the voltage drops, which in turn draws more current and impacts the battery’s capacity.
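To make the constant-power effect concrete, here is a small sketch with illustrative figures: the device draws I = P / V, so the same power costs more current as the battery voltage sags.

```python
# Why a falling voltage hurts constant-power loads: the device draws
# I = P / V, so a lower battery voltage means a higher current drain.
# The power figure is illustrative.

power_mw = 10.0                  # constant power drawn by the device
for voltage_v in (3.6, 3.0, 2.5):
    current_ma = power_mw / voltage_v
    print(f"V = {voltage_v:.1f} V  ->  I = {current_ma:.2f} mA")
# The same 10 mW costs about 2.8 mA at 3.6 V but 4.0 mA at 2.5 V, so the
# capacity is drained faster as the voltage drops.
```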
Conversely, while self-discharge is greater at warm temperatures, some lithium-based cells develop an electrochemical phenomenon called passivation that protects the cell from discharging on its own (see FAQ). But passivation can cause voltage delays and drops, which also contribute to a loss of electrochemical elements.
Knowing the temperature profile is also important for determining the voltage response of a battery, as voltage readings fall when the temperature drops. Besides, the temperature profile allows application engineers to assess the risk linked to passivation (leading to voltage drops or delays during pulses) for liquid cathode systems such as lithium-thionyl chloride batteries. The passivation phenomenon is greater when the temperature reaches 30-40°C and beyond. We therefore need to evaluate precisely the time spent by the battery in warm or cold temperatures, in storage and in use, to determine the self-discharge and the passivation risk.
Exploring the application’s operational profile and energy requirements
As we said earlier, the rate of discharge of a battery depends on the energy consumption (or current consumption) of the application and the pulse intensity, duration and frequency.
We need to identify the most energy-consuming functions that will affect the device’s autonomy, without forgetting the sleep-mode and standby-mode consumption and the consumption and leakage currents of the various electronic components of the application. These have the most impact on the battery’s lifetime. (For more information, watch our 5 tips to IoT designers)
We will also need to know the pulse drain; in other words, the pulse levels, duration and frequency of all the components of the system (up to 10 components).
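As a simplified example of how such a pulse profile translates into an average drain, the sketch below duty-cycles each pulse over its period and adds the sleep current; the component names and figures are hypothetical, chosen only for illustration.

```python
# Average current from a duty-cycled profile: sleep current plus the
# contribution of each periodic pulse (level * duration / period).
# All figures below are hypothetical illustration values.

def average_current_ua(sleep_ua: float, pulses) -> float:
    """pulses: list of (pulse_current_ua, duration_s, period_s) tuples."""
    avg = sleep_ua
    for current_ua, duration_s, period_s in pulses:
        avg += current_ua * duration_s / period_s
    return avg

profile = [
    (150_000, 0.5, 3600),   # radio transmission: 150 mA for 0.5 s every hour
    (5_000, 1.0, 600),      # sensor reading: 5 mA for 1 s every 10 minutes
]
avg = average_current_ua(sleep_ua=2.0, pulses=profile)
print(f"average current ~ {avg:.1f} uA")   # ~ 31.2 uA
# First-order lifetime: 1200 mAh / 0.0312 mA ~ 38,500 h (about 4.4 years),
# before derating for temperature, self-discharge and passivation.
```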
All this information will give us the energy needed during all stages of the device’s life (storage and operation). We will study this information in the light of the expected life duration of the device to make recommendations on how to optimize your design and select the right battery to achieve the desired lifetime.
The voltage is also very important. We’ll need to know the maximum, nominal and cut-off voltage of the application for several reasons:
- When the battery ages, the voltage tends to fluctuate. We need to make sure the selected battery can maintain a voltage above the cut-off voltage of the application during the device’s entire lifetime and over the entire temperature range of its intended use.
- If the voltage falls below the cut-off voltage (especially under a high pulse, or when passivation is significant), the device will need to reboot, which consumes energy. This can cause major drawbacks for the application and could even cause its premature end of life.
- Depending on the minimum and maximum voltage, we might be able to recommend using several cells in series in the battery, thus offering a longer service life. For example, if the minimum voltage of the device is 3 V and the maximum voltage is 7.2 V, we can use a combination of two cells in series, which allows the cut-off voltage to go down to 1.5 V per cell (see the sketch after this list). Your device would need to allow for the space, but this has a double advantage: increasing the battery’s voltage and offering a longer life expectancy for the device.
- Another solution is to add a supercapacitor or a lithium hybrid capacitor in parallel with a thionyl chloride cell to enhance the voltage response, as in our LSP range of batteries.
- FYI: If you lower the cut-off voltage of the application, you’ll have a broader choice of battery technologies and you might be able to get rid of energy-consuming components that might otherwise be necessary, such as supercapacitors. (For more information about the voltage, you can watch this advice from one of our application engineers on how to optimize your design).
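To illustrate the series-cell reasoning mentioned in the list above, here is a rough sketch that checks whether a given number of cells fits the device’s voltage window; it assumes Li-SOCl2-type cells with a 3.6 V nominal voltage and a 1.5 V per-cell cut-off, as in the example, and ignores pulse behaviour and ageing.

```python
# Rough check of a series configuration against a device voltage window,
# assuming cells with a 3.6 V nominal voltage that may be discharged down
# to 1.5 V per cell. Figures are illustrative only.

def series_fits(n_cells, device_min_v, device_max_v,
                cell_nominal_v=3.6, cell_cutoff_v=1.5):
    pack_max = n_cells * cell_nominal_v       # fresh-pack voltage
    pack_cutoff = n_cells * cell_cutoff_v     # lowest voltage the pack can use
    # The fresh pack must stay below the device maximum, and the device
    # cut-off must not be reached before the cells are fully used.
    return pack_max <= device_max_v and pack_cutoff <= device_min_v

print(series_fits(1, device_min_v=3.0, device_max_v=7.2))  # True, but cut-off is 3.0 V/cell
print(series_fits(2, device_min_v=3.0, device_max_v=7.2))  # True: cut-off drops to 1.5 V/cell
print(series_fits(3, device_min_v=3.0, device_max_v=7.2))  # False: 10.8 V exceeds the device maximum
```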
One last question to ask is whether the battery sought for the project is the device’s main battery or a backup. Backup batteries stay in standby mode for a long time until they are suddenly needed, in which case the battery needs to be able to react quickly. Obviously, this will also impact its longevity.
Exploring the battery technology itself
Now that we are aware of the device’s requirements and constraints, we can recommend the right battery for your project. This is the last step in the lifetime calculation process, but not the least, as the choice of chemistry impacts the longevity of the battery.
Li-SOCl2 chemistry, for example, has an exceptional reputation for reliability and long life. It also exhibits the highest energy density and can deliver it for up to 20 years. Other chemistries, such as lithium-manganese dioxide (Li-MnO2), are particularly suited to high-pulse applications with a low cut-off voltage and offer a good trade-off between energy and power. Coupled with low-consumption electronic components, they can offer up to 20 years of service.
The construction of the cell also plays a role: a spiral technology offers a lot of power for high-pulse applications (up to 4 A), whereas a bobbin construction offers a low self-discharge and a strong capacity over time, but cannot deliver currents higher than a few mA or a few dozen mA.
Finally, for very long-term discharge applications (10-20 years), we might recommend combining a Li-SOCl2 bobbin cell with a capacitor, as in our LSP range. The capacitor is designed to help the battery maintain the voltage during pulses, for the whole life of the application and in any temperature conditions. The pulse-sustaining capability of the capacitor circumvents the effect of passivation by storing electrical energy and releasing it when necessary. The battery capacity is slightly impacted by the capacitor, but the pulse drain delivered by the cell is reduced, resulting in a longer battery life.
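As a rough illustration of why a parallel capacitor helps during pulses, the sketch below sizes the capacitance from the standard relation C >= I * t / dV, where dV is the acceptable voltage droop during the pulse; the figures are assumptions for illustration, not LSP specifications.

```python
# Illustrative capacitor sizing for pulse support: the capacitor supplies
# the charge I * t while its voltage drops by no more than dV, so the
# minimum capacitance is C = I * t / dV. Values are assumptions only.

pulse_current_a = 0.5     # pulse drawn by the radio during transmission
pulse_duration_s = 0.05   # pulse length
allowed_droop_v = 0.3     # acceptable voltage drop during the pulse

c_min_f = pulse_current_a * pulse_duration_s / allowed_droop_v
print(f"minimum capacitance ~ {c_min_f:.3f} F ({c_min_f * 1000:.0f} mF)")
# ~ 0.083 F: the capacitor absorbs the pulse, so the bobbin cell only sees
# the low average current and passivation has far less effect on the pulse.
```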
Juggling between requirements and constraints to get the best of the device’s battery
In the end, our aim is to recommend the right battery chemistry and technology so that the device can function properly during its whole lifetime, making full use of the battery until the end of its life. Experience is key in this process, as any misreading of passivation or self-discharge, or any abuse subjecting a battery to conditions for which it was never designed, could lead to premature failure: the battery drops below the cut-off voltage and the device is not able to reboot. The battery is then left half charged, a waste of electrochemical elements, time and money, not to mention the wider economic consequences that could compromise the project.
If you have any questions about this article or if you wish to get in touch with one of our experts for a battery recommendation or lifetime calculation, feel free to email us at energizeIoT@saftbatteries.com. We will be happy to help!