How we choose primary batteries to achieve the required lifespan
Primary batteries play an important role for IoT applications. Designed for longevity, they have a high-energy capacity and are often used in standalone applications where charging is impractical or impossible, such as smart meters, tracking devices, environment monitoring or health devices.
Choosing the right battery for an application is essential because ongoing maintenance and replacement of failing batteries can dramatically increase operating costs and erode the application's return on investment.
Longevity is directly related to the level and duration of the stress a particular application places on the battery, so the "when" and "what" of the device's power demands need to be well defined when developing an IoT device.
There are a lot of factors that affect battery life, though. Here, we explore how Saft engineers help maximize the lifespan of primary batteries by choosing the right option for each project.
What influences the battery operating time?
The amount of energy a battery can hold, known as its capacity, is measured in ampere-hours or amp-hours (Ah). The capacity determines the battery's average runtime (operating time) and helps predict the likely end of its life.
The nominal or rated capacity is specified for a particular current or current range, called the nominal current. For a given application, the average operating time is estimated by dividing the capacity stated on the battery's data sheet for its nominal current range by the application's average current consumption.
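This capacity-over-consumption calculation can be sketched as follows. The cell capacity and current draw here are hypothetical round numbers, not figures for any specific Saft product, and the result is a theoretical upper bound before self-discharge and temperature effects are considered:

```python
def runtime_years(capacity_ah: float, avg_current_ma: float) -> float:
    """Theoretical runtime: rated capacity divided by average draw.

    Real lifetime will be shorter once self-discharge, temperature
    and pulse effects are factored in (illustrative figures only).
    """
    hours = capacity_ah / (avg_current_ma / 1000.0)  # Ah / A -> hours
    return hours / (24 * 365)

# Example: a hypothetical 3.6 Ah cell at an average draw of 50 uA (0.05 mA)
print(round(runtime_years(3.6, 0.05), 1))  # about 8.2 theoretical years
```

In practice this figure is the starting point that application engineers then derate using the factors described below.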
Capacity sets the theoretical runtime, but several other factors influence a battery's real-world longevity.
How do we calculate a battery’s lifetime?
Calculating the battery lifetime for a given application is tricky, but essential.
Our application engineers work as a bridge between our customers and our engineering teams, analyzing the application’s operational profile and battery requirements to recommend the best, long-lasting solution.
They draw on calculation tools based on statistics and historical data from Saft's 100 years of experience (and millions of batteries deployed) for the two main chemistries used in our primary battery ranges for IoT devices: lithium-manganese dioxide (Li-MnO2 – LM and M ranges) and lithium-thionyl chloride (Li-SOCl2 – LS, LSH and LSP ranges).
These tools allow us to explore – and sometimes challenge – the device's parameters and constraints to recommend the best possible battery option and calculate the expected battery lifetime for the specific application.
Exploring the application’s environmental conditions
An important deciding factor for choosing the right battery is the temperature profile of the application.
We determine this by analyzing the temperature conditions the IoT device will experience both during storage and once it’s deployed – considering whether it’s to be used indoors or outdoors (or both), and the climate it will be exposed to.
This allows us to estimate the energy loss linked to self-discharge and the passivation risk.
Self-discharge is a phenomenon in batteries in which internal chemical reactions reduce the energy stored in the battery without any connection between the electrodes or any external circuit. The self-discharge can be very complex to model and depends on several parameters, such as the peak current and consumption profile, the temperature, the cell’s age, and more.
There are two self-discharge phenomena that need to be taken into account when calculating the life expectancy of the battery: self-discharge in storage and self-discharge in use.
Self-discharge tends to occur more quickly at higher temperatures. By contrast, lower temperatures tend to reduce the rate of self-discharge and preserve the energy initially stored in the battery.
However, low temperatures also slow the electrochemical and diffusion reactions and increase the electrolyte's viscosity, making the battery less able to deliver energy and potentially causing the voltage to drop.
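The temperature dependence described above can be sketched with a rough Arrhenius-style rule of thumb in which the self-discharge rate roughly doubles for every 10 °C rise. The doubling factor and baseline temperature are illustrative assumptions for this sketch, not Saft measurement data:

```python
def self_discharge_rate(rate_at_20c: float, temp_c: float) -> float:
    """Illustrative rule of thumb: self-discharge roughly doubles for
    every 10 degC above 20 degC and halves for every 10 degC below it.
    The exponent base and reference temperature are assumptions, not
    vendor data; real behaviour depends on chemistry and cell age.
    """
    return rate_at_20c * 2 ** ((temp_c - 20) / 10)

# A cell losing 1 % of capacity per year at 20 degC would, under this
# rule, lose roughly 4 % per year at 40 degC:
print(self_discharge_rate(1.0, 40))  # 4.0
```

Actual self-discharge models are considerably more complex, as the article notes, which is why Saft's engineers rely on statistical tools rather than a single formula.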
Conversely, at warm temperatures (above 30°C) some lithium-based cells develop an electrochemical phenomenon called passivation, which protects the cell from discharging on its own. But passivation can also cause voltage delays and drops during pulses, and contributes to a loss of electrochemical material.
This all speaks to the importance of knowing an application’s temperature profile. It allows our application engineers to assess the risk of passivation and estimate energy loss through self-discharge, and more accurately understand the lifespan of a chosen battery.
Exploring the application’s consumption profile
The rate of discharge of a battery depends on the energy consumption (or current consumption) of the application and the pulse intensity, duration and frequency.
This means identifying the most energy-hungry functions affecting the device's autonomy – without forgetting sleep- and standby-mode consumption and the leakage currents of the application's various electronic components. These have the greatest impact on battery lifetime.
We also need to know the pulse drain - in other words the pulse levels, duration, and frequency of all the components of the system.
This combined information will help us understand the energy needed during all stages of the device’s life (storage and operation). This can then be studied alongside expected life duration of the device itself to optimize product design and select the right battery.
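A simple way to combine sleep-mode consumption with periodic pulses is a duty-cycle-weighted average current, which can then feed the runtime estimate. The device figures below (sleep current, pulse amplitude, pulse timing) are hypothetical examples, not a real consumption profile:

```python
def average_current_ua(sleep_ua: float, pulse_ma: float,
                       pulse_s: float, period_s: float) -> float:
    """Duty-cycle-weighted average current in uA for a device that
    sleeps at `sleep_ua` and draws `pulse_ma` for `pulse_s` seconds
    every `period_s` seconds (illustrative model only).
    """
    duty = pulse_s / period_s
    return sleep_ua * (1 - duty) + pulse_ma * 1000.0 * duty

# Hypothetical tracker: 10 uA sleep current, plus a 100 mA
# transmission pulse lasting 2 s once every hour
avg = average_current_ua(10, 100, 2, 3600)
print(avg)  # roughly 65 uA average
```

Even in this toy model, the hourly pulse dominates the budget over the sleep current, which is why the article stresses identifying the most energy-consuming functions first.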
The voltage is also very important – specifically the maximum, nominal and cut-off voltage of the application.
When a battery ages, the voltage tends to fluctuate. The selected battery needs to maintain a voltage above the cut-off voltage of the application during the device’s entire lifetime.
If the voltage falls below the cut-off voltage at the end of the battery's life, or when passivation is significant, the device will need to reboot. This consumes energy, can cause major drawbacks for the application, such as data loss, and could even bring about its premature end.
Depending on the minimum and maximum voltage, it might be possible to use several cells in series in the battery, offering a longer service life. For example, if the minimum voltage of the device is 3 V and the maximum voltage is 7.2 V, we can use two cells in series, which allows the cut-off voltage to go down to 1.5 V per cell. The device needs to allow for the extra space, but this increases the battery voltage and extends the device's life expectancy.
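The series-sizing check in that example can be sketched as a small helper. The 3.6 V nominal cell voltage is an assumption typical of Li-SOCl2 cells; this is an illustrative check, not a design rule:

```python
def cells_in_series(dev_max_v: float, cell_nominal_v: float) -> int:
    """Largest series count whose fresh-stack voltage stays within the
    device's maximum voltage (illustrative sizing sketch only).
    """
    n = int(dev_max_v // cell_nominal_v)
    if n < 1:
        raise ValueError("device maximum is below one cell's voltage")
    return n

# The example above: a device accepting 3 V to 7.2 V, with assumed
# 3.6 V nominal cells
n = cells_in_series(7.2, 3.6)
per_cell_cutoff = 3.0 / n  # the stack can be drained to 1.5 V per cell
print(n, per_cell_cutoff)  # 2 1.5
```

Draining each cell down to 1.5 V instead of 3 V is what unlocks the extra usable capacity the article mentions.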
Another solution can be to add a supercapacitor or a lithium hybrid capacitor in parallel with a thionyl-chloride cell to enhance the voltage response, as in our LSP range of batteries.
Exploring the battery technology
Once the device's requirements and constraints are understood, our application engineers take the final step: identifying the right battery, with the right chemistry and cell construction.
Li-SOCl2 chemistry, used in our LS, LSH and LSP ranges, has an exceptional reputation for reliability and long life. It also exhibits the highest energy density and can deliver that energy over up to 20 years.
Another chemistry, lithium-manganese dioxide (Li-MnO2), used in our LM and M ranges, is particularly suited to high-pulse applications with a low cut-off voltage and offers a good trade-off between energy and power. Coupled with low-consumption electronic components, these cells can also offer up to 20 years of service.
The construction of the cell also plays a role: a spiral construction offers high power for high-pulse applications (up to 4 A), whereas a bobbin construction offers low self-discharge and high capacity – but currents no higher than a few mA or a few dozen mA.
Juggling requirements and constraints
Ultimately, the goal is to choose the right battery chemistry and technology so that the device can function properly during its whole lifetime, making full use of the battery until the end of its life.
Any misreading of passivation or self-discharge, or any abuse subjecting a battery to conditions for which it was not designed could lead to premature failure, and so getting the decision right during the design phase of a project is crucial.
If you’d like to get in touch with one of our experts to help identify the right battery for your IoT application – email us on energizeIoT@saft.com.
* This is an updated version of an article first published in June 2020.