Low-power design: what does it mean for battery selection and operation?
“Low power” has been a longstanding mantra for IoT developers. But the advent of a new class of low power wide area (LPWA) technologies such as LTE-M and NB-IoT has accelerated the trend and allowed the industry to make huge inroads into power optimization to address the diverse requirements of the IoT market.
However, designing for low power places unique demands on IoT developers, who must deal with a broad set of new requirements for connectivity, power consumption and robustness. A number of factors can affect battery operation, and trade-offs between performance, energy and power consumption are often needed, which makes the challenge even bigger.
As we are about to take part in Sensors Converge, on September 22 and 23, the event covering the biggest sensor, chip and cloud technologies and applications driving innovation today, we teamed up with our partner Deutsche Telekom AG to take a look at the implications of low-power design for battery selection.
What is Low Power Design? Why is it important?
Low power design aims to reduce the overall dynamic and static power consumption of a device using a collection of techniques and methodologies, for the purpose of optimizing battery lifetime. It goes well beyond simply inserting a mobile operator’s NB-IoT SIM card into your device. It implies engineering and optimizing the entire hardware and software stack to handle this new paradigm. The hardware resources used to process the data should be kept to a minimum. The connection time should also be limited and, depending on the wireless technology chosen, many parameters will require tuning to optimize overall device performance and power consumption. A sleep mode needs to be designed into the product to keep power consumption to a minimum during intervals when no communication takes place.
The theory may seem simple, but the complexity of low power design lies in the details. To help IoT developers globally with this mammoth task, DT built a digital twin modeling tool, the IoT Solution Optimizer, which allows developers to plan, model and optimize the performance of their battery-powered NB-IoT and LTE-M solutions using Saft batteries. In the same fashion, Saft provides free access to Wisebatt for Saft, the online virtual prototyping solution that allows you to find a detailed view of your devices’ hardware power consumption and optimize it. If you are at an earlier stage of your project, Saft’s Smart battery Selector can also guide you towards the most appropriate solutions for your use case.
The challenges of low power design: adapting the communication behavior and the electronic design of your device
Adapting the communication behavior
The communication activity is one of the first things to look into. Consider the frequency of communication, the payload size, the protocols employed and the number of firmware updates. All of this influences how battery energy is consumed.
As a matter of fact, we wrote an article about this, detailing the impact of various communication technologies on your IoT application’s power consumption.
Optimizing the payload structure and the application’s communication patterns improves performance. Indeed, the bigger your application’s payload, the more cumbersome it is to handle through a narrowband transmission pipe, resulting in retransmissions that increase power consumption. It is prudent to carefully identify which data is essential to send, and how often this should take place.
Furthermore, the device’s communication module should be in a low-power sleep mode the rest of the time. Every unnecessary communication event interrupts the activated power saving features of NB-IoT or LTE-M, steadily reducing battery life as the communication frequency increases. The IoT Solution Optimizer contains algorithms which not only recommend the right combination of power saving features for your use case, but also help you visualize how communication payload size, protocol use and communication frequency impact your battery life.
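To make the effect of communication frequency concrete, here is a minimal back-of-the-envelope estimate in Python. The sleep current, per-message charge and battery capacity are illustrative assumptions for the sake of the example, not measured values for any particular module, battery or network.

```python
# Back-of-the-envelope battery-life estimate for an LPWA sensor.
# All figures are assumptions, not measured values for any particular
# module, battery or network.

SLEEP_CURRENT_MA = 0.01        # average deep-sleep (PSM) current of the whole board
TX_EVENT_CHARGE_MAH = 0.5      # charge per uplink event (wake, sync, send, back to sleep)
BATTERY_CAPACITY_MAH = 3600    # e.g. an AA-size bobbin cell

def years_of_life(messages_per_day: float) -> float:
    """Estimated battery life as a function of reporting frequency."""
    daily_charge_mah = SLEEP_CURRENT_MA * 24 + TX_EVENT_CHARGE_MAH * messages_per_day
    return BATTERY_CAPACITY_MAH / (daily_charge_mah * 365)

for n in (1, 4, 24, 96):
    print(f"{n:3d} messages/day -> ~{years_of_life(n):.1f} years")
# Self-discharge and temperature effects, ignored here, cap the longest estimates.
```

With these assumptions, moving from one message per hour to one every 15 minutes cuts the estimated lifetime by roughly a factor of four, which is exactly the kind of sensitivity the IoT Solution Optimizer lets you explore with real measurement data.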
One should also select the best-fitting connectivity bearer. 3GPP™ protocols offer many powerful features and capabilities which require a certain level of expertise to use properly. Some developers may not have much prior experience optimizing their applications for 3GPP™ connectivity, and may inadvertently program their devices to reboot whenever they come across a failure situation (for example, if the network rejects the device, or a service is requested that the network doesn't support). When thousands of such devices incessantly reboot and reconnect to the network, not only can they create a signaling storm, but they ultimately drain their batteries. To avoid such issues, it is strongly recommended to use modules supporting GSMA’s TS.34 Radio Policy Manager, a watchdog which monitors whether unwanted application behavior occurs. Whenever RPM detects specific scenarios in which the application could congest the network or drain battery power, a back-off mechanism is triggered for the duration of a configurable timer, minimizing the risks.
Finally, bear in mind the mobility profile of the devices, especially if they are targeted for deployment across several countries or will regularly switch between different networks. Depending on which access technologies and frequency bands are available in each market, the device will be prompted to perform network acquisition scans of varying duration. A good power saving strategy is to restrict the number of bands that the module scans to those of the target operators the device is authorized to roam on. If only the necessary bands are selected, the scanning procedure will be faster and consume less power.
Increasing battery life by reducing the number of communications thanks to Edge computing
There are alternatives to sending data to the cloud for processing at short intervals. By pushing processing functions out to the gateway or to the node, a strategy sometimes called “fog computing” or “edge computing”, you can process data locally rather than forward raw data for remote processing, which limits bandwidth use, minimizes the latency between input and response, and therefore reduces power consumption. Artificial intelligence (AI) runs on the device’s processor so that it can autonomously learn from its environment via the embedded sensing capabilities. The goal is to make more informed decisions about when to communicate or not, as in the sketch below.
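As a minimal sketch of such a local decision, here is a simple send-on-change policy with a periodic heartbeat, written in Python. The threshold, heartbeat interval and the radio_send placeholder are illustrative assumptions, not part of any specific product or library.

```python
import time
from typing import Optional

class ReportingPolicy:
    """Send-on-change with a heartbeat: a minimal edge filtering sketch."""

    def __init__(self, threshold: float = 0.5, max_silence_s: float = 6 * 3600):
        self.threshold = threshold          # minimum change worth reporting
        self.max_silence_s = max_silence_s  # always report after this long
        self.last_value: Optional[float] = None
        self.last_time = 0.0

    def should_transmit(self, value: float, now: Optional[float] = None) -> bool:
        """Decide locally whether a reading justifies waking the radio."""
        now = time.time() if now is None else now
        send = (
            self.last_value is None
            or abs(value - self.last_value) > self.threshold
            or (now - self.last_time) > self.max_silence_s
        )
        if send:
            self.last_value, self.last_time = value, now
        return send

# Usage: gate the modem call behind the policy so the radio stays asleep
policy = ReportingPolicy(threshold=0.5)
for reading in (21.0, 21.1, 21.9, 22.0):
    if policy.should_transmit(reading):
        pass  # radio_send(reading) -- placeholder for the actual modem call
```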
Edge computing has been booming lately, propelled by advancements in chip technology and the need to enhance operating efficiency at the upper and lower edges of performance, Critical IoT and Massive IoT. A new generation of components will enable more processing within the edge device while preserving multi-year battery life for the host devices. AI can even work with the battery to optimize the device’s power consumption.
But again, it is important to look closely at the total power budget that is required to gather, process or send data. Sacrificing some computing time to do less communication usually pays off, but if the processor needs to be active all the time to gather and process information, you might lose the benefits of such features.
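A rough comparison of the two budgets might look like the sketch below; the currents and durations are illustrative assumptions rather than measurements.

```python
# Rough energy-budget comparison: forwarding raw samples vs processing locally.
# The currents and durations are illustrative assumptions, not measurements.

MCU_ACTIVE_MA = 5.0    # microcontroller current while crunching data
RADIO_TX_MA = 120.0    # modem current during an uplink burst

def charge_mah(current_ma: float, seconds: float) -> float:
    return current_ma * seconds / 3600.0

# Option A: forward every raw sample (60 uplinks per hour, ~3 s of radio each)
raw_forwarding = 60 * charge_mah(RADIO_TX_MA, 3)

# Option B: aggregate locally (60 s of extra MCU time), send one summary per hour
edge_summary = charge_mah(MCU_ACTIVE_MA, 60) + charge_mah(RADIO_TX_MA, 3)

print(f"raw forwarding : {raw_forwarding:.2f} mAh/hour")  # ~6.0 mAh/hour
print(f"edge summary   : {edge_summary:.2f} mAh/hour")    # ~0.2 mAh/hour

# Caveat from the text: if local processing keeps the MCU awake permanently,
# 5 mA for a full hour is already 5 mAh and the saving largely disappears.
```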
Adapting the electronic design
An increasing number of wireless communication modules are being optimized for implementation in LPWA solutions. Navigating through the jungle of hardware choices is no easy task, especially since an increasing number of connectivity providers require developers to use operator-certified components. To help their customers in their product design, Deutsche Telekom’s device planning tool – the IoT Solution Optimizer – integrates the industry’s largest database of power measurements for NB-IoT and LTE-M single-mode and multimode modules. Each integrated component is digitally modeled using a very detailed profile consisting of thousands of power measurements, altogether representing over one hundred 3GPP™ procedures. By integrating Deutsche Telekom-tested and certified modules, customers avoid the hassle of troubleshooting performance and interoperability issues which may drain the battery.
Customers whose IoT devices are mobile or planned for sale in various markets are attracted to multimode modules for a simple reason: the home or roaming networks these devices operate on may not have deployed NB-IoT and/or LTE-M. The availability of a second or third fallback protocol is therefore essential to bridge coverage footprint gaps and ensure constant communication. The drawback of using multimode modules, especially with 2G fallback, is that their power amplifier architecture is significantly more complex and less efficient. Developers should therefore be aware, when optimizing their power consumption, that multimode modules are more power hungry than single-mode variants. So, as mentioned earlier, make sure not only to choose the module properly but also to program it properly.
GNSS chips, which provide positioning via the GPS, BeiDou, GLONASS, QZSS and Galileo satellite constellations, can also represent a huge power drain if not properly configured. These usually come with different operating modes. It is necessary to select and dimension the chosen operating mode so that the receiver scans for satellites in the most conservative manner, keeping power consumption to a minimum.
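The rough comparison below illustrates why duty-cycled fixes are so much cheaper than continuous tracking; the currents and fix times are assumptions for the example, not figures for any specific receiver.

```python
# Illustrative comparison of GNSS operating strategies. The currents and fix
# times are assumptions for the example, not figures for any specific receiver.

GNSS_ACQ_MA = 30.0     # receiver current while acquiring and tracking satellites
GNSS_SLEEP_MA = 0.01   # backup current keeping ephemeris and RTC alive between fixes

def daily_charge_mah(fixes_per_day: int, seconds_per_fix: float) -> float:
    active_s = fixes_per_day * seconds_per_fix
    idle_s = 24 * 3600 - active_s
    return (GNSS_ACQ_MA * active_s + GNSS_SLEEP_MA * idle_s) / 3600.0

print(f"continuous tracking       : {GNSS_ACQ_MA * 24:6.1f} mAh/day")
print(f"24 hot-start fixes (5 s)  : {daily_charge_mah(24, 5):6.1f} mAh/day")
print(f"24 cold-start fixes (35 s): {daily_charge_mah(24, 35):6.1f} mAh/day")
```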
Your GNSS solution is not the only cause of concern. Other components’ leakage currents need to be measured and reduced. Voltage converters and regulators may additionally consume significant power, which is then dissipated as heat.
Similar to batteries, SMT antennas are an often-overlooked component. Taking their placement into consideration at an early stage of product hardware design is critical, especially if the IoT device’s dimensions are small. Depending on where they are placed on the PCB, the antenna’s efficiency may decrease significantly in the low frequency ranges used by NB-IoT or LTE-M (800 or 900 MHz). In the worst case, much of the advantage of the LPWA technology’s coverage enhancements may be inadvertently lost.
Choosing the right battery: the implications of low-power design for battery selection and operation
The IoT Solution Optimizer application can also assist you in picking the right battery, which is paramount. The flow of this article has brought us to the question of the power source quite late in the text, but in reality the choice of battery should be addressed at a very early stage of your project.
Different chemistries offer different benefits and can be adapted to your device’s consumption profile. You can find out more about the advantages of the chemistries that we recommend for the Internet of Things in our article “Which types of batteries for your IoT devices?”
Lithium-thionyl chloride (Li-SOCl2), for example, is an interesting chemistry for low-power, long-life applications, as it offers a low self-discharge (meaning that the battery’s capacity is not significantly reduced by storage time or use in sleep mode) and perfectly suits high-energy and high-voltage requirements across a wide range of temperatures.
This chemistry is subject to passivation, a surface reaction that protects the cell from discharging on its own and enables its long shelf life.
Passivation can offer significant advantages, but the power requirements need to be anticipated to find the right trade-off between the consumption profile and the energy load. Indeed, the passivation layer builds up while the device is in sleep mode or in storage. When current is drawn, the passivation layer breaks down to let the current through. But if the device’s background current is too low, the passivation layer grows too thick; during a communication peak the ions can no longer flow through it fast enough, the voltage drops below the cut-off voltage and the device stops. Note also that at very high temperatures the passivation layer grows thicker, which consumes more active material and reduces the battery’s capacity.
So make sure to read our article “7 most common pitfalls about passivation (and how to avoid them)” to handle it properly.
The construction of the cell matters too, as it has a direct impact on performance. Bobbin cells (Saft LS range of batteries) provide higher energy density and lower self-discharge than spirally wound cells (Saft LSH range), but their limited continuous and pulse current capability, the latter often being required in low power wide area applications, may call for a pulse-sustaining device such as a capacitor, EDLC or hybrid layer capacitor to achieve higher pulse current profiles (Saft LSP range). These capacitors will also have an impact on your power budget, so you’ll need to choose the right capacitor for your application.
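As a rough illustration of how such a pulse-support capacitor might be sized, here is an ideal-capacitor estimate (C ≥ I·Δt/ΔV) that ignores ESR, temperature and the cell’s own contribution; all figures are illustrative assumptions.

```python
# Rough sizing of a pulse-support capacitor placed across a bobbin cell.
# Ideal-capacitor sketch: C >= I * dt / dV. ESR, temperature and the cell's
# own pulse capability are ignored; all figures are illustrative assumptions.

pulse_current_a = 0.25     # e.g. a radio transmit burst the cell cannot supply alone
pulse_duration_s = 0.010   # 10 ms burst
allowed_droop_v = 0.2      # acceptable supply sag during the burst

required_capacitance_f = pulse_current_a * pulse_duration_s / allowed_droop_v
print(f"required capacitance = {required_capacitance_f * 1000:.1f} mF")  # ~12.5 mF
```

In practice the acceptable sag, the burst profile and the capacitor’s ESR shift this figure, so the sizing should be validated against the device’s actual consumption profile.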
The temperature in the field will also matter. Sometimes, for the same application, we may recommend different battery technologies depending on the deployment area and its temperature. As mentioned earlier, at high temperature the lithium-thionyl chloride passivation layer grows thicker, which can impair the capacity of the cell. In that case, we might recommend using lithium-manganese dioxide (Saft LM/M range), a chemistry that is not subject to passivation.
The adjustment variable between different technologies often comes down to the voltage, but there are many options and solutions. Lithium-thionyl chloride, which offers a high voltage, could be coupled with a supercapacitor to increase the current level of the pulse. Lithium-manganese dioxide, which offers higher pulse capability, could be coupled with a DC/DC converter to boost a voltage that may otherwise be too low. Another option to increase the voltage is to use multiple cells in a battery pack, for example two cells connected in series. Each of these choices, down to the way the battery is integrated, has an impact on current consumption.
Generally, electronics run more efficiently at a higher voltage, and since the power delivered by the battery is the product of voltage and current, reducing a device’s operating voltage increases the current drawn from the battery for the same power, which in turn shortens its lifetime.
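A simple worked example illustrates the point for a load that needs roughly constant power; the power and capacity figures are illustrative assumptions.

```python
# For a roughly constant-power load, the average current drawn from the battery
# rises as the supply voltage falls (I = P / V), shortening run time.
# The power and capacity figures are illustrative assumptions.

LOAD_POWER_MW = 10.0     # average power the electronics require
CAPACITY_MAH = 3600      # usable battery capacity

for voltage_v in (3.6, 3.0, 2.5):
    current_ma = LOAD_POWER_MW / voltage_v
    hours = CAPACITY_MAH / current_ma
    print(f"{voltage_v:.1f} V -> {current_ma:.2f} mA average -> ~{hours / 24:.0f} days")
```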
Every project is specific, and the key to success will be to find the right trade-off between the application’s lifetime and its discharge rate. Is it better to consume more energy but use more than 90% of the available energy, or to consume less energy but only be able to use 70% of it? Well, the best person to answer this question is the battery manufacturer, so we recommend you speak to them!
Low power design can considerably complicate the task of an IoT developer. The IoT Solution Optimizer has been created to accelerate prioritization and help users make more informed decisions on their design choices, saving considerable trial and error. If you’d like to learn more about further aspects of IoT device design, Deutsche Telekom has also published its IoT Solution Guidelines online, which collect best practices gathered over the past 10 years.
Along with Saft’s IoT tools, this blog is full of information and best practices about batteries. Subscribe to our newsletter to receive them in your mailbox.
And if you’d like to discuss how to power your low-power device, come and join us at Sensors Converge, on September 22 and 23, Booth #325. You can also email us your questions about battery selection at energizeiot@saftbatteries.com. We’ll be happy to help.