
USB Type-C™ power: Should your next device have USB Type-C?


As USB Type-C ports continue to rise in popularity in laptops, tablets and smartphones, so does the number of failures that early adopters experience. These failures are frequently due to the new USB Type-C cables, many of which are not compliant with the official USB 3.1 specification.

With previous USB implementations (common ones shown in Figure 1 below), most systems were able to withstand the <10W applied by noncompliant cables. Now, with up to 100W of charging available through USB Type-C, properly protecting systems has become much more complicated.

 Figure 1: Today’s popular USB connectors from left to right: Type A, Mini B and Micro B

Not all USB Type-C cables are created equal

Almost every device I own has some form of USB connectivity, whether it’s the larger rectangular connector (Type A) or the smaller micro/mini ports. I own so many of these devices with various connectors that I lose track of which cable is for which device, and use them interchangeably. With the previous generation of USB cables, this was not an issue, as all cables delivered relatively low power (<10W).

100W charging support and a new physical connector, shown in Figure 2, are two major changes brought on by the USB Type-C standard. Because of this, all USB cables will need to be upgraded. Many companies produce USB Type-C cables and sell them across a variety of websites, and the sheer number of compliant and noncompliant cables out there is next to impossible to keep track of. The danger of unknowingly buying noncompliant cables is that some are so far out of compliance that they can even destroy the systems they connect to.

Figure 2: The next generation of connectors: USB 3.1, otherwise known as USB Type-C

The benefits of implementing USB Type-C protection

To prevent damage to consumers’ devices, regardless of which USB Type-C cable they end up using, you need to ensure that you design your system with robust power-path protection. Doing so will help ensure that consumers are protected against a variety of common problems, including faulty cable hot-plugging or tightly spaced pins shorting from moisture or unwanted debris inside of the connector. One way to implement this protection is detailed in the USB Type-C Power-Path Protection with Audio Accessory Support Reference Design. Figure 3 shows the block diagram.

Figure 3: USB Type-C Power-Path Protection with Audio Accessory Support Reference Design block diagram

There are multiple levels of protection required for USB Type-C ports, specifically power-path and signal-path protection. In this reference design, the TPD6E05U06 handles signal-path protection on the CC1, CC2, D+, D-, SBU1 and SBU2 pins. This device integrates six ultra-low loading-capacitance transient voltage suppression (TVS) diodes into a single chip, providing International Electrotechnical Commission (IEC) 61000-4-2 Level 4 electrostatic discharge (ESD) protection. The TPS25923 eFuse and CSD17571Q2 power MOSFET together provide up to 30V of overvoltage protection, current limiting and reverse-current blocking for the power path. Last but certainly not least, the TUSB320LAI, TS5USBA224 and TS3A226AE manage the USB cable orientation (don’t forget that USB Type-C is a reversible connector), acting role (DFP, UFP, DRP) and cable attach/detach.

Additionally, these devices provide support for analog audio accessories, allowing for power or audio delivery over a single USB Type-C port. Download the USB Type-C Power Path Protection with Audio Accessory Support Reference Design today to add the proper protection to your next USB Type-C design.

Additional resources


Packing more performance into space-constrained embedded applications


Sensing applications are getting physically smaller and smaller. Whether you’re designing a remote industrial sensor node that needs to be tucked away in a factory (Figure 1) or a sensor for the next smart wearable device, space is becoming a scarce resource.

On the other hand, there is an increasing need for integration and processing to be available locally at the microcontroller (MCU) or system level. Moving measurements that once lived on a rack or test bench directly into the nodes, combined with enough processing capability to run analytics locally, enables remote nodes to make more timely and informed decisions while minimizing communication latency and mitigating communication link unavailability.

Both trends – decreased physical size, more integration – are equally attractive, but they don’t always work out in each other’s favor. Embedded application developers are burdened with picking the right products to fit their constraints, both physically and computationally.

 Figure 1: Space-constrained industrial wireless sensor node

The MSP432™ MCU family has expanded with a production-ready 80-pin ball-grid array (BGA) package, introducing the same high-performance capabilities of MSP432 MCUs into a tiny 5mm-by-5mm footprint (Figure 2) to fit the requirements of space-constrained industrial applications. Packing in a 1Msps successive approximation register (SAR)-based analog-to-digital converter (ADC), the MSP432 MCU in BGA package introduces best-in-class integrated analog performance of up to 16 effective number of bits (ENOB) into the realm of ultra-small microcomputing. The applications can sample better sensor data at higher precision and lower power without having to increase printed circuit board (PCB) size to accommodate external ADCs.

Additionally, the 48MHz ARM® Cortex®-M4F central processing unit (CPU), paired with TI’s innovative processing algorithms, allows applications to process data directly on the spot, detecting trends and ultimately making smarter decisions quickly. A good example of local analytics is the MSP432 MCU speech recognizer library, which detects your voice without the need for an internet connection.

Figure 2: MSP432 MCU in 5mm-by-5mm BGA packaging

The MSP432 MCU in BGA package can operate as the wireless host MCU when partnered with a wireless network processor that also comes in a tiny package such as the SimpleLink™ Bluetooth® low energy CC2640R2F wireless MCU in a wafer chip-scale package (WCSP). The partitioning allows each component to do what it does best: the CC2640R2F device in network processor mode provides a robust and ultra-low-power Bluetooth low energy link, while the MSP432 MCU can run additional Bluetooth profiles or protocols such as Bluetooth low energy for HomeKit technology while leaving an ample amount of its 256kB flash memory space free for application code. With analog and radio integration and processing capabilities, this “dynamic duo” can help Internet of Things (IoT) developers design highly integrated wireless sensor nodes with advanced sensing and measurement for demanding and space-constrained environments.

Additional resources

Power Tips: How to use Nyquist plots to assess system stability


A Bode plot is a very popular way to determine a dynamic system’s stability. However, there are times when a Bode plot is not a straightforward stability indicator.

Figure 1 shows a Bode plot of TI’s TPS40425 synchronous buck converter. In this application, a π filter is used at the output of the buck converter.

Figure 1: Bode plot of a buck converter with an output π filter

Because the ferrite bead used in the π filter has varying inductance over the load current, the Bode plots measured at different load conditions vary dramatically. When the system is idling, the measured Bode plot has multiple 0dB gain crossovers. It is difficult to apply the phase-margin and gain-margin criteria in this case. Instead, I used the Nyquist stability criterion.

The Nyquist stability criterion looks at the Nyquist plot of open-loop systems in Cartesian coordinates, with s = jω. Assuming that the open-loop system transfer function is F(s), the Nyquist plot is a plot of the transfer function of F (jω), with ω from -∞ to +∞. Stability is determined by looking at the number of encirclements of the point at (-1,0j). If the number of counterclockwise encirclements of (-1,0j) by F(s) equals the right-half-plane poles of F(s), then the system is stable. In this example, if a buck converter does not have right-half-plane poles, the number of encirclements of (-1,0j) indicates the stability.

You can derive a Nyquist plot from the measured Bode plot. I save the data first. The analyzer I use provides the data with magnitude in decibels and phase in degrees, as shown in Figure 2. Different frequency analyzers provide different formats. There are frequency analyzers that provide the data in complex numbers.

Figure 2: Data saved by a frequency analyzer from a Bode-plot measurement

Equation 1 converts magnitude and phase into complex numbers:

(1)   F(j2πfn) = 10^(Mn/20) × (cos θn + j·sin θn)

Where Mn is the magnitude in decibels and θn is the phase (converted from degrees to radians) of the measured F(s) at s = j2πfn.
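As a quick illustration of Equation 1 (a minimal sketch, not the actual script I used), the C snippet below converts rows of exported analyzer data (frequency in hertz, magnitude in decibels, phase in degrees) into complex Nyquist points; the sample data and output format are made up.

```c
#include <complex.h>
#include <math.h>
#include <stdio.h>

/* Equation 1: convert one Bode-plot sample (magnitude in dB, phase in
   degrees) into a complex point for the Nyquist plot. */
static double complex bode_to_nyquist(double mag_db, double phase_deg)
{
    const double pi = 3.14159265358979323846;
    double mag   = pow(10.0, mag_db / 20.0);   /* dB -> linear gain  */
    double theta = phase_deg * pi / 180.0;     /* degrees -> radians */
    return mag * (cos(theta) + I * sin(theta));
}

int main(void)
{
    /* Hypothetical rows exported by the analyzer: frequency (Hz), gain (dB), phase (deg). */
    const double data[][3] = {
        { 100.0,   92.0,   -5.0 },
        { 1.0e3,   60.0,  -95.0 },
        { 1.0e6,  -40.0, -170.0 },
    };

    for (unsigned n = 0; n < sizeof data / sizeof data[0]; n++) {
        double complex f = bode_to_nyquist(data[n][1], data[n][2]);
        /* Distance from the (-1,0j) point: staying outside a radius of 0.766
           corresponds to roughly 45 degrees of phase margin and 12dB of gain margin. */
        printf("%10.1f Hz: %+.4f %+.4fj, distance from (-1,0j) = %.3f\n",
               data[n][0], creal(f), cimag(f), cabs(f + 1.0));
    }
    return 0;
}
```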

After applying Equation 1, I plotted the complex numbers in Cartesian coordinates. The plot shown in Figure 3 covers frequencies from 100Hz to 1MHz. It is a good approximation of the plot from 0Hz to +∞Hz. The plot from -∞Hz to 0Hz is the plot from 100Hz to 1MHz mirrored about the horizontal (real) axis. I added that mirrored portion to Figure 3 to form Figure 4.

Figure 3: A Nyquist plot derived from a Bode plot of frequencies from 100Hz to 1MHz


Figure 4: A Nyquist plot derived from a Bode plot from -1MHz to -100Hz and 100Hz to +1MHz

It is the path around the unity gain circle that is most relevant to the system stability.  I zoomed into the area close to the unity gain circle. Since this system is of a voltage-mode buck converter, I know the DC gain is over 90dB, with a phase starting from 0 degrees. I can approximate the plot at lower frequencies in the complete Nyquist plot, as shown in Figure 5.

Figure 5: Approximate Nyquist plot from -∞Hz to +∞Hz when the system is idling

Following the arrows from -∞Hz, you can see that the Nyquist plot does not encircle the (-1,0j) point. The system is stable. I plotted a blue circle with a radius of 0.766 centered at (-1,0j). If the Nyquist plot does not enter this blue circle, the phase margin is greater than 45 degrees and the gain margin is greater than 12dB.

Following the same procedure, Figure 6 shows a Nyquist plot at full load. Following the arrows, you can see that this Nyquist plot doesn’t encircle the (-1,0j) point either, and it stays outside of the safety circle described earlier.

Figure 6: Approximate Nyquist plot from -∞Hz to +∞Hz when the system is at full load

When Bode plots fail to provide a straightforward indication of stability, consider using a Nyquist plot. In this post, I’ve shown how to convert measured Bode plots to a Nyquist plot and how to use the Nyquist stability criterion to judge system stability.

Additional resources

 

 

 

Make your PSE system smarter and more efficient


 Power over Ethernet (PoE) enables Ethernet cables to carry electrical power as well as data. For example, old Internet protocol (IP) phones usually needed a DC power supply and Ethernet cable to deliver power and data, respectively. With PoE implemented in an Ethernet switch, power is delivered through the Ethernet cable to the IP phone, eliminating the need for a power supply. See Figure 1.

Figure 1: Old & new IP phone data/power path

There are two types of devices, one on each side of an Ethernet cable: power sourcing equipment (PSE) and the powered device (PD). On the sourcing side, PSE devices are usually installed in Ethernet switches, routers, gateways and wireless backhauls. PDs manage and protect the PoE system at the load side and are usually installed in IP phones, security cameras and access points.

In this post, I will explain when you need system software to control the PSE in order to implement more functionality than what’s defined in IEEE 802.3at (the Institute of Electrical and Electronics Engineers standard for Ethernet), and how to get started with the TPS23861 PoE MSP430™ microcontroller (MCU) reference code to develop your own system software.

The TPS23861, a PoE PSE controller, is one of the most popular PSE devices available for mass-market applications, designed into products such as surveillance network video recorders (NVRs), Ethernet switches and wireless access points. It comes with three modes: auto mode, semi-auto mode and manual mode. In auto mode, host control is not necessary and the TPS23861 can operate by itself (including detection, classification, power on and fault handling). This mode is usually used in standard low-port-count PSE systems. In semi-auto mode, the port automatically performs detection and classification as long as they are enabled (register 0x14). A push-button command (register 0x19) is required to power on the port. Semi-auto mode is usually used in high-port-count PSE systems in which designers can implement multiport power management. Manual mode provides the most flexibility. It is used in nonstandard PoE applications such as high-power PoE PDs and non-PoE loads.

When operating in semi-auto or manual mode, systems with these criteria will need an external MCU to control the PSE:

  • The system has a high port count (more than eight ports).
  • The system needs to connect to nonstandard PDs such as high-power PoE PDs.
  • The power supply is not able to provide power to all ports with  full loads, so a multiport power-management module is necessary.

Once you determine whether your system requires an external MCU, a good resource to use to develop your own software is the TPS23861 PoE MSP430 MCU reference code. This system software supports:

  • Full compliance to the IEEE802.3at PoE specification.
  • Device detection, classification and power on.
  • Fault reporting (overcurrent, overtemperature, DC disconnect, etc.).
  • Multiport power management.

Multiport power management

Multiport power management methods manage the distribution and prioritization of power to PDs. The IEEE specification does not define power management itself; rather, it is a feature that builds on the specification’s definitions of port and system power.

The goals of multiport power management in a PoE-enabled system are twofold: power as many PoE PDs as possible and limit the power cycling of PoE PDs.

The maximum system power available limits the total number of powerable ports. For example, each PoE PD can draw a maximum of 30W, so a 48-port system could draw as much as 1,440W of total system power. If the maximum system power available is less than 1,440W, multiport power management becomes necessary so that the available system power can be used most efficiently while still meeting those goals.

In the TPS23861 PoE MSP430 MCU reference code, a multiport power-management module is implemented in semi-auto mode reference code.

There are two approaches to implementing the power-management feature:

  • Powering on each port without checking the remaining power and turning off ports if the total system power exceeds the power budget.
  • Before powering on each port, calculating the total system power and checking if the remaining power is enough to power on the port or not.

The TPS23861 PoE MSP430 MCU reference code implements the second approach, which handles the more severe case in which the margin between the software power budget and the actual power capability of the power supply is not enough to turn on one extra port.

The multiport power-management module is invoked after the PSE discovers a valid PoE PD. My initial thought was to calculate how much power was left and compare it to the power that the current port is requesting (estimated from the class result after classification). If the remaining power is enough to turn on the current port, the software initiates the power-on command; otherwise, the system software or host checks whether any lower-priority ports are powered on. (When a PD connects to a PSE port, the PSE generates an interrupt, so the host knows which port has a PD attached.)

If there is a lower-priority port, the host will power off the port in order to have enough power to turn on the current port.

If you think more deeply about it, you will find that there are some corner cases that haven’t been considered in the above logic:

  • What if after turning off all lower-priority ports, the remaining power is still not enough to turn on the current port?
  • Since system power is only calculated when a port is inserted and finishes classification, what if the PD has a sleep mode or doesn’t pull a full load after power on, and at a certain time the load suddenly increases?

Taking these two corner cases into consideration, we optimized the multiport power management algorithm in two ways:

  • Instead of turning off a lower-priority port after recognizing insufficient power, we first check whether the remaining power is sufficient after turning off all ports that have lower priority than the current port. If the power is still not sufficient, we just leave the current port waiting. Otherwise we turn off the lowest-priority port in each loop.
  • To avoid having a load-step change damage the power supply, we added a module running in a timer-triggered interrupt that monitors the total consumed system power. If it exceeds the power budget, it will turn off the lowest-priority port.

Figure 2 shows the power-on decision flow chart and Figure 3 shows the system power monitor flow chart.

Figure 2: Power-on decision flow chart

 

Figure 3: System power monitor flow chart
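To make the power-on decision in Figure 2 concrete, here is a rough C sketch of the logic described above. It is not the actual reference code: the port structure, priority convention and power budget are assumptions for illustration only.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_PORTS      48
#define POWER_BUDGET_W 740U   /* hypothetical system power budget in watts */

typedef struct {
    bool     powered;
    uint8_t  priority;        /* smaller number = higher priority (assumption) */
    uint16_t requested_w;     /* power estimated from the classification result */
} pse_port_t;

static pse_port_t ports[NUM_PORTS];

static uint32_t allocated_power(void)
{
    uint32_t sum = 0;
    for (int i = 0; i < NUM_PORTS; i++)
        if (ports[i].powered)
            sum += ports[i].requested_w;
    return sum;
}

/* Power-on decision after a valid PD has been detected and classified on
   'port'. First check whether shedding every lower-priority port would free
   enough power; if not, leave the port waiting. Otherwise shed the
   lowest-priority powered port one at a time until the new port fits. */
static bool try_power_on(int port)
{
    uint32_t needed   = ports[port].requested_w;
    uint32_t freeable = 0;

    for (int i = 0; i < NUM_PORTS; i++)
        if (ports[i].powered && ports[i].priority > ports[port].priority)
            freeable += ports[i].requested_w;

    if (allocated_power() + needed > POWER_BUDGET_W + freeable)
        return false;                              /* leave the port waiting */

    while (allocated_power() + needed > POWER_BUDGET_W) {
        int victim = -1;
        for (int i = 0; i < NUM_PORTS; i++)        /* lowest-priority powered port */
            if (ports[i].powered && ports[i].priority > ports[port].priority &&
                (victim < 0 || ports[i].priority > ports[victim].priority))
                victim = i;
        if (victim < 0)
            break;
        ports[victim].powered = false;             /* issue the power-off command here */
    }

    ports[port].powered = true;                    /* issue the push-button power-on command */
    return true;
}
```

A separate timer-triggered routine, as in Figure 3, would periodically compare the measured total system power against the budget and turn off the lowest-priority port if the budget is exceeded.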

TI provides TPS23861 PoE MSP430 MCU reference code to help designers ramp up quickly and develop software without having to develop it from scratch. Customers can take this code to an MSP430 MCU LaunchPad™ development kit and run it with the TPS23861EVM. For more information on the TPS23861EVM, see the EVM user guide for instructions and software architecture.

Additional resources

Detecting pesky failing batteries before they cause a problem


As battery-powered systems become more common, quickly identifying a failing battery so that it can be replaced is becoming increasingly important. From an individual battery powering a mobile phone to a bank of batteries used to store renewable energy, a faulty battery can lead to system downtime. At the heart of battery analyzers, which determine the health of a battery, is a precision analog-to-digital converter (ADC).

In this post, I will explore how key specifications of these ADCs, including speed, resolution and latency, enable a more inclusive analysis of a battery’s health. To better understand the importance of the ADC’s performance in a battery analyzer, let’s look at Randles’ model of a lead-acid battery, shown in Figure 1.

Figure 1: Randles’ model of a lead-acid battery

In Figure 1, R1 is the active electrolyte resistance, R2 is the charge-transfer resistance and C is the double-layer capacitance. Together, they create a simplified equivalent circuit of a lead-acid battery. By measuring all three components and comparing them to the expected/known values, it is possible to generate an approximation of the battery’s “health,” which includes its cold cranking amps (CCA), state of charge (SOC) and capacity.

While there are a range of battery test methods, such as a discharge/charge cycle, DC load and AC testing, electrochemical impedance spectroscopy (EIS) is considered to be the most accurate by leading battery health researchers. EIS is preferred over other methods because of its capability to quickly measure CCA, SOC and battery capacity. The process involves drawing a range of small, low-frequency signals from the battery and measuring the corresponding current across a shunt resistor as well as the DC voltage of the battery. These measurements can determine R1, R2 and C, which in turn are compared with expected values to determine a battery’s health.
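As a hedged illustration of the EIS math (not TI code), the sketch below correlates synchronously sampled battery-voltage and shunt-voltage records against the excitation frequency to extract one complex impedance point; the sample rate, record length and shunt value are assumptions. Sweeping the excitation frequency and fitting the resulting points to the Randles model (where the impedance tends toward R1 at high frequency and toward R1 + R2 at low frequency) then yields R1, R2 and C.

```c
#include <complex.h>
#include <math.h>

#define N_SAMPLES 4096            /* hypothetical record length          */
#define FS_HZ     100000.0        /* hypothetical ADC sampling rate      */
#define R_SHUNT   0.010           /* hypothetical 10 mOhm sense resistor */

/* Single-bin DFT at the excitation frequency f0: returns the complex
   amplitude of that tone in the sampled record. */
static double complex single_bin_dft(const double *x, int n, double f0, double fs)
{
    const double pi = 3.14159265358979323846;
    double complex acc = 0.0;
    for (int k = 0; k < n; k++)
        acc += x[k] * cexp(-I * 2.0 * pi * f0 * (double)k / fs);
    return 2.0 * acc / (double)n;
}

/* Complex battery impedance at f0 from simultaneously sampled battery
   voltage and shunt voltage (current = shunt voltage / R_SHUNT). */
double complex battery_impedance(const double *v_batt, const double *v_shunt, double f0)
{
    double complex v = single_bin_dft(v_batt,  N_SAMPLES, f0, FS_HZ);
    double complex i = single_bin_dft(v_shunt, N_SAMPLES, f0, FS_HZ) / R_SHUNT;
    return v / i;                 /* Z(f0) = V(f0) / I(f0) */
}
```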

Depending on the health of the battery as well as the type of battery being tested, the measured current and voltage can range from very small to quite large. As such, the ADC chosen to convert the measurements must be capable of accurately measuring small changes to the input signal, across a wide range of inputs.

In many cases, a successive approximation register (SAR) ADC is the preferred converter due to its dynamic range, speed, resolution and low latency. A high-resolution SAR ADC can precisely measure low-speed signals (DC to several megahertz), which can then be oversampled and digitally filtered by a host processor (e.g. FPGA) to increase system accuracy. Alternatives include delta-sigma ADCs (which are not as well-suited for measuring a range of input frequencies) and pipeline ADCs (which offer higher speed at the cost of resolution). Additionally, the low latency of a SAR ADC shortens the time required to take a measurement without sacrificing measurement accuracy.

In the case of a battery analyzer, it can be difficult to measure current (ranging from low milliamps to high amps) or voltage (ranging from several volts to tens of volts) with high accuracy across the entire range. To do so, a high-resolution SAR ADC with a wide dynamic range (input range) and at least several hundred kilo samples per second (kSPS) takes multiple measurements of each input signal, which the host processor then digitally filters to improve measurement accuracy. Figure 2 shows a simplified diagram of a battery tester system.

Figure 2: Diagram of a battery analyzer measuring current and voltage

In Figure 2, the load is varied to draw a range of AC currents from the battery, resulting in an AC voltage across a small, high-accuracy sense resistor. A high-precision data-acquisition system designed for minimal signal distortion typically amplifies and then measures the voltage created across the resistor. In the case of measuring the DC voltage of the battery, this input is often scaled down by an amplifier to enable an ADC to measure a wide range of voltages. In both cases, the ADC chosen to digitize the signal must have high-enough resolution to enable it to detect small changes to the input signal.

While there are many SAR ADCs that you can select to measure this voltage, the ADS8900B family shown in Table 1 offers several unique advantages, including high resolution, a fast sampling rate, and excellent AC and DC performance. These features are critical for measuring the wide dynamic-range signals encountered in battery health analysis while maintaining accuracy across the input range. 

Table 1: ADS8900B family key specifications

These devices also feature an internal reference buffer that further increases system accuracy and reduces its size, which is especially important for portable battery analyzers. Figure 3 shows an external vs. internal reference buffer in a data-acquisition system.

Figure 3: External vs. internal voltage reference buffer

The reference voltage circuit is critical in precision data-acquisition systems, as it provides a point of reference for the data converter to compare against an input signal. Any error in the reference voltage will result in inaccurate measurements of the input signal. During each conversion cycle, the ADC will draw considerable current from the reference due to the internal switched-capacitor architecture of the converter. A reference buffer minimizes the voltage droop created during conversion. In the case of the ADS8900B family, the internal reference buffer is optimized to drive the ADC’s reference pin, maximizing AC and DC performance and resulting in a higher precision system than one using an external reference buffer.

I hope I’ve explained how the ADS8900B is enabling battery analyzers to more accurately measure battery health, although any system requiring precise measurement of a small and/or dynamic signal can realize the benefits that this device has to offer. Stay tuned for a future post, where I’ll show how you can use a pair of discrete ADCs to simultaneously sample inputs and how new ADCs are reducing the headaches of digital design for high-speed, high-resolution data-acquisition systems. Be sure to sign in and subscribe to Precision Hub to get these posts delivered right to your inbox.

Additional resources

Four-switch buck-boost layout tip No. 1: identifying the critical parts for layout


Layout is critical to the successful operation of a buck-boost converter. A good layout begins with identifying these critical components, as shown in Figure 1:

  • High di/dt loops or hot loops.
  • High dv/dt nodes.
  • Sensitive traces.

Figure 1: Identifying high di/dt loops, high dv/dt nodes and sensitive traces

Figure 1 shows the high di/dt paths in the LM5175 four-switch buck-boost converter. The most dominant high di/dt loops are the input-switching current loop and output-switching current loop. The input loop consists of an input capacitor (CIN), MOSFETs (QH1 and QL1), and a sense resistor (Rs). The output loop consists of an output capacitor (COUT), MOSFETs (QH2 and QL2), and a sense resistor (Rs).

The high dv/dt nodes are those with fast voltage transitions. These nodes are the switch nodes (SW1 and SW2), boot nodes (BOOT1 and BOOT2) and gate-drive traces (HDRV1, LDRV1, HDRV2 and LDRV2), along with their return paths.

The current-sense traces from resistor Rs to the integrated circuit (IC) pins (CS and CSG), the input and output sense traces (VISNS, VOSNS, FB), and the controller components (SLOPE, Rc1, Cc1, Cc2) form the noise-sensitive traces. They are shown in blue in Figure 1.

For good layout performance, minimize the loop areas of the high di/dt paths, minimize the surface areas of the high dv/dt nodes, and keep the noise-sensitive traces away from the noisy (high di/dt and high dv/dt) portions of the circuit. In the other two installments of this series, I’ll look at each of these in detail in the context of the four-switch buck-boost converter. My next topic will include an example for optimizing hot loops.

Additional resources

Students build electric motorcycle & travel the world in 80 days


The gist: twenty-three students from the Eindhoven University of Technology in the Netherlands banded together to become team STORM Eindhoven.

Their mission: spread the importance of sustainable energy by designing and building the world’s first electric motorcycle from the ground up and then riding their bike around the world in 80 days.

Their accomplishment: the students traveled approximately 16,000 miles across three continents using no gas. The motorcycle designed by the students uses a 28.5kWh modular battery system, which utilizes TI’s bq76PL536A-Q1, a stackable 3-6 cell battery monitor and protection device, and can be recharged in 6-7 hours by being plugged into wall outlets. The bike can travel 236 miles on one full charge.

Remco Mulders, a STORM strategy member who is working on his master’s degree in sustainable energy technology at the Eindhoven University of Technology, explained the impact he believes the STORM bike will have on the world:

“By building the first electric motorcycle, we aimed to promote electric driving worldwide. The motorcycle is a means to achieve our ultimate goal of inspiring people and starting a discussion about sustainability to accelerate the transition towards electric mobility.”

The STORM Eindhoven team stopped by the Texas Instruments campus in Dallas, Texas as a part of their tour and shared how they integrated TI technology into their final product. Watch the video above to learn more about STORM Eindhoven and the world’s first electric touring motorcycle.

Check out other innovative student projects:

What are the building blocks of Bluetooth speakers?


Bluetooth® speakers are now fairly common in the marketplace. Many manufacturers offer very basic systems selling for a few dollars to high-end systems selling for hundreds, with varying degrees of audio performance.

The Bluetooth market is highly competitive; hence, you must be very aware of multiple design constraints to develop the right product features at the right cost. These constraints include solution size, component count, cost, efficiency and battery size.

Figure 1 shows the basic blocks inside a Bluetooth speaker:

  • Battery. Because this is a portable application, the battery is a must-have block.
    • Cost and size constraints mean that this battery must be as cost-effective, small and light as possible.
    • More than likely, you will use the least number of battery cells to achieve the longest play times.
  • Power management. This block provides the right power levels to the rest of the Bluetooth circuitry.
    • Given the cost and size constraints from the battery, the voltage coming from one or two battery cells would be quite low. You will likely need a boost converter to increase the available voltage to the rest of the system.
    • The power-management block must include a charger to recharge the battery after portable use.
  • Bluetooth system. This block provides wireless communication to the speaker from a smartphone, tablet or other Bluetooth-enabled products.
    • Bluetooth modules today offer complete solutions for portable audio systems, as they support wireless and wired-in audio natively.
  • Audio. This block contains all of the electronics to drive the speakers in the system.
    • Because the signal coming from the Bluetooth module has both low-voltage and low-current capabilities, an audio amplifier provides the signal with the necessary higher voltage and current capabilities to drive the drivers in the speaker system.
    • An audio digital-to-analog converter (DAC) may be included in this block to convert the digital audio signal from the Bluetooth module to analog, and to provide additional audio processing in the digital domain to further enrich the customer experience in higher-end systems.

Figure 1: Bluetooth speaker block diagram

Audio amplifiers: Class-AB vs. Class-D

You have two choices when selecting the best audio amplifier for your Bluetooth speaker systems: Class-AB or Class-D.

Class-AB audio amplifiers are linear amplifiers that generate no electromagnetic interference (EMI) and do not require many external electronic components. They are highly inefficient, however, and require substantial passive or even active thermal management in the form of heat sinks and fans.

On the other hand, Class-D audio amplifiers are highly efficient switching amplifiers that need very little thermal management; but they do require output inductors that are not exempt from EMI concerns.

Play time, battery and efficiency considerations: system design trade-offs

A portable system poses an interesting design challenge: how to keep costs down while adopting a necessary (and potentially expensive) battery, which may comprise one or many individual cells with different battery chemistries.

As I stated, Class-AB audio amplifiers do not generate EMI and do not require many external electronic components; as such, you would think that they would be ideal for a cost-constrained system like a Bluetooth speaker.

But their very low efficiency means that, in a Bluetooth speaker, most of the battery’s charge will be wasted as heat. This low efficiency comes at a very high cost, as a system that uses a Class-AB amplifier will require additional battery cells to achieve the same play time.

Class-D amplifiers’ high efficiency makes them ideal for portable audio systems; it means that a very low-cell-count battery (even a single cell, given the right chemistry) can power a Bluetooth speaker system. This reduces total system cost significantly, as well as weight and size.
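As a back-of-the-envelope illustration of that trade-off (all numbers here are assumptions, not figures from this post), the sketch below estimates the battery energy, and hence roughly how many cells, a speaker needs for a target play time at a given amplifier efficiency.

```c
#include <stdio.h>

/* Estimate the battery energy (Wh) needed for a target play time, given the
   average output power and the amplifier efficiency. All values below are
   illustrative assumptions, not measured data. */
static double battery_wh_needed(double avg_out_w, double efficiency, double hours)
{
    return (avg_out_w / efficiency) * hours;
}

int main(void)
{
    const double avg_out_w = 2.0;    /* assumed typical listening level      */
    const double hours     = 10.0;   /* target play time                     */
    const double wh_per_cell = 10.0; /* rough energy of one Li-ion cell (assumption) */

    double class_d  = battery_wh_needed(avg_out_w, 0.85, hours); /* ~85% assumed */
    double class_ab = battery_wh_needed(avg_out_w, 0.40, hours); /* ~40% assumed */

    printf("Class-D : %.0f Wh (~%.0f cells)\n", class_d,  class_d  / wh_per_cell);
    printf("Class-AB: %.0f Wh (~%.0f cells)\n", class_ab, class_ab / wh_per_cell);
    return 0;
}
```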

Music and idle-power losses: not every Class-D audio amplifier is created equal

Audio systems are marketed by power and peak-power ratings that may not reflect how they are typically used: customers do not typically listen to music at very high power in a home-audio system, and even less so in a portable application like a Bluetooth speaker.

For Bluetooth speakers, the main specification customers must be aware of is play time, as it lists the typical use of the system.

As I made the case for Class-D amplifiers’ high efficiency in Bluetooth speaker systems, you should be aware that efficiency is not the only factor when maximizing play time. Obvious additional factors include the power consumption of all of the system blocks when the system is active and shut down; other not-so-obvious considerations include idle power losses in the audio amplifier itself.

A typical music waveform, like the one shown in Figure 2, has some amplitude variability. Note the proportion of “loud” music (high amplitude) to “quiet” music (low amplitude). This waveform shows that audio systems playing typical music will remain most of the time in the “quiet” music range; hence the audio amplifier outputs low-power sound most of the time.

Figure 2: Typical music waveform

Previous-generation Class-D amplifier solutions like TI’s popular TPA3110D2 and most of the Class-D amplifiers in the market are not efficiency-optimized for low-output power levels. As you can see in Figure 3, the supply current in last-generation Class-D amplifiers remains constant even when the output power level is low or even zero. This constant current wastes battery charge; it shortens play time and increases battery cell count and system cost.

Figure 3: Last generation Class-D amp

Next-generation Class-D amplifiers like the TPA3128D2 use a novel hybrid modulation mode to minimize idle power losses and maximize power savings. Notice how in Figure 4, the supply current to the amplifier decreases dramatically when the output power level is low; this power savings elongates play time, thus decreasing battery cell count and system cost.

 Figure 4: TPA3128D2 performance

You can easily take full advantage of this new feature and its derived cost savings by migrating from the TPA3110D2 to the TPA3128D2, as both solutions are pin-to-pin compatible for easy redesign.

Have you designed a Bluetooth speaker system? If so, what specifications were most important to you? Log in and leave a comment below.

Additional resources

  • If you’re considering designing a portable speaker, purchase the TPA3128D2 evaluation module.
  • Check out the TI Audio landing page for audio subsystem diagrams, device recommendations and suggested design considerations.
  • Read this application note, which shows test results demonstrating the performance improvements from the new features of the TPA3128D2.
  • Watch this short video where I discuss the benefits of the TPA3128D2.


Extend battery life with an LDO, a voltage supervisor and a FET


Extended battery life is a common design requirement across a variety of applications. Whether it’s for toys or water meters, designers have various techniques at their disposal to improve battery life. In this post, I will illustrate one such technique that involves strategically bypassing a low dropout linear regulator (LDO).

Generating the rail

Using an LDO is a common way to generate a regulated voltage from the battery. This is especially true with a single-cell lithium-ion (Li-ion) battery that outputs 4.2V when fully charged.

Let’s say that you want to generate 3.3V for a microcontroller (MCU) with a supply voltage range of 3V to 3.6V and you chose the TPS706 to generate this rail. Figure 1 illustrates this circuit.

Figure 1: The TPS706 regulating 3.3V from the battery

Despite the simplicity of this circuit, it has some limitations. Chief among these is dropout, which will cause the LDO to cease regulation and possibly put the supply voltage of the MCU outside specification.

The implications of dropout

You can expect the voltage of the Li-ion battery to drop as the battery discharges. Figure 2 shows an example discharge curve.

Figure 2: Li-ion battery voltage falling over time

This can be troubling when you remember that the LDO risks entering dropout as the input voltage approaches the regulated output voltage. At a certain point, the battery voltage will drop so low that the TPS706 will no longer be able to regulate 3.3V. Instead, the output voltage will begin to track the battery voltage at a difference equal to the dropout voltage.

The TPS706 specifies a typical dropout voltage of 295mV when the output current is 50mA and the output voltage is 3.3V. Thus, it is possible that the LDO could enter dropout once the battery voltage drops below 3.6V. Figure 3 offers an example of such behavior.


Figure 3: The TPS706 entering dropout mode

As shown, VOUT begins to droop once VIN falls to around 3.6V. Because the lower end of the MCU supply range is 3V, this is concerning – dropout can cause VOUT to fall below 3V very quickly.

Avoiding dropout

One way to circumvent this issue is to bypass the LDO before or as it enters dropout. Figure 4 illustrates how.

Figure 4: Using a P-channel MOSFET to bypass the LDO

In this circuit, the TPS3780, a dual-channel voltage detector, monitors the battery voltage via SENSE1. If the battery voltage should fall below 3.4V, OUT1 drives the gate of the P-channel MOSFET low. This enables the current (the blue arrow) to flow through the drain-source terminals of the MOSFET rather than through the input-output terminals of the LDO (the red arrow). Since the MOSFET has lower on-resistance than the LDO, the output voltage will more closely track the input voltage.

SENSE2 monitors the output voltage. Once the output voltage falls below 3V (or the bottom of the supply range of the MCU), OUT2 will assert low. This signal can put the MCU in reset mode.
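As a hedged bit of design arithmetic (the SENSE threshold voltage used below is an assumption; take the real value from the TPS3780 datasheet), this sketch shows one way to size the SENSE divider resistors so the two channels trip near 3.4V and 3V.

```c
#include <stdio.h>

/* Pick the bottom resistor of a SENSE-pin divider so that the supervisor
   trips when the monitored rail crosses v_trip. 'vit' is the SENSE-pin
   threshold from the datasheet; the value used in main() is an assumption,
   not a verified TPS3780 specification. */
static double r_bottom_for_trip(double v_trip, double r_top, double vit)
{
    /* v_trip * r_bottom / (r_top + r_bottom) = vit  =>  solve for r_bottom */
    return r_top * vit / (v_trip - vit);
}

int main(void)
{
    const double vit   = 0.40;     /* assumed SENSE threshold, volts          */
    const double r_top = 1.0e6;    /* 1 Mohm top resistor keeps the divider   */
                                   /* current in the low-microamp range       */

    double r1 = r_bottom_for_trip(3.4, r_top, vit);  /* SENSE1: battery at 3.4 V */
    double r2 = r_bottom_for_trip(3.0, r_top, vit);  /* SENSE2: output at 3.0 V  */

    printf("SENSE1 bottom resistor: %.0f ohms\n", r1);
    printf("SENSE2 bottom resistor: %.0f ohms\n", r2);
    return 0;
}
```

With a 1MΩ top resistor, the divider current works out to a few microamps, which is consistent with the resistor-network figure listed in Table 1 later in this post.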

Figure 5 shows the behavior of the circuit without the aid of the bypass MOSFET.

Figure 5: A falling input voltage without the bypass MOSFET

To simulate a battery, the input voltage is ramped down at a rate of 1V/ms. You can see that once the input voltage hits 3.4V, it takes about 100ms for the output to fall to 3V.

Now, let’s examine the behavior of the circuit that uses the bypass MOSFET, as shown in Figure 6.

Figure 6: A falling input voltage with the bypass MOSFET

Once the input voltage falls below 3.4V, the MOSFET turns on. The output voltage is now equal to the input voltage minus the voltage drop across the MOSFET. As a result, it now takes almost 320ms for the output to reach 3V. By enhancing the PMOS device, the output voltage more closely tracks the input voltage than the LDO does in dropout. In other words, the low on-resistance of the external PMOS effectively allows for a longer battery life.

In reality, the battery voltage will fall at a slower slew rate. Therefore, using a bypass circuit can significantly extend operation time.

Current consumption

When operating off the battery, you must also consider the current consumption of the circuit. See Table 1.

Circuit element    | Current (µA)
TPS706             | 1.3 (typ)
TPS3780            | 2.09 (typ)
Resistor networks  | 3 (typ)
Pull-up resistors  | 68 (typ, when the output is low)

Table 1: Current consumption of various circuit elements

Taking this consumption into account is important, as it contributes to the overall discharge of the battery. Fortunately, however, the consumption is very low and the extra circuitry enables sustained use of the battery that outweighs the added current consumption. This is especially true for applications that require higher load currents.

Conclusion

LDOs are an effective, low-quiescent-current way to generate a rail from a battery. However, dropout can cause regulation problems when the battery voltage starts to droop. Using a MOSFET in conjunction with an LDO helps avoid this issue and attain the longest possible battery life.

Additional resources

  • Read the application report for more information on resistor divider current draw and accuracy tradeoffs.

 

Tech Trends: 4 key technology trends driving innovation in 2017


In his Tech Trends column, Chief Technologist Ahmad Bahai explains emerging technology trends that will change our world and the key innovations needed to make them a reality.

 The electrification tide is rising. Electronics are permeating every aspect of our lives. Everything around us is getting more intelligent, more connected – and therefore replete with semiconductor content.

Big data is getting bigger, personal electronics are getting more personable, and smart machines are getting, well, smarter.

In 2017, I see the following technology trends helping to steer the course of innovation. Some of these trends are carryovers from the prior year, but continue to be pervasive and increasingly important in the technology landscape.

1 - High voltage

The growth in high voltage is driven in part by the increasing popularity of electric vehicles (EVs) and hybrid electric vehicles (HEVs). Most major car companies are aggressively developing both EVs and HEVs, and the need for power drivers and charging stations will fuel the growth of high voltage power electronics.

Also, high voltage power will be necessary to power more robust data centers with the development and proliferation of 5G-enabled devices. We’ll talk more about this later, under smart buildings and smart cities. Off-line applications, such as smart and rapid chargers – dependent on high voltage power – are also showing signs of healthy growth.

Traditional power devices continue to experience healthy growth. More advanced power devices, such as gallium nitride (GaN) and silicon carbide (SiC), offer higher power density in a smaller footprint and show promising opportunities, though only once they become more affordable.

2 - Semi-autonomous systems

The automotive industry is embracing the latest electronic features at the pace of the consumer market. Even more telling is that semiconductor content within cars has grown faster than the automotive market since 2010. However, with automotive quality standards demanding higher reliability and longevity, innovation for next-generation cars has prompted both new technical challenges and market promise. New, complex advanced driver assistance systems (ADAS) will deploy multiple cameras, radar, LIDAR and ultrasound sensors for autonomous driving. Additionally, the EV/HEV market, which has driven innovation in power electronics, shows promising growth but still represents a small percentage of the total market.

Another semi-autonomous system that will see growth this year is robots. Traditionally, robots have been used in industrial applications for some routine and precise applications. Robots are now finding roles in enterprise, education, the consumer market and in assembly lines working alongside people. Advanced control techniques, in conjunction with high performance motor drive and sensors, will be extensively utilized in modern robots.

Drones will also see expansion of professional applications. Their use in security, entertainment and survey services will grow this year. We will see advanced sensor systems and flight time improvements for many critical applications.

3 - Smart buildings and smart cities

Smart buildings and cities are adopting industrial internet technology at a faster pace than the rest of industry 4.0. More commercial and industrial buildings are adopting sensor networks for security, utility monitoring, water and air quality, and more.

Urbanization is accelerating globally. With higher concentrations of people in big cities, we cannot underestimate how critical it is to ensure energy efficiency, improved transportation quality, and better water and utility management. Cities are also deploying intelligent traffic monitoring and control, security systems, and intelligent utility monitoring at a faster pace.

As big data continues to grow, the demand for data – both wired and wireless – is growing exponentially. Video will contribute to about 60 percent of data traffic on networks, and mobile data is doubling every 15 months (Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2016-2021). The need for processing and moving massive amounts of data around buildings, factory floors, and inside cars or public transportation has pushed many aspects of the technology to its limits. The explosive growth of data in wireless networks has expedited the development of 5G radio infrastructure.

Data centers are also rapidly expanding. The implications of this trend manifest themselves in ultra-high-speed interfaces up to 100Gbps, advanced power management and integrated radio units. Data analytics, in many cases real-time, is critical in many applications and has prompted new processor architectures. Machine-learning accelerators for embedded processors and GPUs with special hardware features for artificial intelligence are being used in many new applications.

4 - Personal electronics

The fast pace of personal electronics growth drives the development of innovations with increasing semiconductor content and differentiating features. Many of the personal device innovations will gradually enhance and emerge in automotive and industrial markets as well.

Devices around us are more powerful and omnipresent than ever before, so our interaction with machines is more frequent and is evolving. While touch has been the dominant human-machine interface, more natural approaches that leverage speech, vision and gestures are emerging. Voice interfaces are increasingly reliable and affordable for many consumer and automotive applications. Virtual and augmented reality started in entertainment and gaming – with dizzying effect – but have been evolving for many professional applications, such as flight simulation. Flexible electronics and displays are finding uses in mobile and medical applications.

Keep an eye out for my columns this year – each diving deeper into these trends. We’ll discuss the technical challenges within each and how semiconductors are driving the technologies forward.

Breaking it down: Lesly Zamora shatters stereotypes with help from High-Tech High Heels, all-girls high school


Seventeen-year-old Lesly Zamora can take apart and reassemble a computer tower in less than seven minutes.

It’s a skill she discovered after joining the Tech Girls Club at her high school last year.

“I like to destroy things,” she said with a smile, quickly adding, “but I can also fix them.”

Instilling a passion for technology in girls like Lesly is exactly what Dallas-based non-profit High-Tech High Heels aims to do. In partnership with the Communities Foundation of Texas, it provides funding to non-profits and offers grants designed to close the gender gap in science, technology, engineering and math (STEM) by sparking and cultivating a love for these subjects in girls.

The Young Women’s Preparatory Network, which oversees Lesly’s all-girl public high school in Dallas, is a beneficiary of High-Tech High Heels. Lesly attends Irma Lerma Rangel Young Women’s Leadership School, which has an intense focus on leadership and STEM.


“As a student of an all-girl school, I have been taught that women can make a difference in the world, and that is what I plan to do within the field of technology,” Lesly said. “Taking things apart is what I love to do. Not everything without a reason, but rather to know how things are put together and function as a whole.”

Lesly’s love of math stems from the fact that, “no matter what the situation, there will always be one correct answer, but there are many ways to solve the problem to find that answer,” she said.

“The interest that I have for science and math has grown immensely over the years,” she said. “Science amazes me because it explains how things are created.”

High-Tech High Heels, founded in 2001 by 30 TI women who wanted to make a difference in STEM, celebrated its fifteenth year in 2016. Building the pipeline of women in STEM continues to be as critical as it was when the organization was founded, said Heidi Means, its co-president and manager of our wafer manufacturing site in Sherman. Currently, women represent only 12 percent of the engineering workforce, and fewer than one in five engineering graduates is a woman, according to the white paper Women in STEM: Realizing the Potential.

“We believe the world will be a better place when there is a diverse, qualified workforce with more opportunities for women in STEM,” Heidi said. “Programs that High-Tech High Heels funds help young women to learn about STEM careers and become confident that they can excel in these fields. The gap for the STEM pipeline starts early; we support programs at middle school to high school levels to prepare young women to pursue STEM fields of study in college.”

“We’re all about closing the gender gap in STEM education and STEM fields,” said Ellen Barker, a board member of High-Tech High Heels and chief information officer at TI. Sixty percent of High-Tech High Heels’ board members are women who work at TI.

At TI, we believe in helping spark a love for STEM in today’s youth – and fanning the flame to help shape tomorrow’s innovators – like Lesly. One example of how we do this is the TI Foundation’s support of High-Tech High Heels and other organizations working to increase STEM learning among groups traditionally underrepresented in STEM fields, including women, Hispanics and African Americans.

Lynn McBee, CEO of Young Women’s Preparatory Network, stressed the importance of preparing more young women like Lesly to fill the STEM pipeline.

“That’s what Young Women’s Preparatory Network does – we get our girls in the game and arm them with what they need to persist, thrive and advocate for themselves in all aspects of life, especially their careers,” she said.

“Our girls are largely from economically disadvantaged families, and many will be the first in their families to attend college. We achieve 100 percent graduation from high school and 100 percent acceptance to college, with millions of dollars in scholarships.”

Young Women’s Preparatory Network works with school districts to operate its college preparatory schools. Its class of 2016 had 291 graduates who received a total of $41.9 million in academic and merit scholarships, Lynn said.

“The notion that women ‘don’t or can’t do science’ is totally inaccurate and to me, quite ridiculous,” said Lynn, who worked as a biochemist for 24 years before turning her attention full-time to preparing young women for careers in STEM and leadership.

Taking initiative to learn new things is a key focus at Irma Rangel, Lesly said. Her responsibilities in the Tech Girls Club include diagnosing and finding solutions for hardware and software problems.

Lesly interned at a computer engineering company last summer, and that experience made her “100 percent sure” that she will major in a technology field in college, she said. Her top three choices of universities are Texas Women's University, University of North Texas, and University of Texas at Dallas.

High-Tech High Heels supported Young Women’s Preparatory Network last year with $26,500 to help start an all-girl’s robotics club at Lesly’s school and to enable middle schoolers to attend a summer science camp at UTD.

Four-switch buck-boost layout tip No. 2: optimizing hot loops in the power stage


Once you have identified the critical parts of your DC/DC converter design, your next task is to minimize any sources of noise and unwanted parasitics. Minimizing hot loops is a major first step in this direction. Figure 1 shows the hot loops or high di/dt loops in a four-switch buck-boost converter. Figure 1 also highlights the hot loops formed by the gate drives and their return paths, in addition to the input and output switching loops (Nos. 1 to 6).

Figure 1: Hot loops in a four-switch buck-boost converter

Since the power-stage hot loops (in red) contain the largest switching currents, optimize these first. The input loop (No. 1) carries the switching current during buck cycles. The output loop (No. 2) carries the switching current during boost cycles. In my experience, I’ve achieved the lowest loop area and the most compact design by optimizing both loops with a symmetric layout.

Figures 2 and 3 are examples of good power-stage layouts. The layout example shown in Figure 2a provides a better thermal path for the heat generated in the sense resistors and the FETs to spread. Consider following the layout example shown in Figure 2b to create higher-density designs, as it packs the power-stage components closer together.

Figure 2: A symmetrical power-stage layout minimizes both the input and output power loops in a four-switch buck-boost converter, (a) medium density design, (b) high density design

There is a trade-off in size, thermal robustness and noise performance of the power stage. Smaller di/dt loops and smaller dv/dt nodes have lower parasitics and also radiate less. They are also more robust in the presence of external noise, as smaller loop areas couple less noise. Smaller designs are more constricted thermally, however, because there isn’t much copper directly connected to the heat-dissipating elements, which include MOSFETs, sense resistors and the inductor. For relatively higher-power designs, you may need extra copper area at the switch nodes to limit the temperature.

Figure 3 shows a design capable of handling higher currents and allows for the paralleling of FETs. The heat is distributed between the FETs, which can then spread to adjacent copper planes and thus avoid excessive temperature increases or the formation of hot spots.

Figure 3: An example layout with parallel FETs and larger copper areas for higher-power designs

In the next installment of this series, I will discuss how to optimally route sense connections.

From the Experts: Perform cyclic redundancy checking using linker-generated CRC tables


To verify code and/or data integrity, TI’s microcontroller (MCU) Code-Generation Tools (CGTs), including the C2000™ MCU CGT, the MSP430™ MCU CGT and the TI ARM CGT, support cyclic redundancy checking (CRC). This method can greatly enhance the performance of your embedded design and is easy to use, once you understand the basics of how CRC works. The focus of this blog post will be CRC with the C2000 MCU CGT; for more detailed information about CRC with the MSP430 MCU CGT and ARM®-based CGTs from TI, see the Additional Resources section below.

Designers use CRC to detect errors that might occur during data transmission. For a given section of code or data in an output file, the originator of the data – also known as the sender – applies a specific CRC algorithm to the content in that section to produce a CRC value, which is stored at a separate location in the output file. The consumer of the data – known as the receiver – knows what algorithm was applied to the section and can apply that same CRC algorithm to the code or data transmitted. If the CRC value computed by the receiver does not match the one computed by the sender, then the receiver may conclude that some error occurred during transmission and take appropriate action to address the problem, such as requesting that the sender retransmit the data.

You can check out a simple demonstration running on Code Composer Studio™ (CCS) software of how to perform CRC at run time using linker-generated CRC tables in the video, “Performing CRC with linker-generated CRC tables.”


Linker-generated CRC tables

The C2000 MCU linker supports an extension to the linker command file (LCF) syntax, the crc_table() operator, which generates a CRC value for a given initialized section of code or data. If a crc_table() operator is attached to the specification of a region (an output section, a GROUP, a GROUP member, a UNION or a UNION member), then the linker will compute a CRC value for that region and store it in target memory so that it is accessible at boot or run time.

Consider as an example a section of data that gets written to flash memory. Within the LCF, you would specify a crc_table() operator to be associated with the data section like this:
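The LCF excerpt itself is missing from this capture; a minimal sketch of what such an entry might look like follows (the section name and memory range are hypothetical, and the exact syntax should be checked against the C2000 CGT linker documentation):

```
SECTIONS
{
    /* Hypothetical initialized data section placed in flash. The crc_table()
       operator asks the linker to compute a CRC over this section and emit
       the result in a table object named crc_table_for_2b. */
    .data_2b : {} > FLASH, crc_table(crc_table_for_2b)
}
```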

In the above snippet of the LCF, the crc_table() operator instructs the linker to generate a CRC table data object called crc_table_for_2b. Using the C2000 MCU linker, this would result in a data object that looks like this:
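The linker-generated object is not reproduced here either. Conceptually it resembles the C-style initializer below; the field names are paraphrased from crc_tbl.h and the address, size and CRC values are placeholders, so consult the generated map file and crc_tbl.h for the real layout.

```c
/* Illustrative only -- field names paraphrased from crc_tbl.h, values invented. */
CRC_TABLE crc_table_for_2b = {
    .rec_size = sizeof(CRC_RECORD),
    .num_recs = 1,                       /* one record: one region checked      */
    .recs     = {
        { .crc_alg_ID = 0,               /* 0 = CRC32_PRIME, the default        */
          .addr       = 0x00080000,      /* start address of the checked region */
          .size       = 0x00000400,      /* size of the region                  */
          .crc_value  = 0x1234ABCD }     /* CRC computed by the linker          */
    }
};
```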

The crc_tbl.h file located in the Include subdirectory where your C2000 MCU CGT is installed provides the formal specification of the CRC_TABLE and CRC_RECORD data structures. CRC_TABLE is a header for a vector of one or more CRC_RECORDs. Besides the location and size of the region of memory to check, each CRC_RECORD also identifies the CRC algorithm applied to that memory region to arrive at the “crc_value.” By default, when a user specifies a crc_table() operator with a single argument, the C2000 MCU linker will use what it refers to as the CRC32_PRIME algorithm. This corresponds to the CRC32 polynomial 1 (= 0x04C11DB7) CRC algorithm that C28x plus Viterbi, complex math and CRC unit (C28x+VCU) devices support in hardware. However, the linker also allows users to select a different CRC algorithm as a second argument to the crc_table() operator.

The complete syntax for the crc_table() operator is:
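Paraphrasing the tools documentation (consult the assembly language tools user’s guide for the authoritative grammar), the operator takes a table name and an optional algorithm selection:

    crc_table(user_specified_table_name[, algorithm=crc_algorithm_id])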

The C2000 MCU linker recognizes the values in Table 1 as valid crc_algorithm_id arguments. You can specify either the linker name or linker ID.

Linker name      | Linker ID    | Equivalent CRC algorithm on C28x plus VCU hardware
CRC32_PRIME      | 0 (default)  | CRC32 polynomial 1 (= 0x04C11DB7)
CRC16_802_15_4   | 1            | CRC16 polynomial 2 (= 0x00001021)
CRC16_ALT        | 2            | CRC16 polynomial 1 (= 0x00008005)
CRC8_PRIME       | 3            | CRC8 polynomial (= 0x00000007)
CRC32_C          | 11           | CRC32 polynomial 2 (= 0x1EDC6F41)
CRC24_FLEXRAY    | 12           | CRC24 polynomial (= 0x005D6DCB)

Table 1: Valid crc_algorithm_id values

As you will see later, it is critical that the CRC algorithm selected at link time to compute a region’s CRC value is the same algorithm the application applies to that region at run time.

Generating a single CRC table for multiple regions

Before discussing how to do an actual CRC on a region at run time, let’s consider another LCF example in which a single CRC table can check multiple regions. There are two ways to create a single CRC table that applies to multiple regions. In crc_ex1.cmd, the crc_table() operator is applied to multiple output sections using the same user_specified_table_name argument for each section:
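A sketch of what such a command file might contain is shown below; the section names, memory ranges and the per-section algorithm choice are illustrative:

    SECTIONS
    {
        .task1_code: {} > FLASH1, crc_table(flash1_crc_table)
        .task2_code: {} > FLASH2, crc_table(flash1_crc_table, algorithm=CRC16_ALT)
        .task3_data: {} > FLASH3, crc_table(flash1_crc_table)
    }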



In this case, the linker generates a single CRC_TABLE data object, flash1_crc_table, that contains three CRC_RECORDs, one for each of the output sections to which a crc_table() operator was attached.

By collecting all three CRC_RECORDs into the same table, your application can perform CRC on all three output sections in a single pass by passing the address of the CRC_TABLE, flash1_crc_table, to the CRC routine.

Using separate crc_table() operators for each output section has a couple of benefits:

  • You can indicate a separate CRC algorithm for each output section.
  • The memory placement of each output section is independent from the other output sections that will be checked via the CRC_TABLE.

This snippet of crc_ex2.cmd shows how the application of a crc_table() operator to a GROUP specification creates a single CRC_TABLE, flash1_crc_table, for the three output section members of the GROUP:
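A sketch of the relevant portion of such a command file (again with illustrative section and memory names):

    SECTIONS
    {
        GROUP
        {
            .task1_code: {}
            .task2_code: {}
            .task3_data: {}
        } > FLASH1, crc_table(flash1_crc_table)
    }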

Like the previous LCF example, the linker will create a single CRC_TABLE, flash1_crc_table, containing a vector of three CRC_RECORDs, one for each member of the GROUP.

The linker must create a separate CRC_RECORD for each member of the GROUP because there may be gaps containing unknown values between the members. As in the previous LCF example, because the CRC table includes all three output sections, all three can be checked in a single pass.

While the crc_ex2.cmd example only requires a single crc_table() operator specification, there are a couple of caveats associated with applying a crc_table() operator to a GROUP:

  • You can only indicate one CRC algorithm, which will be used for CRC on each output section represented in the CRC table.
  • The memory placement of the output section members of the GROUP is ordered and contiguous according to the placement instructions attached to the GROUP.

How to perform CRC at run time

Now that you know how to get the linker to generate a CRC_TABLE data object for one or more regions on which you want to perform CRC at run time, you will need to include a software routine in your application that can read and process a CRC_TABLE to perform the actual CRC. Below is an example of a function, my_check_CRC(), that can read and process a linker-generated CRC_TABLE data object:
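The sketch below shows one way such a routine could look. It assumes the CRC_TABLE/CRC_RECORD field names sketched earlier, assumes the calc_crcXXXX() routines take a start address and a size (the actual prototypes come from vcu_crc_functions.h), and uses the algorithm-ID macros that crc_tbl.h defines; treat all of these details as assumptions to be checked against the headers shipped with your CGT.

    #include "crc_tbl.h"            /* CRC_TABLE, CRC_RECORD and the algorithm-ID macros */
    #include "vcu_crc_functions.h"  /* assumed to declare the calc_crcXXXX() routines    */

    /* Walk a linker-generated CRC table and recompute the CRC of every region it
       describes. Returns 1 if every record matches, 0 on the first mismatch or on
       an algorithm this application does not implement. */
    unsigned int my_check_CRC(CRC_TABLE *tp)
    {
        unsigned int i;

        for (i = 0; i < tp->num_recs; i++)
        {
            CRC_RECORD    crec = tp->recs[i];
            unsigned long my_crc;

            /* Use the same algorithm the linker used for this region. */
            switch (crec.crc_alg_ID)
            {
                case CRC32_PRIME:
                    my_crc = calc_crc32p1(crec.addr, crec.size);
                    break;
                case CRC16_802_15_4:
                    my_crc = calc_crc16p2(crec.addr, crec.size);
                    break;
                default:
                    return 0;   /* algorithm not supported by this application */
            }

            if (my_crc != crec.crc_value)
                return 0;       /* region failed its integrity check */
        }

        return 1;               /* all regions verified */
    }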

In this example function, the crc_tbl.h header file provides declarations for the CRC_TABLE and CRC_RECORD data structures, along with definitions of the available CRC algorithm IDs. Assume that the declarations of the calc_crcXXXX() functions are available in vcu_crc_functions.h. The CRC algorithm ID from each CRC_RECORD tells the run-time application which CRC algorithm function to call. As I mentioned earlier, the C2000 MCU linker allows you to specify one of six different CRC algorithms to use in calculating the CRC value. For example, if a CRC_RECORD identifies the CRC32_PRIME algorithm for a given region of memory, then the calc_crc32p1() function is called. The application will need to supply the calc_crc32p1() function definition. If no hardware support for CRC is available, then the algorithm can be computed in a C function. However, if the application is running on a C28x+VCU device, then the calc_crc32p1() function can use the special CRC instructions available on the device to calculate the CRC value so that it matches the crc_value stored in the CRC_RECORD.

The core of such a calc_crc32p1() function is a repeat block (RPTB) loop that spins through the memory region to be checked and uses the VCRC32L_1 and VCRC32H_1 instructions to calculate the CRC value for the region using CRC32 polynomial 1.

You can find full examples using the CRC hardware support on C28x+VCU devices in the controlSUITE™ software package, accessible from the controlSUITE software download page on TI.com. If you have CCS installed, you can also access the controlSUITE software package from the CCS App Center. The controlSUITE software package contains example CCS projects that demonstrate the use of linker-generated CRC tables. Assembly language source is provided for functions that use CRC hardware support on C28x+VCU devices to calculate CRC values (look in the directory where you have controlSUITE software installed at controlSUITE/libs/dsp/VCU).

Additional resources

To learn more about linker-generated CRC tables in the C2000 MCU CGTs, see Section 8.9 and the appendix of the “TMS320C28x Assembly Language Tools User’s Guide.”

How to efficiently move information through your factory

$
0
0

In today’s competitive market environment, two paths to success are improved production efficiency and improved supply-chain efficiency. These improvements range from increasing machine reliability to streamlining ease of use and providing more accurate operational data. All of them depend on how information moves through a factory’s industrial automation systems.

What is happening in my factory?

To keep machines running smoothly, operators must quickly identify when errors might happen on a production line. For example, a soda bottling machine may need to alert operators to replenish soda levels based on the remaining volume. Once a system senses this deficiency, status updates are then displayed through a human-machine interface (HMI) on handheld devices, in large control rooms or on the machine itself. HMIs can range from simple segmented displays to high-resolution LCD displays.

Display systems typically include an application processor that runs an operating system like Linux or VxWorks®, which enables access to frameworks like Qt for designing graphical user interfaces (GUIs). Some processors even include graphics accelerators that handle sophisticated graphics processing while easing the load on the main cores. A wide range of 2D and 3D display capabilities is available on TI’s ARM-based Sitara™ processors.

Now that data can be displayed through an HMI, the HMI needs to gather that data from the rest of the factory.

How does data travel through the network?

A network of sensors captures product information as items move down a production line; these sensors connect to programmable logic controllers (PLCs) through low-latency, real-time networks. These networks use specialized industrial Ethernet communication protocols to send information within milliseconds, ensuring that a PLC transmits actions to connected devices faster than any human could. Additionally, industrial Ethernet protocols often use redundancy measures to ensure that information is delivered even in the event of a network disruption.

When using Sitara processors, these industrial communications can run on the programmable real-time unit and industrial communication subsystem (PRU-ICSS). The extra PRU cores allow multiple protocols to be implemented on the same device instead of requiring a custom ASIC for each protocol, saving time and resources. A reprogrammable device also means that factories do not need to change their entire network in order to connect different elements together.

These Ethernet networks can cover many factory needs, but in some cases wireless connectivity provides a better solution.

How can I use Wi-Fi to increase connectivity?

Factories often have large machinery with displays or access points that are difficult for a human operator to reach, such as controls located high above the ground or displays mounted in cramped spaces. To get information from those hard-to-reach places, Wi-Fi sensors can relay data to a more convenient location (e.g., a control room). By incorporating cloud services, factory workers can access not only this real-time information through a smartphone application, but also historical data and statistics for a particular piece of machinery.

TI’s WiLink™ 8 modules allow engineers to easily add Wi-Fi® and Bluetooth®/Bluetooth low energy connectivity to many devices, pairing well with Sitara processors for industrial automation. Features include:

Get started now with the TI WiLink™ 8 and Sitara AM570x processor in 3 easy steps.

To learn more about Sitara AM57x processors, please visit the links below.

From zero to hero

$
0
0

It’s not easy for any device to be a hero, but the AM570x processor is just that: a hero. With a cost-optimized platform that reduces board space with a 17-mm-by-17-mm package, combined ARM® Cortex®-A15 and ARM Cortex-M4 cores, a C66x DSP, 3D and 2D acceleration cores, and an integrated Programmable Real-Time Unit and Industrial Communication Subsystem (PRU-ICSS) capable of running different industrial Ethernet protocols simultaneously, there’s no question that the AM570x processor is a strong choice. But before we start with a hero, let’s start from the beginning.

TI’s AM57x processors revolutionized the processing experience. The Sitara™ AM57x processor family integrates several different processing cores and provides the right mix of peripherals. This blend of components delivers the highest processing power in the Sitara portfolio, along with high-resolution video encoding and decoding features. High performance, a key feature of the family, continues in newer devices through the addition of the cost-optimized AM570x processors.

As members of the AM57x processor family, AM570x processors add interfaces like USB and PCIe to provide high-speed connectivity while remaining cost-optimized. Interfaces like USB make it easy to connect peripherals such as a mouse, keyboard or USB flash drive. More information about the specific interfaces can be found in the data sheet.

Compared to the rest of the AM57x processor family, the power solution for AM570x processors is simpler, which can further reduce overall system cost. For additional flexibility, Sitara processors like the AM57x family can be easily paired with WiLink™ Wi-Fi modules.

The combination of cores and peripherals makes AM570x processors ideal for several applications. For example, they can serve in a programmable logic controller (PLC) by taking advantage of the PRU-ICSS to run industrial Ethernet protocols. Their mix of peripherals also suits human-machine interface (HMI) products: graphics processing can be offloaded to the integrated 2D and 3D accelerators, and the output options allow connections to anything from a simple LCD screen to a monitor with HDMI. Anywhere that graphics, performance and cost are concerns, the AM570x processor can fill those needs.

To develop software, the Processor SDK provides options for Linux, RT-Linux or TI-RTOS on AM570x processors. It is software-compatible with other Sitara processors, making it easy to migrate to higher-performance devices, or to even more cost-sensitive devices, in future design cycles.

AM570x processors provide the right balance of cost-optimization and processing performance to design smarter solutions today.

To learn more about Sitara AM57x processors, please visit the links below.


Getting a grip on handheld devices is easier with capacitive touch sensing

$
0
0

Get a grip – sometimes it is easier to get one than to detect it, unless your design has a microcontroller (MCU) with a specialized analog front-end that features capacitive touch sensing.

Grip detection is a real benefit in small handheld device applications, shown in Figure 1, such as remote-control units, test and measurement instruments (multimeters, probes), portable battery-operated power tools, video game accessories, virtual reality devices, and beauty and health products (shavers, hair dryers). Many of these are battery-operated devices where low power consumption is a critical factor. The last thing the end user wants is to forget to turn off the device and find out later that the battery has been depleted. One benefit of grip detection is that it can lengthen battery life by automatically powering down all or much of the system when the user isn’t holding it.

 

Figure 1: Grip detection in different applications

Of course, no one wants a remote-control unit (or any device) that takes a while to wake up before it’s useful. MCUs like TI’s MSP430FR25x/26x devices with CapTIvate™ touch technology can not only implement a variety of grip-detection schemes, but also have features for proximity sensing, which can wake up the system even before it’s gripped. As part of the CapTIvate touch peripheral, these MSP430™ MCUs feature a finite state machine capable of monitoring as many as four touch sensors. While the system’s central processing unit (CPU) is in deep sleep mode, the sensors controlled by the state machine can detect a finger or hand 30cm away and wake up the CPU so that it can process the upcoming event. Each of the four sensors consumes as little as 0.9μA. This sleep mode also eliminates the need for the CPU to wake up periodically and scan the sensors, as is typical in most touch-based subsystems. The traditional CPU scanning process can drive power consumption up to as much as 20μA per sensor.

In addition to low power consumption, handheld device designers are always concerned with form factor: typically, the smaller the better. Because CapTIvate technology is a high-performance, high-resolution touch front-end, you can deploy smaller devices and smaller sensors to save space and still get the job done. But use cases in power tools and beauty products raise another issue: electromagnetic interference (EMI). EMI noise can trigger false detects for capacitive touch and must be eliminated.  CapTIvate technology has several built-in hardware and software features to improve robustness in the presence of EMI. Systems based on CapTIvate touch technology can pass EMI robustness standards such as International Electrotechnical Commission (IEC) 61000-4-6 and IEC 61000-4-2.


Some devices with grip detection will likely require a significant number of sensors arranged in an array of some sort. MSP430FR25x/26x MCUs with CapTIvate technology have the advantage of supporting both self- and mutual-capacitance sensors in the same system and at the same time. Mutual-capacitance sensors are better suited for applications that require a large number of tightly spaced sensors and exposure to moisture. In power tools, for example, moisture resistance can be important since the end user may be sweating.

Additional resources

TI grant aims to narrow the income gap in Silicon Valley

$
0
0

Buried under Silicon Valley’s prosperity lies a secret: poverty.

Pockets of poverty are scattered across the high-tech capital as many residents struggle to live in an expensive region. Persistent poverty can affect many generations of families and take a toll on the fabric of a community and business growth over the long term.

SparkPoint, a one-stop resource center, aims to help low-income people in Santa Clara County gain stable financial footing, thanks in large part to a $1 million grant from the Texas Instruments Foundation. The grant came from the TI Community Fund at Silicon Valley Community Foundation.

TIers from our Santa Clara office recently joined about 60 other people in San Jose to celebrate the ribbon-cutting ceremony for the center.

Located on the San Jose City College campus, SparkPoint San Jose is a partnership between United Way Bay Area and San Jose-Evergreen Community College District’s Workforce Institute.

“We know that the best way out of poverty is a good-paying job,” said Dave Heacock, TI senior vice president, during the ribbon cutting.

And “the best way to a good job is an education,” Dave said. “At TI, we believe strong companies must help build strong communities, and those strong communities in turn strengthen our companies.”

While Santa Clara County boasts the highest median household income ($96,310 in 2015) of the nine Bay Area counties, about the same percentage of residents earn less than $35,000 a year as earn more than $200,000 a year.

When SparkPoint San Jose opens this spring, it will offer one-on-one career coaching, financial education, tax help and other services for free to any community college student or qualifying Santa Clara County lower-income resident to address the income gap.

Since United Way Bay Area opened the first SparkPoint in Oakland in 2009, the program has helped more than 24,000 Bay Area residents become more financially stable. Additionally, 83 percent of its clients have made progress toward their financial goals and 36 percent achieved a prosperity milestone (100 percent self-sufficient income, three months of living expenses saved, a credit score of 700+ or no revolving debt).

The success of the first SparkPoint center in the South Bay is important to TI because of its proximity to our Santa Clara and Sunnyvale offices.

“One of the criteria for our grants is to have proven programs,” said Andy Smith, executive director of the TI Foundation. “SparkPoint fits perfectly with our support of United Way across the country. The results showed a really great program that’s helped clients achieve financial self security.”

Victor Barrios, a 26-year-old student from South San Francisco, calls his experience with SparkPoint a “transformation.”

After his father left the family, Victor – then a teenager – felt lost, he said. He partied a lot, joined a gang and was in and out of juvenile hall and jail. He “wanted to make a change,” so at 21, he enrolled at the Bay Area’s Skyline College, but he struggled financially. A professor encouraged him to check out Skyline’s SparkPoint center, which helped him gain access to food stamps and save $200 a month.

Barrios completed two years at Skyline, and last month he enrolled at San Jose State University to study electrical engineering. He’s also taking advantage of SparkPoint’s food pantry and financial counseling.

A few years ago, student and SparkPoint client Victor Barrios (left) spoke about his experience at a retreat in San Francisco for SparkPoint counselors.

“It helped me stay in school because I felt school was making everything harder,” Barrios said. “For me, it was monumental because I could have been lured back into the neighborhood and doing the wrong things.”

Debbie Budd, chancellor of San José-Evergreen Community College District, says “SparkPoint services mitigate economic disparities to improve educational access and outcomes.”

Despite being one of the nation’s wealthiest regions, nearly a quarter of Silicon Valley’s residents earn annual salaries at or below 200 percent of the national poverty level ($23,760 for one person or $48,600 for a family of four). Many residents struggle to live in an area with rents, home prices and the cost of living substantially higher than other parts of the country.

Santa Clara County’s poverty rate was 9.5 percent in 2015, but the rate was more than double that for people without a high school diploma.

“Less than 50 percent of students who enroll in community college in the state of California end up getting a degree,” Dave said. “These students are often working multiple jobs, taking care of their families, and many are deeply embedded in a cycle of generational poverty, and the odds are not in their favor.”

The San Jose center is the fourth SparkPoint located at a community college to address low graduation rates, said Randy Hyde, senior vice president of marketing for United Way Bay Area. If a community college student uses at least three SparkPoint services, their persistence rate (continued enrollment) is 97 percent vs. the average statewide rate of 50 percent, he said.

United Way Bay Area based SparkPoint on an Annie E. Casey Foundation model that showed offering multiple services under one roof led to better results.

TI and TIers have partnered with United Way Silicon Valley for many years, contributing over $400,000 a year through workplace campaigns and TI Foundation grants. When United Way of the Bay Area merged with United Way Silicon Valley in July, it was able to expand SparkPoint. “TI has demonstrated our commitment to the Silicon Valley area since we acquired National Semiconductor in 2011,” Andy said. “This grant is another sign of that.”

How to use temperature sensors to achieve linear thermal foldback in automotive LED lighting

$
0
0

Temperature is a big concern in automotive light-emitting diode (LED) headlight and taillight applications. LEDs can be exposed to high ambient temperatures while being driven at large currents to produce the necessary brightness. Combined with the large operating current, these high ambient temperatures increase the LED junction temperature, which is typically rated only up to 150°C. High junction temperatures – especially those violating data sheet specifications – risk damaging the LEDs and shortening their lifetimes. So what can you do to decrease the junction temperature of the LEDs?

Equation 1 expresses the electrical power dissipated in each LED as:

P_LED = V_F × I_LED                                (1)

where V_F is the forward voltage of the LED and I_LED is the current through the LED. Equation 2 is the general formula for the junction temperature:

T_J = T_A + R_θJA × P_LED                          (2)

where T_J is the junction temperature, T_A is the ambient temperature and R_θJA is the LED junction-to-ambient thermal resistance measured in degrees Celsius per watt.

Substituting Equation 1 into Equation 2 results in Equation 3:

T_J = T_A + R_θJA × V_F × I_LED                    (3)

The LED forward voltage and thermal resistance are both characteristics of the LED package. Thus, at a given ambient temperature, the LED current is the only control parameter available to ensure that the LED junction temperature does not violate the maximum specification.
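To put rough numbers on Equation 3 (illustrative values only, not taken from the reference design): with R_θJA = 30°C/W, V_F = 3.2 V and I_LED = 1 A, the junction runs about 96°C above ambient, so an 85°C ambient pushes T_J to roughly 181°C. Folding the current back to 0.6 A cuts the rise to about 58°C and brings T_J near 143°C, back under a 150°C rating.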

In order to change the current through the LEDs, you need to feed back the ambient temperature measurement to the LEDs’ driving circuit. Designers often use negative temperature coefficient (NTC) thermistors to measure the ambient temperature. Specifically, these NTC thermistors change their resistance with the ambient temperature, so designers measure the voltage across the NTC thermistor and convert that measurement to a temperature.

However, a large problem with NTC thermistors is that their resistance decreases nonlinearly with increasing temperature, and as the resistance drops their current consumption rises rapidly across temperature. Since the goal is to reduce the LED current linearly with temperature, using a nonlinear device requires external circuitry or a microcontroller to linearize the NTC thermistor voltage and regulate the LED current appropriately.

Using an analog output temperature sensor integrated circuit (IC) such as TI’s LMT87-Q1, which generates a voltage that tracks with ambient temperature, simplifies the total temperature measurement circuitry and enables you to implement a linear thermal foldback curve. Instead of adding external circuitry or a microcontroller to linearize the NTC thermistor output, the output of the temperature sensors feeds back directly into the device generating the current for the LEDs. This means fewer components and no need for a microcontroller to implement thermal foldback.

Figure 1 contrasts the use of the NTC thermistor and analog temperature sensor approaches. Figure 2 shows the nonlinearity of an NTC thermistor voltage compared to the LMT87-Q1 output voltage.

Figure 1: NTC thermistor thermal foldback vs. analog temperature sensor thermal foldback solution

Figure 2: Voltage input to LED driver across temperature

Figure 2 shows the difference between the voltage across the NTC thermistor and the output voltage of the LMT87. The NTC thermistor voltage was calculated with the thermistor (B25/85 = 3435 K, R25 = 10 kΩ) placed in series with a 10-kΩ resistor.
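As a quick illustration of that nonlinearity, the sketch below evaluates the same divider using the Beta-model approximation of the thermistor. The thermistor parameters match those above, while the 3.3-V supply and the divider orientation (NTC on the low side) are assumptions:

    #include <stdio.h>
    #include <math.h>

    /* Rough illustration of why an NTC divider output is nonlinear across temperature. */
    int main(void)
    {
        const double B = 3435.0, R25 = 10e3, RSERIES = 10e3, VSUPPLY = 3.3;

        for (int t_c = -40; t_c <= 125; t_c += 15)
        {
            double t_k   = t_c + 273.15;
            /* Beta-model approximation of the thermistor resistance */
            double r_ntc = R25 * exp(B * (1.0 / t_k - 1.0 / 298.15));
            /* NTC on the low side of the divider (assumption) */
            double v_ntc = VSUPPLY * r_ntc / (r_ntc + RSERIES);

            printf("%4d degC  R_NTC = %9.0f ohm  V_NTC = %.3f V\n", t_c, r_ntc, v_ntc);
        }
        return 0;
    }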

While staying within the junction temperature limit is extremely important, thermal foldback will cause the luminosity of the LEDs to change. Luminosity is effectively how bright the LEDs are. LEDs have a characteristic called thermal rolloff, which is reduced light output efficiency at high temperatures. Therefore, allowing the LED’s junction temperature to run very high – even without quite violating its maximum specification – could lead to less brightness than anticipated or needed.

Another dominant factor in the LED’s luminosity is the optics used in the lighting module. So while the thermal foldback needs to behave linearly, you may need to clamp the curve at different points. You must take all of these dynamics into account when designing the thermal foldback of a system.

For more information about linear thermal foldback and simple tricks to change the thermal foldback curve using a TI analog temperature sensor, see TI’s Automotive Daytime Running Light (DRL) LED Driver Reference Design with Linear Thermal Foldback TI Design (TIDA-01382).

Additional resources:

 

Four-switch buck-boost layout tip No. 3: separating differential sense lines from power planes

$
0
0

In my last blog post, I provided tips for optimizing hot loops in a buck-boost converter. I decided to cover this tip as a separate topic after finding the same issue in almost all of the layouts I reviewed late last year. The most frequently encountered layout issue is the incorrect routing of differential sense signals from the sense resistor to TI’s LM5175 integrated circuit (IC) pins (the CS-CSG pair). An example of the sense connection is shown in Figure 1.

Figure 1: An LM5175 schematic showing differential sense connection from power stage to the controller pins.

In some cases, designers make this error because one of the sense nodes (the lower side of the sense resistor, marked as node “N” in the yellow circle) is electrically the same as the circuit ground (GND). Thus, the need to route the CS-CSG pair – which carries a small signal (tens of millivolts) – differentially is not obvious to the layout engineer. Figure 2 shows this common error.

Figure 2: (a) Correct differential current sense routing and (b) a common mistake when routing differential sense signals.

In other cases, the designer does recognize the need to route the current-sense signals differentially. But while finishing the board, the negative trace gets connected to a plane or a copper pour because the layout tool treats the net as a ground (GND) net. This unintended connection can happen anywhere along the trace, as shown in Figure 3. In the next paragraphs, I will describe some common practices to avoid this.

Figure 3: An example of unintentional connection of differential sense signal with power ground plane.

Net ties

A net tie allows an artificial separation of net names in the schematic (Figure 4). This allows the layout tool to treat N1 and N2 as separate nodes and protects the bulk of the differential trace (N2) from accidental connections to the ground plane or pours. The downside is that the N1 section is technically a GND net, and therefore still needs to be separated from the GND plane or copper pours (Figure 4).

Figure 4: An example of using Net-Tie to prevent unintentional connection of sense signals to copper planes or pours.

Polygon cutouts or keep outs

Many layout tools provide a feature called polygon cutouts or polygon keep-outs. Polygon keep-outs create a boundary that keeps polygons or copper pours from entering. A polygon keep-out layer must follow the sense trace from beginning to end. Take additional care when the sense trace changes layer through vias; in such cases, you must use polygon keep-outs on all layers around the via. Figure 5 shows an example.

Figure 5: Correct use of polygon cut-outs to separate sense traces from power planes.

The incorrect routing of sense traces can spoil an otherwise good design. Recognizing the sense traces – particularly those that share net names with a copper area, plane or pour – is essential. Isolate these traces using net ties or polygon keep-outs during printed circuit board (PCB) design to prevent an inadvertent connection to the copper planes.

Additional resources:

LDO basics: power supply rejection ratio

$
0
0

One of the most touted benefits of low-dropout linear regulators (LDOs) is their ability to attenuate voltage ripple generated by switched-mode power supplies. This is especially important for signal-conditioning devices like data converters, phase-locked loops (PLLs) and clocks, where noisy supply voltages can compromise performance. My colleague Xavier Ramus covered the detrimental effect noise has on signal-conditioning devices in the blog post: Reducing high-speed signal chain power supply issues. Yet power supply rejection ratio (PSRR) is still commonly mistaken for a single, static value. In this post, I’ll attempt to illustrate what PSRR is and the variables that affect it.

Just what is PSRR?

PSRR is a common specification found in many LDO data sheets. It specifies the degree to which an AC element of a certain frequency is attenuated from the input to the output of the LDO. Equation 1 expresses PSRR as:

PSRR (dB) = 20 × log10(VIN(ripple) / VOUT(ripple))                                (1)

This equation tells you that the higher the attenuation, the higher the PSRR value in units of decibels. (It should be noted that some vendors apply a negative sign to indicate attenuation. Most vendors, including Texas Instruments, do not.)

It’s not uncommon to find PSRR specified in the electrical characteristics table of a data sheet at a frequency of 120Hz or 1kHz. However, this specification alone might not be so helpful in determining if a given LDO meets your filtering requirements. Let’s examine why.

Determining PSRR for your application

Figure 1 shows a DC/DC converter regulating 4.3V from a 12V rail. It’s followed by the TPS717, a high-PSRR LDO, regulating a 3.3V rail. The ripple generated from switching amounts to ±50mV on the 4.3V rail. The PSRR of the LDO will determine the amount of ripple remaining at the output of the TPS717.

 Figure 1: Using an LDO to filter switching noise

In order to determine the degree of attenuation, you must first know at which frequency the ripple is occurring. Let’s assume 1MHz for this example, as it is right in the middle of the range of common switching frequencies. You can see that the PSRR value specified at 120Hz or 1kHz will not help with this analysis. Instead, you must consult the PSRR plot in Figure 2.

Figure 2: PSRR curve for the TPS717 with VIN– VOUT = 1V

The PSRR at 1MHz is specified at 45dB under the following conditions:

  • IOUT = 150mA
  • VIN– VOUT = 1V
  • COUT = 1μF

Assume that these conditions match your own. In this case, 45dB equates to an attenuation factor of 178. You can expect your ±50mV ripple at the input to be squashed to ±281μV at the output.
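The conversion from decibels to a linear attenuation factor is simply 10^(PSRR/20). A minimal sketch using the numbers from this example (values are illustrative only):

    #include <stdio.h>
    #include <math.h>

    /* Convert a PSRR figure in dB to a linear attenuation factor and apply it
       to an input ripple amplitude (45 dB at 1 MHz, +/-50 mV, as in the text). */
    int main(void)
    {
        double psrr_db     = 45.0;                       /* from the PSRR curve at 1 MHz */
        double ripple_in   = 0.050;                      /* 50 mV input ripple amplitude */
        double attenuation = pow(10.0, psrr_db / 20.0);  /* ~178                         */
        double ripple_out  = ripple_in / attenuation;    /* ~281 uV                      */

        printf("Attenuation factor: %.0f\n", attenuation);
        printf("Output ripple: %.0f uV\n", ripple_out * 1e6);
        return 0;
    }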

Altering the conditions

But let’s say that you changed the conditions and decided to reduce your VIN– VOUT delta to 250mV in order to regulate more efficiently. You would then need to consult the curve in Figure 3.

Figure 3: PSRR curve for the TPS717 with VIN– VOUT = 0.25V

You can see that holding all other conditions constant, the PSRR at 1MHz is reduced to 23dB, or an attenuation factor of 14. This is due to the CMOS pass element entering the triode (or linear) region; that is, as the VIN– VOUT delta approaches the dropout voltage, PSRR begins to degrade. (Bear in mind that dropout voltage is a function of output current, among other factors. Hence, a lower output current decreases the dropout voltage and helps improve PSRR.)

Changing the output capacitor will have implications as well, as shown in Figure 4.

Figure 4: PSRR curve for the TPS717 with VIN– VOUT = 0.25V, COUT = 10μF

By sizing up the output capacitor from 1μF to 10μF, the PSRR at 1MHz increases to 42dB despite the VIN– VOUT delta remaining at 250mV. The high-frequency hump in the curve has shifted to the left. This is due to the impedance characteristics of the output capacitor(s). By sizing the output capacitor appropriately, you can tune, or increase, the attenuation to coincide with the particular switching noise frequency.

Turning all the knobs

Just by adjusting VIN – VOUT and the output capacitance, you can improve PSRR for a particular application. These are by no means the only variables affecting PSRR, though. Table 1 outlines the various factors.

Parameter                        | PSRR at low frequency (<1kHz) | PSRR at mid frequency (1kHz – 100kHz) | PSRR at high frequency (>100kHz)
VIN – VOUT                       | +++                           | +++                                   | ++
Output capacitor (COUT)          | No effect                     | +                                     | +++
Noise reduction capacitor (CNR)  | +++                           | +                                     | No effect
Feed-forward capacitor (CFF)     | ++                            | +++                                   | +
PCB layout                       | +                             | +                                     | +++

Table 1: Variables affecting PSRR

I will discuss these other factors in a future post. But for now, I hope that you are more familiar with the various tools at your disposal that can help you design an effective LDO filter. For more information on LDO PSRR, read the application note, LDO PSRR Measurement Simplified.

Additional resources

 
