
Protect your BLDC motor drive with cycle-by-cycle current limit control – part 2


Welcome back! If you missed part 1 of this series, I discussed the necessity of cycle-by-cycle overcurrent protection in BLDC motor drives and how to sense the motor winding current. In part 2, I will discuss how to implement cycle-by-cycle overcurrent protection by sensing the DC bus current and using an ultra-low-power microcontroller.

TI’s ultra-low power MSP430F5132 microcontroller helps to control the motor-winding current on a PWM cycle-by-cycle basis without any software interrupt intervention.

Figure 1: Cycle-by-cycle current-limit implementation using a differential amplifier and the MSP430F5132 MCU

You can configure the high-bandwidth precision OPA374 as a single-ended differential amplifier to amplify the voltage drop across the sense resistor, RSENSE, connected in the DC bus return path.

The MSP430F5132 MCU has an integrated comparator and timer event control (TEC) module that you can configure to implement the current limit. The comparator compares the analog voltages at the noninverting (+) and inverting (–) input terminals. If the noninverting terminal is more positive than the inverting terminal, the comparator output, CBOUT, is high.

You can use the output of the comparator with or without internal filtering. Setting the control bit CBF in the MCU filters the output with an on-chip resistor-capacitor (RC) filter. You can adjust the filter delay in four steps, which lets you optimize the comparator’s response time. The output filter suppresses noise spikes, which avoids false switching at the comparator output, and it also reduces errors associated with comparator output oscillation when the voltage difference across the input terminals is small. The comparator features a high-precision reference voltage; to obtain different voltage references, configure the CBRSEL bit in the CBCTL2 register. The reference voltages available are 1.5V, 2.0V and 2.5V.
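As a rough illustration, here is a minimal C sketch of how the comparator might be initialized for this current limit. It is a sketch only, assuming the standard MSP430F5x Comparator_B register and bit names (CBCTL0-CBCTL3, CBIPEN, CBIPSEL, CBREFL, CBRSEL, CBF, CBON, CBPD0) from the device header; the input selection and reference routing shown here are assumptions, so verify every field against msp430f5132.h and the Comparator_B chapter of the family user's guide.

#include <msp430.h>

/* Minimal sketch, assuming MSP430F5x Comparator_B register/bit names from the
   device header. Input, reference and filter choices are illustrative only. */
static void comp_b_current_limit_init(void)
{
    CBCTL0 = CBIPEN | CBIPSEL_0;   /* + input: CB0 pin (amplified RSENSE signal)      */
    CBCTL2 = CBREFL_1 | CBRSEL;    /* 1.5-V internal reference, applied to the - input */
    CBCTL3 = CBPD0;                /* disable the digital input buffer on the CB0 pin  */
    CBCTL1 = CBF | CBON;           /* enable the output RC filter, turn comparator on  */
    /* CBOUT must still be brought out on a port pin and wired to TECxFLT1 (Figure 2). */
}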

The TEC module is the interface between the timer modules and the external events. The TEC and Timer_D modules are connected through internal signals. The TEC module contains the control registers to configure the routing between TEC and timer modules. The TEC module also has the enable register bits, interrupt enable and interrupt flags for external event inputs. On receiving the external fault or clear signals, the TEC module controls the timer output and thus the PWM signal.

The COMPB module and TEC module are used together for cycle-by-cycle current-limit protection. You must externally route the output of the comparator, CBOUT, to the TECxFLT1 external fault event pin of the TEC module, as shown in figure 2, for current-limit protection.

Figure 2: CBOUT routed externally to the TECxFLT1 external fault event pin

Figure 3 shows the operation of the TEC module. Whenever the current-sense differential amplifier output exceeds the comparator’s voltage reference, CBOUT, and hence TECxFLT1, goes high, which initiates an event in the TEC module. The TEC module is programmed to disable the Timer_D PWM output during such an event. Timer_D is configured in SET/RESET mode, so the external event resets Timer_D and forces the PWM output pin low. In other words, when the motor hits an overcurrent condition, CBOUT goes high and, because it is routed to the TECxFLT1 input pin, disables the Timer_D output. As Figure 3 shows, the PWM turns off immediately when the comparator output goes high. When CBOUT goes low, the Timer_D output resumes normal operation.

Figure 3: External input events resetting Timer_D output

Figure 4 shows the current-limit operation when the comparator reference, VREF, is set at 1.5V, with a 60mΩ sense resistor (RSENSE) and a differential amplifier gain of 20.

The overcurrent limit setting (IOC_LIMIT) can be calculated using Equation 1.

Overcurrent limit, IOC_LIMIT = VREF/(RSENSE*amplifier gain)                               (1)

At VREF = 1.5V, IOC_LIMIT = 1.5 / (0.06*20) = 1.25A
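As a quick check of Equation 1, the short C sketch below evaluates the overcurrent limit for each of the three available comparator references, using the same 60mΩ sense resistor and amplifier gain of 20 from the example above.

#include <stdio.h>

/* Overcurrent limit per Equation 1: IOC_LIMIT = VREF / (RSENSE * gain).
   RSENSE = 60 mOhm and a differential-amplifier gain of 20, as in the example. */
int main(void)
{
    const double rsense = 0.06;                 /* ohms */
    const double gain   = 20.0;                 /* V/V  */
    const double vref[] = { 1.5, 2.0, 2.5 };    /* available comparator references, V */

    for (int i = 0; i < 3; i++)
        printf("VREF = %.1f V -> IOC_LIMIT = %.2f A\n",
               vref[i], vref[i] / (rsense * gain));
    /* Prints 1.25 A, 1.67 A and 2.08 A. */
    return 0;
}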

Figure 4: Cycle-by-cycle current limit using the MSP430F5132 MCU hardware features

Figure 5 shows the response time of the comparator and TEC module with an external trip signal connected to the comparator input. The test results show that the time between the comparator input rising above 1.2V (to capture the worst case, I used a voltage lower than the 1.5V reference to calculate the response time) and the PWM shutdown is approximately 356ns. Therefore, the total response time of the current-limit action is less than 1µs. Figure 6 shows the response time from the comparator output going high to a PWM shutdown event, which is approximately 100ns. The test results show that the hardware features in the MCU ensure a very fast cycle-by-cycle current-limit action and hence protect the motor drive.

Figure 5: Total response time of the cycle-by-cycle current-limit protection            

          Figure 6: The response time from when the comparator goes high to the PWM shutdown

Thank you for reading this blog series. I hope you found it helpful in understanding the necessity of cycle-by-cycle overcurrent protection in BLDC motor drives and how to implement this method of overcurrent protection using an ultra-low-power microcontroller.

Top three reasons to include proximity sensing in your human interface design

One of the benefits of automating control systems within buildings is the ability to customize services and personalize a user’s environment; the human machine interface (HMI) is the gateway to such controls. The addition of proximity sensing is...

Selecting the right inductor for ultra-low distortion Class-D audio amplifiers


Selecting the inductance value for the output filter of a Class-D audio amplifier is always a critical design decision. With next-generation ultra-low-distortion Class-D amplifiers, selecting an inductor with poor electrical properties can severely limit the audio performance. My colleague Brian talked about how high-definition audio is changing the way we listen in his December blog post. In this post, I’ll discuss key considerations for selecting the right inductor to ensure your device is living up to its high-definition potential.

In higher-power Class-D amplifiers, (generally above 10W of output power), the passive output filter generally has both an inductor and a capacitor (LC) on each output terminal, and is thus referred to as an LC filter. The purpose of the LC filter is to convert the discontinuous pulse-width modulation  (PWM) pulse-train output of a Class-D amplifier into a continuous smooth analog sinusoid. The LC filter extracts the audio signal from the PWM representation of the audio signal.

This filtering process is critical for a couple of reasons:

  • Electromagnetic Interference (EMI) reduction. The PWM output of a Class-D amplifier is a high-amplitude voltage signal usually equal to the output stage or PVDD supply voltage. Filtering these pulses with an LC filter also filters the high-frequency content associated with the PWM pulses, reducing offending EMI emissions. With the LC filter placed as close as possible to the amplifier, long runs of speaker wire will not radiate EMI throughout the system.
  • Reduced ripple current. For Class-D amplifiers with AD modulation schemes, but without an LC filter, there is a ripple current superimposed on the audio signal. With an LC filter, and especially as the cut-off frequency of the LC filter is reduced relative to the PWM switching frequency of the amplifier, the ripple current is reduced so that only a small residual ripple is present after the LC filter. The reactance of the LC filter absorbs the rest of the ripple and ideally does not dissipate any power (see the sketch after this list).
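To make these two points concrete, here is a small sketch that estimates the LC filter cut-off frequency and a worst-case (idle, 50% duty cycle) ripple current using the common first-order relation for an AD-modulated half-bridge. The supply voltage and capacitor value below are illustrative assumptions, not recommendations for any particular amplifier.

#include <stdio.h>
#include <math.h>

/* First-order estimates only: fc = 1/(2*pi*sqrt(L*C)) for the output filter, and
   delta_I ~= PVDD/(4*L*fsw) for the idle (50% duty) ripple of an AD-modulated
   half-bridge. Values below are illustrative assumptions. */
int main(void)
{
    const double pi   = 3.14159265358979;
    const double L    = 10e-6;    /* 10 uH filter inductor              */
    const double C    = 680e-9;   /* 680 nF filter capacitor (assumed)  */
    const double pvdd = 36.0;     /* output-stage supply, V (assumed)   */
    const double fsw  = 600e3;    /* PWM switching frequency, Hz        */

    double fc    = 1.0 / (2.0 * pi * sqrt(L * C));
    double i_rip = pvdd / (4.0 * L * fsw);

    printf("LC cut-off  : %.1f kHz\n", fc / 1e3);           /* ~61 kHz */
    printf("Idle ripple : %.2f A peak-to-peak\n", i_rip);   /* ~1.5 A  */
    return 0;
}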

Let’s use the TPA3251D2 as an example. It is a 175W Class-D audio amplifier with total harmonic distortion and noise (THD +N) capable of approaching 0.001% in the mid power band. For the TPA3251D2, inductor linearity becomes critical to extract the highest level audio performance.

For this discussion, inductor linearity is defined as inductance vs. current.

An ideal inductor would maintain the specified inductance value no matter what current passes through it. However, real-world inductors always have decreasing inductance with increasing current. At some point, the current level will saturate the inductor and the inductance will fall off severely. This is often specified as Isat.

Keep in mind that the inductance change at the Isat current rating varies between manufacturers and even inductor types. Some manufacturers specify Isat at a 30% or higher change in inductance. If you expected to use this inductor all the way to the Isat rating for an LC Class-D filter, you would observe very poor audio performance.

Table 1 shows data collected from four different inductors that have good linearity specifications for high-performance Class-D audio amplifiers. I measured the inductance at 1A of current and again at 20A of current with a 600-kHz test signal, which is the nominal PWM switching frequency of the TPA3251D2 amplifier. The average change of inductance was calculated for 10 samples of each inductor.

Table 1: Average change in inductance for 10 samples of four inductor types

From the data above, the 10µH inductor from manufacturer A is more linear than the 10µH inductor from manufacturer B. It is also important to note that the 7µH and 10µH inductors from manufacturer A are wound on the same core. Likewise, the 7µH and 10µH inductors from manufacturer B are also wound on the same core.

Generally, the higher the inductance (the more turns of wire) for a given core material, size and geometry, the less linear the inductor.

I then tested these inductors on the TPA3251D2 evaluation module (EVM) and the results were clear, as you can see in Figures 1 and 2.

Figure 1: TPA3251D2EVM plot of output power and THD+N for measured inductors

Figure 2: TPA3251D2EVM plot of THD+N and signal frequency 

From the data collected from the TPA3251D2EVM, the 10µH inductor from manufacturer A outperforms manufacturer B’s 10µH and 7µH inductors. If I were to select manufacturer B’s 10µH inductor, the THD performance of the amplifier would be limited to 0.0045% at 10W, compared to 0.0017% for manufacturer A’s 10µH inductor.

Also notice that at 20W, the THD vs. frequency performance improves significantly. If you were designing an LC filter around an amplifier where higher THD performance is acceptable, or if the native THD of the amplifier was higher, the 10µH from manufacturer B may be a suitable candidate. In the end, the designer of the system must make a choice between inductor linearity, cost and size.

Other considerations such as switching losses or ohmic losses under high output power levels will determine which inductor is best for a given system. However, by selecting an inductor with greater linearity, you can improve the THD performance of a Class-D amplifier significantly, as demonstrated with the ultra-low THD of the TPA3251D2.

If you have worked with a Class-D amplifier before, leave a comment below. I’m interested in learning more about which inductor you chose and the resulting performance.


Big batteries? Take a walk on the (high) side



It’s a spectacular time to be working in the world of batteries these days — sure, we hear a lot about wearables, smartphones and improbably tiny wireless headphones, but there’s an equally fascinating bloom of innovation happening at the other end of the portables spectrum. I’m talking big battery applications like delivery or industrial drones, battery-backup and energy-storage systems, and electric bikes and scooters.

These products command a hefty marketplace premium, and for good reason: They’re engineered for robustness and longevity, and consumers expect a corresponding premium experience that matches the price tag.

Within the battery packs powering these types of applications, it’s already a virtual given that you need a monitoring solution. Products like the bq76940 and bq76925 do a great job of capturing key analog information for a particular battery and relaying it on to a microcontroller, whether it’s our pre-programmed companion fuel gauge (bq78350-R1) or a do-it-yourself endeavor built from an MSP430™ microcontroller.

Another primary consideration in every battery system is how to control its charging and discharging. When times are good, it’s simple enough to allow either, but every engineer must prepare for the scenario when it makes sense to momentarily (or permanently) prevent one or both from occurring. For example, a fully charged pack may connect with a less-than-brilliant charger that doesn’t know that it’s topped up, or a fully discharged pack may need to go offline to avoid over-aging its cells and shortening the system’s life span.

The simple approach, which is built directly into most of our medium- to high-voltage monitors, is to drive power FETs on the “low” side of a battery pack. This means that the FETs sit nearest to the battery’s ground side, rendering them pretty easy to turn on and off, since that mainly involves generating 10-14V above ground. Two Zener diodes and you’re basically done! My designer counterparts might dispute this oversimplification, but there you go.

There’s one huge downside to this approach, however: When the FETs are off, the battery and system ground are no longer electrically connected. This lack of a common ground makes it very difficult for the battery microcontroller to talk with the outside world, short of employing isolation technologies that tend to be both expensive and power-hungry.

The better approach – and one that coincidentally emerged to become the de rigueur standard in all notebook battery packs sold – is to drive those power FETs on the “high” side instead. As you might guess, this involves placing the FETs on the other end of the battery stack, between the positive terminal of its highest cell and the system’s power rail. By keeping a common battery and system ground, both are free to communicate anytime, regardless of whether charging and/or discharging is permitted.

P-channel FETs aren’t really a great option for high-power, high-voltage applications due to their innately higher on-resistance, which leaves N-channel FETs as the best option. The trade-off here is that now you have to create a voltage that’s even higher than your battery stack, by roughly the same 10-14V as in the low-side case. Creating such a charge pump in a product designed for notebook batteries is hard enough, but things become rather interesting when you’re trying to do this with a 24V, 36V, 48V or even 60V battery – and in the case of a motor-driven application like an e-bike, you have the extra headache of large transient swings that can double your voltages during inductive kickbacks.

This brings me to the bq76200, TI’s first-ever dedicated high-side battery FET driver for high-voltage applications. This robust, low-power IC tolerates up to 100V, and independently drives charge and discharge FETs. We’ve engineered it for maximum flexibility as well: It plays nicely with battery stacks ranging from 18V up to 60V, small to large cell capacities, and single to multiple power FETs in parallel. The bq76200 is a perfect complement to bq76940, bq76930 and bq76920 battery monitors, the bq78350-R1 companion fuel gauge, and easily pairs with a wide variety of MCUs out there thanks to its simple control interface.

The bq76200 eliminates the struggles associated with building complicated, reliability-constrained discrete charge-pump circuits to achieve a high-side FET drive. It’s here to help designers promise the best of all worlds: a truly intelligent battery, capable of collaborating with a system 24/7 while maintaining protection and ensuring longevity.

Pump it up with charge pumps – Part 1


Life was simple when I first became interested in electronics. Components were so big I could solder them without a microscope. Switching converters switched at a whopping 25 kHz, digital circuits all used a 5-V supply voltage and all the computers I came across used the RS-232 serial interface to communicate.

The RS-232 standard specifies that a logic 0 is represented by voltages between 5 V and 25 V, and a logic 1 by voltages between –5 V and –25 V. My problem was that although almost all the components on my boards needed only a 5-V supply, I still had to generate those two extra rails for my RS-232 interface.

Then I came across the MAX232. This device was an inspired product, combining two line drivers, two line receivers, and a positive and negative charge pump. With that bad boy running off a single 5-V supply, I could generate the additional supply voltages I needed and transmit and receive serial data.

Charge pumps are useful little DC/DC converters that use a capacitor to store energy instead of an inductor. They can be found in dedicated charge-pump devices such as the LM2775/LM2776 devices, as auxiliary rails in LCD bias supplies such as the TPS65150, or as external circuits put together from a couple of diodes and a couple of capacitors.

Generally speaking, charge pumps are:

  • Simple, often comprising no more than two diodes and two capacitors.
  • More forgiving than DC/DC converters.
  • Good for output currents in the tens of milliamps range (but not so good for currents much higher than 250 mA).
  • Less efficient than inductor-based DC/DC converters, unless they are unregulated and running open loop.

Figure 1 is a simplified circuit diagram of an unregulated charge pump. The charge pump operates in two phases:

  • During the charge phase, switches S1 and S4 are open and switches S2 and S3 are closed. Current flows through S2 and S3 and charges the flying capacitor, CFLY, up to a voltage of VI.
  • During the discharge phase, switches S1 and S4 are closed and switches S2 and S3 are open. The negative terminal of CFLY is now at VI and the positive terminal (which is VI volts higher) is now at 2VI. Current flows from VI through the flying capacitor CFLY and switches S1 and S4. Charge is transferred from CFLY to the output capacitor, CO, to generate an output voltage approximately equal to 2VI.

   

Figure 1: Simplified charge-pump block diagram (voltage doubler)
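For a feel of how much the output of such an unregulated doubler droops under load, a commonly used first-order model treats the switched flying capacitor as an equivalent output resistance of roughly 1/(fSW × CFLY), ignoring switch resistance and capacitor ESR. The sketch below applies that model; the switching frequency, capacitance and load current are illustrative assumptions, not values from a specific device.

#include <stdio.h>

/* First-order unregulated-doubler model: VOUT ~= 2*VI - IO*Rout, with
   Rout ~= 1/(fsw * CFLY). Switch resistance and ESR are ignored; all numbers
   below are illustrative assumptions. */
int main(void)
{
    const double vi   = 5.0;     /* input voltage, V        */
    const double fsw  = 1.0e6;   /* switching frequency, Hz */
    const double cfly = 1.0e-6;  /* flying capacitor, F     */
    const double io   = 0.05;    /* load current, A         */

    double rout = 1.0 / (fsw * cfly);      /* ~1 ohm   */
    double vout = 2.0 * vi - io * rout;    /* ~9.95 V  */

    printf("Rout ~= %.2f ohm, VOUT ~= %.2f V (ideal doubler: %.2f V)\n",
           rout, vout, 2.0 * vi);
    return 0;
}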

You can rearrange the same four components (S1, S2, S3 and S4) to generate a negative output voltage equal to approximately –VI (see Figure 2).

 

Figure 2: Simplified charge-pump block diagram (voltage inverter)

The circuit just described works well, but its output voltage is unregulated. Such a simple circuit is sufficient in some applications, but a charge pump with a regulated output is much more useful.

The usual way to regulate the output voltage of a charge pump is to put an adjustable current source, I1, in series with switch S1, or S2 in the case of an inverting charge pump (see Figure 3). The error amplifier, A1, adjusts the value of I1 until the output voltage is correct. Under steady-state conditions, I1 is exactly twice the value of IO.

 

Figure 3: Different charge-pump integration levels

Note that a simple, regulated voltage doubler can only regulate its output voltage in the range of VI to 2VI. It cannot generate output voltages lower than VI. There are some fancy tricks you can do to make a buck-boost charge pump, but these kinds of devices are more complicated than the one shown in Figure 3.


Designing a 25G system: 5 tips to balance power, performance and price


When transitioning from 10G to 25G for next-generation servers and switches, hardware design engineers have to satisfy competing objectives: minimize data latency, maintain or reduce power consumption, and keep costs as low as possible. You essentially need to do more with less in order to deliver first-rate products to your data-center customers at a competitive cost.

Here are five quick tips for designing your 25G system to strike the right balance:

1. Determine which links in the system will need signal conditioning; this will depend on routing length and printed circuit board (PCB) material. Low-loss material requires fewer signal conditioners but is more expensive compared to standard materials. Channels with loss greater than the application-specific integrated circuit’s (ASIC’s) inherent compensation capability will require some form of signal conditioning. For example, if your ASIC is capable of 30dB compensation, you will probably want to add signal conditioning to channels that have a 27dB loss or more, with the 3dB difference acting as your safety margin.

Figure 1 is an example channel-loss budget analysis comparing PCB materials A and B.

Figure 1: Example distribution of channels in a system assuming ASIC loss-compensation capability: 30dB at 12.9GHz, PCB material A loss: 0.8dB/inch at 12.9GHz, and PCB material B loss: 1.1dB/inch at 12.9GHz
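A minimal sketch of the budget check in tip 1 is shown below, using the Figure 1 numbers (30dB of ASIC compensation at 12.9GHz, a 3dB margin, and PCB losses of 0.8dB/inch and 1.1dB/inch). The channel length is an assumed example, and connector and via losses are ignored, so treat this as a rough screening calculation only.

#include <stdio.h>

/* Rough channel-loss screen: loss = trace length * loss per inch; a channel
   needs signal conditioning once it eats into the 3-dB margin below the ASIC's
   30-dB compensation capability. Connector/via losses are ignored. */
int main(void)
{
    const double asic_db   = 30.0;   /* ASIC compensation capability, dB        */
    const double margin_db = 3.0;    /* safety margin, dB                       */
    const double loss_a    = 0.8;    /* dB/inch, PCB material A at 12.9 GHz     */
    const double loss_b    = 1.1;    /* dB/inch, PCB material B at 12.9 GHz     */
    const double length_in = 30.0;   /* example channel length, inches (assumed) */

    double a = length_in * loss_a;   /* 24 dB -> within budget                  */
    double b = length_in * loss_b;   /* 33 dB -> needs a repeater or retimer    */

    printf("Material A: %.1f dB, conditioning %s\n",
           a, a >= asic_db - margin_db ? "needed" : "not needed");
    printf("Material B: %.1f dB, conditioning %s\n",
           b, b >= asic_db - margin_db ? "needed" : "not needed");
    return 0;
}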

2. For the channels that will require signal conditioning, design for flexibility by using a small footprint. A small footprint offers high channel density and allows you to use either a retimer or pin-compatible repeater.

3. Design a power-supply solution that will accommodate either a retimer or a repeater. For example, the TPS53513 synchronous step-down converter can supply 8A, which is more than adequate for a grouping of six retimers or repeaters.

4. Determine the SMBus address scheme needed to address each retimer/repeater device individually on the board. You can pin-strap each device with one of 16 unique SMBus addresses. If there are more than 16 devices on a board, consider using an I2C expander like the TCA/PCA family of I2C/SMBus switches to split the SMBus into multiple buses.

5. Place a single low-cost 25MHz (±100ppm) 2.5V single-ended clock on the board to support up to 20 retimer devices. This clock does not have any jitter requirements, since its purpose is not to recover data. The retimer will take in the clock, buffer it and replicate it on an output pin for easy hookup to the next retimer. There’s no need to have multiple crystals or fanout buffers. If you ultimately decide to use repeaters instead, you can leave this component unpopulated to reduce cost.

To make following these tips easier, TI has introduced the industry’s first portfolio of pin-compatible repeater (DS280BR810) and retimer (DS250DF810) solutions enabling reach extension for 25G backplane and front-port applications. How does this help balance power, performance, and price? It’s all about design simplicity and flexibility.

TI’s pin-compatible repeater and retimer solutions allow you to choose a solution that meets your performance targets while minimizing latency and reducing bill-of-materials (BOM) cost. Hardware engineers know that the cost, size, and complexity of surrounding components are just as important as the repeater or retimer itself. Consider Figure 2’s board design examples.

Figure 2: Illustrating the simplicity and cost savings of a TI solution (right) versus a generic solution (left) 

Table 1: Comparison of TI’s 25G signal conditioning solution versus others’

The pin-compatible nature of TI’s 25G DS280BR810 repeater and DS250DF810 retimer solutions allows you to generate one schematic to evaluate both options, enabling cost, power, and performance optimization for the final product. Signal integrity engineers can start testing with the repeater solution, which offers lower cost and reduced power consumption. They can upgrade to a pin-compatible retimer solution if the jitter and crosstalk in the system necessitates higher performance.

The small stuff really matters. Consider a typical data center consisting of 20,000 servers. Using a repeater instead of a retimer can save about 1W of power in a server network interface card (NIC), which can add up to over $21,000 in annual electricity savings ($0.12 per kilowatt hour), not including savings on cooling. If you eliminated $5 worth of components from the BOM, that translates to another $100,000 in savings. Finally, the difference between 50ns of latency versus 500ps can result in almost eight hours per day that would have been “wasted” while servicing requests across the whole data center (assuming 2,000 requests per second and four hours of total utilization time per server per day).
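Those estimates are easy to reproduce with a back-of-the-envelope calculation. The sketch below uses only the assumptions stated in the text: 20,000 servers, 1W saved per NIC, $0.12 per kWh, 2,000 requests per second, four hours of utilization per server per day and the 50ns versus 500ps latency difference.

#include <stdio.h>

/* Back-of-the-envelope check of the data-center numbers quoted in the text. */
int main(void)
{
    const double servers = 20000.0;
    const double watts   = 1.0;                /* W saved per NIC                   */
    const double usd_kwh = 0.12;
    const double req_s   = 2000.0;             /* requests per second per server    */
    const double util_s  = 4.0 * 3600.0;       /* 4 h of utilization per day, in s  */
    const double dlat_s  = 50e-9 - 500e-12;    /* 50 ns vs. 500 ps                  */

    double kwh_year  = servers * watts * 8760.0 / 1000.0;          /* 175,200 kWh */
    double usd_year  = kwh_year * usd_kwh;                         /* ~$21,000    */
    double hours_day = servers * req_s * util_s * dlat_s / 3600.0; /* ~7.9 h/day  */

    printf("Electricity saved: %.0f kWh/year (~$%.0f)\n", kwh_year, usd_year);
    printf("Latency overhead avoided: %.1f hours/day across the data center\n", hours_day);
    return 0;
}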

By following these tips, you should end up with a board design that allows you to strike a balance between cost, power, and performance.


How to use an LDO as a load switch


Can a low-dropout regulator (LDO) make a good load switch? Isn’t that like putting a round peg in a square hole? Well, yes, but an LDO can be a good choice if two or three of these conditions apply:

  • The input voltage is >15V (this eliminates 80% of load switches).
  • The output current is <1A (R-on is less critical).
  • Every penny counts (The LDO may be less expensive than a load switch).

For an LDO to emulate a load switch, it must have a way to force operation in dropout mode. Adjustable LDOs have a feedback pin that enables you to put the device into dropout mode (full-on output). An LDO with a field-effect transistor (FET) pass element has low dropout voltage and low quiescent-current loss when in dropout mode. An LDO with a bipolar pass element that incorporates anti-saturation circuitry also has low dropout voltage and low quiescent-current loss when in dropout mode. There must also be a way to turn the output off and on; LDOs with enable or shutdown pins have this feature.

As a bonus, LDOs have standard features only found in the best load switches, including:

  • Current limiting.
  • Rise-time control.
  • Over-temperature shutdown.

Connecting the adjust pin to ground will force most adjustable LDOs to pass as much voltage to the output as possible. The voltage loss across the LDO is the same as the dropout-voltage specification in the data sheet. If you like, you can connect the adjust pin to a resistor to ground and a capacitor to VOUT, which will provide controlled-output slew-rate limiting. Slew rate ultimately sets the output rise time. Even with the feedback shorted to ground, the output rise time is still controlled by the output capacitance and the LDO current limit; however, this current usually varies with temperature and manufacturing processes. See the data sheet for the I-limit’s range.

For the aforementioned resistor and capacitor-feedback connection, the output rising slew rate (SR) is expressed as Equation 1:

Vref/(R*C)                          (1)

Vref is the LDO adjust-pin voltage and R and C are the feedback components. You can calculate the rise time using Equation 2:

[VIN–Vdo]*R*C/Vref                      (2)

Vdo is dropout voltage.

For the grounded feedback connection, use Equation 3 to calculate the output rising slew rate (in V/s):

[I-limit]/Cout                     (3)

The rise time is calculable as Equation 4:

[VIN–Vdo]*Cout/[I-limit]                              (4)

Figure 1 is a 24V load switch example using the inexpensive 100mA adjustable LP2951 LDO, which has a 1.25V feedback pin and an active-low enable pin compatible with most logic. R1 and C1 control the output rise time. I added a diode to the feedback pin to protect it from the capacitor current if VOUT is suddenly shorted.

Figure 1: LP2951 load switch schematic

I measured the rise time in Figure 2 using 16.2k for R and 100nF for C. The calculated output rising slew rate was 1.25V/(16,200Ω*100nF) = 0.77V/ms. The measured value was 0.62V/ms.
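The sketch below reproduces that slew-rate calculation and also applies Equation 2 for the rise time, using the same component values (Vref = 1.25V, R = 16.2kΩ, C = 100nF, VIN = 24V) and, as an assumption, the 460mV dropout quoted later for 100mA as Vdo.

#include <stdio.h>

/* Equations 1 and 2 with the example component values. Vdo = 0.46 V is taken
   from the 100-mA dropout figure and is an approximation here. */
int main(void)
{
    const double vref = 1.25;      /* LP2951 adjust-pin voltage, V */
    const double r    = 16200.0;   /* feedback resistor, ohm       */
    const double c    = 100e-9;    /* feedback capacitor, F        */
    const double vin  = 24.0;      /* input voltage, V             */
    const double vdo  = 0.46;      /* assumed dropout voltage, V   */

    double sr     = vref / (r * c);               /* Equation 1: ~772 V/s = 0.77 V/ms */
    double t_rise = (vin - vdo) * r * c / vref;   /* Equation 2: ~30 ms               */

    printf("Rising slew rate: %.2f V/ms\n", sr / 1000.0);
    printf("Rise time       : %.1f ms\n", t_rise * 1000.0);
    return 0;
}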

Figure 2: Output rise-time waveform

This slew-rate limiting tactic does have an unwanted side effect. As you can see in Figure 3, the first 870mV of rise time is much faster. This should not matter for most loads, however, especially 24V loads.

Figure 3: Initial output step

You could short feedback to ground and remove R1, C1 and D1 to reduce the component count. In this case, the rise time in Figure 4 is faster and is set by the current limit and output capacitance. Load current will steal current from the capacitor and slow the rise time. The rise slew rate (with a 40mA load) of 12.4V/ms suggests a current limit of 164mA (40mA into the load and 124mA into the capacitor). Notice that there is still a step up at the beginning of the rise.

Figure 4: Output rise time set by current limit

Figure 5 is the DC output characteristic for VOUT vs. load current. The dropout (460mV voltage loss at 100mA) is acceptable for 24V systems. However, the current limit is not very well defined.

Figure 5: Output voltage vs. output current

Figure 6 is the ground current (quiescent current vs. output current). The PNP pass element has anti-saturation to keep the quiescent current in proportion to the load current. 

Figure 6: Quiescent current

Lastly, Figure 7 is the output current vs. time for a shorted output case.

The device switches between current-limit mode and temperature-shutdown mode. Some LDOs can provide a steady lower output current that maintains the device at the shutdown temperature.

Figure 7: Short circuit current vs. time

While load switches are generally best at switching  power to loads, there are occasions when an LDO would be a better fit. Thinking outside the box can provide creative solutions.

The top 6 challenges facing IoT – and how we are overcoming them


Smart homes. Connected cars. Intelligent factories. At the heart of all of these clever technologies is an interconnected network of devices known as the Internet of Things (IoT).

The IoT is headed for a tipping point. By some estimates, there will be 50 billion ‘things’ communicating with each other or the Internet by 2020. So a challenge lies before us: How do we make the IoT easy to use, cost effective and efficient?

We asked some of our leading IoT experts to tell us how these challenges will be overcome, with a particular focus on consumer, industrial and automotive IoT:

1. Low power is paramount

For the IoT to evolve from a niche market to a pervasive network that connects virtually every aspect of our lives, power consumption is vital. Many of the connected devices within the IoT are nodes containing microcontrollers (MCUs), sensors, wireless devices and actuators that collect data. In many cases, these nodes will be battery operated or have no batteries at all, gathering power through energy harvesting. Particularly in industrial settings, these nodes will be placed in hard-to-access or no-access areas. This means they must be able to operate and transmit data for years at a time on a single, coin-cell battery.

“Installation, maintenance and repair of batteries is difficult and costly – and on some factory floors, it can even be dangerous,” said Harsha, who focuses on wireless and low-power charging. “Our goal is for our customers to never have to replace a battery for the lifetime of a device.”

Harsha and his team are exploring ways to make tiny batteries last as long as the products they are powering:

  • Solar power – Whether it is indoor or outdoor light, harvesting even a little bit of energy from light sources can make a large impact.
  • Temperature difference – Energy can be harvested from the difference in ambient temperature between the inside and outside of an object in a factory, like a pipe carrying hot liquids, compared to the external air temperature.
  • Vibration – In an industrial setting, energy harvesting takes place using the vibration of machines on a factory floor.
  • Radio Frequency (RF) – Using the radio waves from the Wi-Fi® in your home, for example, can create a small charge for batteries in IoT nodes.

“The goal is to extend battery life by 10 or 20 percent. While consumer electronics tend to be replaced at a faster pace, IoT technology in industrial applications lasts much longer. By using energy harvesting to extend battery life, a battery could last 20 to 30 years until the entire node needs to be replaced. And in some cases, energy harvesting could be used so the node could even go battery-less,” Harsha said.

2. Sensing is essential

Without sensing, there would be no IoT. The entire IoT system starts with sensors, the tiny devices or nodes measuring anything and everything to create data that is sent to other nodes or to the cloud. Whether sensing that a door is open at your house, that your car’s oil needs to be changed or that a piece of equipment is about to fail on an assembly line, sensors gather crucial information.

“Sensing comes into play when a decision needs to be made, and there might not necessarily be a person involved,” said Jason, who specializes in current sensing. “If something is coming down a conveyor belt, a sensor can help determine what the object is, how much it weighs, whether the conveyer belt system is getting hot, etc. For example, analyzing the current in a motor can tell you the health of the motor and if there are any faults. These are all the things you need to know for control in a factory, and sensors make it possible. When you provide the data on a real-time basis, it adds up to a lot of important data impacting everything.”

Because there is so much data collected by sensors, particularly in the Industrial IoT (IIoT), Jason sees the need for innovations in sensor software as much as in hardware.

“When you get so much information, at what point is the information too much or not relevant? A missing piece to determine this are algorithms. Once those algorithms are in place and can be leveraged heavily across factories, they will change manufacturing. The manufacturing footprint – the amount of space it will take to create something – can shrink. Factories can get smaller and more efficient,” he said.

3. Connectivity options: Simplifying the complex is critical

Once the sensor data is collected by low-powered nodes, it must be sent somewhere. In most cases, it goes to a gateway, which is a midpoint between the Internet/cloud or other nodes in an IoT system.

Today, there are multiple wired and wireless options to connect devices with unique use cases and different needs. Each of the 14 different connectivity standards and technologies serves a valuable purpose, but being able to take on all of those standards from Wi-Fi to Bluetooth® to Sub-1 GHz to Ethernet is a huge undertaking.

“Because of the variety of products and necessity to add connectivity to many different products, most of which did not have Internet connectivity before, there is a need to take complex technology and make it easier. That’s a big part of what we are doing today,” said Gil, director of strategic marketing for IoT.

4. Managing cloud connectivity is key

Once the data passes through a gateway, in most cases it heads straight to the cloud where that data can be analyzed, reviewed and put into action. The value of IoT comes from data running on cloud services. Just like with connectivity, there are lots of cloud service options – yet another point of complexity in the IoT world.

“There is a wide variety and diverse number of cloud providers, and there are no standards for how devices connect and are managed on the cloud,” Gil said. “To address the need for customers using multiple cloud services, we have developed the largest IoT cloud ecosystem with more than 20 cloud providers with integrated TI technology solutions.”

Gil believes IoT adoption is taking place at a much faster pace because cloud technology is available at a cost-effective price. But more work is needed to simplify the complex for further IoT growth.

5. Security is crucial for widespread adoption

Gil sees security of the entire system as the biggest barrier to widespread adoption of the IoT. With more devices becoming ‘smart,’ it will enable more potential security breach entry points. Our teams are looking at ways to build the most advanced hardware security mechanisms while keeping them small, low cost and low power. On top of that, we are investing heavily in integrated security protocols and security software to make security implementation as simple as possible for customers.

“We are reducing the barrier for adding advanced security capabilities to IoT products,” Gil said.

6. IoT needs to be made easy for inexperienced developers

At first, IoT technology was predominantly used by technology companies. But today – and even more so in the future – the IoT will be included in industries with limited technological background.

For example, take a faucet manufacturing company. Until now, electrical engineers may never have worked at a faucet manufacturing company because there was no need. But if the company wanted to make Internet-connected shower heads, the investment in manpower and time would be significant. Thus, IoT technologies must be easy to add to existing and future customer products without the need to have network and security engineers on staff.

“These companies don’t need to do the level of investment of an Internet technology company to learn the technology, because now they can get the technology ready for them from companies like TI,” Gil said.

While more and more aspects of our lives are being connected, and as the IoT continues to proliferate, a lot more work must be done. For TIers like Gil, it is hard work that is well worth the time and effort in the end.

“It’s about the massive outlook for improving our lives – from all angles. This includes consumer convenience and lifestyle in our homes and cars and efficient factories that will ultimately make the world a better place,” he said.

To learn more about our efforts in IoT, click here.


Power Tips: Multiphase supplies can save space


When I started at TI, one of the first power supplies that I worked on was a high-current two-phase buck power supply for a processor core. The current was 40A – pretty large at the time, and too high to implement in a single stage. Most power-supply designers look to multiphase applications to split up high-current rails into stages that are more manageable in terms of power dissipation and size. You can also apply the same principles to lower-current systems to greatly reduce the size, while maintaining the other benefits of multiphase converters.

To illustrate how the multiphase approach compares to conventional single-stage designs, let’s look at a power-supply design for DDR4 memory. The input voltage for this design is 12V ±10% and the output supplies 1.2V at up to 6A. I’ll compare two Low-Power DDR Memory Power Supply reference designs, both available on the TI Designs website. One uses the TPS62180 in a two-phase configuration, while the other uses the TPS53513. Figure 1 shows the two boards tested.

Figure 1: The TPS62180 (top) and TPS53513 (bottom)

The total size for the two-phase solution is 10mm by 15mm, while the single-phase solution is 30mm by 15mm. You could optimize the components for both solutions further to reduce the space, because the size is really driven by the magnetics. The two-phase solution uses two 2mm-by-1.2mm inductors, while the single-phase solution uses one 7mm-by-6.5mm inductor. The two smaller inductors take up much less volume and are lower in cost than the single inductor.

Clearly, you can reduce space by going to a multiphase approach. But what about the efficiency? Figure 2 compares both supplies.

Figure 2: Efficiency comparison of the TPS62180 and TPS53513

Full-load efficiency for the two-phase solution comes in at just under 80%, while the single-phase solution is around 87%. So there is a penalty in efficiency for greatly shrinking the size.

One point not clear from the graph above is the light-load performance. The efficiency at 15mA for the two-phase supply is 50%, while for the single phase it is 35%. This represents a difference of 20mW of lost power. While the difference is small, in a portable application this can lead to longer battery life.

Power-supply designers are always pushed to make smaller, more efficient designs. By taking a two-phase approach, you can save significant space, but at the sacrifice of full load efficiency. If efficiency is still the most important factor, there are solutions for that as well.

Special thanks to Ryan Manack for helping with the two designs and testing.


The value of wettable flank-plated QFN packaging for automotive applications


To ensure that cars meet today’s demand for safety and high reliability, the automotive industry requires original equipment manufacturers (OEMs) to perform 100% automatic visual inspection (AVI) post-assembly. In the case of quad-flat no-lead (QFN) packages, there are no easily viewed solderable exposed pins or terminals that enable you to determine whether or not the package soldered successfully onto the printed circuit board (PCB). The package edge has exposed copper for the terminals; this copper is prone to oxidation, which makes sidewall solder wetting difficult.

 

With QFN packages, sidewall solder coverage varies from 50-90%. OEMs must incur additional costs due to yield issues from false assembly failures, along with genuine failures where the assembly process has produced poor solder joints. Using an X-ray machine to check for a good, reliable solder joint adds further expense, and such a machine may not always be available.

 

To resolve these poor-solderability issues for automotive and commercial component manufacturers, the wettable flank process was developed. It reduces rework, improves yield and lowers inspection times. TI’s LM53600-Q1 and LM53601-Q1 automotive DC/DC buck regulators are available in a QFN package that uses a wettable flank process approved by many of the largest automotive OEMs.

 

TI adopted special lead plating (SLP) as an additional step during the assembly process, where the package is step-cut and then re-plated with matte tin on half of the sidewall. See Figures 1 and 2.

 

Figure 1: Cross-section comparison between a standard QFN and a sawn-and-plated QFN with wettable flanks

Figure 2: The partial cut and re-plating of tin on the half of the sidewall of a QFN package – section enlarged on right

 

Tin plating provides a protective cover over the exposed copper. During the PCB assembly process, the solder joint will extend from the underside of the pad up the sidewall, resulting in an enhanced solder joint between the component and board. AVI can now assess a reliable solder joint with improved yields and improved reliability, now that a sound electrical connection exists between component and PCB.

 

Figures 3 through 6 highlight the solder joint between a QFN lead frame and a PCB with a clear exposed toe fillet, which assists with AVI and removes any false assembly failures.

 

Figure 3: Standard QFN package side view

 

Figure 4: Toe fillet on standard QFN

Figure 5: Standard lead frame package sidewall

 

Figure 6: Toe fillet on standard lead frame package QFN

 

In summary, the wettable flank process adds this inspectability with no compromise in performance or quality. In this example, TI’s LM53600-Q1 and LM53601-Q1 automotive DC/DC buck regulators provide a reliable, visually verifiable solder joint and are able to pass the stringent 100% AVI requirements of the automotive industry today.

When to use load switches in place of discrete MOSFETs


Before people knew about electricity, they used candles for light. While this was a common way to see in the dark, the invention of the light bulb proved to be a better solution.

Much like a candle, the most common approach to switching a load is to use a power MOSFET surrounded by discrete resistors and capacitors (and a bipolar junction transistor (BJT)/second FET for controlling the power MOSFET). But in most cases, using a fully integrated load switch has significant advantages.

Where to find load switches in your system

A typical system involves a power supply and multiple loads that require various load currents such as Bluetooth®, Wi-Fi or processor rails. In most cases, the system must independently control which loads are on, when they are turned on and how quickly they turn on. You can implement this kind of power switching, shown in Figure 1, using a discrete MOSFET circuit or an integrated load switch.

Figure 1: Power switching from one power supply to multiple loads

A discrete MOSFET circuit contains several components to control the turn-on and turn-off of a discrete power MOSFET. You can enable or disable these circuits by using a general-purpose input/output (GPIO) signal from a microcontroller. Figure 2 shows several such circuits.

Figure 2: P-channel MOSFET (PMOS) discrete circuits

You can also use load switches to open and close the connection between the power rail and the corresponding load. These integrated devices have several benefits over their discrete counterparts. Figure 3 shows a load switch circuit.

Figure 3: Typical load switch circuit

Size advantages

One advantage of using a load switch solution is the reduced number of components and solution size. Load switches are designed to integrate components into packages that can be smaller than even the MOSFETs themselves. Figure 4 compares the size of a PMOS solution with an equivalent load switch. The smaller size of the load switch makes it ideal for even the most space-constrained applications.

Figure 4: Size comparison between the TPS22965 and an equivalent discrete solution

Feature advantages

There are also several features integrated into load switches that you won’t find in discrete circuits. To add reverse current blocking to a discrete solution, you would need an additional MOSFET to create a back-to-back configuration, effectively doubling the solution size. The TPS22910A and TPS22963C are just two examples from TI’s load-switch portfolio that come with this feature already built-in.

Quick output discharge (QOD) is a standard feature in most TI load switches that discharges the output voltage (VOUT) through an internal path to ground when the switch is disabled. Figure 5 illustrates this feature.

Figure 5: An illustration of QOD

QOD provides a known state on the output and ensures that all loads have been discharged and are turned off.

The TPS22953 and TPS22954 implement a power good feature that can signal when VOUT has charged to 90% of its final value. You can feed this signal to the enable pin of downstream modules so that they will turn on when the voltage rail powers up. You can also use the power good feature for power sequencing, enabling one load switch and having multiple rails come up in a specific order.

It’s safe to say that the invention of the light bulb made seeing in the dark an easier task, much like how an integrated load switch can remove the challenge of designing a compact and power-efficient circuit. So put out the candle and light up your power-switching design with an integrated load switch.


Energy harvesting solutions enable expanded 4-20mA current loop functionality

Sensing and measurement is becoming more and more important in the industrial sector. From both a factory and process automation standpoint, sensor transmitters are being added to improve system intelligence, detect faults and drive efficiency. In process...

WiLink™ 8 modules light up the industrial IoT night


Wireless connectivity technologies are widely used in the consumer market all around us. Today, these opportunities are also within reach of the industrial market, replacing the standard cables of the past. The concept of the industrial Internet of Things (IIoT) has become increasingly popular over the last few years. Recently, the TI WiLink™ 8 Wi-Fi® + Bluetooth® combo-connectivity module was used by a well-known industrial electrical company in China.

The end equipment that uses the WiLink 8 module monitors the status of, and records data from, a power line. A Sitara™ AM335x processor serves as the main controller for this system, and data is wirelessly transmitted to the server using the WiLink 8 device. Leveraging the WiLink 8 solution’s Wi-Fi connectivity, users can control and diagnose the modules hanging on the electrical wire, which would traditionally be out of reach, via a tablet or cell phone.

Our WiLink 8 device brings a new way of working in traditional industrial businesses, such as the electric company discussed here. Companies can now access timely recorded data, either from their server or the cloud, for real-time data analysis, such as preventative maintenance, which brings extra value and selling points for their end products.

Below are some key advantages of the WiLink 8 module in this solution:

Seamless cloud connectivity

  • Can provide better stability, robustness and high wireless throughput, which is a good choice for the industrial market
  • Fully integrated connectivity solution with TI’s Sitara AM335x processor greatly reduces development time and resources while enabling a high-performance platform
  • Industrial temperature  range and precise time synchronization features
  • WiLink 8 combo connectivity modules make the development process easier and do not require RF expertise from developers
  • TI provides software, tools, reference designs, technical support and more to speed development time 

Our WiLink 8 module provides an amazing IIoT user experience in this power automation application but can also be used in applications such as surveillance, preventative maintenance and much more.  Which application areas for WiLink 8 solutions would you like to learn more about?

A battery driver you can count on to keep you going


When that fancy new drone of yours is busy capturing 4K ultra-high-definition video 2,000 feet in the air, the last thing you want to worry about is whether it has enough “juice” to finish your new YouTube masterpiece or land safely.

We’ve got your back – a solution that minimizes power drain, voltage spikes and overheating in motor-driven devices.

“We depend on the batteries in these expensive gadgets to be reliable and keep their charge for as long as possible,” said Allen Chen, product marketing and applications manager. “You don’t want to be flying that drone over a lake and suddenly have to put on your scuba gear just to get it back.”

The same reliability is needed for electric scooters and other light, electric-mobility vehicles, robotics and cordless vacuum cleaners. The hefty battery packs in these devices often have huge transient swings that can cause battery voltages to jump up to 200 percent above their normal range. The packs also face a variety of abuse conditions, such as overcharging and overheating.

To help solve these challenges, this week we unveiled the bq76200 – a battery-pack front-end field-effect transistor (FET) driver aimed at portable industrial systems and other products that may at times see up to 100 volts. The bq76200 comes in a single chip that controls both charging and discharging while consuming very low power.

“The bq76200 is designed for today’s modern industrial demands. It’s incredibly robust,” Allen said. “Unlike today’s existing low-side drivers, a high-side battery driver like the bq76200 ensures the system can constantly communicate with monitors or fuel gauges inside the battery, which goes a long way towards preventing any potential mishaps from occurring.”

The bq76200 is also a great fit for applications such as portable medical and industrial systems and other cordless products. In portable medical applications, it’s especially important to keep system noise down, and the bq76200’s high-side drive ensures that the battery won’t contribute any interference.

Charging your product with a single battery pack

For engineers, this advanced high-voltage solution offers advanced protection FET drive and control, suitable for a range of application voltages from a small 18V cordless drill to 100V energy storage systems. It can greatly enhance a battery design and also complements our battery management products such as the bq76940 family and the bq78350-R1 companion fuel gauge. See the datasheet.

“It offers you a clear path to design low-power, flexible battery management solutions that are always communicating,” Allen said. “Best of all, in a single battery pack, we have the opportunity to now offer a comprehensive chipset solution, including this driver, our battery monitor, fuel gauge, companion NexFETs and more.”

The bq76200 is available in the TI Store and through the company’s authorized distribution network. The driver is packaged in a 16-pin TSSOP package and is priced at $1.69 USD in 1,000-unit quantities.

Read our blog to learn more about the bq76200, “Big batteries? Take a walk on the (high) side.” Also, watch these videos: TI’s bq76200 100V Battery High-Side FET Driver and Introduction to the bq76200.

Disentangling RF amplifier specs: amplifier spot noise vs. noise figure


In the world of high-speed amplifiers, noise is commonly represented one of two ways. For radio frequency (RF)-oriented amplifiers, such as low-noise amplifiers (LNAs), radio frequency power amplifiers (PAs) and RF gain blocks, noise is usually given as a noise figure value. For high-speed operational amplifiers (op amps) and fully differential amplifiers (FDAs), noise is typically represented as voltage spot noise, which is sometimes also accompanied by current spot noise.

Both noise figure and spot noise are useful for their own reasons, but can be confusing if you are not familiar with one or the other. In order to reduce the mystery between the two specs, let’s take a look at how they are defined and related to one another.

Spot noise is the more straightforward of the two specs: It is simply a way to represent the noise of the amplifier as a source at one of the summing inputs. It is defined as the amplifier noise integrated over a one hertz bandwidth. The units are typically nanovolts per root hertz for voltage noise and amps per root hertz for current noise.

Voltage spot noise is represented by a voltage source that can be applied at either input of the amplifier, while current spot noise is represented by either one current source applied to each input or by specific sources for the inverting and noninverting inputs, respectively. Most amplifiers typically have only a voltage-noise value or one voltage- and one current-noise value. This is because the difference in current noises at the inputs of the amplifier is usually minimal; and in general, current noise is insignificant compared to voltage noise. Current noise only becomes a dominating factor in current feedback-style amplifiers, or when the feedback resistor (RF) becomes very large. Figure 1 shows an op-amp circuit with equivalent amplifier voltage- and current-noise sources.

Figure 1: Op amp showing equivalent input-voltage and noise sources

Since spot noise is an absolute measurement, you can use it to easily compare the noise contributions of different amplifiers independent of their configuration. However, not all amplifier data sheets include spot noise measurements. Sometimes noise measurements are represented only as a noise figure. Noise figure can be a very useful specification, but you must first understand how it is defined in order to use it properly.

The key aspect to understand about noise figure is that it is a measurement made relative to the source impedance of the system’s input signal. This is true for noise-figure measurements of any device, not just amplifiers. The noise figure is defined as the ratio of the signal-to-noise ratio (SNR) at the input of a device to the SNR at the output, expressed in decibels (Equation 1):

NF = 10 × log10(SNRIN / SNROUT)                              (1)

Thinking about this equation from a conceptual standpoint, it becomes easy to see how the noise figure becomes dependent on the source impedance of the system. Expressing the SNR in simple terms of signal, noise and gain, Equation 2 shows that increasing the source noise (most easily done by increasing source impedance) reduces the noise figure:

 

NF = 10 × log10[(SIN/NSOURCE) / ((G × SIN) / (G × (NSOURCE + NAMP)))] = 10 × log10(1 + NAMP/NSOURCE)                              (2)

This conclusion is backed up by data when computing the noise figure for a specific amplifier. Figure 2 shows the noise-figure values for the LMH5401 FDA with four different source resistances. In Figure 2, it is easy to see that the noise figure indeed decreases as the source impedance increases, even though the amplifier voltage and current spot noises have not changed.

Figure 2: LMH5401 example of noise figure dependency on source impedance
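To see how this trend falls out of the math, here is a minimal sketch using the textbook single-stage expression F = 1 + (en² + (in·RS)²)/(4kT·RS), with NF = 10·log10(F). The spot-noise values are placeholder assumptions rather than LMH5401 data-sheet numbers, and the expression ignores feedback-network noise and matching details, so treat it as a trend illustration only.

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 290.0          # standard noise temperature, K

en = 1.4e-9        # assumed amplifier voltage spot noise, 1.4 nV/rtHz
i_n = 3.0e-12      # assumed amplifier current spot noise, 3 pA/rtHz

for rs in (50, 100, 200, 500):                           # source resistances in ohms
    f = 1 + (en**2 + (i_n * rs)**2) / (4 * k * T * rs)   # noise factor (linear)
    nf_db = 10 * math.log10(f)
    print(f"Rs = {rs:>3} ohm -> NF = {nf_db:.2f} dB")
```

Even though en and in are fixed, the computed NF drops as RS grows, because the source’s own thermal noise (4kT·RS) grows along with it.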

From this analysis, it’s clear that noise figure is more involved than a spot-noise measurement. So why would anyone use it in the first place? Employed correctly, noise figure can be a very useful tool for quickly determining total system noise. If a system comprises multiple cascaded blocks with identically matched source and load impedances, you can use the noise figure of each block to calculate the total noise figure of the system with the Friis formula for noise. Equation 3 shows the Friis formula for blocks 1 through n of a system, written in terms of each block’s linear noise factor F (derived from its noise figure, NF) and linear gain G.

F_TOTAL = F1 + (F2 − 1)/G1 + (F3 − 1)/(G1 × G2) + … + (Fn − 1)/(G1 × G2 × … × G(n−1)), where F = 10^(NF/10), and NF_TOTAL = 10 × log10(F_TOTAL)   (3)
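As a sketch of how the cascade calculation works in practice, the snippet below converts each block’s noise figure to a linear noise factor, applies the Friis formula, and converts the result back to decibels. The block values in the example are hypothetical.

```python
import math

def cascade_noise_figure(blocks):
    """blocks: list of (nf_db, gain_db) tuples, ordered from input to output."""
    f_total = 0.0
    g_running = 1.0                      # cumulative linear gain of the preceding blocks
    for i, (nf_db, gain_db) in enumerate(blocks):
        f = 10 ** (nf_db / 10.0)         # noise factor (linear)
        f_total += f if i == 0 else (f - 1) / g_running
        g_running *= 10 ** (gain_db / 10.0)
    return 10 * math.log10(f_total)

# Hypothetical chain: LNA (1 dB NF, 20 dB gain) -> mixer (8 dB NF, -6 dB gain) -> IF amp (6 dB NF, 15 dB gain)
print(f"Cascade NF = {cascade_noise_figure([(1, 20), (8, -6), (6, 15)]):.2f} dB")
```

Note how the first block’s noise figure dominates when its gain is high, which is why the lowest-noise stage is placed first in a receive chain.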

For amplifiers, both spot noise and noise figure are useful specifications when you need to understand an amplifier’s noise contribution to a system; just keep the differences between the two specs in mind. In general, spot noise is the easier to use when comparing noise performance because it is an absolute measurement. You can also use noise figure to compare noise performance, but remember that it is relative to the source impedance of the measurement. If the source impedances are the same, noise figure lets you compare the noise performance of different amplifiers, and it also provides an easy way to estimate the noise impact of a single block within a larger system.

Additional resources

 


High-voltage power innovation: Past, present and future


There are two things that I look forward to every day: the people I get to work with and the technology I get to play with. But I sometimes like to take time out to reflect on how far the industry has come in high-voltage innovation, and to think about what we’ll see next. Because my team is helping to drive innovation in high-voltage power conversion, I have a front-row seat to watch how our customers take advantage of new power products to create system-level designs that make a true difference in saving energy and innovating in power.

High-voltage power conversion often means something different to different people. At TI, we generally use the term to mean a silicon-based device whose primary purpose is to convert or manipulate a voltage rail of 100V or higher.

For many, innovation in high voltage may seem trivial, or maybe a little “been there, done that”— especially if you think about how far integrated circuits (ICs) have come since the Jack Kilby days. But nothing could be further from the truth.

Take gate drivers for example. Traditional circuits use gate-drive transformers, which can take up a lot of board space. New IC technology, such as TI’s 600-V UCC27714, replaces the need for a gate-drive transformer and shrinks the board footprint by over 50%, making it ideal for offline AC/DC power supplies used in server, telecom and industrial designs.

Or look at how far gallium nitride (GaN), a technology that has been around since the 1970s, has come, and what its potential in power electronics looks like. The future bodes well for GaN as GaN-based power devices continue to reach levels of reliability and ease of use that power designers can count on. Take the LMG5200 half-bridge power-stage GaN module, for example. By integrating a custom GaN driver and two GaN FETs, all mounted on an encapsulated laminate, the LMG5200 strips away the complexities of designing with GaN for a true plug-and-play experience.

As I look toward 2016, I get even more excited about what’s coming next, particularly exploring more reliable GaN drivers and integrated FET-plus-driver modules.

However, innovation in this space doesn’t simply come from a patent-pending circuit design, some new small geometry process node or a complex multidie packaging architecture. It comes from all of those innovations working in unison inside an organization that has amassed a comprehensive understanding of complex systems for over 30 years.

Have I mentioned power-supply design yet? A large share of high-voltage applications is in power supplies, but as I’ve personally come to appreciate, having the best IC with no comprehension of the total power-supply solution is meaningless. To put it another way, a 99.9% efficient IC paired with a suboptimal transformer doesn’t solve total-solution challenges. This adds “transformer and power-supply design expertise” to the list of innovations required for success in the high-voltage space. TI has seen some great power-supply engineers over the years, from Jack Kilby to Dave Freeman to Bob Mammano. I’m happy to say that tradition will continue, as the next wave of great engineers has already arrived.

To that point, when I look forward into 2016, I’m most excited to be part of a diverse team of application engineers, system designers, IC designers, power-supply designers, process engineers and packaging engineers, all coming together to create even more applications using high-voltage power technology. The future certainly looks bright for high voltage innovation.

What do you think the future holds for high voltage innovation?

Additional resources:

TI's new DSP Benchmark Site


To help evaluate the performance of our processors, we have created a new benchmark site that presents benchmark data from TI’s optimized libraries as well as independent benchmarking results. These benchmarks help illustrate the benefits of DSP cores when used to implement complex mathematical algorithms.

There are two main areas on the benchmarking site:

  • Core benchmarks focus on the performance of a single core. Benchmark information for the C674x DSP, C66x DSP and ARM® Cortex®-A15 cores is shown here.
  • Device benchmarks illustrate performance at the device level. This shows how performance scales with the number of cores used and takes into account memory, data movement and cache effects.

There is also a TI DSP benchmarking application report that shows how to duplicate the core benchmarks shown. This document details the hardware, tools and libraries used in the benchmarking process and provides modified linker command files that can be used to replicate the results.

As we discussed in our third DSP Breaktime Video (see the 3:40 mark), comparing benchmarks between different types of processors can be tricky, especially when it comes to FPGAs. To paraphrase Arnon: “You have to be very careful making comparisons based on isolated benchmarks; you need to compare apples to apples, not apples to oranges.”

Our benchmarking page is just getting started. What other benchmarks would you like to see?

 

‘Agribot’ enhances fine wine with high tech


In an innovative blend of fine wine and high tech, a robot developed by students under the direction of a Kilby Labs researcher someday may help wine-growers and other farmers protect their crops from disease.

Built on a plywood platform mounted atop wheels from an old electric wheelchair, the agricultural robot – or agribot – is designed to drive up and down rows of vineyards to spray fungicide that will keep grapevines healthy. The automated project is based on a TI LaunchPad™ development kit, some energy-harvesting circuits, motor drivers and the GPS app in a smart phone.

Leo Estevez, who winces every spring when the weather forecast mentions rain, hopes the agribot will be rolling up and down the rows of his three vineyards when the vines begin growing and producing grapes for his two varietals next spring. He loves to make – and drink – his own wine, so he does what he can to keep his delicate grapevines healthy.

“It rains a lot in North Texas,” Leo said. “If you’re going to farm organically, then you have to spray natural stuff on the plants every time it rains to keep fungus from destroying them. This was a terrible year for that because it rained almost every day during the first half of the year.”

Leo lives with his wife and daughter on several acres in the countryside east of Dallas, home to one of his vineyards. Because he could give that vineyard the attention it needed after torrential rains during early 2015, he was able to keep it alive. Two other vineyards – on acreage an hour-and-a-half farther east – sustained so much damage as a result of the rains that they’ll have to be replanted.

In addition to his passion for growing grapes and making wine, Leo and his family raise chickens, goats, vegetables and a couple of horses. Their pastoral home is a long way – literally and figuratively – from Leo’s high-tech, high-demand job at Kilby Labs, our applied research center where he works as an embedded wireless systems analyst and where, as a sideline, he guides university seniors through their design projects.

Something useful

It was while working with students on a recent senior design project that Leo hit upon an idea for an agricultural robot that could solve his vineyards’ fungus problem while helping students develop a ground-breaking innovation.

The project that evolved into the agribot began as something else entirely – a Wi-Fi®-connected system to control window blinds in buildings. We hired two students from that initial team, and they completed a design based on their project that we recently introduced to the market.

But other members of the team wanted to build on the original idea and create something that would have wheels and move instead of just opening and closing window blinds. The team added more members, built a platform and repurposed the motor drivers to turn wheels. They added a GPS-enabled smart phone to control the robot.

Then the semester ended and, while the team had done what they set out to do, a few decided to spend last summer extending the idea further.

“They had built a little car, but they wanted to turn it into something useful,” Leo said. “I had a problem, which was that I needed an automated way to spray my vineyards. I told them that if they could have the robot navigate up and down rows in the vineyard and fumigate, it would save me and – potentially – a lot of other people a lot of time.”

Vision

So the team members – including high-school volunteers – spent last summer developing an agribot that’s controlled remotely with a smart phone. They built an inexpensive prototype, tested it in the vineyard at Leo’s home and launched a crowd-funding campaign, which generated interest but not the money they needed to develop the agribot into a full-fledged business. The development work became open source when they published it on a website for makers.

“In farm work, there are a lot of tedious tasks,” said Guang Zhou, a doctoral student at the University of Texas at Dallas who programmed the agribot. “We want to help people doing tedious tasks with an inexpensive way to do the job.”

Anita Dey Barsukova, a high school sophomore who helped the team with social media and publicity, sees opportunities to add tools such as a hedge trimmer to the agribot, in addition to a fungicide sprayer.

Today, yet another university team is building on the work of previous teams and developing systems to drive an automated golf cart equipped with ultrasonic scanning for navigation. It could eventually handle tasks ranging from security and surveillance to towing trailers autonomously.

“They’ll solve another control problem that’s different than what the previous teams worked on,” Leo said. “Every team is a different set of students, and they have their own vision for what they want to do and where they want to go.”

Making makers

People often ask Leo about his motivation for helping students with their projects.

“A lot of people ask me why I bother with these projects,” Leo said. “My job at Kilby Labs has nothing to do with these projects. There is a lot of value in enabling people to create new businesses, which these projects could be. What motivates me is not just creating new products, but creating makers or engineers who can create new products.”

RS-485 basics: the RS-485 receiver


The last post in this series described the structure and basic operation of the RS-485 driver. In this post, I’ll discuss the RS-485 receiver and the relevant parameters in the RS-485 standard.

RS-485 transceivers such as the SN65HVD7x half-duplex family have an equivalent receiver input structure like the one shown in Figure 1. The receiver input circuitry consists of electrostatic discharge (ESD) protection, a resistor-divider network and a biasing current, all of which play a role in shaping the magnitude and common-mode voltage of the signal that reaches the differential comparator.

Figure 1: Differential receiver input structure

ESD protection

The most important thing to note in terms of ESD protection for half-duplex transceivers is that the driver and receiver share the same ESD protection, saving space. But for a full-duplex transceiver, both the driver (Y and Z pins) and receiver (A and B pins) need independent ESD protection. This means you’ll need twice the area to support ESD protection.

Resistor-divider network

The resistor-divider network on the A and B inputs serves two functions. The first function is to attenuate large signals that are beyond the range of the receiver’s supply voltage. This attenuation factor is necessary because the RS-485 standard states that voltages as low as -7V and as high as +12V can exist on the bus terminals to account for ground-potential differences that may exist between transceivers on a shared network. These high voltages need attenuation down to voltages that 3.3V or 5V transceivers can handle. A typical attenuation factor is on the order of 10-to-1, greatly reducing the magnitude of the voltages seen internally at the comparator.

The second important function of the resistor-divider network is to bias the bus voltages toward VCC/2. This is necessary because simply attenuating a negative signal will not bring the voltage to within the range between the receiver’s local ground and VCC. Attenuating the signal and biasing it toward VCC/2 prevents the inputs of the comparator from saturating, enabling the comparator to properly evaluate the differential voltage between the A and B terminals.

This ability to bias the voltages also allows the system to perform without a common ground connection between the remote RS-485 driver’s ground and the local RS-485 receiver’s ground. Figure 2 shows how an input signal is both attenuated and biased from the bus terminals to the input of the comparator through the resistor-divider network.

Figure 2: Receiver input-voltage attenuation
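As a rough sketch of what the divider accomplishes, assume a 10-to-1 attenuation toward a VCC/2 bias point; the ratio and supply voltage below are illustrative assumptions, not SN65HVD7x internals.

```python
VCC = 3.3                # assumed receiver supply voltage
ATTEN = 10.0             # assumed attenuation factor of the divider
v_bias = VCC / 2         # internal bias point the divider pulls toward

for v_bus in (-7.0, 0.0, 5.0, 12.0):      # RS-485 common-mode extremes and typical levels
    v_internal = v_bias + (v_bus - v_bias) / ATTEN
    print(f"bus = {v_bus:+5.1f} V -> comparator node = {v_internal:4.2f} V")
```

Even the -7V and +12V extremes land comfortably between ground and VCC at the comparator input, which is the whole point of attenuating and re-biasing the signal.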

The series combination of R1 and R2 || R3 (resistors R2 and R3 in parallel) is the primary factor that determines the receiver’s input impedance. The RS-485 standard specifies that the input leakage current of a compliant receiver must remain within the shaded region shown in Figure 3 when applying -7V to +12V to the input terminals for both powered and unpowered conditions.

Figure 3: RS-485 receiver input I-V characteristic

The trade-off in receiver attenuator design is that lowering the leakage current requires higher resistor values, which increases the physical size of the resistors in the attenuator. Larger components create a more expensive die and more parasitic capacitance. This stray capacitance, together with the input capacitance of the comparator, sits in parallel with the resistance of the attenuator, creating a low-pass filter that limits the receiver’s maximum bandwidth. There is therefore a balance between input leakage current and resistor values on the one hand, and bandwidth and attenuator size on the other. The SN65HVD78, the highest-speed device in the SN65HVD7x family, also has the highest bus input-leakage current because it required a lower-resistance attenuator circuit.
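A minimal sketch of the leakage side of this trade-off uses a simple linear model, I = (VBUS − VBIAS)/RIN, with two assumed input resistances; real receiver I-V curves (Figure 3) are not perfectly linear, so this is only a first-order illustration.

```python
V_BIAS = 1.65            # assumed internal bias point (VCC/2 for a 3.3-V device)

for r_in in (96e3, 30e3):                 # a high-impedance input vs. a lower-impedance, higher-speed input (assumed values)
    for v_bus in (-7.0, 12.0):            # RS-485 common-mode extremes
        i_leak = (v_bus - V_BIAS) / r_in
        print(f"Rin = {r_in/1e3:4.0f} k, Vbus = {v_bus:+5.1f} V -> I = {i_leak * 1e6:+7.1f} uA")
```

In this toy example, cutting the attenuator resistance to gain bandwidth roughly triples the leakage current at the bus-voltage extremes, mirroring the behavior described for the SN65HVD78.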

Biasing current

Figure 4 shows the effect of the current source connected between the B input terminal of the comparator and ground. By using the superposition principle, you can see that the current source will cause a voltage drop across R4 and R6 connected to the negative-input terminal of the comparator. This creates a fail-safe bias voltage that causes the negative terminal to have a lower voltage than the positive terminal and the output of the comparator to be in a known high state when applying a 0V differential voltage to the A and B pins.

This fail-safe biasing guarantees that the R output will be high during bus-idle or bus short-circuit conditions. In the input-threshold (VIT) specifications for the SN65HVD7x family, the positive-going threshold is typically -70mV and the negative-going threshold is typically -150mV. Without fail-safe biasing, the thresholds would be centered around 0V and the receiver output would be in an indeterminate state with a 0V differential input voltage.

Figure 4: Effect of the offset bias current
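A toy model of the receiver decision makes the fail-safe behavior concrete. The threshold values below are the typical numbers quoted above; the hold-last-state behavior inside the hysteresis band is a simplification of the real comparator.

```python
VIT_POS = -0.070   # positive-going threshold, V (typical, from the text)
VIT_NEG = -0.150   # negative-going threshold, V (typical, from the text)

def receiver_output(v_id, previous_state):
    """v_id = V(A) - V(B); returns the logic state of the R pin."""
    if v_id > VIT_POS:
        return "high"
    if v_id < VIT_NEG:
        return "low"
    return previous_state      # inside the hysteresis band: hold the last state

# An idle or shorted bus gives a 0-V differential, which sits above VIT+,
# so the receiver output is forced high regardless of its previous state:
print(receiver_output(0.0, "low"))    # -> high
print(receiver_output(-0.2, "high"))  # a true "low" differential still reads low
```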

In summary, understanding the basic input structures of an RS-485 receiver should help you understand important receiver electrical specifications like input-leakage current and positive and negative input thresholds. I hope you now understand the trade-offs that exist and where the numbers come from.

Stay tuned for the next RS-485 blog, which will cover the topic of unit loading. As always, feel free to post any comments or questions below.

Additional resources:

How to win the TIIC: Advice from past winners

Friends since high school, Sean Lyons & Troy Bryant entered the 2015 TI Innovation Challenge North America design contest as two University of Florida students with a passion for music and a passion for changing the world. First place winners of ...(read more)