
Advanced safety and driver assistance systems pave the way to autonomous driving


To make self-driving cars a reality, many legal, social and structural hurdles will have to be overcome. While almost all of us have read about “auto-piloted” cars in science fiction or seen them in movies, actually trusting a machine – a computer on wheels – to drive us around in all conditions is another story. The question of liability in case of a collision will also need to be scrutinized, as there may no longer be a human involved (and we humans are usually the ones making mistakes or breaking the law). In an ideal scenario, every car on the road would have the self-driving function; unfortunately, it may take several car generations for this to become reality. Last but not least, for cars to communicate with their environment (car-to-infrastructure as well as car-to-car communication), large investments will be required to install and maintain the actual infrastructure and to standardize how the communication is done (so that everybody speaks the same “language”).

But for now, let’s focus on the technical feasibility of a self-driving car. Today, we take safety systems such as anti-lock braking systems (ABS) and airbags, as well as electric power steering and electronic engine management, for granted. These are the systems that let the car act (brake, steer, accelerate) – but what should it act upon? While advanced driver assistance systems (ADAS) are not yet commonplace in all cars, these systems will play a key role in the evolution from driving a car to being driven by one, as they give the car the equivalent of eyes.

As a foundation for ADAS, a wide range of new sensors will need to be deployed, specifically sensors for machine vision such as camera systems, radar and LIDAR. While these sensors give the car the needed situational awareness, interpreting the large amount of new data poses a challenge for the processing capability of the system. For example, processing each pixel of a camera image (low-level processing), identifying and recognizing objects of interest in that image (mid-level processing) and then making decisions based on those objects and their position and movement relative to the car (high-level processing) each require different processing capabilities. Typical microprocessor and digital signal processor architectures are specialized to perform one of those tasks very well but have limitations in the others. For machine vision, all of these steps have to be performed in sequence and in real time, which requires the kind of heterogeneous processing provided by TI’s TDA2x System-on-Chip (SoC) family.
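
To make this division of labor concrete, here is a minimal C++ sketch of such a three-stage pipeline, assuming a tiny synthetic grayscale frame. The threshold, findObjects and decide functions and their tuning values are simplified illustrations invented for this sketch, not the TDA2x API; on real hardware, each stage would be mapped to the core best suited for it (vision accelerator, DSP or general-purpose CPU).

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical 8-bit grayscale frame; a real system would ingest live camera data.
struct Frame {
    int width;
    int height;
    std::vector<uint8_t> pixels;  // row-major, width * height bytes
};

// A coarse "object of interest" produced by the mid-level stage.
struct Object {
    int x;
    int y;
};

// Low-level processing: uniform per-pixel work (here, simple thresholding).
// This kind of data-parallel loop maps well to a vision accelerator or DSP.
Frame threshold(const Frame& in, uint8_t level) {
    Frame out = in;
    for (auto& p : out.pixels)
        p = (p > level) ? 255 : 0;
    return out;
}

// Mid-level processing: turn bright pixels into candidate objects (a naive
// stand-in for real detection/recognition based on features or neural nets).
std::vector<Object> findObjects(const Frame& bin) {
    std::vector<Object> objs;
    for (int y = 0; y < bin.height; ++y)
        for (int x = 0; x < bin.width; ++x)
            if (bin.pixels[y * bin.width + x] == 255)
                objs.push_back({x, y});
    return objs;
}

// High-level processing: decide based on object positions relative to the
// car. A real system would fuse camera, radar and LIDAR tracks here.
enum class Action { Cruise, Brake };

Action decide(const std::vector<Object>& objs, int nearRow) {
    for (const auto& o : objs)
        if (o.y >= nearRow)  // low in the image ~ close to the vehicle
            return Action::Brake;
    return Action::Cruise;
}

int main() {
    // A tiny 4x4 synthetic frame with one bright blob near the bottom.
    Frame cam{4, 4, std::vector<uint8_t>(16, 10)};
    cam.pixels[14] = 200;

    Frame bin = threshold(cam, 128);  // low-level
    auto objs = findObjects(bin);     // mid-level
    Action a = decide(objs, 3);       // high-level

    std::cout << (a == Action::Brake ? "Brake\n" : "Cruise\n");
}
```

Even in this toy version, the three stages have very different workloads: the low-level loop is uniform and data-parallel, the mid-level stage is memory- and pattern-oriented, and the high-level decision is branch-heavy control logic – which is exactly why a single processor type handles the full pipeline poorly and a heterogeneous SoC does not.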

 

Now that the car is able to see, interpret what it sees and act on it, we have all the basic building blocks in place. To read more about autonomous driving, machine vision and heterogeneous processing, read this white paper: Making cars safer through technology innovation.

