Renesas Electronics Develops Camera Video Processing Circuit Block with Low Latency, High Performance, and Low Power Consumption for Automotive Computing SoC for the Autonomous-Driving Era

Achieves Low-Latency (70 ms) Vehicle Camera Video Processing and Industry-Leading Full-HD 12-channel Video Processing Performance with Only 197 mW Power Consumption

TOKYO — (BUSINESS WIRE) — February 2, 2016 — Renesas Electronics Corporation (TSE: 6723), a premier supplier of advanced semiconductor solutions, today announced the development of a new video processing circuit block for use in automotive computing system-on-chips (SoCs) that will realize the autonomous vehicles of the future.

Automotive computing SoCs for autonomous vehicles are required to integrate the functionality of both in-vehicle infotainment systems and driving safety support systems, and to operate both in parallel. In particular, driving safety support systems must be able to process video data from vehicle cameras with low latency to notify the driver of appropriate information in a timely manner. One issue developers of these systems face is the need to process large volumes of video data while also performing autonomous vehicle control functions, without delays or instability.

The newly developed video processing circuit block handles processing of vehicle camera video with low latency. It can perform video processing in real time on large volumes of video data with low power consumption and without imposing any additional load on the CPU and graphics processing unit (GPU), which are responsible for autonomous vehicle control. Renesas has manufactured prototypes of the new video processing circuit block using a 16 nanometer (nm) FinFET process. In addition to processing vehicle camera video with 70 ms latency, it delivers industry-leading Full-HD 12-channel video processing with only 197 mW power consumption.

Recently, in-vehicle systems that foreshadow the future emergence of autonomous vehicles, such as car navigation systems and advanced driver assistance systems (ADAS), have made significant advances that bring them closer to becoming automotive computing systems integrating the functionality of both in-vehicle infotainment systems and driving safety support systems.

Driving safety support systems are expected to perform cognitive processing based on video transferred from vehicle cameras, such as identifying obstacles, monitoring the status of the driver, and anticipating and avoiding hazards. With the appearance of devices such as the R-Car T2 vehicle camera network SoC from Renesas, it can be anticipated that video data from vehicle cameras will be transferred as encoded video streams, so driving safety support systems must decode the received streams. To perform cognitive processing correctly on images from wide-angle cameras, the video data must also be processed to correct for lens distortion. This video processing must be accomplished with low latency so that the system can notify the driver of appropriate information in a timely manner.
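
The announcement does not describe the distortion correction algorithm itself. As a rough illustration of what wide-angle correction involves, the sketch below remaps a barrel-distorted Full-HD frame onto a rectilinear grid using a simple radial model; the function name and coefficients are hypothetical and would in practice come from calibrating the specific vehicle camera.

```python
import numpy as np

def undistort_frame(frame, k1=-0.25, k2=0.05):
    """Correct radial (barrel) distortion by inverse mapping.

    frame: H x W grayscale numpy array from a wide-angle camera.
    k1, k2: hypothetical radial distortion coefficients; real values come
    from calibrating the actual camera and lens.
    """
    h, w = frame.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0

    # For every output (corrected) pixel, compute where it came from in
    # the distorted source image, then sample it (nearest neighbour).
    ys, xs = np.mgrid[0:h, 0:w]
    xn = (xs - cx) / cx                      # normalised coordinates
    yn = (ys - cy) / cy
    r2 = xn * xn + yn * yn
    scale = 1.0 + k1 * r2 + k2 * r2 * r2     # simple radial distortion model
    src_x = np.clip(xn * scale * cx + cx, 0, w - 1).astype(np.intp)
    src_y = np.clip(yn * scale * cy + cy, 0, h - 1).astype(np.intp)
    return frame[src_y, src_x]

if __name__ == "__main__":
    # Full-HD grayscale test frame, matching the resolution in the announcement.
    frame = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
    print(undistort_frame(frame).shape)      # (1080, 1920)
```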

On the other hand, in-vehicle infotainment systems are capable of interoperating with a variety of devices and services, including smartphones and cloud-based services, and therefore data from a large number of external video sources are input to the system. At the same time, it is becoming more common for vehicles to be equipped with multiple interior displays including rear-seat monitors. This means the system must be able to handle simultaneous display of multiple video signals. In-vehicle infotainment systems must have sufficient performance to process and display large volumes of video data in real time.

The newly developed video processing circuit block can decode video streams transferred from vehicle cameras and apply distortion correction, with low latency. It performs the complex video processing required by automotive computing systems, delivering real-time performance and low power consumption, while imposing no additional load on the CPU and GPU responsible for cognitive processing tasks.

Key features of the newly developed technology:

(1) Synchronous operation among video processors, combined with pipeline operation, for video decoding and distortion correction with 70 ms latency

Video codec processing basically consists of parsing processing, whose workload depends on the volume of encoded stream data, and image processing, whose workload depends on the image resolution. The newly developed video processing circuit block implements video encoding and decoding by using a stream processor for parsing processing and a codec processor for image processing. Since the data size of the typical video streams handled by in-vehicle infotainment systems varies greatly from frame to frame, the processing time required by the stream processor varies substantially from frame to frame, whereas the processing time required by the codec processor is the same for every frame. Consequently, the stream processor and codec processor must operate asynchronously, and the buffering this requires can introduce large delays.

The newly developed video processing circuit block has a synchronous operation mode that utilizes a FIFO placed between the stream processor and codec processor and can handle video streams that are roughly constant in volume from frame to frame, as is expected to be the case in driving safety support systems. It also has a mechanism whereby the codec processor outputs an interrupt to the CPU each time processing of a multiple of 16 lines has completed during frame processing, thereby allowing distortion correction to start in a later stage without waiting for frame processing to finish completely. This combination of synchronous operation and incomplete-frame pipeline operation achieves low latency of only 70 ms (a reduction of 40 percent compared with existing Renesas devices using the 28 nm process) from the reception of video streams to the completion of video decoding and distortion correction.
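
As a rough software illustration of this pipelining (not the actual hardware interface; the FIFO object, function names, and 1080-line frame model are assumptions), the sketch below shows a codec stage handing each 16-line slice to a distortion-correction stage as soon as it is decoded, instead of waiting for the whole frame:

```python
from collections import deque

LINES_PER_FRAME = 1080       # Full-HD frame height
NOTIFY_GRANULARITY = 16      # codec signals the CPU every 16 decoded lines

def decode_frame(stream_fifo, on_lines_ready):
    """Codec stage: take one encoded frame from the FIFO that sits between
    the stream processor and the codec processor, and notify the next stage
    each time a multiple of 16 lines has been decoded."""
    encoded_frame = stream_fifo.popleft()
    decoded_lines = []
    for line_no in range(LINES_PER_FRAME):
        decoded_lines.append(f"{encoded_frame}, line {line_no} decoded")
        if (line_no + 1) % NOTIFY_GRANULARITY == 0:
            # Stand-in for the interrupt to the CPU: hand the newly decoded
            # 16-line slice to the distortion-correction stage right away.
            on_lines_ready(decoded_lines[-NOTIFY_GRANULARITY:])
    return decoded_lines

def correct_distortion(lines):
    """Later pipeline stage: starts on each 16-line slice as it arrives
    (placeholder processing)."""
    return [line + " -> corrected" for line in lines]

if __name__ == "__main__":
    # Stream-processor output: frames of roughly constant size, as expected
    # for vehicle camera streams, queued in the inter-processor FIFO.
    fifo = deque([f"frame {i}" for i in range(3)])
    while fifo:
        decode_frame(fifo, correct_distortion)
```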

(2) 17 video processors of six different types, optimized for automotive computing systems to deliver industry-leading Full-HD 12-channel performance

The newly developed video processing circuit block integrates 17 video processors of six different types in order to achieve real-time and power-efficient video processing without imposing any additional load on the CPU and GPU. Stream processors and codec processors handle video encoding and decoding, rendering processors perform distortion correction, video processors perform general image processing, blending processors handle image composition, and display processors perform processing for displaying images on screens. The video processors are connected to each other via hierarchical buses.
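
The announcement does not disclose how the 17 processor instances are split across the six roles. Purely for illustration, the sketch below enumerates the six roles named above and a hypothetical per-channel pipeline built from them:

```python
from enum import Enum

class ProcessorType(Enum):
    """The six processor roles named in the announcement (the count of
    instances per role is not disclosed)."""
    STREAM = "parsing of encoded video streams"
    CODEC = "image-level encoding and decoding"
    RENDER = "distortion correction"
    VIDEO = "general image processing"
    BLEND = "image composition"
    DISPLAY = "output to in-cabin displays"

# Hypothetical per-channel flow assembled from those roles: decode the
# camera stream, correct it, post-process, composite, and display.
CHANNEL_PIPELINE = [
    ProcessorType.STREAM,
    ProcessorType.CODEC,
    ProcessorType.RENDER,
    ProcessorType.VIDEO,
    ProcessorType.BLEND,
    ProcessorType.DISPLAY,
]

if __name__ == "__main__":
    for stage in CHANNEL_PIPELINE:
        print(f"{stage.name:>7}: {stage.value}")
```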

Evaluation of prototypes of the video processing circuit block comprising these video processors, fabricated with a cutting-edge 16 nm FinFET process, confirms industry-leading Full-HD 12-channel performance (approximately a threefold improvement over existing Renesas devices using the 28 nm process).

(3) Combination of two types of data compression, lossless compression and lossy compression, to reduce memory bandwidth by 50 percent and achieve Full-HD 12-channel processing with industry-leading low power consumption of 197 mW

When performing the massive video processing required by Full-HD 12-channel video, data accesses to the memory are a major source of performance bottlenecks and power consumption. In addition, in automotive computing systems it is necessary to minimize the memory bandwidth consumed by video processing to avoid interfering with the cognitive processing performed by the CPU and GPU. It is essential not to inhibit the operation of driving safety support systems, which must maintain a high level of safety.

For this reason, image data stored in memory is compressed to reduce usage of memory bandwidth. By using both lossless compression, which does not alter the pixel values but requires a larger silicon area, and lossy compression, which alters the pixel values but requires a smaller silicon area, each applied according to the characteristics of the image processing involved, it is possible to reduce memory bandwidth by 50 percent in a typical video processing flow.

DDR memory has the additional issue that access efficiency drops when small blocks of data are accessed, which would negate the effective reduction in memory bandwidth. To avoid this, caching is used for video decoding, which involves large numbers of accesses to small blocks of data; this increases the DDR memory access size and reduces the effective memory bandwidth by 70 percent. Evaluation of prototypes fabricated with a cutting-edge 16 nm FinFET process confirms that this reduction in memory bandwidth results in a 20 percent drop in power consumption, proportional to the reduction in the volume of data transactions on the bus, resulting in industry-leading Full-HD 12-channel power consumption of 197 mW (60 percent less than that of current Renesas devices using the 28 nm process).
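
As a back-of-the-envelope view of those figures, the sketch below applies the quoted percentage reductions to baseline values; only the percentages come from the announcement, while the absolute baselines are hypothetical placeholders:

```python
# Only the percentage reductions below come from the announcement; the
# absolute baseline values are hypothetical placeholders for illustration.
BASELINE_BANDWIDTH_GBPS = 10.0   # assumed uncompressed memory bandwidth
BASELINE_POWER_MW = 246.0        # assumed power before the bandwidth savings

typical_flow_bw = BASELINE_BANDWIDTH_GBPS * (1 - 0.50)   # lossless + lossy compression
decode_path_bw = BASELINE_BANDWIDTH_GBPS * (1 - 0.70)    # decoding path with caching
power_mw = BASELINE_POWER_MW * (1 - 0.20)                # power tracks bus traffic

print(f"typical flow bandwidth : {typical_flow_bw:.1f} GB/s (50% lower)")
print(f"decode path bandwidth  : {decode_path_bw:.1f} GB/s (70% lower, effective)")
print(f"estimated power        : {power_mw:.1f} mW (about 20% lower, near the 197 mW figure)")
```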

The newly developed video processing circuit block will realize automotive computing systems integrating in-vehicle infotainment systems and driving safety support systems by enabling massive video processing without imposing any additional load on the CPU and GPU, with real-time performance, low power consumption, and low latency. Renesas intends to incorporate the new video processing circuit block into its future automotive computing SoCs to contribute to a safer and more convenient driving experience.

Renesas announced this technology on February 1 at the International Solid-State Circuits Conference (ISSCC), held in San Francisco from January 31 to February 4, 2016. In an accompanying demonstration, Renesas showed the processing performance of a test board with an SoC incorporating the newly developed video processing circuit block by playing back Full-HD 12-channel video content, together with a real-time display of the memory bandwidth reduction rate.

About Renesas Electronics Corporation

Renesas Electronics Corporation (TSE: 6723), the world’s number one supplier of microcontrollers, is a premier supplier of advanced semiconductor solutions including microcontrollers, SoC solutions and a broad range of analog and power devices. Business operations began as Renesas Electronics in April 2010 through the integration of NEC Electronics Corporation (TSE: 6723) and Renesas Technology Corp., with operations spanning research, development, design and manufacturing for a wide range of applications. Headquartered in Japan, Renesas Electronics has subsidiaries in 20 countries worldwide. More information can be found at www.renesas.com.

(Remarks) All registered trademarks or trademarks are the property of their respective owners.



Contact:

Japan
Renesas Electronics Corporation
Kyoko Okamoto, +81-3-6773-3001
Email Contact