SANTA CLARA, Calif., Aug. 21, 2020 /PRNewswire/ -- Socionext Inc. and a research group at the Osaka University Institute for Datability Science have jointly developed a new deep-learning method that enables image recognition and object detection in extremely low-light conditions. Led by Professor Hajime Nagahara, the development team merged multiple existing models into a new method that detects objects without the need to generate huge datasets, a task previously thought to be essential.
Socionext plans to incorporate this new method into the company's image signal processors to develop new SoCs, as well as new camera systems built around such SoCs, for automotive, security, industrial and other applications that require high-performance image recognition. The research will be presented at the European Conference on Computer Vision (ECCV) 2020, held online from August 24 through 28 (British Summer Time).
New Method Achieves the Goal of Improved Image Recognition Performance
A major challenge throughout the evolution of computer vision technology has been improving image recognition performance under poor lighting conditions for applications such as in-vehicle cameras and surveillance systems. A deep learning method that uses RAW image data from sensors, called "Learning to See in the Dark" [1], had previously been developed. However, end-to-end learning with this method requires a dataset of more than 200,000 images with more than 1.5 million annotations [2]. Preparing such a large RAW-image dataset is both costly and time-consuming.
Fig.1 Learning to See in the Dark / Challenges for RAW image recognition
The joint research team has proposed a domain adaptation method, which builds the required model from existing datasets using machine-learning techniques such as transfer learning and knowledge distillation. The new domain adaptation method resolves the dataset challenge through the following steps: (1) building an inference model with existing datasets; (2) extracting knowledge from that inference model; (3) merging the models with glue layers; and (4) building a generative model by knowledge distillation. This enables a desired image recognition model to be learned from existing datasets alone (Fig.2).
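The press release does not publish the implementation, but the two ingredients it names can be illustrated in a minimal sketch. Everything below is a hypothetical toy: `GlueLayer` stands in for the layers that adapt one pretrained model's features to another's input space, and `distillation_loss` is the standard temperature-scaled knowledge-distillation objective (KL divergence between softened teacher and student outputs), not the team's actual training code.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; a higher T softens the distribution."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()                 # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Knowledge-distillation signal: KL divergence between the teacher's
    softened output distribution and the student's, at temperature T."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher model
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

class GlueLayer:
    """Hypothetical 'glue layer': a small learned linear map that aligns
    the feature space of one pretrained model with the input space of
    another, so the two can be merged and trained jointly."""
    def __init__(self, in_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, size=(in_dim, out_dim))
    def __call__(self, features):
        return features @ self.W
```

In this sketch, training the merged model would mean updating the glue layer's weights to minimize `distillation_loss` between the merged model's predictions and those of the original teacher, so no new annotated RAW dataset is needed.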
Fig.2 Domain Adaptation Method
Using this domain adaptation method, the team built an object detection model, "YOLO in the Dark," that operates on RAW images taken in extremely dark conditions, based on the YOLO model [3] (Fig.3). The object detection model can be trained on RAW images using the existing dataset, without generating additional datasets. Whereas the existing YOLO model fails to detect objects even after the brightness of the images is corrected (a), the proposed method recognizes RAW images directly and detects the objects (b). The processing time of the new method is about half that of a straightforward combination of the previous models (c).
This "direct recognition of RAW images" by the method is expected to be used for object detection in extremely dark conditions, along with many other applications. Socionext will add this new method to its line-up of leading-edge imaging technology and SoCs for enabling advanced camera systems and applications requiring high-quality, high-performance image recognition.
European Conference on Computer Vision – ECCV 2020 on Wednesday, Aug 26, 2020
Presentation by Yukihiro Sasagawa, Socionext and Hajime Nagahara, Osaka University
"YOLO in the Dark - Domain Adaptation Method for Merging Multiple Models -"
Notes:
[1] "Learning to See in the Dark": CVPR2018, Chen et al.
[2] MS COCO dataset as an example
[3] YOLO (You Only Look Once): One of the deep learning object detection methods
About Socionext America Inc.
Socionext America Inc. (SNA), headquartered in Santa Clara, California, is the US branch of Socionext Inc. The company is one of the world's leading fabless ASIC suppliers, specializing in a wide range of standard and customizable SoC solutions for automotive, consumer, and industrial markets. Socionext provides customers with quality semiconductor products based on extensive and differentiated IPs, proven design methodologies, and state-of-the-art implementation expertise, with full support.
For product information, visit our website, e-mail [email protected] or call 1-844-680-3453. For company news and updates, connect with us on Twitter, Facebook and YouTube.
All company or product names mentioned herein are trademarks or registered trademarks of their respective owners. Information provided in this press release is accurate at time of publication and is subject to change without advance notice.
SOURCE Socionext America Inc.