Department News

The research team of Prof. Kuk-Jin Yoon, Department of Mechanical Engineering, developed an algorithm that can generate super-resolved images free of motion-blur effects, even in extremely low/high illumination conditions, by utilizing event cameras

Admin 2020.05.12

The research team of Prof. Kuk-Jin Yoon has developed a new super-resolution restoration technology using an event camera, which provides asynchronous event data with low latency in response to light-intensity changes at each pixel. Compared with conventional RGB cameras, event cameras can operate under extremely high/low illumination and high-speed motion. However, they have drawbacks: existing image-processing algorithms are difficult to apply to event data directly, and the resolution of event data is low due to sensor limitations. In this study, the research team of Prof. Kuk-Jin Yoon proposed deep neural networks (DNNs) that generate super-resolved images from event data while preserving the advantages of event cameras, along with novel methods to train the networks.

In collaboration with a research team at GIST (Prof. Jong-Hyun Choi and Ph.D. candidate Mohammad Mostafavi), Prof. Kuk-Jin Yoon led a study that proposed a supervised-learning-based DNN and training methodology (paper title: Learning to Super Resolve Intensity Images from Events). Another study, led by Ph.D. candidate Wang Lin (KAIST) and Prof. Kuk-Jin Yoon (KAIST) with the joint participation of Prof. Tae-Kyun Kim (Imperial College London), proposed an unsupervised-learning-based adversarial network that generates super-resolved images from event data (paper title: EventSR: From Asynchronous Events to Image Reconstruction, Restoration, and Super-Resolution via End-to-End Adversarial Learning). The proposed technologies not only preserve the advantages of event cameras but also make it possible to generate videos at up to a million frames per second. The two studies will be presented at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) in June, one of the top-tier international conferences on computer vision and machine learning, as an oral and a poster presentation, respectively.
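For readers unfamiliar with the sensor, an event camera outputs a stream of per-pixel brightness-change events rather than full frames. A minimal sketch of accumulating such a stream into a signed-count frame is shown below; the (x, y, t, polarity) tuple layout and function name are illustrative assumptions, not the researchers' code or a specific camera's API:

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate (x, y, t, polarity) events into a signed count frame.

    Each event reports a brightness change at pixel (x, y) at time t,
    with polarity +1 (got brighter) or -1 (got darker). Summing polarities
    per pixel gives a simple frame-like representation of the event stream.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, t, polarity in events:
        frame[y, x] += polarity
    return frame

# Example: three events on a hypothetical 4x4 sensor
events = [(1, 2, 0.001, +1), (1, 2, 0.002, +1), (3, 0, 0.003, -1)]
frame = events_to_frame(events, height=4, width=4)
print(frame[2, 1])  # 2
print(frame[0, 3])  # -1
```

Because each event carries its own microsecond-scale timestamp, choosing shorter accumulation windows yields more frames per second, which is what makes very-high-frame-rate video reconstruction possible in principle.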
The results of these studies can be applied to autonomous vehicles, drones, and robots, enabling them to handle extreme environments such as high-speed motion or large illumination changes.
These works were supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (NRF-2018R1A2B3008640) and by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (NRF-2017M3C4A7069369).