Jisong Kim

Deep Learning Researcher in Perception Systems

About Me

I’m Jisong Kim, a Ph.D. student in Electrical Engineering at Hanyang University in Seoul, advised by Prof. Jun Won Choi. My research focuses on deep learning-based perception with cameras, radar, and LiDAR, specifically 3D object detection and tracking, semantic occupancy prediction, and action detection. My work has been published at venues including NeurIPS, AAAI, ICRA, CVPR, and ECCV. Throughout my studies, I have developed AI models for smart home services, optimized object detection models, and enhanced perception technology for urban patrol robots. I have also collaborated with companies such as Hyundai and HL Klemove on autonomous driving algorithms.

I am most skilled in: Deep Learning-based Perception, Sensor Fusion, and Knowledge Distillation

Overview

Research interest areas:
  • Deep Learning-based Perception
  • Camera-, Radar-, and LiDAR-based 3D Object Detection and Tracking
  • Radar-based Point Cloud Generation
  • Object Detection Optimization through Knowledge Distillation
  • Defect Analysis through Image and Video Classification
  • Video-based Human Action Detection

Education

Ph.D. in Electrical Engineering (Advisor: Prof. Jun Won Choi)

Hanyang University, Seoul, South Korea

Mar 2022 - Present


M.S. in Electrical Engineering (Advisor: Prof. Jun Won Choi)

Hanyang University, Seoul, South Korea

Mar 2020 - Feb 2022


B.S. in Automotive Engineering

Hanyang University, Seoul, South Korea

Mar 2014 - Feb 2020

Publications

JoVALE: Detecting Human Actions in Video Using Audiovisual and Language Contexts

AAAI 2025

Taein Son, Soo Won Seo, Jisong Kim, Seok Hwan Lee, and Jun Won Choi


CRT-Fusion: Camera, Radar, Temporal Fusion Using Motion Information for 3D Object Detection

NeurIPS 2024

Jisong Kim, Minjae Seong*, and Jun Won Choi


JARViS: Detecting Actions in Video Using Unified Actor-Scene Context Relation Modeling

Neurocomputing

Seok Hwan Lee, Taein Son, Soo Won Seo, Jisong Kim, and Jun Won Choi


LiDAR-Based 3D Temporal Object Detection via Motion-Aware LiDAR Feature Fusion

Sensors, 2024

Gyuhee Park, Junho Koh, Jisong Kim, Jun Moon, and Jun Won Choi


RCM-Fusion: Radar-Camera Multi-Level Fusion for 3D Object Detection

IEEE International Conference on Robotics and Automation (ICRA), 2024

Jisong Kim, Minjae Seong*, Geonho Bang, Dongsuk Kum, and Jun Won Choi


PillarGen: Enhancing Radar Point Cloud Density and Quality via Pillar-based Point Generation Network

IEEE International Conference on Robotics and Automation (ICRA), 2024

Jisong Kim, Geonho Bang*, Kwangjin Choi, Minjae Seong, Jaechang Yoo, Eunjon Pyo and Jun Won Choi


RadarDistill: Boosting Radar-based Object Detection Performance via Knowledge Distillation from LiDAR Features

IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024

Geonho Bang, Kwangjin Choi*, Jisong Kim, Dongsuk Kum and Jun Won Choi


3D-CVF: Generating Joint Camera and LiDAR Features Using Cross-View Spatial Feature Fusion for 3D Object Detection

European Conference on Computer Vision (ECCV), 2020

Jin Hyeok Yoo, Yecheol Kim, Jisong Kim and Jun Won Choi

(* indicates equal contribution)

Review Experience

  • Journal Reviewer: IEEE Transactions on Intelligent Transportation Systems

Skills

  • Programming Languages: Python, C++
  • Deep Learning Frameworks: PyTorch, TensorFlow
  • Languages: Korean (Native), English (Intermediate)