{"id":75,"date":"2022-07-10T20:43:57","date_gmt":"2022-07-10T11:43:57","guid":{"rendered":"http:\/\/aiunilab.dothome.co.kr\/?page_id=75"},"modified":"2026-05-01T21:11:38","modified_gmt":"2026-05-01T12:11:38","slug":"publications","status":"publish","type":"page","link":"https:\/\/ai.sookmyung.ac.kr\/?page_id=75","title":{"rendered":"Publications"},"content":{"rendered":"\r\n<p class=\"has-virtue-primary-color has-text-color\" style=\"font-size: 15px;\"><strong>*<\/strong>\u00a0 corresponding author, <strong>+<\/strong> equal contribution<\/p>\r\n<p>&nbsp;<\/p>\r\n<h2 class=\"has-virtue-primary-light-color has-text-color wp-block-heading\">ONGOING WORKS<\/h2>\r\n<p>&nbsp;<\/p>\r\n<ol>\r\n<li style=\"list-style-type: none;\">\r\n<ol>\r\n<li style=\"list-style-type: none;\">\r\n<ol>\r\n<li style=\"list-style-type: none;\">\r\n<ol>\r\n<li>(<strong><a href=\"http:\/\/aiunilab.com\/?page_id=196\" data-type=\"URL\" data-id=\"http:\/\/aiunilab.com\/?page_id=196\">NRF CS Top Conference<\/a><\/strong>) UC. Jun, J. Ko, and J. Kang*, &#8220;<a href=\"https:\/\/doi.org\/10.1111\/cgf.70409\" target=\"_blank\" rel=\"noopener\">Latent Diffusion Meets GAN: Adversarial Learning in the Autoencoded Latent Space<\/a>,&#8221; <br \/><i><strong>In Proceedings of the Eurographics 2026 (Computer Graphics Forum)<\/strong>, Aachen, Germany, <\/i>4-8 May 2026<i>, <strong>accepted<\/strong><\/i>.<br \/><br \/><\/li>\r\n<li>(<strong><a href=\"http:\/\/aiunilab.com\/?page_id=196\" data-type=\"URL\" data-id=\"http:\/\/aiunilab.com\/?page_id=196\">NRF CS Top Conference<\/a><\/strong>) UC. Jun+, J. Ko+, and J. 
Kang*, &#8220;<a href=\"https:\/\/doi.org\/10.1111\/cgf.70378\" target=\"_blank\" rel=\"noopener\">UniCross3D: Unified Cross-View and Cross-Domain Diffusion for Consistent Single-Image 3D Generation<\/a>,&#8221; <br \/><strong><i>In Proceedings of the Eurographics <\/i><\/strong><i><strong>2026 (Computer Graphics Forum)<\/strong>, Aachen, Germany, <\/i>4-8 May 2026<i>, <b>accepted.<\/b><\/i><br \/><br \/><\/li>\r\n<li>UC. Jun, &#8230;, and J. Kang*, Diffusion-related Work,<em> ECCV 2026, submitted<\/em> (Anonymous Submission).<br \/><br \/><\/li>\r\n<li>J. Ko, UC. Jun, and J. Kang*, Text-to-3D Work<em>, ECCV 2026, submitted<\/em> (Anonymous Submission)<em>.<br \/><br \/><\/em><\/li>\r\n<li>J. Ko+, UC. Jun+, and J. Kang*, VQ-GAN-related Work, <em>ECCV 2026,<\/em> <em>submitted<\/em> (Anonymous Submission).<br \/><br \/><\/li>\r\n<li>UC. Jun+, J. Ko+, &#8230;, and J. Kang*, Physically Inspired 2D-to-3D Work, <em>ACM MM 2026, submitted (Anonymous Submission).<br \/><br \/><\/em><\/li>\r\n<li>UC. Jun, &#8230;, and J. Kang*, 3D Scene Graph Work,<em> NeurIPS 2026, submitted<\/em> (Anonymous Submission).<\/li>\r\n<\/ol>\r\n<\/li>\r\n<\/ol>\r\n<\/li>\r\n<\/ol>\r\n<\/li>\r\n<li style=\"list-style-type: none;\">\r\n<p>&nbsp;<\/p>\r\n<p>&nbsp;<\/p>\r\n<h2 class=\"has-virtue-primary-light-color has-text-color\">SELECTED PUBLICATIONS<\/h2>\r\n<ol>\r\n<li style=\"list-style-type: none;\">\r\n<ol>\r\n<li style=\"list-style-type: none;\">\r\n<ol>\r\n<li style=\"list-style-type: none;\">\u00a0<\/li>\r\n<\/ol>\r\n<ol>\r\n<li>K. Lee, J. Huh, <strong>J. Kang<\/strong>, S. Lee, &#8220;<a href=\"https:\/\/doi.org\/10.1016\/j.patcog.2025.112805\" target=\"_blank\" rel=\"noopener\">Structure and Sensitivity in 3D Human Pose Similarity Quantification and Estimation<\/a>,&#8221;<br \/><em><strong>Pattern Recognition<\/strong><\/em>, vol. 173, pp. 112805, May 2026. 
(IF <strong>7.6<\/strong>, JCR 2024)<br \/><br \/><\/li>\r\n<li>(<strong><a href=\"http:\/\/aiunilab.com\/?page_id=196\" data-type=\"URL\" data-id=\"http:\/\/aiunilab.com\/?page_id=196\">NRF CS Top Conference<\/a><\/strong>) UC. Jun, J. Ko, and <strong>J. Kang*<\/strong>, &#8220;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2025\/html\/Jun_Generative_Adversarial_Diffusion_ICCV_2025_paper.html\" target=\"_blank\" rel=\"noopener\">Generative Adversarial Diffusion<\/a>,&#8221;<br \/><em><strong>In Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV 2025)<\/strong><\/em>, Honolulu, Hawaii, 19-23 Oct. 2025.<br \/><br \/><\/li>\r\n<li><strong>J. Kang+<\/strong>, S. Lee+, and S. Lee*, &#8220;<a href=\"http:\/\/dx.doi.org\/10.1145\/3734874\" target=\"_blank\" rel=\"noopener\">3D Facial Shape Similarity with Deep Multiview Perceptual Representations<\/a>,&#8221; <br \/><strong><em>ACM Transactions on Multimedia Computing, Communications, and Applications (ACM ToMM)<\/em><\/strong>, vol. 21, no. 6, pp. 183:1-183:27, July 2025. <strong>(IF 6.0<\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li>J. Huh, <strong>J. Kang*<\/strong>, J. Woo, S. Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.1145\/3709001\" target=\"_blank\" rel=\"noopener\">A Low-traffic Intelligent Video Surveillance System using Scene-Preserving Video Anonymization<\/a>,&#8221;<br \/><strong><em>ACM Transactions on Intelligent Systems and Technology (ACM TIST)<\/em><\/strong>, vol. 16, no. 2, pp. 32:1-32:24, Feb. 2025. <strong>(IF 6.6<\/strong>, JCR 2024<strong>)<\/strong><br \/><br \/><\/li>\r\n<li>S. Lee, <strong>J. Kang*<\/strong>, S. Lee*, W. Lin, A.C. 
Bovik, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/tpami.2024.3422490\" target=\"_blank\" rel=\"noopener\">3D-PSSIM: Projective Structural Similarity for 3D Mesh Quality Assessment Robust to Topological Irregularities<\/a>,&#8221; <br \/><em><b>IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE TPAMI),<\/b><\/em> vol. 46, no. 12, pp. 9595-9611, Dec. 2024. <strong>(IF 18.6,<\/strong> JCR 2024<strong>)<\/strong><br \/><br \/><\/li>\r\n<li>(<strong><a href=\"http:\/\/aiunilab.com\/?page_id=196\" data-type=\"URL\" data-id=\"http:\/\/aiunilab.com\/?page_id=196\">NRF CS Top Conference<\/a><\/strong>) <strong>J. Kang<\/strong>, T. Kim, Y. Park, &#8220;<a href=\"https:\/\/doi.org\/10.1145\/3626772.3657975\" target=\"_blank\" rel=\"noopener\">Convex Feature Embedding for Face and Voice Association<\/a>,&#8221;<br \/><em><strong>In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval (ACM SIGIR 2024<\/strong>)<\/em>, Washington D.C., USA, 14-18 July 2024.<em><br \/><br \/><\/em><\/li>\r\n<li>A. Nguyen, S. Choi, W. Kim, J. Kim, H. Oh, <strong>J. Kang<\/strong>, and S. Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/TNNLS.2022.3211929\" target=\"_blank\" rel=\"noreferrer noopener\" data-type=\"URL\" data-id=\"https:\/\/doi.org\/10.1109\/TNNLS.2022.3211929\">Single Image 3D Reconstruction: Rethinking Point Cloud Deformation<\/a>,&#8221; <br \/><strong><em>IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS)<\/em><\/strong>, vol. 34, no. 5, May 2024. <strong>(IF 8.9<\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li>J. Lee, D. Nguyen, J. Kim,<strong> J. Kang<\/strong>, S. Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.1016\/j.engappai.2023.107404\" target=\"_blank\" rel=\"noopener\">Double Reverse Diffusion for Realistic Garment Reconstruction from Images<\/a>,&#8221;<br \/><strong><em>Engineering Applications of Artificial Intelligence (EAAI), <\/em><\/strong>vol. 
127, 107404, Jan. 2024<strong><em>. <\/em>(IF 8.0,<\/strong> JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li>K. Lee, Y. Park, J. Huh, <strong>J. Kang*<\/strong>, and S. Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/TCSVT.2022.3178430\" target=\"_blank\" rel=\"noreferrer noopener\">Self-Updatable Database System Based on Human Motion Assessment Framework<\/a>,&#8221;<br \/><strong><em>IEEE Transactions on Circuits and Systems for Video Technology (IEEE TCSVT)<\/em><\/strong>, vol. 32, no. 10, pp. 7160-7176, Oct. 2022. <strong>(IF 11.1<\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li><strong>J. Kang<\/strong>, S. Lee,\u00a0 and S. Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/TSMC.2021.3054677\" target=\"_blank\" rel=\"noreferrer noopener\">Competitive Learning of Facial Fitting and Synthesis using UV Energy<\/a>,&#8221;<br \/><strong><em>IEEE Transactions on Systems, Man, and Cybernetics: Systems (IEEE TSMC)<\/em><\/strong>, vol. 52, no. 5, pp. 2858-2873, May 2022. <strong>(IF 8.7<\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li><strong>J. Kang<\/strong>, S. Lee,\u00a0 M. Jang, and S. Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/TCSVT.2021.3089695\" target=\"_blank\" rel=\"noreferrer noopener\">Gradient Flow Evolution for 3D Fusion from a Single Depth Sensor<\/a>,&#8221;<strong><em><br \/>IEEE Transactions on Circuits and Systems for Video Technology (IEEE TCSVT)<\/em><\/strong>, vol. 32, no. 4, pp. 2211-2225, April 2022. <strong>(IF <strong>11.1<\/strong><\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li>(<strong><a href=\"http:\/\/aiunilab.com\/?page_id=196\" data-type=\"URL\" data-id=\"http:\/\/aiunilab.com\/?page_id=196\">NRF CS Top Conference<\/a><\/strong>) H. Song, J. Park, S. Heo, <strong>J. Kang<\/strong>, and S. 
Lee, &#8220;<a href=\"https:\/\/doi.org\/10.1145\/3394171.3413966\" target=\"_blank\" rel=\"noreferrer noopener\">PatchMatch based Multiview Stereo with Local Quadric Window<\/a>,&#8221;<br \/><strong><em>In Proceedings of ACM International Conference on Multimedia (ACM MM 2020)<\/em><\/strong>, Seattle, USA, 12-16 Oct. 2020.<span style=\"color: #000000;\"> <em>(<\/em><\/span><em>Acceptance Rate <strong>27.9%<\/strong>)<\/em><br \/><br \/><\/li>\r\n<li><strong>J. Kang<\/strong>, S. Heo, W. Hyung, J. Lim, and S. Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/TIP.2018.2862346\" target=\"_blank\" rel=\"noreferrer noopener\">Three-dimensional Active Vessel Tracking Using an Elliptical Prior<\/a>,&#8221;<br \/><strong><em>IEEE Transactions on Image Processing (IEEE TIP)<\/em><\/strong>, vol. 27, no. 12, pp. 5933-5946, Dec. 2018. <strong>(IF 12.8<\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li>K. Lee, B. Kwon, <strong>J. Kang<\/strong>, S. Heo, and S. Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/TAES.2017.2711679\" target=\"_blank\" rel=\"noreferrer noopener\">Optimal Flow Rate Control for SDN-based Naval Systems<\/a>,&#8221;<br \/><strong><em>IEEE Transactions on Aerospace and Electronic Systems (IEEE TAES)<\/em><\/strong>, vol. 53, no. 6, pp. 2690-2705, Dec. 2017. <strong>(IF 5.7<\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li>T. Kim, <strong>J. Kang<\/strong>, S. Lee*, and A. C. Bovik, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/TMM.2013.2292592\" target=\"_blank\" rel=\"noreferrer noopener\">Multimodal Interactive Continuous Scoring of Subjective 3D Video Quality of Experience<\/a>,&#8221;<br \/><strong><em>IEEE Transactions on Multimedia (IEEE TMM)<\/em><\/strong>, vol. 16, no. 2, pp. 387-402, Feb. 2014. 
<strong>(IF 9.7<\/strong>, JCR 2024<strong>)<br \/><\/strong><br \/><span style=\"color: #000000;\"><br \/><\/span><\/li>\r\n<\/ol>\r\n<\/li>\r\n<\/ol>\r\n<\/li>\r\n<\/ol>\r\n<ol><\/ol>\r\n<p><\/p>\r\n<h2 class=\"has-virtue-primary-light-color has-text-color\">\u00a0<\/h2>\r\n<!-- \/wp:post-content -->\r\n\r\n<!-- wp:list {\"ordered\":true} -->\r\n<ol>\r\n<li style=\"list-style-type: none;\">\r\n<ol><!-- wp:heading {\"textColor\":\"virtue-primary-light\"} --><\/ol>\r\n<\/li>\r\n<\/ol>\r\n<h2 class=\"has-virtue-primary-light-color has-text-color\">INTERNATIONAL JOURNAL PUBLICATIONS<\/h2>\r\n<p>&nbsp;<\/p>\r\n<ol>\r\n<li style=\"list-style-type: none;\">\r\n<ol>\r\n<li style=\"list-style-type: none;\">\r\n<ol>\r\n<li>K. Lee, J. Huh, <strong>J. Kang<\/strong>, S. Lee, &#8220;<a href=\"https:\/\/doi.org\/10.1016\/j.patcog.2025.112805\" target=\"_blank\" rel=\"noopener\">Structure and Sensitivity in 3D Human Pose Similarity Quantification and Estimation<\/a>,&#8221;<br \/><em><strong>Pattern Recognition<\/strong><\/em>, vol. 173, pp. 112805, May 2026. (IF <strong>7.6<\/strong>, JCR 2024)<br \/><br \/><\/li>\r\n<li>J. Hwang, T. Kim, and <strong>J. Kang*<\/strong>, &#8220;<a href=\"https:\/\/doi.org\/10.1007\/s00530-025-01918-y\" target=\"_blank\" rel=\"noopener\">Collaborative feature aggregation for face super-resolution and robust re-identification<\/a>,&#8221; <br \/><em><strong>Multimedia Systems<\/strong>,<\/em> vol. 31, no. 5, pp. 341<em>, <\/em>Aug. 2025. <strong>(<\/strong>IF<strong> 3.1<\/strong>, JCR 2024<strong>)<\/strong><br \/><br \/><\/li>\r\n<li><strong>J. Kang+<\/strong>, S. Lee+, and S. Lee*, &#8220;<a href=\"http:\/\/dx.doi.org\/10.1145\/3734874\" target=\"_blank\" rel=\"noopener\">3D Facial Shape Similarity with Deep Multiview Perceptual Representations<\/a>,&#8221; <br \/><strong><em>ACM Transactions on Multimedia Computing, Communications and Applications (ACM ToMM)<\/em><\/strong>, vol. 21, no. 6, pp. 183:1\u2013183:27, July 2025. 
<strong>(IF 6.0<\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li>T. Kim and <strong>J. Kang*<\/strong>, &#8220;<a href=\"https:\/\/doi.org\/10.1007\/s00530-025-01872-9\" target=\"_blank\" rel=\"noopener\">Face and Voice Association with Learning Convex Feature Embedding<\/a>,&#8221; <br \/><em><strong>Multimedia Systems<\/strong>,<\/em> vol. 31, no. 4, pp. 296, July 2025. <strong>(<\/strong>IF<strong> 3.1<\/strong>, JCR 2024<strong>)<\/strong><br \/><br \/><\/li>\r\n<li>J. Hwang, T. Kim, H. Oh, <strong>J. Kang*<\/strong>, &#8220;<a href=\"https:\/\/doi.org\/10.1007\/s00530-025-01883-6\" target=\"_blank\" rel=\"noopener\">Convolutional Neural Shading for High-Quality 3D Reconstruction from Multi-View Images<\/a>,&#8221; <br \/><em><strong>Multimedia Systems<\/strong>,<\/em> vol. 31, no. 4, pp. 290, June 2025<i><em>. <\/em><\/i><strong>(<\/strong>IF<strong> 3.1<\/strong>, JCR 2024<strong>)<\/strong><i><br \/><br \/><\/i><\/li>\r\n<li>J. Kim, <strong>J. Kang<\/strong>, T. Kim, and H. Oh, &#8221; <span class=\"fontstyle0\"><a href=\"https:\/\/doi.org\/10.1016\/j.imavis.2025.105551\" target=\"_blank\" rel=\"noopener\">SinWaveFusion: Learning a Single Image Diffusion Model in Wavelet Domain<\/a>,&#8221;<\/span><br \/><em><strong>Image and Vision Computing<\/strong>, <\/em>vol. 159, 105551, June 2025. <strong>(<\/strong>IF<strong> 4.2<\/strong>, JCR 2024<strong>)<\/strong><br \/><br \/><\/li>\r\n<li>J. Huh, <strong>J. Kang*<\/strong>, J. Woo, S. Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.1145\/3709001\" target=\"_blank\" rel=\"noopener\">A Low-traffic Intelligent Video Surveillance System using Scene-Preserving Video Anonymization<\/a>,&#8221;<br \/><strong><em>ACM Transactions on Intelligent Systems and Technology (ACM TIST)<\/em><\/strong>, vol. 16, no. 2, pp. 32:1-32:24, Feb. 2025. <strong>(IF 6.6<\/strong>, JCR 2024<strong>)<\/strong><br \/><br \/><\/li>\r\n<li>S. Lee, <strong>J. Kang*<\/strong>, S. Lee*, W. Lin, A.C. 
Bovik, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/tpami.2024.3422490\" target=\"_blank\" rel=\"noopener\">3D-PSSIM: Projective Structural Similarity for 3D Mesh Quality Assessment Robust to Topological Irregularities<\/a>,&#8221; <br \/><em><b>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)<\/b><\/em>, vol. 46, no. 12, pp. 9595-9611, Dec. 2024. <strong>(IF 18.6,<\/strong> JCR 2024<strong>)<\/strong><br \/><br \/><\/li>\r\n<li>H. Song+, <strong>J. Kang*<\/strong>, T. Kim*, &#8220;<a href=\"https:\/\/doi.org\/10.1561\/116.20240033\" target=\"_blank\" rel=\"noopener\">Continual Learning Based Personalized Abnormal Behavior Recognition Alarm System<\/a>,&#8221;<br \/><em><strong>APSIPA Transactions on Signal and Information Processing<\/strong><\/em>, Sep. 2024. <strong>(IF 3.2<\/strong>, JCR 2024<strong>)<\/strong><br \/><br \/><\/li>\r\n<li>J. Hwang, B. Kim, T. Kim, H. Oh, <strong>J. Kang*<\/strong>, &#8220;<a href=\"https:\/\/doi.org\/10.1016\/j.imavis.2024.105043\" target=\"_blank\" rel=\"noopener\">EMOVA: Emotion-driven Neural Volumetric Avatar<\/a>,&#8221; <br \/><em><strong>Image and Vision Computing<\/strong>, <\/em>vol. 146, 105043<em>, <\/em>June\u00a02024. <strong>(<\/strong>IF<strong> 4.2<\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li>A. Nguyen, S. Choi, W. Kim, J. Kim, H. Oh, <strong>J. Kang<\/strong>, and S. Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/TNNLS.2022.3211929\" target=\"_blank\" rel=\"noreferrer noopener\" data-type=\"URL\" data-id=\"https:\/\/doi.org\/10.1109\/TNNLS.2022.3211929\">Single Image 3D Reconstruction: Rethinking Point Cloud Deformation<\/a>,&#8221;<br \/><strong><em>IEEE Transactions on Neural Networks and Learning Systems (TNNLS)<\/em><\/strong>, vol. 34, no. 5, May 2024. <strong>(IF 8.9<\/strong>, JCR 2024<strong>)<\/strong><br \/><br \/><\/li>\r\n<li>T. Kim, J. Kim, H. Oh, and <strong>J. 
Kang*<\/strong>, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/access.2024.3361283\" target=\"_blank\" rel=\"noopener\">Deep Transformer based Video Inpainting Using Fast Fourier Tokenization<\/a>,&#8221;<br \/><em><strong>IEEE Access<\/strong><\/em>, vol. 12, pp. 21723-21736, Feb. 2024. <strong>(IF 3.4<\/strong>, JCR 2023<strong>)<\/strong><br \/><br \/><\/li>\r\n<li>J. Lee, D. Nguyen, J. Kim,<strong> J. Kang<\/strong>, S. Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.1016\/j.engappai.2023.107404\" target=\"_blank\" rel=\"noopener\">Double Reverse Diffusion for Realistic Garment Reconstruction from Images<\/a>,&#8221;<br \/><strong><em>Engineering Applications of Artificial Intelligence (EAAI), <\/em><\/strong>vol. 127, 107404, Jan. 2024<strong><em>. <\/em>(IF 8.0,<\/strong> JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li>Y. Yong, <strong>J. Kang*<\/strong>, and H. Oh*, &#8220;<a href=\"https:\/\/doi.org\/10.3390\/electronics13030590\" target=\"_blank\" rel=\"noopener\">Detection-Free Object Tracking for Multiple Occluded Targets in Plenoptic Video<\/a>,&#8221;<br \/><em><strong>Electronics<\/strong><\/em>, vol. 13, no. 3, 590, Jan. 2024. <strong>(IF 2.6<\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li>S. Lee, H. Yoon, S. Park, S. Lee, and <strong>J. Kang*<\/strong>, &#8220;<a href=\"https:\/\/doi.org\/10.3390\/electronics12173735\" target=\"_blank\" rel=\"noopener\">Stabilized Temporal 3D Face Alignment Using Landmark Displacement Learning<\/a>,&#8221;<br \/><em><strong>Electronics<\/strong><\/em>, vol. 12, no. 17, 3735, Sep. 2023. <strong>(IF 2.6<\/strong>, JCR 2024<strong>)<\/strong><br \/><br \/><\/li>\r\n<li>H. Park, <strong>J. Kang<\/strong>, and B. Kim*, &#8220;<a href=\"https:\/\/doi.org\/10.3390\/s23094432\" target=\"_blank\" rel=\"noopener\">ssFPN: Scale Sequence (S2) Feature-Based Feature Pyramid Network for Object Detection<\/a>,&#8221;<br \/><strong><em>Sensors<\/em><\/strong>, vol. 23<em>, <\/em>no. 9, April 2023. 
<strong>(IF 3.5<\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li><strong>J. Kang<\/strong>, H. Song, K. Lee, and S. Lee, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/ACCESS.2023.3243287\" data-type=\"URL\" data-id=\"https:\/\/doi.org\/10.1109\/ACCESS.2023.3243287\">A Selective Expression Manipulation With Parametric 3D Facial Model<\/a>,&#8221; <br \/><em><strong>IEEE Access<\/strong><\/em>, vol. 11, pp. 17066-17084, Feb. 2023. <strong>(IF 3.6<\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li>K. Lee, Y. Park, J. Huh, <strong>J. Kang*<\/strong>, and S. Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/TCSVT.2022.3178430\" target=\"_blank\" rel=\"noreferrer noopener\">Self-Updatable Database System Based on Human Motion Assessment Framework<\/a>,&#8221;<br \/><strong><em>IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT)<\/em><\/strong>, vol. 32, no. 10, pp. 7160-7176, Oct. 2022. <strong>(IF 11.1<\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li><strong>J. Kang<\/strong>, S. Lee, and S. Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/TSMC.2021.3054677\" target=\"_blank\" rel=\"noreferrer noopener\">Competitive Learning of Facial Fitting and Synthesis using UV Energy<\/a>,&#8221;<br \/><strong><em>IEEE Transactions on Systems, Man, and Cybernetics: Systems (T-SMC)<\/em><\/strong>, vol. 52, no. 5, pp. 2858-2873, May 2022. <strong>(IF 8.7<\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li>M. Jang, S. Lee, <strong>J. Kang*<\/strong>, and S. Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.3390\/s22114142\" target=\"_blank\" rel=\"noreferrer noopener\">Technical Consideration Towards Robust 3D Reconstruction with Multi-view Active Stereo Sensors<\/a>,&#8221;<br \/><strong><em>Sensors<\/em><\/strong>, vol. 22, no. 11, May 2022. <strong>(IF <strong>3.5<\/strong><\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li><strong>J. Kang<\/strong>, S. Lee, M. Jang, and S. 
Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/TCSVT.2021.3089695\" target=\"_blank\" rel=\"noreferrer noopener\">Gradient Flow Evolution for 3D Fusion from a Single Depth Sensor<\/a>,&#8221;<strong><em><br \/>IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT)<\/em><\/strong>, vol. 32, no. 4, pp. 2211-2225, April 2022. <strong>(IF <strong>11.1<\/strong><\/strong>, JCR 2024<strong>)<\/strong><br \/><br \/><\/li>\r\n<li>M. Jang+, H. Yoon+, S. Lee,\u00a0 <strong>J. Kang*<\/strong>, and S. Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.3390\/s22093332\" target=\"_blank\" rel=\"noreferrer noopener\">A Comparison and Evaluation of Stereo Matching on Active Stereo Images<\/a>,&#8221;<br \/><strong><em>Sensors<\/em><\/strong>, vol. 22<em>, no. 9<\/em>, Apr. 2022. <strong>(IF 3.5<\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li>D. Kim, S. Heo, <strong>J. Kang*<\/strong>, H. Kang, and S. Lee<strong>*<\/strong>, &#8220;<a href=\"https:\/\/www.mdpi.com\/2076-3417\/11\/19\/9194\" target=\"_blank\" rel=\"noreferrer noopener\">A Photo Identification Framework to Prevent Copyright Infringement with Manipulations<\/a>,&#8221;<br \/><strong><em>Applied Sciences-Basel<\/em><\/strong>, vol. 11, no. 19, Oct. 2021. <strong>(IF 2.5<\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li>H. Yoon+, M. Jang+, J. Heo, <strong>J. Kang*<\/strong>, and S. Lee<strong>*<\/strong>, &#8220;<a href=\"https:\/\/doi.org\/10.3390\/s21186276\" target=\"_blank\" rel=\"noreferrer noopener\">Multiple Sensor Synchronization with the RealSense RGB-D Camera<\/a>,&#8221;<br \/><strong><em>Sensors<\/em><\/strong>, vol. 21, no. 18, Sep. 2021. <strong>(IF 3.5, <\/strong>JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li><strong>J. Kang<\/strong> and S. 
Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/ACCESS.2020.3029065\" target=\"_blank\" rel=\"noreferrer noopener\">A Greedy Pursuit Approach for Fitting 3D Facial Expression Models<\/a>,&#8221;<br \/><strong><em>IEEE Access<\/em><\/strong>, vol. 8, pp. 192682-192692, Oct. 2020. <strong>(IF 3.6<\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li>S. Heo, H. Song, <strong>J. Kang<\/strong>, and S. Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/ACCESS.2020.3026545\" target=\"_blank\" rel=\"noreferrer noopener\">Local Spherical Harmonics for Facial Shape and Albedo Estimation<\/a>,&#8221;<br \/><strong><em>IEEE Access<\/em><\/strong>, vol. 8, pp. 177424-177436, Sep. 2020. <strong>(IF 3.6<\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li><strong>J. Kang<\/strong> and S. Heo, W. Hyung, J. Lim, and S. Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/TIP.2018.2862346\" target=\"_blank\" rel=\"noreferrer noopener\">Three-dimensional Active Vessel Tracking Using an Elliptical Prior<\/a>,&#8221;<br \/><strong><em>IEEE Transactions on Image Processing (TIP)<\/em><\/strong>, vol. 27, no. 12, pp. 5933-5946, Dec. 2018. <strong>(IF 12.8<\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li>K. Lee, B. Kwon, <strong>J. Kang<\/strong>, S. Heo, and S. Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/TAES.2017.2711679\" target=\"_blank\" rel=\"noreferrer noopener\">Optimal Flow Rate Control for SDN-based Naval Systems,<\/a>&#8220;<br \/><strong><em>IEEE Transactions on Aerospace and Electronic Systems (T-AES)<\/em><\/strong>, vol. 53, no. 6, pp. 2690-2705, Dec. 2017. <strong>(IF 5.7<\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li>S. Lee, <strong>J. Kang<\/strong>, S. Heo, and S. 
Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.1016\/j.cmpb.2017.06.017\" target=\"_blank\" rel=\"noreferrer noopener\">An Enhanced Particle-filtering Framework for Vessel Segmentation and Tracking<\/a>,&#8221;<br \/><strong><em>Computer Methods and Programs in Biomedicine<\/em><\/strong>, vol. 148, no. 1, pp. 99-112, Sep. 2017. <strong>(IF 4.8<\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li><strong>J. Kang,<\/strong> T. Kim, and S. Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.1007\/s11277-015-2680-z\" target=\"_blank\" rel=\"noreferrer noopener\">Implementation of Multimodal Interactive Continuous Scoring for 3D Quality of Experience<\/a>,&#8221;<br \/><strong><em>Wireless Personal Communications<\/em><\/strong>, vol. 84, no. 2, pp. 1133-1149, Sep. 2015. <strong>(IF 2.2<\/strong>, JCR 2024<strong>)<br \/><br \/><\/strong><\/li>\r\n<li>T. Kim, <strong>J. Kang<\/strong>, S. Lee*, and A. C. Bovik, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/TMM.2013.2292592\" target=\"_blank\" rel=\"noreferrer noopener\">Multimodal Interactive Continuous Scoring of Subjective 3D Video Quality of Experience<\/a>,&#8221;<br \/><strong><em>IEEE Transactions on Multimedia<\/em><\/strong>, vol. 16, no. 2, pp. 387-402, Feb. 2014. 
<strong>(IF 9.7<\/strong>, JCR 2024<strong>)<\/strong><\/li>\r\n<\/ol>\r\n<\/li>\r\n<\/ol>\r\n<\/li>\r\n<\/ol>\r\n<\/li>\r\n<\/ol>\r\n<ol>\r\n<li style=\"list-style-type: none;\">\r\n<ol>\r\n<li style=\"list-style-type: none;\">\r\n<ol>\r\n<li style=\"list-style-type: none;\">\r\n<ol>\r\n<li style=\"list-style-type: none;\">\r\n<ol><!-- \/wp:heading -->\r\n\r\n<!-- wp:list-item \/-->\r\n\r\n<!-- wp:list-item \/-->\r\n\r\n<!-- wp:list-item \/--><\/ol>\r\n<\/li>\r\n<\/ol>\r\n<ol><!-- wp:list-item \/-->\r\n\r\n<!-- wp:list-item \/-->\r\n\r\n<!-- wp:list-item \/--><\/ol>\r\n<\/li>\r\n<!-- \/wp:list --><\/ol>\r\n<\/li>\r\n<\/ol>\r\n<ol><!-- wp:heading {\"textColor\":\"virtue-primary-light\"} --><\/ol>\r\n<\/li>\r\n<\/ol>\r\n<h2 class=\"has-virtue-primary-light-color has-text-color\"><br \/><br \/><\/h2>\r\n<ol>\r\n<li style=\"list-style-type: none;\">\r\n<ol>\r\n<li style=\"list-style-type: none;\">\r\n<ol><!-- \/wp:heading -->\r\n\r\n<!-- wp:heading {\"textColor\":\"virtue-primary-light\"} --><\/ol>\r\n<\/li>\r\n<\/ol>\r\n<\/li>\r\n<\/ol>\r\n<h2 class=\"has-virtue-primary-light-color has-text-color\">INTERNATIONAL CONFERENCE PROCEEDINGS<strong>\u00a0<\/strong><\/h2>\r\n<p>&nbsp;<\/p>\r\n<ol>\r\n<li style=\"list-style-type: none;\">\r\n<ol>\r\n<li>(<strong><a href=\"http:\/\/aiunilab.com\/?page_id=196\" data-type=\"URL\" data-id=\"http:\/\/aiunilab.com\/?page_id=196\">NRF CS Top Conference<\/a><\/strong>) UC. Jun, J. Ko, and <strong>J. Kang*<\/strong>, &#8220;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2025\/html\/Jun_Generative_Adversarial_Diffusion_ICCV_2025_paper.html\" target=\"_blank\" rel=\"noopener\">Generative Adversarial Diffusion<\/a>,&#8221;<br \/><em><strong>In Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV 2025)<\/strong><\/em>, Honolulu, Hawaii, 19-23 Oct. 2025.<br \/><br \/><\/li>\r\n<li>Y. Han, <strong>J. Kang<\/strong>, S. Lee, and T. 
Kim, &#8220;<a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2025W\/VQualA\/html\/Han_Understanding_Perceptual_Quality_in_CCTV_Images_A_Benchmark_Dataset_and_ICCVW_2025_paper.html\" target=\"_blank\" rel=\"noopener\">Understanding Perceptual Quality in CCTV Images: A Benchmark Dataset and Entropy-based Insights<\/a>,&#8221;<br \/><em><strong>In Proceedings of the IEEE\/CVF International Conference on Computer Vision Workshops (ICCVW 2025)<\/strong><\/em>, Honolulu, Hawaii, 19-23 Oct. 2025.<br \/><br \/><\/li>\r\n<li>(<span style=\"color: #ff6600;\"><strong>Best Paper Award<\/strong><\/span>) UC. Jun and <strong>J. Kang*, <\/strong>&#8220;Multi Style 3D Stylization with Dynamic Style-Aware Deformation,&#8221;<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>International Conference on Multimedia Information Technology and Applications (MITA 2025)<\/em><\/strong>, Jeju, Korea, 21-24 July 2025.<br \/><br \/><\/li>\r\n<li>J. Ko and <strong>J. Kang*, <\/strong>&#8220;3D-Consistent GAN Inversion via Multi-View Latent Alignment,&#8221;<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span> International Conference on Multimedia Information Technology and Applications (MITA 2025)<\/em><\/strong>, Jeju, Korea, 21-24 July 2025.<br \/><br \/><\/li>\r\n<li>(<strong><a href=\"http:\/\/aiunilab.com\/?page_id=196\" data-type=\"URL\" data-id=\"http:\/\/aiunilab.com\/?page_id=196\">NRF CS Top Conference<\/a><\/strong>) <strong>J. Kang+<\/strong>, H. Song+, T. 
Kim*, <span style=\"color: #000000;\">&#8220;<a href=\"https:\/\/doi.org\/10.1109\/AVSS61716.2024.10672574\" target=\"_blank\" rel=\"noopener\">Real-time Abnormal Behavior Recognition for Patient Monitoring in Hospitals<\/a>,&#8221;<br \/><em><strong>In Proceedings of the IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS 2024<\/strong>)<\/em><\/span><span style=\"color: #000000;\">, Niagara Falls, Canada, 15-16 July 2024.<\/span><br \/><br \/><\/li>\r\n<li>(<span style=\"color: #ff6600;\"><strong>Best Paper Award<\/strong><\/span>) J. Ko and <strong>J. Kang*, <\/strong>&#8220;Personalization Text-to-Image Diffusion Models for Specific Subjects,&#8221;<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>International Conference on Multimedia Information Technology and Applications (MITA 2024)<\/em><\/strong>, Taipei, Taiwan, 23-26 July 2024.<br \/><br \/><\/li>\r\n<li>UC. Jun and <strong>J. Kang*, <\/strong>&#8220;Zero-shot Text-To-3D Generation Using 2D Diffusion,&#8221;<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span> International Conference on Multimedia Information Technology and Applications (MITA 2024)<\/em><\/strong>, Taipei, Taiwan, 23-26 July 2024.<br \/><br \/><\/li>\r\n<li>(<strong><a href=\"http:\/\/aiunilab.com\/?page_id=196\" data-type=\"URL\" data-id=\"http:\/\/aiunilab.com\/?page_id=196\">NRF CS Top Conference<\/a><\/strong>) <strong>J. Kang<\/strong>, T. Kim, Y. Park, <span style=\"color: #000000;\">&#8220;<a href=\"https:\/\/doi.org\/10.1145\/3626772.3657975\" target=\"_blank\" rel=\"noopener\">Convex Feature Embedding for Face and Voice Association<\/a>,&#8221;<br \/><em><strong>In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval (ACM SIGIR 2024<\/strong>)<\/em><\/span><span style=\"color: #000000;\">, <em>Short Paper<\/em>, Washington D.C., USA, 14-18 July 2024. 
<em>(<\/em><\/span><em>Acceptance Rate <strong>24.0%<\/strong>)<\/em><br \/><br \/><\/li>\r\n<li>(<span style=\"color: #ff6600;\"><strong>Best Paper Award<\/strong><\/span>) UC. Jun, N. Eun, J. Lee, and <strong>J. Kang*<\/strong>, &#8220;Self-supervised 3D Face Model Learning for Monocular Image,&#8221;<br \/><em><strong>In Proceedings of <span style=\"color: #000000;\">the <\/span>Korea-Japan Joint Workshop on Complex Communication Sciences (KJCCS 2024)<\/strong><\/em>, Beppu, Japan, 29-31 Jan. 2024.<br \/><br \/><\/li>\r\n<li>J. Ko, Y. Ok, J. Lee, and <strong>J. Kang*<\/strong>, &#8220;A Bilinear Face Model for Real-time Performance-Based Applications,&#8221;<br \/><em><strong>In Proceedings of <span style=\"color: #000000;\">the <\/span>Korea-Japan Joint Workshop on Complex Communication Sciences (KJCCS 2024)<\/strong><\/em>, Beppu, Japan, 29-31 Jan. 2024.<br \/><br \/><\/li>\r\n<li>J. Hwang and <strong>J. Kang*, <\/strong>&#8220;<a href=\"http:\/\/dx.doi.org\/10.1109\/ICCE59016.2024.10444241\" target=\"_blank\" rel=\"noopener\">Double Discrete Representation for 3D Human Pose Estimation from Head-mounted Camera<\/a>,&#8221;<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>IEEE International Conference on Consumer Electronics (IEEE ICCE 2024)<\/em><\/strong>, Las Vegas, NV, USA, 5-8 Jan. 2024.<br \/><br \/><\/li>\r\n<li>C. Kim, G. Lee, Y. Choi, <strong>J. Kang<\/strong>, B. Kim*, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/ICCE59016.2024.10444505\" target=\"_blank\" rel=\"noopener\">Channel Selective Relation Network for Efficient Few-shot Facial Expression Recognition<\/a>,&#8221;<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>IEEE International Conference on Consumer Electronics (IEEE ICCE 2024)<\/em><\/strong>, Las Vegas, NV, USA, 5-8 Jan. 2024.<br \/><br \/><\/li>\r\n<li>J. Hwang and <strong>J. 
Kang*, <\/strong>&#8220;<a href=\"https:\/\/doi.org\/10.1109\/WACVW60836.2024.00042\" target=\"_blank\" rel=\"noopener\">Aerial View 3D Human Pose Estimation Using Double Vector Quantized-Variational AutoEncoders<\/a>,&#8221;<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>IEEE\/CVF Winter Conference on Applications of Computer Vision Workshops (IEEE\/CVF WACVW 2024)<\/em><\/strong>, Waikoloa, Hawaii, USA, 4-8 Jan. 2024.<br \/><br \/><\/li>\r\n<li>J. Hwang and <strong>J. Kang*, <\/strong>&#8220;<a href=\"https:\/\/doi.org\/10.1109\/BigData59044.2023.10386942\" target=\"_blank\" rel=\"noopener\">Audio-visual Neural Face Generation with Emotional Stimuli<\/a>,&#8221;<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>IEEE International Conference on Big Data (IEEE BigData 2023)<\/em><\/strong>, Sorrento, Italy, 15-18 Dec. 2023.<br \/><br \/><\/li>\r\n<li>S. Lee, H. Yoon, <strong>J. Kang<\/strong>, J. Kim, J. Son. J. Huh, and S. Lee, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/MMSP59012.2023.10337645\" target=\"_blank\" rel=\"noopener\">Video-based Stabilized 3D Face Alignment using Temporal Multi-Discrimination<\/a>,&#8221; <br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>IEEE Workshop on Multimedia Signal Processing (IEEE MMSP 2023)<\/em><\/strong>, Poitiers, France, 27-29 Sep. 2023.<br \/><br \/><\/li>\r\n<li>(<strong><a href=\"http:\/\/aiunilab.com\/?page_id=196\" data-type=\"URL\" data-id=\"http:\/\/aiunilab.com\/?page_id=196\">NRF CS Top Conference<\/a><\/strong>) <strong>J. Kang<\/strong>, J. Hwang, M. Choi, and S. 
Lee<strong>*<\/strong>, <span style=\"color: #000000;\">&#8220;<a href=\"https:\/\/doi.org\/10.1145\/3588028.3603688\" target=\"_blank\" rel=\"noopener\">High-resolution 3D Reconstruction with Neural Mesh Shading<\/a>,&#8221;<br \/><em><strong>In Proceedings of the International Conference on Computer Graphics &amp; Interactive Techniques (ACM SIGGRAPH 2023<\/strong>)<\/em><\/span><span style=\"color: #000000;\">, <em>Posters<\/em>, Los Angeles, USA, 6-10 Aug. 2023. <em>(<\/em><\/span><em>Acceptance Rate <strong>35.7%<\/strong>)<br \/><br \/><\/em><\/li>\r\n<li>(<span style=\"color: #ff6600;\"><strong>Best Paper Award<\/strong><\/span>) S. Lee, S. Park, H. Yoon, S. Lee, and <strong>J. Kang*<\/strong>, &#8220;Video-based Face Reconstruction with Landmark Displacement Learning,&#8221;<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>International Conference on Multimedia Information Technology and Applications (MITA 2023)<\/em><\/strong>, Ostrava, Czech Republic, 11-14 July 2023.<br \/><br \/><\/li>\r\n<li>S. Park and <strong>J. Kang*, <\/strong>&#8220;Transcatheter Arterial Chemoembolization Suitability Prediction Model for Hepatocellular Carcinoma,&#8221;<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>International Conference on Multimedia Information Technology and Applications (MITA 2023)<\/em><\/strong>, Ostrava, Czech Republic, 11-14 July 2023.<em><br \/><br \/><\/em><\/li>\r\n<li>(<strong><a href=\"http:\/\/aiunilab.com\/?page_id=196\" data-type=\"URL\" data-id=\"http:\/\/aiunilab.com\/?page_id=196\">NRF CS Top Conference<\/a><\/strong>) <strong>J. Kang, <\/strong>J. Hwang, K. Lee, and S. 
Lee<strong>*<\/strong>, &#8220;<a href=\"https:\/\/doi.org\/10.1145\/3591106.3592273\" target=\"_blank\" rel=\"noopener\">Unlocking Potential of 3D-aware GAN for More Expressive Faces<\/a>,&#8221;<br \/><em><strong>In Proceedings of <span style=\"color: #000000;\">the <\/span>ACM International Conference on Multimedia Retrieval (ACM ICMR 2023)<\/strong><\/em>, Thessaloniki, Greece, 12-15 June 2023. <em>(Acceptance Rate <strong>32.6%<\/strong>)<br \/><br \/><\/em><\/li>\r\n<li>S. Park and <strong>J. Kang*, <\/strong>&#8220;<a href=\"https:\/\/drive.google.com\/file\/d\/1C8_GcipdSQzxv0HIb_4ueu0AKskk7OMF\/view?usp=sharing\">3D Clothed Human Parametric Model from a Single Scan with Joint Optimization<\/a>,&#8221;<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span><\/em>International Conference on Communication and Computer Research (ICCR 2022)<\/strong>, Nov. 2022.<br \/><br \/><\/li>\r\n<!-- \/wp:heading -->\r\n\r\n<!-- wp:list-item -->\r\n<li>S. Park and <strong>J. Kang*, <\/strong>&#8220;A Down-sampling Method of SMPL Model for Blend Skinning SMPL-topological Mesh,&#8221;<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>International Conference on Multimedia Information Technology and Applications (MITA 2022)<\/em><\/strong>, Jeju, Korea, 5-6 Jul. 2022.<br \/><br \/><\/li>\r\n<!-- \/wp:list-item -->\r\n\r\n<!-- wp:list-item -->\r\n<li>(<span style=\"color: #ff6600;\"><strong>Best Paper Award<\/strong><\/span>) S. Park and<strong> J. Kang*, <\/strong>&#8220;<a href=\"https:\/\/drive.google.com\/file\/d\/1C8_GcipdSQzxv0HIb_4ueu0AKskk7OMF\/view?usp=sharing\">An Avatar Generation from a Single Scan<\/a>,&#8221;<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>International Conference on Multimedia Information Technology and Applications (MITA 2022)<\/em><\/strong>, Jeju, Korea, 5-6 Jul. 2022.<br \/><br \/><\/li>\r\n<li><strong>J. Kang<\/strong>, H. Yoon, S. Lee, and S. 
Lee, \u201c<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9689595\" target=\"_blank\" rel=\"noreferrer noopener\">Checkerboard Corner Localization Accelerated with Deep False Detection for Multi-camera Calibration<\/a>,\u201d<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>APSIPA Annual Summit and Conference (APSIPA ASC 2021)<\/em><\/strong>, IEEE, Tokyo, Japan, 14-17 Dec. 2021.<br \/><br \/><\/li>\r\n<li>S. Heo, H. Song, <strong>J. Kang<\/strong>, and S. Lee, \u201c<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9689525\" target=\"_blank\" rel=\"noreferrer noopener\">High-Quality Single Image 3D Facial Shape Reconstruction via Robust Albedo Estimation<\/a>,\u201d<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>APSIPA Annual Summit and Conference (APSIPA ASC 2021)<\/em><\/strong>, IEEE, Tokyo, Japan, 14-17 Dec. 2021.<br \/><br \/><\/li>\r\n<li>H. Yoon, S. Lee, <strong>J. Kang<\/strong>, and S. Lee, \u201c<a href=\"https:\/\/doi.org\/10.1109\/MMSP53017.2021.9733619\" target=\"_blank\" rel=\"noreferrer noopener\">Deep Chessboard Corner Detection Using Multi-task Learning<\/a>,\u201d<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>IEEE Workshop on Multimedia Signal Processing (IEEE MMSP 2021)<\/em><\/strong>, Tampere, Finland, 6-8 Oct. 2021.<br \/><br \/><\/li>\r\n<li><strong>J. Kang<\/strong>, S. Lee, M. Jang, H. Yoon, and S. Lee, \u201c<a href=\"https:\/\/doi.org\/10.1109\/ICIP42928.2021.9506166\" target=\"_blank\" rel=\"noreferrer noopener\">WarpingFusion: Accurate Multi-view TSDF Fusion with Local Perspective Warp<\/a>,\u201d<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>IEEE International Conference on Image Processing (IEEE ICIP 2021)<\/em><\/strong>, Anchorage, USA, 19-22 Sep. 2021.<br \/><br \/><\/li>\r\n<li>(<span style=\"color: #ff6600;\"><strong>Best Paper Award<\/strong><\/span>) <strong>J. Kang<\/strong>, S. Lee, M. 
Jang, and S. Lee, \u201c<a href=\"https:\/\/doi.org\/10.1109\/ICSIPA52582.2021.9576808\" target=\"_blank\" rel=\"noreferrer noopener\">Sparse Checkerboard Corner Detection from Global Perspective<\/a>,\u201d<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>International Conference on Signal and Image Processing Applications (IEEE ICSIPA 2021)<\/em><\/strong>, Virtual, 13-15 Sep. 2021.<br \/><br \/><\/li>\r\n<li>M. Jang, S. Lee, <strong>J. Kang<\/strong>, and S. Lee, \u201c<a href=\"https:\/\/doi.org\/10.1109\/ICSIPA52582.2021.9576787\" target=\"_blank\" rel=\"noreferrer noopener\">Active Stereo Matching Benchmark for 3D Reconstruction using Multi-view Depths<\/a>,\u201d<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>International Conference on Signal and Image Processing Applications (IEEE ICSIPA 2021)<\/em><\/strong>, Virtual, 13-15 Sep. 2021.<br \/><br \/><\/li>\r\n<li><strong>J. Kang<\/strong>, S. Lee, S. Heo, and S. Lee, &#8220;<a href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/9306435\" target=\"_blank\" rel=\"noreferrer noopener\">Image Inpainting using Weighted Mask Convolution<\/a>,&#8221;<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>APSIPA Annual Summit and Conference (APSIPA ASC 2020)<\/em><\/strong>, IEEE, Auckland, New Zealand, 7-10 Dec. 2020.<br \/><br \/><\/li>\r\n<li>(<strong><a href=\"http:\/\/aiunilab.com\/?page_id=196\" data-type=\"URL\" data-id=\"http:\/\/aiunilab.com\/?page_id=196\">NRF CS Top Conference<\/a><\/strong>) H. Song, J. Park, S. Heo, <strong>J. Kang<\/strong>, and S. Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.1145\/3394171.3413966\" target=\"_blank\" rel=\"noreferrer noopener\">PatchMatch based Multiview Stereo with Local Quadric Window<\/a>,&#8221;<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>ACM International Conference on Multimedia (ACM MM 2020)<\/em><\/strong>, Seattle, USA, 12-16 Oct. 
2020.<br \/><br \/><\/li>\r\n<li>(<strong><a href=\"http:\/\/aiunilab.com\/?page_id=196\" data-type=\"URL\" data-id=\"http:\/\/aiunilab.com\/?page_id=196\">NRF CS Top Conference<\/a><\/strong>) <strong>J. Kang<\/strong>, S. Lee, and S. Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.2312\/egs.20201018\" target=\"_blank\" rel=\"noreferrer noopener\">UV Completion with Self-referenced Discrimination<\/a>,&#8221;<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>Eurographics<\/em><\/strong> <em><strong>2020<\/strong><\/em>, <em>Short Paper<\/em>, Norrk\u00f6ping, Sweden, 25-29 May 2020.<br \/><br \/><\/li>\r\n<li>T. Choi, <strong>J. Kang<\/strong>, and S. Lee*, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/ICIP.2018.8451448\" target=\"_blank\" rel=\"noreferrer noopener\">Fitting Facial Models to Spatial Points: Blendshape Approaches and Benchmark<\/a>,&#8221;<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>IEEE International Conference on Image Processing (IEEE ICIP 2018)<\/em><\/strong>, Athens, Greece, 7-10 Oct. 2018.<br \/><br \/><\/li>\r\n<li>H. Song, <strong>J. Kang<\/strong>, and S. Lee, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/ICIP.2018.8451375\" target=\"_blank\" rel=\"noreferrer noopener\">CONCATNET: A Deep Architecture of Concatenation-Assisted Network for Dense Facial Landmark Alignment<\/a>,&#8221;<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>IEEE International Conference on Image Processing (IEEE ICIP 2018)<\/em><\/strong>, Athens, Greece, 7-10 Oct. 2018.<br \/><br \/><\/li>\r\n<li>H. Kim, <strong>J. Kang<\/strong>, J. Kim, D. Kim, K. Lee, S. Lee, and A. C. Bovik, &#8220;Scene Adaptive Saliency Detection on Stereoscopic Videos,&#8221;<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>International Workshop on Video Processing and Quality Metrics (VPQM)<\/em><\/strong>, Chandler, USA, 5-6 Feb. 
2015.<br \/><br \/><\/li>\r\n<li><strong>J. Kang<\/strong>, T. Oh, N. Choi, S. Lee, S. Lee and H. Kang, &#8220;<a href=\"https:\/\/doi.org\/10.1109\/ICOIN.2014.6799705\" target=\"_blank\" rel=\"noreferrer noopener\">Network-based Content Identification System via Content-based Comics Fingerprint<\/a>,&#8221;<br \/><strong><em>In Proceedings of <span style=\"color: #000000;\">the <\/span>International Conference on Information Networking (ICOIN)<\/em><\/strong>, IEEE, Phuket, Thailand, 10-12 Feb. 2014.<\/li>\r\n<!-- \/wp:list-item --><\/ol>\r\n<\/li>\r\n<!-- \/wp:list -->\r\n\r\n<!-- wp:paragraph --><\/ol>\r\n<p>&nbsp;<\/p>\r\n<ol>\r\n<li style=\"list-style-type: none;\">\r\n<ol><!-- \/wp:paragraph -->\r\n\r\n<!-- wp:heading {\"textColor\":\"virtue-primary-light\"} --><\/ol>\r\n<\/li>\r\n<\/ol>\r\n<h2 class=\"has-virtue-primary-light-color has-text-color\">DOMESTIC JOURNAL PUBLICATIONS<\/h2>\r\n<ol>\r\n<li style=\"list-style-type: none;\">\r\n<ol><!-- \/wp:heading -->\r\n\r\n<!-- wp:list {\"ordered\":true} -->\r\n<li style=\"list-style-type: none;\">\r\n<ol><!-- wp:list-item -->\r\n<li>S. Lee, H. Yoon, S. Lee, and <strong>J. 
Kang*, <\/strong>&#8220;<a href=\"https:\/\/doi.org\/10.33851\/JMIS.2023.10.2.101\" target=\"_blank\" rel=\"noopener\">Temporal Facial Alignment with Triple Discriminators<\/a>,&#8221; <br \/><em><strong>Journal of Multimedia Information System<\/strong><\/em>, vol. 10, no. 2, June 2023.<br \/><br \/><\/li>\r\n<li>\uac15\uc9c0\uc6b0, &#8220;<a href=\"http:\/\/convergence.sookmyung.ac.kr\/wp-content\/uploads\/2023\/02\/1.-%EA%B0%95%EC%A7%80%EC%9A%B0.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">\ube44\uc804 \u00b7 \uadf8\ub798\ud53d\uc2a4 \uc778\uacf5\uc9c0\ub2a5 \uae30\uc220\uc744 \ud65c\uc6a9\ud558\uc5ec \ubb34\uc5c7\uc744 \uc5f0\uad6c\ud560 \uc218 \uc788\ub294\uac00?<\/a>,&#8221; <br \/><em><strong>\ucc3d\uc758\uc735\ud569\uc5f0\uad6c \ud559\uc220\uc9c0<\/strong><\/em>, 2\uad8c 2\ud638, 1-23, 2022\ub144 12\uc6d4.<\/li>\r\n<!-- \/wp:list-item --><\/ol>\r\n<\/li>\r\n<!-- \/wp:list -->\r\n\r\n<!-- wp:paragraph --><\/ol>\r\n<\/li>\r\n<\/ol>\r\n<p>&nbsp;<\/p>\r\n<h2 class=\"has-virtue-primary-light-color has-text-color\"><br \/>DOMESTIC CONFERENCE PROCEEDINGS<\/h2>\r\n<ol>\r\n<li style=\"list-style-type: none;\">\r\n<ol><!-- \/wp:heading -->\r\n\r\n<!-- wp:list {\"ordered\":true} -->\r\n<li style=\"list-style-type: none;\">\r\n<ol><!-- wp:list-item -->\r\n<li>\uace0\uc7ac\uc740, \ucd5c\uc724\ud601, \uac15\uc9c0\uc6b0, &#8220;\uc81c\uc5b4 \uac00\ub2a5\ud55c 3D \uc2e4\ub0b4 \uc7a5\uba74 \uc0dd\uc131 \uc5f0\uad6c \ub3d9\ud5a5,&#8221;<br \/><strong><em>\ud55c\uad6d\uc815\ubcf4\ucc98\ub9ac\ud559\ud68c \ucd98\uacc4\ud559\uc220\ub300\ud68c ASK 2026<\/em><\/strong>, \ub77c\uce74\uc774 \uc0cc\ub4dc\ud30c\uc778, \uac15\uc6d0\ub3c4, 2026.05.20 ~ 2026.05.23.<br \/><br \/><\/li>\r\n<li>\uc804\uc720\ucc44, \ucd5c\uc724\ud601, \uac15\uc9c0\uc6b0, &#8220;3\ucc28\uc6d0 \uc7a5\uba74 \uadf8\ub798\ud504 \uad6c\uc131\uc758 \ucd5c\uadfc \uc5f0\uad6c \ub3d9\ud5a5\uacfc \uacfc\uc81c,&#8221;<br \/><strong><em>\ud55c\uad6d\uc815\ubcf4\ucc98\ub9ac\ud559\ud68c 
\ucd98\uacc4\ud559\uc220\ub300\ud68c ASK 2026<\/em><\/strong>, \ub77c\uce74\uc774 \uc0cc\ub4dc\ud30c\uc778, \uac15\uc6d0\ub3c4, 2026.05.20 ~ 2026.05.23.<br \/><br \/><\/li>\r\n<li>\uace0\uc7ac\uc740, \uc774\uc131\ubbfc, \uac15\uc9c0\uc6b0, &#8220;\uc815\uad50\ud55c 3D Gaussian \ud3b8\uc9d1\uc744 \uc704\ud55c \ub2e8\uc77c \ub77c\ubca8 \ud560\ub2f9 \ubc0f \uacbd\uacc4 \ubcf4\uc815,&#8221;<br \/><em><strong>2025 \ud55c\uad6d\uba40\ud2f0\ubbf8\ub514\uc5b4\ud559\ud68c \ucd94\uacc4\ud559\uc220\ubc1c\ud45c\ub300\ud68c<\/strong><\/em>, \uc911\uc559\ub300\ud559\uad50, \ubd80\uc0b0\uc2dc, 2025.11.13 ~ 2025.11.15.<br \/><br \/><\/li>\r\n<li>\uc804\uc720\ucc44, \uace0\uc7ac\uc740, \uac15\uc9c0\uc6b0, &#8220;\ub2e4\uc911 \ud574\uc0c1\ub3c4 \ud2b8\ub77c\uc774\ud50c\ub808\uc778\uc744 \uc774\uc6a9\ud55c \ub2e8\uc77c \ubdf0 3\ucc28\uc6d0 \ubcf5\uc6d0,&#8221;<br \/><em><strong>2025 \ud55c\uad6d\uba40\ud2f0\ubbf8\ub514\uc5b4\ud559\ud68c \ucd94\uacc4\ud559\uc220\ubc1c\ud45c\ub300\ud68c<\/strong><\/em>, \ubd80\uc0b0\ub300\ud559\uad50, \ubd80\uc0b0\uc2dc, 2025.11.13 ~ 2025.11.15.<br \/><br \/><\/li>\r\n<li>\uc804\uc720\ucc44, \uc774\uc131\ubbfc, \uac15\uc9c0\uc6b0, &#8220;\uc0dd\uc131\uc801 \uc801\ub300 \uc2e0\uacbd\uc744 \uc774\uc6a9\ud55c \ud14d\uc2a4\ud2b8-\uc774\ubbf8\uc9c0 \uc0dd\uc131 \uae30\uc220,&#8221;<br \/><em><strong>2025 \ud55c\uad6d\uba40\ud2f0\ubbf8\ub514\uc5b4\ud559\ud68c \ucd98\uacc4\ud559\uc220\ubc1c\ud45c\ub300\ud68c<\/strong><\/em>, \uc911\uc559\ub300\ud559\uad50, \uc11c\uc6b8\uc2dc, 2025.05.08 ~ 2025.05.10.<br \/><br \/><\/li>\r\n<li><strong>(\uc6b0\uc218\ub17c\ubb38\uc0c1) <\/strong>\uace0\uc7ac\uc740, \uc804\uc720\ucc44, \uac15\uc9c0\uc6b0, &#8220;\uc815\ubc00\ud55c Text-to-3D \uc7ac\uad6c\uc131\uc744 \uc704\ud55c VAE \uae30\ubc18 \uc7ac\uad6c\uc131 \ud504\ub808\uc784\uc6cc\ud06c,&#8221;<br \/><em><strong>2025 \ud55c\uad6d\uba40\ud2f0\ubbf8\ub514\uc5b4\ud559\ud68c \ucd98\uacc4\ud559\uc220\ubc1c\ud45c\ub300\ud68c<\/strong><\/em>, \uc911\uc559\ub300\ud559\uad50, 
\uc11c\uc6b8\uc2dc, 2025.05.08 ~ 2025.05.10.<br \/><br \/><\/li>\r\n<li>\uc804\uc720\ucc44, \uac15\uc9c0\uc6b0, &#8220;\uc2dc\uc810 \uc815\ubcf4\ub97c \ud65c\uc6a9\ud55c \ub2e4\uc911 \ubdf0 \ud655\uc0b0 \ubaa8\ub378\uc744 \uc774\uc6a9\ud55c \ud14d\uc2a4\ud2b8 \uae30\ubc18 3\ucc28\uc6d0 \uc7ac\uad6c\uc131,&#8221;<br \/><em><strong>2024 \ud55c\uad6d\uba40\ud2f0\ubbf8\ub514\uc5b4\ud559\ud68c \ucd94\uacc4\ud559\uc220\ubc1c\ud45c\ub300\ud68c<\/strong><\/em>, \uc81c\uc8fc\ub300\ud559\uad50, \uc81c\uc8fc\uc2dc, 2024.11.07 ~ 2024.11.09.<br \/><br \/><\/li>\r\n<li>\uace0\uc7ac\uc740, \uac15\uc9c0\uc6b0, &#8220;\ub2e4\uc911 \uc2dc\uc810 \uc774\ubbf8\uc9c0 \ud655\uc0b0 \ubaa8\ub378\uc758 \ubdf0 \uc77c\uad00\uc131 \ud5a5\uc0c1\uc744 \uc704\ud55c 3D Attention Block \uc124\uacc4,&#8221;<br \/><em><strong>2024 \ud55c\uad6d\uba40\ud2f0\ubbf8\ub514\uc5b4\ud559\ud68c \ucd94\uacc4\ud559\uc220\ubc1c\ud45c\ub300\ud68c<\/strong><\/em>, \uc81c\uc8fc\ub300\ud559\uad50, \uc81c\uc8fc\uc2dc, 2024.11.07 ~ 2024.11.09.<br \/><br \/><\/li>\r\n<li>\uc804\uc720\ucc44, \uac15\uc9c0\uc6b0, &#8220;\ub2e8\uc77c \uc774\ubbf8\uc9c0 \uae30\ubc18 3D \uac1d\uccb4 \uc7ac\uad6c\uc131,&#8221;<br \/><em><strong>2024 \ud55c\uad6d\uba40\ud2f0\ubbf8\ub514\uc5b4\ud559\ud68c \ucd98\uacc4\ud559\uc220\ubc1c\ud45c\ub300\ud68c<\/strong><\/em>, \uc219\uba85\uc5ec\uc790\ub300\ud559\uad50, \uc11c\uc6b8\uc2dc, 2024.05.16 ~ 2024.05.18.<br \/><br \/><\/li>\r\n<li>\uace0\uc7ac\uc740, \uac15\uc9c0\uc6b0, &#8220;\uba40\ud2f0\ubdf0 \uc7ac\uad6c\uc131\uc744 \uc704\ud55c \uc2e0\uacbd\ub9dd \uae30\ubc18 3D \ud45c\uba74 \uc7ac\uad6c\uc131 \ubc29\ubc95,&#8221;<br \/><em><strong>2024 \ud55c\uad6d\uba40\ud2f0\ubbf8\ub514\uc5b4\ud559\ud68c \ucd98\uacc4\ud559\uc220\ubc1c\ud45c\ub300\ud68c<\/strong><\/em>, \uc219\uba85\uc5ec\uc790\ub300\ud559\uad50, \uc11c\uc6b8\uc2dc, 2024.05.16 ~ 2024.05.18.<br \/><br \/><\/li>\r\n<li>\ud669\uc601\uc11c, \uc2e0\uacbd\uc6d0, \ud55c\ucc44\uc5f0, \uac15\uc9c0\uc6b0, &#8220;\ubd84\ud560 \ubc0f \uc640\ud551 
\uae30\uc220\uc744 \ud65c\uc6a9\ud55c \uac00\uc0c1 \uc758\ub958 \ud53c\ud305,&#8221;<br \/><em><strong>2024 \ud55c\uad6d\uba40\ud2f0\ubbf8\ub514\uc5b4\ud559\ud68c \ucd98\uacc4\ud559\uc220\ubc1c\ud45c\ub300\ud68c<\/strong><\/em>, \uc219\uba85\uc5ec\uc790\ub300\ud559\uad50, \uc11c\uc6b8\uc2dc, 2024.05.16 ~ 2024.05.18.<br \/><br \/><\/li>\r\n<li>\uc804\uc720\ucc44, \uac15\uc9c0\uc6b0, &#8220;\ub2e8\uc548 \uc774\ubbf8\uc9c0\ub97c \uc774\uc6a9\ud55c 3\ucc28\uc6d0 \uc790\uae30\uc9c0\ub3c4 \uc5bc\uad74 \ubaa8\ub378 \ud559\uc2b5,&#8221;<br \/><em><strong>2023 \ud55c\uad6d\uba40\ud2f0\ubbf8\ub514\uc5b4\ud559\ud68c \ucd94\uacc4\ud559\uc220\ubc1c\ud45c\ub300\ud68c<\/strong><\/em>, \uacc4\uba85\ub300\ud559\uad50, \ub300\uad6c\uc2dc, 2023.11.17 ~ 2023.11.18.<br \/><br \/><\/li>\r\n<li>\uace0\uc7ac\uc740, \uac15\uc9c0\uc6b0, &#8220;\uc2e4\uc2dc\uac04 \ud37c\ud3ec\uba3c\uc2a4 \uae30\ubc18 \uc751\uc6a9\uc744 \uc704\ud55c \uc774\uc911\uc120\ud615 \uc5bc\uad74\ubaa8\ub378,&#8221;<br \/><em><strong>2023 \ud55c\uad6d\uba40\ud2f0\ubbf8\ub514\uc5b4\ud559\ud68c \ucd94\uacc4\ud559\uc220\ubc1c\ud45c\ub300\ud68c<\/strong><\/em>, \uacc4\uba85\ub300\ud559\uad50, \ub300\uad6c\uc2dc, 2023.11.17 ~ 2023.11.18.<br \/><br \/><\/li>\r\n<li>\ubc15\uc18c\ud604, \uac15\uc9c0\uc6b0, &#8220;\ub2e4\uc911 \uc2dc\uc810 \uc774\ubbf8\uc9c0\uc640 \ub2e8\uc77c \uc2e0\uacbd\ub9dd\uc744 \uc0ac\uc6a9\ud55c 3\ucc28\uc6d0 \uc7ac\uad6c\uc131\uacfc \ub80c\ub354\ub9c1,&#8221;<br \/><em><strong>2023 \ud55c\uad6d\uba40\ud2f0\ubbf8\ub514\uc5b4\ud559\ud68c \ucd94\uacc4\ud559\uc220\ubc1c\ud45c\ub300\ud68c<\/strong><\/em>, \uacc4\uba85\ub300\ud559\uad50, \ub300\uad6c\uc2dc, 2023.11.17 ~ 2023.11.18.<br \/><br \/><\/li>\r\n<li>\uae40\ubbfc\uc9c0, \uc1a1\uc9c0\ube48, \uc2e0\uc815\uc740, \uac15\uc9c0\uc6b0, &#8220;\uc2e4\uc2dc\uac04 \ub3d9\uc601\uc0c1 \uc5bc\uad74 \ubaa8\uc790\uc774\ud06c \uad6c\ud604 \uae30\uc220,&#8221;<br \/><em><strong>2023 \ud55c\uad6d\uba40\ud2f0\ubbf8\ub514\uc5b4\ud559\ud68c 
\ucd94\uacc4\ud559\uc220\ubc1c\ud45c\ub300\ud68c<\/strong><\/em>, \uacc4\uba85\ub300\ud559\uad50, \ub300\uad6c\uc2dc, 2023.11.17 ~ 2023.11.18.<br \/><br \/><\/li>\r\n<li>\ubc15\uc18c\ud604, \uc804\uc720\ucc44, \uace0\uc7ac\uc740, <strong>\uac15\uc9c0\uc6b0<\/strong>, &#8220;<a href=\"https:\/\/kiss.kstudy.com\/Detail\/Ar?key=4059418\" target=\"_blank\" rel=\"noopener\">3\ucc28\uc6d0 \ubaa8\uc158\uc744 \ud1b5\ud55c \uc544\ubc14\ud0c0 \uc0dd\uc131 \uae30\uc220<\/a>,&#8221;<br \/><strong><em>\ud55c\uad6d\uc815\ubcf4\ucc98\ub9ac\ud559\ud68c \ucd94\uacc4\ud559\uc220\ub300\ud68c ACK 2023<\/em><\/strong>, \ubd80\uacbd\ub300\ud559\uad50, \ubd80\uc0b0\uc2dc, 2023.11.02 ~ 2023.11.04.<br \/><br \/><\/li>\r\n<li>\ubc15\uc18c\ud604, <strong>\uac15\uc9c0\uc6b0<\/strong>, &#8220;\uc74c\uc545\uc5d0 \uc5b4\uc6b8\ub9ac\ub294 \ucda4 \uc790\ub3d9 \uc0dd\uc131\uc744 \uc704\ud55c \ud2b8\ub79c\uc2a4\ud3ec\uba38 \uae30\ubc18 GAN \ubaa8\ub378,&#8221;<br \/><strong><em>2023\ub144\ub3c4 \ud55c\uad6d\uba40\ud2f0\ubbf8\ub514\uc5b4\ud559\ud68c \ucd98\uacc4\ud559\uc220\ub300\ud68c<\/em><\/strong>, \uc21c\ucc9c\ub300\ud559\uad50, \uc21c\ucc9c\uc2dc, 2023.05.19 ~ 2023.05.20.<br \/><br \/><\/li>\r\n<li><strong>(\ub17c\ubb38\uacbd\uc9c4\ub300\ud68c \ub3d9\uc0c1) <\/strong>\uba85\uc9c0\uc5f0, \uc724\uc5f0\uacbd, <strong>\uac15\uc9c0\uc6b0<\/strong>, &#8220;<a href=\"https:\/\/kiss.kstudy.com\/Detail\/Ar?key=4028354\" target=\"_blank\" rel=\"noopener\">\uc9c0\uc5ed \uae30\ubc18 \ubc18\ub824\uacac \uc0b0\ucc45 \uc5b4\ud50c\ub9ac\ucf00\uc774\uc158<\/a>,&#8221; <br \/><strong><em>\ud55c\uad6d\uc815\ubcf4\ucc98\ub9ac\ud559\ud68c \ucd98\uacc4\ud559\uc220\ub300\ud68c ASK 2023<\/em><\/strong>, \uc11c\uc6b8\ub300\ud559\uad50, \uc11c\uc6b8\uc2dc, 2023.05.18 ~ 2023.05.20.\u00a0<strong><br \/><br \/><\/strong><\/li>\r\n<li>\ubc15\uc18c\ud604, \uc815\uc720\uc9c4, \ubc15\uadfc\uc601, <strong>\uac15\uc9c0\uc6b0<\/strong>, &#8220;<a href=\"https:\/\/kiss.kstudy.com\/Detail\/Ar?key=4028446\" target=\"_blank\" 
rel=\"noopener\">\uc74c\uc545\uc5d0 \uc5b4\uc6b8\ub9ac\ub294 \ucda4 \uc790\ub3d9 \uc0dd\uc131 \ubc0f \uc2e4\uc2dc\uac04 \ucda4 \ubaa8\uc158 \ud310\uc815<\/a>,&#8221; <br \/><strong><em>\ud55c\uad6d\uc815\ubcf4\ucc98\ub9ac\ud559\ud68c \ucd98\uacc4\ud559\uc220\ub300\ud68c ASK 2023<\/em><\/strong>, \uc11c\uc6b8\ub300\ud559\uad50, \uc11c\uc6b8\uc2dc, 2023.05.18 ~ 2023.05.20.<br \/><br \/><\/li>\r\n<li><strong>(\ub17c\ubb38\uacbd\uc9c4\ub300\ud68c \ub300\uc0c1) <\/strong>\ubc15\uc18c\ud604, <strong>\uac15\uc9c0\uc6b0<\/strong>, &#8220;<a href=\"https:\/\/kiss.kstudy.com\/thesis\/thesis-view.asp?key=3988505\">\ud76c\uc18c \ud68c\uadc0\uc790\ub97c \uace0\ub824\ud55c 3\ucc28\uc6d0 \uc778\uccb4 \ubaa8\ub378 \ub2e4\uc6b4 \uc0d8\ud50c\ub9c1<\/a>,&#8221; <br \/><strong><em>\ud55c\uad6d\uc815\ubcf4\ucc98\ub9ac\ud559\ud68c \ucd94\uacc4\ud559\uc220\ub300\ud68c ACK 2022<\/em><\/strong>, \ud55c\ub9bc\ub300\ud559\uad50, \uac15\uc6d0\ub3c4, 2022.11.03 ~ 2022.11.05.<\/li>\r\n<!-- \/wp:list-item --><\/ol>\r\n<\/li>\r\n<!-- \/wp:list -->\r\n\r\n<!-- wp:paragraph --><\/ol>\r\n<\/li>\r\n<\/ol>\r\n<p>&nbsp;<\/p>\r\n<ol><!-- \/wp:paragraph --><\/ol>\r\n<!-- \/wp:list -->\r\n<p>&nbsp;<\/p>\r\n<div style=\"all: initial !important;\">\u00a0<\/div>\r\n<div style=\"all: initial !important;\">\u00a0<\/div>","protected":false},"excerpt":{"rendered":"<p>*\u00a0 corresponding author, + equally contribution &nbsp; ONGOING WORKS &nbsp; (NRF CS Top Conference) UC. Jun, J. Ko, and J. 
Kang*, &#8220;Latent Diffusion Meets GAN: Adversarial Learning in the Autoencoded Latent Space,&#8221; In Proceedings of the Eurographics 2026 (Computer Graphics &hellip; <a href=\"https:\/\/ai.sookmyung.ac.kr\/?page_id=75\">Continued<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"kt_blocks_editor_width":"","footnotes":""},"class_list":["post-75","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ai.sookmyung.ac.kr\/index.php?rest_route=\/wp\/v2\/pages\/75","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ai.sookmyung.ac.kr\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ai.sookmyung.ac.kr\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ai.sookmyung.ac.kr\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/ai.sookmyung.ac.kr\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=75"}],"version-history":[{"count":333,"href":"https:\/\/ai.sookmyung.ac.kr\/index.php?rest_route=\/wp\/v2\/pages\/75\/revisions"}],"predecessor-version":[{"id":1175,"href":"https:\/\/ai.sookmyung.ac.kr\/index.php?rest_route=\/wp\/v2\/pages\/75\/revisions\/1175"}],"wp:attachment":[{"href":"https:\/\/ai.sookmyung.ac.kr\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=75"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}