Joonghyuk Shin

Or you can also call me Alex 😀

joonghyuk AT snu.ac.kr

Hi! I am a CS graduate student at SNU, advised by Professor Jaesik Park. I am interested in building fast and interactive generative models, though I am open to other areas as well. Recently, I have been working closely with Xun Huang on video models.

News 🗞️📰

2025/06
A paper on text-based image editing was accepted to ICCV 2025 (more details coming soon).
2024/10
I will be joining Adobe Research in San Francisco as an intern on Eli Shechtman's team in Summer 2025, followed by a research visit to CMU's Robotics Institute in Fall 2025, hosted by Jun-Yan Zhu.

Education 🎓🏫

Seoul National University (SNU)
Sep. 2023 - Present, CSE, Integrated M.S. and Ph.D. (Advisor: Jaesik Park)

Pohang University of Science and Technology (POSTECH)
Feb. 2019 - Feb. 2023, CSE, B.S. (Summa Cum Laude)

Experience 💼🔬

Carnegie Mellon University (CMU)
Sep. 2025 - Dec. 2025 (Planned), Robotics Institute, Visiting Researcher (Host: Jun-Yan Zhu)

Adobe Research
June 2025 - Sep. 2025, Research Scientist Intern
Working with Xun Huang, Zhengqi Li, Richard Zhang, Jun-Yan Zhu, and Eli Shechtman on fast and interactive video generative models

Publications 📄📑

* Equal contribution, † Equal advising


JAM-Flow: Joint Audio-Motion Synthesis with Flow Matching (arXiv 2025)
Mingi Kwon*, Joonghyuk Shin*, Jaeseok Jung, Jaesik Park†, Youngjung Uh
More details coming soon!


Exploring Multimodal Diffusion Transformers for Enhanced Prompt-based Image Editing (ICCV 2025)
Joonghyuk Shin, Alchan Hwang, Yujin Kim, Daneul Kim, Jaesik Park
More details coming soon!

InstantDrag: Improving Interactivity in Drag-based Image Editing (SIGGRAPH Asia 2024)
Joonghyuk Shin, Daehyeon Choi, Jaesik Park - [Paper | Project Page | Code]
We present InstantDrag, an optimization-free pipeline for fast, interactive drag-based image editing. It requires only an image and a drag instruction as input, learning from real-world video datasets.

Fill-Up: Balancing Long-Tailed Data with Generative Models (arXiv 2023)
Joonghyuk Shin, Minguk Kang, Jaesik Park - [Paper | Project Page]
We present a two-stage method for long-tailed recognition using textual-inverted tokens to generate synthetic images, achieving state-of-the-art results on standard benchmarks when trained from scratch.

StudioGAN: A Taxonomy and Benchmark of GANs for Image Synthesis (TPAMI 2023)
Minguk Kang, Joonghyuk Shin, Jaesik Park - [Paper | Code (3400+)]
We present StudioGAN, a comprehensive library for GANs that reproduces over 30 popular models, providing extensive benchmarks and a fair evaluation protocol for image synthesis tasks.

Honors and Awards 🎖️🎉

  • Award for Outstanding Poster Presentation, IPIU (2023, 2025)
    • Awarded to the paper "Using Large Scale Text-to-Image Model as a Data Source for Classification" (2023)
    • Awarded to the paper "Precise Text-Conditioned Image Editing Using Multimodal Diffusion Transformers" (2025)
  • Summa Cum Laude, POSTECH (2023)
    • Highest graduation honor
  • National Science and Engineering Scholarship, Korea Student Aid Foundation (2021, 2022)
    • Scholarship based on academic excellence
  • Best Graduation Project, POSTECH CSE (2022)
    • Awarded to the project “Large scale generative model as a data source for vision tasks”
  • Silver Award, UNI-DTHON Datathon, UNI-D (Union of Korean University Students for CS) (2021)
    • Competition on classifying food images
  • Global Leadership Program, POSTECH CSE (2020, 2021)
    • Scholarship based on academic excellence
  • Best Undergraduate Research Program, POSTECH (2020)
    • Awarded for the project “Neural Point Cloud Rendering of POSTECH”
  • Jigok Scholarship, POSTECH (2019, 2020)
    • Scholarship based on academic performance

Personal ⚾️🧑🏻‍💻

I am a big fan of baseball. I played for the POSTECH baseball team (Tachyons) for five years, serving as captain and catcher.
I love animals and live with a dog named Poby. I also enjoy Pokémon, traveling, and FIFA video games.