Mingkai Chen
Email / CV / Google Scholar / Github
I am currently a first-year doctoral student in the Department of Computer Engineering at the Rochester Institute of Technology.
Prior to that, I graduated from Stony Brook University with a Bachelor of Science in Computer Science.
Under the supervision of Prof. Dongfang Liu, I am actively conducting research in the field of Artificial Intelligence.
My research interests cover a wide range of topics in this field, including AI for Science, Large Language Models, and Multi-modal Models.
I'm a highly self-motivated student researcher and am now looking for internship opportunities during Summer 2025.
Please feel free to email me if my experience might be a good fit for your lab.
Research
My research interests lie in the field of Artificial Intelligence. Most of my current work focuses on
AI for Science, Large Language Models, and Multi-modal Models.
Inertial Confinement Fusion Forecasting via Large Language Models
Mingkai Chen,
Taowen Wang,
Shihui Cao,
James Chenhao Liang,
Chuan Liu,
Chunshu Wu,
Qifan Wang,
Ying Nian Wu,
Michael Huang,
Chuang Ren,
Ang Li,
Tong Geng,
Dongfang Liu
arXiv preprint; under review at ACL 2025
We developed LPI-LLM, which integrates Large Language Models with reservoir computing to address Laser-Plasma Instabilities (LPI) in Inertial Confinement Fusion (ICF). We designed fusion-specific models that deliver accurate hot-electron predictions and actionable insights, achieving state-of-the-art performance in forecasting Hard X-ray (HXR) energies at negligible computational cost compared to traditional simulation methods. In addition, we created LPI4AI, the first experimental benchmark for advancing AI-driven fusion research.
Diff-PIC: Revolutionizing Particle-In-Cell Nuclear Fusion Simulation with Diffusion Models
Chuan Liu,
Chunshu Wu,
Shihui Cao,
Mingkai Chen,
James Chenhao Liang,
Ang Li,
Michael Huang,
Chuang Ren,
Dongfang Liu,
Ying Nian Wu,
Tong Geng
ICLR 2025
We developed Diff-PIC, a paradigm that uses conditional diffusion models to efficiently simulate Laser-Plasma Interaction (LPI) for nuclear fusion research. We designed a distillation process to capture physical patterns from Particle-in-Cell (PIC) simulations, and addressed key challenges with a physically informed model and a rectified-flow technique, enhancing both efficiency and fidelity. This innovation significantly reduces computational barriers in nuclear fusion research, advancing sustainable energy solutions.
A Benchmark and Chain-of-Thought Prompting Strategy for Large Multimodal Models with Multiple Image Inputs
Daoan Zhang*,
Junming Yang*,
Hanjia Lyu*,
Zijian Jin,
Yuan Yao,
Mingkai Chen,
Jiebo Luo
ICPR 2024
We investigated Large Multimodal Models' (LMMs) ability to process multiple image inputs, focusing on fine-grained perception and information blending. Our research involved image-to-image matching and multi-image-to-text matching assessments, using models like GPT-4V and Gemini. We developed a Contrastive Chain-of-Thought (CoCoT) prompting method to improve LMMs' multi-image understanding, significantly enhancing model performance in our evaluations.
Aggregation of Disentanglement: Reconsidering Domain Variations in Domain Generalization
Daoan Zhang*,
Mingkai Chen*,
Chenming Li,
Lingyun Huang,
Jianguo Zhang
arXiv preprint; under review at IJCV
We proposed a new perspective that utilizes class-aware, domain-variant features during training; at inference time, our model effectively maps target domains into the latent space where the known domains lie. We also designed a contrastive-learning-based paradigm to calculate the weights for unseen domains.
* equal contribution.
Research Associate, Department of Computer Engineering, Rochester Institute of Technology
Student Assistant, Department of Computer Science, Stony Brook University