About me

Self Introduction

I am Jiacheng Luo (罗嘉诚), a junior student in the Department of Computer Science and Engineering (CSE) at SUSTech, majoring in Computer Science and Technology. My academic advisor is Prof. Jianguo Zhang, and my life advisor is Assistant Prof. Bin Zhu.

  • Prof. Jianguo Zhang leads the CVIP Group laboratory at SUSTech. He previously served as a Reader in the School of Science and Engineering at the University of Dundee, UK, and as Director of International Cooperation in the Department of Computer Science.
  • Dr. Bin Zhu is an assistant professor and doctoral supervisor in the School of Public Health and Emergency Management (SPHEM) at SUSTech.

The main research areas of the CVIP Group laboratory are computer vision, medical image and information processing, machine learning, and artificial intelligence.

My research interests include Multi-Modal Machine Learning (MMML), Few-Shot Learning (FSL), and Parameter-Efficient Fine-Tuning (PEFT).

Academic Background

  • Sept 2021 - Jun 2025: Southern University of Science and Technology (B.Eng.)

Research Interests

Multi-Modal Machine Learning (MMML)
    Humans perceive the world through multiple senses, such as sight, hearing, and touch. Multi-Modal Machine Learning (MMML) studies machine learning problems involving data from different modalities, most commonly vision, text, and sound. These modalities usually come from different sensors, and both how the data are formed and their internal structure differ significantly: images occupy a continuous space that exists naturally in the world, while text is a discrete space organized by human knowledge and grammatical rules. This heterogeneity of multimodal data makes it challenging to learn the correlations and complementarities among modalities.
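The heterogeneity described above can be made concrete with a toy sketch: a continuous image tensor and a discrete token sequence are mapped by separate (here, purely illustrative random linear) encoders into one shared embedding space, where their similarity can be compared directly. All names and the encoder design are hypothetical; real MMML systems use deep networks trained with objectives such as contrastive alignment.

```python
import numpy as np

rng = np.random.default_rng(0)

# An image is a point in a continuous space: a dense pixel tensor.
image = rng.random((8, 8, 3))                  # toy 8x8 RGB image

# Text is discrete: a sequence of token ids from a fixed vocabulary.
vocab_size = 100
text = np.array([12, 7, 56])                   # toy token ids

# Hypothetical per-modality encoders: random linear maps into a
# shared 16-dim embedding space (stand-ins for deep networks).
dim = 16
W_img = rng.standard_normal((image.size, dim))
E_txt = rng.standard_normal((vocab_size, dim))  # token embedding table

def encode_image(img):
    """Flatten the continuous pixel tensor and project it."""
    return img.reshape(-1) @ W_img

def encode_text(tokens):
    """Look up discrete token embeddings and mean-pool them."""
    return E_txt[tokens].mean(axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Both modalities now live in the same space, so their correlation
# can be measured directly despite their heterogeneous origins.
similarity = cosine(encode_image(image), encode_text(text))
print(f"image-text similarity: {similarity:.3f}")
```

With untrained random encoders this similarity is meaningless; the point of MMML training is to shape both encoders so that matching image-text pairs score high and mismatched pairs score low.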
Few-Shot Learning (FSL)
    Few-shot learning (FSL) is a machine learning approach that trains models on very limited labeled data. The conventional practice is to give models as much data as possible, since in most applications more data helps the model make better predictions. Few-shot learning instead aims to build accurate models from only a handful of training examples. Since the amount of training data largely determines resource costs (time, computation, annotation, etc.), few-shot learning can substantially lower the cost of data analysis and machine learning.
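A minimal sketch of the idea, in the spirit of prototypical networks: in a toy 3-way 5-shot episode, the entire "model" is one mean vector (prototype) per class, computed from just five labeled examples each, and queries are classified by nearest prototype. The synthetic data and all names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy 3-way 5-shot episode: 3 classes, only 5 labeled examples each.
n_way, k_shot, dim = 3, 5, 4
class_centers = rng.standard_normal((n_way, dim)) * 3.0

# Support set: the few labeled examples we are allowed to learn from.
support = np.stack([c + 0.3 * rng.standard_normal((k_shot, dim))
                    for c in class_centers])     # (n_way, k_shot, dim)

# One prototype (mean vector) per class is the entire classifier.
prototypes = support.mean(axis=1)                # (n_way, dim)

def classify(x):
    """Predict the class whose prototype is closest to x."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    return int(np.argmin(dists))

# Query: an unseen example drawn near the center of class 2.
query = class_centers[2] + 0.3 * rng.standard_normal(dim)
print("predicted class:", classify(query))
```

In real FSL the raw inputs are first passed through a learned embedding network, and the episodes are sampled repeatedly during meta-training; only the nearest-prototype decision rule is shown here.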
Parameter-Efficient Fine-Tuning (PEFT)
    Recent deep learning research has produced many large pre-trained models, such as GPT-3, BERT, and ViT, which achieve excellent performance across natural language processing and even visual tasks. However, training these models is very expensive, requiring enormous amounts of computation and data. Parameter-Efficient Fine-Tuning (PEFT) aims to adapt pre-trained models to new tasks while minimizing the number of fine-tuned parameters and the computational complexity, thereby easing the cost of working with large pre-trained models and enabling efficient transfer learning.
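A concrete way to see the parameter savings is a LoRA-style sketch: the large pre-trained weight matrix stays frozen, and only a low-rank update A @ B is trainable. The matrix sizes and names below are illustrative assumptions, not any particular model's configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

# A "pre-trained" weight matrix, kept frozen during fine-tuning.
d_in, d_out = 512, 512
W_frozen = rng.standard_normal((d_in, d_out)) * 0.02

# LoRA-style low-rank adapter: only A and B are trainable.
rank = 4
A = rng.standard_normal((d_in, rank)) * 0.01
B = np.zeros((rank, d_out))   # zero-init so the adapter starts as a no-op

def forward(x):
    """Frozen path plus the low-rank update: x @ (W + A @ B)."""
    return x @ W_frozen + (x @ A) @ B

full_params = W_frozen.size
peft_params = A.size + B.size
print(f"trainable params: {peft_params} vs full fine-tuning: {full_params} "
      f"({100 * peft_params / full_params:.2f}%)")
```

Here the adapter trains roughly 1.6% of the parameters a full fine-tune would touch, which is the core trade PEFT methods make: a small trainable footprint in exchange for (usually minor) task-performance differences.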

News and Updates

  • [Aug 31, 2023] My personal academic website is online!
  • [Jul 18, 2023] Honored to join the CVIP Group as a formal member, and I hope to do good work!
  • [Aug 22, 2022] Happy to join the CVIP Group as an unofficial attending student!