Yu Lu

PhD Student

University of Technology Sydney (UTS)
Email: aniki.yulu [AT] gmail dot com

Yu Lu (路雨) is a PhD candidate at the ReLER Lab, Australian Artificial Intelligence Institute (AAII), University of Technology Sydney (UTS), advised by Prof. Yi Yang and Dr. Linchao Zhu. His current research interests include multi-modal large language models and video generation.

We are seeking a research intern specializing in multi-modal video understanding. Please feel free to drop me an email if you are interested in working with us.


Experiences

Post-Thesis Research Visit: WeiXin Group, Tencent, Jun. 2024 - Present
Advisor: Dr. Fengyun Rao

Research Visit: CCAI, Zhejiang University, Jun. 2023 - Sep. 2023
Advisor: Prof. Yi Yang

Research Intern: Kolors Team, Kwai Tech, Sep. 2021 - Mar. 2022
Advisor: Dr. Debing Zhang

Research Intern: Tencent AI, Apr. 2021 - Jun. 2021
Advisor: Dr. Yuchen Yuan

Research Intern: IDL, Baidu Research, Jul. 2019 - Jul. 2020
Advisor: Prof. Guodong Guo


Publications

FreeLong: Training-Free Long Video Generation with SpectralBlend Temporal Attention
Yu Lu, Yuanzhi Liang, Linchao Zhu, Yi Yang
NeurIPS 2024
[Paper] [Project]

Automated Multi-level Preference for MLLMs
Mengxi Zhang, Wenhao Wu, Yu Lu, Yuxin Song, Kang Rong, Huanjin Yao, Jianbo Zhao, Fanglong Liu, Yifan Sun, Haocheng Feng, Jingdong Wang
NeurIPS 2024
[Paper] [Code]

FlowZero: Zero-Shot Text-to-Video Synthesis with LLM-Driven Dynamic Scene Syntax
Yu Lu, Linchao Zhu, Hehe Fan, Yi Yang
arXiv preprint
[Paper] [Project] [Code]

Exploiting Unlabeled Videos for Video-Text Retrieval via Pseudo-Supervised Learning
Yu Lu, Ruijie Quan, Linchao Zhu, Yi Yang
Under Review at TIP
[Paper] [Code]

Zero-shot Video Grounding with Pseudo Query Lookup and Verification
Yu Lu, Ruijie Quan, Linchao Zhu, Yi Yang
IEEE Transactions on Image Processing (TIP), 2024
[Paper] [Code]

Show Me a Video: A Large-Scale Narrated Video Dataset for Coherent Story Illustration
Yu Lu, Feiyue Ni, Haofan Wang, Xiaofeng Guo, Linchao Zhu, Zongxin Yang, Ruihua Song, Lele Cheng, Yi Yang
IEEE Transactions on Multimedia (TMM), 2023
[Paper] [Project]

ECLIP: Efficient Contrastive Language-Image Pretraining via Ensemble Confidence Learning and Masked Language Modeling
Jue Wang, Haofan Wang, Weijia Wu, Jincan Deng, Yu Lu, Xiaofeng Guo, Debing Zhang
ICML 2022 Pre-training Workshop
[Paper] [Code]

CRIS: CLIP-Driven Referring Image Segmentation
Zhaoqing Wang*, Yu Lu*, Qiang Li, Xunqiang Tao, Yandong Guo, Mingming Gong, Tongliang Liu (* equal contribution)
CVPR 2022
[Paper] [Code]

GINet: Graph Interaction Network for Scene Parsing
Tianyi Wu*, Yu Lu*, Yu Zhu, Chang Zhang, Ming Wu, Zhanyu Ma, Guodong Guo (* equal contribution)
ECCV 2020
[Paper] [Code]


Professional Activities

Journal Review:
TPAMI, TIP, KBS

Conference Review:
CVPR, ICCV, ECCV, ACL, NeurIPS, ICLR

News


Sep 2024
Two papers (FreeLong and AMP) accepted by NeurIPS 2024!

Sep 2024
Defended my Ph.D. thesis, "Zero-shot Natural Language-Driven Video Analysis and Synthesis"!

25 Jan 2024
Our paper "Zero-shot Video Grounding with Pseudo Query Lookup and Verification" is accepted by TIP2024.

4 July 2023
Our paper "Show Me a Video: A Large-Scale Narrated Video Dataset for Coherent Story Illustration " is accepted by TMM2023.

14 Jun 2022
Our paper "EfficientCLIP: Efficient Cross-Modal Pre-training by Ensemble Confident Learning and Language Modeling" is accepted by ICML Pre-training Workshop.

4 March 2022
Our paper "CRIS: CLIP-Driven Referring Image Segmentation " is accepted by CVPR2022.

11 July 2020
Our paper "GINet: Graph Interaction Network for Scene Parsing " is accepted by ECCV2020.