Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in Journal 1, 2009
This paper is about the number 1. The number 2 is left for future work.
Recommended citation: Your Name, You. (2009). "Paper Title Number 1." Journal 1. 1(1). http://academicpages.github.io/files/paper1.pdf
Published in Journal 1, 2010
This paper is about the number 2. The number 3 is left for future work.
Recommended citation: Your Name, You. (2010). "Paper Title Number 2." Journal 1. 1(2). http://academicpages.github.io/files/paper2.pdf
Published in Journal 1, 2015
This paper is about the number 3. The number 4 is left for future work.
Recommended citation: Your Name, You. (2015). "Paper Title Number 3." Journal 1. 1(3). http://academicpages.github.io/files/paper3.pdf
Published in EMNLP 2025, 2025
We propose LM-Searcher, a novel approach for cross-domain neural architecture search using LLMs via unified numerical encoding.
Recommended citation: Yuxuan Hu, Jihao Liu, Ke Wang, Jinliang Zhen, Weikang Shi, Manyuan Zhang, Qi Dou, Rui Liu, Aojun Zhou, Hongsheng Li (2025). "LM-Searcher: Cross-domain Neural Architecture Search with LLMs via Unified Numerical Encoding." EMNLP 2025. https://arxiv.org/abs/2509.05657
Published in ICLR 2026, 2026
We introduce Mmsearch-plus, a benchmark for provenance-aware search in multimodal browsing agents.
Recommended citation: Xijia Tao, Yihua Teng, Xinxing Su, Xinyu Fu, Jihao Wu, Chaofan Tao, Ziru Liu, Haoli Bai, Rui Liu, Lingpeng Kong (2026). "Mmsearch-plus: Benchmarking Provenance-aware Search for Multimodal Browsing Agents." ICLR 2026. https://arxiv.org/abs/2508.21475
Published in ICLR 2026, 2026
We present Pusa v1.0, an Image-to-Video model that surpasses Wan-I2V with only $500 training cost using vectorized timestep adaptation.
Recommended citation: Yaofang Liu, Yumeng Ren, Aitor Artola, Yuxuan Hu, Xiaodong Cun, Xiaotong Zhao, Alan Zhao, Raymond H. Chan, Suiyun Zhang, Rui Liu, Dandan Tu, Jean-Michel Morel (2026). "Pusa v1.0: Surpassing Wan-I2V with $500 Training Cost by Vectorized Timestep Adaptation." ICLR 2026. https://arxiv.org/abs/2507.16116
Published:
This is a description of your talk, which is a markdown file that can be marked up like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different value in the type field. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.