About me
I am an Assistant Professor at the School of Computer Science and Engineering, Beihang University, China. My research focuses on software engineering, including AI for SE, program understanding, code generation, and program repair.
Education
- 2017/09-2022/07, Peking University, Ph.D. in Computer Science
- 2013/09-2017/06, Chongqing University, B.S. in Computer Science
Teaching
- Compiler Technology at Beihang University (Fall)
- Compulsory course
- Credits/Hours: 4.5/96 (48 lecture hours + 48 lab hours)
Publications (# means co-first author; * means corresponding author)
- [TOSEM’25] Jia Li, Chongyang Tao, Jia Li, Ge Li*, Zhi Jin*, Huangzhao Zhang, Zheng Fang, Fang Liu, Large Language Model-Aware In-Context Learning for Code Generation (CCF-A)
- [ASE’24] Fang Liu#, Zhenwei Liu#, Qianhui Zhao, Jing Jiang*, Li Zhang, Zian Sun, Ge Li, Zhongqi Li, Yuchi Ma, FastFixer: An Efficient and Effective Approach for Repairing Programming Assignments (CCF-A)
- [ASE’24] Jiuang Zhao, Donghao Yang, Li Zhang, Xiaoli Lian, Zitian Yang, Fang Liu, Enhancing Automated Program Repair with Solution Design (CCF-A)
- [FSE’24] Zhen Yang, Fang Liu*, Zhongxing Yu*, Jacky Wai Keung, Jia Li, Shuo Liu, Yifan Hong, Xiaoxue Ma, Zhi Jin, Ge Li, Exploring and Unleashing the Power of Large Language Models in Automated Code Translation (CCF-A)
- [ICSE’24 poster] Xiaoli Lian, Shuaisong Wang, Jieping Ma, Xin Tan, Fang Liu, Lin Shi, Cuiyun Gao, Li Zhang, Imperfect Code Generation: Uncovering Weaknesses in Automatic Code Generation by Large Language Models (CCF-A)
- [TOSEM’24] Fang Liu, Zhiyi Fu, Ge Li, Zhi Jin, Hui Liu, Yiyang Hao, Li Zhang, Non-Autoregressive Line-Level Code Completion (CCF-A)
- [LREC-COLING’24] Qingfu Zhu, Xianzhen Luo, Fang Liu, Cuiyun Gao, Wanxiang Che, A Survey on Natural Language Processing for Programming (CCF-B)
- [SANER’24] Xin Tan, Taichuan Li, Ruohe Chen, Fang Liu*, Li Zhang, Challenges of Using Pre-trained Models: the Practitioners’ Perspective (CCF-B)
- [SANER’24] Shuo Liu, Jacky Wai Keung, Zhen Yang, Fang Liu*, Qilin Zhou, Yihan Liao, Delving into Parameter-Efficient Fine-Tuning in Code Change Learning: An Empirical Study (CCF-B)
- [SCIS’24] Huangzhao Zhang, Kechi Zhang, Zhuo Li, Jia Li, Jia Li, Yongmin Li, Yunfei Zhao, Yuqi Zhu, Fang Liu, Ge Li, Zhi Jin, Deep Learning for Code Generation: A Survey (CCF-A)
- [SCIS’23] Fang Liu, Ge Li, Qianhui Zhao, Li Zhang, Learning to Represent Code Semantics (CCF-A)
- [ASE’23] Jia Li, Chongyang Tao, Zhi Jin, Fang Liu, Jia Li, Ge Li, ZC^3: Zero-Shot Cross-Language Code Clone Detection (CCF-A)
- [Internetware’23] Jia Li, Fang Liu, Jia Li, Yunfei Zhao, Ge Li, Zhi Jin, MCodeSearcher: Multi-View Contrastive Learning for Code Search (CCF-C)
- [ICSE’23] Fang Liu, Jia Li, Li Zhang, Syntax and Domain Aware Model for Unsupervised Program Translation (CCF-A)
- [ASEJ’23] Zejun Wang, Fang Liu, Yiyang Hao, Zhi Jin, AdaComplete: Improve DL-based Code Completion Method’s Domain Adaptability (CCF-B)
- [ICSE’22] Fang Liu, Ge Li, Zhiyi Fu, Shuai Lu, Yiyang Hao, Zhi Jin, Learning to Recommend Method Names with Global Context (CCF-A)
- [EMSE’22] Fang Liu, Ge Li, Bolin Wei, Xin Xia, Zhiyi Fu, Zhi Jin, A Unified Multi-task Learning Model for AST-level and Token-level Code Completion (CCF-B)
- [ASE’20] Fang Liu, Ge Li, Yunfei Zhao, Zhi Jin, Multi-task Learning based Pre-trained Language Model for Code Completion (CCF-A)
- [ICPC’20] Fang Liu, Ge Li, Bolin Wei, Xin Xia, Zhiyi Fu, Zhi Jin, A Self-Attentional Neural Architecture for Code Completion with Multi-Task Learning (CCF-B, ACM Distinguished Paper Award)
- [JSS’20] Fang Liu, Lu Zhang, Zhi Jin, Modeling programs hierarchically with stack-augmented LSTM (CCF-B)
- [Journal of Computer Research and Development’19] Fang Liu, Ge Li, Xing Hu, Zhi Jin, Program Comprehension Based on Deep Learning (CCF Chinese A)
- [Journal of Software’19] Xing Hu, Ge Li, Fang Liu, Zhi Jin, Program Generation and Code Completion Techniques Based on Deep Learning: Literature Review (CCF Chinese A)
- [Journal of Software’19] Zhi Jin, Fang Liu, Ge Li, Program Comprehension: Present and Future (CCF Chinese A)
Ongoing Work
- [arXiv] Fang Liu, Yang Liu, Lin Shi, Houkun Huang, Ruifeng Wang, Zhen Yang, Li Zhang, Exploring and Evaluating Hallucinations in LLM-Powered Code Generation
- [arXiv] Qianhui Zhao#, Fang Liu#, Li Zhang, Yang Liu, Zhen Yan, Zhenghao Chen, Yufei Zhou, Jing Jiang, Ge Li, Peer-aided Repairer: Empowering Large Language Models to Repair Advanced Student Assignments
- [arXiv] Donghao Yang#, Aolang Wu#, Tianyi Zhang#, Li Zhang, Fang Liu*, Xiaoli Lian*, Yuming Ren, Jiaji Tian, A Multi-Agent Framework for Extensible Structured Text Generation in PLCs
Contact
No. 37 College Road, Haidian District, Beijing 100191, P.R. China