Publications

Learning to Recommend Method Names with Global Context

Published in International Conference on Software Engineering (ICSE), 2022

This paper proposes a novel global approach for method name suggestion that simultaneously considers the local context, the project-level context, and the documentation of the method. We employ a Transformer-based seq2seq framework to generate method names, with an attention mechanism that lets the model attend to the different levels of context during generation. The model substantially outperforms previous approaches at suggesting Java method names.
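
The sketch below illustrates the multi-level attention idea in PyTorch: three separate encoders produce memories for the local, project-level, and documentation contexts, and the decoder's cross-attention ranges over all of them. All names, dimensions, and the concatenation-based fusion are illustrative assumptions, not the paper's implementation (positional encodings and vocabulary handling are omitted for brevity).

```python
import torch
import torch.nn as nn

class GlobalContextNamer(nn.Module):
    """Toy model: three context encoders feeding one name decoder."""
    def __init__(self, vocab_size=10000, d_model=256, nhead=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        # One encoder per context level (nn.TransformerEncoder deep-copies
        # the layer, so the three encoders do not share weights).
        self.local_enc = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.project_enc = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.doc_enc = nn.TransformerEncoder(enc_layer, num_layers=2)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, local_ids, project_ids, doc_ids, name_ids):
        # Concatenating the three memories lets the decoder's cross-attention
        # weigh local code, project-level context, and docs at every step.
        memory = torch.cat([
            self.local_enc(self.embed(local_ids)),
            self.project_enc(self.embed(project_ids)),
            self.doc_enc(self.embed(doc_ids)),
        ], dim=1)
        tgt = self.embed(name_ids)
        T = tgt.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        return self.out(self.decoder(tgt, memory, tgt_mask=causal))

model = GlobalContextNamer()
logits = model(torch.randint(0, 10000, (2, 64)),  # local (method body) tokens
               torch.randint(0, 10000, (2, 32)),  # project-level tokens
               torch.randint(0, 10000, (2, 16)),  # documentation tokens
               torch.randint(0, 10000, (2, 8)))   # name sub-tokens so far
print(logits.shape)  # torch.Size([2, 8, 10000])
```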

Recommended citation: Liu, F. (2022). "Learning to Recommend Method Names with Global Context." International Conference on Software Engineering. 2022. https://arxiv.org/pdf/2201.10705.pdf

A Self-Attentional Neural Architecture for Code Completion

Published in International Conference on Program Comprehension (ICPC, ACM Distinguished Paper Award), 2020

This paper builds a multi-task learning model for source code modeling and code completion, which predicts the next node's type and value jointly. It employs the Transformer-XL network as the base model and additionally considers the path from the node being predicted to the root node.
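
A minimal sketch of the joint type/value objective follows: a shared sequence encoder (a vanilla causal Transformer here, standing in for Transformer-XL) feeds two task-specific heads whose losses are summed. Vocabulary sizes and shapes are illustrative, and the path-to-root feature is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointCompletionModel(nn.Module):
    def __init__(self, type_vocab=200, value_vocab=10000, d_model=256):
        super().__init__()
        # Each AST node is represented by its (type, value) pair.
        self.type_embed = nn.Embedding(type_vocab, d_model)
        self.value_embed = nn.Embedding(value_vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        # Two heads over the shared representation: next type, next value.
        self.type_head = nn.Linear(d_model, type_vocab)
        self.value_head = nn.Linear(d_model, value_vocab)

    def forward(self, type_ids, value_ids):
        x = self.type_embed(type_ids) + self.value_embed(value_ids)
        T = x.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        h = self.encoder(x, mask=causal)
        return self.type_head(h), self.value_head(h)

model = JointCompletionModel()
types = torch.randint(0, 200, (2, 50))
values = torch.randint(0, 10000, (2, 50))
type_logits, value_logits = model(types, values)
# Multi-task objective: sum of the two next-token losses.
loss = (F.cross_entropy(type_logits[:, :-1].reshape(-1, 200), types[:, 1:].reshape(-1))
        + F.cross_entropy(value_logits[:, :-1].reshape(-1, 10000), values[:, 1:].reshape(-1)))
print(loss.item())
```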

Recommended citation: Liu, F. (2020). "A Self-Attentional Neural Architecture for Code Completion." International Conference on Program Comprehension. 2020. https://arxiv.org/pdf/1909.06983.pdf

Multi-task Learning based Pre-trained Language Model for Code Completion

Published in International Conference on Automated Software Engineering (ASE), 2020

This paper presents a pre-trained language model with a Transformer-based architecture for code understanding and generation. It also utilizes the static type information of identifiers to help the model understand programs better.
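
One simple way to picture the type-information idea is sketched below: every code token carries a second id for its inferred static type (with a reserved id for non-identifiers), and the type embedding is added to the token embedding before the language model. The summation scheme, names, and sizes are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TypeAwareCodeLM(nn.Module):
    def __init__(self, vocab=10000, n_types=50, d_model=256):
        super().__init__()
        self.tok_embed = nn.Embedding(vocab, d_model)
        # Type id 0 is reserved for tokens that are not identifiers.
        self.typ_embed = nn.Embedding(n_types, d_model, padding_idx=0)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.lm_head = nn.Linear(d_model, vocab)

    def forward(self, token_ids, type_ids):
        # Fuse lexical and static-type information by summing embeddings.
        x = self.tok_embed(token_ids) + self.typ_embed(type_ids)
        T = x.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        return self.lm_head(self.encoder(x, mask=causal))

lm = TypeAwareCodeLM()
tokens = torch.randint(0, 10000, (2, 40))
types = torch.randint(0, 50, (2, 40))   # e.g. ids for int, String, List
print(lm(tokens, types).shape)          # torch.Size([2, 40, 10000])
```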

Recommended citation: Liu, F. (2020). "Multi-task learning based pre-trained language model for code completion." International Conference on Automated Software Engineering. 2020. https://arxiv.org/pdf/2012.14631.pdf

Modeling Programs Hierarchically with Stack-augmented LSTM

Published in Journal of Systems and Software (JSS), 2020

This paper proposes a neural language model aimed at modeling the hierarchical structure of programs, strengthening the LSTM network with a stack that stores and restores contextual information according to the program's structure.
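
The stack mechanism can be sketched as follows: while stepping an LSTM over code tokens, the hidden state is pushed when a block opens and restored when the block closes, so the enclosing scope's context survives nested code. The OPEN/CLOSE ids and the batch-of-one loop are illustrative simplifications, not the paper's implementation.

```python
import torch
import torch.nn as nn

OPEN, CLOSE = 1, 2  # illustrative token ids for '{' and '}'

class StackLSTM(nn.Module):
    def __init__(self, vocab=1000, d=128):
        super().__init__()
        self.d = d
        self.embed = nn.Embedding(vocab, d)
        self.cell = nn.LSTMCell(d, d)
        self.out = nn.Linear(d, vocab)

    def forward(self, token_ids):  # token_ids: (seq_len,), batch of one
        h = torch.zeros(1, self.d)
        c = torch.zeros(1, self.d)
        stack, logits = [], []
        for tok in token_ids.tolist():
            if tok == OPEN:
                stack.append((h, c))       # save the enclosing context
            x = self.embed(torch.tensor([tok]))
            h, c = self.cell(x, (h, c))
            if tok == CLOSE and stack:
                h, c = stack.pop()         # restore the enclosing context
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)  # (1, seq_len, vocab)

model = StackLSTM()
seq = torch.tensor([5, OPEN, 7, 8, CLOSE, 9])  # e.g. "x { y z } w"
print(model(seq).shape)  # torch.Size([1, 6, 1000])
```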

Recommended citation: Liu, F. (2020). "Modeling Programs Hierarchically with Stack-augmented LSTM." Journal of Systems and Software. 2020). https://arxiv.org/pdf/2002.04516.pdf