Welcome to my homepage! I am a Ph.D. student at HKU CS, advised by Prof. C.M. Kao. I also work with the NLP group of Shanghai AI Lab and HKUNLP, focusing on topics related to LLM Agents. Previously, I was a master’s student at NUS, advised by Dr. Xiaoli Li at I2R, A*STAR. Before that, I completed my B.Eng. with distinction in the School of Data Science and Engineering at East China Normal University, where I was fortunate to work with Prof. Weining Qian, Prof. Xuesong Lu, and Prof. Xiang Li on research and engineering projects. My research interests include neural code intelligence, LLM-based agents, and broader deep learning topics.
Office hours: I am holding office hours (1~2 hours per week) dedicated to offering consultation for COMP7607 students and mentorship programs. If you want to have a chat (whether or not it’s about research), please book a slot through this link!
🔥 News
- 2024.08: ✈️✈️ (Physically) started my Ph.D. at The University of Hong Kong 🇭🇰!
- 2024.07: One paper was accepted by COLM 2024! See you at UPenn!
- 2024.05: 🔥🔥 Four papers were accepted by ACL 2024! See you in Bangkok!
- 2024.03: Check out our Code Intelligence Survey Paper! 🔥
- 2024.02: Graduated from National University of Singapore.
- 2023.12: ⏱️⏱️ Attended EMNLP 2023 in Singapore 🇸🇬
- 2023.07: ✨✨ Started my research internship at the NLP Group, Shanghai AI Lab
- 2023.05: The HugNLP Framework (CIKM'23 Best Demo Paper) is ready for use! Please check our Paper, Repo and Blogs!
- 2023.05: We released SelfAware for benchmarking LLMs' self-knowledge
- 2023.01: Started my research internship at I2R, A*STAR, Singapore
- 2022.12: Our team won second prize (100k RMB) in the International Algorithm Case Competition: PLM Tuning Track.
- 2022.08: Started my master's studies at National University of Singapore. 🇸🇬
- 2022.07: Received the Outstanding UG Thesis Award and graduated from ECNU as a Shanghai Outstanding Graduate.
- 2021.09: Started serving as a TA for the Deep Learning for Computer Vision course this semester.
- 2021.05: Led my team to win the Finalist Award in the Mathematical and Interdisciplinary Contest in Modeling!
- 2021.02: ✈️✈️ Attended the Data Science Winter School at Imperial College London.
📝 Publications
A Survey of Neural Code Intelligence: Paradigms, Advances and Beyond 🔥🔥
Qiushi Sun, Zhirui Chen, Fangzhi Xu, Chang Ma, Kanzhi Cheng, Zhangyue Yin, Jianing Wang, Chengcheng Han, Renyu Zhu, Shuai Yuan, Pengcheng Yin, Qipeng Guo, Xipeng Qiu, Xiaoli Li, Fei Yuan, Lingpeng Kong, Xiang Li, Zhiyong Wu
[Paper] | [Slides] | [Project] | [Video]
Let me walk you through the development of neural code intelligence:
- Follow LMs for code as a thread to trace the field’s development
- Explore cross-domain synergies and opportunities 🌱
- Present a broad array of promising research avenues 💡
Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models, Fangzhi Xu, Qiushi Sun, Kanzhi Cheng, Jun Liu and Zhiyong Wu. Preprint
KS-Lottery: Finding Certified Lottery Tickets for Multilingual Language Models, Fei Yuan, Chang Ma, Shuai Yuan, Qiushi Sun and Lei Li. Preprint
Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration, Qiushi Sun, Zhangyue Yin, Xiang Li, Zhiyong Wu, Xipeng Qiu and Lingpeng Kong. [LLMAgents @ ICLR 2024] | Slides. COLM 2024
Boosting Language Models Reasoning with Chain-of-Knowledge Prompting, Jianing Wang*, Qiushi Sun*, Xiang Li and Ming Gao. Slides. ACL 2024
SeeClick: Harnessing GUI Grounding for Advanced Visual GUI Agents, Kanzhi Cheng, Qiushi Sun, Yougang Chu, Fangzhi Xu, Yantao Li, Jianbing Zhang, Zhiyong Wu. [LLMAgents @ ICLR 2024]. ACL 2024
Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models, Fangzhi Xu, Zhiyong Wu, Qiushi Sun, Siyu Ren, Fei Yuan, Shuai Yuan, Qika Lin, Yu Qiao and Jun Liu. ACL 2024
TransCoder: Towards Unified Transferable Code Representation Learning Inspired by Human Skills, Qiushi Sun, Nuo Chen, Jianing Wang, Xiang Li and Ming Gao. LREC-COLING 2024
Make Prompt-based Black-Box Tuning Colorful: Boosting Model Generalization from Three Orthogonal Perspectives, Qiushi Sun, Chengcheng Han, Nuo Chen, Renyu Zhu, Jingyang Gong, Xiang Li and Ming Gao. | 🔥 100K RMB Award-winning Solution. LREC-COLING 2024
Aggregation of Reasoning: A Hierarchical Framework for Enhancing Answer Selection in Large Language Models, Zhangyue Yin, Qiushi Sun, Qipeng Guo, Zhiyuan Zeng, Xiaonan Li, Tianxiang Sun, Cheng Chang, Xipeng Qiu and Xuanjing Huang. LREC-COLING 2024
Exchange-of-Thought: Enhancing Large Language Model Capabilities through Cross-Model Communication, Zhangyue Yin, Qiushi Sun, Cheng Chang, Qipeng Guo, Junqi Dai, Xuanjing Huang and Xipeng Qiu. Slides | Video. EMNLP 2023
Uncertainty-aware Parameter-Efficient Self-training for Semi-supervised Language Understanding, Jianing Wang, Qiushi Sun, Nuo Chen, Chengyu Wang, Xiang Li, Ming Gao and Jun Huang. EMNLP 2023
Pass-Tuning: Towards Structure-Aware Parameter-Efficient Tuning for Code Representation Learning, Nuo Chen, Qiushi Sun, Jianing Wang, Xiang Li and Ming Gao. EMNLP 2023
HugNLP: A Unified and Comprehensive Library for Natural Language Processing, Jianing Wang, Nuo Chen, Qiushi Sun, Wenkang Huang, Chengyu Wang and Ming Gao. | 🏆 Best Demo Paper Award. CIKM 2023 (Demo)
Do Large Language Models Know What They Don’t Know?, Zhangyue Yin, Qiushi Sun, Qipeng Guo, Jiawen Wu, Xipeng Qiu and Xuanjing Huang. Slides | Video. ACL 2023
When Gradient Descent Meets Derivative-Free Optimization: A Match Made in Black-Box Scenario, Chengcheng Han, Liqing Cui, Renyu Zhu, Jianing Wang, Nuo Chen, Qiushi Sun, Xiang Li and Ming Gao. ACL 2023
CAT-probing: A Metric-based Approach to Interpret How Pre-trained Models for Programming Language Attend Code Structure, Nuo Chen*, Qiushi Sun*, Renyu Zhu*, Xiang Li, Xuesong Lu and Ming Gao. Slides | Video. EMNLP 2022
*Denotes equal contribution. More working drafts / preprints under review will be released later. ✍️
💻 Internships
- 2023.06 - 2024.10, NLP Research Intern, NLP Group, Shanghai AI Laboratory, Shanghai, China.
- 2023.01 - 2023.06, NLP Research Intern, Institute for Infocomm Research, A*STAR, Singapore.
- 2021.10 - 2022.07, Research Assistant, School of Data Science and Engineering, ECNU, Shanghai, China.
🎖 Honors and Awards
- 2023.10 CIKM Best Demo Paper Award, ACM & SIGIR
- 2022.12 International Algorithm Case Competition: PLM Tuning, Second Prize, Guangdong-Hong Kong-Macao Greater Bay Area
- 2022.07 Shanghai Outstanding Graduates, Shanghai Municipal Education Commission
- 2022.05 Excellent Bachelor Thesis Award, East China Normal University
- 2021.11 Outstanding Student, East China Normal University
- 2021.10 First Class Scholarship, East China Normal University
- 2021.05 Finalist (Special Class Prize), Mathematical and Interdisciplinary Contest in Modeling
🧑‍🏫 Teaching
I serve(d) as a teaching assistant for:
- COMP7607: Natural Language Processing (for masters), HKU, 2024 Fall. Instructor: Lingpeng Kong
- Deep Learning for Computer Vision (for UG), ECNU, 2021 Fall. Instructor: Xuesong Lu
📖 Education
- 2024.08 - Present, Ph.D., The University of Hong Kong, Hong Kong SAR.
- 2022.08 - 2024.01, Master's, National University of Singapore, Singapore.
- 2018.09 - 2022.07, Undergraduate, School of Data Science and Engineering, East China Normal University, Shanghai, China.
- 2011.09 - 2018.07, Middle School, Pudong Foreign Languages School, SISU, Shanghai, China.
Services
- I serve(d) as a reviewer / program committee member for the following conferences, journals, and workshops:
- Conferences: EMNLP’22, ACL’23, CIKM’23, EMNLP’23 (Best Reviewer Award), ICLR’24, NeurIPS’24, NLPCC’24, ICLR’25, COLING’25, NAACL’25.
- Journals: ACL Rolling Review (2023.4 - Present), Frontiers of Computer Science
- Workshops: WiNLP’24.
“What’s past is prologue.” – William Shakespeare (The Tempest)