About Me

Hello! I’m Xuhan Huang, a senior mathematics student at The Chinese University of Hong Kong, Shenzhen, where I am fortunate to be advised by Prof. Benyou Wang and Prof. Zhongxiang Dai. I am also currently mentored by Prof. Jie Fu.

My research goal is to build verifiably safe and aligned AI. I believe that the path to trustworthy artificial intelligence lies in moving from today’s empirical, feedback-based systems to a new paradigm grounded in the mathematical certainty of formal languages.

Formal languages offer what I call rigorous verifiability—the ability to automatically and objectively prove that a system’s behavior aligns with its specifications. This transforms AI development by providing two key advantages: a perfect signal for scalable training and a direct pathway to provable safety. My work aims to leverage these properties to mature AI safety from an empirical art into a rigorous science.
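As a concrete, deliberately tiny illustration of what I mean by rigorous verifiability, here is a hypothetical Lean 4 sketch of my own (not taken from any of the work listed below): a small program together with a machine-checked proof that it satisfies its specification. The proof checker accepts or rejects the proof mechanically, with no test suite or human judgment in the loop.

```lean
-- Toy illustration: a tiny program and a machine-checked proof of its spec.
def double (n : Nat) : Nat := n + n

-- Specification: `double n` equals `2 * n` for every natural number `n`.
-- The kernel verifies this proof objectively; it is either accepted or rejected.
theorem double_spec (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```

If the program or the specification changes, the proof must be re-established before the checker accepts it again, which is exactly the kind of automatic, objective signal described above.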

My Research Philosophy

  • Problem First: I believe that what to solve is more important than how to solve it, so my work begins by identifying the critical challenges worth solving.
  • Principled Solutions: I work to understand problems from first principles. The deep theoretical insight gained from this approach guides the development of scalable, practical solutions.

Research Interests

  1. Scalable Formal Reasoning: Harnessing the perfect, self-contained signals within formal languages to train powerful and reliable AI agents.
  2. AI Safety via Formal Methods: Using rigorous verification to guarantee that autonomous systems operate transparently, avoid unintended behaviors, and provably align with user intent.
  3. Compositional Generalization: Investigating how models can learn to systematically combine existing skills to generalize and solve complex, unseen tasks, for which I believe formal language reasoning is the ideal test bed.

Preprints

(* denotes equal contribution)

  1. Re:Form—Reducing Human Priors in Scalable Formal Software Verification with RL in LLMs
    Chuanhao Yan*, Fengdi Che*, Xuhan Huang*, Xu Xu*, Xin Li*, Yizhi Li*, Xingwei Qu*, Jingzhe Shi, Chenghua Lin, Yaodong Yang, Binhang Yuan, Hang Zhao, Yu Qiao, Bowen Zhou, Jie Fu.
    Preprint, 2025. [paper] [code]

  2. Differentiable Evolutionary Reinforcement Learning
    Sitao Cheng*, Tianle Li*, Xuhan Huang*, Xunjian Yin, Difan Zou.
    Preprint, 2025. [paper] [code]

  3. CALM Before the STORM: Unlocking Native Reasoning for Optimization Modeling
    Zhengyang Tang*, Zihan Ye*, Chenyu Huang*, Xuhan Huang, Chengpeng Li, Sihang Li, Guanhua Chen, Ming Yan, Zizhuo Wang, Hongyuan Zha, Dayiheng Liu, Benyou Wang.
    Preprint, 2025. [paper]

Publications

  1. Federated Linear Dueling Bandits
    Xuhan Huang, Yan Hu, Zhiyan Li, Zhiyong Wang, Benyou Wang, Zhongxiang Dai.
    AAAI Conference on Artificial Intelligence (AAAI), 2026. [paper]

  2. LLMs for Mathematical Modeling: Towards Bridging the Gap between Natural and Mathematical Languages
    Xuhan Huang, Qingning Shen, Yan Hu, Anningzhe Gao, Benyou Wang.
    Findings of the Association for Computational Linguistics: NAACL 2025. [paper] [code]

  3. VeriEquivBench: An Equivalence Score for Ground-Truth-Free Evaluation of Formally Verifiable Code
    Lingfei Zeng*, Fengdi Che*, Xuhan Huang, Fei Ye, Xu Xu, Binhang Yuan, Jie Fu.
    International Conference on Learning Representations (ICLR), 2026 (to appear). [paper] [code]


Beyond Research

I’m passionate about fitness and can often be found at the gym or on the basketball court. Feel free to reach out if you’re ever looking for a partner for either!