MetaCoder: Generating Code from Multiple Perspectives

Xin Chen, Zhijie Jiang, 4 Authors, Shanshan Li

2025 · DOI: 10.1145/3755881.3755884

TLDR

MetaCoder first generates code in a high-proficiency language, then summarizes that code, generates target code from the task description, the generated code, and the summary, and finally detects and corrects syntax errors in the target code.

Abstract

Large Language Models (LLMs) have demonstrated excellent performance in code generation tasks. However, their proficiency varies considerably across programming languages: they perform well in languages like Python but struggle with languages such as C++ and Java. This discrepancy limits their utility in scenarios requiring multi-language support. Existing methods for enhancing the code generation capabilities of LLMs typically emphasize general performance improvements while overlooking discrepancies between languages, resulting in suboptimal outcomes for less proficient languages. To address this challenge, we propose MetaCoder. Given a task description, MetaCoder first generates code in a high-proficiency language and then summarizes that code. Finally, MetaCoder generates target code using the task description, the generated code, and the summary. Additionally, MetaCoder detects and corrects syntax errors in the target code. We evaluate MetaCoder on HumanEval-X; compared with Zero-Shot, Pass@1 for generating C++ and Java code improves by up to 13.09% and 16.98%, respectively.
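The four-stage pipeline described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: `llm` is a stand-in for any completion call, and the prompt wording, function names, and default languages are all assumptions.

```python
def llm(prompt: str) -> str:
    # Stub model so the sketch is runnable; a real system would call an
    # actual LLM API here.
    return f"<model output for: {prompt[:40]}>"

def metacoder(task: str, source_lang: str = "Python",
              target_lang: str = "Java") -> str:
    # Step 1: generate code in a language the model is proficient in.
    source_code = llm(f"Write {source_lang} code for this task:\n{task}")
    # Step 2: summarize the generated code in natural language.
    summary = llm(f"Summarize what this {source_lang} code does:\n{source_code}")
    # Step 3: generate target-language code conditioned on the task
    # description, the high-proficiency code, and its summary.
    target_code = llm(
        f"Task: {task}\n"
        f"{source_lang} reference implementation:\n{source_code}\n"
        f"Summary: {summary}\n"
        f"Write equivalent {target_lang} code."
    )
    # Step 4: detect and correct syntax errors in the target code.
    return llm(f"Fix any {target_lang} syntax errors in:\n{target_code}")
```

The key design point, per the abstract, is that steps 1 and 2 let the model transfer its strength in a high-proficiency language into the weaker target language rather than generating target code directly.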