Towards Realistic Evaluation of Commit Message Generation by Matching Online and Offline Settings. arXiv:2410.12046, published Oct 15, 2024.
Long Code Arena: a Set of Benchmarks for Long-Context Code Models. arXiv:2406.11612, published Jun 17, 2024.
On The Importance of Reasoning for Context Retrieval in Repository-Level Code Editing. arXiv:2406.04464, published Jun 6, 2024.
From Commit Message Generation to History-Aware Commit Message Completion. arXiv:2308.07655, published Aug 15, 2023.
All You Need Is Logs: Improving Code Completion by Learning from Anonymous IDE Usage Logs. arXiv:2205.10692, published May 21, 2022.
Out of the BLEU: how should we assess quality of the Code Generation models? arXiv:2208.03133, published Aug 5, 2022.
CrowdSpeech and VoxDIY: Benchmark Datasets for Crowdsourced Audio Transcription. arXiv:2107.01091, published Jul 2, 2021.