Understanding the limits of AI coding

Authors
Hendler, James A.
Issue Date
2023-02-09
Terms of Use
Attribution-NonCommercial-NoDerivs 3.0 United States
Full Citation
James Hendler (2023). Understanding the limits of AI coding. Science, 379(6632), 548.
Abstract
In the 9 December 2022 issue, the Research Article "Competition-level code generation with AlphaCode" (Y. Li et al., p. 1092) and the accompanying Perspective, "AlphaCode and 'data-driven' programming" (J. Z. Kolter, p. 1056), describe an artificial intelligence (AI)–based system for generating code. The authors explain that the system can be used for small coding problems, such as tests for computing students, but that such systems are far from being useful in computing applications that comprise millions of lines of code, such as word processing. As we enter an era of AI in which tools like AlphaCode and ChatGPT will change how tasks are performed, it is important to understand the boundaries of what they can and cannot do.

To make sure that code can be maintained and managed by other programmers, human developers use mnemonic variable names and embed explanatory comments. Understanding, debugging, and extending code written by other humans remains a formidable challenge, perhaps even more difficult than producing the code in the first place. In addition, many techniques are used for validation and verification, and code used in mission-critical applications, such as airline flight systems, goes through substantial quality assurance testing. AI models have yet to address the challenges of maintaining code, ensuring that users can decipher it, and subjecting programs to safety protocols.

Understanding and evaluating the limits of these techniques is crucial before they are put into real-world use. Some testing of capabilities has been applied to language generation tools (1, 2), but AI coding remains a nascent field. The Technology Policy Committee of the Association for Computing Machinery recommends more investment in transparency and accountability for AI algorithms (3). The promise of systems like AlphaCode must be carefully balanced against the risks of their use. How AI code-generation systems and human programmers will interact must be resolved before such systems can become an integral part of the future of computing.
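To make the maintainability practices the letter names concrete, the following is a minimal sketch in Python; the function, its names, and the test are hypothetical illustrations, not drawn from the letter or from AlphaCode. It shows mnemonic names and explanatory comments written for a later maintainer, plus a small automated unit test of the kind quality assurance processes rely on.

def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Payment on a fixed-rate loan. Mnemonic names and comments help the next maintainer."""
    if months <= 0:
        raise ValueError("months must be positive")
    if annual_rate == 0:
        return principal / months  # no interest: spread the principal evenly
    monthly_rate = annual_rate / 12  # convert the annual rate to a per-period rate
    # Standard annuity formula: P * r / (1 - (1 + r)^-n)
    return principal * monthly_rate / (1 - (1 + monthly_rate) ** -months)

def test_monthly_payment_zero_interest():
    # One validation technique among many: a unit test with a known answer.
    assert monthly_payment(1200.0, 0.0, 12) == 100.0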
Publisher
AAAS