Understanding the limits of AI coding
Author: Hendler, James A.
Full Citation: James Hendler (2023). Understanding the limits of AI coding. Science, 379(6632), 548.
URI: http://doi.org/10.1126/science.adg4246; https://www.science.org/doi/full/10.1126/science.adg4246; https://hdl.handle.net/20.500.13015/6515
Abstract: In the 9 December 2022 issue, the Research Article “Competition-level code generation with AlphaCode” (Y. Li et al., p. 1092) and the accompanying Perspective, “AlphaCode and ‘data-driven’ programming” (J. Z. Kolter, p. 1056) describe an artificial intelligence (AI)–based system for generating code. The authors explain that the system can solve small coding problems, such as tests for computing students, but that it is far from useful for computing applications that comprise millions of lines of code, such as word processing. As we enter an era of AI in which tools like AlphaCode and ChatGPT will change how tasks are performed, it is important to understand the boundaries of what they can and cannot do. To ensure that code can be maintained and managed by other programmers, human developers use mnemonic variable names and embed explanatory comments. Understanding, debugging, and extending code written by other humans remains a formidable challenge—perhaps even more difficult than producing the code in the first place. In addition, many techniques are used for validation and verification, and code used in mission-critical applications, such as airline flight systems, goes through substantial quality assurance testing. AI models have yet to address the challenges of maintaining code, ensuring that users can decipher it, and subjecting programs to safety protocols. Understanding and evaluating the limits of these techniques is crucial before they are put into real-world use. Some testing of capabilities has been applied to language generation tools (1, 2), but AI coding remains a nascent field. The Technology Policy Committee of the Association for Computing Machinery recommends more investment in transparency and accountability for AI algorithms (3). The promise of systems like AlphaCode must be carefully balanced against the risks of their use.
How AI code-generation systems and human programmers will interact must be resolved before such systems can become an integral part of the future of computing.
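The abstract's point about mnemonic variable names and explanatory comments can be made concrete with a small illustration. The example below is hypothetical (the scenario and names are not from the letter): the two Python functions are behaviorally identical, but only the second documents its intent in the way the letter says human maintainers depend on.

```python
# Hypothetical illustration of the maintainability practices the letter
# describes: mnemonic names and explanatory comments let *other*
# programmers verify intent against implementation.

def f(p, r, n):
    # Opaque version: a later maintainer cannot tell what p, r, n mean
    # or whether the formula matches the intended behavior.
    return p * (1 + r) ** n


def compound_balance(principal, annual_rate, years):
    """Return the balance after compounding `principal` annually.

    The docstring and parameter names record the intent (annual
    compounding), so a maintainer can check the formula against it.
    """
    # (1 + annual_rate) ** years is the standard annual growth factor.
    return principal * (1 + annual_rate) ** years


# The two functions agree numerically; only one explains itself.
assert f(1000, 0.5, 2) == compound_balance(1000, 0.5, 2) == 2250.0
```

The contrast is deliberately trivial; the letter's argument is that at the scale of millions of lines, code that does not explain itself becomes formidable to understand, debug, or extend.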