dc.contributor.author: Hendler, James A.
dc.date.accessioned: 2023-02-14T13:37:46Z
dc.date.available: 2023-02-14T13:37:46Z
dc.date.issued: 2023-02-09
dc.identifier.citation: James Hendler (2023). Understanding the limits of AI coding. Science, 379(6632), 548.
dc.identifier.uri: http://doi.org/10.1126/science.adg4246
dc.identifier.uri: https://www.science.org/doi/full/10.1126/science.adg4246
dc.identifier.uri: https://hdl.handle.net/20.500.13015/6515
dc.description.abstract: In the 9 December 2022 issue, the Research Article "Competition-level code generation with AlphaCode" (Y. Li et al., p. 1092) and the accompanying Perspective, "AlphaCode and 'data-driven' programming" (J. Z. Kolter, p. 1056), describe an artificial intelligence (AI)–based system for generating code. The authors explain that the system can be used for small coding problems, such as tests for computing students, but that such systems are far from useful in computing applications that include millions of lines of code, such as word processing. As we enter an era of AI in which tools like AlphaCode and ChatGPT will change how tasks are performed, it is important to understand the boundaries of what they can and cannot do.

To make sure that code can be maintained and managed by other programmers, human developers use mnemonic variable names and embed explanatory comments. Understanding, debugging, and extending code written by other humans remains a formidable challenge, perhaps even more difficult than producing the code in the first place. In addition, many techniques are used for validation and verification, and code used in mission-critical applications, such as airline flight systems, goes through substantial quality assurance testing. AI models have yet to address the challenges of maintaining code, ensuring that users can decipher it, and subjecting programs to safety protocols. Understanding and evaluating the limits of these techniques is crucial before they are put into real-world use.

Some testing of capabilities has been applied to language generation tools (1, 2), but AI coding remains a nascent field. The Technology Policy Committee of the Association for Computing Machinery recommends more investment in transparency and accountability for AI algorithms (3). The promise of systems like AlphaCode must be carefully balanced against the risks of their use. The interaction between AI code-generation systems and human programmers must be resolved before such systems can become an integral part of the future of computing.
dc.publisher: AAAS
dc.rights: Attribution-NonCommercial-NoDerivs 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/us/
dc.title: Understanding the limits of AI coding
dc.type: Article
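
The abstract's point about mnemonic variable names and explanatory comments can be made concrete with a minimal sketch (an illustration under our own assumptions, not code from the letter). The two Python functions below behave identically on ordinary inputs, but only the second, with its descriptive names and comments, is practical for another programmer to understand, debug, and extend.

    # Terse, machine-style code: correct on most inputs, but opaque to
    # a maintainer, and it silently divides by zero when r == 1.
    def f(a, r, n):
        return a * (1 - r ** n) / (1 - r)

    # Maintainable, human-style code: mnemonic names plus comments that
    # record the reasoning, including the edge case the formula misses.
    def geometric_series_sum(first_term, common_ratio, num_terms):
        """Sum the first num_terms terms of a geometric series."""
        if common_ratio == 1:
            # The closed form divides by zero here; every term is equal,
            # so the sum is simply the first term times the term count.
            return first_term * num_terms
        return first_term * (1 - common_ratio ** num_terms) / (1 - common_ratio)

    # Both agree on ordinary inputs: 1 + 2 + 4 + ... + 512 = 1023.
    assert f(1, 2, 10) == geometric_series_sum(1, 2, 10) == 1023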


Files in this item

There are no files associated with this item.

Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivs 3.0 United States