Academic Journal Article

Dynamic Human-in-the-Loop Assertion Generation
Document Type
Periodical
Source
IEEE Transactions on Software Engineering, 49(4):2337-2351, Apr. 2023
Subject
Computing and Processing
Complexity theory
Codes
Semantics
Human in the loop
Testing
Runtime
Libraries
Language
ISSN
0098-5589
1939-3520
2326-3881
Abstract
Test cases use assertions to check program behaviour. While these assertions may not be complex, they are themselves code that must be written correctly in order to determine whether a test case should pass or fail. We claim that most test assertions are relatively repetitive and straightforward, making their construction well suited to automation, and that this automation can reduce developer effort while improving assertion quality. Examining 33,873 assertions from 105 projects revealed that developer-written assertions fall into twelve high-level categories, confirming that the vast majority (>90%) of test assertions are fairly simple in practice. We created AutoAssert, a human-in-the-loop tool that fits naturally into a developer's test-writing workflow by automatically generating assertions for JavaScript and TypeScript test cases. A developer invokes AutoAssert by identifying the variable they want validated; AutoAssert uses dynamic analysis to generate assertions relevant to this variable and its runtime values, injecting the assertions into the test case for the developer to accept, modify, or delete. Comparing AutoAssert's assertions to those written by developers, we found that the assertions generated by AutoAssert were the same kind of assertion as those written by developers 84% of the time in a sample of over 1,000 assertions. Additionally, we validated the utility of AutoAssert-generated assertions with 17 developers, who found the majority of generated assertions to be useful and expressed considerable interest in using such a tool for their own projects.
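
The sketch below is a hypothetical illustration of the workflow described in the abstract, not an excerpt from the paper: the function under test, the Jest-style test, and the particular assertion forms are assumptions. It shows a developer marking a variable in a TypeScript test and the kind of assertions a tool like AutoAssert might inject from that variable's observed runtime value.

```typescript
// Hypothetical illustration of the abstract's workflow (names and assertion
// forms are assumptions, not taken from the paper).

// Stand-in code under test.
function parseConfig(json: string): { port: number; verbose: boolean } {
  const parsed = JSON.parse(json);
  return { port: parsed.port ?? 8080, verbose: parsed.verbose ?? false };
}

test("parseConfig supplies defaults", () => {
  const config = parseConfig("{}"); // developer marks `config` for validation

  // Assertions a tool might inject after observing the runtime value of
  // `config`, left in the test for the developer to accept, modify, or delete.
  expect(config).toBeDefined();
  expect(typeof config.port).toBe("number");
  expect(config.port).toBe(8080);
  expect(config.verbose).toBe(false);
});
```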