The relationship between computer program complexity and error detection capability is investigated by representing a program as a directed graph and simulating the detection and correction of errors. Variables of interest are test coverage, number of inputs, residual errors, execution time, correction time, and node-arc-loop relationships. One application is in software design, where the information provided by the model would be used to select program structures which are easy to test. A second application is in software testing, where test strategies and allocation of test effort would be based on error detection and complexity considerations.
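To make the modeling idea concrete, the sketch below simulates error detection on a program represented as a directed graph. It is not the paper's actual model; the graph construction, error seeding, random-walk test inputs, and detection probability are all hypothetical parameters chosen only for illustration.

```python
import random

# Illustrative sketch: a program is a directed graph of nodes (statements/blocks)
# and arcs (control flow); errors are seeded on nodes, test inputs exercise paths,
# and errors on covered nodes may be detected and corrected. All parameters are
# assumptions, not values from the paper.

def build_graph(num_nodes=20, extra_arcs=10, seed=0):
    """Build a simple directed graph: a chain plus random extra arcs (loops allowed)."""
    rng = random.Random(seed)
    arcs = {i: [i + 1] for i in range(num_nodes - 1)}
    arcs[num_nodes - 1] = []
    for _ in range(extra_arcs):
        src, dst = rng.randrange(num_nodes), rng.randrange(num_nodes)
        arcs[src].append(dst)
    return arcs

def run_test(arcs, start=0, max_steps=50, rng=None):
    """Simulate one test input as a random walk over the graph; return covered nodes."""
    rng = rng or random.Random()
    covered, node = set(), start
    for _ in range(max_steps):
        covered.add(node)
        if not arcs[node]:
            break
        node = rng.choice(arcs[node])
    return covered

def simulate(num_nodes=20, num_errors=5, num_inputs=30, p_detect=0.7, seed=1):
    """Track cumulative test coverage and residual errors over a sequence of test inputs."""
    rng = random.Random(seed)
    arcs = build_graph(num_nodes, seed=seed)
    errors = set(rng.sample(range(num_nodes), num_errors))  # seeded (residual) errors
    covered_total = set()
    for t in range(1, num_inputs + 1):
        covered = run_test(arcs, rng=rng)
        covered_total |= covered
        # An error on a covered node is detected (and immediately corrected)
        # with probability p_detect.
        detected = {e for e in errors & covered if rng.random() < p_detect}
        errors -= detected
        print(f"input {t:2d}: coverage={len(covered_total)/num_nodes:.2f} "
              f"residual_errors={len(errors)}")

if __name__ == "__main__":
    simulate()
```

Execution time, correction time, and node-arc-loop structure could be layered onto such a simulation (e.g., per-node costs and richer graph generators), but those extensions are left out of this minimal sketch.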
Models of programming and debugging suggest many causes of errors, and many classifications of error...
When a computational task tolerates a relaxation of its specification or when an algorithm tolerates...
Two major factors influence the number of faults uncovered by a fault-detection process applied to a...
Includes a correction to the previous article published. The propensity to make programming errors an...
Several research studies have shown a strong relationship between complexity, as measured by the str...
Increasingly, the quantitative evaluation of computer software is recognized as critically important...
Hardware errors are projected to increase in modern computer systems due to shrinking feature sizes ...
A review of work on the occurrence and detection of errors in computer programs is presented. This i...
Human reliability in computer programming can be improved by reducing human errors. The traditional ...
This thesis addresses three important steps in the selection of error detection mechanisms for micro...
This paper presents two error models to evaluate safety of a software error de...
The ability for scientific simulation software to detect and recover from errors and failures of sup...
The purpose of the paper is to describe a model for statistically analyzing software error detection...
Modern computer software systems are prone to various classes of runtime faults due to their relianc...
Methods of modeling the detection time or latency period of a hardware fault in a digital system are...