A Human Proximity Operations System Test Case Validation Approach

ABSTRACT

A Human Proximity Operations System (HPOS) poses numerous risks in a real-world environment. These risks range from mundane concerns, such as avoiding walls and fixed obstacles, to the critical need to keep people and processes safe in the context of the HPOS's situation-specific decision making. Validating the performance of an HPOS, which must operate in a real-world environment, is an ill-posed problem due to the complexity introduced by erratic (non-computer) actors.

In order to prove the HPOS's usefulness, test cases must be generated to simulate possible actions of these actors, so the HPOS can be shown to perform safely in the environments where it will be operated. The HPOS must demonstrate its ability to be as safe as a human across a wide range of foreseeable circumstances.

This paper evaluates the use of test cases to validate HPOS performance and utility. It considers an HPOS's safe performance in the context of a common human activity, moving through a crowded corridor, and extrapolates from this to the suitability of using test cases for AI validation in other areas of prospective application.

The collection of test cases must be sufficient to demonstrate that the HPOS is able to perform tasks while avoiding people who are also within the operating space. Generating and considering test cases beyond the number required to do this is wasteful. However, it is not clear, a priori, what number or configuration of test cases satisfies this minimum requirement.

An infinite number of prospective scenarios exist to be tested; however, these can be abstracted in a variety of ways into a manageable number. For example, all test cases that, while occupying different positions in real space, are so insubstantially different as to be perceived and/or processed as the same by the HPOS can be grouped together. An automated (limited-scope AI) use case producer (UCP) will be used to generate test cases for testing the HPOS. The AI UCP will employ pseudo-random operation script generation techniques to generate paths for the simulated human actors.
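
The abstract does not describe an implementation, but the two ideas in the preceding paragraph, pseudo-random generation of actor paths and grouping of insubstantially different cases, can be sketched as follows. This is a minimal illustration only: the corridor grid, the random-walk behavior, and the coarse-cell equivalence criterion are assumptions made for the example, not the authors' actual UCP.

```python
import random
from dataclasses import dataclass

# Hypothetical corridor discretization (illustrative values, not from the paper).
CORRIDOR_LENGTH = 20   # cells along the corridor
CORRIDOR_WIDTH = 4     # cells across the corridor
STEPS_PER_SCRIPT = 15  # moves per simulated human actor


@dataclass(frozen=True)
class TestCase:
    """One pseudo-randomly generated path (operation script) for a simulated human actor."""
    path: tuple  # sequence of (x, y) grid cells


def generate_test_case(rng: random.Random) -> TestCase:
    """Generate one operation script: a biased random walk through the corridor."""
    x, y = 0, rng.randrange(CORRIDOR_WIDTH)
    path = [(x, y)]
    for _ in range(STEPS_PER_SCRIPT):
        # Simulated actors mostly move forward, occasionally drifting sideways.
        x = min(x + rng.choice((0, 1, 1)), CORRIDOR_LENGTH - 1)
        y = min(max(y + rng.choice((-1, 0, 0, 1)), 0), CORRIDOR_WIDTH - 1)
        path.append((x, y))
    return TestCase(tuple(path))


def equivalence_key(case: TestCase, cell_size: int = 2) -> tuple:
    """Abstract a path into coarse cells (a stand-in for what the HPOS perceives),
    so paths that differ only insubstantially collapse into one group."""
    return tuple((x // cell_size, y // cell_size) for x, y in case.path)


def generate_suite(n: int, seed: int = 0) -> dict:
    """Generate n pseudo-random test cases and group them by equivalence class."""
    rng = random.Random(seed)
    suite: dict = {}
    for _ in range(n):
        case = generate_test_case(rng)
        suite.setdefault(equivalence_key(case), []).append(case)
    return suite


if __name__ == "__main__":
    suite = generate_suite(1000)
    print(f"1000 raw test cases collapsed into {len(suite)} equivalence classes")
```

In this sketch, raw pseudo-random scripts are collapsed into a much smaller set of representative classes before being run against the HPOS, which mirrors the abstraction step described above.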

This paper presents work on using use cases and test cases to validate the safe and effective operation of a robust HPOS. The challenge of validating HPOS safety, given a nearly infinite set of possible circumstances that it may encounter, is explored. The benefits and drawbacks of using the AI UCP are then considered, and the difference in testing throughput between the AI UCP and human test case generation is characterized.

Source: IEEE
Authors: Justin Huber | Jeremy Straub
