SmartTS: A Component-based and Model-Driven Approach to Software Testing in Robotic Software Ecosystem

Validating the behaviour of commercial off-the-shelf components and of the interactions between them is a complex and often manual task. Treated like any other software product, a software component for a robot system is often tested only by the component developer. Test sets and results are often not available to the system builder, who may need to verify functional and non-functional claims made by the component. Availability of test records is key to establishing compliance and thus to selecting the most suitable components for system composition. Providing empirically verifiable test records consistent with a component's claims would greatly improve the overall safety and dependability of robotic software systems in open-ended environments. Additionally, a test and validation suite built from the model package of a system empirically codifies that system's behavioural claims. In this paper, we present the "SmartTS methodology": a component-based and model-driven approach to generating model-bound test-suites for software components and systems. The SmartTS methodology and tooling are not restricted to the robotics domain. The core contribution of SmartTS is support for test and validation suites derived from the model packages of components and systems. The test-suites in SmartTS are tightly bound to an application domain's data and service models as defined in the RobMoSys (EU H2020 project) compliant SmartMDSD toolchain. SmartTS does not break component encapsulation for system builders while providing them complete access to the way a component is tested and simulated.

Keywords—Model-Driven Engineering (MDE); Component-based Software Engineering (CBSE); Model-Driven Testing (MDT); Component-based Software Testing (CBST); Service Robotics; Software Quality; Automated Software Testing


I. INTRODUCTION
A software product may be difficult to understand and modify; it might be prone to misuse and difficult to use; it might not integrate well with other software; and it might work only on a particular machine and only under very strong assumptions. Unless one looks under the hood to measure software on these parameters, the quality of the software cannot be judged purely on the basis that it works and was delivered on time [1]-[3]. Fig. 1 shows key characteristics that can be used to evaluate the overall quality of a software product. (Throughout this paper, labelled indicators such as B1, L1, R1 and A1 mark internal connections between notions presented in the paper.) A software product can be qualified as testable if it performs well on the following quality parameters, as suggested by Boehm et al. [1]:
B1 Accountability: Code (short, in software engineering, for source code unless stated otherwise) allows for mechanisms to measure its usage, e.g. code instrumentation.
B2 Accessibility: Code allows selective usage of its parts and provides side entrances for test access .
B3 Communicativeness: Code allows specification of inputs and provides corresponding outputs in a usable form .
B4 Self-descriptiveness: Code provides enough information, e.g. a test model, for its use (component blocks, ports with required and provided services, connectors) and verification (objectives, assumptions, constraints, revisions and usage history).
B5 Structured: Code is organized into definite interdependent parts, i.e. the software system is composed of part-wise/component-based testable units .
In summary, testable code has an established verification criterion and supports performance evaluation. Testing in the software industry is broadly categorised into unit, integration and system testing [4]-[7]. A fourth level, acceptance testing, may be added on some occasions (see Fig. 2).
L1 Unit testing, as the name suggests, tests individual code components.
L2 When different code modules are joined, testing the data flowing through their interfaces is integration testing.
L3 In system testing, a system prepared from several code units is tested against functional and non-functional system-level requirements.
L4 Acceptance testing is an additional enforcement check performed against a contract after the final delivery of the software product.
High-speed, repeatable, low-effort and inexpensive testing improves the overall quality of a software product and of its development workflow. A well-structured test and validation mechanism becomes cheaper when automated. Removing the human element from the testing and enforcement equation is of great value for high-complexity, incrementally growing software systems. Machine-delivered tests allow for frequent regression checks on legacy systems over the long-term life cycle of a software product. The importance of automation in software testing was realized very early in the evolution of software development as an industrial process [8], [9]. A trade-off exists between the degree of automation in testing and the regularity with which those tests will be executed, and it tilts heavily in favour of automation as a system grows more elaborate and mission-critical [10]. Thus, it is safe to assume that any modern software development methodology is incomplete without a comprehensive plan to automate testing at all test levels (L1-L4) and across all test quality parameters (B1-B5).
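To make the automation argument concrete, the sketch below shows a minimal automated unit test (level L1) using Python's standard unittest framework. The component under test, clamp_velocity, is a hypothetical helper of our own invention standing in for a real robot code unit; it is not taken from any cited system.

```python
import unittest

# Hypothetical code unit: a velocity limiter such as a robot motion
# controller might expose. Purely illustrative.
def clamp_velocity(v, v_max):
    """Limit a commanded velocity to the interval [-v_max, v_max]."""
    return max(-v_max, min(v, v_max))

class ClampVelocityTest(unittest.TestCase):
    # Unit test (L1): exercises a single code unit in isolation,
    # repeatably and without human involvement.
    def test_within_limits(self):
        self.assertEqual(clamp_velocity(0.5, 1.0), 0.5)

    def test_exceeds_limit(self):
        self.assertEqual(clamp_velocity(2.0, 1.0), 1.0)
        self.assertEqual(clamp_velocity(-2.0, 1.0), -1.0)

# Such a suite can run on every commit (e.g. via `python -m unittest`),
# giving the frequent, machine-delivered regression checks argued for above.
```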
Testing robotic software differs from software testing in general for several reasons:
R1 Robots interact with the physical world through sensors and actuators, which are inherently prone to noise and faulty execution [11].
R2 Environments where robots work are often open-ended, with very few assumptions that can be made about them [12]. There is physical danger involved in working with robotic systems, especially during testing.
R3 Safety concerns are high especially when robots work near humans .
R4 Robot cost is high and their availability for extensive testing is low .
R5 Robots, sensors mounted on those robots and the software driving the two are often built by different vendors with a wide scope of utilization in mind. Robust and verifiable hardware and software composition thus become ever more important for robotic systems [13] .
R6 Very few off-the-shelf software components fit real-world applications; a large portion of code has to be custom-made for a particular robot.
R7 Robots are built using several hardware components, like gears, wheels and consumables, which wear down, get damaged or are replaced over time. The challenge is to reuse, with confidence, the existing software against worn-out or new, slightly different hardware [11], [14].
R8 It is difficult to specify what constitutes correct behaviour for a robotic system [15], i.e. it is not always clear what needs to be tested. The challenge comes in particular from the open-ended world, for which full-coverage testing is not possible.
R9 People from various domains work on a robot, and not all of them are trained software engineers.
R10 There is a lack of communities and uniform standards in the robotics industry. Standards for robotic hardware and software should be abstract and encapsulated, hiding the intellectual property of a business while not compromising usability and configurability for the end-user.
A recent extensive study on the challenges of testing robotic systems concluded with three important themes describing the major challenges in testing robots. Following are the themes and the solutions suggested in the study [16].
A1 Real-world complexities: A robot's interactions with the real world are a key difficulty in testing robotic software. A1S1 Rapid development of reliable simulators with better Application Programming Interfaces (APIs) and User Interfaces (UIs), leading to better automated simulation testing. A1S2 Research on tools and techniques for automated testing of robotic software.
A2 Communities and standards: Community-driven standards promote product quality and incentivise member businesses . A2S1 Developing a robotic software ecosystem with special emphasis on software quality standards . A2S2 Guidelines and tools to promote healthy practices for the growth of the ecosystem .
A3 Component integration: Robotic software demands an integrated (hardware + software) approach to system testing. A3S1 Better availability of hardware components for testing within the community. A3S2 Promotion of tools like hardware-in-the-loop simulation, borrowed from other industries. A3S3 Development of better test oracles to ensure that the testing apparatus knows with confidence the expected results of running a test.
In the past decade, there have been some application-specific proposals for systems for testing robotic software [17]-[27]. These approaches present tools and algorithms to address the above concerns (R1-R10), but they are either too specific to a particular robotic application [18]-[20] or are generic extensions of general software testing techniques and standards [21], [22] with little to no planning specifically for robotic software testing. In either case, none of these is community-driven or provides tools or standards for businesses to follow. A notable exception is the Robot Testing Framework (RTF) [23], a right step towards test automation through a plug-in-based approach to testing that is independent of the platform, middleware and programming language used for writing the test plug-ins. There are also works focusing on improving simulation testing [24]-[26] and model-driven performance testing [27]. An idea borrowed from the computer hardware industry [28], built-in test enabled components [29]-[31], is to have a functionally separate maintenance mode that provides access to Built-In Tests (BITs). Using test wrappers is another common approach that works alongside the BIT approach: the software component is enveloped in a single- or multi-layer software wrapper that is transparent to both the component being tested and its peers in the environment. These wrappers, when used with BITs, enable testing without breaking component encapsulation. RESOLVE [32] is one such approach; it proposes a two-layer wrapper to achieve automated black-box testing of software components.
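The BIT-plus-wrapper idea can be sketched as follows. This is an illustrative single-layer sketch of the general technique, not the cited RESOLVE two-layer design; the component, its BITs and all names are hypothetical.

```python
# Sketch: a component with built-in tests (BITs) reachable through a
# separate maintenance interface, plus a transparent test wrapper.
class RangeSensorComponent:
    """Hypothetical component; its internals stay encapsulated."""

    # --- normal operating interface ---
    def read(self):
        return 1.25  # stands in for real sensor I/O

    # --- maintenance mode: built-in tests ---
    def run_bits(self):
        """Execute the component's BITs; returns {test name: passed?}."""
        reading = self.read()
        return {
            "reading_is_numeric": isinstance(reading, float),
            "reading_in_range": 0.0 <= reading <= 10.0,
        }

class TestWrapper:
    """Single-layer wrapper, transparent to the component and its peers:
    it forwards every normal call and only adds test access."""
    def __init__(self, component):
        self._component = component

    def __getattr__(self, name):
        # Forward normal usage untouched, so peers see the component as-is.
        return getattr(self._component, name)

    def test(self):
        results = self._component.run_bits()
        return all(results.values()), results

wrapped = TestWrapper(RangeSensorComponent())
ok, report = wrapped.test()   # test access via the wrapper
reading = wrapped.read()      # normal usage is unchanged
```

The wrapper never inspects the component's internals; it only drives the BITs the component itself ships, which is what preserves black-box encapsulation.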
The SmartTS approach to component testing is to create a tester component whose model is derived from the model of the component being tested. This tester component implements the BITs for automated testing of the component, and its code is transparent (white-boxed) to the ecosystem. Component encapsulation is thus maintained (black-boxed), while no additional operational overhead is attached to the component for the implementation and execution of BITs in a maintenance/test mode (see Fig. 3).
In our experience of working with robots in the service robot industry, community-driven models with a special focus on Component-Based Software Development (CBSD) work best for the development of robotic software components. Although several Model-Driven Engineering (MDE) approaches exist for software development in general [31], [33]-[37], approaches with a special focus on robotics are essential for the growth of the robotics industry (Multi-Annual Roadmap [38], the European SPARC Robotics [39] initiative). One such effort towards creating an ecosystem for model-driven and component-based development of robotic software is the EU H2020 RobMoSys: Composable Models and Software for Robotics [40], [41] project. Meta-models that promote separation of concerns [42], [43] along different roles, such as robotic experts and application domain experts, are highly desirable for the industry. MDE supports this separation of concerns and roles since it provides operational modules dedicated to use by specific stakeholders. The RobMoSys approach places special emphasis on a clear separation of concerns and roles and promotes community building for efficient collaboration between stakeholders. Other model-based efforts towards a robotic software ecosystem [44]-[47] also take separation of concerns and roles as an essential part of their working philosophy, which will be essential for their success [48].
In CBSD for software-intensive service robotic systems, validating the behaviour of a supplied component and its interactions with other components is a complex and often manual task. In the EU Robotics Strategic Research, Innovation and Deployment Agenda 2020 on the AI, Data and Robotics Partnership [49], trustworthiness was identified as one of the core characteristics that robotics and AI systems need to display. Trustworthiness is a property of the system derived from the trustworthiness of its constituents and their interactions. Treated like any other software product, a software component for a robot system is tested by the component developer (and/or component tester) at the vendor's (component supplier's) end. Test sets and records of test results are often not available to the system builder, who may need them to verify functional and non-functional claims made by the vendor about the component. Availability of test records is key to establishing compliance and thus to selecting the most suitable component for system composition. Providing empirically verifiable test records consistent with a component's claims would greatly improve the overall safety and dependability of robotic software systems in open-ended environments. It is of added benefit that, when a system is composed of several components, a part of the system's test and validation suite is automatically generated from the test-suites of the constituent components. This further helps empirically codify a system's functional and non-functional behavioural claims. To the best of our knowledge, there is an absence of a holistic model-driven approach towards CBSD for robotic systems that integrates support for test and validation suites within the model package and that, among other properties:
E7 Does not break component encapsulation.
E8 Does not break system encapsulation to enforcement and verification agents .
In this paper, we present the "SmartTS methodology": A component-based and model-driven approach to generate model-bound test-suites for software components and systems. The test-suites in SmartTS are tightly bound to an application domain's data and service models as defined in the RobMoSys [40] (EU H2020 project) compliant SmartMDSD [50], [51] Toolchain. SmartTS provides automated generation, execution and transformation of test-suite models and test-suite results across a service domain, component and system models, enabling automated testing and verification of components and systems. Component test-suite results are used for selecting an appropriate component for composition. System test-suite results are used for documenting or sensing system behaviour during composition, acceptance testing, enforcement or for run-time diagnosis. SmartTS does not break component encapsulation for system builders while providing them complete access to the way that a component is tested and simulated (Supporting composition and separation of roles).
The rest of the paper is organized as follows. The section titled SmartTS Overview introduces the intended goals and contributions of the SmartTS toolchain and presents the principles and methodologies that have inspired SmartTS. The sections that follow place SmartTS in the context of the SmartMDSD toolchain and the RobMoSys ecosystem, before we close with conclusions and future work.

Since the trustworthiness of a system is derived from its components, the system-level test-suite is partly derived from the tester components and models of the components that constitute the system. Model-driven and component-based software development form the base on which the SmartTS methodology is placed. In this section, we walk through the methodology and present the mechanism by which SmartTS proposes a component-based and model-driven approach to software testing in a robotic software ecosystem.
SmartTS is a member of the RobMoSys/SmartMDSD ecosystem. In this paper, we present the SmartTS methodology in the context of its core ecosystem (RobMoSys/SmartMDSD); the principles and mechanisms described here, though, can be transported as-is to any component-based and model-driven software ecosystem. Fig. 4 shows the anatomy of a typical SmartMDSD component. It is typical in CBSD to represent components and systems using a blocks-ports-connectors notation (see Fig. 2). In this paper, we use a custom blocks-ports-connectors notation (Fig. 5) to present the SmartTS methodology. Note that SmartMDSD components can have any number of input, output, request or answer ports and exactly one coordination and configuration slave port, unlike what one may infer from the simplified representation in Fig. 4. If a component has a master port of a coordination and configuration interface integrated, then it can coordinate and control other components. Furthermore, the four-side arrangement of ports in Fig. 4 is only for representation; the SmartMDSD toolchain GUI does not tightly bind these ports to particular sides of a component. A SmartMDSD service [52] is an item being transported (a communication object [53]) in a particular manner (a communication pattern [54]). Depending on the communication pattern, a SmartMDSD component can possess an input, output, request or answer port (Fig. 4). The 'send' communication pattern is one-way, while the 'query' communication pattern is for two-way communication of communication objects. A publish/subscribe mechanism is available for one-to-many communication using the 'push' (distribution) and 'event' (asynchronous notification) communication patterns. Coordination ports are for a two-way exchange of 'coordination' patterns ('parameters', 'states', 'dynamic wiring' and 'monitoring data'). These 'coordination' patterns are internally built on top of the 'data' patterns ('send', 'query', 'push' and 'event').
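To fix intuition, here is a minimal sketch of the 'send', 'query' and 'push' data patterns described above ('event' behaves like 'push' but with asynchronous, condition-triggered notification). The class and method names are illustrative stand-ins, not the actual SmartMDSD API.

```python
# Minimal sketch of SmartMDSD-style 'data' communication patterns.
class SendServer:
    """'send': one-way transfer of a communication object."""
    def __init__(self):
        self.received = []
    def send(self, obj):
        self.received.append(obj)  # fire-and-forget delivery

class QueryServer:
    """'query': two-way request/answer communication."""
    def __init__(self, handler):
        self._handler = handler
    def query(self, request):
        return self._handler(request)  # answer returned to the requester

class PushServer:
    """'push': one-to-many distribution to subscribers."""
    def __init__(self):
        self._subscribers = []
    def subscribe(self, callback):
        self._subscribers.append(callback)
    def push(self, obj):
        for cb in self._subscribers:
            cb(obj)  # every subscriber receives the object

# usage
send_port = SendServer()
send_port.send({"cmd_vel": 0.2})          # one-way

query_port = QueryServer(lambda req: req * 2)
answer = query_port.query(21)             # request -> answer

inbox = []
push_port = PushServer()
push_port.subscribe(inbox.append)
push_port.push("scan#1")                  # distributed to all subscribers
```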
A system built using the SmartMDSD toolchain has a default coordination master (e.g. a sequencer) with all constituent components as its coordination clients, in a configuration similar to the example systems shown in Fig. 5.
For the benefit of the reader, it is enough to retain that a SmartMDSD component typically acts as a service consumer and a service provider at the same time, and that it is coordinated by a global coordination master component (sequencer), which in normal usage is hidden from the user. A SmartTS tester component for a component thus becomes a consumer of every service provided by the component and a provider of all services requested by the component. It also acts as a coordination master to the component, and a system built from a component X and its tester component Y has a configuration similar to the one shown in Fig. 5. The reader is advised to go through the notation given in Fig. 5: a component can have more than one instance in a system and can have differently named operating modes, systems are represented with their names in curly brackets, and the mapping of the SmartMDSD component notation (Fig. 4) to the SmartTS custom notation is given there as well. Fig. 6 shows the key SmartTS transformations and validation mechanisms. A component C (Fig. 6.i) is transformed to its tester component C ts (Fig. 6.ii, ix). In SmartMDSD, the component code is generated from its component model package. The same model package is transformed into the model package for the component C ts . This tester component provides an empty code template which is later filled to implement the BITs for component C. Once the BITs are implemented and linked to associated test and validation data (discussed later in the section SmartTS and the SmartMDSD Toolchain), the component C ts is deployed to test component C (Fig. 6.x). The component C ts has three principal operating modes, namely test, train and simulate (Fig. 6.iii-v). In test mode, the component C ts is deployed with component C to form the test system {C test } for component C (Fig. 6.vii, x). The test results C tr from the system {C test } are used to validate the claims made by the component (Fig. 6.vi, xi). The component C ts is deployed in train mode to form the training system {C train } for component C (Fig. 6.viii, xii). This training system {C train } trains the component C ts to work in simulate mode. Note that C ts is simulated against BITs which may not match the BITs implemented for its test mode; the difference between these two sets of BITs lies only in the motivation behind their existence.
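The port-mirroring at the heart of the C to C ts transformation can be sketched as a simple model-to-model function. The dictionary fields and the example component below are illustrative stand-ins, not the SmartMDSD metamodel.

```python
# Sketch of the model-to-model transformation behind C -> C_ts:
# the tester component's model mirrors the tested component's ports.
def derive_tester_model(component_model):
    """Mirror provided/required services and take coordination mastership."""
    return {
        "name": component_model["name"] + "Ts",
        # consumer of every service the component provides
        "required_services": list(component_model["provided_services"]),
        # provider of every service the component requires
        "provided_services": list(component_model["required_services"]),
        # master to the component's coordination slave port
        "coordination": "master",
        # the three principal operating modes of a tester component
        "modes": ["test", "train", "simulate"],
    }

# Hypothetical component model C:
c_model = {
    "name": "LaserObstacleAvoid",
    "provided_services": ["NavigationVelocityService"],
    "required_services": ["LaserService"],
    "coordination": "slave",
}
c_ts_model = derive_tester_model(c_model)
```

Deployed together, C and the component generated from c_ts_model form a closed test system: every port of C is connected, so the BITs can drive it without any peer from the production system.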

SmartTS Tester Component
The SmartTS tester component for a component is a consumer of every service provided by the component and a provider of all services requested by the component, and it acts as a coordination master to the component.

SmartTS Test System
SmartTS test system is a system with a component and its tester component deployed to execute the BITs implemented by the tester component and generate corresponding test results.

SmartTS Trainer System
SmartTS trainer system is a system with a component and its trainer component deployed to execute the BITs implemented by the trainer component and generate a fully trained simulator component.

SmartTS Simulator Component
SmartTS simulator component is a tester component operating in the simulate mode. The simulator component can reproduce the service and coordination behaviour of the component for a specific set of BITs.
In principle, once trained, the component C ts in simulate mode can reproduce the service and coordination behaviour of component C for a specific set of BITs. This simulate-mode component C ts is then used in various simulated variants ({SsC}, {SpC} and {S sim }: Fig. 6.xiv, xv, xvii, xix-xxi) of a given system {S} (Fig. 6.xiii, xviii). Results (Fig. 6.xvi, xix-xxii) from these simulated variants of the system are transformed into a single set of simulation test results {S} tr for the system {S} (Fig. 6.xxii). Trustworthiness (conformance to claims and agreed-upon BITs) is a property of the system {S} derived from the trustworthiness of its constituents (A tr , B tr and C tr ) and their interactions ({SsA} tr , {SpA} tr , {SsB} tr , {SpB} tr , {SsC} tr , {SpC} tr , {S sim } tr ). The simulation test results {S} tr for the system {S}, along with the test results of its constituents (A tr , B tr and C tr ), are used to validate the claims made by the system (Fig. 6.xxiii).

III. SMARTTS AND THE SMARTMDSD TOOLCHAIN
The SmartMDSD toolchain [50], [51] is RobMoSys [40] compliant model-driven tooling for component-based robotic software development based on the SMARTSOFT methodology [55]. SmartTS, the test-suite extension for the SmartMDSD toolchain presented for the first time in this paper, is an addition to the existing SmartMDSD toolchain and provides constructs for modelling built-in contract testing in systems built using the SmartMDSD toolchain. SmartTS provides models to associate a test and validation suite with any of the existing SmartMDSD models. It also allows for the creation and usage of data elements associated with the test and validation suites. Eclipse features and plug-ins for SmartTS are available for download [56]. Context and video tutorials on the use of SmartTS will soon be available online at the SRRC wiki web page [57]. Fig. 7 shows the key elements of the SmartTS methodology.
These tier-3 contracts (Fig. 7) can be written for claims made by a component developer, requirements set by a system builder, mutually agreed-upon behaviour or any other contractual requirement that any of the models from a SmartMDSD model package should adhere to. The tier-3 contracts in built-in contract testing could also be shared between different component developers, as standard tests that all components of a particular kind should pass, or between system builders, as standard tests for quality assurance or enforcement-related requirements. Standard tests for domain requirements, quality assurance and enforcement can be distributed as tier-2 contracts between domain experts and ecosystem users.
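As a rough illustration of what such a shareable contract could carry, the dataclass below binds a behavioural claim to a subject model and an associated data set. This structure and all its field names are our own invention for exposition; the actual SmartTS contract model is defined in the toolchain [56].

```python
# Illustrative sketch only: a distributable built-in-contract record.
from dataclasses import dataclass

@dataclass
class TestContract:
    tier: int          # 2 = domain-wide standard, 3 = ecosystem-user level
    subject: str       # name of the model the contract binds to
    claim: str         # behavioural claim being codified
    dataset: str       # associated test/validation data set (Fig. 10.c)
    tolerance: float = 0.0  # acceptable deviation, where numeric

# A tier-3 contract a system builder might attach to a component model
# (component name, claim and data-set name are hypothetical):
contract = TestContract(
    tier=3,
    subject="LaserObstacleAvoid",
    claim="stops within 0.5 m of a detected obstacle",
    dataset="obstacle_scans_v1",
    tolerance=0.05,
)
```

Because the record is plain data, the same contract could be exchanged between component developers, system builders or domain experts, matching the tier-2/tier-3 distribution described above.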

IV. SMARTTS IN THE CONTEXT OF ROBMOSYS
RobMoSys: Composable Models and Software for Robotics [40], [41] is an EU H2020 funded project (2017-2020, grant number 732410) to create better modelling standards and tooling for robotic systems. RobMoSys has a three-tier ecosystem for model-driven, component-based software development for robotic systems (RobMoSys: Wiki [41]). Fig. 9 shows the three tiers of the RobMoSys ecosystem and the roles that participate in these tiers. Members of a lower tier conform to models defined by members of a higher tier in the ecosystem. The SmartMDSD toolchain [50], [51] is a RobMoSys-conformant toolchain that enables ecosystem users to share components and compose systems according to the principles dictated by RobMoSys. SmartTS is an addition to the SmartMDSD tooling and provides a model-based methodology for software testing in the RobMoSys ecosystem. Fig. 10 shows some of the key features of SmartTS acting along with the SmartMDSD toolchain. Tier-2 domain experts and tier-3 users transform SmartMDSD models to contracts (Fig. 10.a) and documents (Fig. 10.b). SmartTS documents are transformed into data sets (Fig. 10.c) which are referred to in SmartTS contracts. Component models are transformed to their SmartTS tester components (Fig. 10.(d,e,f)), which are used to test (Fig. 10.g) or simulate (Fig. 10.h) the component. SmartTS tooling as it stands today is functionally complete for the workflow described in Fig. 8. Automation and visualization of some workflow elements (e.g. validation graphs) is planned to further improve the user experience.

VI. CONCLUSIONS AND FUTURE WORKS
Validating the behaviour of commercial off-the-shelf components and of system interactions is enhanced by the availability of empirically verifiable test records consistent with a component's claims. The trustworthiness of a system is derived from the trustworthiness of its constituents, and a test and validation suite for a system can be built using the model-driven test and validation suites of its components. In this paper, we presented the SmartTS methodology: a component-based and model-driven approach to generate model-bound test-suites for software components and systems. The test-suites in SmartTS are tightly bound to an application domain's data and service models as defined in the RobMoSys (EU H2020 project) compliant SmartMDSD toolchain. SmartTS does not break component encapsulation for system builders while providing them complete access to the way a component is tested and simulated. At present, the SmartTS functionality is partially consolidated in the SmartMDSD toolchain. Plans to automate the remaining SmartTS transformations are marked for incorporation in future releases of the SmartMDSD toolchain as SmartDBE (Smart Digital Business Ecosystem) features and plug-ins [56].