Full data obtained in this experiment are presented in (Balera and Santiago Júnior 2017). Note that P is the set of submitted parameters, V is the set of parameter values, and t is the strength. As we have just pointed out, TTR 1.1 follows the same three general steps as TTR 1.0. Section 3 presents the main definitions and procedures of versions 1.1 and 1.2 of our algorithm, and Section 4 details the first controlled experiment, in which we compare TTR 1.1 against TTR 1.2.

In version 1.1 (Balera and Santiago Júnior 2016), we introduced a change so that the input parameters are no longer ordered. In the latest version, 1.2, the algorithm no longer generates the full matrix of t-tuples (Θ); instead, it creates t-tuples one at a time and reallocates each into M. Computer scientists and mathematicians both work on algorithms to generate pairwise test suites. Numerous algorithms exist to generate such test suites because there is no efficient exact solution for every possible input and constraint scenario. MC is a process in which all reachable states of a given system are generated in the form of a directed graph.
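The tuple-by-tuple construction can be sketched as follows. This is a minimal illustration of the general idea (building M incrementally instead of materializing a full Θ matrix up front), not the actual TTR implementation; the function name and data layout are ours:

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedily build a test suite M covering every 2-tuple (pair) of
    parameter values, handling one tuple at a time: each tuple is either
    already covered, reallocated into a compatible existing test case,
    or placed into a newly inserted test case."""
    uncovered = []
    for (i, vi), (j, vj) in combinations(enumerate(params), 2):
        for a, b in product(vi, vj):
            uncovered.append(((i, a), (j, b)))
    M = []  # each test case maps parameter index -> chosen value
    for tup in uncovered:
        if any(all(tc.get(p) == v for p, v in tup) for tc in M):
            continue  # tuple already covered by some test case
        for tc in M:
            # compatible: each parameter is unset or already equal
            if all(tc.get(p, v) == v for p, v in tup):
                tc.update(tup)
                break
        else:
            M.append(dict(tup))  # no compatible row: insert a new one
    return M

suite = pairwise_suite([["a", "b"], [0, 1], ["x", "y"]])
```

Parameters left unset in a row can take any value without breaking coverage; a real generator would fill them to maximize additional coverage.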

## 3 Description of the experiment

For this experiment, the independent variable is the algorithm/tool for CIT test case generation. The dependent variables allow us to observe the result of manipulating the independent one: the number of generated test cases and the time to generate each test suite, which we considered jointly. In the context of CIT, meta-heuristics such as simulated annealing (Garvin et al. 2011), genetic algorithms (Shiba et al. 2004), and the Tabu Search Approach (TSA) (Hernandez et al. 2010) have been used.

Mixed covering arrays (MCAs) are combinatorial structures that can be used to represent these test suites. MCAs are combinatorial objects represented as matrices with one test case per row. They are small in comparison with an exhaustive approach and guarantee a given level of interaction coverage among the parameters involved. This study presents a metaheuristic approach based on a simulated annealing (SA) algorithm for constructing MCAs. The algorithm incorporates several distinguishing features, including an efficient heuristic to generate good-quality initial solutions and a compound neighbourhood function that combines two carefully designed neighbourhood functions.
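The defining property of an MCA can be checked mechanically: every t-way combination of values from the (possibly differently sized) column domains must appear in at least one row. A small sketch of such a checker, under our own naming:

```python
from itertools import combinations, product

def is_covering_array(rows, domains, t):
    """Return True iff `rows` (one test case per row) covers every
    t-way combination of values drawn from `domains` (one value set
    per column) -- the defining property of a mixed covering array."""
    k = len(domains)
    for cols in combinations(range(k), t):
        for values in product(*(domains[c] for c in cols)):
            if not any(all(row[c] == v for c, v in zip(cols, values))
                       for row in rows):
                return False
    return True

# Classic 4-row pairwise (t = 2) array for three binary parameters:
rows = [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]]
ok = is_covering_array(rows, [[0, 1]] * 3, 2)
```

Four rows suffice here, against eight for the exhaustive suite; dropping any row breaks the covering property.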


In computer science, all-pairs testing or pairwise testing is a combinatorial method of software testing that, for each pair of input parameters to a system (typically a software algorithm), tests all possible discrete combinations of those parameters. Using carefully chosen test vectors, this can be done much faster than an exhaustive search of all combinations of all parameters, by “parallelizing” the tests of parameter pairs. Nowadays, software systems are diverse as well as complex and have many possible configurations. These qualities have created a demand for software and applications that are uniquely designed and have innovative and creative features.
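The size of this reduction is easy to quantify. For a hypothetical SUT (the parameter counts below are illustrative, not taken from the experiments in this paper):

```python
# Hypothetical SUT: k = 10 parameters, v = 4 values each.
k, v = 10, 4

exhaustive = v ** k  # every full combination of all parameters
# Distinct pairs to cover: C(k, 2) column pairs, v * v values each.
distinct_pairs = (k * (k - 1) // 2) * v * v
# Each test case covers C(k, 2) = 45 pairs at once, so a pairwise
# suite needs at least ceil(distinct_pairs / 45) rows.
lower_bound = -(-distinct_pairs // (k * (k - 1) // 2))
```

Here the exhaustive suite has 4^10 = 1,048,576 test cases, while only 720 pairs must be covered, giving a lower bound of 16 rows: the “parallelization” of pair coverage is what makes the approach tractable.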

The number of combinations can be reduced further with the all-pairs technique. We developed three versions of our algorithm, implemented in Java and in C (TTR 1.2). In this paper, we focus on the description of versions 1.1 and 1.2, since version 1.0 was detailed elsewhere (Balera and Santiago Júnior 2015).

First, we present a method to extract information about the system under test (SUT) from its model using model checking (MC) techniques. MC is a method that scans all possible states of a system to detect errors. We then propose another new approach, based on a genetic algorithm, to generate a CA that is optimal in terms of speed and size. To evaluate the results, we implemented the proposed strategy, along with several other metaheuristic algorithms, in the GROOVE tool, an open toolset for designing and model checking graph transformation specifications. The results show that the proposed strategy performs better than the others.
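The core of any such metaheuristic search is a fitness function counting uncovered t-way combinations, plus a perturbation operator. The sketch below uses a simple hill-climbing loop as a stand-in for the evolutionary search; it is our own minimal illustration, not the GROOVE-based implementation described above:

```python
import random
from itertools import combinations, product

def uncovered(rows, domains, t):
    """Fitness: number of t-way value combinations not yet covered
    by `rows`; a candidate covering array is complete when this is 0."""
    missing = 0
    for cols in combinations(range(len(domains)), t):
        for values in product(*(domains[c] for c in cols)):
            if not any(all(r[c] == v for c, v in zip(cols, values))
                       for r in rows):
                missing += 1
    return missing

def mutate(rows, domains, rng):
    """Point mutation: re-sample one cell of one row."""
    rows = [r[:] for r in rows]
    i = rng.randrange(len(rows))
    c = rng.randrange(len(domains))
    rows[i][c] = rng.choice(domains[c])
    return rows

rng = random.Random(0)
domains = [[0, 1]] * 4          # four binary parameters, strength t = 2
best = [[rng.choice(d) for d in domains] for _ in range(6)]
for _ in range(2000):
    cand = mutate(best, domains, rng)
    if uncovered(cand, domains, 2) <= uncovered(best, domains, 2):
        best = cand             # accept non-worsening moves
```

A genetic algorithm would replace the single-candidate loop with a population, crossover, and selection, but the fitness function is the same.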

In Section 6, the second controlled experiment is presented, where TTR is compared with five other greedy tools. In Section 8, we present the conclusions and future directions of our research. Results of the first controlled experiment indicate that TTR 1.2 is more adequate than TTR 1.1, especially for higher strengths (5, 6). In the second controlled experiment, TTR 1.2 also presents better performance for higher strengths (5, 6), where it is not superior in only one case (in the comparison with IPOG-F).

- We performed two controlled experiments, one addressing cost-efficiency and one addressing cost alone.
- It is an adaptation of IPOG where constraint handling is provided via a SAT solver.
- The conclusion validity has to do with how sure we are that the treatment we used in an experiment is really related to the actual observed outcome (Wohlin et al. 2012).
- Software Product Lines (SPL) are difficult to validate due to combinatorics induced by variability, which in turn leads to a combinatorial explosion of the number of derivable products.

The first challenge is extracting information about parameters, identifying constraints, and detecting interactions between subsystems automatically. In most of the existing approaches, this information is fed to the system manually, which makes it difficult or even impossible to test modern software systems. Even though most of the existing approaches concentrate on this challenge, their results show that there is still room for improvement.

The strategy based on the Particle Swarm Optimization (PSO) algorithm [2], called Discrete Particle Swarm Optimization (DPSO), has the best performance among the strategies in terms of array size. The Genetic Strategy (GS) [3], based on GA, has the highest rate among AI-based strategies and good results in terms of array size. The Harmony Search Strategy (HSS) [4], Cuckoo Search (CS) [5], and Particle Swarm-based t-way Test Generator (PSTG) [6] strategies also have acceptable results in terms of time and array size among AI-based strategies. Meanwhile, the TCAS tool is the best strategy in terms of time and array size.

In construct validity, the goal is to ensure that the treatment reflects the construct of the cause, and the outcome the construct of the effect. It is also important to note that the expected goals will not always be reached with the current configurations of the M and Θ matrices. In other words, in certain cases no existing test case will allow a given t-tuple to be accommodated, so the goals of the M matrix cannot be reached. It is at this point that it becomes necessary to insert new test cases into M. This insertion is done in the same way as the initial solution for M is constructed, as described in the section above.
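This reallocate-or-insert rule can be sketched as follows (a minimal illustration under our own naming, with test cases as parameter-to-value mappings; not the actual TTR code):

```python
def place_or_insert(M, tup):
    """Try to reallocate the t-tuple `tup` (a dict of parameter ->
    value assignments) into a compatible existing test case of M.
    If no row can accommodate it, insert a new test case built from
    the tuple itself, mirroring the initial-solution construction."""
    for tc in M:
        # compatible: every parameter in `tup` is unset or already equal
        if all(tc.get(p, v) == v for p, v in tup.items()):
            tc.update(tup)
            return M
    M.append(dict(tup))  # goals unreachable with current M: new row
    return M
```

The first branch is the normal reallocation step; the append is the insertion of a new test case described above.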

However, it is not entirely clear whether the IPOG algorithm (Lei et al. 2007) was implemented in the tool or if another approach was chosen for t-way testing. In our empirical evaluation, TTR 1.2 was superior to IPO-TConfig not only for higher strengths (5, 6) but also for all strengths (from 2 to 6). Moreover, IPO-TConfig was unable to generate test cases in 25% of the instances (strengths 4, 5, 6) we selected.

The model checker searches the state space completely and examines the correctness of a property. Using MC is commonplace in CT, and various works have combined the two [18], [19], [20]. In these strategies, the input parameters are usually given separately to the CT tool and, after the test suite is generated, the tests are executed on the SUT. The independent variable is the algorithm/tool for CIT test case generation for both assessments (cost-efficiency, cost). This section presents a controlled experiment in which we compare versions 1.1 and 1.2 of TTR in order to determine whether there is a significant difference between the two versions of our algorithm.

The most common bugs in a program are usually found and triggered either by a single input parameter or by an interaction between a pair of parameters. Bugs involving interactions among three or more parameters are both progressively less common and progressively more expensive to find; the limit of such testing is exhaustive testing of all possible inputs. In this case, a combinatorial technique for picking test cases, such as all-pairs, is a very useful cost-benefit compromise that enables a significant reduction in the number of test cases without drastically compromising functional coverage.

We can also save these models and export them to different file types. Another advantage of this tool is that it can use three different algorithms (FIPOG, FIPOG-F, FIPOG-F2) to generate the combinatorial array. Are the extra tests required by orthogonal-array-based solutions (compared with pairwise solutions) worth it? Probably not in software testing: you are not seeking some ideal point in a continuum; you are looking for which two specific pieces of data will trigger a defect. Therefore, considering the metrics we defined in this work and based on both controlled experiments, TTR 1.2 is a better option if we need to consider higher strengths (5, 6).