Besides optimization and parameter estimation, discrimination between rival models has always been a prime objective of experimental design. A good review of the early developments is given in Hill (1978). A big leap beyond these rather ad hoc approaches was made by Atkinson & Fedorov (1975), who introduced $T$-optimality, derived from the likelihood ratio test under the assumption that one model is true and its parameters are fixed at nominal values. Maximizing the noncentrality parameter is then equivalent to maximizing the power of the corresponding likelihood ratio test. When the models are nested, $T$-optimality can be shown to be equivalent to $D_s$-optimality for the parameters that embody the deviations from the smaller model (see e.g. Fedorov & Khabarov, 1986). For this setting the optimal design questions are essentially solved, and everything hinges on the asymmetric nature of the Neyman--Pearson lemma. However, the design problem itself is inherently symmetric, since the purpose of the experiment is usually to determine which of two different models is true, and nestedness of the models is the less common situation. The purpose of this talk is to propose a new criterion, termed $\Delta$-optimality, which solves the discrimination design problem for non-nested nonlinear regression models without having to resort to the asymmetry exploited in the references above. We suppose that no prior probability distribution on the unknown parameters of the models is available, which rules out Bayesian approaches such as that of Felsenstein (1992). Nevertheless, we assume a specific kind of prior knowledge about the unknown parameters, extending the approach of local optimality. We will demonstrate methodological and computational advantages of the proposed criterion and illustrate its use in a practical setting from pharmacology.
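
For orientation, the $T$-criterion of Atkinson & Fedorov (1975) referred to above can be sketched as follows, in notation introduced here for illustration: let $\eta_1(x,\theta_1)$ and $\eta_2(x,\theta_2)$ denote the rival response functions on the design region $\mathcal{X}$, let $\xi$ be a design measure, and let $\theta_1^0$ be the nominal parameter value of the model assumed true. Then
$$
T(\xi) \;=\; \min_{\theta_2 \in \Theta_2} \int_{\mathcal{X}} \bigl\{ \eta_1(x,\theta_1^0) - \eta_2(x,\theta_2) \bigr\}^2 \, \xi(\mathrm{d}x),
$$
and a $T$-optimal design maximizes $T(\xi)$, the noncentrality parameter of the likelihood ratio test. The asymmetry is evident: the first model enters with fixed nominal parameters while the second is minimized over, which is precisely the feature the proposed $\Delta$-criterion seeks to avoid.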