In this paper, we implement a novel posterior predictive p-value procedure aimed at discriminating among models. The novelty of the method lies in the fact that the proposed posterior predictive p-value can easily be calibrated, converting it into an upper bound on the Bayes factor. This approach may be computationally convenient in situations where the Bayes factor is hard to compute. As an example, we consider the case where the null model is the classical small area Fay-Herriot model, whilst the alternative model accounts for possible measurement error in the auxiliary variables. In this case, the alternative model has a different dimension, owing to the additional likelihood component accounting for the measurement error, and the Bayes factor may therefore be particularly sensitive to the magnitude of this additional component. In contrast, simulations show that our method does not suffer from the difference in dimension between the two models.
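To illustrate the general idea of calibrating a p-value into a bound on the Bayes factor, the sketch below implements the classical Sellke-Bayarri-Berger calibration, under which, for p < 1/e, the Bayes factor against the null is bounded above by 1/(-e p log p). This is offered only as a minimal illustration of the calibration concept; the specific calibration proposed in the paper for the posterior predictive p-value may differ.

```python
import math

def bf_upper_bound(p):
    """Upper bound on the Bayes factor against the null model implied
    by a p-value, via the classical -e * p * log(p) calibration
    (Sellke, Bayarri and Berger): for p < 1/e the bound is
    1 / (-e * p * log(p)); for larger p the bound is 1.

    Illustrative only; the paper's own calibration may differ."""
    if not 0 < p <= 1:
        raise ValueError("p must lie in (0, 1]")
    if p < 1 / math.e:
        return 1.0 / (-math.e * p * math.log(p))
    return 1.0

# Example: p = 0.05 yields a Bayes factor bound of roughly 2.46,
# i.e. the evidence against the null is at most modest.
print(bf_upper_bound(0.05))
```

Note how weak the implied evidence is: a p-value of 0.05 caps the Bayes factor against the null at about 2.46, far short of the strong evidence the raw p-value might suggest.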