Optimization formulations for decision-making under uncertainty often contain parameters that must be calibrated from data. Examples include uncertainty set sizes in robust optimization and Monte Carlo sample sizes in constraint sampling or scenario generation. We investigate strategies for selecting good parameter values based on data splitting, validating candidate values in terms of the feasibility and optimality of the resulting solutions. We analyze the effectiveness of these strategies in relation to the complexity of the optimization class and the problem dimension.
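To make the data-splitting strategy concrete, the following is a minimal sketch, not the paper's actual procedure: a toy one-dimensional chance-constrained problem (minimize x subject to P(xi <= x) >= 1 - eps) is solved robustly as x(r) = mu + r * sigma for a grid of candidate uncertainty set sizes r, each calibrated on a training split and checked for feasibility on a validation split; the smallest r meeting the validated feasibility target is selected, since a smaller set yields a better objective. The problem form, the grid, and the parameterization x(r) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy problem (an assumption, not from the paper):
#   minimize x  subject to  P(xi <= x) >= 1 - eps,
# with the robust solution parameterized by an uncertainty set size r.
eps = 0.05
data = rng.lognormal(mean=0.0, sigma=0.5, size=1000)

# Data splitting: the training set calibrates the robust solution,
# the validation set estimates its out-of-sample feasibility.
train, val = data[:500], data[500:]
mu, sigma = train.mean(), train.std()

def robust_solution(r):
    """Robust solution for uncertainty set size r (assumed form)."""
    return mu + r * sigma

best = None
for r in np.linspace(0.0, 4.0, 81):      # candidate set sizes
    x = robust_solution(r)
    feas = np.mean(val <= x)              # validated feasibility on held-out data
    if feas >= 1 - eps:                   # smallest feasible r wins, since the
        best = (r, x, feas)               # objective (x itself) grows with r
        break

if best is not None:
    r, x, feas = best
    print(f"chosen r = {r:.2f}, solution x = {x:.3f}, "
          f"validated feasibility = {feas:.3f}")
```

The same pattern extends to higher-dimensional problems by replacing the closed-form x(r) with a call to a robust optimization solver; only the calibrate-on-train, validate-on-holdout loop is essential to the strategy described above.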