This paper is an overview of model-selection criteria for specific and general inferential situations. First, model-selection criteria for some particular, highly specified problems are discussed. These include criteria for choosing a subset of explanatory variables in regression and Akaike's final prediction error (FPE) for choosing the order of an autoregression. Next, general-purpose model-selection criteria are discussed, with a view toward retracing their origins and showing their similarities. Akaike's criterion AIC follows from an expansion of Kullback-Leibler information. The approach to model-selection criteria by expansion of the log posterior probabilities of alternative models is reviewed; Schwarz's and Kashyap's criteria emerge from this approach. Bozdogan's ICOMP, based on van Emden's notion of complexity, is defined, then compared and contrasted with the other criteria. Some work on the choice of the number of clusters in the mixture model for cluster analysis is reported. An information-theoretic approach to model selection, through minimum-bit data representation, is explored, with particular reference to cluster analysis. The similarity of the asymptotic form of Rissanen's criterion to Schwarz's criterion is discussed.
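As a concrete illustration of how such criteria trade goodness of fit against model size, the following is a minimal sketch (not from the paper; function names, the choice of FPE variant, and the numbers in the comparison are illustrative assumptions) of AIC, Schwarz's criterion, and Akaike's FPE in their standard textbook forms:

```python
import math

def aic(log_lik: float, k: int) -> float:
    """Akaike's criterion: -2 * log-likelihood plus a penalty of 2 per parameter."""
    return -2.0 * log_lik + 2.0 * k

def schwarz(log_lik: float, k: int, n: int) -> float:
    """Schwarz's criterion (BIC): the penalty grows with log(n), so it
    favors smaller models than AIC as the sample size n increases."""
    return -2.0 * log_lik + k * math.log(n)

def fpe(sigma2_hat: float, n: int, p: int) -> float:
    """Akaike's final prediction error for an AR(p) fit with residual
    variance estimate sigma2_hat. (Variants such as (n+p+1)/(n-p-1)
    appear when a mean term is also estimated.)"""
    return sigma2_hat * (n + p) / (n - p)

# Hypothetical comparison: a richer model (k = 5) fits somewhat better
# than a smaller one (k = 3); the two criteria can disagree.
n = 50
print(aic(-103.0, 3), aic(-100.0, 5))               # AIC prefers the richer model
print(schwarz(-103.0, 3, n), schwarz(-100.0, 5, n))  # Schwarz prefers the smaller one
```

In all cases the model with the lower criterion value is selected; the example shows that AIC's fixed per-parameter penalty and Schwarz's log(n) penalty need not agree.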