In normal multivariate regression (the regression of several dependent variables on one or more explanatory variables, the dependent variables having a joint normal distribution), the maximum likelihood estimates of the regression coefficients are the same as those obtained by maximum likelihood estimation for each dependent variable separately. Intuitively, it seems that one might be able to exploit the intercorrelations among the dependent variables to obtain a better estimator. This problem is cast into a decision-theoretic setting, in which an estimator better than the maximum likelihood estimator is obtained. This estimator is a function of the usual maximum likelihood estimator and the sample covariance matrix. It has smaller expected loss than the maximum likelihood estimator for every true value of the covariance matrix, even when the correlations among the dependent variables are zero.
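The abstract does not give the estimator's explicit form, but the phenomenon it describes, a shrinkage function of the maximum likelihood estimator dominating the MLE at every parameter value, can be illustrated with the classic James-Stein estimator for the mean of a multivariate normal distribution. The sketch below is not the paper's estimator; it is a Monte Carlo comparison, under the simplifying assumptions of a known identity covariance and an arbitrary illustrative true mean, showing that the shrunken estimate has smaller average squared-error loss than the MLE (the observation itself).

```python
import random

random.seed(0)

p = 10             # dimension; James-Stein domination requires p >= 3
trials = 20000
theta = [0.5] * p  # illustrative true mean vector (any choice works)

def sq_err(est, truth):
    """Squared-error loss between an estimate and the true mean."""
    return sum((e - t) ** 2 for e, t in zip(est, truth))

mle_loss = 0.0
js_loss = 0.0
for _ in range(trials):
    # One observation x ~ N(theta, I_p); the MLE of theta is x itself.
    x = [t + random.gauss(0.0, 1.0) for t in theta]
    norm2 = sum(xi * xi for xi in x)
    # James-Stein estimator: shrink x toward the origin.
    shrink = 1.0 - (p - 2) / norm2
    js = [shrink * xi for xi in x]
    mle_loss += sq_err(x, theta)
    js_loss += sq_err(js, theta)

mle_risk = mle_loss / trials  # Monte Carlo estimate, close to p = 10
js_risk = js_loss / trials    # strictly smaller for every theta
```

Running this, `js_risk` comes out well below `mle_risk`, mirroring the abstract's claim that the dominance holds regardless of the true parameter value; the paper's estimator extends this idea to regression coefficients by using the sample covariance matrix of the dependent variables.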