Abstract: Based on independently distributed $X_{1}\sim N_{p}(\theta_{1},\sigma_{1}^{2}I_{p})$ and $X_{2}\sim N_{p}(\theta_{2},\sigma_{2}^{2}I_{p})$, we consider the efficiency of various predictive density estimators for $Y_{1}\sim N_{p}(\theta_{1},\sigma_{Y}^{2}I_{p})$, with the additional information $\theta_{1}-\theta_{2}\in A$ and known $\sigma_{1}^{2},\sigma_{2}^{2},\sigma_{Y}^{2}$. We provide improvements on benchmark predictive densities such as those obtained by plug-in, by maximum likelihood, or as the minimum risk equivariant density. Dominance results are obtained for $\alpha$-divergence losses and include Bayesian improvements for Kullback-Leibler (KL) loss in the univariate case ($p=1$). An ensemble of techniques is exploited, including variance expansion, point estimation duality, and concave inequalities. Representations for Bayesian predictive densities, and in particular for $\hat{q}_{\pi_{U,A}}$ associated with a uniform prior for $\theta=(\theta_{1},\theta_{2})$ truncated to $\{\theta\in\mathbb{R}^{2p}:\theta_{1}-\theta_{2}\in A\}$, are established and used for the Bayesian dominance findings. Finally, and interestingly, these Bayesian predictive densities also relate to skew-normal distributions, as well as to new forms of such distributions.
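As an illustrative point of reference, and assuming only standard forms rather than results stated in this abstract, under Kullback-Leibler loss the plug-in and minimum risk equivariant (MRE) benchmarks mentioned above typically take the normal forms
\[
\hat{q}_{\mathrm{plug\text{-}in}}(y_{1};x_{1})=\phi_{p}\!\left(y_{1};\,\hat{\theta}_{1},\,\sigma_{Y}^{2}I_{p}\right),
\qquad
\hat{q}_{\mathrm{MRE}}(y_{1};x_{1})=\phi_{p}\!\left(y_{1};\,x_{1},\,(\sigma_{1}^{2}+\sigma_{Y}^{2})I_{p}\right),
\]
where $\phi_{p}(\,\cdot\,;\mu,\Sigma)$ denotes the $N_{p}(\mu,\Sigma)$ density and, for concreteness, $\hat{\theta}_{1}$ may be taken as the unrestricted estimate $x_{1}$; the variance expansion technique then refers to inflating the plug-in variance toward such larger-variance densities.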