91.
The question on race from Census 2000 was different from previous censuses because it allowed respondents to select one or more races to indicate their racial identities. Because of this change, the race data from Census 2000 are not directly comparable with data from earlier censuses. Researchers can use "bridging" methods to assign respondents who reported more than one race to single-race categories, maximizing the comparability of Census 2000 race data with earlier censuses. This paper uses several bridging methods to generate race population estimates and analyzes the variability in those estimates across six single-race groups.
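A minimal sketch of one simple bridging rule, assuming an "equal fractions" allocation in which each multiple-race respondent contributes an equal share to every race reported; the category labels and weights below are hypothetical, and the paper compares several bridging rules rather than this one specifically.

from collections import defaultdict

def bridge_equal_fractions(respondents):
    """Allocate each respondent's weight across the single races reported.

    respondents: iterable of (races, weight) pairs, where `races` is a list
    of single-race labels the respondent selected.
    Returns bridged single-race population totals.
    """
    totals = defaultdict(float)
    for races, weight in respondents:
        share = weight / len(races)          # equal-fractions rule
        for race in races:
            totals[race] += share
    return dict(totals)

# Hypothetical micro-example: two single-race and one two-race respondent.
sample = [(["White"], 1.0), (["Black"], 1.0), (["White", "Asian"], 1.0)]
print(bridge_equal_fractions(sample))  # {'White': 1.5, 'Black': 1.0, 'Asian': 0.5}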
92.
Using survey weights, You & Rao [You and Rao, The Canadian Journal of Statistics 2002; 30, 431–439] proposed a pseudo‐empirical best linear unbiased prediction (pseudo‐EBLUP) estimator of a small area mean under a nested error linear regression model. This estimator borrows strength across areas through a linking model, and makes use of survey weights to ensure design consistency and preserve benchmarking property in the sense that the estimators add up to a reliable direct estimator of the mean of a large area covering the small areas. In this article, a second‐order approximation to the mean squared error (MSE) of the pseudo‐EBLUP estimator of a small area mean is derived. Using this approximation, an estimator of MSE that is nearly unbiased is derived; the MSE estimator of You & Rao [You and Rao, The Canadian Journal of Statistics 2002; 30, 431–439] ignored cross‐product terms in the MSE and hence it is biased. Empirical results on the performance of the proposed MSE estimator are also presented. The Canadian Journal of Statistics 38: 598–608; 2010 © 2010 Statistical Society of Canada 相似文献
93.
Abbas Khalili 《Revue canadienne de statistique》2010,38(4):519-539
We study estimation and feature selection problems in mixture-of-experts models. An $l_2$-penalized maximum likelihood estimator is proposed as an alternative to the ordinary maximum likelihood estimator. The estimator is particularly advantageous when fitting a mixture-of-experts model to data with many correlated features. It is shown that the proposed estimator is root-$n$ consistent, and simulations show its superior finite-sample behaviour compared with that of the maximum likelihood estimator. For feature selection, two extra penalty functions are applied to the $l_2$-penalized log-likelihood function. The proposed feature selection method is computationally much more efficient than the popular all-subset selection methods. Theoretically, it is shown that the method is consistent in feature selection, and simulations support our theoretical results. A real-data example is presented to demonstrate the method. The Canadian Journal of Statistics 38: 519–539; 2010 © 2010 Statistical Society of Canada
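A sketch of the criterion, with illustrative notation; the exact gating parameterization and penalties in the paper may differ. In a $K$-component Gaussian mixture-of-experts the conditional density is $f(y\,|\,\mathbf{x};\theta) = \sum_{k=1}^{K}\pi_k(\mathbf{x};\boldsymbol{\alpha})\,\phi(y;\mathbf{x}^{\top}\boldsymbol{\beta}_k,\sigma_k^2)$ with softmax gates $\pi_k(\mathbf{x};\boldsymbol{\alpha}) = \exp(\mathbf{x}^{\top}\boldsymbol{\alpha}_k)/\sum_{l}\exp(\mathbf{x}^{\top}\boldsymbol{\alpha}_l)$. An $l_2$-penalized estimator then maximizes $\sum_{i=1}^{n}\log f(y_i\,|\,\mathbf{x}_i;\theta) - \lambda_n\sum_{k=1}^{K}(\|\boldsymbol{\alpha}_k\|_2^2 + \|\boldsymbol{\beta}_k\|_2^2)$, which stabilizes the fit when the features are highly correlated; for feature selection, additional sparsity-inducing penalties are added to this penalized log-likelihood.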
94.
The authors consider the problem of simulating the times of events such as extremes and barrier crossings in diffusion processes. They develop a rejection sampler based on Shepp [Shepp, Journal of Applied Probability 1979; 16:423–427] for simulating an extreme of a Brownian motion and use it in a general recursive scheme for more complex simulations, including simultaneous simulation of the minimum and maximum and application to more general diffusions. They price exotic options that are difficult to price analytically: a knock-out barrier option with a modified payoff function, a lookback option that includes discounting at the risk-free interest rate, and a chooser option where the choice is made at the time of a barrier crossing. The Canadian Journal of Statistics 38: 738–755; 2010 © 2010 Statistical Society of Canada
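A minimal Monte Carlo sketch of the underlying idea, using the closed-form conditional law of the maximum of a Brownian bridge rather than the paper's Shepp-based rejection sampler; the barrier level, strike, and other parameters below are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def terminal_and_max(mu, sigma, T, n):
    """Sample (W_T, max_{0<=t<=T} W_t) for W_t = mu*t + sigma*B_t.

    Given the endpoint w, the running maximum of the bridge satisfies
    P(M >= m | W_T = w) = exp(-2 m (m - w) / (sigma^2 T)) for m >= max(0, w),
    which can be inverted exactly with a single uniform draw.
    """
    w = mu * T + sigma * np.sqrt(T) * rng.standard_normal(n)
    u = rng.uniform(size=n)
    m = 0.5 * (w + np.sqrt(w**2 - 2.0 * sigma**2 * T * np.log(u)))
    return w, m

# Hypothetical up-and-out call on geometric Brownian motion S_t = S0 * exp(X_t).
S0, K, H, r, sigma, T, n = 100.0, 100.0, 130.0, 0.05, 0.2, 1.0, 200_000
mu = r - 0.5 * sigma**2                      # drift of X_t under the risk-neutral measure
x_T, x_max = terminal_and_max(mu, sigma, T, n)
alive = x_max < np.log(H / S0)               # knocked out if the log-barrier is crossed
payoff = np.where(alive, np.maximum(S0 * np.exp(x_T) - K, 0.0), 0.0)
print("up-and-out call price ~", np.exp(-r * T) * payoff.mean())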
95.
A contaminated beta model $(1-\gamma) B(1,1) + \gamma B(\alpha,\beta)$ is often used to describe the distribution of $P$-values arising from a microarray experiment. The authors propose and examine a different approach: namely, using a contaminated normal model $(1-\gamma) N(0,\sigma^2) + \gamma N(\mu,\sigma^2)$ to describe the distribution of $Z$ statistics or suitably transformed $T$ statistics. The authors then address whether a researcher who has $Z$ statistics should analyze them using the contaminated normal model or whether the $Z$ statistics should be converted to $P$-values to be analyzed using the contaminated beta model. The authors also provide a decision-theoretic perspective on the analysis of $Z$ statistics. The Canadian Journal of Statistics 38: 315–332; 2010 © 2010 Statistical Society of Canada
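A minimal EM sketch for fitting the contaminated normal model $(1-\gamma)N(0,\sigma^2) + \gamma N(\mu,\sigma^2)$ to a vector of $Z$ statistics; this is a generic EM for the stated mixture (common variance, null component fixed at mean zero), not necessarily the fitting procedure used in the paper, and the simulated data are hypothetical.

import numpy as np
from scipy.stats import norm

def fit_contaminated_normal(z, n_iter=200):
    """EM for (1 - gamma) N(0, sigma^2) + gamma N(mu, sigma^2)."""
    gamma, mu, sigma = 0.1, float(np.mean(z)), float(np.std(z))
    for _ in range(n_iter):
        # E-step: posterior probability that each z comes from the non-null component
        f0 = (1 - gamma) * norm.pdf(z, 0.0, sigma)
        f1 = gamma * norm.pdf(z, mu, sigma)
        resp = f1 / (f0 + f1)
        # M-step: update mixing weight, non-null mean, and common variance
        gamma = resp.mean()
        mu = (resp * z).sum() / resp.sum()
        var = ((1 - resp) * z**2 + resp * (z - mu) ** 2).sum() / len(z)
        sigma = np.sqrt(var)
    return gamma, mu, sigma

# Hypothetical example: 90% null Z statistics and 10% shifted to mean 3.
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0, 1, 900), rng.normal(3, 1, 100)])
print(fit_contaminated_normal(z))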
96.
It is important to study historical temperature time series prior to the industrial revolution so that one can view the current global warming trend from a long-term historical perspective. Because there are no instrumental records of such historical temperature data, climatologists have been interested in reconstructing historical temperatures using various proxy time series. In this paper, the authors examine a state-space model approach for historical temperature reconstruction which makes use not only of the proxy data but also of information on external forcings. A challenge in the implementation of this approach is the estimation of the parameters in the state-space model. The authors develop two maximum likelihood methods for parameter estimation and study the efficiency and asymptotic properties of the associated estimators through a combination of theoretical and numerical investigations. The Canadian Journal of Statistics 38: 488–505; 2010 © 2010 Crown in the right of Canada
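A generic illustrative form of such a state-space model (not necessarily the authors' exact specification): the latent temperature evolves as $T_t = \phi T_{t-1} + \boldsymbol{\beta}^{\top}\mathbf{f}_t + w_t$, where $\mathbf{f}_t$ collects external forcings (e.g., solar and volcanic series) and $w_t \sim N(0,\sigma_w^2)$, while each proxy series enters as an observation equation $P_t = a + b\,T_t + v_t$ with $v_t \sim N(0,\sigma_v^2)$. The unknown parameters $(\phi, \boldsymbol{\beta}, a, b, \sigma_w^2, \sigma_v^2)$ can then be estimated by maximizing the Gaussian likelihood evaluated with the Kalman filter, and the reconstruction itself is obtained from the corresponding smoother.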
97.
98.
Prior sensitivity analysis and cross-validation are important tools in Bayesian statistics. However, due to the computational expense of implementing existing methods, these techniques are rarely used. In this paper, the authors show how it is possible to use sequential Monte Carlo methods to create an efficient and automated algorithm to perform these tasks. They apply the algorithm to the computation of regularization path plots and to assess the sensitivity of the tuning parameter in g-prior model selection. They then demonstrate the algorithm in a cross-validation context and use it to select the shrinkage parameter in Bayesian regression. The Canadian Journal of Statistics 38: 47–64; 2010 © 2010 Statistical Society of Canada
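A toy sketch of the core reweighting step behind such an approach: because the likelihood does not depend on the prior's tuning parameter, particles drawn under one value can be importance-reweighted to neighbouring values using only the prior ratio, with resampling when the effective sample size drops. This is a generic illustration under an assumed Gaussian shrinkage prior; the parameter names and thresholds are hypothetical, and a full SMC sampler would typically also include move steps to rejuvenate the particles.

import numpy as np

rng = np.random.default_rng(1)

def log_prior(theta, lam):
    """Hypothetical Gaussian shrinkage prior with precision lam.

    The (d/2) log(lam) normalizing term is omitted; it is the same for
    every particle and cancels when the weights are normalized.
    """
    return -0.5 * lam * np.sum(theta**2, axis=1)

def smc_reweight(samples, lam_grid):
    """Move a weighted particle set across a grid of shrinkage parameters."""
    n = samples.shape[0]
    logw = np.zeros(n)
    for lam_old, lam_new in zip(lam_grid[:-1], lam_grid[1:]):
        # Incremental weight: ratio of priors at the new and old tuning parameter
        logw += log_prior(samples, lam_new) - log_prior(samples, lam_old)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        if 1.0 / np.sum(w**2) < n / 2:       # resample when the effective sample size is low
            idx = rng.choice(n, size=n, p=w)
            samples, logw = samples[idx], np.zeros(n)
            w = np.full(n, 1.0 / n)
        yield lam_new, samples, w

# Hypothetical usage: particles approximating a posterior at lam = 0.1, swept up to lam = 10.
init = rng.normal(size=(5_000, 3))
for lam, particles, weights in smc_reweight(init, np.geomspace(0.1, 10.0, 20)):
    pass  # compute weighted posterior summaries at each lam, e.g. points on a regularization path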
99.
This paper discusses the core algorithms of JPEG2000, which is based on the EBCOT algorithm, uses the DWT (discrete wavelet transform), and adopts a two-tier coding strategy, achieving a good compression ratio.
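A small sketch of the wavelet-transform stage only (EBCOT bit-plane coding is not shown), assuming the PyWavelets package; "bior4.4" is used here as a stand-in for the 9/7 biorthogonal wavelet of JPEG 2000's irreversible transform, and the uniform quantization step is a deliberately crude illustration of where the compression loss comes from.

import numpy as np
import pywt

# Toy "image" and a two-level 2-D DWT, as in the JPEG 2000 transform stage.
image = np.random.default_rng(0).integers(0, 256, size=(64, 64)).astype(float)
coeffs = pywt.wavedec2(image, wavelet="bior4.4", level=2)

# Crude uniform quantization of the detail subbands (illustration only).
step = 8.0
quantized = [coeffs[0]] + [
    tuple(np.round(band / step) * step for band in detail) for detail in coeffs[1:]
]

reconstructed = pywt.waverec2(quantized, wavelet="bior4.4")
rmse = np.sqrt(np.mean((image - reconstructed[:64, :64]) ** 2))  # slice guards against padding
print(f"RMSE after coarse quantization: {rmse:.2f}")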
100.
卢宏才 《陇东学院学报(社会科学版)》2011,(6)
As the information age accelerates, traditional employment management models face serious challenges. To meet the demand for employment informatization at today's colleges and universities, a system was developed using ASP technology with SQL Server 2000 as the back-end database. Based on the practical needs of university employment services, the prominent problems in employment management are analyzed, the overall structure and design scheme of the system are introduced, report statistics are implemented by combining ASP with Excel, and the implementation process of the system is described.