Search results: 37 articles.
1.
Many applications of nonparametric tests based on curve estimation involve selecting a smoothing parameter. The author proposes an adaptive test that combines several generalized likelihood ratio tests in order to get power performance nearly equal to whichever of the component tests is best. She derives the asymptotic joint distribution of the component tests and that of the proposed test under the null hypothesis. She also develops a simple method of selecting the smoothing parameters for the proposed test and presents two approximate methods for obtaining its P-value. Finally, she evaluates the proposed test through simulations and illustrates its application to a set of real data.
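The max-over-bandwidths construction behind such adaptive tests can be sketched in a few lines. The sketch below is illustrative, not the author's exact procedure: it uses a Nadaraya–Watson fit-energy statistic as a stand-in for the generalized likelihood ratio statistic, assumes x lies in [0, 1] and that errors are standard normal under the null, and calibrates the standardized maximum by Monte Carlo.

```python
import math
import random
import statistics

def nw_smooth(x, y, grid, h):
    """Nadaraya-Watson estimate of E[y|x] on a grid (Gaussian kernel)."""
    out = []
    for g in grid:
        w = [math.exp(-0.5 * ((xi - g) / h) ** 2) for xi in x]
        sw = sum(w)
        out.append(sum(wi * yi for wi, yi in zip(w, y)) / sw)
    return out

def component_stat(x, y, h):
    """GLR-type statistic for H0: m(x) = 0 -- energy of the smoothed fit.

    Assumes x takes values in [0, 1] (the evaluation grid is fixed there).
    """
    grid = [i / 20 for i in range(21)]
    fit = nw_smooth(x, y, grid, h)
    return sum(f * f for f in fit)

def adaptive_test(x, y, bandwidths, n_mc=300, seed=1):
    """Adaptive test: max of standardized component statistics.

    The joint null distribution of the components (and hence of the max) is
    approximated by Monte Carlo, assuming N(0, 1) errors under H0.
    Returns the observed standardized maximum and its Monte Carlo p-value.
    """
    rng = random.Random(seed)
    n = len(x)
    null_stats = []
    for _ in range(n_mc):
        y0 = [rng.gauss(0, 1) for _ in range(n)]
        null_stats.append([component_stat(x, y0, h) for h in bandwidths])
    means = [statistics.mean(col) for col in zip(*null_stats)]
    sds = [statistics.stdev(col) for col in zip(*null_stats)]

    def std_max(stats_):
        return max((s - m) / sd for s, m, sd in zip(stats_, means, sds))

    t_obs = std_max([component_stat(x, y, h) for h in bandwidths])
    null_max = [std_max(s) for s in null_stats]
    pval = sum(t >= t_obs for t in null_max) / n_mc
    return t_obs, pval
```

Standardizing each component before taking the maximum is what lets the combined test track whichever bandwidth happens to be best for the alternative at hand.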
2.
We develop an omnibus two-sample test for ranked-set sampling (RSS) data. The test statistic is the conditional probability of seeing the observed sequence of ranks in the combined sample, given the observed sequences within the separate samples. We compare the test to existing tests under perfect rankings, finding that it can outperform existing tests in terms of power, particularly when the set size is large. The test does not maintain its level under imperfect rankings. However, one can create a permutation version of the test that is comparable in power to the basic test under perfect rankings and also maintains its level under imperfect rankings. Both tests extend naturally to judgment post-stratification, unbalanced RSS, and even RSS with multiple set sizes. Interestingly, the tests have no simple random sampling analog.
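A permutation version of a two-sample RSS test can be sketched as follows. This is a simplified stand-in, not the conditional rank-sequence statistic of the abstract: it uses the difference of overall sample means and, crucially, permutes pooled observations only within matched rank strata, which is what preserves the ranking structure (and hence the level) under imperfect rankings.

```python
import random

def rss_permutation_test(sample_x, sample_y, n_perm=2000, seed=7):
    """Two-sample permutation test for ranked-set samples.

    sample_x, sample_y: dicts mapping rank stratum -> list of measured values.
    Statistic: absolute difference of overall sample means (an illustrative
    choice).  Null resampling permutes the pooled values WITHIN each rank
    stratum, respecting the RSS structure.
    """
    rng = random.Random(seed)

    def mean_diff(xs, ys):
        fx = [v for vals in xs.values() for v in vals]
        fy = [v for vals in ys.values() for v in vals]
        return sum(fx) / len(fx) - sum(fy) / len(fy)

    t_obs = abs(mean_diff(sample_x, sample_y))
    count = 0
    for _ in range(n_perm):
        px, py = {}, {}
        for r in sample_x:
            pooled = sample_x[r] + sample_y[r]  # copy; originals untouched
            rng.shuffle(pooled)
            px[r] = pooled[:len(sample_x[r])]
            py[r] = pooled[len(sample_x[r]):]
        if abs(mean_diff(px, py)) >= t_obs:
            count += 1
    return count / n_perm
```

Because the dict-of-strata representation carries the rank labels explicitly, the same scheme extends directly to unbalanced RSS and to samples with different numbers of observations per stratum.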
3.
In disease screening and diagnosis, multiple markers are often measured and combined to improve diagnostic accuracy. McIntosh and Pepe [Combining several screening tests: optimality of the risk score, Biometrics 58 (2002), pp. 657–664] showed that the risk score, defined as the probability of disease conditional on multiple markers, is the optimal function for classification based on the Neyman–Pearson lemma. They proposed a two-step procedure to approximate the risk score. However, the resulting receiver operating characteristic (ROC) curve is defined only on a subrange (L, h) of false-positive rates in (0, 1), and determining the lower limit L requires extra prior information. In practice, most diagnostic tests are imperfect, and it is rare for a single marker to be uniformly better than the others. Using simulation, I show that multivariate adaptive regression splines (MARS) are a useful tool for approximating the risk score when combining multiple markers, especially when the ROC curves from multiple tests cross. The resulting ROC curve is defined on the whole range (0, 1), is easy to implement, and has an intuitive interpretation. Sample code for the application is given in the appendix.
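Once a combined risk score is in hand, the empirical ROC curve over the full (0, 1) range is straightforward to compute. A minimal sketch, with the score itself left abstract (any combination rule, MARS-based or otherwise, can supply it); tie handling among equal scores is deliberately ignored here:

```python
def roc_points(scores, labels):
    """Empirical ROC: (FPR, TPR) points swept over all thresholds.

    scores: higher means more disease-like; labels: 1 = diseased, 0 = healthy.
    Ties in scores are not specially handled in this simple version.
    """
    pairs = sorted(zip(scores, labels), reverse=True)
    p = sum(labels)
    n = len(labels) - p
    tp = fp = 0
    pts = [(0.0, 0.0)]
    for _, y in pairs:
        if y:
            tp += 1
        else:
            fp += 1
        pts.append((fp / n, tp / p))
    return pts

def auc(pts):
    """Trapezoidal area under an ROC curve given as (FPR, TPR) points."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
```

For instance, feeding in a combined score such as `m1 + m2` for two markers (an illustrative combination, not the risk score of the paper) yields a curve defined from FPR 0 to 1 with no undefined subrange.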
4.
We present a surprising though obvious result that seems to have gone unnoticed until now. In particular, we demonstrate the equivalence of two well-known problems: the optimal allocation of a fixed overall sample size n among L strata under stratified random sampling, and the optimal allocation of the H = 435 seats among the 50 states in the apportionment of the U.S. House of Representatives following each decennial census. Despite the strong similarity evident in the statements of the two problems, they have not previously been linked, and they have well-known but different solutions; one solution (Neyman allocation) is not explicitly exact, while the other (equal proportions) is. We give explicit exact solutions for both and note that the solutions are equivalent. In fact, we conclude by showing that both problems are special cases of a general problem. The result is significant for stratified random sampling because it shows explicitly how to minimize sampling error when estimating a total T_Y while keeping the final overall sample size fixed at n; this is usually not the case in practice with Neyman allocation, where the final overall sample size after rounding may be close to n + L. An example shows that controlled rounding of the Neyman allocation does not always yield the optimum allocation, that is, an allocation that minimizes variance.
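The equal-proportions (Huntington–Hill) method is a simple priority scheme, and the analogy can be made concrete by running it with stratum weights in place of state populations. A sketch of that analogy, assuming the weight for stratum h plays the role of N_h·S_h from Neyman allocation (the exact correspondence proved in the paper may differ in detail):

```python
import heapq
import math

def equal_proportions_allocation(weights, n):
    """Allocate n units among strata by the equal-proportions
    (Huntington-Hill) method used for U.S. House apportionment.

    weights[h] plays the role of N_h * S_h in Neyman allocation (or of a
    state's population in apportionment).  Every stratum gets one unit up
    front; each remaining unit goes to the stratum with the highest
    priority weight / sqrt(a * (a + 1)), where a is its current count.
    The total is exactly n by construction -- no rounding overshoot.
    """
    L = len(weights)
    assert n >= L, "need at least one unit per stratum"
    alloc = [1] * L
    # Min-heap over negated priorities simulates a max-heap.
    heap = [(-w / math.sqrt(2), h) for h, w in enumerate(weights)]
    heapq.heapify(heap)
    for _ in range(n - L):
        _, h = heapq.heappop(heap)
        alloc[h] += 1
        a = alloc[h]
        heapq.heappush(heap, (-weights[h] / math.sqrt(a * (a + 1)), h))
    return alloc
```

The key contrast with rounded Neyman allocation is visible in the loop: units are handed out one at a time until exactly n are placed, so the final overall sample size can never drift toward n + L.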
5.
There are a number of situations in which the observed experimental data are record statistics. In this paper, optimal confidence intervals as well as uniformly most powerful (UMP) tests for one-sided alternatives are developed. Since a UMP test for a two-sided alternative does not exist, generalized likelihood ratio tests and uniformly unbiased and invariant tests are derived for the two parameters of the exponential distribution based on record data. For illustrative purposes, a data set on the times between consecutive telephone calls to a company's switchboard is analysed using the proposed procedures. Finally, some open problems in this direction are pointed out.
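For the exponential case, exact interval inference from record data is unusually clean: by memorylessness, the spacings between successive upper records from Exp(mean θ) are again Exp(θ), so the m-th record R_m satisfies R_m/θ ~ Gamma(m, 1). The sketch below illustrates that pivotal interval (it is not the paper's UMP construction), using Monte Carlo in place of chi-square tables so it stays stdlib-only:

```python
import random

def gamma_quantiles_mc(shape, probs, n_sim=100000, seed=3):
    """Monte Carlo quantiles of Gamma(shape, 1), simulated as sums of
    Exp(1) draws (a stdlib stand-in for chi-square tables, since
    chi2_{2m} / 2 ~ Gamma(m, 1))."""
    rng = random.Random(seed)
    draws = sorted(sum(rng.expovariate(1.0) for _ in range(shape))
                   for _ in range(n_sim))
    return [draws[int(p * n_sim)] for p in probs]

def exp_record_ci(records, alpha=0.10):
    """MLE and exact CI for the exponential mean theta from upper records.

    R_m / theta ~ Gamma(m, 1) is pivotal, giving theta_hat = R_m / m and
    the interval [R_m / g_{1-alpha/2}, R_m / g_{alpha/2}], where g_p is
    the Gamma(m, 1) p-quantile.
    """
    m = len(records)
    r_m = records[-1]
    lo_q, hi_q = gamma_quantiles_mc(m, [alpha / 2, 1 - alpha / 2])
    return r_m / m, (r_m / hi_q, r_m / lo_q)
```

Note that only the last record R_m enters the interval; the earlier records carry no additional information about θ in this model, which is what makes the exact analysis tractable.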
6.
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data‐rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function‐valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized controlled trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced‐form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post‐regularization and post‐selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced‐form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment‐condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions.
We provide results on honest inference for (function‐valued) parameters within this general framework where any high‐quality, machine learning methods (e.g., boosted trees, deep neural networks, random forest, and their aggregated and hybrid versions) can be used to learn the nonparametric/high‐dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity‐based estimation of regression functions for function‐valued outcomes.
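The orthogonalization idea at the heart of this framework can be sketched in miniature. The sketch below is a stdlib-only illustration of cross-fitted partialling out for a partially linear model y = θ·d + g(x) + ε, with plain OLS on a single covariate standing in for the lasso/boosting/neural-net learners the paper actually allows; function names and the simulation design are illustrative assumptions.

```python
def simple_ols(x, y):
    """Intercept and slope of y ~ a + b*x by least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def cross_fit_partial_out(x, d, y):
    """Cross-fitted partialling-out estimate of theta in y = theta*d + g(x) + e.

    Nuisance fits E[y|x] and E[d|x] are learned on one fold and used to
    residualize the other fold (the orthogonal-moment step); the final
    estimate regresses y-residuals on d-residuals.  Here the nuisance
    learner is plain OLS, standing in for a regularized ML method.
    """
    n = len(x)
    half = n // 2
    folds = [(range(0, half), range(half, n)),
             (range(half, n), range(0, half))]
    num = den = 0.0
    for train, test in folds:
        xt = [x[i] for i in train]
        ay, by = simple_ols(xt, [y[i] for i in train])
        ad, bd = simple_ols(xt, [d[i] for i in train])
        for i in test:
            yres = y[i] - (ay + by * x[i])   # partial x out of y
            dres = d[i] - (ad + bd * x[i])   # partial x out of d
            num += yres * dres
            den += dres * dres
    return num / den
```

Because the moment condition is orthogonal to the nuisance fits, first-stage estimation error enters the estimate of θ only at second order, which is what makes honest inference possible after regularization or selection.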
7.
In this article, we consider some nonparametric goodness-of-fit tests for right-censored samples, viz., the modified Kolmogorov, Cramér–von Mises–Smirnov, Anderson–Darling, and Nikulin–Rao–Robson χ² tests. We also consider an approach based on transforming the original censored sample into a complete one and then applying classical goodness-of-fit tests to the pseudo-complete sample. We then compare the power of these tests for Type II censored data, along with the power of the Neyman–Pearson test, and draw some conclusions. Finally, we present an illustrative example.
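A generic Kolmogorov-type statistic for Type II censored data restricts the sup-distance to the r observed order statistics, and its null distribution depends only on uniform order statistics, so it can be calibrated by simulation. The sketch below is a Monte Carlo stand-in for the modified tables discussed in the article, not their exact procedure:

```python
import random

def ks_type2(observed, n, cdf):
    """Kolmogorov-type statistic from the r smallest order statistics of a
    sample of size n (Type II censoring): the largest gap between the
    full-sample empirical step function and the hypothesized cdf over the
    observed portion of the data."""
    d = 0.0
    for i, x in enumerate(sorted(observed), start=1):
        u = cdf(x)
        d = max(d, abs(i / n - u), abs((i - 1) / n - u))
    return d

def ks_type2_pvalue(observed, n, cdf, n_mc=2000, seed=11):
    """Monte Carlo p-value: under H0 the statistic depends only on the
    uniform order statistics U_(1) <= ... <= U_(r), so the null law is
    simulated directly from uniforms."""
    rng = random.Random(seed)
    r = len(observed)
    d_obs = ks_type2(observed, n, cdf)
    count = 0
    for _ in range(n_mc):
        u = sorted(rng.random() for _ in range(n))[:r]
        if ks_type2(u, n, lambda t: t) >= d_obs:  # uniforms: cdf(u) = u
            count += 1
    return count / n_mc
```

The same simulation scaffold accepts any continuous hypothesized cdf, which is why tests of this type are convenient benchmarks when comparing power across censoring schemes.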
8.
The problems of assessing, comparing, and combining probability forecasts for a binary event sequence are considered. A Gaussian threshold model, available in closed form, is introduced that allows generation of different probability forecast sequences valid for the same events. Chi-squared-type test statistics and a marginal-conditional method are proposed for the assessment problem, and an asymptotic normality result is given. A graphical method is developed for the comparison problem, based on decomposing arbitrary proper scoring rules into certain elementary scoring functions. The special role of the logarithmic scoring rule is examined in the context of Neyman–Pearson theory.
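Two standard proper scoring rules make the comparison problem concrete. The decomposition into elementary scores that drives the graphical method is not reproduced here; this is just the aggregate Brier and logarithmic scores for a forecast sequence against a binary event sequence:

```python
import math

def brier_score(probs, outcomes):
    """Mean squared error of probability forecasts for binary outcomes
    (the Brier score, a proper scoring rule); lower is better."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

def log_score(probs, outcomes):
    """Mean negative log-likelihood (the logarithmic scoring rule);
    lower is better.  Requires 0 < p < 1 for every forecast."""
    return -sum(math.log(p if y else 1 - p)
                for p, y in zip(probs, outcomes)) / len(probs)
```

Both rules are proper: a forecaster minimizes the expected score by reporting honest probabilities, which is what makes score differences between two forecast sequences for the same events a fair basis for comparison.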
9.
In this paper we consider the Neyman accuracy and the Wolfowitz accuracy of the Stein-type improved confidence interval Î_S for the disturbance variance in a linear regression model. The Neyman accuracy is a measure related to the unbiasedness of a confidence interval, and the Wolfowitz accuracy is related to the closeness of the endpoints to the true parameter. We show that Î_S is not unbiased and give some numerical results for the Neyman accuracy. As for the Wolfowitz accuracy, we derive a sufficient condition for Î_S to improve on the usual confidence interval under this criterion and show numerically that a large degree of improvement can be obtained.
10.
This paper examines both theoretically and empirically whether the common practice of using OLS multivariate regression models to estimate average treatment effects (ATEs) under experimental designs is justified by the Neyman model for causal inference. Using data from eight large U.S. social policy experiments, the paper finds that estimated standard errors and significance levels for ATE estimators are similar under the OLS and Neyman models when baseline covariates are included in the models, even though theory suggests that this may not have been the case. This occurs primarily because treatment effects do not appear to vary substantially across study subjects.
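The two estimators being compared can be sketched side by side. The sketch below is illustrative, not the paper's analysis: it pairs the difference-in-means ATE with its Neyman variance estimate, and a per-arm regression adjustment on a single baseline covariate standing in for the OLS multivariate models.

```python
import statistics

def neyman_ate(y1, y0):
    """Difference-in-means ATE and its Neyman variance estimate
    s1^2/n1 + s0^2/n0 (conservative under heterogeneous effects)."""
    ate = statistics.mean(y1) - statistics.mean(y0)
    var = (statistics.variance(y1) / len(y1)
           + statistics.variance(y0) / len(y0))
    return ate, var

def adjusted_ate(y1, x1, y0, x0):
    """Covariate-adjusted ATE: fit y ~ x separately within each arm,
    impute both potential outcomes for every unit, and average the
    difference (a per-arm regression adjustment)."""
    def fit(x, y):
        mx, my = statistics.mean(x), statistics.mean(y)
        b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
        return lambda t: my + b * (t - mx)

    f1, f0 = fit(x1, y1), fit(x0, y0)
    xs = x1 + x0
    return statistics.mean(f1(t) - f0(t) for t in xs)
```

When treatment effects are roughly constant across subjects, as the paper finds in its eight experiments, both estimators target the same ATE and their standard errors end up close, which is the empirical pattern the abstract describes.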
Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号