1.
Generally, the semiclosed-form option pricing formula for complex financial models depends on unobservable factors such as stochastic volatility and jump intensity. A popular practice is to use an estimate of these latent factors to compute the option price. However, in many situations this plug-and-play approximation does not yield the appropriate price. This article examines this bias and quantifies its impacts. We decompose the bias into terms that are related to the bias on the unobservable factors and to the precision of their point estimators. The approximated price is found to be highly biased when only the history of the stock price is used to recover the latent states. This bias is corrected when option prices are added to the sample used to recover the states' best estimate. We also show numerically that such a bias propagates to calibrated parameters, leading to erroneous values. The Canadian Journal of Statistics 48: 8–35; 2020 © 2019 Statistical Society of Canada
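The sketch below is only an illustration of why a plug-in price can be biased, not the article's model: it prices a plain Black-Scholes call (a stand-in for a semiclosed-form formula) either at the point estimate of the volatility or by averaging the price over a hypothetical filtering distribution of that latent volatility. Because the price is nonlinear in the volatility (especially out of the money), the two differ.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price (stand-in for a semiclosed-form formula)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(0)
S, K, T, r = 100.0, 120.0, 0.5, 0.02   # out-of-the-money strike: price is markedly convex in sigma

# Hypothetical filtering distribution of the latent volatility: its mean is the
# point estimate, but the filter leaves non-negligible uncertainty around it.
sigma_draws = rng.lognormal(mean=np.log(0.2), sigma=0.4, size=100_000)
sigma_hat = sigma_draws.mean()

plug_in = bs_call(S, K, T, r, sigma_hat)               # price at the point estimate
integrated = bs_call(S, K, T, r, sigma_draws).mean()   # price averaged over the filter

print(f"plug-in price    : {plug_in:.4f}")
print(f"integrated price : {integrated:.4f}")
print(f"plug-in bias     : {plug_in - integrated:+.4f}")
```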
2.
This paper describes the main factors affecting the performance of a digital beamforming (DBF) system, studies the effect of mutual coupling between array elements on the sidelobes and null depth of the adaptive pattern together with methods for correcting it, and discusses techniques for correcting receive-channel amplitude and phase errors and I/Q quadrature errors in a DBF array. Computer simulations and measurements show that calibration by the described methods yields satisfactory results. In addition, to reduce the quadrature error introduced by the I/Q branches, a receiver scheme based on direct intermediate-frequency sampling and digitization is recommended.
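As a rough illustration of receive-channel amplitude/phase calibration (one common approach, not necessarily the scheme used in the paper), the sketch below assumes a known calibration tone can be injected into every channel; each channel's complex gain error is estimated by correlating its output with the tone and then normalised to a reference channel.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch, n_samp = 4, 4096
t = np.arange(n_samp)

# Known calibration tone assumed to be injected into every receive channel.
tone = np.exp(1j * 2 * np.pi * 0.07 * t)

# Each channel applies an unknown complex gain (amplitude and phase error) plus noise.
true_gain = (1 + 0.2 * rng.standard_normal(n_ch)) * np.exp(1j * rng.uniform(-0.5, 0.5, n_ch))
noise = 0.05 * (rng.standard_normal((n_ch, n_samp)) + 1j * rng.standard_normal((n_ch, n_samp)))
received = true_gain[:, None] * tone + noise

# Correlate each channel against the tone to estimate its complex gain,
# then normalise all channels to channel 0.
gain_est = received @ tone.conj() / n_samp
correction = gain_est[0] / gain_est

# Check against the simulated ground truth: after correction every channel should
# have (almost) the same effective response as channel 0.
effective = correction * true_gain
mismatch = effective / effective[0]
print("amplitude mismatch after calibration  :", np.round(np.abs(mismatch), 3))
print("phase mismatch after calibration (rad):", np.round(np.angle(mismatch), 3))
```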
3.
MODEL-ASSISTED HIGHER-ORDER CALIBRATION OF ESTIMATORS OF VARIANCE
In survey sampling, interest often centres on inference for the population total using information about an auxiliary variable. The variance of the estimator used plays a key role in such inference. This study develops a new set of higher-order constraints for the calibration of estimators of variance for various estimators of the population total. The proposed strategy requires an appropriate model for describing the relationship between the response and the auxiliary variable, and the variance of the auxiliary variable; it is therefore referred to as a model-assisted approach. Several estimators of variance, including the higher-order calibration estimators of the variance of the ratio and regression estimators suggested by Singh, Horn & Yu and by Sitter & Wu, are special cases of the proposed technique. The paper presents and discusses the results of an empirical study comparing the performance of the proposed estimators with existing counterparts.
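For readers unfamiliar with calibration in this sense, here is a minimal first-order sketch (not the paper's higher-order construction): design weights are adjusted, by minimising a chi-square distance, so that the weighted auxiliary total reproduces the known population total, and the calibrated weights are then applied to the study variable. The population and sample are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Artificial population with an auxiliary variable x roughly proportional to y.
N = 10_000
x = rng.gamma(shape=2.0, scale=50.0, size=N)
y = 3.0 * x + rng.normal(0.0, 40.0, size=N)
X_total = x.sum()                       # known population auxiliary total

# Simple random sample without replacement; design weights d_i = N/n.
n = 400
idx = rng.choice(N, size=n, replace=False)
xs, ys, d = x[idx], y[idx], np.full(n, N / n)

# Chi-square-distance calibration: adjust the design weights so that the
# weighted auxiliary total matches X_total exactly.
w = d + d * xs * (X_total - np.sum(d * xs)) / np.sum(d * xs**2)

print(f"true total       : {y.sum():.0f}")
print(f"HT estimate      : {np.sum(d * ys):.0f}")
print(f"calibrated total : {np.sum(w * ys):.0f}")
print(f"auxiliary check  : {np.sum(w * xs):.0f} vs {X_total:.0f}")
```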
4.
Standard algorithms for the construction of iterated bootstrap confidence intervals are computationally very demanding, requiring nested levels of bootstrap resampling. We propose an alternative approach to constructing double bootstrap confidence intervals that involves replacing the inner level of resampling by an analytical approximation. This approximation is based on saddlepoint methods and a tail probability approximation of DiCiccio and Martin (1991). Our technique significantly reduces the computational expense of iterated bootstrap calculations. A formal algorithm for the construction of our approximate iterated bootstrap confidence intervals is presented, and some crucial practical issues arising in its implementation are discussed. Our procedure is illustrated in the case of constructing confidence intervals for ratios of means using both real and simulated data. We repeat an experiment of Schenker (1985) involving the construction of bootstrap confidence intervals for a variance and demonstrate that our technique makes feasible the construction of accurate bootstrap confidence intervals in that context. Finally, we investigate the use of our technique in a more complex setting, that of constructing confidence intervals for a correlation coefficient.
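To make the computational burden concrete, the sketch below runs the naive, fully nested double bootstrap for a ratio of means and forms a calibrated percentile interval; it is the inner resampling loop that the article replaces with a saddlepoint-based analytical approximation. The data and bootstrap sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(2.0, 30)    # illustrative numerator sample
y = rng.exponential(1.0, 30)    # illustrative denominator sample

def ratio(a, b):
    return a.mean() / b.mean()

theta_hat = ratio(x, y)
B1, B2, alpha = 500, 100, 0.05

outer = np.empty(B1)
u = np.empty(B1)                # inner-bootstrap position of theta_hat in each outer resample
for b in range(B1):
    xb = rng.choice(x, x.size, replace=True)
    yb = rng.choice(y, y.size, replace=True)
    outer[b] = ratio(xb, yb)
    # This inner loop is exactly what the paper replaces by an analytical approximation.
    inner = np.array([ratio(rng.choice(xb, xb.size, replace=True),
                            rng.choice(yb, yb.size, replace=True)) for _ in range(B2)])
    u[b] = np.mean(inner <= theta_hat)

# Ordinary percentile interval vs. the double-bootstrap-calibrated one: the
# nominal quantile levels are adjusted using the distribution of u.
naive = np.quantile(outer, [alpha / 2, 1 - alpha / 2])
lo_level, hi_level = np.quantile(u, [alpha / 2, 1 - alpha / 2])
calibrated = np.quantile(outer, [lo_level, hi_level])

print("percentile CI        :", np.round(naive, 3))
print("double-bootstrap CI  :", np.round(calibrated, 3))
print("resampling operations:", B1 * (B2 + 1))
```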
5.
Inverse probability weighting (IPW) and multiple imputation are two widely adopted approaches for dealing with missing data. The former models the selection probability, and the latter models the data distribution. Consistent estimation requires correct specification of the corresponding models. Although the augmented IPW method provides an extra layer of protection on consistency, it is usually not sufficient in practice as the true data-generating process is unknown. This paper proposes a method combining the two approaches in the same spirit as calibration in the sampling survey literature. Multiple models for both the selection probability and the data distribution can be accounted for simultaneously, and the resulting estimator is consistent if any model is correctly specified. The proposed method is within the framework of estimating equations and is general enough to cover regression analysis with missing outcomes and/or missing covariates. Results from both theoretical and numerical investigations are provided.
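As background for the augmentation idea the paper builds on, the sketch below computes the doubly robust augmented IPW estimator of a mean with outcomes missing at random, using one working model for the selection probability and one for the outcome; the paper's multiply robust estimator generalises this to several candidate models of each kind. The data-generating mechanism and model choices here are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 5000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)          # full outcome (true mean = 1.0)

# Outcomes are missing at random, with observation probability depending on x.
p_obs = 1 / (1 + np.exp(-(0.5 + x)))
r = rng.binomial(1, p_obs)                       # r = 1 if y is observed

# Working model 1: selection probability (propensity of being observed).
ps = LogisticRegression(C=1e6).fit(x.reshape(-1, 1), r).predict_proba(x.reshape(-1, 1))[:, 1]

# Working model 2: outcome regression fitted on the observed cases only.
X_obs = np.column_stack([np.ones(r.sum()), x[r == 1]])
beta = np.linalg.lstsq(X_obs, y[r == 1], rcond=None)[0]
m = np.column_stack([np.ones(n), x]) @ beta      # fitted outcomes for everyone

# Augmented IPW estimator of E[Y]: consistent if either working model is correct.
mu_cc   = y[r == 1].mean()                       # naive complete-case mean
mu_ipw  = np.mean(r * y / ps)
mu_aipw = np.mean(r * y / ps - (r - ps) / ps * m)

print(f"complete-case mean: {mu_cc:.3f}")
print(f"IPW estimate      : {mu_ipw:.3f}")
print(f"AIPW estimate     : {mu_aipw:.3f}  (true mean = 1.0)")
```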
6.
Family-based case-control designs are commonly used in epidemiological studies for evaluating the role of genetic susceptibility and environmental exposure to risk factors in the etiology of rare diseases. Within this framework, it is often reasonable to assume that genetic susceptibility and environmental exposure are conditionally independent of each other within families in the source population. We focus on this setting to explore the situation in which measurement error affects the assessment of the environmental exposure. We correct for measurement error through a likelihood-based method. We exploit a conditional likelihood approach to relate the probability of disease to the genetic and environmental risk factors. We show that this approach provides less biased and more efficient results than those based on logistic regression. Regression calibration, instead, provides severely biased estimators of the parameters. The comparison of the correction methods is performed through simulation, under common measurement error structures.
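For context only, the sketch below shows what regression calibration does in an ordinary prospective logistic model with classical measurement error and replicate measurements: the mismeasured exposure is replaced by its estimated conditional expectation given the surrogate before fitting. The abstract's point is that in the family-based conditional-likelihood setting this simple fix is severely biased; the simulation here, with arbitrary parameter values, is just a generic illustration of the method being compared.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 20_000
x = rng.normal(0.0, 1.0, n)                  # true exposure
w1 = x + rng.normal(0.0, 0.8, n)             # two error-prone replicate measurements
w2 = x + rng.normal(0.0, 0.8, n)
wbar = (w1 + w2) / 2

beta = 0.7                                    # true log odds ratio for the exposure
p = 1 / (1 + np.exp(-(-2.0 + beta * x)))
disease = rng.binomial(1, p)

def logit_slope(z, d):
    """Unpenalised (approximately) logistic slope of d on a single covariate z."""
    return LogisticRegression(C=1e6).fit(z.reshape(-1, 1), d).coef_[0, 0]

# Regression calibration: replace the surrogate wbar by an estimate of E[X | wbar],
# using the replicates to estimate the measurement-error variance.
var_u_bar = np.mean((w1 - w2) ** 2) / 4       # error variance of the averaged replicates
lam = (np.var(wbar) - var_u_bar) / np.var(wbar)
x_rc = wbar.mean() + lam * (wbar - wbar.mean())

print(f"true exposure       : {logit_slope(x, disease):.3f}   (target {beta})")
print(f"naive (mismeasured) : {logit_slope(wbar, disease):.3f}")
print(f"regression calib.   : {logit_slope(x_rc, disease):.3f}")
```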
7.
This study develops, and puts forward for discussion, a concept for cumulating periodic household budget surveys within the project Official Statistics and Socio-Economic Questions. We lay out the theoretical foundations and solve the central task of a structural demographic weighting/calibration using an information-theoretic approach. Building on the household budget surveys of the Federal Statistical Office (the periodic household budget surveys and the Income and Consumption Sample (EVS)), a practical concept is proposed for cumulating yearly household surveys. This allows cross-sections to be cumulated into a comprehensive pooled sample for deeply disaggregated analyses. A follow-up study is to evaluate the concept through simulation.
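The structural demographic weighting described above is a calibration of survey weights to known benchmarks. The sketch below shows raking (iterative proportional fitting), the classical minimum-cross-entropy calibration to marginal totals, applied to a pooled sample with two hypothetical demographic variables; the project's actual weighting scheme is richer, so treat this only as a minimal sketch of the idea.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000

# Pooled (cumulated) sample from several survey waves, with two hypothetical
# demographic classification variables and unit starting weights.
sex = rng.integers(0, 2, n)            # two sex categories
agegrp = rng.integers(0, 4, n)         # four age groups
w = np.ones(n)

# Known population margins the cumulated sample must reproduce (assumed benchmarks).
pop_sex = np.array([5100.0, 4900.0])
pop_age = np.array([2000.0, 3000.0, 3000.0, 2000.0])

# Raking: alternately rescale the weights so each set of margins is matched;
# this is the minimum-cross-entropy calibration to the two margins.
for _ in range(50):
    for cats, margins in ((sex, pop_sex), (agegrp, pop_age)):
        totals = np.bincount(cats, weights=w, minlength=len(margins))
        w *= (margins / totals)[cats]

print("sex margins:", np.round(np.bincount(sex, weights=w), 1), "target", pop_sex)
print("age margins:", np.round(np.bincount(agegrp, weights=w), 1), "target", pop_age)
```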
8.
The problem of estimating the sample size for a phase III trial on the basis of existing phase II data is considered, where data from phase II cannot be combined with those of the new phase III trial. Focus is on the test for comparing the means of two independent samples. A launching criterion is adopted in order to evaluate the relevance of phase II results: phase III is run if the effect size estimate is higher than a threshold of clinical importance. The variability in sample size estimation is taken into consideration. Then, frequentist conservative strategies with a fixed amount of conservativeness are compared with Bayesian strategies. A new conservative strategy is introduced, based on the calibration of the optimal amount of conservativeness: the calibrated optimal strategy (COS). To evaluate the results we compute the Overall Power (OP) of the different strategies, as well as the mean and the MSE of the sample size estimators. Bayesian strategies have poor characteristics since they show a very high mean and/or MSE of the sample size estimators. COS clearly performs better than the other conservative strategies. Indeed, the OP of COS is, on average, the closest to the desired level; it is also the highest. The COS sample size is also the closest to the ideal phase III sample size MI, showing averages and MSEs lower than those of the other strategies. Costs and experimental times are therefore considerably reduced and standardized. However, if the ideal sample size MI is to be estimated, the phase II sample size n should be around the ideal phase III sample size, i.e. n ≈ 2MI/3. Copyright © 2010 John Wiley & Sons, Ltd.
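A minimal sketch of the launching-plus-conservative-sizing idea is given below; it is not the paper's COS procedure. The phase II summary, launching threshold, and the fixed conservativeness level gamma are all illustrative values, and the sample-size formula is the standard normal-approximation one for a two-sample comparison of means with a standardised effect size.

```python
import numpy as np
from scipy.stats import norm

alpha, power = 0.025, 0.90                 # one-sided level and target power
z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)

def n_per_arm(delta):
    """Normal-approximation phase III sample size per arm for standardised effect delta."""
    return int(np.ceil(2 * (z_a + z_b) ** 2 / delta ** 2))

# Suppose phase II (n2 subjects per arm) produced this standardised effect-size estimate.
n2, d_hat = 100, 0.35
se_d = np.sqrt(2 / n2)                     # rough standard error of the estimate
launch_threshold = 0.2                     # launching criterion: minimum clinically relevant effect

if d_hat > launch_threshold:
    # A conservative strategy shrinks the estimate to a lower confidence bound at level
    # gamma before computing n; the paper's COS calibrates gamma, whereas the fixed
    # gamma = 0.8 used here is an arbitrary amount of conservativeness for illustration.
    gamma = 0.8
    d_cons = max(d_hat - norm.ppf(gamma) * se_d, launch_threshold)  # guard against tiny or negative values
    print(f"plug-in phase III n per arm     : {n_per_arm(d_hat)}")
    print(f"conservative phase III n per arm: {n_per_arm(d_cons)}")
else:
    print("launching criterion not met: phase III is not run")
```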
9.
In the estimation of a population mean or total from a random sample, certain methods based on linear models are known to be automatically design consistent, regardless of how well the underlying model describes the population. A sufficient condition is identified for this type of robustness to model failure; the condition, which we call 'internal bias calibration', relates to the combination of a model and the method used to fit it. Included among the internally bias-calibrated models, in addition to the aforementioned linear models, are certain canonical link generalized linear models and nonparametric regressions constructed from them by a particular style of local likelihood fitting. Other models can often be made robust by using a suboptimal fitting method. Thus the class of model-based, but design consistent, analyses is enlarged to include more realistic models for certain types of survey variable such as binary indicators and counts. Particular applications discussed are the estimation of the size of a population subdomain, as arises in tax auditing for example, and the estimation of a bootstrap tail probability.
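The sketch below illustrates the internal-bias-calibration idea on a simulated population with a binary survey variable: a canonical-link (logistic) working model with an intercept is fitted to an equal-probability sample, and because the fit forces the sample residuals to sum to (essentially) zero, the purely model-based total needs no design-weighted correction even though the model is deliberately misspecified. The population, design, and model are illustrative choices, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)

# Artificial population: binary y (e.g. a subdomain indicator) related to a known
# auxiliary x through a curve the working logistic model will not capture exactly.
N = 20_000
x = rng.normal(size=N)
p = 1 / (1 + np.exp(-(-1.0 + 1.5 * x + 0.8 * np.sin(3 * x))))
y = rng.binomial(1, p)

# Simple random sample; equal design weights d = N/n.
n = 600
s = rng.choice(N, size=n, replace=False)
d = N / n

# Canonical-link working model with an intercept, fitted on the sample and then
# predicted over the whole population.
fit = LogisticRegression(C=1e6).fit(x[s].reshape(-1, 1), y[s])
m = fit.predict_proba(x.reshape(-1, 1))[:, 1]

# Model-assisted estimator: population sum of fitted values plus a design-weighted
# residual correction. The canonical-link ML fit makes the correction (nearly) zero,
# so the model-based total alone is already design consistent.
correction = d * (y[s] - m[s]).sum()
t_ht = d * y[s].sum()

print(f"true total          : {y.sum()}")
print(f"Horvitz-Thompson    : {t_ht:.0f}")
print(f"model-based total   : {m.sum():.0f}")
print(f"residual correction : {correction:.2f}")
```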
10.
Amplitude and phase mismatch across array channels severely degrades direction-finding performance. Based on the theory of correlation calibration with an auxiliary source, this paper studies how array-channel mismatch can be corrected by injecting an auxiliary signal at the antenna feed ports and correlating the received signals. It also analyses the direction-finding principle of a multi-baseline digital interferometer and estimates the angle of arrival of a ground target using a five-element circular antenna array. The DSP implementation of all the algorithms is discussed, and computer simulations confirm the effectiveness of the method.
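As a rough companion to this abstract (channel calibration itself is sketched after item 2), the code below simulates a five-element uniform circular array with already-calibrated channels and estimates a ground target's azimuth by correlating the sample covariance with steering vectors over a fine grid; this correlative search is only a simple stand-in for the multi-baseline digital interferometer, and the array radius, angles, and noise levels are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(9)

# Five-element uniform circular array (radius in wavelengths) and an azimuth grid.
M, radius = 5, 0.6
ant = 2 * np.pi * np.arange(M) / M                 # element angular positions
grid = np.deg2rad(np.arange(0.0, 360.0, 0.1))

def steering(az):
    """In-plane array response of the circular array for a source at azimuth az."""
    return np.exp(1j * 2 * np.pi * radius * np.cos(az - ant))

# Simulated calibrated snapshots from a ground target at 63 degrees azimuth.
true_az = np.deg2rad(63.0)
n_snap = 200
sig = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
noise = 0.05 * (rng.standard_normal((M, n_snap)) + 1j * rng.standard_normal((M, n_snap)))
snap = steering(true_az)[:, None] * sig + noise

# Correlate the sample covariance against steering vectors over the grid and take
# the azimuth with the strongest match.
R = snap @ snap.conj().T / n_snap
A = np.stack([steering(a) for a in grid])
power = np.real(np.einsum('gm,mn,gn->g', A.conj(), R, A))
print(f"estimated azimuth: {np.rad2deg(grid[power.argmax()]):.1f} deg (true 63.0)")
```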