21.
The use of surrogate endpoints, which is now becoming widely popular, was introduced in medical research to reduce the experiment time required for drug approval. By cutting costs and saving time, surrogate endpoints can bring profit to medicine producers. We obtain an expression for the proportional reduction in true-endpoint samples, at the expense of surrogates, needed to achieve a fixed power when comparing two treatments. We present our discussion in the two-treatment setup, with the odds ratio as the measure of treatment difference, and illustrate the methodology with a real dataset.
22.

Purpose

To prevent the recurrence of child maltreatment, actuarial risk assessment can help child protective services (CPS) workers make more accurate and consistent decisions. However, few published articles describe construction methodologies and performance criteria for evaluating how well actuarial risk assessments perform in CPS. This article describes a methodology to construct and revise an actuarial risk assessment, reviews criteria to evaluate the performance of actuarial tools, and applies the methodology and performance criteria in one state.

Methods

The sample included 6832 families who were followed for two years to determine whether they were re-reported and re-substantiated for maltreatment.

Results

Both the adopted and the revised tools had adequate separation and good predictive accuracy for all families and for the state's three largest ethnic/racial groups (White, Latino, and African American). The adopted tool classified relatively few families in the low-risk category; the revised tool distributed families across risk categories.

Conclusions

The revised tool classified more families as low-risk, allowing CPS to allocate more resources to higher-risk families, but at the cost of more false negatives.
23.
This paper deals with the analysis of the proportional rates model for recurrent event data when covariates are subject to missingness. The true covariate is measured only on a randomly chosen validation set, whereas auxiliary information is available for all cohort subjects. To further utilize the auxiliary information and improve study efficiency, we propose an estimated estimating equation for the regression parameters. The resulting estimators are shown to be consistent and asymptotically normal. Both graphical and numerical techniques for checking the adequacy of the model are presented. Simulations are conducted to evaluate the finite-sample performance of the proposed estimators, and an illustration with a real medical study is provided.
24.
Cross-validation as a means of choosing the smoothing parameter in spline regression has achieved wide popularity. Its appeal lies in being an automatic method based on an attractive criterion, and, along with many other methods, it has been shown to minimize predictive mean squared error asymptotically. In practice, however, there may be a substantial proportion of applications where a cross-validation-style choice leads to drastic undersmoothing, often as far as interpolation. Furthermore, because the criterion is so appealing, the user may be misled by an inappropriate, automatically chosen value. In this paper we investigate the nature of cross-validatory methods in spline smoothing regression and suggest variants that provide small-sample protection against undersmoothing.
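The undersmoothing risk described in this abstract can be reproduced with any smoother whose tuning parameter is chosen by leave-one-out cross-validation. As a minimal sketch (using a Nadaraya–Watson kernel smoother as a stand-in for the paper's spline setting; all function names are ours, not the paper's):

```python
import numpy as np

def nw_smooth(x_tr, y_tr, x0, h):
    """Nadaraya-Watson estimate at x0 with a Gaussian kernel, bandwidth h."""
    w = np.exp(-0.5 * ((x0 - x_tr) / h) ** 2)
    return np.sum(w * y_tr) / np.sum(w)

def loo_cv(x, y, h):
    """Leave-one-out cross-validation score for bandwidth h."""
    errs = [(y[i] - nw_smooth(np.delete(x, i), np.delete(y, i), x[i], h)) ** 2
            for i in range(len(x))]
    return float(np.mean(errs))

def choose_bandwidth(x, y, grid):
    """Pick the grid value minimizing the LOO-CV score. Since CV can favor
    very small bandwidths (near-interpolation), one crude safeguard in the
    spirit of the paper is to restrict the search grid away from zero."""
    return min(grid, key=lambda h: loo_cv(x, y, h))
```

A very small bandwidth makes the fit chase the noise, which is the kernel-smoothing analogue of the interpolation-level undersmoothing the paper warns about.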
25.
This paper encompasses three parts of validating risk models. The first part provides an understanding of the precision of the standard statistics used to validate risk models at varying sample sizes. The second part investigates jackknifing as a method to obtain confidence intervals for the Gini coefficient and the K–S statistic in small samples. The third and final part investigates the odds at various cutoff points, assessing their efficiency and appropriateness relative to the K–S statistic and the Gini coefficient in model validation. There are many parts to understanding the risk associated with the extension of credit; this paper focuses on better understanding present methodology for validating existing credit-scoring risk models by investigating the three parts mentioned. The empirical investigation shows that the precision of the Gini coefficient and K–S statistic is driven by the size of the smaller of the two groups, successes or failures. In addition, a simple adaptation of the standard jackknife formula can be used to gauge the variability of the Gini coefficient and K–S statistic. Finally, the odds are not a reliable statistic without a considerably large sample of both successes and failures.
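The jackknife idea in the second part can be sketched directly: delete one observation at a time, recompute the statistic, and combine the replicates into a standard error. A minimal Python illustration for the two-sample K–S statistic (a generic sketch, not the paper's adapted formula; function names are hypothetical):

```python
import numpy as np

def ks_statistic(scores, labels):
    """Two-sample K-S statistic between the score distributions of
    successes (label 1) and failures (label 0)."""
    s1 = np.sort(scores[labels == 1])
    s0 = np.sort(scores[labels == 0])
    grid = np.sort(scores)
    cdf1 = np.searchsorted(s1, grid, side="right") / len(s1)
    cdf0 = np.searchsorted(s0, grid, side="right") / len(s0)
    return float(np.max(np.abs(cdf1 - cdf0)))

def jackknife_se(scores, labels, stat):
    """Leave-one-out jackknife standard error of a validation statistic."""
    n = len(scores)
    reps = np.array([stat(np.delete(scores, i), np.delete(labels, i))
                     for i in range(n)])
    return float(np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2)))
```

The same `jackknife_se` wrapper applies unchanged to a Gini-coefficient function, since the jackknife only needs to be able to recompute the statistic on each leave-one-out sample.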
26.
As key components of Davis's technology acceptance model (TAM), the perceived usefulness and perceived ease-of-use instruments are widely accepted among the MIS research community as tools for evaluating information system applications and predicting usage. Despite this wide acceptance, a series of incremental cross-validation studies have produced conflicting and equivocal results that do not provide guidance for researchers or practitioners who might use the TAM for decision making. Using a sample of 902 "initial exposure" responses, this research conducts: (1) a confirmatory factor analysis to assess the validity and reliability of the original instruments proposed by Davis, and (2) a multigroup invariance analysis to assess the equivalence of these instruments across subgroups based on type of application, experience with computing, and gender. In contrast to the mixed results of prior cross-validation efforts, the results of this confirmatory study provide strong support for the validity and reliability of Davis's six-item perceived usefulness and six-item ease-of-use instruments. The multigroup invariance analysis suggests the usefulness and ease-of-use instruments have invariant true scores across most, but not all, subgroups. With notable exceptions for word processing applications and users with no prior computing experience, this research provides evidence that the item-factor loadings (true scores) are invariant across spreadsheet, database, and graphics applications. The implications of the results for managerial decision making are discussed.
27.
Time-series data are often subject to measurement error, usually as a result of having to estimate the variable of interest. In general, however, the relationship between the surrogate variables and the true variables can be more complicated than the classical additive error structure usually assumed. In this article, we address the estimation of parameters in autoregressive models in the presence of functional measurement errors. We first develop a parameter estimation method with the help of validation data; this method does not depend on the functional form or the distribution of the measurement error. The proposed estimator is proved to be consistent, and its asymptotic representation and asymptotic normality are also derived. Simulation results indicate that the proposed method works well in practical situations.
28.
The widespread use of regression analysis as a business forecasting tool and renewed interest in the use of cross-validation to aid in regression model selection make it essential that decision makers fully understand cross-validation methods in forecasting, along with their advantages and limitations. Only by fully understanding the process can managers accurately interpret the implications of statistical cross-validation results when judging the robustness of regression forecasting models. Through a multiple regression analysis of a large insurance company's customer database, the Herzberg equation for determining the criterion of validity [11], and analysis of samples of different sizes from the two regions covered by the database, we illustrate the use of statistical cross-validation and test a set of factors hypothesized to be related to the statistical accuracy of validation. We find that increasing sample size increases reliability. When the magnitude of population model differences is small, validation results are unreliable, and increasing sample size has little or no effect on reliability. In addition, the relative fit of the model for the derivation sample and the validation sample has an impact on validation accuracy and should be used as an indicator of when further analysis is warranted. Furthermore, we find that the probability distribution of the population independent variables has no effect on validation accuracy.
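The derivation-sample/validation-sample comparison underlying this kind of study can be sketched as follows. This is an empirical split-sample illustration of cross-validity shrinkage, not the Herzberg equation itself (which is an analytic adjustment); function names are hypothetical:

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares with an intercept column prepended."""
    Xa = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xa, y, rcond=None)
    return beta

def r_squared(beta, X, y):
    """R^2 of a previously fitted model evaluated on a (possibly new) sample."""
    pred = np.column_stack([np.ones(len(X)), X]) @ beta
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(1 - ss_res / ss_tot)

def cross_validate(X_der, y_der, X_val, y_val):
    """Fit on the derivation sample; report in-sample R^2 alongside the
    cross-validity R^2 on the holdout validation sample. The gap between
    the two is the shrinkage the abstract discusses."""
    beta = fit_ols(X_der, y_der)
    return r_squared(beta, X_der, y_der), r_squared(beta, X_val, y_val)
```

A large gap between the two R² values flags a model whose apparent fit will not hold up in forecasting, which is the abstract's criterion for undertaking further analysis.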
29.
30.
This paper considers 2×2 tables arising from case–control studies in which the binary exposure may be misclassified. We identify circumstances under which the inverse matrix method provides a more efficient odds ratio estimator than the naive estimator. We provide some intuition for these findings, along with a formula for the minimum size of a validation study such that the variance of the odds ratio estimator from the inverse matrix method is smaller than that of the naive estimator, thereby ensuring an advantage for the misclassification-corrected result. As a corollary, we show that correcting for misclassification does not necessarily widen the confidence intervals; rather, in addition to producing a consistent estimate, it can also produce a more efficient one.
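The correction behind the matrix method can be illustrated with a small sketch: build the misclassification matrix from sensitivity and specificity (in the paper these come from the validation study; here they are assumed known), invert it to recover the true exposure counts, and recompute the odds ratio. The sketch assumes nondifferential misclassification, and the names are illustrative, not the paper's:

```python
import numpy as np

def corrected_counts(observed, sens, spec):
    """Recover true (exposed, unexposed) counts from observed counts:
    observed = M @ true, with M built from sensitivity and specificity,
    so true = M^{-1} @ observed (the inverse matrix method)."""
    M = np.array([[sens, 1.0 - spec],
                  [1.0 - sens, spec]])
    return np.linalg.solve(M, np.asarray(observed, dtype=float))

def corrected_odds_ratio(cases, controls, sens, spec):
    """Odds ratio from misclassification-corrected counts, assuming the
    same sens/spec in cases and controls (nondifferential error)."""
    a, b = corrected_counts(cases, sens, spec)
    c, d = corrected_counts(controls, sens, spec)
    return float((a * d) / (b * c))
```

With perfect classification (sens = spec = 1) the correction is the identity and the naive odds ratio is returned; with nondifferential error the naive estimate is attenuated toward the null, so the corrected value moves away from 1.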