676 results in total (search time: 15 ms); results 661–670 are shown below.
661.
In this article, we use a latent class model (LCM) with prevalence modeled as a function of covariates to assess diagnostic test accuracy in situations where the true disease status is not observed but observations on three or more conditionally independent diagnostic tests are available. A fast Monte Carlo expectation–maximization (MCEM) algorithm with binary (disease) diagnostic data is implemented to estimate the parameters of interest, namely the sensitivity, specificity, and prevalence of the disease as a function of covariates. To obtain standard errors for constructing confidence intervals for the estimated parameters, the missing information principle is applied to adjust the information matrix estimates. In an extensive Monte Carlo study, we compare the adjusted information-matrix-based standard error estimates with bootstrap standard error estimates, both obtained using the fast MCEM algorithm. The simulations show that under certain scenarios the adjusted information matrix approach estimates the standard errors similarly to the bootstrap methods, and that the bootstrap percentile intervals have satisfactory coverage probabilities. We then apply the LCM analysis to a real data set of 122 subjects from a Gynecologic Oncology Group study of significant cervical lesion diagnosis in women with atypical glandular cells of undetermined significance, comparing the diagnostic accuracy of a histology-based evaluation, a carbonic anhydrase-IX biomarker-based test, and a human papillomavirus DNA test.
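For intuition, here is a minimal sketch (not the article's method) of fitting a latent class model to three conditionally independent binary tests with a plain EM algorithm and a constant prevalence; the covariate-dependent prevalence, the fast MCEM step, and the standard-error adjustments are all omitted, and every name in the code is illustrative.

```python
import numpy as np

def lcm_em(Y, n_iter=500, tol=1e-8):
    """EM for a latent class model with K conditionally independent binary
    tests. Y is an (n, K) 0/1 array. Returns prevalence, sensitivities and
    specificities (simplified sketch: constant prevalence, no covariates)."""
    n, K = Y.shape
    pi, se, sp = 0.3, np.full(K, 0.8), np.full(K, 0.8)  # starting values
    for _ in range(n_iter):
        # E-step: posterior probability of disease for each subject
        p1 = pi * np.prod(se**Y * (1 - se)**(1 - Y), axis=1)
        p0 = (1 - pi) * np.prod((1 - sp)**Y * sp**(1 - Y), axis=1)
        w = p1 / (p1 + p0)
        # M-step: weighted proportions
        pi_new = w.mean()
        se_new = (w[:, None] * Y).sum(axis=0) / w.sum()
        sp_new = ((1 - w)[:, None] * (1 - Y)).sum(axis=0) / (1 - w).sum()
        converged = (abs(pi_new - pi) + np.abs(se_new - se).sum()
                     + np.abs(sp_new - sp).sum()) < tol
        pi, se, sp = pi_new, se_new, sp_new
        if converged:
            break
    return pi, se, sp

# Toy usage with simulated data from three imperfect tests
rng = np.random.default_rng(0)
d = rng.binomial(1, 0.25, size=500)                      # latent disease status
Y = np.column_stack([
    rng.binomial(1, np.where(d == 1, 0.90, 0.10)),       # test 1
    rng.binomial(1, np.where(d == 1, 0.85, 0.05)),       # test 2
    rng.binomial(1, np.where(d == 1, 0.80, 0.15)),       # test 3
])
print(lcm_em(Y))
```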
662.
Parameter estimation in diffusion processes from discrete observations up to a first-passage time is clearly of practical relevance, but does not seem to have been studied so far. In neuroscience, many models for the evolution of the membrane potential involve the presence of an upper threshold: data are modelled as discretely observed diffusions that are killed when the threshold is reached. Statistical inference is often based on a misspecified likelihood that ignores the presence of the threshold, causing severe bias; for example, the bias incurred in the drift parameters of the Ornstein–Uhlenbeck model can be 25–100 per cent for biologically relevant parameter values. We compute or approximate the likelihood function of the killed process. When estimating from a single trajectory, considerable bias may still be present, and the distribution of the estimates can be heavily skewed with a huge variance. A parametric bootstrap is effective in correcting the bias. Standard asymptotic results do not apply, but consistency and asymptotic normality may be recovered when multiple trajectories are observed, provided the mean first-passage time through the threshold is finite. Numerical examples illustrate the results, and an experimental data set of intracellular recordings of the membrane potential of a motoneuron is analysed.
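As a rough illustration of the bias-correction idea (not the authors' likelihood computation), the sketch below fits an Ornstein–Uhlenbeck model to a trajectory observed up to an upper threshold using the misspecified AR(1) regression that ignores the killing, then applies a parametric bootstrap bias correction; all parameter choices and helper names are assumptions.

```python
import numpy as np

def simulate_killed_ou(theta, mu, sigma, x0, dt, threshold, max_steps, rng):
    """Simulate a discretely observed OU path, stopped (killed) at the
    first time it reaches the upper threshold."""
    a = np.exp(-theta * dt)
    sd = sigma * np.sqrt((1 - a**2) / (2 * theta))
    x = [x0]
    for _ in range(max_steps):
        x.append(mu + (x[-1] - mu) * a + sd * rng.standard_normal())
        if x[-1] >= threshold:
            break
    return np.array(x)

def naive_ou_fit(x, dt):
    """Misspecified fit that ignores the killing: AR(1) regression of
    X_{t+dt} on X_t via the exact OU transition (assumes slope in (0, 1))."""
    x0, x1 = x[:-1], x[1:]
    a, b = np.polyfit(x0, x1, 1)                     # slope, intercept
    theta = -np.log(a) / dt
    mu = b / (1 - a)
    sigma = np.sqrt((x1 - (a * x0 + b)).var() * 2 * theta / (1 - a**2))
    return theta, mu, sigma

def bootstrap_bias_corrected_fit(x, dt, threshold, B=500, seed=1):
    """Parametric bootstrap bias correction of the naive estimates."""
    rng = np.random.default_rng(seed)
    est = np.array(naive_ou_fit(x, dt))
    boot = []
    for _ in range(B):
        xb = simulate_killed_ou(*est, x0=x[0], dt=dt, threshold=threshold,
                                max_steps=10 * len(x), rng=rng)
        if len(xb) > 10:
            fit = naive_ou_fit(xb, dt)
            if np.all(np.isfinite(fit)):
                boot.append(fit)
    bias = np.mean(boot, axis=0) - est
    return est - bias                                # bias-corrected (theta, mu, sigma)

# Toy usage: one trajectory killed at an upper threshold
rng = np.random.default_rng(0)
x = simulate_killed_ou(theta=1.0, mu=1.5, sigma=0.5, x0=0.0,
                       dt=0.01, threshold=1.0, max_steps=5000, rng=rng)
print(naive_ou_fit(x, 0.01))
print(bootstrap_bias_corrected_fit(x, 0.01, threshold=1.0))
```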
663.
Guaranteed Conditional Performance of Control Charts via Bootstrap Methods
To use control charts in practice, the in-control state usually has to be estimated. This estimation has a detrimental effect on the performance of control charts, which is often measured by the false alarm probability or the average run length. We suggest an adjustment of the monitoring schemes to overcome these problems: it guarantees, with a certain probability, a conditional performance given the estimated in-control state. The suggested method is based on bootstrapping the data used to estimate the in-control state. The method applies to different types of control charts and also works with charts based on regression models; if a non-parametric bootstrap is used, it is robust to model errors. We establish large-sample properties of the adjustment, and the usefulness of our approach is demonstrated through simulation studies.
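A hedged sketch of the bootstrap-adjustment idea for the simplest case, a two-sided Shewhart chart for normal data: each bootstrap replicate of the Phase I sample yields the limit factor that would achieve the nominal false-alarm probability if the estimated in-control distribution were the truth, and the (1 − γ) quantile of these factors is used as the adjusted factor. This is a simplified reading of the approach, not the paper's general algorithm; names and defaults are illustrative.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def adjusted_shewhart_factor(phase1, alpha=0.0027, gamma=0.10, B=2000, seed=0):
    """Bootstrap-adjusted limit factor c for a two-sided Shewhart chart with
    limits mu_hat +/- c*sigma_hat, aiming for a conditional false-alarm
    probability <= alpha with probability about 1 - gamma."""
    rng = np.random.default_rng(seed)
    x = np.asarray(phase1, dtype=float)
    mu_hat, sigma_hat = x.mean(), x.std(ddof=1)

    def cond_fap(c, mu_b, sigma_b):
        # false-alarm probability of limits mu_b +/- c*sigma_b when the
        # "true" in-control law is taken to be N(mu_hat, sigma_hat^2)
        upper = norm.cdf((mu_b + c * sigma_b - mu_hat) / sigma_hat)
        lower = norm.cdf((mu_b - c * sigma_b - mu_hat) / sigma_hat)
        return 1.0 - (upper - lower)

    factors = []
    for _ in range(B):
        xb = rng.choice(x, size=len(x), replace=True)   # non-parametric bootstrap
        mu_b, sigma_b = xb.mean(), xb.std(ddof=1)
        # smallest factor whose conditional false-alarm probability is alpha
        c_b = brentq(lambda c: cond_fap(c, mu_b, sigma_b) - alpha, 0.1, 20.0)
        factors.append(c_b)
    return np.quantile(factors, 1.0 - gamma)

# Toy usage: Phase I sample, then widened limits for Phase II monitoring
rng = np.random.default_rng(1)
phase1 = rng.normal(10.0, 2.0, size=50)
print("unadjusted factor:", norm.ppf(1 - 0.0027 / 2),
      "adjusted:", adjusted_shewhart_factor(phase1))
```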
664.
We are interested in estimating prediction error for a classification model built on high-dimensional genomic data when the number of genes (p) greatly exceeds the number of subjects (n). We examine a distance argument supporting the conventional 0.632+ bootstrap proposed for the n > p scenario, modify it for the n < p situation, and develop learning curves to describe how the true prediction error varies with the number of subjects in the training set. The curves are then applied to define adjusted resampling estimates of the prediction error in order to achieve a balance between bias and variability. The adjusted resampling methods are proposed as counterparts of the 0.632+ bootstrap when n < p, and are found to improve on the 0.632+ bootstrap and other existing methods in the microarray setting when the sample size is small and there is some level of differential expression. The Canadian Journal of Statistics 41: 133–150; 2013 © 2012 Statistical Society of Canada
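For reference, a sketch of the conventional 0.632+ bootstrap estimator (in the Efron–Tibshirani form) that the article takes as its starting point, paired with a simple nearest-centroid classifier so that it runs when n < p; the article's adjusted resampling estimators and learning curves are not reproduced here, and all names and toy data are illustrative.

```python
import numpy as np

def nc_predict(X_tr, y_tr, X_te):
    """Nearest-centroid predictions for a binary problem (stand-in classifier)."""
    c0, c1 = X_tr[y_tr == 0].mean(axis=0), X_tr[y_tr == 1].mean(axis=0)
    return (np.linalg.norm(X_te - c1, axis=1) <
            np.linalg.norm(X_te - c0, axis=1)).astype(int)

def err_632_plus(X, y, B=200, seed=0):
    """0.632+ bootstrap estimate of the prediction (misclassification) error."""
    rng = np.random.default_rng(seed)
    n = len(y)
    pred_full = nc_predict(X, y, X)
    err_app = np.mean(pred_full != y)                 # apparent (resubstitution) error
    # leave-one-out bootstrap error err_1: error on points left out of each resample
    errs, counts = np.zeros(n), np.zeros(n)
    for _ in range(B):
        idx = rng.integers(0, n, size=n)
        oob = np.setdiff1d(np.arange(n), idx)
        if len(oob) == 0 or len(np.unique(y[idx])) < 2:
            continue                                   # degenerate resample
        pred = nc_predict(X[idx], y[idx], X[oob])
        errs[oob] += (pred != y[oob])
        counts[oob] += 1
    err_1 = np.mean(errs[counts > 0] / counts[counts > 0])
    # no-information error rate, relative overfitting rate, 0.632+ weight
    p1, q1 = np.mean(y == 1), np.mean(pred_full == 1)
    gamma_ni = p1 * (1 - q1) + (1 - p1) * q1
    err_1 = min(err_1, gamma_ni)
    R = ((err_1 - err_app) / (gamma_ni - err_app)
         if gamma_ni > err_app and err_1 > err_app else 0.0)
    w = 0.632 / (1 - 0.368 * R)
    return (1 - w) * err_app + w * err_1

# Toy high-dimensional usage (n = 40 subjects, p = 1000 "genes")
rng = np.random.default_rng(2)
y = np.repeat([0, 1], 20)
X = rng.normal(size=(40, 1000))
X[y == 1, :10] += 1.0                                  # 10 differentially expressed genes
print(err_632_plus(X, y))
```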
665.
666.
Many survey questions allow respondents to pick any number of c possible categorical responses or "items". Such questions often use the wording "choose all that apply" or "pick any". Often of interest is whether the marginal response distributions of the items differ among r groups of respondents; Agresti and Liu (1998, 1999) call this a test for multiple marginal independence (MMI). If respondents were allowed to pick only one of the c responses, the hypothesis could be tested with the Pearson chi-square test of independence. However, because respondents may pick more or fewer than one response, the test's assumption that responses are made independently of each other is violated. Several MMI testing methods have recently been proposed. Loughin and Scherer (1998) propose a bootstrap method based on a modified version of the Pearson chi-square test statistic. Agresti and Liu (1998, 1999) propose marginal logit models, quasi-symmetric loglinear models, and several methods based on Pearson chi-square test statistics. Decady and Thomas (1999) propose a Rao–Scott adjusted chi-squared test statistic. There has not been a full investigation of these MMI testing methods. The purpose here is to evaluate the proposed methods and to propose a few new ones. Recommendations are given to guide the practitioner in choosing which MMI testing methods to use.
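A hedged sketch in the spirit of the bootstrap MMI test: a Pearson-type statistic compares each item's marginal positive rate across groups, and its null distribution is approximated by resampling whole response vectors from the pooled sample into groups of the original sizes, which preserves the within-respondent dependence. The exact statistic and resampling scheme of the cited methods may differ; everything here, including the toy data, is illustrative.

```python
import numpy as np

def mmi_pearson_stat(groups):
    """Pearson-type statistic comparing each item's marginal positive rate
    across groups. `groups` is a list of (n_i, c) 0/1 response arrays."""
    n_i = np.array([g.shape[0] for g in groups])
    m = np.vstack([g.sum(axis=0) for g in groups])    # r x c positive counts
    p_hat = m.sum(axis=0) / n_i.sum()                 # pooled item rates
    e = np.outer(n_i, p_hat)                          # expected counts under MMI
    return np.sum((m - e) ** 2 / (e * (1 - p_hat)))

def mmi_bootstrap_test(groups, B=2000, seed=0):
    """Bootstrap p-value for multiple marginal independence: resample whole
    response vectors from the pooled sample (keeping within-respondent
    dependence) into groups of the original sizes."""
    rng = np.random.default_rng(seed)
    obs = mmi_pearson_stat(groups)
    pooled = np.vstack(groups)
    sizes = [g.shape[0] for g in groups]
    N = pooled.shape[0]
    count = 0
    for _ in range(B):
        resample = pooled[rng.integers(0, N, size=N)]
        start, gb = 0, []
        for s in sizes:
            gb.append(resample[start:start + s])
            start += s
        if mmi_pearson_stat(gb) >= obs:
            count += 1
    return (count + 1) / (B + 1)

# Toy usage: two groups of respondents, four "pick any" items
rng = np.random.default_rng(3)
g1 = (rng.random((60, 4)) < [0.5, 0.3, 0.2, 0.4]).astype(int)
g2 = (rng.random((80, 4)) < [0.5, 0.3, 0.2, 0.4]).astype(int)
print(mmi_bootstrap_test([g1, g2]))
```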
667.
In this article, we propose procedures for obtaining confidence intervals for the reliability in stress-strength models. The confidence intervals are obtained either through a parametric bootstrap procedure or from asymptotic results, and are applied to the particular case of two independent normal random variables. The performance of these estimators and of other known approximate estimators is empirically checked through a simulation study that considers several scenarios.
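A minimal sketch of the parametric-bootstrap percentile interval for the reliability R = P(X < Y) with two independent normal samples, using the plug-in estimate R_hat = Phi((mu_Y - mu_X) / sqrt(sigma_X^2 + sigma_Y^2)); the sample sizes and parameter values in the toy example are made up, and this is only one of the procedures the article discusses.

```python
import numpy as np
from scipy.stats import norm

def reliability_normal(x, y):
    """Point estimate of R = P(X < Y) for independent normal samples."""
    mx, sx = x.mean(), x.std(ddof=1)
    my, sy = y.mean(), y.std(ddof=1)
    return norm.cdf((my - mx) / np.sqrt(sx**2 + sy**2))

def reliability_boot_ci(x, y, B=5000, level=0.95, seed=0):
    """Parametric-bootstrap percentile confidence interval for R = P(X < Y)."""
    rng = np.random.default_rng(seed)
    mx, sx, nx = x.mean(), x.std(ddof=1), len(x)
    my, sy, ny = y.mean(), y.std(ddof=1), len(y)
    r_boot = np.empty(B)
    for b in range(B):
        xb = rng.normal(mx, sx, nx)    # resample from the fitted stress model
        yb = rng.normal(my, sy, ny)    # resample from the fitted strength model
        r_boot[b] = reliability_normal(xb, yb)
    lo, hi = np.quantile(r_boot, [(1 - level) / 2, (1 + level) / 2])
    return reliability_normal(x, y), (lo, hi)

# Toy usage: stress X ~ N(8, 2^2), strength Y ~ N(12, 3^2), true R ~ 0.866
rng = np.random.default_rng(4)
x = rng.normal(8, 2, 30)
y = rng.normal(12, 3, 40)
print(reliability_boot_ci(x, y))
```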
668.
669.
This paper presents methods for estimating the parameters and the acceleration factor of the Nadarajah–Haghighi distribution based on constant-stress partially accelerated life tests. Under progressive Type-II censoring, maximum likelihood and Bayes estimates of the model parameters and the acceleration factor are obtained. Approximate confidence intervals are constructed via the asymptotic variance–covariance matrix, and Bayesian credible intervals are obtained using an importance sampling procedure. For comparison, bootstrap confidence intervals for the unknown parameters and the acceleration factor are also presented. Finally, extensive simulation studies are conducted to investigate the performance of the proposed methods, and two data sets are analyzed to illustrate their applicability.
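To fix notation, here is a sketch of just the maximum-likelihood piece for the Nadarajah–Haghighi density f(x) = a*l*(1+l*x)^(a-1) * exp{1 - (1+l*x)^a}, x > 0, fitted to a complete sample; the progressive Type-II censoring, the acceleration factor, and the Bayesian and interval procedures of the paper are omitted, and the starting values and toy data are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def nh_neg_loglik(params, x):
    """Negative log-likelihood of the Nadarajah-Haghighi distribution,
    f(x) = a*l*(1+l*x)**(a-1) * exp(1 - (1+l*x)**a), x > 0."""
    a, l = params
    if a <= 0 or l <= 0:
        return np.inf
    z = 1 + l * x
    return -np.sum(np.log(a) + np.log(l) + (a - 1) * np.log(z) + 1 - z**a)

def nh_mle(x, start=(1.0, 1.0)):
    """Numerical MLE of (alpha, lambda) from a complete, uncensored sample."""
    res = minimize(nh_neg_loglik, start, args=(np.asarray(x, float),),
                   method="Nelder-Mead")
    return res.x

# Toy usage: simulate NH data by inverting the CDF F(x) = 1 - exp(1-(1+l*x)**a)
rng = np.random.default_rng(5)
a_true, l_true = 1.5, 0.8
u = rng.random(200)
x = ((1 - np.log(1 - u)) ** (1 / a_true) - 1) / l_true
print(nh_mle(x))
```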
670.
Data snooping occurs when a given set of data is used more than once for purposes of inference or model selection. When such data reuse occurs, there is always the possibility that any satisfactory results obtained may simply be due to chance rather than to any merit inherent in the method yielding the results. This problem is practically unavoidable in the analysis of time-series data, as typically only a single history measuring a given phenomenon of interest is available for analysis. It is widely acknowledged by empirical researchers that data snooping is a dangerous practice to be avoided, but in fact it is endemic. The main problem has been a lack of sufficiently simple practical methods capable of assessing the potential dangers of data snooping in a given situation. Our purpose here is to provide such methods by specifying a straightforward procedure for testing the null hypothesis that the best model encountered in a specification search has no predictive superiority over a given benchmark model. This permits data snooping to be undertaken with some degree of confidence that one will not mistake results that could have been generated by chance for genuinely good results.
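A hedged sketch of one such procedure, in the spirit of a reality-check test rather than necessarily the authors' exact implementation: the statistic is the best model's average loss advantage over the benchmark, and its null distribution is approximated with a circular block bootstrap of the recentered loss differentials. The block length, loss definitions, and toy data are all assumptions.

```python
import numpy as np

def reality_check_pvalue(bench_loss, model_losses, block=10, B=2000, seed=0):
    """Bootstrap test of H0: no model in the search beats the benchmark.
    bench_loss: (T,) out-of-sample losses of the benchmark model.
    model_losses: (K, T) losses of the K models examined in the search.
    Uses a circular block bootstrap of the loss differentials."""
    rng = np.random.default_rng(seed)
    d = bench_loss[None, :] - model_losses             # (K, T): > 0 where a model wins
    K, T = d.shape
    d_bar = d.mean(axis=1)
    stat = np.sqrt(T) * d_bar.max()                    # best model's advantage

    n_blocks = int(np.ceil(T / block))
    stat_boot = np.empty(B)
    for b in range(B):
        starts = rng.integers(0, T, size=n_blocks)
        idx = (starts[:, None] + np.arange(block)[None, :]).ravel()[:T] % T
        d_star = d[:, idx]
        # recenter at the sample means so the bootstrap imposes the null
        stat_boot[b] = np.sqrt(T) * (d_star.mean(axis=1) - d_bar).max()
    return np.mean(stat_boot >= stat)

# Toy usage: 50 candidate models that are only noise around the benchmark,
# so the test should usually not reject despite the best-looking model
rng = np.random.default_rng(6)
T = 500
bench = rng.normal(1.0, 0.2, T)
models = bench[None, :] + rng.normal(0.0, 0.2, (50, T))
print(reality_check_pvalue(bench, models))
```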