101.
Group testing procedures, in which groups containing several units are tested without testing each unit, are widely used as cost-effective procedures in estimating the proportion of defective units in a population. A problem arises when we apply these procedures to the detection of genetically modified organisms (GMOs), because the analytical instrument for detecting GMOs has a threshold of detection. If the group size (i.e., the number of units within a group) is large, the GMOs in a group are not detected due to the dilution even if the group contains one unit of GMOs. Thus, most people conventionally use a small group size (which we call conventional group size) so that they can surely detect the existence of defective units if at least one unit of GMOs is included in the group. However, we show that we can estimate the proportion of defective units for any group size even if a threshold of detection exists; the estimate of the proportion of defective units is easily obtained by using functions implemented in a spreadsheet. Then, we show that the conventional group size is not always optimal in controlling a consumer's risk, because such a group size requires a larger number of groups for testing.
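The closed-form estimator the abstract alludes to (computable with spreadsheet functions) can be sketched as follows; the function name and arguments are ours, not the authors', and the sketch assumes the group size is small enough that dilution never masks a defective unit.

```python
def estimate_prevalence(positive_groups, num_groups, group_size):
    """MLE of the per-unit defective proportion from group testing.

    Assumes every group containing at least one defective unit tests
    positive, i.e. the group is small enough that dilution does not
    push the sample below the instrument's detection threshold.
    """
    negative_fraction = 1.0 - positive_groups / num_groups
    return 1.0 - negative_fraction ** (1.0 / group_size)
```

For example, 40 positive results among 100 groups of 10 units give an estimated proportion of 1 − 0.6^(1/10) ≈ 0.050.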
102.
A global measure of biomarker effectiveness is the Youden index, the maximum difference between sensitivity, the probability of correctly classifying diseased individuals, and 1-specificity, the probability of incorrectly classifying healthy individuals. The cut-point leading to the index is the optimal cut-point when equal weight is given to sensitivity and specificity. Using the delta method, we present approaches for estimating confidence intervals for the Youden index and corresponding optimal cut-point for normally distributed biomarkers and also those following gamma distributions. We also provide confidence intervals using various bootstrapping methods. A comparison of interval width and coverage probability is conducted through simulation over a variety of parametric situations. Confidence intervals via the delta method are shown to have both coverage closer to nominal and shorter interval widths than confidence intervals from the bootstrapping methods.
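The point estimate underlying the paper's intervals can be illustrated with a simple grid search for normal biomarkers (the paper's contribution is the confidence intervals, not this computation; the function below and its names are our own sketch).

```python
from statistics import NormalDist

def youden(mu_h, sd_h, mu_d, sd_d, grid=2001):
    """Grid-search the Youden index J = max_c [sens(c) + spec(c) - 1]
    for normally distributed healthy (mu_h, sd_h) and diseased
    (mu_d, sd_d) biomarker values, diseased assumed higher on average.
    Returns (J, optimal cut-point)."""
    healthy, diseased = NormalDist(mu_h, sd_h), NormalDist(mu_d, sd_d)
    lo = min(mu_h, mu_d) - 4 * max(sd_h, sd_d)
    hi = max(mu_h, mu_d) + 4 * max(sd_h, sd_d)
    best_j, best_c = -1.0, lo
    for i in range(grid):
        c = lo + (hi - lo) * i / (grid - 1)
        j = healthy.cdf(c) - diseased.cdf(c)  # spec(c) + sens(c) - 1
        if j > best_j:
            best_j, best_c = j, c
    return best_j, best_c
```

With equal variances the optimal cut-point is the midpoint of the two means, e.g. c = 1 for N(0, 1) versus N(2, 1).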
103.
Most statistical analyses use hypothesis tests or estimation about parameters to form inferential conclusions. I think this is noble, but misguided. The point of view expressed here is that observables are fundamental, and that the goal of statistical modeling should be to predict future observations, given the current data and other relevant information. Further, the prediction of future observables provides multiple advantages to practicing scientists, and to science in general. These include an interpretable numerical summary of a quantity of direct interest to current and future researchers, a calibrated prediction of what’s likely to happen in future experiments, a prediction that can be either “corroborated” or “refuted” through experimentation, and avoidance of inference about parameters, quantities that exist only as convenient indices of hypothetical distributions. Finally, the predictive probability of a future observable can be used as a standard for communicating the reliability of the current work, regardless of whether confirmatory experiments are conducted. Adoption of this paradigm would improve our rigor for scientific accuracy and reproducibility by shifting our focus from “finding differences” among hypothetical parameters to predicting observable events based on our current scientific understanding.
104.
In this paper, when a jointly Type-II censored sample arising from k independent exponential populations is available, the conditional MLEs of the k exponential mean parameters are derived. The moment generating functions and the exact densities of these MLEs are obtained, from which exact confidence intervals are developed for the parameters. Moreover, approximate confidence intervals based on the asymptotic normality of the MLEs and credible regions from a Bayesian viewpoint are also discussed. An empirical comparison of the exact, approximate, bootstrap, and Bayesian intervals is also made in terms of coverage probabilities. Finally, an example is presented in order to illustrate all the methods of inference developed here.
105.
Monte Carlo evidence shows that in structural VAR models with fat-tailed or skewed innovations the coverage accuracy of impulse response confidence intervals may deteriorate substantially compared to the same model with Gaussian innovations. Empirical evidence suggests that such departures from normality are quite plausible for economic time series. The simulation results suggest that applied researchers are best off using nonparametric bootstrap intervals for impulse responses, regardless of whether or not there is evidence of fat tails or skewness in the error distribution. Allowing for departures from normality is shown to considerably weaken the evidence of the delayed overshooting puzzle in Eichenbaum and Evans (1995).
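The nonparametric bootstrap interval the abstract recommends is, in its simplest percentile form, the following (a generic illustration, not the paper's full VAR impulse-response procedure; names are ours).

```python
import numpy as np

def percentile_bootstrap_ci(data, statistic, level=0.95, n_boot=2000, seed=0):
    """Nonparametric percentile bootstrap interval for a statistic:
    resample the data with replacement, recompute the statistic, and
    take empirical quantiles of the bootstrap replicates."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    reps = np.array([
        statistic(rng.choice(data, size=len(data), replace=True))
        for _ in range(n_boot)
    ])
    alpha = (1.0 - level) / 2.0
    return float(np.quantile(reps, alpha)), float(np.quantile(reps, 1.0 - alpha))
```

Because no distributional form is imposed on the resamples, the interval remains valid under fat-tailed or skewed errors, which is the point of the paper's recommendation.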
106.
An account is given of methods used to predict the outcome of the 1997 general election from early declared results, for use by the British Broadcasting Corporation (BBC) in its election night television and radio coverage. Particular features of the 1997 election include extensive changes to constituency boundaries, simultaneous local elections in many districts and strong tactical voting. A new technique is developed, designed to eliminate systematic sources of bias such as differential refusal, for incorporating prior information from the BBC's exit poll. The sequence of forecasts generated on election night is displayed, with commentary.
107.
The concepts of guarded weights of evidence and acceptability profiles have been extended to the distribution-free setting in Dollinger, Kulinskaya & Staudte (1999). In that first of two parts the advantages of these concepts relative to traditional ones such as p-values and confidence intervals derived from hypothesis tests are emphasized for small samples. Here in Part II asymptotic expressions are found for guarded weights of evidence for hypotheses regarding the median of a symmetric distribution and related acceptability profiles for the median. It is also seen that for local alternatives the efficacy and Pitman asymptotic relative efficiency of the sign statistic for testing hypotheses carries over to the more general setting of guarded weights of evidence.
108.
Threshold models have a wide variety of applications in economics. Direct applications include models of separating and multiple equilibria. Other applications include empirical sample splitting when the sample split is based on a continuously‐distributed variable such as firm size. In addition, threshold models may be used as a parsimonious strategy for nonparametric function estimation. For example, the threshold autoregressive model (TAR) is popular in the nonlinear time series literature. Threshold models also emerge as special cases of more complex statistical frameworks, such as mixture models, switching models, Markov switching models, and smooth transition threshold models. It may be important to understand the statistical properties of threshold models as a preliminary step in the development of statistical tools to handle these more complicated structures. Despite the large number of potential applications, the statistical theory of threshold estimation is undeveloped. It is known that threshold estimates are super‐consistent, but a distribution theory useful for testing and inference has yet to be provided. This paper develops a statistical theory for threshold estimation in the regression context. We allow for either cross‐section or time series observations. Least squares estimation of the regression parameters is considered. An asymptotic distribution theory for the regression estimates (the threshold and the regression slopes) is developed. It is found that the distribution of the threshold estimate is nonstandard. A method to construct asymptotic confidence intervals is developed by inverting the likelihood ratio statistic. It is shown that this yields asymptotically conservative confidence regions. Monte Carlo simulations are presented to assess the accuracy of the asymptotic approximations. The empirical relevance of the theory is illustrated through an application to the multiple equilibria growth model of Durlauf and Johnson (1995).
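The least-squares threshold estimator at the core of the paper amounts to profiling the sum of squared residuals over candidate thresholds; a minimal sketch (our own function and parameter names, with the customary trimming of extreme quantiles) is:

```python
import numpy as np

def threshold_lsq(y, X, q, trim=0.15, n_grid=200):
    """Least-squares threshold estimation: for each candidate
    threshold gamma (trimmed quantiles of the splitting variable q),
    fit separate slopes on the regimes q <= gamma and q > gamma and
    keep the gamma minimizing the total residual sum of squares."""
    best_ssr, best_gamma = np.inf, None
    for gamma in np.quantile(q, np.linspace(trim, 1 - trim, n_grid)):
        ssr = 0.0
        for mask in (q <= gamma, q > gamma):
            beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
            ssr += float(((y[mask] - X[mask] @ beta) ** 2).sum())
        if ssr < best_ssr:
            best_ssr, best_gamma = ssr, float(gamma)
    return best_gamma, best_ssr
```

The paper's inference then inverts the likelihood ratio statistic around this minimizer rather than relying on a standard normal approximation, since the threshold estimate's distribution is nonstandard.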
109.
Longitudinal studies suffer from patient dropout. The dropout process may be informative if there exists an association between dropout patterns and the rate of change in the response over time. Multiple patterns are plausible in that different causes of dropout might contribute to different patterns. These multiple patterns can be dichotomized into two groups: quantitative and qualitative interaction. Quantitative interaction indicates that each of the multiple sources is biasing the estimate of the rate of change in the same direction, although with differing magnitudes. Alternatively, qualitative interaction results in the multiple sources biasing the estimate of the rate of change in opposing directions. Qualitative interaction is of special concern, since it is less likely to be detected by conventional methods and can lead to highly misleading slope estimates. We explore a test for qualitative interaction based on simultaneous confidence intervals. The test accommodates the realistic situation where reasons for dropout are not fully understood, or even entirely unknown. It allows for an additional level of clustering among participating subjects. We apply these methods to a study exploring tumor growth rates in mice as well as a longitudinal study exploring rates of change in cognitive functioning for Alzheimer's patients.
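The logic of a simultaneous-interval test for qualitative interaction can be sketched crudely: with one slope estimate per dropout pattern, qualitative interaction is flagged when some intervals lie wholly above zero and others wholly below. This is our own simplified illustration, not the paper's procedure, and the critical value is a placeholder that would come from the appropriate simultaneous method.

```python
def qualitative_interaction(slopes, std_errs, crit=2.24):
    """Flag qualitative interaction across dropout patterns: build
    simultaneous intervals slope +/- crit * se and report True when at
    least one interval lies entirely above zero and another entirely
    below. The value crit=2.24 is a placeholder; a real analysis takes
    it from the simultaneous procedure being used."""
    lower = [b - crit * s for b, s in zip(slopes, std_errs)]
    upper = [b + crit * s for b, s in zip(slopes, std_errs)]
    return any(l > 0 for l in lower) and any(u < 0 for u in upper)
```

Slopes of opposite sign with tight intervals trigger the flag; slopes of the same sign (quantitative interaction) do not, however much their magnitudes differ.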
110.
The problem of detecting multiple undocumented change-points in a historical temperature sequence with simple linear trend is formulated by a linear model. We apply adaptive least absolute shrinkage and selection operator (Lasso) to estimate the number and locations of change-points. Model selection criteria are used to choose the Lasso smoothing parameter. As adaptive Lasso may overestimate the number of change-points, we perform post-selection on change-points detected by adaptive Lasso using multivariate t simultaneous confidence intervals. Our method is demonstrated on the annual temperature data (year: 1902–2000) from Tuscaloosa, Alabama.  
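The linear-model formulation can be illustrated with a plain (non-adaptive) Lasso on a step-function basis, solved by coordinate descent with soft thresholding; this is a simplified sketch of the idea, not the paper's adaptive procedure, and all names and the penalty value are ours.

```python
import numpy as np

def soft(z, lam):
    """Soft-thresholding operator used by coordinate-descent Lasso."""
    return np.sign(z) * max(abs(z) - lam, 0.0)

def lasso_changepoints(y, lam, n_iter=200):
    """Coordinate-descent Lasso for mean shifts on a linear trend.
    Columns 0-1 (intercept, trend) are unpenalized; penalized column k
    is the indicator 1(t >= k+1), so a nonzero coefficient flags a
    change-point at that time. Returns the step coefficients."""
    n = len(y)
    t = np.arange(n, dtype=float)
    steps = (t[:, None] >= np.arange(1, n)[None, :]).astype(float)
    X = np.column_stack([np.ones(n), t, steps])
    beta = np.zeros(X.shape[1])
    norms = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            # Partial residual excluding column j, then univariate update.
            r = y - X @ beta + X[:, j] * beta[j]
            z = X[:, j] @ r
            penalty = 0.0 if j < 2 else lam
            beta[j] = soft(z, penalty) / norms[j]
    return beta[2:]
```

In the paper the penalty weights are made adaptive and the selected change-points are then screened with multivariate t simultaneous confidence intervals; neither refinement is shown here.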