61.
In this paper we investigate the asymptotic critical-value behaviour of certain multiple decision procedures, such as simultaneous confidence intervals and simultaneous as well as stepwise multiple test procedures. Supposing that n hypotheses or parameters of interest are under consideration, we investigate the behaviour of the critical values as n increases. More specifically, we answer, for example, the question of by how much the lengths of confidence intervals increase when an additional parameter is added to the statistical analysis. Furthermore, critical values of different multiple decision procedures, for instance step-down and step-up procedures, are compared. Some general theoretical results are derived and applied to various distributions.
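The growth of simultaneous critical values with n can be illustrated with the simplest multiplicity adjustment, the Bonferroni correction for Gaussian intervals. This is a generic sketch, not the specific procedures analysed in the abstract:

```python
from scipy.stats import norm

def bonferroni_z(n, alpha=0.05):
    """Two-sided critical value for n simultaneous Gaussian intervals."""
    return norm.ppf(1 - alpha / (2 * n))

# Critical values grow roughly like sqrt(2 * log(n)), so each additional
# parameter widens the intervals by an ever smaller amount.
for n in (1, 2, 5, 10, 100, 1000):
    print(n, round(bonferroni_z(n), 3))
```

Comparing `bonferroni_z(n)` with `bonferroni_z(n + 1)` answers, for this crude adjustment, the question of how much an extra parameter costs in interval length.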
62.
Assignment of individuals to the correct species or population of origin based on a comparison of allele profiles has in recent years become more accurate owing to improvements in DNA marker technology. A method of assessing the error in such assignment problems is presented. The method is based on the exact hypergeometric distributions of contingency tables conditioned on marginal totals. The result is a confidence region of fixed confidence level. This confidence level is calculable exactly in principle, and estimable very accurately by simulation, without knowledge of the true population allele frequencies. Various properties of these techniques are examined through application to several examples of actual DNA marker data and through simulation studies. Methods which may reduce computation time are discussed and illustrated.
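The exact conditional distribution behind this construction is easy to illustrate for a single 2×2 table: conditioning on both margins, the top-left count follows a hypergeometric law, which yields an exact (Fisher-type) p-value. A minimal sketch with a hypothetical allele-count table:

```python
from scipy.stats import hypergeom

# Hypothetical 2x2 allele-count table:
#            allele A   allele a   total
# pop. 1        12          3        15
# pop. 2         4         11        15
# total         16         14        30
N, K, n, a = 30, 16, 15, 12   # grand total, column-1 total, row-1 total, cell

dist = hypergeom(N, K, n)     # conditional law of the top-left cell
support = range(max(0, n + K - N), min(n, K) + 1)
p_obs = dist.pmf(a)
# two-sided exact p-value: sum over tables no more probable than the observed
p_value = sum(dist.pmf(k) for k in support if dist.pmf(k) <= p_obs + 1e-12)
print(round(p_value, 4))      # two-sided p ~ 0.0092
```

The paper's confidence regions extend this idea from a single table to the full assignment problem; the conditioning on margins is the same.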
63.
We show that, in the context of double-bootstrap confidence intervals, linear interpolation at the second level of the double bootstrap can reduce the simulation error component of coverage error by an order of magnitude. Intervals that are indistinguishable in terms of coverage error with theoretical, infinite simulation, double-bootstrap confidence intervals may be obtained at substantially less computational expense than by using the standard Monte Carlo approximation method. The intervals retain the simplicity of uniform bootstrap sampling and require no special analysis or computational techniques. Interpolation at the first level of the double bootstrap is shown to have a relatively minor effect on the simulation error.
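A generic prepivoting-style double-bootstrap interval can be sketched as follows; the linear interpolation enters when the second-level empirical distribution is evaluated between its order statistics rather than by simple counting. This is an illustrative construction under simplifying assumptions, not the authors' exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def interp_ecdf(sample, x):
    """Empirical CDF evaluated with linear interpolation between the
    order statistics (plotting positions (i - 0.5) / n)."""
    s = np.sort(sample)
    probs = (np.arange(1, len(s) + 1) - 0.5) / len(s)
    return np.interp(x, s, probs, left=0.0, right=1.0)

def double_bootstrap_ci(x, stat=np.mean, B1=400, B2=100, alpha=0.05):
    """Percentile interval calibrated by a second bootstrap level."""
    n, theta_hat = len(x), stat(x)
    theta_star, u = np.empty(B1), np.empty(B1)
    for b in range(B1):
        xb = rng.choice(x, n, replace=True)
        theta_star[b] = stat(xb)
        inner = [stat(rng.choice(xb, n, replace=True)) for _ in range(B2)]
        u[b] = interp_ecdf(np.array(inner), theta_hat)  # prepivoted root
    lo_p, hi_p = np.quantile(u, [alpha / 2, 1 - alpha / 2])
    return np.quantile(theta_star, lo_p), np.quantile(theta_star, hi_p)

x = rng.normal(10.0, 2.0, 40)
lo, hi = double_bootstrap_ci(x)
```

Replacing `interp_ecdf` with a simple count `np.mean(inner <= theta_hat)` recovers the standard Monte Carlo approximation; the interpolated version extracts more information from the same B2 inner resamples.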
64.
In sequential studies, formal interim analyses are usually restricted to a consideration of a single null hypothesis concerning a single parameter of interest. Valid frequentist methods of hypothesis testing and of point and interval estimation for the primary parameter have already been devised for use at the end of such a study. However, the completed data set may warrant a more detailed analysis, involving the estimation of parameters corresponding to effects that were not used to determine when to stop, and yet correlated with those that were. This paper describes methods for setting confidence intervals for secondary parameters in a way which provides the correct coverage probability in repeated frequentist realizations of the sequential design used. The method assumes that information accumulates on the primary and secondary parameters at proportional rates. This requirement will be valid in many potential applications, but only in limited situations in survival analysis.
65.
The main objective of this work is to evaluate the performance of confidence intervals, built using the deviance statistic, for the hyperparameters of state space models. The first procedure is a marginal approximation to confidence regions, based on the likelihood-ratio test, and the second is based on the signed root deviance profile. These methods are computationally efficient and are not affected by problems such as intervals with limits outside the parameter space, which can occur when the focus is on the variances of the errors. The procedures are compared with the usual approaches in the literature, which include the method based on the asymptotic distribution of the maximum likelihood estimator, as well as bootstrap confidence intervals. The comparison is performed via a Monte Carlo study, in order to establish empirically the advantages and disadvantages of each method. The results show that the methods based on the deviance statistic possess a better coverage rate than the asymptotic and bootstrap procedures.
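The deviance-based construction can be illustrated on the simplest variance parameter: for an i.i.d. Gaussian sample, the 95% deviance (likelihood-ratio) interval for σ² is the set where the deviance stays below the χ²₁ quantile. Its endpoints are found numerically and are automatically inside the parameter space. This is a generic sketch, not the state-space implementation of the paper:

```python
import numpy as np
from scipy.stats import chi2
from scipy.optimize import brentq

rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, 50)
n = len(x)
ss = np.sum((x - x.mean()) ** 2)
s2_hat = ss / n                      # MLE of the variance

def profile_loglik(s2):
    # Gaussian log-likelihood in sigma^2, additive constants dropped
    return -0.5 * n * np.log(s2) - ss / (2.0 * s2)

def deviance(s2):
    return 2.0 * (profile_loglik(s2_hat) - profile_loglik(s2))

cut = chi2.ppf(0.95, df=1)
# endpoints solve deviance(s2) == cut on either side of the MLE;
# both are strictly positive by construction
lo = brentq(lambda s2: deviance(s2) - cut, 1e-8, s2_hat)
hi = brentq(lambda s2: deviance(s2) - cut, s2_hat, 100.0 * s2_hat)
```

The signed root deviance profile mentioned in the abstract is `sign(s2_hat - s2) * sqrt(deviance(s2))`, which is compared with standard normal quantiles instead.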
66.
Assessment of analytical similarity of tier 1 quality attributes is based on a set of hypotheses that tests the mean difference of reference and test products against a margin adjusted for the standard deviation of the reference product. Thus, proper assessment of the biosimilarity hypothesis requires statistical tests that account for the uncertainty associated with the estimation of the mean difference and the standard deviation of the reference product. Recently, a linear reformulation of the biosimilarity hypothesis has been proposed, which facilitates the development and implementation of statistical tests. These statistical tests account for the uncertainty in the estimation of all the unknown parameters. In this paper, we survey methods for constructing confidence intervals for testing the linearized reformulation of the biosimilarity hypothesis and compare their performance. We discuss test procedures using confidence intervals to enable comparison among recently developed methods as well as previously developed methods that have not been applied to demonstrating analytical similarity. A computer simulation study was conducted to compare the methods in terms of size control, power, and computational complexity. We demonstrate the methods using two example applications. Finally, we make recommendations concerning the use of the methods.
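The linearized hypotheses can be tested, in the crudest way, with Wald-type confidence bounds: similarity |μ_T − μ_R| < c·σ_R is declared when the upper bound for η₁ = μ_T − μ_R − c·σ_R is negative and the lower bound for η₂ = μ_T − μ_R + c·σ_R is positive. The sketch below uses a normal approximation with var(σ̂_R) ≈ σ_R²/(2n_R); it is a rough illustration of the linearization only, not one of the surveyed procedures:

```python
import numpy as np
from scipy.stats import norm

def linearized_similarity_test(xt, xr, c=1.5, alpha=0.05):
    """Wald-type test of |mu_T - mu_R| < c * sigma_R (rough sketch)."""
    nt, nr = len(xt), len(xr)
    diff = np.mean(xt) - np.mean(xr)
    sr = np.std(xr, ddof=1)
    # delta-method standard error of diff -/+ c * sigma_R_hat
    se = np.sqrt(np.var(xt, ddof=1) / nt + np.var(xr, ddof=1) / nr
                 + c ** 2 * sr ** 2 / (2 * nr))
    z = norm.ppf(1 - alpha)
    upper1 = diff - c * sr + z * se   # one-sided bound for eta_1
    lower2 = diff + c * sr - z * se   # one-sided bound for eta_2
    return upper1 < 0 and lower2 > 0

rng = np.random.default_rng(3)
ref = rng.normal(100.0, 1.0, 100)
similar = rng.normal(100.1, 1.0, 100)
far = rng.normal(105.0, 1.0, 100)
print(linearized_similarity_test(similar, ref))  # True
print(linearized_similarity_test(far, ref))      # False
```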
67.
Benjamin Laumen, Statistics, 2019, 53(3): 569–600
In this paper, we revisit the progressive Type-I censoring scheme as it was originally introduced by Cohen [Progressively censored samples in life testing. Technometrics. 1963;5(3):327–339]. In fact, original progressive Type-I censoring proceeds like progressive Type-II censoring, but with fixed censoring times instead of failure-time-based censoring times. Subsequently, a time truncation was added to this censoring scheme by interpreting the final censoring time as a termination time. As a result, not much work has been done on Cohen's original progressive censoring scheme with fixed censoring times. Thus, we discuss distributional results for this scheme and establish exact distributional results in likelihood inference for exponentially distributed lifetimes. In particular, we obtain the exact distribution of the maximum likelihood estimator (MLE). Further, the stochastic monotonicity of the MLE is verified in order to construct exact confidence intervals for both the scale parameter and the reliability.
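For exponential lifetimes, the MLE under a censoring pattern of this type is the total time on test divided by the number of observed failures. A small simulation sketch of Cohen-style progressive Type-I censoring, with hypothetical censoring times and removal numbers and without the final truncation:

```python
import numpy as np

def progressive_type1_sample(n, theta, censor_times, removals, rng):
    """Simulate Exp(theta) lifetimes under progressive Type-I censoring:
    at each fixed time censor_times[j], removals[j] of the surviving
    units are withdrawn (censored); all remaining units run to failure.
    Returns (number of observed failures, total time on test)."""
    life = rng.exponential(theta, n)
    on_test = np.ones(n, dtype=bool)
    failures, ttt = 0, 0.0
    for T, R in zip(censor_times, removals):
        fail_now = on_test & (life <= T)          # failures up to time T
        failures += int(fail_now.sum())
        ttt += life[fail_now].sum()
        on_test &= ~fail_now
        survivors = np.flatnonzero(on_test)       # withdraw R survivors at T
        drop = survivors[: min(R, survivors.size)]
        ttt += T * drop.size
        on_test[drop] = False
    failures += int(on_test.sum())                # the rest fail eventually
    ttt += life[on_test].sum()
    return failures, ttt

rng = np.random.default_rng(42)
d, ttt = progressive_type1_sample(200, theta=10.0,
                                  censor_times=[5.0, 8.0],
                                  removals=[20, 20], rng=rng)
theta_mle = ttt / d   # MLE of the exponential mean under censoring
```

The paper's exact confidence intervals rest on the exact distribution of this estimator and its stochastic monotonicity in the parameter; the sketch only shows the point estimate.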
68.
In this paper, we discuss some theoretical results and properties of the discrete Weibull distribution, which was introduced by Nakagawa and Osaki [The discrete Weibull distribution. IEEE Trans Reliab. 1975;24:300–301]. We study the monotonicity of the probability mass, survival and hazard functions. Moreover, reliability, moments, p-quantiles, entropies and order statistics are also studied. We consider likelihood-based methods to estimate the model parameters based on complete and censored samples, and to derive confidence intervals. We also consider two additional methods to estimate the model parameters. The uniqueness of the maximum likelihood estimate of one of the parameters that index the discrete Weibull model is discussed. Numerical evaluation of the considered model is performed by Monte Carlo simulations. For illustrative purposes, two real data sets are analyzed.
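Under the Nakagawa–Osaki parameterization the survival function is S(x) = P(X ≥ x) = q^(x^β) on x = 0, 1, 2, …, so the probability mass, survival and hazard functions take one line each. A quick numerical sketch:

```python
import numpy as np

def dw_survival(x, q, beta):
    """P(X >= x) = q ** (x ** beta) for the discrete Weibull."""
    return q ** (np.asarray(x, dtype=float) ** beta)

def dw_pmf(x, q, beta):
    x = np.asarray(x, dtype=float)
    return dw_survival(x, q, beta) - dw_survival(x + 1, q, beta)

def dw_hazard(x, q, beta):
    return dw_pmf(x, q, beta) / dw_survival(x, q, beta)

xs = np.arange(0, 200)
print(dw_pmf(xs, q=0.8, beta=1.5).sum())         # ~ 1.0
print(dw_hazard([0, 1, 2, 3], q=0.8, beta=1.5))  # increasing for beta > 1
```

The monotonicity results studied in the paper are visible here: for β > 1 the hazard is increasing, for β = 1 the model reduces to the geometric distribution with constant hazard 1 − q.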
69.
The central limit theorem indicates that, as the sample size goes to infinity, the sampling distribution of the mean tends to a normal distribution; this result is the basis for the most common confidence interval and sample size formulas. This study analyses how large a sample must be before the distribution of the estimator of a proportion can be assumed to follow a normal distribution. We also propose a correction factor for sample size formulas, to maintain the nominal confidence level even when the central limit theorem approximation does not hold for these distributions.
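The point is easy to demonstrate by simulation: the usual Wald interval p̂ ± z·√(p̂(1−p̂)/n) attains roughly the nominal level for moderate p, but falls well short when np is small, which is where a correction factor (or a larger n) is needed. A minimal sketch:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def wald_coverage(p, n, alpha=0.05, reps=20000):
    """Monte Carlo coverage of the Wald interval for a proportion."""
    z = norm.ppf(1 - alpha / 2)
    phat = rng.binomial(n, p, size=reps) / n
    half = z * np.sqrt(phat * (1 - phat) / n)
    return float(np.mean((phat - half <= p) & (p <= phat + half)))

print(wald_coverage(0.5, 100))   # close to the nominal 0.95
print(wald_coverage(0.02, 100))  # clearly below 0.95
```

With p = 0.02 and n = 100, the sample contains no successes about 13% of the time, and the Wald interval then degenerates to [0, 0] and misses p, so coverage cannot exceed roughly 0.87 no matter the nominal level.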
70.
In this paper, we consider inference for the stress-strength parameter, R, based on two independent Type-II censored samples from exponentiated Fréchet populations with different index parameters. The maximum likelihood and uniformly minimum variance unbiased estimators, exact and asymptotic confidence intervals, and hypothesis tests for R are obtained. We conduct a Monte Carlo simulation study to evaluate the performance of these estimators and confidence intervals. Finally, two real data sets are analysed for illustrative purposes.
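With the exponentiated Fréchet CDF F(x) = 1 − (1 − e^(−(σ/x)^λ))^α and common (σ, λ), one can check by a direct integral that R = P(X < Y) = α₁/(α₁ + α₂), where X and Y carry index parameters α₁ and α₂. The Monte Carlo sketch below verifies this with an inverse-CDF sampler; it is an illustrative check, not the estimators of the paper:

```python
import numpy as np

rng = np.random.default_rng(9)

def rvs_exp_frechet(alpha, lam, sigma, size, rng):
    """Inverse-CDF sampling from F(x) = 1 - (1 - exp(-(sigma/x)**lam))**alpha."""
    u = rng.uniform(size=size)
    return sigma / (-np.log1p(-(1 - u) ** (1 / alpha))) ** (1 / lam)

a1, a2, lam, sigma, N = 2.0, 3.0, 1.5, 1.0, 200_000
x = rvs_exp_frechet(a1, lam, sigma, N, rng)
y = rvs_exp_frechet(a2, lam, sigma, N, rng)
r_mc = float(np.mean(x < y))
r_exact = a1 / (a1 + a2)   # 0.4
print(round(r_mc, 3), r_exact)
```

Note that a larger index parameter makes the distribution stochastically smaller under this parameterization, which is why the population with the larger α₂ here loses the comparison more often than not.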