Full text (subscription): 1,936 articles; free access: 68; free in China: 7.
By subject: Management (97), Ethnology (2), Demography (10), Collected works (42), Theory and methodology (36), General (386), Sociology (32), Statistics (1,406).
By year: 2024 (1), 2023 (23), 2022 (11), 2021 (18), 2020 (37), 2019 (51), 2018 (55), 2017 (81), 2016 (50), 2015 (56), 2014 (64), 2013 (568), 2012 (172), 2011 (61), 2010 (52), 2009 (53), 2008 (62), 2007 (68), 2006 (51), 2005 (59), 2004 (34), 2003 (52), 2002 (40), 2001 (46), 2000 (33), 1999 (21), 1998 (22), 1997 (24), 1996 (18), 1995 (16), 1994 (17), 1993 (10), 1992 (11), 1991 (6), 1990 (11), 1989 (5), 1988 (7), 1987 (1), 1986 (3), 1985 (4), 1984 (10), 1983 (9), 1982 (5), 1980 (3), 1979 (3), 1978 (3), 1977 (2), 1975 (2).
A total of 2,011 results were found (search time: 31 ms).
991.
In this paper, a new censoring scheme, called the adaptive progressive interval censoring scheme, is introduced. Competing risks data are assumed to come from the Marshall–Olkin extended Chen distribution under the new scheme with random removals. We obtain the maximum likelihood estimators of the unknown parameters and of the reliability function via the EM algorithm based on the failure data. In addition, bootstrap percentile and bootstrap-t confidence intervals for the unknown parameters are obtained. Likelihood ratio tests are performed to test the equality of the competing risks. A Monte Carlo simulation study is then conducted to evaluate the performance of the estimators under different sample sizes and removal schemes. Finally, a real data set is analyzed for illustration.
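As a rough illustration of the interval-estimation step described above, the sketch below applies bootstrap percentile and bootstrap-t intervals to a generic maximum likelihood estimator from a complete (uncensored) sample. The exponential model, the `mle_rate` helper and all numerical settings are assumptions made for illustration only; the paper's EM-based estimation from adaptive progressively interval-censored competing-risks data is not reproduced here.

```python
# Minimal sketch of bootstrap percentile and bootstrap-t intervals for a
# generic parameter estimator; the censored competing-risks MLE of the paper
# is replaced by a simple stand-in (exponential rate from complete data).
import numpy as np

rng = np.random.default_rng(42)

def mle_rate(x):
    """MLE of an exponential rate from complete data (illustrative stand-in)."""
    return 1.0 / x.mean()

# hypothetical complete sample standing in for the censored data set
data = rng.exponential(scale=2.0, size=100)
theta_hat = mle_rate(data)

B = 2000
boot = np.empty(B)
boot_t = np.empty(B)
for b in range(B):
    xb = rng.choice(data, size=data.size, replace=True)
    boot[b] = mle_rate(xb)
    # studentize with a nested (smaller) bootstrap estimate of the std. error
    inner = np.array([mle_rate(rng.choice(xb, size=xb.size, replace=True))
                      for _ in range(50)])
    boot_t[b] = (boot[b] - theta_hat) / inner.std(ddof=1)

se_hat = boot.std(ddof=1)
pct_ci = np.percentile(boot, [2.5, 97.5])                          # percentile CI
t_lo, t_hi = np.percentile(boot_t, [2.5, 97.5])
boot_t_ci = (theta_hat - t_hi * se_hat, theta_hat - t_lo * se_hat)  # bootstrap-t CI
print(pct_ci, boot_t_ci)
```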
992.
Let X1, X2, … denote independent and identically distributed random vectors whose common distribution belongs to a multiparameter exponential family, and consider the problem of sequentially testing separated hypotheses. It is known that the sequential procedure which continues sampling until the likelihood ratio statistic for testing one of the hypotheses exceeds a given level approximates the optimal Bayesian procedure, under general conditions on the loss function and prior distribution. Here we ask whether the approximate procedure is Bayes risk efficient, that is, whether the ratio of its Bayes risk to that of the optimal procedure approaches one as the cost of sampling approaches zero. We show that the answer depends on the choice of certain parameters in the approximation and on the dimensions of the hypotheses.
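The stopping rule described above, sampling until a likelihood ratio exceeds a given level, can be sketched in the simplest case of two simple hypotheses on a normal mean (a one-parameter exponential family). The function name `sequential_lr_test`, the normal model and the chosen level are illustrative assumptions; the multiparameter setting and the Bayes risk comparison studied in the paper are not reproduced.

```python
# Minimal sketch of the "stop when a likelihood ratio exceeds a level" rule
# for two simple hypotheses on a normal mean with unit variance.
import numpy as np

rng = np.random.default_rng(0)

def sequential_lr_test(theta0, theta1, level, true_theta, max_n=10_000):
    """Sample X_i ~ N(true_theta, 1) until one log-likelihood ratio exceeds log(level)."""
    log_a = np.log(level)
    llr = 0.0          # log of L(theta1)/L(theta0) after n observations
    for n in range(1, max_n + 1):
        x = rng.normal(true_theta, 1.0)
        llr += (theta1 - theta0) * x - 0.5 * (theta1**2 - theta0**2)
        if llr >= log_a:        # evidence for theta1 exceeds the level
            return "accept H1", n
        if -llr >= log_a:       # evidence for theta0 exceeds the level
            return "accept H0", n
    return "no decision", max_n

decision, n_used = sequential_lr_test(theta0=0.0, theta1=0.5, level=100.0, true_theta=0.5)
print(decision, n_used)
```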
993.
Let X1, …, Xm be a random sample of m failure times under normal conditions with underlying distribution F(x), and Y1, …, Yn a random sample of n failure times under accelerated conditions with underlying distribution G(x), where G(x) = 1 − [1 − F(x)]^θ and θ is the unknown parameter under study. Define U_ij = 1 if …, and 0 otherwise. The joint distribution of the U_ij does not involve the distribution F and can therefore be used to estimate the acceleration parameter θ. A second approach to estimating θ is to use the ranks of the Y-observations in the combined X- and Y-samples. In this paper we establish that the ranks of the Y-observations in the pooled sample form a sufficient statistic for the information contained in the U_ij about the parameter θ, and that there does not exist an unbiased estimator of θ. We also construct several estimators and confidence intervals for θ.
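As an illustration of rank-based estimation of θ under G(x) = 1 − [1 − F(x)]^θ, the sketch below uses the identity P(Y > X) = 1/(θ + 1) together with the Mann–Whitney count of pairs with Y_j > X_i. This moment-type estimator and the exponential choice of F are assumptions made for the example; they are not necessarily among the estimators constructed in the paper.

```python
# Minimal sketch: rank-based moment-type estimator of theta in
# G(x) = 1 - [1 - F(x)]^theta, using P(Y > X) = 1/(theta + 1).
import numpy as np

rng = np.random.default_rng(1)
theta_true, m, n = 3.0, 200, 200

# Assume F = Exp(1); then [1 - F(x)]^theta is the survival function of an
# exponential with rate theta, so Y can be drawn with scale 1/theta.
x = rng.exponential(1.0, size=m)
y = rng.exponential(1.0 / theta_true, size=n)

# U = number of pairs (i, j) with Y_j > X_i; a function of the joint ranks
u = np.sum(y[None, :] > x[:, None])
theta_hat = (m * n - u) / u
print(theta_hat)
```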
994.
This paper is concerned primarily with subset selection procedures based on the sample medians of logistic populations. A procedure is given which chooses a non-empty subset from among k independent logistic populations, having a common known variance, so that the population with the largest location parameter is contained in the subset with a pre-specified probability. The constants required to apply the median procedure with small sample sizes (≤ 19) are tabulated and can also be used to construct simultaneous confidence intervals. Asymptotic formulae are provided for application with larger sample sizes. It is shown that, in certain situations, rules based on the median are substantially more efficient than analogous procedures based either on sample means or on the sum of joint ranks.
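A hedged sketch of a Gupta-type subset-selection rule of the kind described: a population is retained when its sample median comes within d of the largest sample median, and the constant d is approximated here by Monte Carlo under the least-favourable (all locations equal) configuration rather than taken from the paper's tables. The sample size, number of populations and P* value are arbitrary illustrative choices.

```python
# Minimal sketch of subset selection based on sample medians of k logistic
# populations with common scale: retain population i if
# median_i >= max_j median_j - d, with d calibrated by simulation.
import numpy as np

rng = np.random.default_rng(7)
k, n, scale, p_star = 4, 15, 1.0, 0.95

# distribution of max_j M_j - M_1 when all k location parameters are equal
reps = 20_000
meds = np.median(rng.logistic(loc=0.0, scale=scale, size=(reps, k, n)), axis=2)
gaps = meds.max(axis=1) - meds[:, 0]
d = np.quantile(gaps, p_star)      # smallest d with P(correct selection) >= p_star

# apply the rule to one set of samples with unequal location parameters
locs = np.array([0.0, 0.2, 0.5, 1.0])
sample_meds = np.array([np.median(rng.logistic(loc=m, scale=scale, size=n)) for m in locs])
selected = np.where(sample_meds >= sample_meds.max() - d)[0]
print(d, selected)
```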
995.
Andrews et al. (1972) carried out an extensive Monte Carlo study of robust estimators of location. Their conclusion was that the Hampel and the skipped estimators, as classes, seemed preferable to some of the other currently fashionable estimators. The present study extends this work to include estimators not previously examined. The estimators are compared over short-tailed as well as long-tailed alternatives, and also over dependent data generated by first-order autoregressive schemes. The conclusions of the present study are threefold. First, within our limited study, none of the so-called robust estimators is very efficient in short-tailed situations; more work seems to be necessary here. Second, none of the estimators performs very well with dependent data, particularly when the correlation is large and positive, which appears to be a rather pressing problem. Finally, for long-tailed alternatives, the Hampel estimators and Hogg-type adaptive versions of the Hampels are the strongest classes. The adaptive Hampels neither uniformly outperform nor are uniformly outperformed by the Hampels; however, the superiority in terms of maximum relative efficiency goes to the adaptive Hampels. That is, the adaptive Hampels, under their worst performance, …
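The sketch below shows the style of Monte Carlo comparison described, contrasting the sample mean, the median and a Hampel three-part redescending M-estimator (computed by iterative reweighting with MAD scaling) over normal and t(3) samples. The tuning constants a = 2, b = 4, c = 8 and the helper names are assumptions and do not correspond to the specific estimators or sampling situations of the study.

```python
# Minimal sketch of a Monte Carlo comparison of location estimators.
import numpy as np

rng = np.random.default_rng(3)

def hampel_psi(u, a=2.0, b=4.0, c=8.0):
    """Hampel three-part redescending psi function (assumed tuning constants)."""
    au = np.abs(u)
    return np.where(au <= a, u,
           np.where(au <= b, a * np.sign(u),
           np.where(au <= c, a * np.sign(u) * (c - au) / (c - b), 0.0)))

def hampel_location(x, n_iter=50):
    """Hampel M-estimate of location with MAD scale, by iterative reweighting."""
    mu = np.median(x)
    s = 1.4826 * np.median(np.abs(x - mu)) + 1e-12
    for _ in range(n_iter):
        u = (x - mu) / s
        w = np.ones_like(u)
        nz = u != 0
        w[nz] = hampel_psi(u[nz]) / u[nz]
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < 1e-8:
            break
        mu = mu_new
    return mu

def simulate(sampler, n=20, reps=5000):
    ests = {"mean": [], "median": [], "hampel": []}
    for _ in range(reps):
        x = sampler(n)
        ests["mean"].append(x.mean())
        ests["median"].append(np.median(x))
        ests["hampel"].append(hampel_location(x))
    # n * variance of each estimator (smaller is better at this sample size)
    return {k: n * np.var(v) for k, v in ests.items()}

print("normal:", simulate(lambda n: rng.normal(size=n)))
print("t(3):  ", simulate(lambda n: rng.standard_t(3, size=n)))
```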
996.
The powers of the likelihood ratio (LR) test and an "asymptotically (in some sense) optimum" invariant test are examined and compared, by simulation, with those of several other relevant tests for the problem of testing the equality of two univariate normal population means under the assumption of heterogeneous variances but homogeneous coefficients of variation. The LR test is found to be highly satisfactory for all values of the coefficient of variation, and the "asymptotically optimum" invariant test, which is computationally much simpler than the LR test, is a reasonably good competitor for cases where the coefficient of variation is greater than or equal to 3. Also, a …
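A minimal sketch of a power simulation for the LR test in this setting, assuming the model sigma_i = c * mu_i with a common coefficient of variation c: the constrained and unconstrained log-likelihoods are maximised numerically and the statistic is referred to a chi-square(1) critical value. The sample sizes, parameter values and use of Nelder–Mead are illustrative assumptions, and the "asymptotically optimum" invariant competitor is not reproduced.

```python
# Minimal sketch: size and power of the LR test of H0: mu1 = mu2 when the two
# normal populations share a common coefficient of variation c (sigma_i = c*mu_i).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, chi2

rng = np.random.default_rng(11)

def neg_loglik(params, samples):
    """params = (log mu_1, ..., log mu_k, log c); one common CV c across groups."""
    *log_mus, log_c = params
    c = np.exp(log_c)
    ll = 0.0
    for log_mu, x in zip(log_mus, samples):
        mu = np.exp(log_mu)
        ll += norm.logpdf(x, loc=mu, scale=c * mu).sum()
    return -ll

def lr_statistic(x, y):
    pooled = np.concatenate([x, y])
    cv0 = np.log(pooled.std() / pooled.mean())
    full = minimize(neg_loglik, [np.log(x.mean()), np.log(y.mean()), cv0],
                    args=([x, y],), method="Nelder-Mead")
    null = minimize(neg_loglik, [np.log(pooled.mean()), cv0],
                    args=([pooled],), method="Nelder-Mead")
    return 2.0 * (null.fun - full.fun)   # fun is the negative log-likelihood

def power(mu1, mu2, c, n1=20, n2=20, reps=200, alpha=0.05):
    crit = chi2.ppf(1 - alpha, df=1)
    rej = 0
    for _ in range(reps):
        x = rng.normal(mu1, c * mu1, n1)
        y = rng.normal(mu2, c * mu2, n2)
        rej += lr_statistic(x, y) > crit
    return rej / reps

print(power(mu1=10.0, mu2=10.0, c=0.3))   # approximate size
print(power(mu1=10.0, mu2=12.0, c=0.3))   # approximate power
```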
997.
In designing experiments the researcher frequently must decide how to allocate fixed resources among k factor levels (Cox (1958)). This study investigates the effects on the power of a test caused by changes in: the sample size (n); the number of factor levels (k); the allocation of a fixed total number of observations (N) between k and n; the shift parameter (φ); the type of parent population sampled; and the type of ordered location alternative involved. Using Monte Carlo methods, the powers of eight test procedures specifically devised to detect ordered treatment effects under completely randomized designs were evaluated, along with that of the more general one-way F test. The results are of interest to researchers in all fields of application.
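The kind of power study described can be sketched by simulating the ordinary one-way F test under an equally spaced ordered location alternative while a fixed total of N observations is split between k levels and n replicates. The choice N = 60, the shift of 0.5 and the normal errors are assumptions for the example, and the eight specialised ordered-alternative procedures of the study are not reproduced.

```python
# Minimal sketch: Monte Carlo power of the one-way F test under an ordered
# location alternative, for different allocations of N = 60 observations.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(5)

def f_test_power(k, n, shift, reps=4000, alpha=0.05):
    means = shift * np.arange(k)          # ordered, equally spaced treatment effects
    rej = 0
    for _ in range(reps):
        groups = [rng.normal(m, 1.0, n) for m in means]
        rej += f_oneway(*groups).pvalue < alpha
    return rej / reps

N = 60
for k in (3, 4, 5, 6):
    n = N // k
    print(f"k={k}, n={n}: power = {f_test_power(k, n, shift=0.5):.3f}")
```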
998.
Formulas are given for the asymptotic distribution, mean, and variance of m^(-1) N_m, where N_m is the random sample size of the curtailed version of a fixed-sample most powerful test based on sample size m. The adequacy of the formulas is numerically investigated in some important applications where exact formulas can also be derived.
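A minimal sketch of curtailment for a concrete case, assuming a fixed-sample most powerful test of H0: p = p0 against p = p1 > p0 based on m Bernoulli trials that rejects when the success count reaches c: sampling stops as soon as the decision is forced, and the Monte Carlo mean and variance of N_m/m can then be compared with asymptotic formulas of the type derived in the paper (not reproduced here).

```python
# Minimal sketch: sample-size behaviour of a curtailed fixed-sample binomial test.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(9)

m, p0, p1, alpha = 100, 0.5, 0.6, 0.05
c = int(binom.ppf(1 - alpha, m, p0)) + 1      # fixed-sample rejection boundary

def curtailed_sample_size(p):
    successes = failures = 0
    for n in range(1, m + 1):
        if rng.random() < p:
            successes += 1
        else:
            failures += 1
        if successes >= c:            # rejection is already certain
            return n
        if failures > m - c:          # acceptance is already certain
            return n
    return m

ratios = np.array([curtailed_sample_size(p1) / m for _ in range(10_000)])
print(ratios.mean(), ratios.var())    # Monte Carlo mean and variance of N_m / m
```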
999.
Many applications of the inverse Gaussian distribution, including numerous reliability and life-testing results, appear in the statistical literature. This paper studies the problem of using entropy tests to examine the goodness of fit of an inverse Gaussian distribution with unknown parameters. Several entropy tests based on different entropy estimates are proposed. Critical values of the test statistics for various sample sizes are obtained by Monte Carlo simulation. The Type I error of the tests is investigated, and the power of the tests is compared with that of competing tests against various alternatives. Finally, recommendations for applying the tests in practice are presented.
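One possible entropy-type construction is sketched below: the Vasicek spacing estimate of entropy is combined with the fitted inverse Gaussian log-density to approximate a (negative) Kullback–Leibler divergence, and a critical value is obtained by Monte Carlo under the fitted model. This particular statistic, the window width m = 3 and the parametric-bootstrap critical value are assumptions made for illustration and are not necessarily among the tests proposed in the paper.

```python
# Minimal sketch of an entropy-type goodness-of-fit check for the inverse
# Gaussian distribution with parameters estimated by maximum likelihood.
import numpy as np
from scipy.stats import invgauss

rng = np.random.default_rng(13)

def vasicek_entropy(x, m):
    """Vasicek spacing estimate of entropy with window width m."""
    n = x.size
    xs = np.sort(x)
    upper = xs[np.minimum(np.arange(n) + m, n - 1)]
    lower = xs[np.maximum(np.arange(n) - m, 0)]
    return np.mean(np.log(n / (2.0 * m) * (upper - lower)))

def ig_mle(x):
    mu = x.mean()
    lam = x.size / np.sum(1.0 / x - 1.0 / mu)
    return mu, lam

def test_statistic(x, m=3):
    mu, lam = ig_mle(x)
    # scipy parameterisation: invgauss(mu/lam, scale=lam) has mean mu, shape lam
    fitted = invgauss(mu / lam, scale=lam)
    # entropy estimate plus mean fitted log-density: roughly -KL, small under misfit
    return vasicek_entropy(x, m) + np.mean(fitted.logpdf(x))

def critical_value(x, m=3, alpha=0.05, reps=2000):
    mu, lam = ig_mle(x)
    sims = [test_statistic(invgauss.rvs(mu / lam, scale=lam, size=x.size, random_state=rng), m)
            for _ in range(reps)]
    return np.quantile(sims, alpha)      # reject H0 when the statistic is too small

x = rng.lognormal(mean=0.0, sigma=0.8, size=50)   # data not from an inverse Gaussian
print(test_statistic(x), critical_value(x))
```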
1000.
Variable screening for censored survival data is most challenging when both the survival and censoring times are correlated with an ultrahigh-dimensional vector of covariates. Existing approaches to handling censoring often use inverse probability weighting, assuming that censoring is independent of both the survival time and the covariates. This is a convenient but rather restrictive assumption that may not hold in real applications, especially when the censoring mechanism is complex and the number of covariates is large. To accommodate the heterogeneous (covariate-dependent) censoring that is often present in high-dimensional survival data, we propose a Gehan-type rank screening method to select features that are relevant to the survival time. The method is invariant to monotone transformations of the response and of the predictors, and works robustly for a general class of survival models. We establish the sure screening property of the proposed methodology. Simulation studies and a lymphoma data analysis demonstrate its favorable performance and practical utility.
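A hedged sketch of a Gehan-type marginal screening utility: for each covariate the Gehan estimating function evaluated at beta = 0 is computed over comparable pairs (an event at time Y_i with Y_i ≤ Y_k), and features are ranked by its absolute value. The function `gehan_screen`, the normalisation and the synthetic data (with covariate-independent censoring, simpler than the heterogeneous censoring the paper targets) are assumptions; the exact screening statistic and sure-screening threshold of the paper are not reproduced.

```python
# Minimal sketch of Gehan-type marginal rank screening for censored data.
import numpy as np

rng = np.random.default_rng(21)

def gehan_screen(X, time, delta, top_d):
    """X: (n, p) covariates; time: observed times; delta: 1 = event, 0 = censored."""
    n, p = X.shape
    # comparable pairs (i, k): subject i had an event and its time is <= subject k's
    comp = (delta[:, None] == 1) & (time[:, None] <= time[None, :])
    np.fill_diagonal(comp, False)
    utility = np.empty(p)
    for j in range(p):
        diff = X[:, j][:, None] - X[:, j][None, :]     # X_ij - X_kj over all pairs
        utility[j] = np.abs((diff * comp).sum()) / (n * (n - 1))
    return np.argsort(utility)[::-1][:top_d], utility

# toy data: only the first two covariates affect the (log) survival time
n, p = 200, 500
X = rng.normal(size=(n, p))
log_t = 1.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=n)
log_c = rng.normal(loc=1.0, scale=1.0, size=n)   # covariate-independent censoring here
time = np.minimum(np.exp(log_t), np.exp(log_c))
delta = (log_t <= log_c).astype(int)

selected, _ = gehan_screen(X, time, delta, top_d=10)
print(selected)
```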