161.
This article develops nonparametric tests of independence between two stochastic processes satisfying β-mixing conditions. The testing strategy boils down to gauging the closeness between the joint and the product of the marginal stationary densities. For that purpose, we take advantage of a generalized entropic measure so as to build a whole family of nonparametric tests of independence. We derive asymptotic normality and local power using the functional delta method for kernels. As a corollary, we also develop a class of entropy-based tests for serial independence. The latter are nuisance-parameter free, and hence also qualify for dynamic misspecification analyses. We then investigate the finite-sample properties of our serial independence tests through Monte Carlo simulations. They perform quite well, entailing more power against some nonlinear AR alternatives than two popular nonparametric serial-independence tests.
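The entropic testing idea can be illustrated with a small sketch. Assumptions not in the abstract: a plain Kullback-Leibler-type divergence stands in for the generalized entropic measure, Gaussian kernels with a fixed bandwidth are used, and the p-value comes from permutations rather than the asymptotic normal distribution; all function names are hypothetical.

```python
import numpy as np

def kde_gauss(x, grid, h):
    # Gaussian kernel density estimate of a 1-D sample, evaluated on a grid
    return np.mean(np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2),
                   axis=1) / (h * np.sqrt(2 * np.pi))

def serial_dependence_stat(x, h=0.3, m=50):
    """KL-type divergence between the lag-1 joint density estimate and the
    product of the marginal estimates, summed over a grid (illustrative)."""
    a, b = x[:-1], x[1:]
    g = np.linspace(x.min(), x.max(), m)
    fa, fb = kde_gauss(a, g, h), kde_gauss(b, g, h)
    # product-kernel estimate of the joint density of (x_t, x_{t+1})
    ka = np.exp(-0.5 * ((g[:, None] - a[None, :]) / h) ** 2)
    kb = np.exp(-0.5 * ((g[:, None] - b[None, :]) / h) ** 2)
    fj = (ka @ kb.T) / (len(a) * h * h * 2 * np.pi)
    prod = np.outer(fa, fb)
    mask = (fj > 1e-12) & (prod > 1e-12)
    return float(np.sum(fj[mask] * np.log(fj[mask] / prod[mask])))

def perm_pvalue(x, n_perm=199, seed=0):
    """Permutation p-value: shuffling the series destroys serial dependence."""
    rng = np.random.default_rng(seed)
    t0 = serial_dependence_stat(x)
    perm = [serial_dependence_stat(rng.permutation(x)) for _ in range(n_perm)]
    return (1 + sum(t >= t0 for t in perm)) / (n_perm + 1)
```

For serially dependent data the joint density concentrates away from the product of the marginals, so the divergence is large relative to its permutation distribution.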
162.
The power-generalized Weibull probability distribution is very often used in survival analysis, mainly because different values of its parameters allow for various shapes of hazard rate, such as monotone increasing/decreasing, ∩-shaped, ∪-shaped, or constant. Modified chi-squared tests based on maximum likelihood estimators of parameters, which are shown to be √n-consistent, are proposed. Power of these tests against exponentiated Weibull, three-parameter Weibull, and generalized Weibull distributions is studied using Monte Carlo simulations. It is proposed to use the left-tailed rejection region, because these tests are biased with respect to the above alternatives if the right-tailed rejection region is used. It is also shown that the power of the McCulloch test investigated can be two or three times higher than that of the Nikulin–Rao–Robson test with respect to the alternatives considered if expected cell frequencies are about 5.
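One common parameterization of the power-generalized Weibull has survival function S(t) = exp{1 − (1 + (t/σ)^ν)^(1/γ)}. Assuming that form (the abstract does not fix a parameterization), a short sketch shows how ν and γ control the hazard shape; the shape conditions in the docstring follow from the sign of d log h(t)/dt at t → 0 and t → ∞.

```python
import numpy as np

def pgw_survival(t, sigma, nu, gamma):
    """Survival function S(t) = exp(1 - (1 + (t/sigma)**nu)**(1/gamma))."""
    return np.exp(1.0 - (1.0 + (t / sigma) ** nu) ** (1.0 / gamma))

def pgw_hazard(t, sigma, nu, gamma):
    """Hazard h(t) = (nu/(gamma*sigma)) (t/sigma)**(nu-1) (1+(t/sigma)**nu)**(1/gamma-1).

    Sign analysis of d log h / dt gives the shape regions:
      nu = gamma = 1         -> constant hazard (exponential case)
      nu > 1 and nu > gamma  -> increasing;  nu < 1 and nu < gamma -> decreasing
      nu > 1 and nu < gamma  -> cap-shaped;  nu < 1 and nu > gamma -> bathtub
    """
    u = (t / sigma) ** nu
    return (nu / (gamma * sigma)) * (t / sigma) ** (nu - 1) * (1.0 + u) ** (1.0 / gamma - 1.0)

t_grid = np.linspace(0.05, 5.0, 200)  # avoid t = 0, where t**(nu-1) can blow up
```

For example, (ν, γ) = (2, 4) gives a ∩-shaped hazard, while (ν, γ) = (0.5, 0.25) gives a ∪ shape.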
163.
This article develops a new cumulative sum statistic to identify aberrant behavior in a sequentially administered multiple-choice standardized examination. The examination responses can be described as finite Poisson trials, and the statistic can be used for other applications that fit this framework. The standardized examination setting uses a maximum likelihood estimate of examinee ability and an item response theory model. Aberrant and non-aberrant probabilities are computed by an odds ratio, analogous to risk-adjusted CUSUM schemes. The significance level of a hypothesis test, where the null hypothesis is non-aberrant examinee behavior, is computed with Markov chains. A smoothing process is used to spread probabilities across the Markov states. The practicality of the approach to detect aberrant examinee behavior is demonstrated with results from both simulated and empirical data.
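A minimal sketch of the odds-ratio CUSUM idea, assuming the standard risk-adjusted CUSUM score for Bernoulli trials; the odds ratio, threshold, and function name are illustrative choices, and the article's Markov-chain significance computation and smoothing step are not reproduced here.

```python
import numpy as np

def risk_adjusted_cusum(y, p, odds_ratio=2.0, threshold=3.5):
    """CUSUM of Bernoulli log-likelihood-ratio scores.

    y : 0/1 outcomes for a sequence of trials (e.g. item responses)
    p : per-trial event probabilities under the null, e.g. from an
        item-response model evaluated at the estimated ability.
    Under the alternative, the odds of the event are multiplied by
    `odds_ratio`; the per-trial score is the log-likelihood ratio.
    """
    s, path = 0.0, []
    for yt, pt in zip(y, p):
        # LLR of observing yt when p is replaced by R*p / (1 - p + R*p)
        w = yt * np.log(odds_ratio) - np.log(1.0 - pt + odds_ratio * pt)
        s = max(0.0, s + w)          # reflect at zero, as in a one-sided CUSUM
        path.append(s)
    signal = next((i for i, v in enumerate(path) if v > threshold), None)
    return np.array(path), signal
```

Under the null the scores have negative drift and the path hugs zero; a sustained run of aberrant outcomes drives the path over the threshold.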
164.
In this paper a specification strategy is proposed for the determination of the orders in ARMA models. The strategy is based on two newly defined concepts: the q-conditioned partial autoregressive function and the p-conditioned partial moving average function. These concepts are similar to the generalized partial autocorrelation function, which has recently been suggested for order determination. The main difference is that they are defined and employed in connection with an asymptotically efficient estimation method instead of the rather inefficient generalized Yule-Walker method. The specification is performed by using sequential Wald-type tests. In contrast to traditional hypothesis testing, these tests use critical values which increase with the sample size at an appropriate rate.
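The flavor of sequential tests with sample-size-dependent critical values can be sketched on the simpler problem of AR order selection via partial autocorrelations. This is an analogy, not the article's q-/p-conditioned functions, and the √(2 log n) threshold is an illustrative choice of a critical value that grows with n.

```python
import numpy as np

def pacf_levinson(x, max_lag):
    """Partial autocorrelations via the Levinson-Durbin recursion."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] for k in range(max_lag + 1)]) / n
    phi = np.zeros((max_lag + 1, max_lag + 1))
    pacf, v = [], r[0]
    for k in range(1, max_lag + 1):
        a = (r[k] - phi[k - 1, 1:k] @ r[1:k][::-1]) / v
        phi[k, k] = a
        phi[k, 1:k] = phi[k - 1, 1:k] - a * phi[k - 1, 1:k][::-1]
        v *= (1.0 - a * a)
        pacf.append(a)
    return np.array(pacf)

def select_ar_order(x, max_lag=10):
    """Sequential tests with a growing critical value: lag k is retained
    when sqrt(n)*|pacf_k| exceeds sqrt(2*log n), and the selected order is
    the largest retained lag."""
    n = len(x)
    crit = np.sqrt(2.0 * np.log(n))
    stats = np.sqrt(n) * np.abs(pacf_levinson(x, max_lag))
    significant = np.nonzero(stats > crit)[0]
    return 0 if len(significant) == 0 else int(significant[-1]) + 1
```

Because the threshold diverges (slowly) with n, spurious lags are rejected with probability tending to one, which is the point of letting critical values increase with the sample size.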
165.
Two simple tests which allow for unequal sample sizes are considered for testing hypotheses about the common mean of two normal populations. The first is an exact test of size α built from the two single-sample t-statistics, made exact through random allocation of α between the two t-tests. The test statistic of the second test is a weighted average of the two single-sample t-statistics with random weights. It is shown that the first test is more efficient than the two available t-tests with respect to Bahadur asymptotic relative efficiency. It is also shown that the null distribution of the test statistic in the second test, which is similar to the one based on the normalized Graybill-Deal test statistic, converges to a standard normal distribution. Finally, we compare the small-sample properties of these tests, those given in Zhou and Mathew (1993), and some tests given in Cohen and Sackrowitz (1984) in a simulation study. In this study, we find that the second test performs better than the tests given in Zhou and Mathew (1993) and is comparable to the ones given in Cohen and Sackrowitz (1984) with respect to power.
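A sketch of the Graybill-Deal common-mean estimate and its normalized statistic, which the second test resembles; this is the textbook construction, not the article's exact weighted-t statistic.

```python
import numpy as np

def graybill_deal(x, y):
    """Graybill-Deal common-mean estimate: a weighted average of the two
    sample means, with weights inversely proportional to the estimated
    variances of those means."""
    wx = len(x) / np.var(x, ddof=1)
    wy = len(y) / np.var(y, ddof=1)
    mu = (wx * np.mean(x) + wy * np.mean(y)) / (wx + wy)
    return mu, wx, wy

def normalized_gd(x, y, mu0):
    """Normalized Graybill-Deal statistic; approximately standard normal
    under H0: common mean = mu0, for large samples."""
    mu, wx, wy = graybill_deal(x, y)
    return (mu - mu0) * np.sqrt(wx + wy)
```

With x = (0, 2, 4) and y = (1, 2, 3), both sample means are 2, so the weighted average is 2 regardless of the weights, and the normalized statistic at mu0 = 2 is 0.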
166.
In a quantitative linear model with errors following a stationary Gaussian, first-order autoregressive or AR(1) process, Generalized Least Squares (GLS) on raw data and Ordinary Least Squares (OLS) on prewhitened data are efficient methods of estimation of the slope parameters when the autocorrelation parameter of the error AR(1) process, ρ, is known. In practice, ρ is generally unknown. In the so-called two-stage estimation procedures, ρ is estimated first, before the estimate of ρ is used to transform the data and the slope parameters are estimated by OLS on the transformed data. Different estimators of ρ have been considered in previous studies. In this article, we study nine two-stage estimation procedures for their efficiency in estimating the slope parameters. Six of them (i.e., three noniterative, three iterative) are based on three estimators of ρ that have been considered previously. Two more (i.e., one noniterative, one iterative) are based on a new estimator of ρ that we propose: it is provided by the sample autocorrelation coefficient of the OLS residuals at lag 1, denoted r(1). Lastly, REstricted Maximum Likelihood (REML) represents a different type of two-stage estimation procedure whose efficiency has not yet been compared to the others. We also study the validity of the testing procedures derived from GLS and the nine two-stage estimation procedures. Efficiency and validity are analyzed in a Monte Carlo study. Three types of explanatory variable x in a simple quantitative linear model with AR(1) errors are considered in the time domain: Case 1, x is fixed; Case 2, x is purely random; and Case 3, x follows an AR(1) process with the same autocorrelation parameter value as the error AR(1) process. In a preliminary step, the number of inadmissible estimates and the efficiency of the different estimators of ρ are compared empirically, whereas their approximate expected value in finite samples and their asymptotic variance are derived theoretically. Thereafter, the efficiency of the estimation procedures and the validity of the derived testing procedures are discussed in terms of the sample size and the magnitude and sign of ρ. The noniterative two-stage estimation procedure based on the new estimator of ρ is shown to be more efficient for moderate values of ρ at small sample sizes. With the exception of small sample sizes, REML and its derived F-test perform the best overall. The asymptotic equivalence of the two-stage estimation procedures, besides REML, is observed empirically. Differences related to the nature, fixed or random (uncorrelated or autocorrelated), of the explanatory variable are also discussed.
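The noniterative two-stage procedure based on r(1) can be sketched as follows; a Cochrane-Orcutt-style ρ-differencing transformation is assumed for stage 2, and the function name is hypothetical.

```python
import numpy as np

def two_stage_ar1_ols(x, y):
    """Two-stage slope estimation for y = b0 + b1*x + e, with e ~ AR(1).

    Stage 1: OLS on the raw data, then estimate rho by r(1), the lag-1
    sample autocorrelation of the OLS residuals.
    Stage 2: OLS on the rho-differenced data (Cochrane-Orcutt style,
    dropping the first observation).
    """
    X = np.column_stack([np.ones_like(x), x])
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    res = y - X @ beta_ols
    r1 = (res[:-1] @ res[1:]) / (res @ res)     # r(1): the proposed rho estimator
    ys = y[1:] - r1 * y[:-1]
    Xs = np.column_stack([np.full(len(ys), 1.0 - r1), x[1:] - r1 * x[:-1]])
    beta_2s = np.linalg.lstsq(Xs, ys, rcond=None)[0]
    return beta_2s, r1
```

The transformed errors e_t − ρe_{t−1} are (approximately) white noise, so the stage-2 OLS recovers most of the efficiency of GLS with known ρ.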
167.
This article presents a bivariate distribution for analyzing the failure data of mechanical and electrical components in the presence of a forewarning or primer event, whose occurrence denotes the inception of the failure mechanism that will cause the component failure after an additional random time. The characteristics of the proposed distribution are discussed, and several point estimators of the parameters are illustrated and compared, in the case of complete sampling, via a large Monte Carlo simulation study. Confidence intervals based on asymptotic results are derived, and procedures are given for testing the independence between the occurrence time of the forewarning event and the additional time to failure. Numerical applications based on failure data of cable insulation specimens and of two-component parallel systems are illustrated.
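The independence-testing step can be illustrated with a generic sketch. The article's bivariate distribution is not specified here, so the sketch uses a rank-based permutation test of independence between the forewarning time w and the additional time to failure z; the data-generating choices in the usage below are purely illustrative.

```python
import numpy as np

def spearman_rho(a, b):
    """Spearman rank correlation (no tie handling; fine for continuous data)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

def independence_pvalue(w, z, n_perm=499, seed=0):
    """Permutation test of independence between w (forewarning time) and
    z (additional time to failure): permuting z breaks any dependence."""
    rng = np.random.default_rng(seed)
    t0 = abs(spearman_rho(w, z))
    exceed = sum(abs(spearman_rho(w, rng.permutation(z))) >= t0
                 for _ in range(n_perm))
    return (1 + exceed) / (1 + n_perm)
```

For instance, if the additional time is generated as a noisy multiple of the forewarning time, the test should reject independence.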
168.
Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejecting the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is small. Its use is illustrated with examples. We conclude that, given the cost and complexity of many survival trials, it is better to rely on the minimum p-value than on a single statistic, particularly when that single statistic is the logrank test. Copyright © 2013 John Wiley & Sons, Ltd.
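A minimal sketch of the minimum-p-value permutation test, assuming two-sample statistics where larger values indicate a treatment difference. Calibrating the minimum p-value against the same permutation sample, as done below, is one standard implementation choice; the function name is hypothetical.

```python
import numpy as np

def min_p_test(x, y, stats, n_perm=499, seed=0):
    """Permutation test based on the minimum p-value over several
    pre-specified two-sample statistics.

    stats : list of functions mapping (x, y) to a scalar, with larger
            values indicating stronger evidence of a treatment effect.
    Returns the overall p-value and the per-statistic p-values.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    nx, k = len(x), len(stats)
    obs = np.array([s(x, y) for s in stats])
    null = np.array([[s(p[:nx], p[nx:]) for s in stats]
                     for p in (rng.permutation(pooled) for _ in range(n_perm))])
    # per-statistic permutation p-values of the observed statistics
    p_each = (1 + np.sum(null >= obs, axis=0)) / (1 + n_perm)
    # permutation distribution of the minimum p-value: rank each permuted
    # statistic within its own column
    null_p = np.empty((n_perm, k))
    for j in range(k):
        col = null[:, j]
        null_p[:, j] = (1 + np.sum(col[None, :] >= col[:, None], axis=1)) / (1 + n_perm)
    p_final = (1 + np.sum(null_p.min(axis=1) <= p_each.min())) / (1 + n_perm)
    return p_final, p_each
```

Because the candidate statistics are recomputed on every permutation, their correlation is accounted for automatically, which is what keeps the type I error rate at its designated value without a Bonferroni-style penalty.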
169.
For testing the equality of two independent binomial populations, the Fisher exact test and the chi-squared test with Yates's continuity correction are often suggested for small and intermediate sample sizes. The use of these tests is inappropriate in that they are extremely conservative. In this article we demonstrate that, even for small samples, the uncorrected chi-squared test (i.e., the Pearson chi-squared test) and the two-independent-sample t test are robust in that their actual significance levels are usually close to or smaller than the nominal levels. We encourage the use of these latter two tests.
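The claim about actual significance levels can be checked exactly for small samples by enumerating all outcomes under the null; a sketch for the uncorrected chi-squared test follows, where the sample sizes and null proportion are illustrative choices.

```python
import numpy as np
from math import comb, erfc, sqrt

def pearson_chi2_pvalue(a, b, n1, n2):
    """Uncorrected (Pearson) chi-squared p-value for comparing a successes
    out of n1 against b successes out of n2. The chi-squared statistic is
    the square of the pooled two-proportion z statistic, so the p-value is
    the chi2(1) tail probability, computed via the normal tail."""
    p_pool = (a + b) / (n1 + n2)
    if p_pool in (0.0, 1.0):
        return 1.0                      # degenerate table: never reject
    z2 = (a / n1 - b / n2) ** 2 / (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return erfc(sqrt(z2 / 2.0))         # = P(chi2_1 >= z2)

def actual_size(n1, n2, p, alpha=0.05):
    """Exact type I error rate of the uncorrected chi-squared test when
    both samples are Binomial(n_i, p), by enumerating all (a, b) tables."""
    size = 0.0
    for a in range(n1 + 1):
        pa = comb(n1, a) * p ** a * (1 - p) ** (n1 - a)
        for b in range(n2 + 1):
            pb = comb(n2, b) * p ** b * (1 - p) ** (n2 - b)
            if pearson_chi2_pvalue(a, b, n1, n2) <= alpha:
                size += pa * pb
    return size
```

Running `actual_size` over a grid of (n1, n2, p) is a direct way to verify that the actual level of the uncorrected test stays near the nominal 0.05, in contrast to the much smaller actual levels of the corrected tests.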