911.
This article proposes a locally best invariant test of the null hypothesis of seasonal stationarity against the alternative of seasonal unit roots at all or individual seasonal frequencies. An asymptotic distribution theory is derived, and the finite-sample properties of the test are examined in a Monte Carlo simulation. The proposed test is also compared with the Canova and Hansen test and is found to be superior in terms of both size and power.
912.
913.
In this article we propose a nonparametric test for poolability in large-dimensional semiparametric panel data models with cross-section dependence, based on the sieve estimation technique. To construct the test statistic, we only need to estimate the model under the alternative. We establish the asymptotic normal distributions of our test statistic under the null hypothesis of poolability and under a sequence of local alternatives, and we prove the consistency of our test. We also suggest a bootstrap method as an alternative way to obtain the critical values. A small set of Monte Carlo simulations indicates that the test performs reasonably well in finite samples.
914.
A fast and accurate method of confidence interval construction for the smoothing parameter in penalised spline and partially linear models is proposed. The method is akin to a parametric percentile bootstrap in which Monte Carlo simulation is replaced by saddlepoint approximation, and it can therefore be viewed as an approximate bootstrap. It is applicable in a quite general setting, requiring only that the underlying estimator be the root of an estimating equation that is a quadratic form in normal random variables. This is the case under a variety of optimality criteria, such as maximum likelihood (ML), restricted ML (REML), generalized cross-validation (GCV) and Akaike's information criterion (AIC). Simulation studies reveal that under the ML and REML criteria the method delivers near-exact performance with computational speeds an order of magnitude faster than existing exact methods and two orders of magnitude faster than a classical bootstrap. Perhaps most importantly, the proposed method also offers a computationally feasible alternative when no exact or asymptotic methods are known, e.g. under GCV and AIC. The methodology is illustrated with an application to the well-known fossil data, where giving a range of plausible smoothed values can help answer questions about the statistical significance of apparent features in the data.
915.
For Canada's boreal forest region, accurate modelling of the timing of the appearance of aspen leaves is important to forest fire management, as it signifies the end of the spring fire season that occurs after snowmelt. This article compares two methods, a midpoint rule and a conditional expectation method, for estimating the true flush date from interval-censored data collected at a large set of fire-weather stations in Alberta, Canada. The conditional expectation method uses the interval-censored kernel density estimator of Braun, Duchesne, and Stafford (2005, Canadian Journal of Statistics 33: 39-60). The methods are compared via simulation, in which true flush dates were generated from a normal distribution and then converted into intervals by adding and subtracting exponential random variables. The simulation parameters were estimated from the data set, and several scenarios were considered. The study reveals that the conditional expectation method is never worse than the midpoint method, and that it has a significant advantage when the intervals are large. An illustration of the methodology applied to the Alberta data set is also provided.
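As a rough illustration of the simulation design described above, the sketch below (Python, with purely illustrative values for the mean, standard deviation, and exponential rate) generates true flush dates from a normal distribution, censors them into intervals by subtracting and adding exponential variables, and evaluates the midpoint rule by its root mean square error. The conditional expectation method requires the interval-censored kernel density estimator of Braun et al. (2005) and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def midpoint_rmse(n_stations=100, n_reps=1000, mu=135.0, sigma=7.0, rate=0.2):
    """Sketch of the simulation design: normal flush dates censored into
    intervals by exponential left/right offsets; only the midpoint rule is
    evaluated.  All parameter values are illustrative placeholders."""
    rmses = []
    for _ in range(n_reps):
        true_dates = rng.normal(mu, sigma, size=n_stations)
        left = true_dates - rng.exponential(1.0 / rate, size=n_stations)
        right = true_dates + rng.exponential(1.0 / rate, size=n_stations)
        midpoints = (left + right) / 2.0   # midpoint-rule estimate of each flush date
        rmses.append(np.sqrt(np.mean((midpoints - true_dates) ** 2)))
    return np.mean(rmses)

print("average RMSE of the midpoint rule:", midpoint_rmse())
```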
916.
Hidden semi-Markov models (HSMMs) were introduced to overcome the constraint of a geometric sojourn time distribution for the different hidden states in classical hidden Markov models. Several variants of HSMMs have been proposed that model the sojourn times by a parametric or a nonparametric family of distributions. In this article, we concentrate on the nonparametric case in which the duration distributions are attached to transitions rather than to states, as in most published papers on HSMMs; the underlying hidden semi-Markov chain is therefore treated in its general probabilistic structure. For that case, Barbu and Limnios (2008, Semi-Markov Chains and Hidden Semi-Markov Models Toward Applications: Their Use in Reliability and DNA Analysis, Springer) proposed an Expectation-Maximization (EM) algorithm for estimating the semi-Markov kernel and the emission probabilities that characterize the dynamics of the model. In this article, we consider an improved version of Barbu and Limnios' EM algorithm that is faster than the original one. Moreover, we propose a stochastic version of the EM algorithm that achieves estimates comparable to those of the EM algorithm in less execution time. Numerical examples are provided that illustrate the efficient performance of the proposed algorithms.
917.
The empirical likelihood (EL) technique has been well addressed in both the theoretical and applied literature in the context of powerful nonparametric statistical methods for testing and interval estimation. A nonparametric version of Wilks' theorem (Wilks, 1938, Annals of Mathematical Statistics 9: 60-62) usually provides an asymptotic evaluation of the Type I error of EL ratio-type tests. In this article, we examine the performance of this asymptotic result when the EL is based on finite samples from various distributions. In the context of Type I error control, we show that the classical EL procedure and Student's t-test have asymptotically similar structures. We therefore conclude that modifications of t-type tests can be adopted to improve the EL ratio test. We propose applying the t-test modification of Chen (1995, Journal of the American Statistical Association 90: 767-772) to the EL ratio test. We show that the Chen approach amounts to a location change of the observed data, whereas the classical Bartlett method is known to be a scale correction of the data distribution. Finally, we modify the EL ratio test via both the Chen and Bartlett corrections. We support our argument with theoretical proofs as well as a Monte Carlo study. A real data example illustrates the proposed approach in practice.
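For readers unfamiliar with the baseline procedure the article modifies, the following minimal Python sketch implements the classical EL ratio test for a mean with the usual chi-square(1) (Wilks-type) calibration; the Chen and Bartlett corrections studied in the article are not reproduced here.

```python
import numpy as np
from scipy import optimize, stats

def el_ratio_test(x, mu0):
    """Classical empirical likelihood ratio test of H0: E[X] = mu0."""
    z = np.asarray(x, dtype=float) - mu0
    if z.min() >= 0 or z.max() <= 0:
        # mu0 lies outside the convex hull of the data: EL ratio is zero
        return np.inf, 0.0
    # Lagrange multiplier lam solves sum z_i / (1 + lam * z_i) = 0,
    # restricted to the range that keeps all implied weights positive
    lo, hi = -1.0 / z.max() + 1e-10, -1.0 / z.min() - 1e-10
    lam = optimize.brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    stat = 2.0 * np.sum(np.log1p(lam * z))   # -2 log EL ratio
    return stat, stats.chi2.sf(stat, df=1)   # asymptotic chi-square(1) p-value
```

For example, `el_ratio_test(np.random.default_rng(0).exponential(size=30), 1.0)` returns the statistic and its asymptotic p-value; the article's point is that this chi-square calibration can be inaccurate for small samples from skewed distributions, which motivates the t-type corrections.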
918.
The good performance of logit confidence intervals for the odds ratio with small samples is well known, provided the actual odds ratio is not very large. In single capture-recapture estimation the odds ratio equals 1 because of the assumed independence of the samples. Consequently, a transformation of the logit confidence interval for the odds ratio is proposed for estimating the size of a closed population under single capture-recapture estimation. It is found that the transformed logit interval, after adding 0.5 to each observed count before computation, has actual coverage probabilities near the nominal level even for small populations and for capture probabilities near 0 or 1, which is not guaranteed for the other capture-recapture confidence intervals proposed in the statistical literature. Since the 0.5-transformed logit interval is very simple to compute and performs well, it is well suited for routine use by practitioners of the single capture-recapture method.
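The building block behind this proposal is the familiar Woolf-type logit interval for an odds ratio with 0.5 added to each cell of the 2x2 capture table. The sketch below shows only that building block; the article's specific transformation of this interval into a population-size interval is not reproduced.

```python
import numpy as np
from scipy import stats

def logit_or_ci(a, b, c, d, level=0.95, add_half=True):
    """Woolf logit confidence interval for the odds ratio of a 2x2 table,
    optionally adding 0.5 to each cell before computation."""
    if add_half:
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    log_or = np.log(a * d / (b * c))
    se = np.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)
    z = stats.norm.ppf(0.5 + level / 2.0)
    return np.exp(log_or - z * se), np.exp(log_or + z * se)
```

In the single capture-recapture setting, roughly, a counts the animals caught in both samples, b and c the animals caught in only one sample, and d the unseen animals whose number is the target of estimation; the independence assumption fixes the odds ratio at 1, which is what the article exploits.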
919.
In this article, we implement the Regression Method for estimating the parameters (d1, d2) of the FISSAR(1, 1) model; d1 and d2 can also be estimated by Whittle's method. We compute the estimated bias, standard error, and root mean square error (RMSE) in a simulation study, and compare the Regression Method estimates of d1 and d2 with those obtained by Whittle's method. The simulation study shows that the Regression Method is preferable to Whittle's estimator in the sense that it yields smaller RMSE values.
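As a hedged, one-dimensional analogue of a regression-type estimator of a fractional-differencing parameter, the sketch below implements the standard log-periodogram (GPH) regression for a single series; the article's actual two-dimensional procedure for the FISSAR(1, 1) parameters d1 and d2 is not reproduced here.

```python
import numpy as np

def gph_estimate(x, bandwidth_power=0.5):
    """Log-periodogram (GPH) regression estimate of a fractional-differencing
    parameter d for a one-dimensional series (illustrative analogue only)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = int(n ** bandwidth_power)        # number of low Fourier frequencies used
    freqs = 2.0 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - x.mean())
    pgram = np.abs(dft[1:m + 1]) ** 2 / (2.0 * np.pi * n)   # periodogram ordinates
    regressor = np.log(4.0 * np.sin(freqs / 2.0) ** 2)
    slope = np.polyfit(regressor, np.log(pgram), 1)[0]
    return -slope                        # estimate of d
```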
920.
Conditional confidence intervals for the location parameter of the double exponential distribution, based on maximum likelihood estimators conditioned on a set of ancillary statistics, are compared in two ways with the corresponding unconditional confidence intervals based on the maximum likelihood estimators alone. Monte Carlo techniques are used, and the conditional approach appears to give slightly better results, although agreement between the two as n becomes larger is noted.
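The unconditional side of this comparison is straightforward to sketch: the MLEs of the double exponential (Laplace) location and scale are the sample median and the mean absolute deviation about it, giving a Wald-type interval whose coverage can be checked by Monte Carlo, as below. This is a sketch under those standard facts only; the conditional intervals based on ancillary statistics are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def unconditional_ci(x, level=0.95):
    """Wald-type interval for the Laplace location based on the MLEs alone."""
    n = len(x)
    mu_hat = np.median(x)                  # MLE of location
    b_hat = np.mean(np.abs(x - mu_hat))    # MLE of scale
    z = stats.norm.ppf(0.5 + level / 2.0)
    half = z * b_hat / np.sqrt(n)          # asymptotic s.e. of the median is b / sqrt(n)
    return mu_hat - half, mu_hat + half

# rough Monte Carlo check of coverage at n = 20
hits = sum(lo <= 0.0 <= hi
           for lo, hi in (unconditional_ci(rng.laplace(0.0, 1.0, 20)) for _ in range(2000)))
print("estimated coverage:", hits / 2000)
```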