By access:
  Subscription full text: 3,975
  Free: 82
  Free (domestic): 15
By discipline:
  Management: 207
  Ethnology: 1
  Population studies: 39
  Collected works: 38
  Theory and methodology: 23
  Interdisciplinary: 378
  Sociology: 29
  Statistics: 3,357
By publication year:
  2024: 1     2023: 22    2022: 37    2021: 24    2020: 71
  2019: 149   2018: 160   2017: 269   2016: 125   2015: 84
  2014: 118   2013: 1,150 2012: 353   2011: 107   2010: 125
  2009: 134   2008: 126   2007: 93    2006: 94    2005: 91
  2004: 81    2003: 68    2002: 73    2001: 65    2000: 60
  1999: 60    1998: 54    1997: 42    1996: 24    1995: 20
  1994: 28    1993: 19    1992: 23    1991: 8     1990: 15
  1989: 9     1988: 17    1987: 8     1986: 7     1985: 4
  1984: 13    1983: 13    1982: 6     1981: 5     1980: 1
  1979: 6     1978: 5     1977: 2     1975: 2     1973: 1
Sort order: 4,072 results found (search time: 15 ms)
621.
Josef Kozák 《Statistics》2013,47(3):363-371
Working with the linear regression model (1.1) and extraneous information (1.2) about the regression coefficients, the problem is how to construct estimators (1.3) with risk (1.4) that exploit the known information so as to reduce their risk relative to the risk (1.6) of the LSE (1.5). The solution to this problem is known for a positive definite matrix T, namely the estimators (1.8) and (1.10). First, it is shown that the proposed estimators (2.6), (2.9), and (2.16), based on pseudoinverses of the matrix L, solve the problem for a positive semidefinite matrix T = L'L. Further, there is the question of the interpretability of the estimators in the sense of inequality (3.1); it is shown that all the estimators mentioned are at least partially interpretable in the sense of requirements (3.2) or (3.10).
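The abstract's numbered equations are not reproduced in this listing, so the following is only a minimal sketch under one standard reading: the extraneous information enters as (possibly rank-deficient) exact linear restrictions Rβ = r, and the restricted least squares estimator uses a Moore-Penrose pseudoinverse where an ordinary inverse would fail. The design, R, and r below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated design: y = X beta + noise, with known linear restrictions R beta = r.
n, p = 100, 4
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5, 0.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Hypothetical rank-deficient restriction: beta_4 = 0, stated twice.
R = np.array([[0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 2.0]])
r = np.zeros(2)

XtX_inv = np.linalg.inv(X.T @ X)
beta_ls = XtX_inv @ X.T @ y

# Restricted LSE; the pseudoinverse handles the positive semidefinite
# (rank-deficient) case that the abstract addresses via T = L'L.
M = R @ XtX_inv @ R.T
beta_rls = beta_ls - XtX_inv @ R.T @ np.linalg.pinv(M) @ (R @ beta_ls - r)

print("LSE:       ", beta_ls.round(3))
print("restricted:", beta_rls.round(3))
```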
622.
For Canada's boreal forest region, accurate modelling of the timing of the appearance of aspen leaves is important to forest fire management, as it signifies the end of the spring fire season that follows snowmelt. This article compares two methods, a midpoint rule and a conditional expectation method, for estimating the true flush date from interval-censored data recorded at a large set of fire-weather stations in Alberta, Canada. The conditional expectation method uses the interval-censored kernel density estimator of Braun et al. (2005). The methods are compared via simulation, where true flush dates were generated from a normal distribution and then converted into intervals by adding and subtracting exponential random variables. The simulation parameters were estimated from the data set, and several scenarios were considered. The study reveals that the conditional expectation method is never worse than the midpoint method, and that it has a significant advantage when the intervals are large. An illustration of the methodology applied to the Alberta data set is also provided.
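A minimal simulation sketch of the comparison described above, with hypothetical parameter values; for simplicity, the true normal density stands in for the interval-censored kernel density estimator of Braun et al. (2005) when computing the conditional expectation.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu, sigma, rate, n = 135.0, 6.0, 0.25, 5000   # hypothetical values

t = rng.normal(mu, sigma, n)                  # true flush dates (day of year)
lo = t - rng.exponential(1 / rate, n)         # interval endpoints
hi = t + rng.exponential(1 / rate, n)

# Midpoint rule.
mid = (lo + hi) / 2

# Conditional expectation E[T | lo <= T <= hi] under the (here: true) density,
# i.e. the mean of a normal truncated to the observed interval.
a, b = (lo - mu) / sigma, (hi - mu) / sigma
cond = mu + sigma * (norm.pdf(a) - norm.pdf(b)) / (norm.cdf(b) - norm.cdf(a))

rmse = lambda est: np.sqrt(np.mean((est - t) ** 2))
print(f"RMSE midpoint: {rmse(mid):.3f}  conditional expectation: {rmse(cond):.3f}")
```

Lengthening the intervals (a smaller exponential rate) widens the gap in favor of the conditional expectation method in this toy setup, consistent with the abstract's finding.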
623.
In this article, we implement the regression method for estimating (d1, d2) of the FISSAR(1, 1) model. It is also possible to estimate d1 and d2 by Whittle's method. We compute the estimated bias, standard error, and root mean square error in a simulation study, comparing the regression method of estimating d1 and d2 with Whittle's method. The simulation study found the regression estimator to be better than Whittle's estimator, in the sense that it had smaller root mean square error (RMSE) values.
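The FISSAR(1, 1) regression estimator of (d1, d2) is two-dimensional; as a simpler stand-in, the sketch below shows the same log-periodogram regression idea in one dimension (the GPH estimator) on a simulated ARFIMA(0, d, 0) series. The bandwidth m = sqrt(n) and all parameter values are illustrative assumptions, not the article's choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def arfima_d(n, d, rng):
    """Simulate ARFIMA(0, d, 0) via its MA(inf) expansion, psi_j = psi_{j-1}(j-1+d)/j."""
    k = np.arange(1, n)
    psi = np.cumprod(np.concatenate(([1.0], (k - 1 + d) / k)))
    eps = rng.normal(size=2 * n)
    return np.convolve(eps, psi)[n:2 * n]          # drop the burn-in

def gph(x, m):
    """Log-periodogram (GPH) regression estimate of d from the first m frequencies."""
    n = len(x)
    I = np.abs(np.fft.fft(x - x.mean())) ** 2 / (2 * np.pi * n)
    w = 2 * np.pi * np.arange(1, m + 1) / n
    reg = np.log(4 * np.sin(w / 2) ** 2)           # regressor; slope estimates -d
    slope = np.polyfit(reg, np.log(I[1:m + 1]), 1)[0]
    return -slope

x = arfima_d(2000, d=0.3, rng=rng)
print(f"d-hat = {gph(x, m=int(2000 ** 0.5)):.3f}  (true d = 0.3)")
```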
624.
The good performance of logit confidence intervals for the odds ratio with small samples is well known. This is true unless the actual odds ratio is very large. In single capture–recapture estimation the odds ratio equals 1 because of the assumed independence of the samples. Consequently, a transformation of the logit confidence interval for the odds ratio is proposed for estimating the size of a closed population under single capture–recapture estimation. It is found that the transformed logit interval, after adding 0.5 to each observed count before computation, has actual coverage probabilities close to the nominal level even for small populations and even for capture probabilities near 0 or 1, which is not guaranteed for the other capture–recapture confidence intervals proposed in the statistical literature. Given that the 0.5-transformed logit interval is very simple to compute and performs well, it is well suited for implementation by most users of the single capture–recapture method.
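A sketch of one plausible reading of the proposed interval: under independence the unobserved cell is n00 = n10*n01/n11, and a Woolf-type logit interval for the odds ratio translates into multiplicative limits for n00, and hence for N. The counts and the exact form of the standard error below are assumptions for illustration; the article's own formula may differ in detail.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical counts from two capture occasions: n11 caught on both,
# n10 only on the first, n01 only on the second.
n11, n10, n01 = 30, 45, 60
a, b, c = n11 + 0.5, n10 + 0.5, n01 + 0.5      # add 0.5 before computation
z = norm.ppf(0.975)

# Independence (odds ratio = 1) gives the unseen cell n00 = n10*n01/n11;
# a Woolf-style logit interval for the odds ratio, transformed to n00.
n00 = b * c / a
se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / n00)
lo, hi = n00 * np.exp(-z * se), n00 * np.exp(z * se)

n_obs = n11 + n10 + n01
print(f"N-hat = {n_obs + n00:.1f}, 95% CI = ({n_obs + lo:.1f}, {n_obs + hi:.1f})")
```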
625.
The empirical likelihood (EL) technique has been well addressed in both the theoretical and applied literature as a powerful nonparametric statistical method for testing and interval estimation. A nonparametric version of Wilks' theorem (Wilks, 1938) can usually provide an asymptotic evaluation of the Type I error of EL ratio-type tests. In this article, we examine the performance of this asymptotic result when the EL is based on finite samples drawn from various distributions. In the context of Type I error control, we show that the classical EL procedure and Student's t-test have asymptotically similar structure. Thus, we conclude that modifications of t-type tests can be adopted to improve the EL ratio test. We propose applying the t-test modification of Chen (1995) to the EL ratio test. We show that the Chen approach amounts to a location change of the observed data, whereas the classical Bartlett method is a scale correction of the data distribution. Finally, we modify the EL ratio test via both the Chen and Bartlett corrections. We support our argument with theoretical proofs as well as a Monte Carlo study. A real data example illustrates the proposed approach in practice.
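For reference, a minimal sketch of the classical EL ratio test of a mean, the object being corrected above; the Chen and Bartlett corrections themselves are not reproduced, and the sample and significance level are hypothetical.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_ratio_test(x, mu0):
    """Empirical likelihood ratio test of H0: E[X] = mu0 (Owen-style)."""
    n, z = len(x), x - mu0
    if z.min() >= 0 or z.max() <= 0:
        return np.inf, 0.0                       # mu0 outside the convex hull
    g = lambda lam: np.sum(z / (1 + lam * z))    # score equation for the multiplier
    eps = 1e-10
    lo = (1 / n - 1) / z.max() + eps             # bracket keeping all weights positive
    hi = (1 / n - 1) / z.min() - eps
    lam = brentq(g, lo, hi)
    stat = 2 * np.sum(np.log1p(lam * z))         # -2 log EL ratio
    return stat, chi2.sf(stat, df=1)

rng = np.random.default_rng(3)
x = rng.exponential(1.0, 50)                     # skewed sample, true mean 1
stat, p = el_ratio_test(x, mu0=1.0)
print(f"-2 log R = {stat:.3f}, p-value = {p:.3f}")
```

A Bartlett-type correction would rescale this statistic by an estimated factor before comparison with the chi-squared quantile, whereas a Chen-type modification shifts the location of the observed data, matching the contrast drawn in the abstract.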
626.
Hidden semi-Markov models (HSMMs) were introduced to overcome the constraint of a geometric sojourn-time distribution for the hidden states in classical hidden Markov models. Several variations of HSMMs have been proposed that model the sojourn times by a parametric or a nonparametric family of distributions. In this article, we concentrate on the nonparametric case where the duration distributions are attached to transitions rather than to states, unlike most published work on HSMMs; the underlying hidden semi-Markov chain is therefore treated in its general probabilistic structure. For that case, Barbu and Limnios (2008) proposed an Expectation–Maximization (EM) algorithm to estimate the semi-Markov kernel and the emission probabilities that characterize the dynamics of the model. In this article, we consider an improved version of Barbu and Limnios' EM algorithm which is faster than the original one. Moreover, we propose a stochastic version of the EM algorithm that achieves comparable estimates in less execution time. Numerical examples are provided that illustrate the efficient performance of the proposed algorithms.
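A full EM implementation is too long to sketch here; the snippet below instead illustrates the model structure the abstract emphasizes: sojourn distributions attached to transitions (i, j) of the embedded chain rather than to states, matching a semi-Markov kernel of the form q_ij(k) = p_ij f_ij(k). All matrices and pmfs are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

P = np.array([[0.0, 1.0],                  # embedded Markov chain (here: alternating)
              [1.0, 0.0]])
# Nonparametric sojourn pmfs on durations 1..4, one per transition (i, j).
F = {(0, 1): [0.1, 0.4, 0.4, 0.1],
     (1, 0): [0.5, 0.3, 0.1, 0.1]}
B = np.array([[0.9, 0.1],                  # emission probabilities per hidden state
              [0.2, 0.8]])

def simulate_hsmm(T):
    s, obs, states = 0, [], []
    while len(obs) < T:
        j = rng.choice(2, p=P[s])                 # next state of the embedded chain
        d = 1 + rng.choice(4, p=F[(s, j)])        # sojourn drawn per transition (s, j)
        for _ in range(d):
            states.append(s)
            obs.append(rng.choice(2, p=B[s]))
        s = j
    return np.array(states[:T]), np.array(obs[:T])

states, obs = simulate_hsmm(200)
print("first 20 states:", states[:20])
print("first 20 obs:   ", obs[:20])
```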
627.
Sliced regression is an effective dimension reduction method that replaces the original high-dimensional predictors with an appropriate low-dimensional projection. It is free from probabilistic assumptions and can exhaustively estimate the central subspace. In this article, we propose incorporating shrinkage estimation into sliced regression so that variable selection is achieved simultaneously with dimension reduction. The new method improves estimation accuracy and achieves better interpretability of the reduced variables. The efficacy of the proposed method is shown through both simulation and real data analysis.
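Sliced regression itself is not reproduced here; as a simpler illustration of the slicing idea it builds on, the sketch below implements classical sliced inverse regression (SIR). Note that SIR, unlike sliced regression, is not exhaustive (it misses directions along which the regression is symmetric), which is one motivation for the method in the abstract. The data and slice count are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

def sir_directions(X, y, n_slices=10):
    """Sliced inverse regression (Li, 1991): slice y, average the standardized
    predictors within slices, and eigen-decompose the between-slice covariance."""
    n, p = X.shape
    Xc = X - X.mean(0)
    U, s, Vt = np.linalg.svd(np.cov(X.T))
    root_inv = Vt.T @ np.diag(1 / np.sqrt(s)) @ Vt   # Sigma^{-1/2}
    Z = Xc @ root_inv
    slices = np.array_split(np.argsort(y), n_slices)
    M = sum(len(ix) / n * np.outer(Z[ix].mean(0), Z[ix].mean(0)) for ix in slices)
    vals, vecs = np.linalg.eigh(M)
    # Back-transform the leading directions to the original X scale.
    return root_inv @ vecs[:, ::-1], vals[::-1]

n, p = 500, 6
X = rng.normal(size=(n, p))
y = (X[:, 0] + X[:, 1]) ** 3 + rng.normal(size=n)    # central subspace: span(e1 + e2)
dirs, vals = sir_directions(X, y)
print("leading direction:", (dirs[:, 0] / np.abs(dirs[:, 0]).max()).round(2))
```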
628.
Stationary long memory processes have been extensively studied over the past decades. When we deal with financial, economic, or environmental data, seasonality and time-varying long-range dependence can often be observed, so some kind of non-stationarity exists. To take this phenomenon into account, we propose a new class of stochastic processes: the locally stationary k-factor Gegenbauer process. We present a procedure that consistently estimates the time-varying parameters by applying the discrete wavelet packet transform, and we investigate the robustness of the algorithm through a simulation study. Finally, we apply the method to the Nikkei Stock Average 225 (NSA 225) index series.
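A sketch of simulating a one-factor stationary Gegenbauer process by truncating its MA(∞) representation; the coefficients are the Gegenbauer polynomials C_j^(d)(u), computed by their standard three-term recurrence. In the article's locally stationary model, d (and possibly u) would vary with time, and the wavelet-packet estimation step is not reproduced. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def gegenbauer_weights(d, u, m):
    """MA coefficients of (1 - 2uB + B^2)^(-d): the Gegenbauer polynomials
    C_j^(d)(u), via the standard three-term recurrence."""
    psi = np.empty(m)
    psi[0], psi[1] = 1.0, 2.0 * d * u
    for j in range(2, m):
        psi[j] = (2 * u * (j + d - 1) * psi[j - 1] - (j + 2 * d - 2) * psi[j - 2]) / j
    return psi

# One-factor stationary Gegenbauer process with fixed (d, u); the spectral
# peak sits at the Gegenbauer frequency arccos(u), giving seasonal long memory.
d, u, n, m = 0.3, np.cos(np.pi / 5), 1000, 500
psi = gegenbauer_weights(d, u, m)
eps = rng.normal(size=n + m)
x = np.convolve(eps, psi)[m:m + n]     # truncated MA(inf), burn-in discarded
print("sample variance:", x.var().round(3))
```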
629.
This article estimates the parameters of the Weibull distribution in step-stress partially accelerated life tests under multiply censored data. In a step partially accelerated life test, all test units are first run simultaneously under normal conditions for a pre-specified time, and the surviving units are then run under accelerated conditions until a predetermined censoring time. Maximum likelihood estimation is used to obtain the parameters of the Weibull distribution and the acceleration factor under multiply censored data; in addition, confidence intervals for the estimators are derived. Simulation results show that the maximum likelihood estimates perform well in most cases in terms of mean bias, root mean square error, and coverage rate. An example illustrates the performance of the proposed approach.
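A minimal sketch under a tampered-random-variable reading of the step partially accelerated life test: a unit's remaining life after the stress-change time tau is divided by an acceleration factor, and the likelihood back-transforms observed times to the use-condition scale. The data generation and censoring scheme are hypothetical; the article's exact formulation may differ.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Step-PALT: run at use condition until tau, then accelerate;
# total life Y = T if T <= tau else tau + (T - tau)/lam.
shape, scale, lam, tau, n = 1.5, 100.0, 2.0, 60.0, 200
T = scale * rng.weibull(shape, n)
Y = np.where(T <= tau, T, tau + (T - tau) / lam)
cens = rng.uniform(tau, 3 * tau, n)              # hypothetical censoring times
t, delta = np.minimum(Y, cens), Y <= cens        # observed time, failure indicator

def negloglik(par):
    b, s, a = np.exp(par)                        # shape, scale, acceleration factor
    x = np.where(t <= tau, t, tau + a * (t - tau))   # use-condition time scale
    logf = (np.log(b / s) + (b - 1) * np.log(x / s) - (x / s) ** b
            + np.where(t > tau, np.log(a), 0.0))     # Jacobian term after tau
    logS = -(x / s) ** b
    return -np.sum(np.where(delta, logf, logS))

res = minimize(negloglik, np.log([1.0, 80.0, 1.5]), method="Nelder-Mead")
print("MLE (shape, scale, lambda):", np.exp(res.x).round(3))
```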
630.
In this paper we present a generalized functional form estimator, recently developed by Jeffrey Wooldridge, and compare it empirically to the popular Box–Cox (BC) estimator using three data sets. We begin by briefly reviewing the drawbacks of the BC estimator. We then introduce Wooldridge's nonlinear least squares (NLS) alternative, which retains the desirable qualities of the BC estimator without the associated theoretical problems. We continue by applying both the BC and the NLS models to data from three classic hedonic regression studies and then compare the estimation results: point estimates, inferences, and fitted values. The estimations include a wage rate equation and two computer hedonic regression equations, one using data from a classic study by Gregory Chow and the other using an IBM data set that formed the basis of the new official BLS computer price index.
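Neither the data sets nor Wooldridge's exact specification is reproduced in this listing; the sketch below contrasts a classical Box–Cox fit (transform y, then OLS) with an NLS fit of a conditional mean of the form E[y|x] = (1 + lam * x'b)^(1/lam), which is one common reading of the Wooldridge-type alternative. All data are simulated and hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import boxcox

rng = np.random.default_rng(8)

# Simulated positive response whose conditional mean is (1 + 0.5*(b0 + b1*x))^2,
# i.e. a Box-Cox-type relationship with lambda = 0.5 (all values hypothetical).
n = 300
x = rng.uniform(0, 2, n)
eta = 1 + 0.5 * (0.5 + 0.8 * x)
y = eta ** 2 * rng.lognormal(0, 0.15, n)

# (1) Classical Box-Cox: estimate lambda by MLE, then OLS on the transformed scale.
y_bc, lam_bc = boxcox(y)
X = np.column_stack([np.ones(n), x])
beta_bc = np.linalg.lstsq(X, y_bc, rcond=None)[0]

# (2) NLS alternative: model the conditional mean of y directly,
#     E[y|x] = (1 + lam*(b0 + b1*x))^(1/lam), fit by least squares on y itself.
def sse(par):
    lam, b0, b1 = par
    m = 1 + lam * (b0 + b1 * x)
    if abs(lam) < 1e-8 or np.any(m <= 0):
        return 1e12                      # keep the mean function well defined
    return np.sum((y - m ** (1 / lam)) ** 2)

res = minimize(sse, x0=[0.3, 0.4, 0.6], method="Nelder-Mead")
print("Box-Cox lambda:", round(lam_bc, 3))
print("NLS (lam, b0, b1):", res.x.round(3))
```

Fitting the mean of y directly avoids the retransformation problem that arises when inference is carried back from the Box–Cox transformed scale to the original scale.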