2,185 results found (search time: 31 ms)
11.
The non-central chi-squared distribution plays a vital role in statistical testing procedures. Estimation of the non-centrality parameter provides valuable information for the power calculation of the associated test. We are interested in the statistical inference properties of the non-centrality parameter estimate based on one observation (usually a summary statistic) from a truncated chi-squared distribution. This work is motivated by the application of the flexible two-stage design in case–control studies, where the sample size needed for the second stage can be determined adaptively from the results of the first stage. We first study the moment estimate for the truncated distribution and prove its existence, uniqueness, inadmissibility, and convergence properties. We then define a new class of estimates that includes the moment estimate as a special case. Within this class, we recommend one member that outperforms the moment estimate over a wide range of scenarios. We also present two methods for constructing confidence intervals. Simulation studies are conducted to evaluate the performance of the proposed point and interval estimates.
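As a point of reference, the classical (untruncated) moment estimate follows from E[X] = k + λ for X ~ χ²_k(λ): subtract the degrees of freedom and truncate at zero. The sketch below checks this on simulated data; the simulation set-up is ours and does not reproduce the paper's truncated-distribution estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
df, lam = 5, 10.0

# Simulate X ~ chi-squared_df(lam) as a sum of df squared normals, with
# the whole non-centrality placed on the first coordinate.
means = np.zeros(df)
means[0] = np.sqrt(lam)
z = rng.normal(loc=means, scale=1.0, size=(100_000, df))
x = (z ** 2).sum(axis=1)

def moment_estimate(x, df):
    """Untruncated moment estimate of lambda, from E[X] = df + lambda."""
    return np.maximum(x - df, 0.0)

est = moment_estimate(x, df)
print(9.5 < est.mean() < 11.0)  # averages out close to the true lam = 10
```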
12.
The use of covariates in block designs is necessary when the covariates cannot be controlled like the blocking factor in the experiment. In this paper, we consider the situation where there is some flexibility in selecting the values of the covariates. The choice of covariate values that attains minimum variance for the estimation of each parameter in a given block design has attracted attention in recent times. Optimum covariate designs in simple set-ups such as the completely randomised design (CRD), the randomised block design (RBD) and some series of balanced incomplete block designs (BIBD) have already been considered. In this paper, optimum covariate designs are considered for the more complex set-ups of different partially balanced incomplete block (PBIB) designs, which are popular among practitioners. The optimum covariate designs depend heavily on the method of construction of the underlying PBIB design. Different combinatorial arrangements and tools, such as orthogonal arrays, Hadamard matrices and different kinds of matrix products (the Khatri–Rao product and the Kronecker product), have been conveniently used to construct optimum covariate designs with as many covariates as possible.
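The Kronecker-product tools the abstract mentions can be illustrated with the Sylvester construction, which builds Hadamard matrices of order 2^k; this is a generic illustration, not a reproduction of the paper's specific designs.

```python
import numpy as np

def sylvester_hadamard(k):
    """Hadamard matrix of order 2**k via repeated Kronecker products."""
    H = np.array([[1]])
    H2 = np.array([[1, 1], [1, -1]])
    for _ in range(k):
        H = np.kron(H, H2)
    return H

H = sylvester_hadamard(3)                 # order-8 Hadamard matrix
print((H @ H.T == 8 * np.eye(8)).all())   # rows are mutually orthogonal
```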
13.
Continuous non-Gaussian stationary processes of the OU type are becoming increasingly popular, given their flexibility in modelling stylized features of financial series such as asymmetry, heavy tails and jumps. The use of non-Gaussian marginal distributions makes likelihood analysis of these processes infeasible in virtually all cases of interest. This paper exploits the self-decomposability of the marginal laws of OU processes to provide explicit expressions for the characteristic function, which can be applied to several models as well as used to develop efficient estimation techniques based on the empirical characteristic function. Extensions to OU-based stochastic volatility models are provided.
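A minimal sketch of matching an empirical characteristic function against a model characteristic function, using a Gaussian OU process (whose stationary characteristic function is known in closed form) purely as an illustration; the paper's interest is in non-Gaussian marginals, which are not simulated here.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma, dt, n = 1.0, 1.0, 0.02, 200_000

# Euler scheme for dX = -theta * X dt + sigma dW (Gaussian OU).
eps = rng.normal(size=n)
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = x[i - 1] - theta * x[i - 1] * dt + sigma * np.sqrt(dt) * eps[i]

t = 1.0
ecf = np.mean(np.exp(1j * t * x[n // 2:]))  # ECF, burn-in discarded
# Stationary law is N(0, sigma^2 / (2 * theta)), so the CF is known:
true_cf = np.exp(-0.5 * t ** 2 * sigma ** 2 / (2 * theta))
print(abs(ecf - true_cf) < 0.1)
```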
14.
15.
This paper is concerned with semiparametric discrete kernel estimators for the case where the unknown count distribution can be considered to have a general weighted Poisson form. The estimator is constructed by multiplying the Poisson estimate by a nonparametric discrete kernel-type estimate of the Poisson weight function. Comparisons are then carried out with ordinary discrete kernel probability mass function estimators. The Poisson weight function thus acts as a local multiplicative correction factor, and departures from a constant (uniform) weight signal departures from the equidispersed Poisson distribution. In this way, the effects of dispersion and zero proportion relative to the standard Poisson distribution are also minimized. The method of estimation is also applied to the weighted binomial form for count distributions with finite support. The proposed estimators, in addition to being simple, easy to implement and effective, also outperform competing nonparametric and parametric estimators in finite-sample situations. Two examples illustrate the new semiparametric estimation method.
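The multiplicative-correction idea can be sketched in its crudest form: fit a Poisson model, then take the ratio of the empirical pmf to the fitted Poisson pmf as the weight. The paper's discrete kernel smoothing step is omitted here; the point of the demo is that for genuinely Poisson data the weight hovers near 1.

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(2)
data = rng.poisson(3.0, size=5000)

lam = data.mean()                         # Poisson MLE of the mean
support = np.arange(data.max() + 1)
poisson_pmf = np.array(
    [exp(-lam) * lam ** int(k) / factorial(int(k)) for k in support]
)
emp_pmf = np.bincount(data) / len(data)

# Local multiplicative correction factor (no kernel smoothing here).
weight = emp_pmf / poisson_pmf

# On the well-estimated part of the support, the weight stays near 1,
# signalling no departure from the equidispersed Poisson.
core = emp_pmf > 0.04
print(bool(np.abs(weight[core] - 1).max() < 0.3))
```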
16.
A broad spectrum of flexible univariate and multivariate models can be constructed by using a hidden truncation paradigm. Such models can be viewed as being characterized by a basic marginal density, a family of conditional densities and a specified hidden truncation point, or points. The resulting class of distributions includes the basic marginal density as a special (or limiting) case, but also includes an array of models that may unexpectedly include many well-known densities. Most of the well-known skew-normal models (developed from the seed distribution popularized by Azzalini [(1985). A class of distributions which includes the normal ones. Scand. J. Statist. 12(2), 171–178]) can be viewed as products of such a hidden truncation construction. However, the many hidden truncation models with non-normal component densities undoubtedly deserve further attention.
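For concreteness, the Azzalini skew-normal arises from exactly this hidden truncation mechanism: keep X from a correlated bivariate standard normal pair (X, Y) only when the hidden variable Y exceeds the truncation point 0. A simulation check with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(3)
delta, n = 0.8, 200_000

# (X, Y) standard bivariate normal with correlation delta; keep X only
# when the hidden variable Y lies above the truncation point 0.
y = rng.normal(size=n)
x = delta * y + np.sqrt(1 - delta ** 2) * rng.normal(size=n)
x_kept = x[y > 0]

# The retained X is skew-normal, with mean delta * sqrt(2 / pi).
theoretical_mean = delta * np.sqrt(2 / np.pi)
print(abs(x_kept.mean() - theoretical_mean) < 0.02)
```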
17.
Heterogeneity of error variance often causes serious interpretive problems in linear regression analysis. Before taking any remedial measures, we first need to detect this problem. A large number of diagnostic plots are available in the literature for detecting heteroscedasticity of error variances. Among them, the 'residuals' versus 'fits' (R–F) plot is very popular and commonly used. In the R–F plot, residuals are plotted against the fitted responses, where both components are obtained by the ordinary least squares (OLS) method. It is well known that OLS fits and residuals are severely affected by unusual observations, and hence the R–F plot may not exhibit the real scenario. Deletion residuals, which are based on a data set free from all unusual cases, should estimate the true errors better than the OLS residuals. In this paper we propose the 'deletion residuals' versus 'deletion fits' (DR–DF) plot for the detection of heterogeneity of error variances in a linear regression model, giving a more convincing and reliable graphical display. Examples show that this plot locates unusual observations more clearly than the R–F plot. The advantage of using deletion residuals in the detection of heteroscedasticity of error variance is investigated through Monte Carlo simulations under a variety of situations.
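Deletion residuals need not be computed by n separate refits: the leave-one-out identity e_i / (1 − h_ii) gives them directly from a single OLS fit via the hat matrix. A sketch on simulated data (the data, not the paper's examples), verifying the identity against a brute-force refit:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix
e = y - H @ y                           # OLS residuals
h = np.diag(H)
deletion_resid = e / (1 - h)            # leave-one-out residuals

# Check case 0 against a brute-force refit without that observation.
Xi, yi = np.delete(X, 0, axis=0), np.delete(y, 0)
beta_i = np.linalg.lstsq(Xi, yi, rcond=None)[0]
print(np.isclose(deletion_resid[0], y[0] - X[0] @ beta_i))
```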
18.
The author considers estimation under a Gamma process model for degradation data. The setting is one in which n independent units, each following a Gamma process with a common shape function and scale parameter, are observed at several, possibly different, times. Covariates can be incorporated into the model by taking the scale parameter as a function of the covariates. The author proposes the maximum pseudo-likelihood method to estimate the unknown parameters; the method requires the use of the Pool Adjacent Violators Algorithm. Asymptotic properties, including consistency, convergence rate and asymptotic distribution, are established. Simulation studies are conducted to validate the method, and its application is illustrated using bridge-beam data and carbon-film resistor data. The Canadian Journal of Statistics 37: 102–118; 2009 © 2009 Statistical Society of Canada
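The Pool Adjacent Violators Algorithm the method relies on can be written in a few lines; this is a plain unweighted version for non-decreasing fits, not the paper's specific implementation.

```python
def pava(y):
    """Non-decreasing sequence closest to y in least squares (unweighted)."""
    blocks = [[v, 1] for v in y]              # [block mean, block size]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:   # violator: pool the two blocks
            m1, n1 = blocks[i]
            m2, n2 = blocks[i + 1]
            blocks[i] = [(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2]
            del blocks[i + 1]
            i = max(i - 1, 0)                 # re-check against the predecessor
        else:
            i += 1
    out = []
    for mean, size in blocks:
        out.extend([mean] * size)
    return out

print(pava([1, 3, 2, 4]))   # [1, 2.5, 2.5, 4]
print(pava([3, 2, 1]))      # [2.0, 2.0, 2.0]
```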
19.
For many diseases, logistic constraints render large incidence studies difficult to carry out. This becomes a drawback, particularly when a new study is needed each time the incidence rate is investigated in a new population. By carrying out a prevalent cohort study with follow-up it is possible to estimate the incidence rate if it is constant. The authors derive the maximum likelihood estimator (MLE) of the overall incidence rate, λ, as well as age-specific incidence rates, by exploiting the epidemiologic relationship (prevalence odds) = (incidence rate) × (mean duration), i.e. P/[1 − P] = λ × µ. The authors establish the asymptotic distributions of the MLEs and provide approximate confidence intervals for the parameters. Moreover, the MLE of λ is asymptotically most efficient and is the natural estimator obtained by substituting the marginal maximum likelihood estimators for P and µ into P/[1 − P] = λ × µ. Following up the subjects allows the authors to develop these widely applicable procedures. The authors apply their methods to data collected as part of the Canadian Study of Health and Ageing to estimate the incidence rate of dementia amongst elderly Canadians. The Canadian Journal of Statistics © 2009 Statistical Society of Canada
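The identity P/[1 − P] = λ × µ gives the plug-in form of the estimator directly: divide the estimated prevalence odds by the estimated mean duration. A sketch with illustrative numbers, not values from the Canadian Study of Health and Ageing:

```python
def incidence_rate(prevalence, mean_duration):
    """lambda = (P / (1 - P)) / mu, from prevalence odds = lambda * mu."""
    return (prevalence / (1 - prevalence)) / mean_duration

# Illustrative numbers: 8% prevalence and a 4-year mean duration.
lam = incidence_rate(0.08, 4.0)
print(round(lam, 4))   # 0.0217 cases per person-year
```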
20.
To enhance modeling flexibility, the authors propose a nonparametric hazard regression model, for which the ordinary and weighted least squares estimation and inference procedures are studied. The proposed model does not assume any parametric specifications on the covariate effects, which is suitable for exploring the nonlinear interactions between covariates, time and some exposure variable. The authors propose the local ordinary and weighted least squares estimators for the varying‐coefficient functions and establish the corresponding asymptotic normality properties. Simulation studies are conducted to empirically examine the finite‐sample performance of the new methods, and a real data example from a recent breast cancer study is used as an illustration. The Canadian Journal of Statistics 37: 659–674; 2009 © 2009 Statistical Society of Canada
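The local least squares idea behind varying-coefficient estimation can be sketched generically: at each target point, fit a kernel-weighted regression so the coefficient is allowed to change smoothly. This toy version uses a plain regression model rather than the paper's hazard model, and the kernel and bandwidth are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
t = rng.uniform(0, 1, n)
x = rng.normal(size=n)
beta_true = np.sin(2 * np.pi * t)             # smoothly varying coefficient
y = beta_true * x + 0.2 * rng.normal(size=n)

def local_beta(t0, h=0.05):
    """Kernel-weighted least squares slope at target point t0."""
    w = np.exp(-0.5 * ((t - t0) / h) ** 2)    # Gaussian kernel weights
    return np.sum(w * x * y) / np.sum(w * x * x)

# beta(0.25) = sin(pi/2) = 1; the local fit should land nearby.
print(abs(local_beta(0.25) - 1.0) < 0.15)
```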
Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)