Full-text access type
Paid full text | 834 articles |
Free | 12 articles |
Free (domestic) | 1 article |
Subject classification
Management | 24 articles |
Demography | 3 articles |
Collected works and series | 3 articles |
Theory and methodology | 2 articles |
General | 29 articles |
Sociology | 2 articles |
Statistics | 784 articles |
Publication year
2023 | 6 articles |
2022 | 6 articles |
2021 | 4 articles |
2020 | 15 articles |
2019 | 23 articles |
2018 | 21 articles |
2017 | 58 articles |
2016 | 12 articles |
2015 | 15 articles |
2014 | 19 articles |
2013 | 317 articles |
2012 | 81 articles |
2011 | 12 articles |
2010 | 14 articles |
2009 | 29 articles |
2008 | 19 articles |
2007 | 24 articles |
2006 | 10 articles |
2005 | 19 articles |
2004 | 14 articles |
2003 | 5 articles |
2002 | 14 articles |
2001 | 10 articles |
2000 | 10 articles |
1999 | 7 articles |
1998 | 8 articles |
1997 | 7 articles |
1996 | 3 articles |
1995 | 5 articles |
1994 | 7 articles |
1993 | 3 articles |
1992 | 5 articles |
1991 | 1 article |
1990 | 6 articles |
1989 | 11 articles |
1988 | 2 articles |
1987 | 2 articles |
1986 | 4 articles |
1985 | 2 articles |
1984 | 1 article |
1983 | 6 articles |
1982 | 2 articles |
1981 | 1 article |
1980 | 1 article |
1979 | 1 article |
1978 | 3 articles |
1976 | 1 article |
1975 | 1 article |
A total of 847 results were found (search time: 15 ms).
91.
D. R. Cox & Man Yu Wong, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2004, 66(2): 395-400
Summary. Given a large number of test statistics, a small proportion of which represent departures from the relevant null hypothesis, a simple rule is given for choosing those statistics that are indicative of departure. It is based on fitting by moments a mixture model to the set of test statistics and then deriving an estimated likelihood ratio. Simulation suggests that the procedure has good properties when the departure from an overall null hypothesis is not too small.
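The moment-fitting idea in the abstract above can be sketched in a few lines. This is a minimal illustration, not the authors' exact procedure: it assumes null statistics are N(0, 1) and non-null ones are N(μ, 1), fits the mixture weight p and shift μ by matching the first two moments, and flags statistics whose estimated likelihood ratio exceeds a chosen odds cut-off. All constants (sample sizes, μ = 3, odds = 9) are made-up choices.

```python
import numpy as np

def fit_mixture_by_moments(t):
    """Fit (1-p)*N(0,1) + p*N(mu,1) by matching the first two moments:
    E[T] = p*mu and E[T^2] = 1 + p*mu^2."""
    m1, m2 = t.mean(), (t ** 2).mean()
    mu = (m2 - 1.0) / m1
    p = m1 / mu
    return p, mu

def flag_departures(t, odds=9.0):
    """Flag statistics whose estimated likelihood ratio exceeds `odds`."""
    p, mu = fit_mixture_by_moments(t)
    phi = lambda z: np.exp(-0.5 * z ** 2)          # unnormalised N(0,1) kernel
    lr = p * phi(t - mu) / ((1 - p) * phi(t))      # estimated likelihood ratio
    return lr > odds

rng = np.random.default_rng(0)
# 950 null statistics plus 50 genuine departures centred at 3
t = np.concatenate([rng.normal(0, 1, 950), rng.normal(3, 1, 50)])
print(flag_departures(t).sum())
```

With a clear shift (μ = 3 here) the rule flags a useful fraction of the true departures while letting through almost no nulls, matching the abstract's caveat that the departure must not be too small.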
92.
Stuart Barber & Guy P. Nason, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2004, 66(4): 927-939
Summary. Wavelet shrinkage is an effective nonparametric regression technique, especially when the underlying curve has irregular features such as spikes or discontinuities. The basic idea is simple: take the discrete wavelet transform of data consisting of a signal corrupted by noise; shrink or remove the wavelet coefficients to remove the noise; then invert the discrete wavelet transform to form an estimate of the true underlying curve. Various researchers have proposed increasingly sophisticated methods of doing this by using real-valued wavelets. Complex-valued wavelets exist but are rarely used. We propose two new complex-valued wavelet shrinkage techniques: one based on multiwavelet style shrinkage and the other using Bayesian methods. Extensive simulations show that our methods almost always give significantly more accurate estimates than methods based on real-valued wavelets. Further, our multiwavelet style shrinkage method is both simpler and dramatically faster than its competitors. To understand the excellent performance of this method we present a new risk bound on its hard thresholded coefficients.
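The transform/shrink/invert recipe described in the abstract can be demonstrated with the simplest real-valued wavelet, the Haar wavelet, and hard thresholding at the universal threshold σ√(2 log n). This is a sketch of the basic pipeline only, not the complex-valued methods the paper proposes; the step-function test signal and noise level are invented for illustration.

```python
import numpy as np

def haar_dwt(x):
    """Full Haar decomposition of a length-2^J signal into detail levels
    plus a final approximation coefficient."""
    coeffs, a = [], x.astype(float)
    while len(a) > 1:
        s = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail
        coeffs.append(d)
        a = s
    coeffs.append(a)
    return coeffs

def haar_idwt(coeffs):
    """Invert haar_dwt."""
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

def wavelet_shrink(y, thresh):
    """Hard-threshold the detail coefficients, then invert the transform."""
    coeffs = haar_dwt(y)
    den = [np.where(np.abs(d) > thresh, d, 0.0) for d in coeffs[:-1]]
    return haar_idwt(den + [coeffs[-1]])

rng = np.random.default_rng(1)
n = 256
truth = np.where(np.arange(n) < n // 2, 0.0, 4.0)   # curve with a discontinuity
y = truth + rng.normal(0, 1, n)
est = wavelet_shrink(y, 1.0 * np.sqrt(2 * np.log(n)))  # universal threshold
print(np.mean((est - truth) ** 2) < np.mean((y - truth) ** 2))
```

Because the Haar transform is orthonormal, unit-variance noise stays unit-variance in the coefficient domain, so the universal threshold removes almost all noise coefficients while the few large coefficients encoding the jump survive.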
93.
In this study, we propose a prior on restricted Vector Autoregressive (VAR) models. The prior setting permits efficient Markov Chain Monte Carlo (MCMC) sampling from the posterior of the VAR parameters and estimation of the Bayes factor. Numerical simulations show that when the sample size is small, the Bayes factor is more effective in selecting the correct model than the commonly used Schwarz criterion. We conduct Bayesian hypothesis testing of VAR models on the macroeconomic, state-, and sector-specific effects of employment growth.
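The Schwarz criterion that the abstract uses as its benchmark is straightforward to compute. The sketch below, a toy univariate stand-in for the VAR setting (none of it comes from the paper), fits AR(1) and AR(2) models by least squares and compares their Schwarz (BIC) values; on data truly generated by an AR(1), the criterion should prefer the smaller model.

```python
import numpy as np

def fit_ar(y, p):
    """OLS fit of an AR(p) model with intercept; returns residual variance
    and the number of estimated coefficients."""
    T = len(y)
    Y = y[p:]
    X = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
    X = np.column_stack([np.ones(len(Y)), X])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return resid.var(), X.shape[1]

def schwarz(y, p):
    """Schwarz (BIC) criterion for an AR(p) fit; smaller is better."""
    s2, k = fit_ar(y, p)
    n = len(y) - p
    return n * np.log(s2) + k * np.log(n)

rng = np.random.default_rng(2)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.7 * y[t - 1] + rng.normal()   # true data-generating model: AR(1)
print(schwarz(y, 1) < schwarz(y, 2))
```

The paper's point is that with small samples a Bayes factor, computed from the posterior the proposed prior makes tractable, discriminates better than this criterion; the sketch only shows the baseline being compared against.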
94.
Computing maximum likelihood estimates from type II doubly censored exponential data (total citations: 1; self-citations: 0; citations by others: 1)
Arturo J. Fernández, José I. Bravo & Íñigo De Fuentes, Statistical Methods and Applications, 2002, 11(2): 187-200
It is well known that, under Type II double censoring, the maximum likelihood (ML) estimators of the location and scale parameters, θ and δ, of a two-parameter exponential distribution are linear functions of the order statistics. In contrast, when θ is known, the ML estimator of δ does not admit a closed-form expression. It is shown, however, that the ML estimator of the scale parameter exists and is unique. Moreover, it has good large-sample properties. In addition, sharp lower and upper bounds for this estimator are provided, which can serve as starting points for iterative interpolation methods such as regula falsi. Explicit expressions for the expected Fisher information and Cramér-Rao lower bound are also derived. In the Bayesian context, assuming an inverted gamma prior on δ, the uniqueness, boundedness and asymptotics of the highest posterior density estimator of δ can be deduced in a similar way. Finally, an illustrative example is included.
95.
Floryt van Wesel, Herbert Hoijtink & Irene Klugkist, Scandinavian Journal of Statistics, 2011, 38(4): 666-690
Abstract. This article combines the best of both objective and subjective Bayesian inference in specifying priors for inequality and equality constrained analysis of variance models. Objectivity can be found in the use of training data to specify a prior distribution, subjectivity can be found in restrictions on the prior to formulate models. The aim of this article is to find the best model in a set of models specified using inequality and equality constraints on the model parameters. For the evaluation of the models an encompassing prior approach is used. The advantage of this approach is that only a prior for the unconstrained encompassing model needs to be specified. The priors for all constrained models can be derived from this encompassing prior. Different choices for this encompassing prior will be considered and evaluated.
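The computational appeal of the encompassing prior approach is that the Bayes factor of a constrained model against the encompassing model reduces to the ratio of the posterior and prior probabilities that the constraint holds, both estimable by Monte Carlo from the unconstrained model alone. The sketch below shows this for a toy two-group setting with the inequality constraint μ₁ < μ₂; the data, the N(0, 100) encompassing prior, and the known unit variance are all illustrative assumptions, not the article's setup.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated two-group data; the constrained model asserts mu1 < mu2.
y1 = rng.normal(0.0, 1.0, 40)
y2 = rng.normal(0.8, 1.0, 40)

def posterior_draws(y, n_draw):
    """Conjugate posterior for a normal mean: known variance 1,
    encompassing N(0, 100) prior."""
    var = 1.0 / (len(y) + 1.0 / 100.0)
    return rng.normal(var * y.sum(), np.sqrt(var), n_draw)

n_mc = 100_000
prior_mu = rng.normal(0.0, 10.0, size=(n_mc, 2))          # encompassing prior
post_mu = np.column_stack([posterior_draws(y1, n_mc),
                           posterior_draws(y2, n_mc)])

# Bayes factor of the constrained vs. the encompassing model:
# posterior probability of the constraint over its prior probability.
c_post = np.mean(post_mu[:, 0] < post_mu[:, 1])
c_prior = np.mean(prior_mu[:, 0] < prior_mu[:, 1])
print(c_post / c_prior)
```

Since the exchangeable prior puts probability 1/2 on μ₁ < μ₂, the Bayes factor here tends toward 2 when the data clearly support the ordering, its maximum for this single inequality.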
96.
This paper considers statistical reliability for discrete failure data and the selection of the best geometric distribution, namely the one with the smallest failure probability, from among several competitors. Using the Bayesian approach, a Bayes selection rule based on type-I censored data is derived and its monotonicity is established. An early selection rule, which allows a selection to be made before the censoring time of the life-testing experiment, is proposed and shown to be equivalent to the Bayes selection rule. An illustrative example demonstrates the use and performance of the early selection rule.
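A minimal version of the selection problem can be coded directly: with a Beta(a, b) prior on each geometric failure probability p, an observed failure at time x contributes x − 1 survivals and one failure, while a unit censored at time τ contributes τ survivals, so the posterior stays Beta and populations can be ranked by posterior mean. This sketch uses a simple posterior-mean ranking under invented parameters (`posterior_mean_p`, the three true p values, τ = 15, and the sample sizes are all hypothetical); it is not the paper's Bayes or early selection rule.

```python
import numpy as np

rng = np.random.default_rng(5)

def posterior_mean_p(lifetimes, tau, a=1.0, b=1.0):
    """Posterior mean of a geometric failure probability p under a Beta(a, b)
    prior with type-I censoring at time tau."""
    observed = lifetimes[lifetimes <= tau]
    n_cens = np.sum(lifetimes > tau)
    d = len(observed)                              # observed failures
    # failure at x: p * (1-p)^(x-1); censored unit: (1-p)^tau
    survivals = observed.sum() - d + n_cens * tau
    return (a + d) / (a + d + b + survivals)       # Beta posterior mean

p_true = [0.05, 0.10, 0.20]
tau = 15
data = [rng.geometric(p, 200) for p in p_true]
est = [posterior_mean_p(x, tau) for x in data]
print(int(np.argmin(est)))   # index of the selected (most reliable) population
```

Selecting the population with the smallest posterior mean failure probability recovers the truly best (p = 0.05) population here; the paper's contribution is a rule that can make this call before the censoring time is reached.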
97.
Shohei Tateishi, Hidetoshi Matsui & Sadanori Konishi, Journal of Statistical Planning and Inference, 2010
We consider the problem of constructing nonlinear regression models with Gaussian basis functions, using lasso regularization. Regularization with a lasso penalty is advantageous in that it estimates some coefficients in linear regression models as exactly zero. We propose imposing a weighted lasso penalty on a nonlinear regression model and thereby selecting the number of basis functions effectively. In order to select tuning parameters in the regularization method, we use a deviance information criterion proposed by Spiegelhalter et al. (2002), calculating the effective number of parameters by Gibbs sampling. Simulation results demonstrate that our methodology performs well in various situations.
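The basic construction, expanding the input in Gaussian basis functions and fitting the coefficients with a lasso penalty, can be sketched with plain coordinate descent. This is an unweighted-lasso illustration of the idea, not the authors' weighted penalty or their DIC-based tuning; the basis width, the number of centers, the penalty λ, and the sine test function are all made-up choices.

```python
import numpy as np

def gaussian_basis(x, centers, width):
    """Design matrix of Gaussian basis functions evaluated at x."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

def lasso_cd(X, y, lam, n_iter=500):
    """Plain coordinate-descent lasso for 0.5*||y - X b||^2 + lam*||b||_1
    (no intercept)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]         # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0, 1, 150))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 150)

centers = np.linspace(0, 1, 30)
X = gaussian_basis(x, centers, 0.05)
beta = lasso_cd(X, y, lam=2.0)
fit = X @ beta
print(np.sum(beta != 0), np.mean((fit - y) ** 2))
```

Raising λ drives more basis coefficients exactly to zero, which is how the penalty "selects the number of basis functions" as the abstract describes; the paper additionally weights the penalty per coefficient.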
98.
Small area estimation is studied under a nested error linear regression model with area level covariate subject to measurement error. Ghosh and Sinha (2007) obtained a pseudo-Bayes (PB) predictor of a small area mean and a corresponding pseudo-empirical Bayes (PEB) predictor, using the sample means of the observed covariate values to estimate the true covariate values. In this paper, we first derive an efficient PB predictor by using all the available data to estimate true covariate values. We then obtain a corresponding PEB predictor and show that it is asymptotically “optimal”. In addition, we employ a jackknife method to estimate the mean squared prediction error (MSPE) of the PEB predictor. Finally, we report the results of a simulation study on the performance of our PEB predictor and associated jackknife MSPE estimator. Our results show that the proposed PEB predictor can lead to significant gain in efficiency over the previously proposed PEB predictor. Area level models are also studied.
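The delete-one jackknife machinery behind the MSPE estimator is simple to demonstrate in a generic form. The sketch below shows the mechanics only — recompute the estimator with each observation deleted, then form bias and variance estimates from the leave-one-out replicates — on the sample mean, not the paper's area-level PEB predictor; for the mean, the jackknife variance reduces exactly to s²/n.

```python
import numpy as np

def jackknife(estimator, data):
    """Delete-one jackknife: returns the bias-corrected estimate and the
    jackknife variance estimate for a statistic of an i.i.d. sample."""
    n = len(data)
    full = estimator(data)
    loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
    bias = (n - 1) * (loo.mean() - full)
    var = (n - 1) / n * np.sum((loo - loo.mean()) ** 2)
    return full - bias, var

rng = np.random.default_rng(7)
data = rng.normal(5.0, 2.0, 100)
est, var = jackknife(np.mean, data)
print(est, var)
```

In the paper's setting the resampling is over areas and the replicated quantity is the PEB predictor, so each leave-one-out step refits the model parameters, but the bias/variance bookkeeping is the same.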
99.
7 and 8 introduce a power max-autoregressive process, in short pARMAX, as an alternative to heavy-tailed ARMA when modeling rare events. In this paper, an extension of pARMAX is considered, obtained by including a random component which makes the model more applicable to real data. We give conditions under which this new model, here denoted pRARMAX, has a unique stationary distribution, and we analyze its extremal behavior. Based on Bortot and Tawn (1998), we derive a threshold-dependent extremal index which is a functional of the coefficient of tail dependence of 14 and 15, which in turn relates to the pRARMAX parameter. In order to fit a pRARMAX model to an observed data series, we present a methodology based on minimizing the Bayes risk in classification theory and analyze this procedure through a simulation study. We illustrate with an application to financial data.
100.