Full-text availability
Paid full text | 1496 articles |
Free | 33 articles |
Free (domestic) | 5 articles |
Subject classification
Management | 76 articles |
Demography | 2 articles |
Collected works | 33 articles |
Theory and methodology | 15 articles |
General | 348 articles |
Sociology | 22 articles |
Statistics | 1038 articles |
Publication year
2023 | 12 articles |
2022 | 14 articles |
2021 | 13 articles |
2020 | 17 articles |
2019 | 50 articles |
2018 | 54 articles |
2017 | 81 articles |
2016 | 36 articles |
2015 | 25 articles |
2014 | 50 articles |
2013 | 261 articles |
2012 | 96 articles |
2011 | 67 articles |
2010 | 50 articles |
2009 | 58 articles |
2008 | 64 articles |
2007 | 63 articles |
2006 | 67 articles |
2005 | 66 articles |
2004 | 59 articles |
2003 | 53 articles |
2002 | 36 articles |
2001 | 45 articles |
2000 | 33 articles |
1999 | 21 articles |
1998 | 21 articles |
1997 | 23 articles |
1996 | 11 articles |
1995 | 14 articles |
1994 | 8 articles |
1993 | 7 articles |
1992 | 8 articles |
1991 | 11 articles |
1990 | 4 articles |
1989 | 1 article |
1988 | 7 articles |
1987 | 6 articles |
1986 | 2 articles |
1985 | 3 articles |
1984 | 2 articles |
1983 | 5 articles |
1982 | 5 articles |
1981 | 2 articles |
1980 | 1 article |
1979 | 1 article |
1978 | 1 article |
Sort by: 1534 results found, search time 0 ms
101.
Accelerating inference for diffusions observed with measurement error and large sample sizes using approximate Bayesian computation    Cited by: 1 (self-citations: 0, by others: 1)
《Journal of Statistical Computation and Simulation》2012,82(1):195-213
In recent years, dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally infeasible to apply exact statistical methodologies in the context of large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors, with an application to protein-folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm uses simulations of ‘subsamples’ from the assumed data-generating model, as well as a so-called ‘early-rejection’ strategy, to speed up computations in the ABC-MCMC sampler. Using a considerable number of subsamples does not seem to degrade the quality of the inferential results for the considered applications. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter proving two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large protein data set. The suggested methodology is fairly general and not limited to the exemplified model and data.
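The early-rejection idea described in this abstract can be sketched in a few lines: the parts of the Metropolis–Hastings acceptance check that do not require forward simulation (the uniform draw and the prior ratio) are evaluated first, and the costly model simulation is run only when that check passes. The following is a hypothetical toy sketch on a Gaussian model, not the paper's SDE/protein-folding setup; all function names and tuning values are illustrative.

```python
import math
import random


def simulate(theta, n):
    # Toy data-generating model: n observations from Normal(theta, 1).
    return [random.gauss(theta, 1.0) for _ in range(n)]


def summary(data):
    # Sample mean as the summary statistic.
    return sum(data) / len(data)


def abc_mcmc(obs_summary, n_obs, iters=2000, eps=0.2, step=0.5):
    """ABC-MCMC with early rejection: the uniform draw and the prior
    ratio (here a flat-ish N(0, 10^2) prior) are checked *before* the
    costly forward simulation is performed."""
    theta = 0.0
    chain = []
    for _ in range(iters):
        prop = theta + random.gauss(0.0, step)
        # Early rejection: with a symmetric proposal, the simulation-free
        # part of the MH ratio is just the prior ratio.
        log_prior_ratio = (theta ** 2 - prop ** 2) / (2.0 * 10.0 ** 2)
        if math.log(random.random()) > log_prior_ratio:
            chain.append(theta)  # rejected without simulating
            continue
        # Only now pay for the simulation from the proposed parameter.
        sim = summary(simulate(prop, n_obs))
        if abs(sim - obs_summary) < eps:
            theta = prop  # ABC acceptance: summaries are close enough
        chain.append(theta)
    return chain
```

In the real setting the forward simulation dominates the cost, so skipping it whenever the simulation-free part of the ratio already rejects is where the speed-up comes from.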
102.
《Journal of Statistical Computation and Simulation》2012,82(2):394-413
Mixture models are flexible tools in density estimation and classification problems. Bayesian estimation of such models typically relies on sampling from the posterior distribution using Markov chain Monte Carlo. Label switching arises because the posterior is invariant to permutations of the component parameters. Methods for dealing with label switching have been studied fairly extensively in the literature, with the most popular approaches being those based on loss functions. However, many of these algorithms turn out to be too slow in practice, and can be infeasible as the size and/or dimension of the data grow. We propose a new, computationally efficient algorithm based on a loss function interpretation, and show that it can scale up well in large data set scenarios. Then, we review earlier solutions that scale up well for large data sets, and compare their performance on simulated and real data sets. We conclude with a discussion and recommendations covering all the methods studied.
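The loss-function view of relabelling can be illustrated with a minimal brute-force sketch: each MCMC draw's component parameters are permuted to minimise a squared-error loss against a reference ordering. This is an assumption for illustration, not the paper's algorithm; the factorial cost of enumerating permutations is precisely the kind of bottleneck that scalable relabelling methods are designed to avoid.

```python
from itertools import permutations


def relabel(draws, reference):
    """Loss-based relabelling sketch: for each draw, pick the permutation
    of its components minimising squared distance to the reference.
    Cost is k! per draw for k components, hence impractical at scale."""
    out = []
    for draw in draws:
        best = min(
            permutations(draw),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(p, reference)),
        )
        out.append(list(best))
    return out
```

For example, with reference ordering [1.0, 5.0], a draw [5.1, 0.9] is recognised as label-switched and flipped back to [0.9, 5.1].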
103.
《Journal of Statistical Computation and Simulation》2012,82(7):1295-1319
This paper extends stochastic conditional duration (SCD) models for financial transaction data to allow for correlation between the innovations of the observed duration process and the latent log duration process. Suitable Markov chain Monte Carlo (MCMC) algorithms are developed to fit the resulting SCD models under various distributional assumptions about the innovation of the measurement equation. Unlike the estimation methods commonly used to estimate SCD models in the literature, we work with the original specification of the model, without subjecting the observation equation to a logarithmic transformation. Results of simulation studies suggest that our proposed models and corresponding estimation methodology perform quite well. We also apply an auxiliary particle filter technique to construct one-step-ahead in-sample and out-of-sample duration forecasts of the fitted models. Applications to the IBM transaction data allow comparison of our models and methods to those existing in the literature.
104.
《Journal of Statistical Computation and Simulation》2012,82(10):1869-1890
We consider the use of modern likelihood asymptotics in the construction of confidence intervals for the parameter which determines the skewness of the distribution of the maximum/minimum of an exchangeable bivariate normal random vector. Simulation studies were conducted to investigate the accuracy of the proposed methods and to compare them to available alternatives. Accuracy is evaluated in terms of both coverage probability and expected length of the interval. We furthermore illustrate the suitability of our proposals by means of two data sets, consisting of, respectively, measurements taken on the brains of 10 monozygotic twins and measurements of mineral content of bones in the dominant and non-dominant arms for 25 elderly women.
105.
《Journal of Business & Economic Statistics》2012,30(1):124-136
Time-varying parameter models with stochastic volatility are widely used to study macroeconomic and financial data. These models are almost exclusively estimated using Bayesian methods. A common practice is to focus on prior distributions that themselves depend on relatively few hyperparameters, such as the scaling factor for the prior covariance matrix of the residuals governing time variation in the parameters. The choice of these hyperparameters is crucial because their influence is sizeable for standard sample sizes. In this article, we treat the hyperparameters as part of a hierarchical model and propose a fast, tractable, easy-to-implement, and fully Bayesian approach to estimate those hyperparameters jointly with all other parameters in the model. We show via Monte Carlo simulations that, in this class of models, our approach can drastically improve on using fixed hyperparameters previously proposed in the literature. Supplementary materials for this article are available online.
106.
《Journal of Statistical Computation and Simulation》2012,82(1-4):225-242
The introduction of software to calculate maximum likelihood estimates for mixed linear models has made likelihood estimation a practical alternative to methods based on sums of squares. Likelihood-based tests and confidence intervals, however, may be misleading in problems with small sample sizes. This paper discusses an adjusted version of the directed log-likelihood statistic for mixed models that is highly accurate for testing one-parameter hypotheses. The statistic, introduced by Skovgaard (1996, Journal of the Bernoulli Society, 2, 145-165), is shown to have a simple compact form in mixed models that may be obtained from standard software. Simulation studies indicate that this statistic is more accurate than many of the specialized procedures that have been advocated.
107.
Patrick J. Wolfe Simon J. Godsill Wee-Jing Ng 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2004,66(3):575-589
Summary. We describe novel Bayesian models for time–frequency inverse modelling of non-stationary signals. These models are based on the idea of a Gabor regression, in which a time series is represented as a superposition of translated, modulated versions of a window function exhibiting good time–frequency concentration. As a necessary consequence, the resultant set of potential predictors is in general overcomplete (constituting a frame rather than a basis), and hence the resultant models require careful regularization through appropriate choices of variable selection schemes and prior distributions. We introduce prior specifications that are tailored to representative time series, and we develop effective Markov chain Monte Carlo methods for inference. To highlight the potential applications of such methods, we provide examples using two of the most distinctive time–frequency surfaces, speech and music signals, as well as standard test functions from the wavelet regression literature.
108.
Robin Insley Lucia Mok Tim Swartz 《Australian & New Zealand Journal of Statistics》2004,46(2):219-232
This paper looks at various issues that are of interest to the sports gambler. First, an expression is obtained for the distribution of the final bankroll using fixed wagers with a specified initial bankroll. Second, fixed-percentage wagers are considered, where the Kelly method is extended to the case of simultaneous bets placed at various odds; a computational algorithm is presented to obtain the Kelly fractions. Finally, the paper considers the problem of determining whether a gambling system is profitable based on the historical results of bets placed at various odds.
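For a single bet, the Kelly fraction has the familiar closed form f* = (bp - q)/b, where b is the net odds, p the win probability, and q = 1 - p; for simultaneous bets there is no closed form, and the expected log growth of the bankroll must be maximised numerically, which is the role of a computational algorithm such as the one mentioned above. The sketch below is a hypothetical illustration (a crude grid search for two independent bets), not the authors' algorithm.

```python
import math


def kelly_fraction(p, decimal_odds):
    """Classic single-bet Kelly fraction. `decimal_odds` is the payout
    per unit staked including the stake, so net odds b = odds - 1."""
    b = decimal_odds - 1.0
    q = 1.0 - p
    return max((b * p - q) / b, 0.0)  # never bet on a negative edge


def best_fractions(bets, grid=50):
    """Grid-search Kelly fractions for two simultaneous independent
    bets, maximising expected log growth over the four joint outcomes.
    Each fraction is searched over [0, 0.45] so wealth stays positive."""
    (p1, o1), (p2, o2) = bets
    best, best_g = (0.0, 0.0), float("-inf")
    for i in range(grid + 1):
        for j in range(grid + 1):
            f1, f2 = 0.45 * i / grid, 0.45 * j / grid
            g = 0.0
            for w1 in (0, 1):          # outcome of bet 1 (lose/win)
                for w2 in (0, 1):      # outcome of bet 2 (lose/win)
                    wealth = 1.0 - f1 - f2 + w1 * f1 * o1 + w2 * f2 * o2
                    prob = (p1 if w1 else 1 - p1) * (p2 if w2 else 1 - p2)
                    g += prob * math.log(wealth)
            if g > best_g:
                best_g, best = g, (f1, f2)
    return best
```

Note that for two identical even-money bets with p = 0.55, the jointly optimal fractions come out slightly below the single-bet Kelly value of 0.10, because simultaneous stakes share the same bankroll.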
109.
We develop exact inference for the location and scale parameters of the Laplace (double exponential) distribution based on their maximum likelihood estimators from a Type-II censored sample. Based on some pivotal quantities, exact confidence intervals and tests of hypotheses are constructed. Upon conditioning first on the number of observations that are below the population median, the exact distributions of the pivotal quantities are expressed as mixtures of linear combinations, and of ratios of linear combinations, of standard exponential random variables, which facilitates the computation of quantiles of these pivotal quantities. Tables of quantiles are presented for the complete-sample case.
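As background to the estimators involved: for a complete Laplace sample the MLEs have well-known closed forms (location = sample median, scale = mean absolute deviation about the median), and these are the building blocks of the pivotal quantities. A minimal sketch for the complete-sample case (illustrative only; the paper's focus is the Type-II censored setting):

```python
def laplace_mle(sample):
    """ML estimates for a complete Laplace sample:
    location = sample median, scale = mean |deviation from median|."""
    s = sorted(sample)
    n = len(s)
    med = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2.0
    b = sum(abs(x - med) for x in s) / n
    return med, b
```

Pivots such as (location MLE - true location)/scale MLE then have distributions free of the unknown parameters, which is what makes exact intervals possible.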
110.
Brent D. Burch 《Journal of statistical planning and inference》2011,141(12):3793-3807
In scenarios where the variance of a response variable can be attributed to two sources of variation, a confidence interval for a ratio of variance components gives information about the relative importance of the two sources. For example, if measurements taken from different laboratories are nine times more variable than the measurements taken within the laboratories, then 90% of the variance in the responses is due to the variability amongst the laboratories and 10% of the variance in the responses is due to the variability within the laboratories. Assuming normally distributed sources of variation, confidence intervals for variance components are readily available. In this paper, however, simulation studies are conducted to evaluate the performance of confidence intervals under non-normal distributional assumptions. Confidence intervals based on the pivotal quantity method, fiducial inference, and the large-sample properties of the restricted maximum likelihood (REML) estimator are considered. Simulation results and an empirical example suggest that the REML-based confidence interval is favored over the other two procedures in the unbalanced one-way random-effects model.
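The 90%/10% decomposition in the laboratory example follows directly from estimates of the two variance components. A minimal sketch of the ANOVA (method-of-moments) estimates for a balanced one-way layout is given below; this is illustrative only, since the paper's intervals are instead built from pivotal quantities, fiducial inference, and REML asymptotics.

```python
def variance_components_oneway(groups):
    """ANOVA (method-of-moments) estimates of the between-group and
    within-group variance components in a balanced one-way
    random-effects layout: sigma_b^2 = (MSB - MSW)/n, sigma_w^2 = MSW."""
    k = len(groups)            # number of groups (e.g. laboratories)
    n = len(groups[0])         # observations per group (balanced)
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((x - m) ** 2
              for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    sigma2_b = max((msb - msw) / n, 0.0)   # truncate negative estimates
    sigma2_w = msw
    return sigma2_b, sigma2_w
```

The proportion of variance attributable to the laboratories is then sigma2_b / (sigma2_b + sigma2_w), which reproduces the 0.9 figure whenever the between-lab component is nine times the within-lab component.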