531.
We consider an adaptive importance sampling approach to estimating the marginal likelihood, a quantity that is fundamental in Bayesian model comparison and Bayesian model averaging. This approach is motivated by the difficulty of obtaining an accurate estimate through existing algorithms that use Markov chain Monte Carlo (MCMC) draws, where the draws are typically costly to obtain and highly correlated in high-dimensional settings. In contrast, we use the cross-entropy (CE) method, a versatile adaptive Monte Carlo algorithm originally developed for rare-event simulation. The main advantage of the importance sampling approach is that random samples can be obtained from some convenient density at little additional cost. As we generate independent draws instead of correlated MCMC draws, the increase in simulation effort is much smaller should one wish to reduce the numerical standard error of the estimator. Moreover, the importance density derived via the CE method is grounded in information theory and is therefore optimal in a well-defined sense. We demonstrate the utility of the proposed approach with two empirical applications involving women's labor market participation and U.S. macroeconomic time series. In both applications, the proposed CE method compares favorably to existing estimators.
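The CE idea described above can be sketched on a toy model. This is a minimal illustration, not the paper's implementation: the model (a Gaussian likelihood with a Gaussian prior, chosen so the marginal likelihood is known in closed form) and all numerical settings are assumptions. The importance density is a Gaussian fitted by weighted moment matching, which for this family minimizes the cross-entropy to the unnormalised posterior; the final estimate uses independent draws from that density.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (hypothetical, for illustration): y | theta ~ N(theta, 1), theta ~ N(0, 1).
# The marginal likelihood is then available exactly: y ~ N(0, 2).
y = 1.5

def log_joint(theta):
    return (-0.5 * (y - theta) ** 2 - 0.5 * np.log(2 * np.pi)   # log-likelihood
            - 0.5 * theta ** 2 - 0.5 * np.log(2 * np.pi))       # log-prior

def log_q(theta, mu, sigma):
    return -0.5 * ((theta - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)

# CE iterations: refit the Gaussian importance density by weighted moment matching.
mu, sigma = 0.0, 3.0
for _ in range(5):
    theta = rng.normal(mu, sigma, size=5000)
    w = np.exp(log_joint(theta) - log_q(theta, mu, sigma))
    mu = np.sum(w * theta) / np.sum(w)
    sigma = np.sqrt(np.sum(w * (theta - mu) ** 2) / np.sum(w))

# Final importance-sampling estimate of the marginal likelihood (independent draws).
theta = rng.normal(mu, sigma, size=50000)
Z_hat = np.mean(np.exp(log_joint(theta) - log_q(theta, mu, sigma)))

Z_true = np.exp(-0.25 * y ** 2) / np.sqrt(4 * np.pi)  # N(y; 0, 2) density
```

Because the fitted importance density ends up close to the posterior, the weights are nearly constant and the estimator's numerical standard error is small, which is exactly the advantage the abstract highlights over correlated MCMC draws.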
532.
Journal of Statistical Computation and Simulation, 2012, 82(10): 873-886
We study the correlation structure for a mixture of ordinal and continuous repeated measures using a Bayesian approach. We assume a multivariate probit model for the ordinal variables and a normal linear regression for the continuous variables, where latent normal variables underlying the ordinal data are correlated with continuous variables in the model. Due to the probit model assumption, we are required to sample a covariance matrix with some of the diagonal elements equal to one. The key computational idea is to use parameter-extended data augmentation, which involves applying the Metropolis-Hastings algorithm to get a sample from the posterior distribution of the covariance matrix incorporating the relevant restrictions. The methodology is illustrated through a simulated example and through an application to data from the UCLA Brain Injury Research Center.
533.
Journal of Statistical Computation and Simulation, 2012, 82(3): 215-230
In kernel density estimation, a criticism of bandwidth selection techniques that minimize squared-error expressions is that they perform poorly when estimating the tails of probability density functions. Techniques minimizing absolute-error expressions are thought to give more uniform performance and to be potentially superior. An asymptotic mean absolute error expression for nonparametric kernel density estimators from right-censored data is developed here. This expression is used to obtain local and global bandwidths that are optimal in the sense that they minimize the asymptotic mean absolute error and the integrated asymptotic mean absolute error, respectively. These estimators are illustrated for eight data sets from known distributions. Computer simulation results are discussed, comparing the estimation methods with squared-error-based bandwidth selection for right-censored data.
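The mean-absolute-error criterion above can be illustrated in the simplest uncensored setting (the paper's contribution is its extension to right-censored data, which this sketch does not attempt). All choices here — the N(0, 1) target, the Gaussian kernel, the bandwidth grid — are assumptions for illustration: the Monte Carlo MAE of the kernel estimate is computed over a grid of bandwidths and minimized.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample from a known density so the MAE can be evaluated exactly (hypothetical setup).
x = rng.normal(size=300)                   # N(0, 1) sample
grid = np.linspace(-3, 3, 121)
true_density = np.exp(-grid ** 2 / 2) / np.sqrt(2 * np.pi)

def kde(sample, points, h):
    """Gaussian-kernel density estimate at `points` with bandwidth h."""
    u = (points[:, None] - sample[None, :]) / h
    return np.exp(-u ** 2 / 2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

# Global bandwidth choice: minimize the mean absolute error over the grid.
bandwidths = np.linspace(0.05, 1.0, 20)
mae = [np.mean(np.abs(kde(x, grid, h) - true_density)) for h in bandwidths]
h_opt = bandwidths[int(np.argmin(mae))]
```

Undersmoothing (small h) and oversmoothing (large h) both inflate the MAE, so the curve is U-shaped and the minimizer sits in the interior of the grid; the paper's local bandwidths apply the same criterion pointwise rather than globally.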
534.
Journal of Statistical Computation and Simulation, 2012, 82(1): 699-712
We formulate and evaluate weighted least squares (WLS) and ordinary least squares (OLS) procedures for estimating the parametric mean-value function of a nonhomogeneous Poisson process. We focus the development on processes having an exponential rate function, where the exponent may include a polynomial component or some trigonometric components. Unanticipated problems with the WLS procedure are explained by an analysis of the associated residuals. The OLS procedure is based on a square root transformation of the "detrended" event (arrival) times, that is, the fitted mean-value function evaluated at the observed event times; under appropriate conditions, the corresponding residuals are proved to converge weakly to a normal distribution with mean 0 and variance 0.25. The results of a Monte Carlo study indicate the advantages of the OLS procedure with respect to estimation accuracy and computational efficiency.
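The OLS procedure on the square-root scale can be sketched as follows. The specific rate parameterisation, the parameter values, and the simulation horizon are assumptions for illustration; the key steps match the abstract: simulate the process, then fit the mean-value function by least squares applied to square-root-transformed detrended event times.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Hypothetical exponential rate lambda(t) = exp(a + b t), so the mean-value
# function is M(t) = (exp(a) / b) * (exp(b t) - 1).
a_true, b_true = 1.0, 0.5

def M(t, a, b):
    return np.exp(a) / b * (np.exp(b * t) - 1.0)

# Simulate one realisation on [0, T] by inverting M at unit-rate Poisson arrivals.
T = 8.0
unit_arrivals = np.cumsum(rng.exponential(size=10000))
unit_arrivals = unit_arrivals[unit_arrivals < M(T, a_true, b_true)]
events = np.log(1.0 + b_true * unit_arrivals / np.exp(a_true)) / b_true

# OLS on the square-root scale: match sqrt(M(t_i)) to sqrt(i), the transformation
# under which the residuals are asymptotically normal with variance 0.25.
i = np.arange(1, len(events) + 1)

def sse(theta):
    a, b = theta
    if b <= 0:
        return np.inf
    return np.sum((np.sqrt(M(events, a, b)) - np.sqrt(i)) ** 2)

a_hat, b_hat = minimize(sse, x0=[0.0, 1.0], method="Nelder-Mead").x
```

The square-root transformation is variance-stabilising for Poisson counts, which is why the residual variance settles at the constant 0.25 rather than growing with the cumulative count.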
535.
Journal of Statistical Computation and Simulation, 2012, 82(12): 2644-2651
In this study, we demonstrate how generalized propensity score estimators (Imbens’ weighted estimator, the propensity score weighted estimator and the generalized doubly robust estimator) can be used to calculate the adjusted marginal probabilities for estimating the three common binomial parameters: the risk difference (RD), the relative risk (RR), and the odds ratio (OR). We further conduct a simulation study to compare the estimated RD, RR, and OR using the adjusted and the unadjusted marginal probabilities in terms of the bias and mean-squared error (MSE). Although there is no clear winner in terms of the MSE for estimating RD, RR, and OR, simulation results surprisingly show that the unadjusted marginal probabilities produce the smallest bias compared with the adjusted marginal probabilities in most of the estimates. Hence, we recommend using the unadjusted marginal probabilities to estimate RD, RR, and OR in practice.
536.
Journal of Statistical Computation and Simulation, 2012, 82(11): 1621-1634
We introduce a family of leptokurtic symmetric distributions represented by the difference of two gamma variates. Properties of this family are discussed. The Laplace, sums of Laplace and normal distributions all arise as special cases of this family. We propose a two-step method for fitting data to this family. First, we perform a test of symmetry, and second, we estimate the parameters by minimizing the quadratic distance between the real parts of the empirical and theoretical characteristic functions. The quadratic distance estimator obtained is consistent, robust and asymptotically normally distributed. We develop a statistical test for goodness of fit and introduce a test of normality of the data. A simulation study is provided to illustrate the theory. 
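The symmetry and leptokurtosis of this family, and the Laplace special case, are easy to check by simulation. The shape parameter and sample size below are illustrative assumptions; for two i.i.d. Gamma(alpha, 1) variates, the difference has cumulants kappa_2 = 2*alpha and kappa_4 = 12*alpha, so the excess kurtosis is 3/alpha, and alpha = 1 gives exactly the standard Laplace distribution.

```python
import numpy as np

rng = np.random.default_rng(4)

# Difference of two i.i.d. Gamma(alpha, 1) variates; alpha = 1 is the Laplace case.
alpha, n = 1.0, 500000
x = rng.gamma(alpha, size=n) - rng.gamma(alpha, size=n)

mean = x.mean()
skew = ((x - mean) ** 3).mean() / x.std() ** 3
excess_kurtosis = ((x - mean) ** 4).mean() / x.std() ** 4 - 3.0
# Symmetric (skew ~ 0), leptokurtic (excess kurtosis ~ 3/alpha), variance 2*alpha.
```

Larger alpha drives the excess kurtosis 3/alpha toward 0, which is how the normal distribution arises as a limiting special case of the family.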
537.
Journal of Statistical Computation and Simulation, 2012, 82(16): 3276-3288
We propose a data-dependent method for choosing the tuning parameter appearing in many recently developed goodness-of-fit test statistics. The new method, based on the bootstrap, is applicable to a class of distributions for which the null distribution of the test statistic is independent of unknown parameters. No data-dependent choice for this parameter exists in the literature; typically, a fixed value for the parameter is chosen which can perform well for some alternatives, but poorly for others. The performance of the new method is investigated by means of a Monte Carlo study, employing three tests for exponentiality. It is found that the Monte Carlo power of these tests, using the data-dependent choice, compares favourably to the maximum achievable power for the tests calculated over a grid of values of the tuning parameter. 
538.
Journal of Statistical Computation and Simulation, 2012, 82(7): 1450-1461
Penalized logistic regression is a useful tool for classifying samples and for feature selection. Although the methodology has been widely used in various fields of research, its performance deteriorates sharply in the presence of outliers, since logistic regression is based on the maximum log-likelihood method, which is sensitive to outliers. This implies that we cannot accurately classify samples or find the important factors carrying crucial information for classification. To overcome this problem, we propose a robust penalized logistic regression based on a weighted likelihood methodology. We also derive an information criterion for choosing the tuning parameters, in line with generalized information criteria, which is a vital matter in robust penalized logistic regression modelling. We demonstrate through Monte Carlo simulations and a real-world example that the proposed robust modelling strategies perform well for sparse logistic regression modelling even in the presence of outliers. 
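A generic weighted-likelihood logistic fit (not the paper's exact estimator, and without the penalty term) can be sketched as follows. The weight function, its cutoff, the simulated coefficients, and the contamination scheme are all assumptions for illustration: observations with large Pearson residuals are down-weighted by a Huber-type weight, and the weighted score equations are solved by Newton steps.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical data: logistic model with a few flipped labels acting as outliers.
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.5])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)
y[:20] = 1 - y[:20]                       # contaminate 1% of the labels

beta = np.zeros(2)
for _ in range(50):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    r = (y - p) / np.sqrt(p * (1 - p))    # Pearson residuals
    w = np.minimum(1.0, 2.0 / np.maximum(np.abs(r), 1e-12))  # Huber-type weights
    grad = X.T @ (w * (y - p))            # weighted score
    H = X.T @ (X * (w * p * (1 - p))[:, None])
    beta = beta + np.linalg.solve(H, grad)
```

Mislabelled points produce large Pearson residuals and receive weights well below 1, which limits their pull on the estimate; plain maximum likelihood would instead attenuate the slope toward zero.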
539.
The skew-normal model is a class of distributions that extends the Gaussian family by including a skewness parameter. This model presents some inferential problems linked to the estimation of the skewness parameter. In particular, its maximum likelihood estimator can be infinite, especially for moderate sample sizes, and it is not clear how to calculate confidence intervals for this parameter. In this work, we show how these inferential problems can be solved if we are interested in the distribution of extreme statistics of two random variables with a joint normal distribution. Such situations are not uncommon in applications, especially in medical and environmental contexts, where it can be relevant to estimate the distribution of extreme statistics. A theoretical result, due to Loperfido [7], proves that such extreme statistics have a skew-normal distribution with a skewness parameter that can be expressed as a function of the correlation coefficient between the two initial variables. It is then possible, using some theoretical results involving the correlation coefficient, to find approximate confidence intervals for the skewness parameter. These theoretical intervals are then compared with parametric bootstrap intervals by means of a simulation study. Two applications are given using real data. 
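The abstract's key result can be checked by simulation. For a standard bivariate normal pair with correlation rho, the maximum is skew-normal with shape lambda = sqrt((1 - rho) / (1 + rho)); the particular rho and sample size below are assumptions for illustration. A convenient consequence used as the check here: the implied mean of the maximum is sqrt((1 - rho) / pi).

```python
import numpy as np

rng = np.random.default_rng(6)

# Standard bivariate normal pair with correlation rho (illustrative value).
rho, n = 0.3, 1000000
z1 = rng.normal(size=n)
z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.normal(size=n)
m = np.maximum(z1, z2)

# Skew-normal shape implied by the correlation, and the resulting mean:
# delta = lambda / sqrt(1 + lambda^2) simplifies to sqrt((1 - rho) / 2).
lam = np.sqrt((1 - rho) / (1 + rho))
delta = lam / np.sqrt(1 + lam ** 2)
mean_theory = delta * np.sqrt(2 / np.pi)   # equals sqrt((1 - rho) / pi)
```

Since lambda is a monotone function of rho, a confidence interval for the correlation coefficient maps directly into an interval for the skewness parameter, which is the route the abstract describes.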
540.
Principal axis factoring (PAF) and maximum likelihood factor analysis (MLFA) are two of the most popular estimation methods in exploratory factor analysis. It is known that PAF is better able to recover weak factors and that the maximum likelihood estimator is asymptotically efficient. However, there is almost no evidence regarding which method should be preferred for different types of factor patterns and sample sizes. Simulations were conducted to investigate factor recovery by PAF and MLFA for distortions of ideal simple structure and sample sizes between 25 and 5000. Results showed that PAF is preferred for population solutions with few indicators per factor and for overextraction. MLFA outperformed PAF in cases of unequal loadings within factors and for underextraction. It was further shown that PAF and MLFA do not always converge with increasing sample size. The simulation findings were confirmed by an empirical study as well as by a classic plasmode, Thurstone's box problem. The present results are of practical value for factor analysts. 
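A minimal sketch of the PAF algorithm compared above: iterate an eigendecomposition of the correlation matrix with communalities on the diagonal until the communalities converge. The one-factor population model at the end (six indicators, all loading 0.8) is a hypothetical check case, not from the paper; on an exact population matrix the iteration should recover the true loadings.

```python
import numpy as np

def paf(R, n_factors, n_iter=200, tol=1e-8):
    """Principal axis factoring of a correlation matrix R."""
    # Initial communalities: squared multiple correlations.
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    for _ in range(n_iter):
        Rr = R.copy()
        np.fill_diagonal(Rr, h2)              # reduced correlation matrix
        vals, vecs = np.linalg.eigh(Rr)
        idx = np.argsort(vals)[::-1][:n_factors]
        loadings = vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
        h2_new = (loadings ** 2).sum(axis=1)  # updated communalities
        converged = np.max(np.abs(h2_new - h2)) < tol
        h2 = h2_new
        if converged:
            break
    return loadings, h2

# Hypothetical one-factor population: six indicators, loadings all 0.8.
L = np.full((6, 1), 0.8)
R = L @ L.T
np.fill_diagonal(R, 1.0)
loadings, h2 = paf(R, 1)
```

MLFA instead maximizes the multivariate-normal likelihood of R given the factor model, which is where its asymptotic efficiency, and its different convergence behaviour noted in the abstract, comes from.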