Full-text access type
Paid full text | 1761 articles |
Free | 20 articles |
Free (domestic) | 7 articles |
Subject classification
Management | 171 articles |
Demography | 3 articles |
Book series and collected works | 5 articles |
Theory and methodology | 5 articles |
General | 56 articles |
Sociology | 10 articles |
Statistics | 1538 articles |
Publication year
2024 | 9 articles |
2023 | 6 articles |
2022 | 14 articles |
2021 | 7 articles |
2020 | 22 articles |
2019 | 68 articles |
2018 | 69 articles |
2017 | 116 articles |
2016 | 48 articles |
2015 | 32 articles |
2014 | 44 articles |
2013 | 396 articles |
2012 | 234 articles |
2011 | 45 articles |
2010 | 33 articles |
2009 | 43 articles |
2008 | 53 articles |
2007 | 64 articles |
2006 | 50 articles |
2005 | 59 articles |
2004 | 47 articles |
2003 | 36 articles |
2002 | 26 articles |
2001 | 36 articles |
2000 | 37 articles |
1999 | 39 articles |
1998 | 40 articles |
1997 | 22 articles |
1996 | 16 articles |
1995 | 15 articles |
1994 | 17 articles |
1993 | 6 articles |
1992 | 12 articles |
1991 | 8 articles |
1990 | 1 article |
1989 | 3 articles |
1988 | 3 articles |
1986 | 2 articles |
1985 | 1 article |
1984 | 3 articles |
1983 | 1 article |
1982 | 1 article |
1981 | 1 article |
1978 | 1 article |
1977 | 1 article |
1975 | 1 article |
1788 results found (search time: 15 ms)
991.
Sonia Petrone. Revue canadienne de statistique, 1999, 27(1): 105–126
We propose a Bayesian nonparametric procedure for density estimation with data in a closed, bounded interval, say [0,1]. To this end, we use a prior based on Bernstein polynomials. This corresponds to expressing the density of the data as a mixture of given beta densities, with random weights and a random number of components. The density estimate is then obtained as the corresponding predictive density function. Comparison with classical and Bayesian kernel estimates is provided. The proposed procedure is illustrated in an example; an MCMC algorithm for approximating the estimate is also discussed.
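A minimal sketch (not the paper's MCMC scheme) of the object involved: a Bernstein-polynomial density, i.e. a mixture of Beta(j, k - j + 1) densities on [0, 1] with weights w_j. The order k and the Dirichlet weights below are illustrative placeholders rather than posterior draws.

```python
# Evaluate a Bernstein-polynomial density: a mixture of Beta(j, k - j + 1)
# densities with mixture weights w_j, j = 1..k.
import numpy as np
from scipy.stats import beta

def bernstein_density(x, weights):
    """Density sum_j w_j * Beta(x; j, k - j + 1) on [0, 1]."""
    k = len(weights)
    x = np.asarray(x, dtype=float)
    dens = np.zeros_like(x)
    for j, w in enumerate(weights, start=1):
        dens += w * beta.pdf(x, j, k - j + 1)
    return dens

k = 8
w = np.random.dirichlet(np.ones(k))        # hypothetical random weights
grid = np.linspace(0.0, 1.0, 201)
fhat = bernstein_density(grid, w)          # one draw from the prior on densities
```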
992.
Considering both static and dynamic data-generating processes, this paper uses Monte Carlo simulation to compare asymptotic and bootstrap analyses of long-run parameters based on the Johansen procedure, in terms of estimation bias, actual test size, and test power. The results show that, compared with asymptotic analysis, bootstrap analysis can reduce the deviation of the actual test size from the nominal level, but at the cost of lower test power. Strictly speaking, bootstrap analysis reduces the probability of rejecting a true null hypothesis; if the VAR (Vector Autoregression) model fits the data well, bootstrap analysis may drive the actual test size below the nominal level, in which case it should be used with caution. When the Johansen procedure is used to estimate cointegration parameters, abnormal estimates arise easily, so the bootstrap is not suitable for correcting estimation bias.
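A rough sketch of the kind of size experiment described above: simulate a bivariate system with exactly one cointegrating relation, apply the Johansen trace test of the true null "rank <= 1", and compare the empirical rejection rate with the 5% nominal level. It assumes statsmodels' coint_johansen exposes lr1 (trace statistics) and cvt (90/95/99% critical values); it does not implement the bootstrap procedure studied in the paper.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(0)

def simulate_cointegrated_pair(T=200):
    """y1 is a random walk; y2 = y1 + stationary noise, so cointegration rank is 1."""
    y1 = np.cumsum(rng.normal(size=T))
    y2 = y1 + rng.normal(size=T)
    return np.column_stack([y1, y2])

n_rep, rejections = 500, 0
for _ in range(n_rep):
    data = simulate_cointegrated_pair()
    res = coint_johansen(data, det_order=0, k_ar_diff=1)
    # Trace test of H0: rank <= 1 at the 5% level (cvt columns: 90%, 95%, 99%).
    if res.lr1[1] > res.cvt[1, 1]:
        rejections += 1

print(f"empirical size: {rejections / n_rep:.3f} (nominal 0.05)")
```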
993.
In this paper we discuss new adaptive proposal strategies for sequential Monte Carlo algorithms (also known as particle filters), relying on criteria that evaluate the quality of the proposed particles. The choice of the proposal distribution is a major concern and can dramatically influence the quality of the estimates. We show how the long-used coefficient of variation of the weights (suggested by Kong et al., J. Am. Stat. Assoc. 89:278–288, 1994) can be used to estimate the chi-square distance between the target and instrumental distributions of the auxiliary particle filter. As a by-product of this analysis we obtain an auxiliary adjustment multiplier weight type for which this chi-square distance is minimal. Moreover, we establish an empirical estimate, of linear computational complexity, of the Kullback-Leibler divergence between the distributions involved. Guided by these results, we discuss adaptive design of the particle filter proposal distribution and illustrate the methods on a numerical example. This work was partly supported by the National Research Agency (ANR) under the program "ANR-05-BLAN-0299".
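A minimal numpy sketch of the weight diagnostics referred to above: the coefficient of variation (CV) of the normalized importance weights and the closely related effective sample size. CV^2 is the empirical counterpart of the chi-square distance between target and proposal; this is generic particle-filter bookkeeping, not the authors' adaptive scheme.

```python
import numpy as np

def weight_diagnostics(log_weights):
    """Return (CV, ESS) of the normalized importance weights."""
    lw = np.asarray(log_weights, dtype=float)
    w = np.exp(lw - lw.max())            # stabilize before exponentiating
    w /= w.sum()                         # normalized weights, sum to 1
    n = w.size
    cv2 = n * np.sum(w ** 2) - 1.0       # squared coefficient of variation
    ess = n / (1.0 + cv2)                # equivalently 1 / sum(w_i^2)
    return np.sqrt(cv2), ess

# usage on some hypothetical log-weights
rng = np.random.default_rng(2)
cv, ess = weight_diagnostics(rng.normal(size=1000))
print(f"CV = {cv:.3f}, ESS = {ess:.1f} out of 1000 particles")
```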
994.
The performance of different information criteria – namely Akaike, corrected Akaike (AICC), Schwarz–Bayesian (SBC), and Hannan–Quinn – is investigated so as to choose the optimal lag length in stable and unstable vector autoregressive (VAR) models, both when autoregressive conditional heteroscedasticity (ARCH) is present and when it is not. The investigation covers both large and small sample sizes. The Monte Carlo simulation results show that SBC has relatively better performance in lag-choice accuracy in many situations. It is also generally the least sensitive to ARCH regardless of stability or instability of the VAR model, especially in large sample sizes. These appealing properties of SBC make it the optimal criterion for choosing lag length in many situations, especially in the case of financial data, which are usually characterized by occasional periods of high volatility. SBC also has the best forecasting abilities in the majority of situations in which we vary sample size, stability, variance structure (ARCH or not), and forecast horizon (one period or five). Frequently, AICC also has good lag-choosing and forecasting properties. However, when ARCH is present, the five-period forecast performance of all criteria in all situations worsens.
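A plain numpy sketch of lag selection by SBC for a VAR, using the usual criterion ln|Sigma_hat(p)| + (ln T / T) * p * K^2 computed on a common effective sample (intercepts are left out of the penalty, as is conventional). The simulated VAR(2) below is a hypothetical example, not the Monte Carlo design of the study.

```python
import numpy as np

def sbc_lag_selection(y, p_max):
    """Pick the VAR lag order minimizing SBC = ln|Sigma_hat(p)| + (ln T / T) * p * K^2."""
    y = np.asarray(y, dtype=float)
    T_full, K = y.shape
    scores = {}
    for p in range(1, p_max + 1):
        # Regressors on a common effective sample of size T = T_full - p_max
        Y = y[p_max:]
        X = np.column_stack(
            [np.ones(T_full - p_max)]
            + [y[p_max - lag:T_full - lag] for lag in range(1, p + 1)]
        )
        B, *_ = np.linalg.lstsq(X, Y, rcond=None)     # OLS, equation by equation
        resid = Y - X @ B
        T = Y.shape[0]
        sigma = resid.T @ resid / T                   # ML residual covariance
        _, logdet = np.linalg.slogdet(sigma)
        scores[p] = logdet + (np.log(T) / T) * p * K * K
    return min(scores, key=scores.get), scores

# hypothetical stable bivariate VAR(2) data for demonstration
rng = np.random.default_rng(1)
y = np.zeros((300, 2))
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.2, 0.0], [0.1, 0.2]])
for t in range(2, 300):
    y[t] = A1 @ y[t - 1] + A2 @ y[t - 2] + rng.normal(size=2)
p_hat, _ = sbc_lag_selection(y, p_max=8)
print("SBC-selected lag:", p_hat)
```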
995.
A new class of multivariate skew distributions with applications to Bayesian regression models
The authors develop a new class of distributions by introducing skewness in multivariate elliptically symmetric distributions. The class, which is obtained by using transformation and conditioning, contains many standard families including the multivariate skew-normal and t distributions. The authors obtain analytical forms of the densities and study distributional properties. They give practical applications in Bayesian regression models and results on the existence of the posterior distributions and moments under improper priors for the regression coefficients. They illustrate their methods using practical examples.
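A small sketch of the conditioning construction that underlies skew-normal-type laws: augment the target normal vector X with one extra standard normal component X0 and keep the draws with X0 > 0. This is the familiar textbook device, shown only to illustrate how conditioning induces skewness; it is not the authors' general transformation-and-conditioning class, and the covariance parameters below are illustrative.

```python
import numpy as np

def sample_skew_by_conditioning(n, Sigma, delta, seed=None):
    """
    Draw n samples from the law of X | X0 > 0, where (X0, X) is jointly normal
    with Var(X) = Sigma, Var(X0) = 1 and Cov(X0, X) = delta.  Conditioning on
    the latent sign produces a multivariate skew-normal-type distribution.
    """
    rng = np.random.default_rng(seed)
    d = Sigma.shape[0]
    Omega = np.block([[np.ones((1, 1)), delta[None, :]],
                      [delta[:, None], Sigma]])        # joint covariance of (X0, X)
    out = np.empty((n, d))
    filled = 0
    while filled < n:
        z = rng.multivariate_normal(np.zeros(d + 1), Omega, size=2 * (n - filled))
        keep = z[z[:, 0] > 0, 1:]                      # condition on X0 > 0
        take = min(len(keep), n - filled)
        out[filled:filled + take] = keep[:take]
        filled += take
    return out

# illustrative parameters (Omega must stay positive definite)
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
delta = np.array([0.7, 0.2])
samples = sample_skew_by_conditioning(5000, Sigma, delta, seed=4)
```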
996.
Nur Aainaa Rozliman, Rossita Muhamad Yunus. Journal of Statistical Computation and Simulation, 2018, 88(2): 203–220
In most practical applications, the quality of count data is often compromised by errors-in-variables (EIVs). In this paper, we apply a Bayesian approach to reduce bias in estimating the parameters of count-data regression models that have mismeasured independent variables. Furthermore, the exposure model is specified with a flexible distribution, so our approach remains robust against any departures from normality in the true underlying exposure distribution. The proposed method is also useful in realistic situations because the variance of the EIVs is estimated rather than assumed known, in contrast with other bias-correction methods for count-data EIV regression models. We conduct simulation studies on synthetic data sets using Markov chain Monte Carlo simulation techniques to investigate the performance of our approach. Our findings show that the flexible Bayesian approach is able to estimate the true regression parameters consistently and accurately.
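A brief simulation sketch of the problem this abstract addresses: when a Poisson regression covariate is observed with error, the naive plug-in estimate of its coefficient is attenuated. The sketch uses statsmodels' GLM with a Poisson family and only illustrates the bias that motivates a correction, not the authors' Bayesian estimator; all numbers are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 2000
x_true = rng.normal(size=n)                       # unobserved true covariate
x_obs = x_true + rng.normal(scale=0.8, size=n)    # covariate observed with error
beta0, beta1 = 0.5, 0.8
y = rng.poisson(np.exp(beta0 + beta1 * x_true))   # counts generated from x_true

# Naive fit that plugs in the error-prone covariate: beta1 is attenuated
naive = sm.GLM(y, sm.add_constant(x_obs), family=sm.families.Poisson()).fit()

# "Oracle" fit using the true covariate, for comparison
oracle = sm.GLM(y, sm.add_constant(x_true), family=sm.families.Poisson()).fit()

print("naive  beta1:", naive.params[1])   # typically well below 0.8
print("oracle beta1:", oracle.params[1])  # close to 0.8
```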
997.
Niladri Chakraborty, Narayanaswamy Balakrishnan. Journal of Statistical Computation and Simulation, 2018, 88(9): 1759–1781
Distribution-free control charts have gained momentum in recent years because they are more efficient at detecting a shift when there is a lack of information regarding the underlying process distribution. However, a distribution-free control chart for monitoring the process location often requires information on the in-control process median. This is challenging because, in practice, information on the location parameter may not be available in advance, so the parameter must be estimated. In view of this, a time-weighted control chart, labelled the Generally Weighted Moving Average exceedance chart (GWMA-EX chart for short), is proposed for detecting a shift in the unknown process location; the chart is based on an exceedance statistic and requires no information on the process distribution. An extensive performance analysis shows that the proposed GWMA-EX control chart is, in many cases, better than its contenders.
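A loose sketch of a GWMA chart built on exceedance counts. The exceedance statistic here is the number of observations in each test sample exceeding the reference-sample median, and the weight form q^((j-1)^alpha) - q^(j^alpha) is the one commonly associated with GWMA schemes; both are assumptions made for illustration, not the exact charting statistic or control limits proposed by the authors.

```python
import numpy as np

def gwma_exceedance_statistics(counts, n_test, q=0.9, alpha=0.7):
    """
    GWMA of exceedance counts X_1, X_2, ...  Assumed weight form:
    w_j = q**((j-1)**alpha) - q**(j**alpha) on the j-th most recent count,
    with the leftover weight q**(t**alpha) placed on the (approximate)
    in-control mean exceedance count n_test / 2.
    """
    counts = np.asarray(counts, dtype=float)
    mu0 = n_test * 0.5
    stats = []
    for t in range(1, len(counts) + 1):
        j = np.arange(1, t + 1)
        w = q ** ((j - 1) ** alpha) - q ** (j ** alpha)
        recent_first = counts[t - 1::-1]              # X_t, X_{t-1}, ..., X_1
        stats.append(np.sum(w * recent_first) + q ** (t ** alpha) * mu0)
    return np.array(stats)

# hypothetical data: reference sample defines the median, test samples of size 10
rng = np.random.default_rng(7)
ref_median = np.median(rng.normal(size=100))
test_samples = rng.normal(loc=0.3, size=(30, 10))     # small upward shift
exceed_counts = (test_samples > ref_median).sum(axis=1)
plot_stats = gwma_exceedance_statistics(exceed_counts, n_test=10)
```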
998.
Yuh-Jenn Wu, Wei-Quan Fang, Li-Hsueh Cheng, Kai-Chi Chu, Yin-Tzer Shih. Journal of Statistical Computation and Simulation, 2018, 88(16): 3132–3150
Interval-censored survival data arise often in medical applications and clinical trials [Wang L, Sun J, Tong X. Regression analysis of case II interval-censored failure time data with the additive hazards model. Statistica Sinica. 2010;20:1709–1723]. However, most existing interval-censored survival analysis techniques suffer from challenges such as heavy computational cost or non-proportionality of hazard rates due to complicated data structures [Wang L, Lin X. A Bayesian approach for analyzing case 2 interval-censored data under the semiparametric proportional odds model. Statistics & Probability Letters. 2011;81:876–883; Banerjee T, Chen M-H, Dey DK, et al. Bayesian analysis of generalized odds-rate hazards models for survival data. Lifetime Data Analysis. 2007;13:241–260]. To address these challenges, in this paper we introduce a flexible Bayesian non-parametric procedure for the estimation of the odds under case II interval censoring. We use Bernstein polynomials to introduce a prior for modeling the odds and propose a novel, easy-to-implement sampling scheme based on Markov chain Monte Carlo algorithms to study the posterior distributions. We also give general results on asymptotic properties of the posterior distributions. The simulated examples show that the proposed approach is quite satisfactory in the cases considered. The use of the proposed method is further illustrated by analyzing the hemophilia study data [McMahan CS, Wang L. A package for semiparametric regression analysis of interval-censored data; 2015. http://CRAN.R-project.org/package=ICsurv].
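A minimal sketch of the Bernstein-polynomial device referred to above, used generically to represent a nondecreasing function on [0, 1] (for instance a baseline odds function after rescaling time): if the coefficients are nondecreasing, the Bernstein polynomial is nondecreasing. This shows only the basis construction, not the authors' prior or MCMC sampler; the coefficients below are hypothetical.

```python
import numpy as np
from scipy.special import comb

def bernstein_poly(t, coefs):
    """B(t) = sum_j c_j * C(k, j) * t**j * (1 - t)**(k - j), j = 0..k.
    If the coefficients c_j are nondecreasing, B is nondecreasing on [0, 1]."""
    t = np.asarray(t, dtype=float)
    k = len(coefs) - 1
    j = np.arange(k + 1)
    basis = comb(k, j) * t[:, None] ** j * (1.0 - t[:, None]) ** (k - j)
    return basis @ np.asarray(coefs, dtype=float)

# hypothetical nondecreasing coefficients -> a monotone "baseline odds" curve
coefs = np.cumsum([0.0, 0.2, 0.1, 0.4, 0.05, 0.3])
grid = np.linspace(0.0, 1.0, 101)
odds_curve = bernstein_poly(grid, coefs)
```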
999.
This paper considers model averaging for the ordered probit and nested logit models, which are widely used in empirical research. Within the frameworks of these models, we examine a range of model averaging methods, including the jackknife method, which is proved to have an optimal asymptotic property in this paper. We conduct a large-scale simulation study to examine the behaviour of these model averaging estimators in finite samples, and draw comparisons with model selection estimators. Our results show that while neither averaging nor selection is a consistently better strategy, model selection results in the poorest estimates far more frequently than averaging, and more often than not, averaging yields superior estimates. Among the averaging methods considered, the one based on a smoothed version of the Bayesian information criterion frequently produces the most accurate estimates. In three real data applications, we demonstrate the usefulness of model averaging in mitigating problems associated with the 'replication crisis' that commonly arises with model selection.
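A small numpy sketch of the smoothed-BIC weighting idea mentioned above: fit a set of candidate models, form weights proportional to exp(-BIC/2), and average the fitted values. For simplicity the candidates here are nested linear regressions rather than the ordered probit or nested logit models of the paper; all data and model choices are hypothetical.

```python
import numpy as np

def bic_ols(y, X):
    """Gaussian-likelihood BIC for an OLS fit (the error-variance parameter is
    common to all candidates, so it is omitted from the penalty)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return -2.0 * loglik + k * np.log(n), X @ beta

# hypothetical data: only the first two regressors matter
rng = np.random.default_rng(3)
n = 200
X_full = np.column_stack([np.ones(n), rng.normal(size=(n, 4))])
y = X_full[:, :3] @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

# candidate models: intercept plus the first m regressors, m = 1..4
bics, fits = [], []
for m in range(1, 5):
    bic, fitted = bic_ols(y, X_full[:, : m + 1])
    bics.append(bic)
    fits.append(fitted)

# smoothed-BIC weights: w_m proportional to exp(-BIC_m / 2)
bics = np.array(bics)
w = np.exp(-(bics - bics.min()) / 2.0)
w /= w.sum()
averaged_fit = np.tensordot(w, np.array(fits), axes=1)   # weighted average of fitted values
```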
1000.
Pierre Chaussé. Econometric Reviews, 2018, 37(7): 719–743
This article investigates alternative generalized method of moments (GMM) estimation procedures for a stochastic volatility model with realized volatility measures. The extended model can accommodate a more general correlation structure. General closed-form moment conditions are derived to examine the model properties and to evaluate the performance of various GMM estimation procedures in a Monte Carlo environment, including standard GMM, principal-component GMM, robust GMM, and regularized GMM. An application to five company stocks and one stock index is also provided as an empirical demonstration.
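A generic two-step GMM sketch with scipy, illustrating the estimation machinery (sample moment conditions, first-step identity weighting, second-step efficient weighting) rather than the stochastic-volatility moment conditions derived in the article. The toy moment conditions assume a normal sample and estimate its mean and variance from three overidentifying moments; everything here is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
x = rng.normal(loc=1.5, scale=2.0, size=1000)      # hypothetical "data"

def moments(theta, x):
    """Moment conditions for (mu, sigma2) of an assumed normal sample:
    E[x - mu] = 0, E[(x - mu)^2 - sigma2] = 0, E[(x - mu)^3] = 0."""
    mu, sigma2 = theta
    e = x - mu
    return np.column_stack([e, e ** 2 - sigma2, e ** 3])

def gmm_objective(theta, x, W):
    gbar = moments(theta, x).mean(axis=0)          # sample moment vector
    return gbar @ W @ gbar                         # quadratic form g' W g

theta0 = np.array([0.0, 1.0])

# Step 1: identity weighting matrix
step1 = minimize(gmm_objective, theta0, args=(x, np.eye(3)), method="Nelder-Mead")

# Step 2: efficient weighting matrix from the step-1 moment covariance
S = np.cov(moments(step1.x, x), rowvar=False)
step2 = minimize(gmm_objective, step1.x, args=(x, np.linalg.inv(S)), method="Nelder-Mead")
print("two-step GMM estimate (mu, sigma2):", step2.x)
```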