11.
Utilizing time series modeling entails estimating the model parameters and their dispersion. Classical estimators for autocorrelated observations are sensitive to the presence of different types of outliers, which leads to biased estimates and misinterpretation. It is therefore important to develop robust parameter-estimation methods that are not influenced by contamination. In this article, an estimation method called Iteratively Robust Filtered Fast-τ (IRFFT) is proposed for general autoregressive models. Compared with other commonly accepted methods, it is more efficient and less sensitive to contamination, owing to its desirable robustness properties; this is demonstrated using the MSE, influence-function, and breakdown-point criteria.
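The sensitivity described above is easy to demonstrate numerically. The sketch below illustrates the problem only, not the proposed IRFFT estimator: it contaminates a simulated AR(1) series with a single additive outlier and shows how the classical lag-1 Yule–Walker estimate collapses toward zero (all simulation settings are my own assumptions).

```python
import numpy as np

def ar1_yule_walker(x):
    """Classical lag-1 Yule-Walker estimate of the AR(1) coefficient."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

rng = np.random.default_rng(0)
n, phi = 500, 0.8
eps = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

phi_clean = ar1_yule_walker(x)

# A single large additive outlier inflates the denominator (variance)
# far more than the lag-1 autocovariance, dragging the estimate down.
y = x.copy()
y[250] += 50.0
phi_contam = ar1_yule_walker(y)

print(round(phi_clean, 3), round(phi_contam, 3))
```

With these settings the clean estimate is close to the true 0.8, while the contaminated one drops to roughly a third of it, which is the bias that robust filtered estimators are designed to resist.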
12.
Guoping Zeng, Communications in Statistics – Theory and Methods, 2017, 46(22): 11194–11203
The problems of existence and uniqueness of maximum likelihood estimates for logistic regression were completely solved by Silvapulle in 1981 and by Albert and Anderson in 1984. In this paper, we extend these well-known results to weighted logistic regression. We analytically prove the equivalence between the overlap condition used by Albert and Anderson and the one used by Silvapulle. We show that the maximum likelihood estimate of weighted logistic regression does not exist if there is a complete or quasi-complete separation of the data points, and that it exists and is unique if the data points overlap. Our proofs and results for weighted logistic regression also apply to unweighted logistic regression.
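In one dimension the complete-separation condition is simple to check directly: the MLE fails to exist when a threshold on the covariate splits the classes perfectly. A minimal sketch under that simplification (the general Albert–Anderson condition involves linear separability in R^p; the function name and toy data are illustrative):

```python
import numpy as np

def completely_separated_1d(x, y):
    """True if a threshold on a single covariate splits the classes
    perfectly, i.e. the logistic-regression MLE does not exist."""
    x0, x1 = x[y == 0], x[y == 1]
    return x0.max() < x1.min() or x1.max() < x0.min()

x = np.array([0.1, 0.4, 0.9, 1.3, 2.0, 2.7])
y = np.array([0, 0, 0, 1, 1, 1])
print(completely_separated_1d(x, y))    # → True: MLE diverges

y2 = np.array([0, 1, 0, 1, 0, 1])       # overlapping classes
print(completely_separated_1d(x, y2))   # → False: MLE exists and is unique
```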
13.
Journal of Statistical Computation and Simulation, 2012, 82(12): 2021–2037
Motivated by several practical issues, we consider the problem of estimating the mean of a p-variate population (not necessarily normal) with unknown finite covariance, under a quadratic loss function. We give a number of estimators for the mean whose loss functions admit expansions to the order of p^(-1/2) as p → ∞. These estimators contain Stein's estimate [Inadmissibility of the usual estimator for the mean of a multivariate normal population, in Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, J. Neyman, ed., University of California Press, Berkeley, 1956, pp. 197–206] as a particular case, and also contain 'multiple shrinkage' estimates that improve on Stein's estimate. Finally, we perform a simulation study to compare the different estimates.
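Stein's 1956 result referenced above can be sketched with the classical James–Stein shrinkage rule, which dominates the sample mean under quadratic loss whenever p ≥ 3. This is an illustration in the simplest normal, unit-variance setting, not the paper's general non-normal construction; the dimension, true mean, and replicate count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
p, reps = 20, 2000
theta = np.full(p, 0.5)                       # true mean vector

x = rng.normal(theta, 1.0, size=(reps, p))    # one observation per replicate

# James-Stein estimator: shrink each observation toward the origin.
norms2 = (x ** 2).sum(axis=1, keepdims=True)
js = (1 - (p - 2) / norms2) * x

mse_mle = ((x - theta) ** 2).sum(axis=1).mean()
mse_js = ((js - theta) ** 2).sum(axis=1).mean()
print(mse_mle > mse_js)   # → True: shrinkage wins under quadratic loss
```

The MLE's risk here is p = 20 by construction, while the shrinkage estimator's risk is far lower because the true mean is close to the shrinkage target.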
14.
The seasonal fractional ARIMA (ARFISMA) model with infinite-variance innovations is used in the analysis of seasonal long-memory time series with large fluctuations (heavy-tailed distributions). Two methods are proposed to estimate the parameters of the stable ARFISMA model: the empirical characteristic function (ECF) procedure developed by Knight and Yu [The empirical characteristic function in time series estimation. Econometric Theory. 2002;18:691–721], and a two-step method (TSM). The ECF method estimates all the parameters simultaneously, while the TSM applies in its first step the Markov chain Monte Carlo–Whittle approach introduced by Ndongo et al. [Estimation of long-memory parameters for seasonal fractional ARIMA with stable innovations. Stat Methodol. 2010;7:141–151], combined in its second step with the maximum likelihood estimation method developed by Alvarez and Olivares [Méthodes d'estimation pour des lois stables avec des applications en finance. Journal de la Société Française de Statistique. 2005;1(4):23–54]. Monte Carlo simulations are used to evaluate the finite-sample performance of these estimation techniques.
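The ECF idea can be illustrated for the stability index alone: a symmetric α-stable law has |φ(t)| = exp(−|σt|^α), so log(−log|φ̂(t)|) is linear in log t with slope α. The sketch below recovers α ≈ 2 from Gaussian data (the Gaussian is stable with α = 2); the grid of t values and sample size are arbitrary, and this is far simpler than the full ECF procedure of Knight and Yu:

```python
import numpy as np

def ecf_alpha(x, t):
    """Estimate the stability index alpha from the empirical characteristic
    function, using log(-log|phi(t)|) = alpha*log(t) + alpha*log(sigma)."""
    phi = np.abs(np.exp(1j * np.outer(t, x)).mean(axis=1))
    slope, _ = np.polyfit(np.log(t), np.log(-np.log(phi)), 1)
    return slope

rng = np.random.default_rng(2)
x = rng.normal(0, 1, 20000)          # Gaussian data: true alpha = 2
t = np.array([0.5, 1.0, 1.5, 2.0])   # illustrative evaluation grid
alpha_hat = ecf_alpha(x, t)
print(round(alpha_hat, 2))
```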
15.
Small area estimation (SAE) concerns how to reliably estimate population quantities of interest when some areas or domains have very limited samples. This is an important issue in large population surveys, because the geographical areas or groups with only small samples, or even no samples, are often of interest to researchers and policy-makers. For example, large population health surveys, such as the Behavioral Risk Factor Surveillance System and the Ohio Medicaid Assessment Survey (OMAS), are regularly conducted to monitor insurance coverage and healthcare utilization. Classic approaches usually provide accurate estimators at the state level or for large geographical regions, but they fail to provide reliable estimators for many rural counties where the samples are sparse. Moreover, a systematic evaluation of the performance of SAE methods in real-world settings is lacking in the literature. In this paper, we propose a Bayesian hierarchical model with constraints on the parameter space and show that it provides superior estimators for county-level adult uninsured rates in Ohio based on the 2012 OMAS data. Furthermore, we perform extensive simulation studies to compare our methods with a collection of common SAE strategies, including direct estimators, synthetic estimators, composite estimators, and the Bayesian hierarchical model-based estimators of Datta, Ghosh, Steorts, and Maples [Bayesian benchmarking with applications to small area estimation. Test. 2011;20(3):574–588]. To set a fair basis for comparison, we generate our simulation data with characteristics mimicking the real OMAS data, so that neither model-based nor design-based strategies use the true model specification. The estimators based on our proposed model are shown to outperform the other estimators for small areas in both the simulation study and the real-data analysis.
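Of the baseline strategies listed above, the composite estimator is the simplest to sketch: it blends an area's direct (sample-mean) estimate with a synthetic estimate borrowed from the whole population, weighting by area sample size so that sparse areas lean on the synthetic value. The shrinkage weight below is an arbitrary assumption for illustration, not the weighting used in the paper, and the toy numbers are made up:

```python
import numpy as np

def composite_estimate(area_samples, overall_mean, w=None):
    """Composite SAE estimator: weight the direct estimate by a function of
    the area sample size, falling back on the synthetic (overall) estimate."""
    n = len(area_samples)
    direct = np.mean(area_samples) if n > 0 else 0.0
    if w is None:
        w = n / (n + 10.0)   # assumed shrinkage weight, for illustration only
    return w * direct + (1 - w) * overall_mean

overall = 0.12                 # e.g. a statewide uninsured rate
small_area = [0, 0, 1]         # only 3 respondents in this county
print(round(composite_estimate(small_area, overall), 3))   # → 0.169
```

With only three respondents the noisy direct estimate of 0.333 is pulled most of the way back toward the statewide 0.12, which is exactly the stabilizing behaviour composite estimators are meant to provide.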
16.
In quantitative trait linkage studies using experimental crosses, the conventional normal location-shift model and other parameterizations may be unnecessarily restrictive. We generalize the mapping problem to a genuinely nonparametric setup and provide a robust estimation procedure for the situation where the underlying phenotype distributions are completely unspecified. Classical Wilcoxon–Mann–Whitney statistics are employed for point and interval estimation of QTL positions and effects.
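The Wilcoxon–Mann–Whitney statistic underlying the procedure counts, over all cross-sample pairs, how often a value from the first sample exceeds one from the second, with ties counted as one half. A minimal sketch of the statistic itself (the QTL-specific point and interval estimation is beyond a short illustration; the toy data are made up):

```python
import numpy as np

def mann_whitney_u(x, y):
    """U statistic: number of pairs (xi, yj) with xi > yj, ties counted 1/2."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    diff = x[:, None] - y[None, :]          # all pairwise differences
    return (diff > 0).sum() + 0.5 * (diff == 0).sum()

x = [3.1, 4.5, 2.8, 5.0]
y = [1.2, 3.0, 2.5]
print(mann_whitney_u(x, y))   # → 11.0
```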
17.
Statistical inferences for the geometric process (GP) are derived when the distribution of the first occurrence time is assumed to be inverse Gaussian (IG). An α-series process is introduced as a possible alternative to the GP, since the GP is sometimes inappropriate for reliability and scheduling problems. In this study, the statistical inference problem for the α-series process is considered when the distribution of the first occurrence time is IG. The estimators of the parameters α, μ, and σ² are obtained by the maximum likelihood (ML) method, and the asymptotic distributions and consistency properties of the ML estimators are derived. To compare the efficiency of the ML estimators with that of the widely used nonparametric modified moment (MM) estimators, Monte Carlo simulations are performed; the results show that the ML estimators are more efficient than the MM estimators. Moreover, two real-life datasets are analysed for illustration.
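For the inverse Gaussian distribution itself, ML estimation has a well-known closed form: with the usual IG(μ, λ) parameterization, μ̂ = x̄ and λ̂ = n / Σ(1/xᵢ − 1/x̄). The sketch below checks this on simulated data; it illustrates only the IG building block, not the α-series process structure of the abstract, and the simulation settings are arbitrary:

```python
import numpy as np
from scipy.stats import invgauss

def invgauss_mle(x):
    """Closed-form ML estimates for IG(mu, lam):
    mu_hat = sample mean, lam_hat = n / sum(1/x_i - 1/mu_hat)."""
    x = np.asarray(x, float)
    mu = x.mean()
    lam = len(x) / np.sum(1.0 / x - 1.0 / mu)
    return mu, lam

# scipy's invgauss(mu=m, scale=s) has mean m*s and shape lambda = s,
# so IG(mean=2, lambda=4) corresponds to mu=0.5, scale=4.
rng = np.random.default_rng(3)
x = invgauss.rvs(0.5, scale=4.0, size=50000, random_state=rng)
mu_hat, lam_hat = invgauss_mle(x)
print(round(mu_hat, 2), round(lam_hat, 2))
```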
18.
Mostafa S. Aminzadeh, Communications in Statistics – Theory and Methods, 2013, 42(1): 343–353
A method for obtaining prediction intervals for the outcome of a future experiment is presented. The method uses hypothesis testing as a tool to derive prediction intervals and assumes that the probability distributions of the informative and future experiments are one-parameter exponential families. Asymptotically similar mean-coverage prediction intervals are derived using the score test as the test statistic. Examples are presented, and the asymptotic prediction limits are compared with prediction limits given in the literature.
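For the simplest one-parameter exponential family, the exponential distribution, an exact prediction interval for a future observation follows from the pivot nY/ΣXᵢ ~ F(2, 2n). The sketch below uses this textbook construction as a point of comparison; it is not the score-test-based interval derived in the paper, and the sample sizes are arbitrary:

```python
import numpy as np
from scipy.stats import f, expon

def exp_prediction_interval(x, level=0.95):
    """Exact prediction interval for one future exponential observation,
    based on the pivot n*Y / sum(X) ~ F(2, 2n)."""
    x = np.asarray(x, float)
    n = len(x)
    xbar = x.mean()
    a = (1 - level) / 2
    return xbar * f.ppf(a, 2, 2 * n), xbar * f.ppf(1 - a, 2, 2 * n)

rng = np.random.default_rng(4)
x = expon.rvs(scale=3.0, size=30, random_state=rng)
lo, hi = exp_prediction_interval(x)

# Empirical check: coverage over many independent future draws.
future = expon.rvs(scale=3.0, size=100000, random_state=rng)
coverage = np.mean((future >= lo) & (future <= hi))
print(round(coverage, 2))
```

The conditional coverage for any one observed sample fluctuates around the nominal 95%, since the pivotal guarantee is unconditional over repeated informative samples.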
19.
The current financial turbulence in Europe inspires, and perhaps requires, researchers to rethink how to measure incomes, wealth, and other parameters of interest to policy-makers and others. The noticeable increase in disparities between less and more fortunate individuals suggests that measures based on comparing the incomes of the less fortunate with the mean of the entire population may not be adequate. The classical Gini index and related indices of economic inequality, however, are based on exactly such comparisons. For this reason, in this paper we explore and contrast the classical Gini index with the newer Zenga index, which is based on comparing the means of the less and more fortunate sub-populations, irrespective of the threshold used to delineate the two sub-populations. The empirical part of the paper is based on the 2001 wave of the European Community Household Panel data set provided by Eurostat. Even though the sample sizes appear to be large, we supplement the estimated Gini and Zenga indices with measures of variability in the form of normal, bootstrap-t, and bias-corrected and accelerated (BCa) bootstrap confidence intervals.
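Both indices have simple plug-in sample versions: the Gini index is the mean absolute difference between all pairs of incomes divided by twice the mean, while Zenga's measure compares, at each threshold, the mean of the poorer group with the mean of the richer group and averages over thresholds. A discrete sketch of the point estimates only (the paper's bootstrap inference is not reproduced, and the toy incomes are made up):

```python
import numpy as np

def gini(x):
    """Plug-in Gini index: mean absolute difference over twice the mean."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    return np.abs(x[:, None] - x[None, :]).sum() / (2 * n * n * x.mean())

def zenga(x):
    """Plug-in Zenga index: average over thresholds k of
    1 - (mean of poorest k) / (mean of richest n - k)."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    z = [1 - x[:k].mean() / x[k:].mean() for k in range(1, n)]
    return float(np.mean(z))

incomes = [10, 20, 30, 40, 100]
print(round(gini(incomes), 3), round(zenga(incomes), 3))   # → 0.4 0.747
```

The Zenga value is typically larger than the Gini for the same data because every threshold contrasts the two sub-population means directly rather than against the overall mean.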
20.
In this paper, we consider the simple step-stress model for a two-parameter exponential distribution when both parameters are unknown and the data are Type-II censored. It is assumed that under the two different stress levels only the scale parameter changes, while the location parameter remains unchanged. It is observed that the maximum likelihood estimators do not always exist; we obtain the maximum likelihood estimates of the unknown parameters whenever they do. We provide the exact conditional distributions of the maximum likelihood estimators of the scale parameters. Since constructing exact confidence intervals from the conditional distributions is very difficult, we propose using the observed Fisher information matrix for this purpose, and we also suggest the bootstrap method for constructing confidence intervals. Bayes estimates and associated credible intervals are obtained using the importance sampling technique. Extensive simulations are performed to compare the performance of the different confidence and credible intervals in terms of their coverage percentages and average lengths; the performance of the bootstrap confidence intervals is quite satisfactory even for small sample sizes.
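A percentile bootstrap confidence interval of the kind compared in the simulations can be sketched generically; the example applies it to the mean of an exponential sample as a stand-in, since the step-stress likelihood itself is beyond a short illustration (the resampling count B, seed, and data are arbitrary assumptions):

```python
import numpy as np

def percentile_bootstrap_ci(x, stat, level=0.95, B=2000, seed=0):
    """Percentile bootstrap CI for stat(x): resample with replacement,
    recompute the statistic, and take empirical quantiles."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    boot = np.array([stat(rng.choice(x, size=len(x), replace=True))
                     for _ in range(B)])
    a = (1 - level) / 2
    return np.quantile(boot, a), np.quantile(boot, 1 - a)

rng = np.random.default_rng(5)
x = rng.exponential(scale=2.0, size=100)   # true scale parameter is 2
lo, hi = percentile_bootstrap_ci(x, np.mean)
print(round(lo, 2), round(hi, 2))
```

The interval necessarily brackets the observed sample mean; its coverage of the true parameter is what the paper's simulations assess across repeated samples.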