2,175 results found.
1.
Olivier Cappé, Randal Douc, Eric Moulines & Christian Robert 《Scandinavian Journal of Statistics》2002,29(4):615-635
While much used in practice, latent variable models raise challenging estimation problems due to the intractability of their likelihood. Monte Carlo maximum likelihood (MCML), as proposed by Geyer & Thompson (1992), is a simulation-based approach to maximum likelihood approximation applicable to general latent variable models. MCML can be described as an importance sampling method in which the likelihood ratio is approximated by Monte Carlo averages of importance ratios simulated from the complete data model corresponding to an arbitrary reference value of the unknown parameter. This paper studies the asymptotic (in the number of observations) performance of the MCML method in the case of latent variable models with independent observations. This is in contrast with previous works on the same topic, which only considered conditional convergence to the maximum likelihood estimator for a fixed set of observations. A first important result is that when the reference value is fixed, the MCML method can only be consistent if the number of simulations grows exponentially fast with the number of observations. If, on the other hand, it is obtained from a consistent sequence of estimates of the unknown parameter, then the requirements on the number of simulations are shown to be much weaker.
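As an illustration of the MCML idea described above, here is a minimal sketch on a toy Gaussian latent-variable model (the model, parameter values, and grid search are assumptions for illustration, not the paper's setting). The marginal likelihood is tractable in this toy case, so the Monte Carlo approximation can be checked against the exact MLE (the sample mean).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (illustrative only): latent z_i ~ N(theta, 1),
# observed y_i | z_i ~ N(z_i, 1).  Marginally y_i ~ N(theta, 2),
# so the exact MLE is the sample mean -- a check on the approximation.
n, theta_true = 200, 1.0
y = rng.normal(rng.normal(theta_true, 1.0, n), 1.0)

theta0 = 0.5   # arbitrary reference value of the unknown parameter
m = 4000       # simulations per observation

# Simulate from the complete-data conditional under theta0:
# z | y, theta0 ~ N((y + theta0)/2, 1/2).
zs = rng.normal((y[:, None] + theta0) / 2.0, np.sqrt(0.5), (n, m))

def mcml_loglik_ratio(theta):
    """Monte Carlo estimate of log L(theta) - log L(theta0).

    The importance ratio p(y, z; theta) / p(y, z; theta0) reduces to
    N(z; theta, 1) / N(z; theta0, 1) because p(y | z) cancels."""
    log_r = (theta - theta0) * zs - (theta**2 - theta0**2) / 2.0
    return np.log(np.exp(log_r).mean(axis=1)).sum()

# Maximize the approximated log-likelihood ratio over a grid.
grid = np.linspace(-1.0, 3.0, 401)
theta_mcml = grid[np.argmax([mcml_loglik_ratio(t) for t in grid])]
print(theta_mcml, y.mean())
```

With a reference value this close to the truth and a large simulation budget, the MCML maximizer lands near the exact MLE; the paper's point is how fast `m` must grow with `n` for this to hold in general.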
2.
If a population contains many zero values and the sample size is not very large, the traditional normal approximation‐based confidence intervals for the population mean may have poor coverage probabilities. This problem is substantially reduced by constructing parametric likelihood ratio intervals when an appropriate mixture model can be found. In the context of survey sampling, however, there is a general preference for making minimal assumptions about the population under study. The authors have therefore investigated the coverage properties of nonparametric empirical likelihood confidence intervals for the population mean. They show that under a variety of hypothetical populations, these intervals often outperformed parametric likelihood intervals by having more balanced coverage rates and larger lower bounds.
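A minimal sketch of an empirical likelihood interval for a mean, on a hypothetical zero-inflated sample (the mixture proportions and grid are assumptions for illustration, not the survey data analysed in the paper). The Lagrange multiplier is found by bisection, and the 95% interval collects the means whose statistic falls below the chi-square(1) cutoff.

```python
import numpy as np

def el_stat(x, mu, tol=1e-10):
    """Empirical-likelihood statistic -2 log R(mu) for a population mean.

    Solves the Lagrange condition sum(d / (1 + lam * d)) = 0, d = x - mu,
    by bisection (the left side is strictly decreasing in lam).
    Requires min(x) < mu < max(x)."""
    d = x - mu
    lo = -1.0 / d.max() * (1 - 1e-10)   # keep every weight positive
    hi = -1.0 / d.min() * (1 - 1e-10)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.sum(d / (1.0 + mid * d)) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * np.sum(np.log1p(lam * d))

rng = np.random.default_rng(1)
# Hypothetical zero-inflated sample: roughly 60% zeros, rest exponential.
x = rng.exponential(10.0, 100) * (rng.random(100) < 0.4)

# 95% interval: all mu with -2 log R(mu) below the chi-square(1) cutoff.
grid = np.linspace(0.2, 12.0, 600)
keep = np.array([el_stat(x, mu) for mu in grid]) < 3.841
ci = (grid[keep][0], grid[keep][-1])
print(ci, x.mean())
```

Note the lower bound stays strictly positive, which is exactly the "larger lower bounds" behaviour the abstract highlights for skewed, zero-heavy populations.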
3.
Amy H. Herring, Joseph G. Ibrahim & Stuart R. Lipsitz 《Journal of the Royal Statistical Society. Series C, Applied statistics》2004,53(2):293-310
Summary. Non-ignorable missing data, a serious problem in both clinical trials and observational studies, can lead to biased inferences. Quality-of-life measures have become increasingly popular in clinical trials. However, these measures are often incompletely observed, and investigators may suspect that missing quality-of-life data are likely to be non-ignorable. Although several recent references have addressed missing covariates in survival analysis, they all required the assumption that missingness is at random or that all covariates are discrete. We present a method for estimating the parameters in the Cox proportional hazards model when missing covariates may be non-ignorable and continuous or discrete. Our method is useful in reducing the bias and improving efficiency in the presence of missing data. The methodology clearly specifies assumptions about the missing data mechanism and, through sensitivity analysis, helps investigators to understand the potential effect of missing data on study results.
4.
Merging information for semiparametric density estimation
Konstantinos Fokianos 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2004,66(4):941-958
Summary. The density ratio model specifies that the likelihood ratio of m − 1 probability density functions with respect to the mth is of known parametric form, without reference to any parametric model. We study the semiparametric inference problem related to the density ratio model by appealing to the methodology of empirical likelihood. The combined data from all the samples lead to more efficient kernel density estimators for the unknown distributions. We adopt variants of well-established techniques to choose the smoothing parameter for the proposed density estimators.
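A standard fact behind density ratio models (not specific to this paper) is that when the ratio has the exponential-tilt form exp(α + βx), the tilt parameters can be estimated by fitting prospective logistic regression to the pooled sample. A sketch with simulated normal samples, for which the true tilt is known (β = 1, α = −0.5); the Newton iteration below is a hand-rolled logistic fit to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two samples whose true density ratio has the exponential-tilt form
# p1(x)/p0(x) = exp(alpha + beta * x): N(1,1) vs N(0,1) gives
# beta = 1, alpha = -0.5.
x0 = rng.normal(0.0, 1.0, 500)    # reference sample
x1 = rng.normal(1.0, 1.0, 500)    # tilted sample
x = np.concatenate([x0, x1])
lab = np.concatenate([np.zeros(500), np.ones(500)])   # sample indicator

# Newton-Raphson for logistic regression of the label on x; the fitted
# slope estimates beta (the intercept absorbs log(n1/n0), zero here).
X = np.column_stack([np.ones(x.size), x])
w = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (lab - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    w = w + np.linalg.solve(hess, grad)

alpha_hat, beta_hat = w
print(alpha_hat, beta_hat)
```

The estimated tilt can then be used to reweight the pooled sample, which is the sense in which combining all m samples sharpens each density estimate.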
5.
Biao Zhang 《Australian & New Zealand Journal of Statistics》2004,46(3):407-423
Demonstrated equivalence between a categorical regression model based on case‐control data and an I‐sample semiparametric selection bias model leads to a new goodness‐of‐fit test. The proposed test statistic is an extension of an existing Kolmogorov–Smirnov‐type statistic and is the weighted average of the absolute differences between two estimated distribution functions in each response category. The paper establishes an optimal property for the maximum semiparametric likelihood estimator of the parameters in the I‐sample semiparametric selection bias model. It also presents a bootstrap procedure, some simulation results and an analysis of two real datasets.
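For readers unfamiliar with Kolmogorov–Smirnov-type statistics, here is a generic two-sample version with a permutation p-value; this is a simplified stand-in, not the paper's weighted multi-category statistic or its bootstrap procedure, and the samples and shift are invented for illustration.

```python
import numpy as np

def ks_stat(a, b):
    """Two-sample KS statistic: sup over pooled points of |F_a - F_b|."""
    pts = np.sort(np.concatenate([a, b]))
    fa = np.searchsorted(np.sort(a), pts, side="right") / a.size
    fb = np.searchsorted(np.sort(b), pts, side="right") / b.size
    return np.abs(fa - fb).max()

rng = np.random.default_rng(3)
a = rng.normal(0.0, 1.0, 100)
b = rng.normal(1.0, 1.0, 100)   # shifted, so the null should be rejected

d_obs = ks_stat(a, b)

# Permutation reference distribution: reassign the pooled observations
# to the two samples at random and recompute the statistic.
pooled = np.concatenate([a, b])
B = 500
count = 0
for _ in range(B):
    perm = rng.permutation(pooled)
    count += ks_stat(perm[:100], perm[100:]) >= d_obs
p_value = (count + 1) / (B + 1)   # add-one correction avoids p = 0
print(d_obs, p_value)
```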
6.
We discuss Bayesian analyses of traditional normal-mixture models for classification and discrimination. The development involves application of an iterative resampling approach to Monte Carlo inference, commonly called Gibbs sampling, and demonstrates routine application. We stress the benefits of exact analyses over traditional classification and discrimination techniques, including the ease with which such analyses may be performed in a quite general setting, with possibly several normal-mixture components having different covariance matrices, the computation of exact posterior classification probabilities for observed data and for future cases to be classified, and posterior distributions for these probabilities that allow for assessment of second-level uncertainties in classification.
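A minimal Gibbs sampler for a two-component normal mixture, in the spirit of the abstract (the data, priors, and fixed unit variances are simplifying assumptions for the sketch; the paper's setting allows several components with different covariance matrices).

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical data: two well-separated unit-variance normal components.
y = np.concatenate([rng.normal(-2.0, 1.0, 150), rng.normal(3.0, 1.0, 150)])
n, K = y.size, 2

# Priors: mu_k ~ N(0, 100), mixture weights ~ Dirichlet(1, 1);
# component variances fixed at 1 to keep the sketch short.
mu = np.array([y.min(), y.max()])
pi = np.full(K, 0.5)
draws = []
for it in range(600):
    # 1. Sample allocations z_i from their exact conditional probabilities.
    logp = np.log(pi) - 0.5 * (y[:, None] - mu) ** 2
    prob = np.exp(logp - logp.max(axis=1, keepdims=True))
    prob /= prob.sum(axis=1, keepdims=True)
    z = (rng.random(n)[:, None] > prob.cumsum(axis=1)).sum(axis=1)
    # 2. Sample component means and weights from their conditionals.
    for k in range(K):
        nk = np.sum(z == k)
        prec = nk + 1.0 / 100.0           # posterior precision of mu_k
        mu[k] = rng.normal(y[z == k].sum() / prec, 1.0 / np.sqrt(prec))
    pi = rng.dirichlet(1.0 + np.bincount(z, minlength=K))
    if it >= 200:                          # discard burn-in
        draws.append(np.sort(mu.copy()))

post_mu = np.mean(draws, axis=0)   # posterior means, sorted to fix labels
print(post_mu)
```

Averaging the allocation probabilities across iterations would give the exact posterior classification probabilities the abstract emphasizes, including for future cases.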
7.
Stephen Walker 《Statistics and Computing》1995,5(4):311-315
Laud et al. (1993) describe a method for random variate generation from D-distributions. In this paper an alternative method using substitution sampling is given. An algorithm for the random variate generation from SD-distributions is also given.
8.
We describe an image reconstruction problem and the computational difficulties arising in determining the maximum a posteriori (MAP) estimate. Two algorithms for tackling the problem, iterated conditional modes (ICM) and simulated annealing, are usually applied pixel by pixel. The performance of this strategy can be poor, particularly for heavily degraded images, and as a potential improvement Jubb and Jennison (1991) suggest the cascade algorithm in which ICM is initially applied to coarser images formed by blocking squares of pixels. In this paper we attempt to resolve certain criticisms of cascade and present a version of the algorithm extended in definition and implementation. As an illustration we apply our new method to a synthetic aperture radar (SAR) image. We also carry out a study of simulated annealing, with and without cascade, applied to a more tractable minimization problem from which we gain insight into the properties of cascade algorithms. 
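The pixel-by-pixel ICM step the abstract refers to can be sketched on a toy binary image with an Ising prior (the image, noise level, and smoothing strength β are assumptions for illustration; the paper's cascade refinement and SAR application are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(5)

# True binary image: a white square on a black background.
truth = np.zeros((32, 32), dtype=int)
truth[8:24, 8:24] = 1
flip = rng.random(truth.shape) < 0.2          # 20% of pixels mislabelled
noisy = np.where(flip, 1 - truth, truth)

# Pixel-by-pixel ICM with an Ising prior: each pixel takes the label
# maximizing log-likelihood + beta * (number of agreeing 4-neighbours).
log_lik = np.log(np.array([0.2, 0.8]))        # P(observed | label): flip/keep

def icm(img, beta=1.5, sweeps=5):
    x = img.copy()
    h, w = x.shape
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                score = np.zeros(2)
                for lab in (0, 1):
                    agree = sum(
                        x[i2, j2] == lab
                        for i2, j2 in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                        if 0 <= i2 < h and 0 <= j2 < w
                    )
                    score[lab] = log_lik[int(lab == img[i, j])] + beta * agree
                x[i, j] = int(score[1] > score[0])
    return x

restored = icm(noisy)
err_before = np.mean(noisy != truth)
err_after = np.mean(restored != truth)
print(err_before, err_after)
```

On heavily degraded images this greedy local update can stall in poor local modes, which is the weakness that motivates applying ICM first on coarser, blocked images (the cascade idea).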
9.
The well-known chi-squared goodness-of-fit test for a multinomial distribution is generally biased when the observations are subject to misclassification. In Pardo and Zografos (2000) the problem was considered using a double sampling scheme and φ-divergence test statistics. A new problem appears if the null hypothesis is not simple, because estimators must then be given for the unknown parameters. In this paper the minimum φ-divergence estimators are considered and some of their properties are established. The proposed φ-divergence test statistics are obtained by calculating φ-divergences between probability density functions and by replacing parameters by their minimum φ-divergence estimators in the derived expressions. Asymptotic distributions of the new test statistics are also obtained. The testing procedure is illustrated with an example.
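To make the φ-divergence family concrete, here is the Cressie–Read power-divergence subfamily, which contains Pearson's chi-square (λ = 1) and the likelihood-ratio statistic (λ → 0) as special cases; the counts and null probabilities below are invented for illustration, and the double-sampling misclassification correction is not shown.

```python
import numpy as np

def power_divergence(obs, p0, lam):
    """Cressie-Read power-divergence statistic, a one-parameter subfamily
    of the phi-divergence statistics: lam = 1 gives Pearson's chi-square,
    lam -> 0 gives the likelihood-ratio statistic."""
    obs = np.asarray(obs, dtype=float)
    n = obs.sum()
    phat = obs / n
    if lam == 0:
        return 2.0 * np.sum(obs * np.log(phat / p0))
    return (2.0 / (lam * (lam + 1))) * np.sum(obs * ((phat / p0) ** lam - 1))

obs = np.array([30, 50, 20])
p0 = np.array([1/3, 1/3, 1/3])
pearson = np.sum((obs - obs.sum() * p0) ** 2 / (obs.sum() * p0))
print(power_divergence(obs, p0, 1.0), pearson,
      power_divergence(obs, p0, 0.0))
```

Under the simple null, every member of the family shares the same chi-square limiting distribution; under composite nulls with misclassification, plugging in minimum φ-divergence estimators (as the paper does) is what restores a tractable asymptotic distribution.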
10.
论科技期刊的品牌资本 (On the Brand Capital of Scientific Journals)
杨丽君 《合肥工业大学学报(社会科学版)》2003,17(2):120-123
A scientific journal's brand capital is the key factor in its survival and development; brand capital embodies the unity of social and economic benefit. The return on brand capital is a slow but quite stable process. A sampling survey and analysis of variance are used to support these claims quantitatively.