1.
The problem of testing the equality of two multivariate normal covariance matrices is considered. Assuming that the incomplete data are of monotone pattern, a quantity similar to the likelihood ratio test statistic is proposed. A satisfactory approximation to the distribution of this quantity is derived, and hypothesis testing based on the approximate distribution is outlined. The merits of the test are investigated using Monte Carlo simulation, which indicates that the test is very satisfactory even for moderately small samples. The proposed methods are illustrated using an example.
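The paper's statistic for monotone incomplete data is not reproduced here; as a point of reference, the following is a minimal sketch of the classical complete-data likelihood ratio statistic for H0: Σ1 = Σ2 with two multivariate normal samples. The function name and the chi-squared reference distribution are illustrative assumptions, not the paper's approximation.

```python
import numpy as np
from scipy import stats

def lrt_equal_covariances(x, y):
    """Classical -2 log likelihood-ratio statistic for H0: Sigma_x = Sigma_y,
    assuming two complete multivariate normal samples (rows = observations)."""
    n1, p = x.shape
    n2, _ = y.shape
    s1 = np.cov(x, rowvar=False, bias=True)   # MLE covariance estimates (divide by n)
    s2 = np.cov(y, rowvar=False, bias=True)
    pooled = (n1 * s1 + n2 * s2) / (n1 + n2)  # MLE under H0
    stat = (n1 + n2) * np.log(np.linalg.det(pooled)) \
        - n1 * np.log(np.linalg.det(s1)) - n2 * np.log(np.linalg.det(s2))
    df = p * (p + 1) / 2                      # constrained parameters under H0
    pval = stats.chi2.sf(stat, df)            # large-sample chi-squared reference
    return stat, pval

rng = np.random.default_rng(0)
x = rng.multivariate_normal(np.zeros(3), np.eye(3), size=200)
y = rng.multivariate_normal(np.zeros(3), np.eye(3), size=200)
stat, pval = lrt_equal_covariances(x, y)
```

For moderate samples the chi-squared approximation is known to be rough, which is precisely the kind of issue the paper's approximate distribution addresses.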
2.
This paper studies seller fraud within a signaling-game framework. Two seller types, fraudulent and honest, are assumed to be present in the auction, and fraud may occur with some probability in a second-price auction: the fraudulent seller poses as a bidder and submits a bid just below the highest bid, thereby capturing extra revenue. Each seller type chooses an auction format, first-price or second-price, according to its own utility. Bidders treat the seller's choice as a signal, update their beliefs about the seller's type, and then form their bidding strategies, so buyer and seller signals interact. To capture the correlation among bidders' valuations, the model draws on the affiliated-values principle. Analysis of the model yields the buyers' and sellers' strategies under different scenarios and characterizes the relationships among bidders' bids, bidders' beliefs, and the seller's revenue. Bayes' rule is used to predict both parties' strategy choices, which is of greater practical value than a pure probability-distribution approach.
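The fraud mechanism described above, a shill bid placed just below the top genuine bid, can be illustrated with a minimal revenue comparison. This is only a sketch under independent private values; the paper's signaling game, seller types, and affiliated values are not modeled, and the function name is an illustrative assumption.

```python
import random

def second_price_revenue(bids, shill=False):
    """Seller revenue in a second-price auction.
    With a fraudulent seller (shill=True), the seller poses as a bidder and
    submits a fake bid just below the highest genuine bid, so the winner
    pays approximately their own bid rather than the second-highest bid."""
    b = sorted(bids, reverse=True)
    if shill:
        eps = 1e-6
        return b[0] - eps   # winner pays the shill bid, just under the top bid
    return b[1]             # honest seller: winner pays the second-highest bid

random.seed(1)
bids = [random.uniform(0, 1) for _ in range(5)]
honest = second_price_revenue(bids)
fraud = second_price_revenue(bids, shill=True)
```

The gap between `fraud` and `honest` is the extra revenue from fraud; anticipating it, rational bidders shade their bids, which is what drives the belief-updating in the paper's model.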
3.
Simulation results are reported on methods that allow both within-group and between-group heteroscedasticity when testing the hypothesis that independent groups have identical regression parameters. The methods are based on a combination of extant techniques, but their finite-sample properties have not been studied. Included are results on the impact of removing all leverage points or just bad leverage points. The method used to identify leverage points can be important and can improve control over the Type I error probability. Results are illustrated using data from the Well Elderly II study.
4.
Computing maximum likelihood estimates from type II doubly censored exponential data
Arturo J. Fernández José I. Bravo Íñigo De Fuentes 《Statistical Methods and Applications》2002,11(2):187-200
It is well-known that, under Type II double censoring, the maximum likelihood (ML) estimators of the location and scale parameters, θ and δ, of a two-parameter exponential distribution are linear functions of the order statistics. In contrast, when θ is known, the ML estimator of δ does not admit a closed-form expression. It is shown, however, that the ML estimator of the scale parameter exists and is unique. Moreover, it has good large-sample properties. In addition, sharp lower and upper bounds for this estimator are provided, which can serve as starting points for iterative interpolation methods such as regula falsi. Explicit expressions for the expected Fisher information and the Cramér-Rao lower bound are also derived. In the Bayesian context, assuming an inverted gamma prior on δ, the uniqueness, boundedness and asymptotics of the highest posterior density estimator of δ can be deduced in a similar way. Finally, an illustrative example is included.
5.
Merging information for semiparametric density estimation
Konstantinos Fokianos 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2004,66(4):941-958
Summary. The density ratio model specifies that the likelihood ratio of m − 1 probability density functions with respect to the m-th is of known parametric form, without reference to any parametric model. We study the semiparametric inference problem related to the density ratio model by appealing to the methodology of empirical likelihood. The combined data from all the samples lead to more efficient kernel density estimators for the unknown distributions. We adopt variants of well-established techniques to choose the smoothing parameter for the proposed density estimators.
6.
Peter Hall Qiwei Yao 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2003,65(2):425-442
Summary. We develop a general methodology for tilting time series data. Attention is focused on a large class of regression problems, where errors are expressed through autoregressive processes. The class has a range of important applications and in the context of our work may be used to illustrate the application of tilting methods to interval estimation in regression, robust statistical inference and estimation subject to constraints. The method can be viewed as 'empirical likelihood with nuisance parameters'.
7.
This paper argues that Fisher's paradox can be explained away in terms of estimator choice. We analyse by means of Monte Carlo experiments the small sample properties of a large set of estimators (including virtually all available single-equation estimators), and compute the critical values based on the empirical distributions of the t-statistics, for a variety of Data Generation Processes (DGPs), allowing for structural breaks, ARCH effects etc. We show that precisely the estimators most commonly used in the literature, namely OLS, Dynamic OLS (DOLS) and non-prewhitened FMLS, have the worst performance in small samples, and produce rejections of the Fisher hypothesis. If one employs the estimators with the most desirable properties (i.e., the smallest downward bias and the minimum shift in the distribution of the associated t-statistics), or if one uses the empirical critical values, the evidence based on US data is strongly supportive of the Fisher relation, consistently with many theoretical models.
8.
The well-known chi-squared goodness-of-fit test for a multinomial distribution is generally biased when the observations are subject to misclassification. In Pardo and Zografos (2000) the problem was considered using a double sampling scheme and φ-divergence test statistics. A new problem appears if the null hypothesis is not simple, because estimators must be given for the unknown parameters. In this paper the minimum φ-divergence estimators are considered and some of their properties are established. The proposed φ-divergence test statistics are obtained by calculating φ-divergences between probability density functions and by replacing parameters by their minimum φ-divergence estimators in the derived expressions. Asymptotic distributions of the new test statistics are also obtained. The testing procedure is illustrated with an example.
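The paper's double-sampling correction and minimum φ-divergence machinery are not reproduced here, but the baseline it generalizes can be sketched: the Pearson chi-squared statistic for a simple multinomial null, which arises from the φ-divergence family for a particular choice of φ. The counts and null probabilities below are made-up illustrative values.

```python
import numpy as np
from scipy import stats

# Pearson chi-squared goodness-of-fit test for a simple multinomial null.
# (It is a member of the phi-divergence family; the paper's correction for
# misclassified observations under double sampling is not reproduced here.)
observed = np.array([18, 55, 27])        # hypothetical observed counts
p0 = np.array([0.2, 0.5, 0.3])           # simple null hypothesis probabilities
n = observed.sum()
expected = n * p0
chi2_stat = ((observed - expected) ** 2 / expected).sum()
pval = stats.chi2.sf(chi2_stat, df=len(p0) - 1)
```

When observations are misclassified, `observed` no longer estimates the true cell probabilities, which is why the naive statistic above is biased and a double sampling scheme is needed.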
9.
Stuart Barber Guy P. Nason 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2004,66(4):927-939
Summary. Wavelet shrinkage is an effective nonparametric regression technique, especially when the underlying curve has irregular features such as spikes or discontinuities. The basic idea is simple: take the discrete wavelet transform of data consisting of a signal corrupted by noise; shrink or remove the wavelet coefficients to remove the noise; then invert the discrete wavelet transform to form an estimate of the true underlying curve. Various researchers have proposed increasingly sophisticated methods of doing this by using real-valued wavelets. Complex-valued wavelets exist but are rarely used. We propose two new complex-valued wavelet shrinkage techniques: one based on multiwavelet style shrinkage and the other using Bayesian methods. Extensive simulations show that our methods almost always give significantly more accurate estimates than methods based on real-valued wavelets. Further, our multiwavelet style shrinkage method is both simpler and dramatically faster than its competitors. To understand the excellent performance of this method we present a new risk bound on its hard thresholded coefficients.
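The transform-threshold-invert recipe stated in the abstract can be sketched with a one-level real-valued Haar transform. This is only the simplest possible stand-in: the paper's complex-valued wavelets, multiwavelet-style shrinkage, and Bayesian variants are not shown, and the function name and threshold value are illustrative choices.

```python
import numpy as np

def haar_shrink(y, thresh):
    """One-level Haar wavelet shrinkage: transform the noisy data,
    hard-threshold the detail coefficients, then invert the transform.
    y must have even length."""
    a = (y[0::2] + y[1::2]) / np.sqrt(2)       # approximation coefficients
    d = (y[0::2] - y[1::2]) / np.sqrt(2)       # detail coefficients
    d = np.where(np.abs(d) > thresh, d, 0.0)   # hard thresholding step
    out = np.empty_like(y)
    out[0::2] = (a + d) / np.sqrt(2)           # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 128)
signal = np.sin(2 * np.pi * t)
noisy = signal + 0.3 * rng.standard_normal(128)
denoised = haar_shrink(noisy, thresh=0.5)
```

With `thresh=0.0` the round trip reproduces the input exactly, which confirms the transform pair; the denoising effect comes entirely from the thresholding step in between.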
10.
D. R. Cox Man Yu Wong 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2004,66(2):395-400
Summary. Given a large number of test statistics, a small proportion of which represent departures from the relevant null hypothesis, a simple rule is given for choosing those statistics that are indicative of departure. It is based on fitting by moments a mixture model to the set of test statistics and then deriving an estimated likelihood ratio. Simulation suggests that the procedure has good properties when the departure from an overall null hypothesis is not too small.
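The fit-by-moments idea can be sketched as follows for a two-component normal mixture with unit variances. The moment equations, the mixture form, the cutoff, and the function name are all illustrative assumptions; this is not the authors' exact rule.

```python
import numpy as np

def flag_departures(z, cutoff=1.0):
    """Fit (1 - p) N(0, 1) + p N(mu, 1) to the statistics z by moments,
    then flag statistics whose estimated likelihood ratio favours the
    non-null component. A rough sketch only; assumes the first sample
    moment is well away from zero."""
    m1, m2 = z.mean(), (z ** 2).mean()
    mu = (m2 - 1.0) / m1          # from E[z] = p*mu and E[z^2] = 1 + p*mu^2
    p = m1 / mu
    phi = lambda x: np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)
    lr = (p * phi(z - mu)) / ((1.0 - p) * phi(z))   # estimated likelihood ratio
    return lr > cutoff

rng = np.random.default_rng(1)
z = np.concatenate([rng.standard_normal(950), rng.normal(3.0, 1.0, 50)])
flags = flag_departures(z)
```

The flagged set should be dominated by the shifted component; as the abstract notes, the rule works well only when the departure (here, `mu`) is not too small, since the moment fit degrades as the components merge.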