Full-text access type
Paid full text | 1672 articles |
Free | 45 articles |
Free (domestic) | 18 articles |
Subject classification
Management science | 199 articles |
Ethnology | 1 article |
Demography | 23 articles |
Book series and collected works | 19 articles |
Theory and methodology | 78 articles |
General | 227 articles |
Sociology | 10 articles |
Statistics | 1178 articles |
Publication year
2023 | 6 articles |
2022 | 10 articles |
2021 | 10 articles |
2020 | 38 articles |
2019 | 53 articles |
2018 | 60 articles |
2017 | 103 articles |
2016 | 41 articles |
2015 | 49 articles |
2014 | 43 articles |
2013 | 446 articles |
2012 | 135 articles |
2011 | 44 articles |
2010 | 54 articles |
2009 | 49 articles |
2008 | 59 articles |
2007 | 60 articles |
2006 | 61 articles |
2005 | 40 articles |
2004 | 32 articles |
2003 | 37 articles |
2002 | 38 articles |
2001 | 30 articles |
2000 | 24 articles |
1999 | 22 articles |
1998 | 15 articles |
1997 | 17 articles |
1996 | 13 articles |
1995 | 17 articles |
1994 | 20 articles |
1993 | 15 articles |
1992 | 20 articles |
1991 | 11 articles |
1990 | 5 articles |
1989 | 6 articles |
1988 | 10 articles |
1987 | 7 articles |
1986 | 5 articles |
1985 | 6 articles |
1984 | 5 articles |
1983 | 4 articles |
1982 | 3 articles |
1981 | 4 articles |
1980 | 3 articles |
1979 | 1 article |
1978 | 1 article |
1977 | 2 articles |
1975 | 1 article |
Sort by: 1735 results found (search time: 15 ms)
1.
On Optimality of Bayesian Wavelet Estimators    Cited by: 2 (self-citations: 0; other citations: 2)
Felix Abramovich, Umberto Amato, Claudia Angelini. Scandinavian Journal of Statistics, 2004, 31(2): 217-234
Abstract. We investigate the asymptotic optimality of several Bayesian wavelet estimators, namely the posterior mean, the posterior median and the Bayes Factor, where the prior imposed on the wavelet coefficients is a mixture of a point mass at zero and a Gaussian density. We show that, in terms of the mean squared error, for properly chosen hyperparameters of the prior, all three resulting Bayesian wavelet estimators achieve optimal minimax rates within any prescribed Besov space for p ≥ 2. For 1 ≤ p < 2, the Bayes Factor is still optimal for (2s + 2)/(2s + 1) ≤ p < 2 and always outperforms the posterior mean and the posterior median, which can achieve only the best possible rates for linear estimators in this case.
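The posterior-mean estimator under such a spike-and-slab prior can be sketched for a single wavelet coefficient as follows (a minimal illustration, assuming a known noise level `sigma`; the hyperparameter values `tau` and `pi0` are placeholders, not values from the paper):

```python
from math import exp, pi as PI, sqrt

def norm_pdf(x, var):
    """Density of N(0, var) at x."""
    return exp(-x * x / (2.0 * var)) / sqrt(2.0 * PI * var)

def posterior_mean(d, sigma=1.0, tau=2.0, pi0=0.5):
    """Posterior mean of a wavelet coefficient theta given d ~ N(theta, sigma^2),
    under the prior theta ~ pi0 * delta_0 + (1 - pi0) * N(0, tau^2)."""
    # Marginal densities of d under the spike and the slab components.
    m_spike = norm_pdf(d, sigma ** 2)
    m_slab = norm_pdf(d, sigma ** 2 + tau ** 2)
    # Posterior probability that theta is nonzero.
    w = (1 - pi0) * m_slab / ((1 - pi0) * m_slab + pi0 * m_spike)
    # Conditional on the slab, theta | d has the usual normal shrinkage mean.
    shrink = tau ** 2 / (sigma ** 2 + tau ** 2)
    return w * shrink * d

print(posterior_mean(0.0))  # an observation at zero is left at zero
print(posterior_mean(5.0))  # a large observation is shrunk only mildly
```

Small observations are pulled strongly toward zero (the spike dominates), while large ones are only mildly shrunk; the posterior median and Bayes Factor rules differ in how they threshold, not in this basic mixture computation.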
2.
Owing to the extreme quantiles involved, standard control charts are very sensitive to the effects of parameter estimation and non-normality. More general parametric charts have been devised to deal with the latter complication, and corrections have been derived to compensate for the estimation step, both under normal and parametric models. The resulting procedures offer a satisfactory solution over a broad range of underlying distributions. However, situations do occur where even such a large model is inadequate and nothing remains but to consider non-parametric charts. In principle, these form ideal solutions, but the problem is that huge sample sizes are required for the estimation step. Otherwise the resulting stochastic error is so large that the chart is very unstable, a disadvantage that seems to outweigh the advantage of avoiding the model error of the parametric case. Here we analyse under what conditions non-parametric charts actually become feasible alternatives to their parametric counterparts. In particular, corrected versions are suggested for which a possible change point is reached at sample sizes that are markedly less huge (but still larger than the customary range). These corrections serve to control the in-control behaviour (markedly wrong outcomes of the estimates occur only sufficiently rarely). The price for this protection is clearly some loss of detection power when out of control. A change point comes in view as soon as this loss can be made sufficiently small.
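The estimation step such charts rely on can be sketched as follows: the control limit is an extreme empirical quantile (an order statistic) of a large in-control Phase I sample. This is an illustrative toy, not the corrected procedure of the paper; the sample sizes and quantile order are arbitrary choices:

```python
import random

def empirical_ucl(phase1, p=0.999):
    """Non-parametric upper control limit: the empirical p-quantile of the
    in-control (Phase I) sample, taken as an order statistic."""
    xs = sorted(phase1)
    k = min(len(xs) - 1, max(0, int(p * len(xs))))  # index of the order statistic
    return xs[k]

random.seed(1)
phase1 = [random.gauss(0.0, 1.0) for _ in range(10000)]
ucl = empirical_ucl(phase1)
print(ucl)  # roughly the 0.999 quantile of N(0, 1), i.e. near 3.1
exceed = sum(x > ucl for x in (random.gauss(0.0, 1.0) for _ in range(100000)))
print(exceed / 100000)  # observed in-control false-alarm rate
```

With only a few hundred Phase I observations the same order statistic fluctuates wildly, which is exactly the instability the abstract describes; the corrections it proposes inflate the estimated limit so that too-small limits become sufficiently rare.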
3.
We discuss Bayesian analyses of traditional normal-mixture models for classification and discrimination. The development involves application of an iterative resampling approach to Monte Carlo inference, commonly called Gibbs sampling, and demonstrates its routine application. We stress the benefits of exact analyses over traditional classification and discrimination techniques, including the ease with which such analyses may be performed in a quite general setting, with possibly several normal-mixture components having different covariance matrices; the computation of exact posterior classification probabilities for observed data and for future cases to be classified; and posterior distributions for these probabilities that allow for assessment of second-level uncertainties in classification.
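The Gibbs-sampling scheme can be sketched for the simplest case, a two-component normal mixture with known unit variances and equal weights (a deliberate simplification of the general setting in the abstract, which allows several components with different covariance matrices):

```python
import math
import random

def gibbs_mixture(data, iters=2000, burn=500, seed=0):
    """Gibbs sampler for a two-component normal mixture with known unit
    variances and equal weights: alternately samples component labels z
    given the means, and means mu given the labels (flat prior on mu)."""
    rng = random.Random(seed)
    mu = [min(data), max(data)]  # crude but order-preserving initialisation
    sums = [0.0, 0.0]
    kept = 0
    for it in range(iters):
        counts = [0, 0]
        totals = [0.0, 0.0]
        for x in data:
            # Sample the label of x from its exact conditional posterior.
            w0 = math.exp(-0.5 * (x - mu[0]) ** 2)
            w1 = math.exp(-0.5 * (x - mu[1]) ** 2)
            z = 1 if rng.random() < w1 / (w0 + w1) else 0
            counts[z] += 1
            totals[z] += x
        for k in range(2):
            # mu_k | labels, data ~ N(sample mean, 1/n_k) under a flat prior.
            n = max(counts[k], 1)
            mu[k] = rng.gauss(totals[k] / n if counts[k] else 0.0,
                              1.0 / math.sqrt(n))
        if it >= burn:
            sums[0] += mu[0]
            sums[1] += mu[1]
            kept += 1
    return sums[0] / kept, sums[1] / kept

rng = random.Random(42)
data = [rng.gauss(-3.0, 1.0) for _ in range(100)] + \
       [rng.gauss(3.0, 1.0) for _ in range(100)]
m0, m1 = gibbs_mixture(data)
print(m0, m1)  # posterior means of the two component locations
```

The posterior classification probability for any new case is then the Monte Carlo average of the per-iteration label probabilities, which is what gives the "second-level" uncertainty assessment the abstract mentions.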
4.
Summary. Weak disintegrations are investigated from various points of view. Kolmogorov's definition of conditional probability is critically analysed, and it is noted how the notion of disintegrability plays a role in connecting Kolmogorov's definition with the one given in line with de Finetti's coherence principle. Conditions are given, on the domain of a prevision, implying the equivalence between weak disintegrability and conglomerability. Moreover, weak disintegrations are characterized in terms of coherence, in de Finetti's sense, of a suitable function. This fact enables us to give an interpretation of weak disintegrability as a form of “preservation of coherence”. The previous results are also applied to a hypothetical inferential problem. In particular, an inference is shown to be coherent, in the sense of Heath and Sudderth, if and only if a suitable function is coherent, in de Finetti's sense.
Research partially supported by: M.U.R.S.T. 40% “Problemi di inferenza pura”.
5.
Assessing accuracy of a continuous screening test in the presence of verification bias    Cited by: 1 (self-citations: 1; other citations: 0)
Todd A. Alonzo, Margaret Sullivan Pepe. Journal of the Royal Statistical Society, Series C (Applied Statistics), 2005, 54(1): 173-190
Summary. In studies to assess the accuracy of a screening test, definitive disease assessment is often too invasive or expensive to be ascertained on all study subjects. Although it may be more ethical or cost-effective to ascertain the true disease status at a higher rate in study subjects for whom the screening test or additional information is suggestive of disease, estimates of accuracy can be biased in a study with such a design. This bias is known as verification bias. Verification bias correction methods that accommodate screening tests with binary or ordinal responses have been developed; however, no verification bias correction methods exist for tests with continuous results. We propose and compare imputation and reweighting bias-corrected estimators of true and false positive rates, receiver operating characteristic curves and the area under the receiver operating characteristic curve for continuous tests. Distribution theory and simulation studies are used to compare the proposed estimators with respect to bias, relative efficiency and robustness to model misspecification. The proposed bias-corrected estimators are applied to data from a study of screening tests for neonatal hearing loss.
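The reweighting idea can be sketched as an inverse-probability-weighted estimator: each verified subject is weighted by the inverse of its estimated verification probability, so that the verified subsample stands in for the full cohort. This toy version (simulated data, verification depending only on whether the test is positive) is not the paper's estimator, only an illustration of the principle:

```python
import random

def ipw_rates(scores, verified, disease, cutoff):
    """Inverse-probability-weighted true/false positive rates at a cutoff.
    Verification probabilities are estimated empirically within the
    test-positive and test-negative groups; each verified subject is then
    weighted by 1 / P(verified | group)."""
    groups = {}
    for s, v in zip(scores, verified):
        g = s >= cutoff
        n, k = groups.get(g, (0, 0))
        groups[g] = (n + 1, k + v)
    p_verify = {g: k / n for g, (n, k) in groups.items()}
    tp = fp = pos = neg = 0.0
    for s, v, d in zip(scores, verified, disease):
        if not v:
            continue  # disease status unknown for unverified subjects
        w = 1.0 / p_verify[s >= cutoff]
        if d:
            pos += w
            if s >= cutoff:
                tp += w
        else:
            neg += w
            if s >= cutoff:
                fp += w
    return tp / pos, fp / neg

# Toy cohort: diseased scores are shifted upward; test-positives are
# verified with probability 0.9, test-negatives with probability 0.3.
rng = random.Random(7)
scores, verified, disease = [], [], []
for _ in range(20000):
    d = rng.random() < 0.3
    s = rng.gauss(1.5 if d else 0.0, 1.0)
    v = rng.random() < (0.9 if s >= 1.0 else 0.3)
    scores.append(s); verified.append(v); disease.append(d)
tpr, fpr = ipw_rates(scores, verified, disease, cutoff=1.0)
print(tpr, fpr)
```

The naive estimator that simply drops unverified subjects would overstate both rates here, because test-negatives are under-verified; the weights undo that selection.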
6.
Marco Bee. Statistical Methods and Applications, 2005, 14(1): 127-141
In this article we provide a rigorous treatment of one of the central statistical issues of credit risk management. Given K − 1 rating categories, the rating of a corporate bond over a certain horizon may either stay the same or change to one of the remaining K − 2 categories; in addition, it is usually the case that the rating of some bonds is withdrawn during the time interval considered in the analysis. When estimating transition probabilities, we thus have to consider a K-th category, called withdrawal, which contains (partially) missing data. We show how maximum likelihood estimation can be performed in this setup; whereas in discrete time our solution gives rigorous support to a solution often used in applications, in continuous time the maximum likelihood estimator of the transition matrix, computed by means of the EM algorithm, represents a significant improvement over existing methods.
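The discrete-time solution "often used in applications" that the abstract refers to treats withdrawn bonds as uninformative about the destination rating and renormalises each row of the count matrix over the observed categories. A minimal sketch (the count matrix below is hypothetical):

```python
def transition_mle(counts):
    """Discrete-time MLE of rating transition probabilities when the last
    column of the count matrix is the withdrawal category: withdrawn
    observations are dropped from the row and the remaining counts are
    renormalised."""
    probs = []
    for row in counts:
        observed = row[:-1]  # drop the withdrawal column
        total = sum(observed)
        probs.append([c / total for c in observed])
    return probs

# Hypothetical one-year counts for ratings A, B, C plus a withdrawal column.
counts = [
    [90, 6, 1, 3],   # bonds starting the year rated A
    [5, 80, 7, 8],   # bonds starting the year rated B
    [1, 9, 85, 5],   # bonds starting the year rated C
]
P = transition_mle(counts)
for row in P:
    print([round(p, 3) for p in row])  # each row sums to 1
```

The continuous-time EM estimator the article develops goes further, using the timing of withdrawals rather than discarding them, which is where the improvement over existing methods comes from.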
7.
Kepher Henry Makambi. Statistical Methods and Applications, 2002, 11(1): 127-138
The standard hypothesis testing procedure in meta-analysis (or multi-center clinical trials) in the absence of treatment-by-center interaction relies on approximating the null distribution of the standard test statistic by a standard normal distribution. For relatively small sample sizes, the standard procedure has been shown by various authors to have poor control of the type I error probability, leading to overly liberal decisions. In this article, two test procedures are proposed which rely on the t-distribution as the reference distribution. A simulation study indicates that the proposed procedures attain significance levels closer to the nominal level than the standard procedure.
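The basic setup can be sketched as follows: a fixed-effect (inverse-variance weighted) combined estimate across centers, with the test statistic referred to a t-distribution with k − 1 degrees of freedom rather than the standard normal. This is an illustrative variant of the idea, not the paper's two exact procedures, and the numbers are made up:

```python
import math

def combined_test(effects, variances):
    """Inverse-variance weighted combined effect across k centers, with a
    test statistic to be referred to t_{k-1} instead of N(0, 1)."""
    weights = [1.0 / v for v in variances]
    wsum = sum(weights)
    est = sum(w * e for w, e in zip(weights, effects)) / wsum
    se = math.sqrt(1.0 / wsum)  # standard error of the combined estimate
    t_stat = est / se
    df = len(effects) - 1
    return est, t_stat, df

effects = [0.42, 0.31, 0.55, 0.10, 0.38]    # per-center effect estimates
variances = [0.04, 0.05, 0.06, 0.04, 0.05]  # their estimated variances
est, t_stat, df = combined_test(effects, variances)
print(est, t_stat, df)
```

Because the t critical value with few centers is larger than the normal one (e.g. about 2.78 vs 1.96 at the two-sided 5% level with df = 4), the t-referenced test rejects less readily, which is precisely how it counteracts the liberal behaviour of the normal approximation.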
8.
A structural pattern recognition approach is used to construct combination sequences of multi-order moving-average patterns. By transforming and analysing historical stock market data, the profit and loss probabilities associated with each pattern combination sequence are examined, and a new securities investment analysis method is established: the moving-average pattern combination forecasting method. Its advantages are that it can partially compensate for the lag and uncertainty inherent in traditional technical analysis, that the relevant parameters and indicators can be adjusted promptly as the market changes, that it effectively improves the success rate of forecasts, and that the recognition is easy to implement on a computer.
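The core of such a method can be sketched as follows: encode, at each day, the ordering pattern of several moving averages of different orders, and tally how often the price is higher a few days later, giving an empirical win rate per pattern. The window lengths, horizon and price series below are illustrative choices, not parameters from the article:

```python
def moving_average(prices, n):
    """Simple n-period moving average (None until enough data is available)."""
    out = []
    for i in range(len(prices)):
        if i + 1 < n:
            out.append(None)
        else:
            out.append(sum(prices[i - n + 1:i + 1]) / n)
    return out

def pattern_stats(prices, windows=(3, 5, 8), horizon=3):
    """For each day, encode the rank order of the moving averages (shortest
    window first) and tally whether the price rose over the next `horizon`
    days, yielding an empirical win rate and count per pattern."""
    mas = [moving_average(prices, n) for n in windows]
    stats = {}
    for i in range(len(prices) - horizon):
        vals = [m[i] for m in mas]
        if any(v is None for v in vals):
            continue
        # Pattern = indices of the moving averages sorted from highest to lowest.
        pattern = tuple(sorted(range(len(vals)), key=lambda k: -vals[k]))
        wins, total = stats.get(pattern, (0, 0))
        win = prices[i + horizon] > prices[i]
        stats[pattern] = (wins + win, total + 1)
    return {p: (w / t, t) for p, (w, t) in stats.items()}

prices = [100, 101, 103, 102, 104, 107, 106, 108, 110, 109,
          111, 113, 112, 114, 116, 115, 117, 119, 118, 120]
stats = pattern_stats(prices)
for pattern, (rate, n) in sorted(stats.items()):
    print(pattern, round(rate, 2), n)
```

In the method described, these per-pattern profit/loss frequencies, estimated from long histories, are what drive the forecast; the window and horizon parameters are then re-tuned as market conditions change.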
9.
Gérard Letac. Revue canadienne de statistique, 1991, 19(2): 229-232
This note exhibits two independent integer-valued random variables, X1 and X2, such that neither X1 nor X2 has a generalized Poisson distribution but X1 + X2 does. This contradicts statements made by Professor Consul in his recent book.
10.
Eve Bofinger. Australian & New Zealand Journal of Statistics, 1994, 36(1): 59-66
Various authors, given k location parameters, have considered lower confidence bounds on the (standardized) differences between the largest and each of the other k − 1 parameters. They have then used these bounds to put lower confidence bounds on the probability of correct selection (PCS) in the same experiment as was used for finding the lower bounds on the differences. It is pointed out that this is an inappropriate inference procedure. Moreover, if the PCS refers to some later experiment, it is shown that if a non-trivial confidence bound is possible then it is already possible to conclude, with greater confidence, that correct selection has occurred in the first experiment. The short answer to the question in the title is therefore ‘No’, but this should be qualified in the case of a Bayesian analysis.