Full-text access type
Paid full text | 410 articles |
Free | 14 articles |
Free (domestic) | 1 article |
Subject classification
Management | 10 articles |
Demography | 5 articles |
Collected works | 4 articles |
Theory and methodology | 3 articles |
General | 11 articles |
Sociology | 3 articles |
Statistics | 389 articles |
Publication year
2023 | 3 articles |
2022 | 2 articles |
2021 | 4 articles |
2020 | 7 articles |
2019 | 16 articles |
2018 | 16 articles |
2017 | 27 articles |
2016 | 11 articles |
2015 | 15 articles |
2014 | 10 articles |
2013 | 139 articles |
2012 | 30 articles |
2011 | 9 articles |
2010 | 13 articles |
2009 | 13 articles |
2008 | 10 articles |
2007 | 5 articles |
2006 | 11 articles |
2005 | 7 articles |
2004 | 11 articles |
2003 | 5 articles |
2002 | 4 articles |
2001 | 5 articles |
2000 | 7 articles |
1999 | 7 articles |
1998 | 6 articles |
1997 | 3 articles |
1996 | 2 articles |
1995 | 5 articles |
1994 | 7 articles |
1993 | 2 articles |
1992 | 3 articles |
1991 | 2 articles |
1990 | 1 article |
1989 | 3 articles |
1987 | 1 article |
1983 | 1 article |
1982 | 2 articles |
Sort order: 425 results found (search time: 15 ms)
1.
Stephen J. Ruberg, Frank E. Harrell Jr., Margaret Gamalo-Siebers, Lisa LaVange, J. Jack Lee, Karen Price. The American Statistician, 2019, 73(1): 319-327
Abstract: The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have an enormous impact on which treatments reach patients, when patients receive them, and how they are used. The statistical framework for supporting decisions in regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of "no treatment effect" are done routinely, and a p-value < 0.05 is often the determinant of what constitutes a "successful" trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence points blame at the deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared "unsuccessful" due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a "prior" for Phase 3 trials, so that evidence synthesized across trials can be used to compute probability statements that are valuable for understanding the magnitude of the treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
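The idea of borrowing evidence from earlier trials as a prior can be illustrated with a conjugate normal update. This is a minimal sketch, not the paper's method; the numbers (a hypothetical Phase 2 prior and Phase 3 estimate) are purely illustrative.

```python
from math import erf, sqrt

def normal_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + erf(x / sqrt(2)))

def posterior_effect(prior_mean, prior_sd, est, se):
    # Conjugate normal update: combine a prior built from earlier trials
    # with the Phase 3 treatment-effect estimate and its standard error.
    w_prior = 1 / prior_sd**2
    w_data = 1 / se**2
    post_var = 1 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * est)
    return post_mean, sqrt(post_var)

# Hypothetical numbers: Phase 2 suggests an effect of 0.3 (sd 0.2);
# Phase 3 estimates 0.25 with standard error 0.1.
mean, sd = posterior_effect(prior_mean=0.3, prior_sd=0.2, est=0.25, se=0.1)
# Posterior probability that the treatment effect exceeds zero.
prob_benefit = 1 - normal_cdf((0 - mean) / sd)
```

Such a posterior probability of benefit is the kind of direct probability statement the authors argue is more interpretable than a p-value.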
2.
On Optimality of Bayesian Wavelet Estimators (cited by: 2; self-citations: 0; other citations: 2)
Felix Abramovich, Umberto Amato, Claudia Angelini. Scandinavian Journal of Statistics, 2004, 31(2): 217-234
Abstract. We investigate the asymptotic optimality of several Bayesian wavelet estimators, namely the posterior mean, the posterior median and the Bayes factor, where the prior imposed on the wavelet coefficients is a mixture of a point mass at zero and a Gaussian density. We show that, in terms of mean squared error and for properly chosen hyperparameters of the prior, all three resulting Bayesian wavelet estimators achieve optimal minimax rates within any prescribed Besov space for p ≥ 2. For 1 ≤ p < 2, the Bayes factor is still optimal for (2s+2)/(2s+1) ≤ p < 2, and it always outperforms the posterior mean and the posterior median, which in this case can achieve only the best possible rates for linear estimators.
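The posterior mean under a spike-and-Gaussian mixture prior has a closed form for a single coefficient; a minimal sketch follows. The hyperparameter values (`sigma2`, `tau2`, `p_zero`) are illustrative, not the paper's calibrated choices.

```python
from math import exp, pi, sqrt

def norm_pdf(x, var):
    # Density of N(0, var) at x.
    return exp(-x * x / (2 * var)) / sqrt(2 * pi * var)

def posterior_mean(d, sigma2=1.0, tau2=4.0, p_zero=0.8):
    # Prior on the coefficient theta: point mass at 0 with probability
    # p_zero, N(0, tau2) otherwise; observed d ~ N(theta, sigma2).
    m_spike = norm_pdf(d, sigma2)          # marginal of d given theta = 0
    m_slab = norm_pdf(d, sigma2 + tau2)    # marginal of d under the slab
    w = (1 - p_zero) * m_slab / (p_zero * m_spike + (1 - p_zero) * m_slab)
    # Posterior mean: nonzero-probability times linear shrinkage of d.
    return w * (tau2 / (tau2 + sigma2)) * d
```

Small observations are shrunk strongly toward zero, while large observations are kept almost intact, which is the nonlinear thresholding-like behaviour these estimators exhibit.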
3.
Biao Zhang. Australian & New Zealand Journal of Statistics, 2004, 46(3): 407-423
A demonstrated equivalence between a categorical regression model based on case-control data and an I-sample semiparametric selection bias model leads to a new goodness-of-fit test. The proposed test statistic extends an existing Kolmogorov–Smirnov-type statistic and is the weighted average of the absolute differences between two estimated distribution functions in each response category. The paper establishes an optimality property for the maximum semiparametric likelihood estimator of the parameters in the I-sample semiparametric selection bias model. It also presents a bootstrap procedure, simulation results and an analysis of two real datasets.
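For intuition, a generic bootstrap of a plain Kolmogorov–Smirnov-type statistic can be sketched as below; this is not the paper's weighted, category-wise statistic, only the general resampling pattern.

```python
import random

def ecdf(sample, x):
    # Empirical distribution function of `sample` evaluated at x.
    return sum(v <= x for v in sample) / len(sample)

def ks_stat(a, b):
    # Supremum of |F_a - F_b| over the pooled sample points.
    grid = sorted(a + b)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in grid)

def bootstrap_pvalue(a, b, n_boot=500, seed=0):
    # Resample from the pooled data under the null of a common
    # distribution and compare each replicate to the observed statistic.
    rng = random.Random(seed)
    obs = ks_stat(a, b)
    pooled = a + b
    count = 0
    for _ in range(n_boot):
        ra = [rng.choice(pooled) for _ in a]
        rb = [rng.choice(pooled) for _ in b]
        if ks_stat(ra, rb) >= obs:
            count += 1
    return count / n_boot
```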
4.
We discuss Bayesian analyses of traditional normal-mixture models for classification and discrimination. The development involves the application of an iterative resampling approach to Monte Carlo inference, commonly called Gibbs sampling, and demonstrates routine application. We stress the benefits of exact analyses over traditional classification and discrimination techniques: the ease with which such analyses may be performed in a quite general setting, with possibly several normal-mixture components having different covariance matrices; the computation of exact posterior classification probabilities for observed data and for future cases to be classified; and posterior distributions for these probabilities that allow for the assessment of second-level uncertainties in classification.
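A toy Gibbs sampler for a two-component normal mixture conveys the flavour of the approach. Unlike the general setting of the paper, this sketch assumes known unit variances, equal mixing weights and a flat prior on the component means.

```python
import math
import random

def gibbs_mixture(data, n_iter=200, seed=1):
    # Minimal Gibbs sampler for a two-component normal mixture with
    # known unit variances, equal weights, and flat priors on the means.
    rng = random.Random(seed)
    mu = [min(data), max(data)]            # crude initialisation
    for _ in range(n_iter):
        # 1. Sample component labels given the current means.
        z = []
        for x in data:
            w0 = math.exp(-0.5 * (x - mu[0]) ** 2)
            w1 = math.exp(-0.5 * (x - mu[1]) ** 2)
            z.append(int(rng.random() < w1 / (w0 + w1)))
        # 2. Sample each mean given its currently assigned points.
        for k in (0, 1):
            pts = [x for x, zi in zip(data, z) if zi == k]
            if pts:
                m = sum(pts) / len(pts)
                mu[k] = rng.gauss(m, 1 / math.sqrt(len(pts)))
    return mu
```

The label probabilities computed in step 1 are exactly the posterior classification probabilities the abstract refers to, here conditioned on the current draw of the means.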
5.
Eve Bofinger. Australian & New Zealand Journal of Statistics, 1994, 36(1): 59-66
Various authors, given k location parameters, have considered lower confidence bounds on (standardized) differences between the largest and each of the other k − 1 parameters. They have then used these bounds to put lower confidence bounds on the probability of correct selection (PCS) in the same experiment (the one used for finding the lower bounds on differences). It is pointed out that this is an inappropriate inference procedure. Moreover, if the PCS refers to some later experiment, it is shown that if a non-trivial confidence bound is possible then it is already possible to conclude, with greater confidence, that correct selection has occurred in the first experiment. The short answer to the question in the title is therefore 'No', but this should be qualified in the case of a Bayesian analysis.
6.
It is often of interest to find the maximum or near-maxima among a set of vector-valued parameters in a statistical model; in disease mapping, for example, these correspond to relative-risk "hotspots" where public-health intervention may be needed. The general problem is one of estimating nonlinear functions of the ensemble of relative risks, but biased estimates result if posterior means are simply substituted into these nonlinear functions. The authors obtain better estimates of extrema from a new, weighted ranks squared error loss function. The derivation of these Bayes estimators assumes a hidden-Markov random-field model for the relative risks, and their behaviour is illustrated with real and simulated data.
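The bias from plugging posterior means into the maximum is easy to see numerically. The sketch below compares the plug-in value with the Monte Carlo posterior mean of the maximum; it does not implement the paper's weighted ranks squared error loss, only the phenomenon motivating it.

```python
def max_estimates(samples):
    # samples: one inner list of posterior draws per area's relative risk,
    # all of the same length (draw i aligned across areas).
    # Plug-in: maximum of the posterior means (biased low for the
    # ensemble maximum, by Jensen's inequality).
    plug_in = max(sum(s) / len(s) for s in samples)
    # Monte Carlo posterior mean of the maximum across areas.
    n = len(samples[0])
    post_mean_max = sum(max(s[i] for s in samples) for i in range(n)) / n
    return plug_in, post_mean_max
```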
7.
Catia Scricciolo. Scandinavian Journal of Statistics, 2007, 34(3): 626-642
Abstract. We consider the problem of estimating a compactly supported density, taking a Bayesian nonparametric approach. We define a Dirichlet mixture prior that, while selecting piecewise constant densities, has full support on the Hellinger metric space of all commonly dominated probability measures on a known bounded interval. We derive pointwise rates of convergence for the posterior expected density by studying the speed at which the posterior mass accumulates on shrinking Hellinger neighbourhoods of the sampling density. If the data are sampled from a strictly positive, α-Hölderian density, with α ∈ (0, 1], then the optimal convergence rate n^{−α/(2α+1)} is obtained up to a logarithmic factor. By smoothing histograms into polygons, a continuous piecewise linear estimator is obtained that, for twice continuously differentiable, strictly positive densities satisfying boundary conditions, attains a rate comparable (up to a logarithmic factor) to the n^{−4/5} convergence rate for the integrated mean squared error of kernel-type density estimators.
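A crude illustration of a posterior-mean piecewise-constant density estimate is given below, assuming a symmetric Dirichlet prior on the bin probabilities of a fixed histogram; the paper's Dirichlet mixture prior is considerably more general than this sketch.

```python
def bayes_histogram(data, a=0.0, b=1.0, n_bins=10, alpha=1.0):
    # Posterior-mean piecewise-constant density on [a, b] under a
    # symmetric Dirichlet(alpha) prior on the n_bins bin probabilities.
    width = (b - a) / n_bins
    counts = [0] * n_bins
    for x in data:
        k = min(int((x - a) / width), n_bins - 1)
        counts[k] += 1
    n = len(data)
    # Posterior mean of each bin probability: (count + alpha) / (n + n_bins * alpha).
    probs = [(c + alpha) / (n + n_bins * alpha) for c in counts]
    # Convert bin probabilities to density heights so the estimate integrates to one.
    return [p / width for p in probs]
```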
8.
To model a hypothesis of double monotone dependence between two ordinal categorical variables A and B, a set of symmetric odds ratios defined on the joint probability function is usually subjected to linear inequality constraints. In this paper, by contrast, two sets of asymmetric odds ratios, defined respectively on the conditional distributions of A given B and on the conditional distributions of B given A, are subjected to linear inequality constraints. If the joint probabilities are parameterized by a saturated log-linear model, these constraints become nonlinear inequality constraints on the log-linear parameters. The problem considered here is non-standard, both because of the presence of nonlinear inequality constraints and because the number of these constraints is greater than the number of parameters of the saturated log-linear model. This work has been supported by the COFIN 2002 project, references 2002133957_002, 2002133957_004. Preliminary findings were presented at the SIS (Società Italiana di Statistica) Annual Meeting, Bari, 2004.
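For intuition about odds-ratio constraints on a joint table, the local odds ratios can be computed as below. This is a generic illustration (local odds ratios on the joint distribution), not the paper's asymmetric odds ratios defined on the conditional distributions.

```python
def local_odds_ratios(p):
    # p: joint probability table as a list of rows.
    # theta[i][j] = p[i][j] * p[i+1][j+1] / (p[i][j+1] * p[i+1][j]).
    # Monotone-dependence hypotheses impose inequality constraints on
    # such ratios; theta[i][j] = 1 everywhere corresponds to independence.
    I, J = len(p), len(p[0])
    return [[p[i][j] * p[i + 1][j + 1] / (p[i][j + 1] * p[i + 1][j])
             for j in range(J - 1)] for i in range(I - 1)]
```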
9.
The posterior distribution of the likelihood is used to interpret the evidential meaning of p-values, posterior Bayes factors and Akaike's information criterion when comparing point null hypotheses with composite alternatives. Asymptotic arguments lead to simple re-calibrations of these criteria in terms of posterior tail probabilities of the likelihood ratio. (Prior) Bayes factors cannot be calibrated in this way, as they are model-specific.
10.
By examining the association between employees' perceptions of job security and central labor market policies and characteristics, this paper seeks to understand the mechanisms through which institutions generate confidence and positive expectations among individuals regarding their economic future. The analyses distinguish between different facets of perceived job security and different institutional mechanisms. My multilevel analyses of a data set containing information on 12,431 individuals in 23 countries show that some labor market policies and characteristics are more likely than others to provide workers with subjective security. Unemployment assistance in particular is an effective means of reducing workers' worries about job loss. Dismissal protection, by contrast, unleashes its psychologically protective effects only under certain conditions. The paper's main conclusion is that the effectiveness of policies varies and that different types of labor market institutions serve as complements rather than as substitutes.