1.
ABSTRACT

The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have an enormous impact on which treatments reach patients, when they receive them and how they are used. The statistical framework for supporting decisions in regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of “no treatment effect” are done routinely, and a p-value < 0.05 is often the determinant of what constitutes a “successful” trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence points blame at the deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared “unsuccessful” owing to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process through its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a “prior” for Phase 3 trials, so that evidence synthesized across trials can be used to compute probability statements that are valuable for understanding the magnitude of the treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
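The kind of probability statement this abstract advocates can be illustrated with a minimal conjugate-normal sketch. This is not the authors' actual method; the normal-normal model and all function names are illustrative assumptions. The prior summarizes evidence from earlier trials, the Phase 3 estimate updates it, and the output is a direct posterior probability that the treatment effect is positive, rather than a p-value.

```python
import math

def posterior_prob_positive(prior_mean, prior_sd, obs_effect, obs_se):
    """Conjugate normal update: a prior synthesized from earlier trials is
    combined with the Phase 3 effect estimate; returns the posterior
    probability that the treatment effect exceeds zero."""
    prior_prec = 1.0 / prior_sd ** 2
    data_prec = 1.0 / obs_se ** 2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * obs_effect)
    z = post_mean / math.sqrt(post_var)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
```

With a nearly flat prior the posterior probability is driven by the trial alone; an informative prior concordant with the trial data sharpens the conclusion.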
2.
On Optimality of Bayesian Wavelet Estimators
Abstract.  We investigate the asymptotic optimality of several Bayesian wavelet estimators, namely the posterior mean, the posterior median and the Bayes factor, where the prior imposed on the wavelet coefficients is a mixture of a point mass at zero and a Gaussian density. We show that, in terms of mean squared error, for properly chosen hyperparameters of the prior, all three resulting Bayesian wavelet estimators achieve the optimal minimax rate within any prescribed Besov space B^s_{p,q} for p ≥ 2. For 1 ≤ p < 2, the Bayes factor is still optimal for (2s + 2)/(2s + 1) ≤ p < 2 and always outperforms the posterior mean and the posterior median, which in this case can achieve only the best rates possible for linear estimators.
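The posterior mean under the spike-and-Gaussian prior described here has a closed form, sketched below under assumed notation (observed coefficient d ~ N(θ, σ²), prior π₀·δ₀ + (1 − π₀)·N(0, τ²); variable names are placeholders, not the paper's):

```python
import math

def normal_pdf(x, var):
    """Density of N(0, var) at x."""
    return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def posterior_mean_coeff(d, pi0, tau2, sigma2=1.0):
    """Posterior mean of a wavelet coefficient under the spike-and-Gaussian
    prior pi0 * delta_0 + (1 - pi0) * N(0, tau2), given d ~ N(theta, sigma2).
    Small coefficients are shrunk towards zero; large ones survive."""
    marg_spike = normal_pdf(d, sigma2)            # marginal if theta = 0
    marg_slab = normal_pdf(d, sigma2 + tau2)      # marginal if theta ~ N(0, tau2)
    w = (1 - pi0) * marg_slab / (pi0 * marg_spike + (1 - pi0) * marg_slab)
    return w * (tau2 / (tau2 + sigma2)) * d       # mixing weight times linear shrinkage
```

The estimator is nonlinear: the data-dependent weight w suppresses small coefficients almost entirely while leaving large ones nearly unshrunk, which is what lets it beat linear rules for p < 2.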
3.
We discuss Bayesian analyses of traditional normal-mixture models for classification and discrimination. The development involves application of an iterative resampling approach to Monte Carlo inference, commonly called Gibbs sampling, and demonstrates routine application. We stress the benefits of exact analyses over traditional classification and discrimination techniques: the ease with which such analyses may be performed in a quite general setting, with possibly several normal-mixture components having different covariance matrices; the computation of exact posterior classification probabilities for observed data and for future cases to be classified; and posterior distributions for these probabilities that allow assessment of second-level uncertainties in classification.
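A heavily simplified sketch of the Gibbs-sampling idea, assuming a one-dimensional two-component mixture with known unit variances, equal weights and flat priors on the means (the paper's setting is far more general, with several components and distinct covariance matrices):

```python
import math
import random

def gibbs_mixture(data, n_iter=200, seed=0):
    """Alternates the two Gibbs steps: sample component allocations given
    the means, then sample each mean given its allocated observations."""
    rng = random.Random(seed)
    mu = [min(data), max(data)]  # crude but separating initialisation
    for _ in range(n_iter):
        # Step 1: sample allocations given current component means.
        groups = ([], [])
        for x in data:
            p0 = math.exp(-0.5 * (x - mu[0]) ** 2)
            p1 = math.exp(-0.5 * (x - mu[1]) ** 2)
            k = 0 if rng.random() < p0 / (p0 + p1) else 1
            groups[k].append(x)
        # Step 2: sample means given allocations (flat prior -> normal posterior).
        for k in (0, 1):
            n = len(groups[k])
            if n:
                mean = sum(groups[k]) / n
                mu[k] = rng.gauss(mean, 1.0 / math.sqrt(n))
    return mu
```

Iterating the two conditional draws yields (approximate) draws from the joint posterior; posterior classification probabilities follow by averaging the sampled allocations.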
4.
Various authors, given k location parameters, have considered lower confidence bounds on (standardized) differences between the largest and each of the other k − 1 parameters. They have then used these bounds to put lower confidence bounds on the probability of correct selection (PCS) in the same experiment (as was used for finding the lower bounds on differences). It is pointed out that this is an inappropriate inference procedure. Moreover, if the PCS refers to some later experiment, it is shown that if a non-trivial confidence bound is possible, then it is already possible to conclude, with greater confidence, that correct selection has occurred in the first experiment. The short answer to the question in the title is therefore ‘No’, but this should be qualified in the case of a Bayesian analysis.
5.
It is often of interest to find the maximum or near maxima among a set of vector‐valued parameters in a statistical model; in the case of disease mapping, for example, these correspond to relative‐risk “hotspots” where public‐health intervention may be needed. The general problem is one of estimating nonlinear functions of the ensemble of relative risks, but biased estimates result if posterior means are simply substituted into these nonlinear functions. The authors obtain better estimates of extrema from a new, weighted ranks squared error loss function. The derivation of these Bayes estimators assumes a hidden‐Markov random‐field model for relative risks, and their behaviour is illustrated with real and simulated data.
6.
Abstract.  We consider the problem of estimating a compactly supported density, taking a Bayesian nonparametric approach. We define a Dirichlet mixture prior that, while selecting piecewise constant densities, has full support on the Hellinger metric space of all commonly dominated probability measures on a known bounded interval. We derive pointwise rates of convergence for the posterior expected density by studying the speed at which the posterior mass accumulates on shrinking Hellinger neighbourhoods of the sampling density. If the data are sampled from a strictly positive, α-Hölderian density, with α ∈ (0, 1], then the optimal convergence rate n^{−α/(2α+1)} is obtained up to a logarithmic factor. Smoothing histograms by polygons, a continuous piecewise linear estimator is obtained that, for twice continuously differentiable, strictly positive densities satisfying boundary conditions, attains a rate comparable up to a logarithmic factor to the convergence rate n^{−4/5} for the integrated mean squared error of kernel-type density estimators.
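A piecewise-constant posterior-mean density of the flavour discussed here can be sketched with a fixed equal-width partition and a symmetric Dirichlet prior on the bin probabilities. This is an illustrative simplification, not the paper's Dirichlet mixture prior over partitions:

```python
def bayes_histogram(data, k, a, b, alpha=1.0):
    """Posterior-mean piecewise-constant density on [a, b] with k equal bins,
    under a Dirichlet(alpha, ..., alpha) prior on the bin probabilities:
    each bin's density is (count + alpha) / ((n + k*alpha) * bin_width)."""
    width = (b - a) / k
    counts = [0] * k
    for x in data:
        j = min(int((x - a) / width), k - 1)  # clamp x == b into the last bin
        counts[j] += 1
    n = len(data)
    return [(c + alpha) / ((n + k * alpha) * width) for c in counts]
```

The estimate integrates to one by construction and is strictly positive for alpha > 0; joining the bin midpoints by line segments gives the polygon-smoothed, continuous piecewise linear variant the abstract mentions.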
7.
The posterior distribution of the likelihood is used to interpret the evidential meaning of P-values, posterior Bayes factors and Akaike's information criterion when comparing point null hypotheses with composite alternatives. Asymptotic arguments lead to simple re-calibrations of these criteria in terms of posterior tail probabilities of the likelihood ratio. (Prior) Bayes factors cannot be calibrated in this way as they are model-specific.
8.
The location-scale model with equi-correlated responses is discussed. The structure of the location-scale model is utilised to generate the prediction distribution of a future response and that of a set of future responses. The method avoids the integration procedures usually involved in the derivation of prediction distributions and yields the same results as those obtained by the Bayes method with the vague prior distribution. Finally, the results have been specialised to cover the case of the normal intra-class model.
9.
Xing-De Duan, Statistics, 2016, 50(3): 525-539
This paper develops a Bayesian approach to obtain the joint estimates of unknown parameters, nonparametric functions and random effects in generalized partially linear mixed models (GPLMMs), and presents three case deletion influence measures to identify influential observations based on the φ-divergence, Cook's posterior mean distance and Cook's posterior mode distance of parameters. Fisher's iterative scoring algorithm is developed to evaluate the posterior modes of parameters in GPLMMs. The first-order approximation to Cook's posterior mode distance is presented. The computationally feasible formulae for the φ-divergence diagnostic and Cook's posterior mean distance are given. Several simulation studies and an example are presented to illustrate our proposed methodologies.
10.
This paper proposes a new hysteretic vector autoregressive (HVAR) model in which regime switching may be delayed when the hysteresis variable lies in a hysteresis zone. We incorporate an adapted multivariate Student-t distribution obtained by amending the scale mixtures of normal distributions. This HVAR model allows a higher degree of flexibility in the degrees of freedom for each time series. We use the proposed model to test for a causal relationship between any two target time series. Using posterior odds ratios, we overcome the limitations of the classical approach to multiple testing. Both simulated and real examples illustrate the suggested methods. We apply the proposed HVAR model to investigate the causal relationship between the quarterly growth rates of gross domestic product of the United Kingdom and the United States. Moreover, we check the pairwise lagged dependence of daily PM2.5 levels in three districts of Taipei.
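The delayed-switching mechanism that distinguishes a hysteretic model from an ordinary threshold model can be sketched in a few lines, assuming a two-regime zone [r_lo, r_hi] (function and variable names are illustrative, not from the paper):

```python
def hysteretic_regime(z_path, r_lo, r_hi, start_regime=0):
    """Regime path under a hysteresis zone [r_lo, r_hi]: the regime switches
    only when the hysteresis variable z exits the zone; while z stays inside
    the zone the previous regime is carried over (delayed switching)."""
    regime = start_regime
    path = []
    for z in z_path:
        if z <= r_lo:
            regime = 0          # lower regime triggered
        elif z >= r_hi:
            regime = 1          # upper regime triggered
        # r_lo < z < r_hi: keep the previous regime (the hysteresis delay)
        path.append(regime)
    return path
```

In a plain threshold model a single cut-off would flip the regime every time z crosses it; here the same value of z inside the zone can belong to either regime depending on history, which is the hysteresis effect.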