1.
The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have an enormous impact on which treatments reach patients, when they receive them and how they are used. The statistical framework for supporting decisions in regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of “no treatment effect” are done routinely, and a p-value < 0.05 is often the determinant of what constitutes a “successful” trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence points to deficiencies of the frequentist paradigm as a contributing cause. An unknown number of effective medicines may have been abandoned because trials were declared “unsuccessful” due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a “prior” for Phase 3 trials, so that synthesized evidence across trials can be used to compute probability statements that are valuable for understanding the magnitude of treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
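As one concrete instance of the kind of probability statement this enables, the sketch below combines a normal prior summarizing earlier-trial evidence with a Phase 3 estimate under a conjugate normal approximation. All numbers are hypothetical, and this is only one simple version of the approach argued for.

```python
# A minimal sketch, assuming a normal approximation to the likelihood and
# hypothetical effect sizes: summarize earlier-trial evidence as a normal
# prior on the treatment effect, update with Phase 3 data, and compute
# probability statements that a p-value cannot provide.
import numpy as np
from scipy import stats

# Prior from synthesized earlier-trial evidence (hypothetical numbers)
prior_mean, prior_sd = 0.3, 0.2

# Phase 3 result (hypothetical): observed effect with its standard error
obs_effect, obs_se = 0.25, 0.12

# Conjugate normal-normal update via precision weighting
prior_prec, obs_prec = 1 / prior_sd**2, 1 / obs_se**2
post_prec = prior_prec + obs_prec
post_mean = (prior_prec * prior_mean + obs_prec * obs_effect) / post_prec
post_sd = np.sqrt(1 / post_prec)

p_benefit = 1 - stats.norm.cdf(0.0, post_mean, post_sd)   # P(effect > 0)
p_clinical = 1 - stats.norm.cdf(0.2, post_mean, post_sd)  # P(effect > hypothetical MCID)
print(f"posterior: {post_mean:.3f} (sd {post_sd:.3f})")
print(f"P(effect > 0) = {p_benefit:.3f}, P(effect > 0.2) = {p_clinical:.3f}")
```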
2.
In this paper, the quantile-based flattened logistic distribution is studied. Some classical and quantile-based properties of the distribution are obtained, including closed-form expressions for the L-moments, L-moment ratios and expectations of order statistics. A quantile-based analysis using the method of matching L-moments is employed to estimate the parameters of the proposed model, and the asymptotic variance–covariance matrix of the matching L-moments estimators is derived. Finally, we apply the proposed model to simulated data as well as two real-life datasets and compare the fit with the logistic distribution.
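As an illustration of the estimation method only, the sketch below computes sample L-moments and matches them on the plain logistic distribution, for which the classical identities λ1 = μ and λ2 = s hold. The flattened logistic model itself and the variance–covariance derivation are not reproduced.

```python
# A minimal sketch of matching L-moments, illustrated on the plain logistic
# distribution (location mu, scale s), where lambda_1 = mu and lambda_2 = s.
import numpy as np

def sample_l_moments(x):
    """Unbiased sample L-moments l_1..l_4 via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # b_k = (1/n) * sum_i [(i-1)(i-2)...(i-k)] / [(n-1)(n-2)...(n-k)] * x_(i)
    b = [x.mean()]
    for k in range(1, 4):
        w = np.ones(n)
        for j in range(k):
            w *= (np.arange(1, n + 1) - 1 - j) / (n - 1 - j)
        b.append((w * x).mean())
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2, l3, l4

rng = np.random.default_rng(0)
x = rng.logistic(loc=2.0, scale=0.5, size=5000)
l1, l2, l3, l4 = sample_l_moments(x)
mu_hat, s_hat = l1, l2  # matching lambda_1 = mu, lambda_2 = s
print(f"mu_hat={mu_hat:.3f}, s_hat={s_hat:.3f}, t3={l3/l2:.3f}, t4={l4/l2:.3f}")
```

For the logistic distribution the L-skewness is 0 and the L-kurtosis is 1/6, which the printed ratios should approximate.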
3.
Generalized method of moments (GMM) is used to develop tests for discriminating among discrete distributions in the two-parameter family of Katz distributions. Relationships involving moments are exploited to obtain identifying and over-identifying restrictions. The asymptotic relative efficiencies of the GMM-based tests are analyzed using the local power approach and the approximate Bahadur efficiency. The paper also gives results of Monte Carlo experiments designed to check the validity of the theoretical findings and to shed light on the small-sample properties of the proposed tests. Extensions of the results to compound Poisson alternative hypotheses are discussed.
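A minimal GMM sketch for the Katz family is given below, assuming the standard moment identities for the family defined by (x+1)p_{x+1} = (a + bx)p_x: mean a/(1−b), variance a/(1−b)^2, third central moment a(1+b)/(1−b)^3. Matching the mean and variance just identifies (a, b); the third condition is over-identifying and yields a Hansen J-type statistic. This illustrates the general approach, not the authors' exact test statistics.

```python
# A minimal two-step GMM sketch on the Katz family; Poisson data correspond
# to the Katz member with b = 0.
import numpy as np
from scipy.optimize import minimize

def katz_moments(theta, x):
    a, b = theta
    mu = a / (1 - b)
    var = a / (1 - b) ** 2
    mu3 = a * (1 + b) / (1 - b) ** 3   # third central moment of Katz
    g1 = x - mu
    g2 = (x - mu) ** 2 - var
    g3 = (x - mu) ** 3 - mu3
    return np.column_stack([g1, g2, g3])

def gmm_objective(theta, x, W):
    gbar = katz_moments(theta, x).mean(axis=0)
    return gbar @ W @ gbar

rng = np.random.default_rng(1)
x = rng.poisson(3.0, size=2000).astype(float)

# Step 1: identity weight; Step 2: efficient weight from step-1 moments
theta0 = np.array([x.mean(), 0.0])
step1 = minimize(gmm_objective, theta0, args=(x, np.eye(3)), method="Nelder-Mead")
S = np.cov(katz_moments(step1.x, x), rowvar=False)
W = np.linalg.inv(S)
step2 = minimize(gmm_objective, step1.x, args=(x, W), method="Nelder-Mead")

J = len(x) * gmm_objective(step2.x, x, W)  # ~ chi2(1) under the Katz null
print(f"a={step2.x[0]:.3f}, b={step2.x[1]:.3f}, J={J:.3f}")
```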
4.
On Optimality of Bayesian Wavelet Estimators
Abstract.  We investigate the asymptotic optimality of several Bayesian wavelet estimators, namely the posterior mean, the posterior median and the Bayes Factor, where the prior imposed on wavelet coefficients is a mixture of a mass function at zero and a Gaussian density. We show that, in terms of the mean squared error, for properly chosen hyperparameters of the prior, all three resulting Bayesian wavelet estimators achieve optimal minimax rates within any prescribed Besov space B^s_{p,q} for p ≥ 2. For 1 ≤ p < 2, the Bayes Factor is still optimal for (2s+2)/(2s+1) ≤ p < 2 and always outperforms the posterior mean and the posterior median, which in this case can achieve only the best possible rates for linear estimators.
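To make the coefficient-wise mechanics concrete, the sketch below works out the posterior for a single wavelet coefficient under a spike-and-slab prior of the stated form: observe d ~ N(theta, sigma^2) with prior theta ~ pi*delta_0 + (1 − pi)*N(0, tau^2). The hyperparameter values are arbitrary, and the per-coefficient Bayes factor shown is a simple stand-in for the paper's Bayes Factor estimator.

```python
# A minimal sketch: the posterior is a mixture of a point mass at zero and a
# normal, from which the posterior mean, a numerical posterior median, and a
# Bayes factor follow directly. Hyperparameters are arbitrary.
import numpy as np
from scipy import stats

def spike_slab_posterior(d, sigma=1.0, tau=2.0, pi0=0.8):
    # Marginal densities of d under the spike (theta = 0) and the slab
    m_spike = stats.norm.pdf(d, 0.0, sigma)
    m_slab = stats.norm.pdf(d, 0.0, np.sqrt(sigma**2 + tau**2))
    w0 = pi0 * m_spike / (pi0 * m_spike + (1 - pi0) * m_slab)  # P(theta=0 | d)
    shrink = tau**2 / (tau**2 + sigma**2)
    mu_n, sd_n = shrink * d, np.sqrt(shrink) * sigma           # slab posterior
    post_mean = (1 - w0) * mu_n
    # Posterior median: invert the mixture CDF numerically
    grid = np.linspace(mu_n - 6 * sd_n, mu_n + 6 * sd_n, 20001)
    cdf = w0 * (grid >= 0.0) + (1 - w0) * stats.norm.cdf(grid, mu_n, sd_n)
    post_median = grid[np.searchsorted(cdf, 0.5)]
    bayes_factor = m_slab / m_spike                            # nonzero vs zero
    return post_mean, post_median, bayes_factor

for d in (0.5, 2.0, 5.0):
    pm, md, bf = spike_slab_posterior(d)
    print(f"d={d:4.1f}: mean={pm:6.3f}, median={md:6.3f}, BF={bf:8.2f}")
```

Small observations are shrunk essentially to zero (the median exactly so), while large ones are nearly unshrunk, which is the thresholding behaviour underlying these estimators.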
5.
Approximation formulae are developed for the bias of ordinary and generalized Least Squares Dummy Variable (LSDV) estimators in dynamic panel data models. Results from Kiviet [Kiviet, J. F. (1995), On bias, inconsistency, and efficiency of various estimators in dynamic panel data models, Journal of Econometrics 68:53–78; Kiviet, J. F. (1999), Expectations of expansions for estimators in a dynamic panel data model: some results for weakly exogenous regressors, in: Hsiao, C., Lahiri, K., Lee, L.-F., Pesaran, M. H., eds., Analysis of Panels and Limited Dependent Variables, Cambridge: Cambridge University Press, pp. 199–225] are extended to higher-order dynamic panel data models with general covariance structure. The focus is on estimation of both short- and long-run coefficients. The results show that proper modelling of the disturbance covariance structure is indispensable. The bias approximations are used to construct bias-corrected estimators, which are then applied to quarterly data from 14 European Union countries. Money demand functions for M1, M2 and M3 are estimated for the EU area as a whole for the period 1991:I–1995:IV. Significant spillovers between countries are found, reflecting the dependence of domestic money demand on foreign developments. The empirical results show that in general plausible long-run effects are obtained by the bias-corrected estimators. Moreover, finite sample bias, although of moderate magnitude, is present, underlining the importance of more refined estimation techniques. The efficiency gains from exploiting the heteroscedasticity and cross-correlation patterns between countries are also sometimes considerable.
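A minimal sketch of the idea of bias-corrected LSDV in the simplest case follows: a first-order autoregressive panel with i.i.d. errors, where the classical Nickell approximation bias(rho_hat) ≈ −(1 + rho)/(T − 1) applies. The paper's approximations for higher-order dynamics and general covariance structures are considerably more elaborate and are not reproduced here.

```python
# Simulate a dynamic panel, estimate by LSDV (within transformation), and
# apply a first-order plug-in bias correction based on the Nickell formula.
import numpy as np

rng = np.random.default_rng(42)
N, T, rho = 200, 10, 0.6

# y_it = rho * y_i,t-1 + eta_i + eps_it
eta = rng.normal(0, 1, N)
y = np.zeros((N, T + 1))
for t in range(1, T + 1):
    y[:, t] = rho * y[:, t - 1] + eta + rng.normal(0, 1, N)

# LSDV: demean per unit, then OLS of y_t on y_{t-1}
y_lag, y_cur = y[:, :-1], y[:, 1:]
y_lag_w = y_lag - y_lag.mean(axis=1, keepdims=True)
y_cur_w = y_cur - y_cur.mean(axis=1, keepdims=True)
rho_lsdv = (y_lag_w * y_cur_w).sum() / (y_lag_w ** 2).sum()

# Plug-in correction: rho_hat is biased downward by about (1 + rho)/(T - 1)
rho_bc = rho_lsdv + (1 + rho_lsdv) / (T - 1)
print(f"true rho={rho}, LSDV={rho_lsdv:.3f}, bias-corrected={rho_bc:.3f}")
```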
6.
We discuss Bayesian analyses of traditional normal-mixture models for classification and discrimination. The development involves application of an iterative resampling approach to Monte Carlo inference, commonly called Gibbs sampling, and demonstrates routine application. We stress the benefits of exact analyses over traditional classification and discrimination techniques: the ease with which such analyses may be performed in a quite general setting, with possibly several normal-mixture components having different covariance matrices; the computation of exact posterior classification probabilities for observed data and for future cases to be classified; and posterior distributions for these probabilities that allow for assessment of second-level uncertainties in classification.
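A minimal sketch of the sampler's structure in the simplest case, a two-component univariate normal mixture with conjugate priors, is given below. The paper's setting is more general (several possibly multivariate components with distinct covariance matrices), and all prior settings here are arbitrary.

```python
# Gibbs sampling for a two-component univariate normal mixture: alternate
# between sampling allocations and sampling parameters given allocations.
import numpy as np
rng = np.random.default_rng(7)

# Simulated data from two components
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 1.5, 100)])
n, K = len(x), 2

mu = np.array([-1.0, 1.0])
sig2 = np.ones(K)
w = np.full(K, 0.5)
draws = []
for it in range(2000):
    # 1. Allocations: P(z_i = k | ...) proportional to w_k * N(x_i; mu_k, sig2_k)
    like = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / sig2) / np.sqrt(sig2)
    prob = like / like.sum(axis=1, keepdims=True)
    z = (rng.random(n)[:, None] > np.cumsum(prob, axis=1)).sum(axis=1)
    for k in range(K):
        xk = x[z == k]
        nk = len(xk)
        # 2. Mean: conjugate N(0, 10^2) prior
        v = 1 / (nk / sig2[k] + 1 / 100)
        mu[k] = rng.normal(v * xk.sum() / sig2[k], np.sqrt(v))
        # 3. Variance: conjugate inverse-gamma(2, 2) prior
        sig2[k] = 1 / rng.gamma(2 + nk / 2, 1 / (2 + 0.5 * ((xk - mu[k]) ** 2).sum()))
    # 4. Weights: Dirichlet posterior with uniform prior
    w = rng.dirichlet(1 + np.bincount(z, minlength=K))
    if it >= 500:  # discard burn-in
        draws.append(np.concatenate([mu, np.sqrt(sig2), w]))

post = np.mean(draws, axis=0)
print("posterior means (mu1, mu2, s1, s2, w1, w2):", np.round(post, 2))
```

The allocation probabilities in step 1 are exactly the posterior classification probabilities the abstract refers to; retaining them across iterations gives their posterior distribution for each observation.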
7.
Various authors, given k location parameters, have considered lower confidence bounds on (standardized) differences between the largest and each of the other k − 1 parameters. They have then used these bounds to put lower confidence bounds on the probability of correct selection (PCS) in the same experiment (as was used for finding the lower bounds on differences). It is pointed out that this is an inappropriate inference procedure. Moreover, if the PCS refers to some later experiment, it is shown that if a non-trivial confidence bound is possible, then it is already possible to conclude, with greater confidence, that correct selection has occurred in the first experiment. The short answer to the question in the title is therefore ‘No’, but this should be qualified in the case of a Bayesian analysis.
8.
It is often of interest to find the maximum or near maxima among a set of vector-valued parameters in a statistical model; in the case of disease mapping, for example, these correspond to relative-risk “hotspots” where public-health intervention may be needed. The general problem is one of estimating nonlinear functions of the ensemble of relative risks, but biased estimates result if posterior means are simply substituted into these nonlinear functions. The authors obtain better estimates of extrema from a new, weighted ranks squared error loss function. The derivation of these Bayes estimators assumes a hidden-Markov random-field model for relative risks, and their behaviour is illustrated with real and simulated data.
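The plug-in bias the authors address is easy to demonstrate: substituting posterior means into the nonlinear functional max_j(theta_j) gives a smaller value than the posterior expectation of the maximum. The sketch below uses simulated draws as a stand-in for MCMC output from a disease-mapping model; the authors' weighted ranks squared error loss estimator itself is not reproduced.

```python
# Contrast the plug-in of posterior means with averaging the functional over
# posterior draws, for the maximum relative risk across areas.
import numpy as np
rng = np.random.default_rng(3)

J, S = 50, 4000                        # 50 areas, 4000 posterior draws
true_log_rr = rng.normal(0.0, 0.3, J)  # hypothetical true log relative risks
# Stand-in posterior: draws centred near the truth with noticeable uncertainty
draws = true_log_rr + rng.normal(0, 0.25, (S, J))
rr_draws = np.exp(draws)

plug_in_max = np.exp(draws.mean(axis=0)).max()  # max of posterior means
post_mean_max = rr_draws.max(axis=1).mean()     # posterior mean of the max
print(f"true max RR           : {np.exp(true_log_rr).max():.3f}")
print(f"max of posterior means: {plug_in_max:.3f}")
print(f"posterior mean of max : {post_mean_max:.3f}")
```

Neither quantity is a good estimator of the true maximum here (the first understates it, the second overstates it), which is precisely the gap a tailored loss function for extrema is designed to close.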
9.
Abstract.  We consider the problem of estimating a compactly supported density taking a Bayesian nonparametric approach. We define a Dirichlet mixture prior that, while selecting piecewise constant densities, has full support on the Hellinger metric space of all commonly dominated probability measures on a known bounded interval. We derive pointwise rates of convergence for the posterior expected density by studying the speed at which the posterior mass accumulates on shrinking Hellinger neighbourhoods of the sampling density. If the data are sampled from a strictly positive, α-Hölderian density, with α ∈ (0, 1], then the optimal convergence rate n^(−α/(2α+1)) is obtained up to a logarithmic factor. Smoothing histograms by polygons, a continuous piecewise linear estimator is obtained that, for twice continuously differentiable, strictly positive densities satisfying boundary conditions, attains a rate comparable, up to a logarithmic factor, to the convergence rate n^(−4/5) for integrated mean squared error of kernel-type density estimators.
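To illustrate the final smoothing step only, the sketch below builds a histogram on a bounded interval and forms the frequency polygon by linear interpolation of bin-centre heights, yielding a continuous piecewise linear estimate. The Dirichlet-mixture posterior machinery is not reproduced, and the data-generating density is arbitrary.

```python
# Histogram on [0, 1], then a continuous piecewise linear density estimate
# obtained by connecting bin-centre heights.
import numpy as np

rng = np.random.default_rng(11)
x = rng.beta(2, 5, size=2000)  # an arbitrary density supported on [0, 1]

heights, edges = np.histogram(x, bins=25, range=(0.0, 1.0), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])

def polygon_density(t):
    """Piecewise linear interpolation of histogram heights (flat at the ends)."""
    return np.interp(t, centres, heights)

t = np.linspace(0, 1, 5)
print(np.round(polygon_density(t), 3))
```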
10.
The purpose of this paper is threefold. First, we obtain the asymptotic properties of the modified model selection criteria proposed by Hurvich et al. (1990, Improved estimators of Kullback–Leibler information for autoregressive model selection in small samples, Biometrika 77, 709–719) for autoregressive models. Second, we provide some insight into the better performance of these modified criteria. Third, we extend the modification introduced by these authors to model selection criteria commonly used in the class of self-exciting threshold autoregressive (SETAR) time series models. We show the improvements of the modified criteria in their finite sample performance. In particular, for small and medium sample sizes the frequency of selecting the true model improves for the consistent criteria, and the root mean square error (RMSE) of prediction improves for the efficient criteria. These results are illustrated via simulation with SETAR models in which we assume that the threshold and the parameters are unknown.
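To show the kind of small-sample correction involved, the sketch below contrasts AIC with a corrected version for autoregressive order selection in a short series. The generic correction AICc = AIC + 2k(k+1)/(n − k − 1), with k the number of estimated parameters, is used as a stand-in for the criteria studied in the paper; the SETAR extension is not reproduced.

```python
# Fit AR(p) models by OLS on a short simulated series and compare AIC with
# its small-sample corrected form AICc across candidate orders.
import numpy as np

def fit_ar(x, p):
    """OLS fit of an AR(p); returns residual variance and effective sample size."""
    X = np.column_stack([x[p - j - 1 : len(x) - j - 1] for j in range(p)])
    X = np.column_stack([np.ones(len(X)), X])  # intercept
    y = x[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return (resid ** 2).mean(), len(y)

rng = np.random.default_rng(5)
n = 60  # small sample on purpose
x = np.zeros(n + 100)
for t in range(2, len(x)):  # true model is AR(2)
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()
x = x[100:]  # drop burn-in

for p in range(1, 7):
    s2, m = fit_ar(x, p)
    k = p + 2  # AR coefficients + intercept + error variance
    aic = m * np.log(s2) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (m - k - 1)
    print(f"p={p}: AIC={aic:7.2f}  AICc={aicc:7.2f}")
```

In runs like this, AICc penalizes large orders more heavily than AIC, which is the mechanism behind the improved frequency of selecting the true model in small samples.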