Full-text access type
Paid full text | 1499 |
Free | 37 |
Free (domestic) | 2 |
Subject classification
Management | 60 |
Demography | 1 |
Collected works | 22 |
Theory and methodology | 9 |
General | 163 |
Sociology | 28 |
Statistics | 1255 |
Publication year
2024 | 1 |
2023 | 12 |
2022 | 12 |
2021 | 13 |
2020 | 19 |
2019 | 54 |
2018 | 68 |
2017 | 104 |
2016 | 43 |
2015 | 24 |
2014 | 48 |
2013 | 332 |
2012 | 100 |
2011 | 55 |
2010 | 44 |
2009 | 53 |
2008 | 47 |
2007 | 52 |
2006 | 48 |
2005 | 55 |
2004 | 53 |
2003 | 40 |
2002 | 33 |
2001 | 30 |
2000 | 31 |
1999 | 20 |
1998 | 21 |
1997 | 24 |
1996 | 8 |
1995 | 14 |
1994 | 9 |
1993 | 8 |
1992 | 8 |
1991 | 10 |
1990 | 4 |
1989 | 2 |
1988 | 5 |
1987 | 6 |
1986 | 4 |
1985 | 3 |
1984 | 2 |
1983 | 5 |
1982 | 6 |
1981 | 2 |
1980 | 2 |
1979 | 2 |
1978 | 2 |
1,538 results found (search took 937 ms)
1.
A conformance proportion is an important and useful index to assess industrial quality improvement. Statistical confidence limits for a conformance proportion are usually required not only to perform statistical significance tests, but also to provide useful information for determining practical significance. In this article, we propose approaches for constructing statistical confidence limits for a conformance proportion of multiple quality characteristics. Under the assumption that the variables of interest are distributed with a multivariate normal distribution, we develop an approach based on the concept of a fiducial generalized pivotal quantity (FGPQ). Without any distribution assumption on the variables, we apply some confidence interval construction methods for the conformance proportion by treating it as the probability of a success in a binomial distribution. The performance of the proposed methods is evaluated through detailed simulation studies. The results reveal that the simulated coverage probability (cp) for the FGPQ-based method is generally larger than the claimed value. On the other hand, one of the binomial distribution-based methods, that is, the standard method suggested in classical textbooks, appears to have smaller simulated cps than the nominal level. Two alternatives to the standard method are found to maintain their simulated cps sufficiently close to the claimed level, and hence their performances are judged to be satisfactory. In addition, three examples are given to illustrate the application of the proposed methods.
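The coverage contrast this abstract reports for the binomial-based intervals can be illustrated with a small simulation. This is a sketch, not the authors' code: the textbook Wald interval plays the role of the "standard method", and the Wilson score interval stands in for the better-behaved alternatives.

```python
import math
import random

def wald_ci(x, n, z=1.96):
    """Classical textbook interval: p_hat +/- z*sqrt(p_hat*(1-p_hat)/n)."""
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def wilson_ci(x, n, z=1.96):
    """Wilson score interval, a common alternative with better coverage."""
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def coverage(ci_fn, p_true=0.95, n=50, reps=10000, seed=1):
    """Fraction of simulated intervals that contain the true proportion."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        x = sum(rng.random() < p_true for _ in range(n))
        lo, hi = ci_fn(x, n)
        hits += lo <= p_true <= hi
    return hits / reps
```

For a conformance proportion near 1 (here 0.95 with n = 50), the Wald interval's simulated coverage falls noticeably below the nominal 95% while the Wilson interval's stays much closer, mirroring the pattern the abstract describes.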
2.
Stephen J. Ruberg Frank E. Harrell Jr. Margaret Gamalo-Siebers Lisa LaVange J. Jack Lee Karen Price 《The American statistician》2019,73(1):319-327
Abstract. The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have enormous impact on what treatments get to patients, when they get them and how they are used. The statistical framework for supporting decisions in regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of "no treatment effect" are done routinely, and the p-value < 0.05 is often the determinant of what constitutes a "successful" trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence lays part of the blame on the deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared "unsuccessful" due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a "prior" for Phase 3 trials so that synthesized evidence across trials can be utilized to compute probability statements that are valuable for understanding the magnitude of treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
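The kind of probability statement the authors advocate can be sketched with a minimal beta-binomial model. All numbers here are hypothetical: an earlier-phase result (18/40 responders) is converted into an informative Beta prior for the Phase 3 treatment arm, and the posterior probability that the treatment beats control is estimated by Monte Carlo.

```python
import random

def posterior_prob_better(x_t, n_t, x_c, n_c,
                          prior_t=(1, 1), prior_c=(1, 1),
                          draws=100000, seed=7):
    """Monte Carlo estimate of P(p_treatment > p_control | data).

    Beta priors (e.g. built from earlier-phase data) are updated with
    trial counts; the posterior is Beta(a + successes, b + failures).
    """
    rng = random.Random(seed)
    at, bt = prior_t[0] + x_t, prior_t[1] + n_t - x_t
    ac, bc = prior_c[0] + x_c, prior_c[1] + n_c - x_c
    wins = sum(rng.betavariate(at, bt) > rng.betavariate(ac, bc)
               for _ in range(draws))
    return wins / draws

# Hypothetical Phase 3 counts: 30/100 responders on treatment, 20/100
# on control.  The informed analysis folds in a hypothetical Phase 2
# result of 18/40 responders as the treatment-arm prior.
flat = posterior_prob_better(30, 100, 20, 100)
informed = posterior_prob_better(30, 100, 20, 100, prior_t=(18, 22))
```

Because the synthetic Phase 2 response rate (0.45) is higher than the Phase 3 estimate, the informed posterior probability of benefit exceeds the flat-prior one, which is the mechanism by which evidence synthesized across trials sharpens the Phase 3 conclusion.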
3.
David R. Bickel 《Communications in Statistics - Theory and Methods》2020,49(11):2703-2712
Abstract. Confidence sets, p values, maximum likelihood estimates, and other results of non-Bayesian statistical methods may be adjusted to favor sampling distributions that are simple compared to others in the parametric family. The adjustments are derived from a prior likelihood function previously used to adjust posterior distributions.
4.
5.
Carmen Fernández Eduardo Ley Mark F. J. Steel 《Journal of the Royal Statistical Society. Series C, Applied statistics》2002,51(3):257-280
Summary. We model daily catches of fishing boats in the Grand Bank fishing grounds. We use data on catches per species for a number of vessels collected by the European Union in the context of the Northwest Atlantic Fisheries Organization. Many variables can be thought to influence the amount caught: a number of ship characteristics (such as the size of the ship, the fishing technique used and the mesh size of the nets) are obvious candidates, but one can also consider the season or the actual location of the catch. Our database leads to 28 possible regressors (arising from six continuous variables and four categorical variables, whose 22 levels are treated separately), resulting in a set of 177 million possible linear regression models for the log-catch. Zero observations are modelled separately through a probit model. Inference is based on Bayesian model averaging, using a Markov chain Monte Carlo approach. Particular attention is paid to the prediction of catches for single and aggregated ships.
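Bayesian model averaging over millions of models requires MCMC, but the core idea scales down to an enumerable toy. The following sketch (not the authors' method) approximates posterior model probabilities with BIC weights over three candidate models for simulated data in which only `x1` matters; all names and data are illustrative.

```python
import math
import random

def simple_ols_rss(y, x=None):
    """Residual sum of squares for y ~ 1 (+ x), via closed-form OLS."""
    n = len(y)
    ybar = sum(y) / n
    if x is None:
        return sum((yi - ybar) ** 2 for yi in y)
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    beta = sxy / sxx
    return sum((yi - ybar - beta * (xi - xbar)) ** 2
               for xi, yi in zip(x, y))

def bma_weights(y, candidates):
    """Approximate posterior model probabilities via BIC weights."""
    n = len(y)
    bics = []
    for name, x in candidates:
        k = 1 if x is None else 2          # number of mean parameters
        rss = simple_ols_rss(y, x)
        bics.append((name, n * math.log(rss / n) + k * math.log(n)))
    best = min(b for _, b in bics)
    raw = {name: math.exp(-(b - best) / 2) for name, b in bics}
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

# Toy data: y depends on x1 but not x2, so BMA should favour the x1 model.
rng = random.Random(0)
x1 = [rng.gauss(0, 1) for _ in range(60)]
x2 = [rng.gauss(0, 1) for _ in range(60)]
y = [2.0 + 1.5 * a + rng.gauss(0, 0.5) for a in x1]
weights = bma_weights(y, [("null", None), ("x1", x1), ("x2", x2)])
```

A model-averaged prediction would then combine the per-model predictions with these weights; with 2^28 models, the same weights are instead estimated by sampling model space, as in the paper.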
6.
Bram Thuysbaert 《Journal of Economic Inequality》2008,6(1):33-55
Empirical applications of poverty measurement often have to deal with a stochastic weighting variable such as household size. Within the framework of a bivariate distribution function defined over income and weight, I derive the limiting distributions of the decomposable poverty measures and of the ordinates of stochastic dominance curves. The poverty line is allowed to depend on the income distribution. It is shown how the results can be used to test hypotheses concerning changes in poverty. The inference procedures are briefly illustrated using Belgian data. An erratum to this article is available.
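The decomposable class the abstract refers to includes the Foster-Greer-Thorbecke (FGT) measures; a minimal weighted implementation shows where the stochastic weighting variable enters. The data and poverty line below are made up for illustration.

```python
def fgt_poverty(incomes, weights, z, alpha=0.0):
    """Weighted Foster-Greer-Thorbecke poverty measure P_alpha.

    alpha=0 gives the headcount ratio, alpha=1 the normalized poverty
    gap.  The weights (e.g. household sizes) play the role of the
    stochastic weighting variable discussed in the abstract.
    """
    total = sum(weights)
    poor = sum(w * ((z - y) / z) ** alpha
               for y, w in zip(incomes, weights) if y < z)
    return poor / total

# Hypothetical incomes, household sizes, and a poverty line of 10.
incomes = [5, 8, 12, 20]
weights = [2, 1, 3, 4]
headcount = fgt_poverty(incomes, weights, z=10, alpha=0)
gap = fgt_poverty(incomes, weights, z=10, alpha=1)
```

Because the weights are random, the sampling distribution of such a statistic is not the standard i.i.d. one, which is exactly the complication whose limiting theory the paper works out (including poverty lines that are themselves functions of the income distribution, e.g. a fraction of the median).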
7.
何琼 (He Qiong) 《Journal of Zhejiang Ocean University (Humanities Sciences)》2003,20(1):79-82
Understanding the pragmatic implications of utterances at the semantic level is one of the difficult points of college English teaching. To strengthen this weak link, one should start from the theories of "conversational implicature" and "relevance", reveal the rules and features by which communicative intent is conveyed in discourse, and use concrete examples to analyze the inference obstacles that may arise in listening comprehension and their causes.
8.
Peter Hall Qiwei Yao 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2003,65(2):425-442
Summary. We develop a general methodology for tilting time series data. Attention is focused on a large class of regression problems, where errors are expressed through autoregressive processes. The class has a range of important applications and in the context of our work may be used to illustrate the application of tilting methods to interval estimation in regression, robust statistical inference and estimation subject to constraints. The method can be viewed as 'empirical likelihood with nuisance parameters'.
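The tilting idea, stripped of the time-series and nuisance-parameter machinery of the paper, is to reweight the data so that a constraint holds while staying as close as possible to the uniform empirical weights. A minimal empirical-likelihood sketch for a mean constraint (my illustration, not the authors' algorithm):

```python
def el_weights(x, mu, tol=1e-10):
    """Empirical-likelihood weights p_i forcing the weighted mean to mu.

    The solution has the form p_i = 1 / (n * (1 + lam * (x_i - mu))),
    with the Lagrange multiplier lam chosen by Newton's method so that
    sum(p_i * (x_i - mu)) = 0.  mu must lie strictly inside the range
    of the data for a valid solution to exist.
    """
    n = len(x)
    d = [xi - mu for xi in x]
    lam = 0.0
    for _ in range(200):
        g = sum(di / (1 + lam * di) for di in d)
        gp = -sum(di * di / (1 + lam * di) ** 2 for di in d)
        step = g / gp
        lam -= step
        if abs(step) < tol:
            break
    return [1 / (n * (1 + lam * di)) for di in d]
```

The weights sum to one automatically at the solution, and tilting the weights (rather than the data) is what lets the same device serve interval estimation, robustness and constrained estimation in the regression settings the abstract lists.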
9.
John D. Emerson David C. Hoaglin Frederick Mosteller 《Statistical Methods and Applications》1993,2(3):269-290
Summary. Meta-analyses of sets of clinical trials often combine risk differences from several 2×2 tables according to a random-effects model. The DerSimonian-Laird random-effects procedure, widely used for estimating the population mean risk difference, weights the risk difference from each primary study inversely proportional to an estimate of its variance (the sum of the between-study variance and the conditional within-study variance). Because those weights are not independent of the risk differences, however, the procedure sometimes exhibits bias and unnatural behavior. The present paper proposes a modified weighting scheme that uses the unconditional within-study variance to avoid this source of bias. The modified procedure has variance closer to that available from weighting by ideal weights when such weights are known. We studied the modified procedure in extensive simulation experiments using situations whose parameters resemble those of actual studies in medical research. For comparison we also included two unbiased procedures, the unweighted mean and a sample-size-weighted mean; their relative variability depends on the extent of heterogeneity among the primary studies. An example illustrates the application of the procedures to actual data and the differences among the results.
This research was supported by Grant HS 05936 from the Agency for Health Care Policy and Research to Harvard University.
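For reference, the baseline procedure the paper modifies can be sketched compactly. This implements the standard DerSimonian-Laird estimator with the conditional (estimated) within-study variances; the 2×2 tables below are hypothetical.

```python
import math

def dersimonian_laird_rd(tables):
    """Random-effects pooled risk difference from a list of 2x2 tables.

    Each table is (events_t, n_t, events_c, n_c).  Within-study
    variances are the conditional estimates the abstract describes,
    which is the source of the weight/estimate dependence at issue.
    """
    rd, v = [], []
    for et, nt, ec, nc in tables:
        pt, pc = et / nt, ec / nc
        rd.append(pt - pc)
        v.append(pt * (1 - pt) / nt + pc * (1 - pc) / nc)
    w = [1 / vi for vi in v]
    sw = sum(w)
    fixed = sum(wi * di for wi, di in zip(w, rd)) / sw
    # DL moment estimate of the between-study variance tau^2.
    q = sum(wi * (di - fixed) ** 2 for wi, di in zip(w, rd))
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (len(tables) - 1)) / c)
    # Random-effects weights: inverse of (within + between) variance.
    w_star = [1 / (vi + tau2) for vi in v]
    pooled = sum(wi * di for wi, di in zip(w_star, rd)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se, tau2

# Hypothetical trials: (events_treatment, n_t, events_control, n_c).
tables = [(12, 100, 20, 100), (8, 150, 15, 150), (30, 200, 45, 200)]
pooled, se, tau2 = dersimonian_laird_rd(tables)
```

The paper's modification would replace the per-study `v` terms with unconditional within-study variances so that the weights no longer depend on the observed risk differences.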
10.
Low dose risk estimation via simultaneous statistical inferences
Walter W. Piegorsch R. Webster West Wei Pan Ralph L. Kodell 《Journal of the Royal Statistical Society. Series C, Applied statistics》2005,54(1):245-258
Summary. The paper develops and studies simultaneous confidence bounds that are useful for making low dose inferences in quantitative risk analysis. Application is intended for risk assessment studies where human, animal or ecological data are used to set safe low dose levels of a toxic agent, but where study information is limited to high dose levels of the agent. Methods are derived for estimating simultaneous, one-sided, upper confidence limits on risk for end points measured on a continuous scale. From the simultaneous confidence bounds, lower confidence limits on the dose that is associated with a particular risk (often referred to as a bench-mark dose) are calculated. An important feature of the simultaneous construction is that any inferences that are based on inverting the simultaneous confidence bounds apply automatically to inverse bounds on the bench-mark dose.
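The inversion step described in this abstract is mechanically simple once an upper confidence bound on risk is available as a function of dose: the bench-mark dose lower limit is the smallest dose at which that bound reaches the target risk. A sketch with a hypothetical, monotone upper-bound function (the real bound would come from the paper's simultaneous band):

```python
def bmd_lower_limit(risk_upper, target, d_max, tol=1e-8):
    """Lower confidence limit on the bench-mark dose.

    Inverts an upper confidence bound on risk by bisection: returns the
    smallest dose in [0, d_max] at which risk_upper reaches the target
    risk.  risk_upper must be nondecreasing in dose.
    """
    lo, hi = 0.0, d_max
    if risk_upper(hi) < target:
        return d_max  # the bound never reaches the target on [0, d_max]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if risk_upper(mid) >= target:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical bound: a linear point estimate 0.02*d plus a margin
# standing in for the simultaneous-band half-width.
def upper(d):
    return 0.02 * d + 0.01 * d ** 0.5

bmdl = bmd_lower_limit(upper, target=0.10, d_max=10.0)
```

Here the point estimate alone would put the bench-mark dose at 5.0 (where 0.02*d = 0.10), while inverting the upper bound gives a smaller, more protective dose of 4.0; using a simultaneous band, as the paper does, makes this inversion valid for every risk level at once.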