1.
A conformance proportion is an important and useful index to assess industrial quality improvement. Statistical confidence limits for a conformance proportion are usually required not only to perform statistical significance tests, but also to provide useful information for determining practical significance. In this article, we propose approaches for constructing statistical confidence limits for a conformance proportion of multiple quality characteristics. Under the assumption that the variables of interest are distributed with a multivariate normal distribution, we develop an approach based on the concept of a fiducial generalized pivotal quantity (FGPQ). Without any distribution assumption on the variables, we apply some confidence interval construction methods for the conformance proportion by treating it as the probability of a success in a binomial distribution. The performance of the proposed methods is evaluated through detailed simulation studies. The results reveal that the simulated coverage probability (cp) for the FGPQ-based method is generally larger than the claimed value. On the other hand, one of the binomial distribution-based methods, that is, the standard method suggested in classical textbooks, appears to have smaller simulated cps than the nominal level. Two alternatives to the standard method are found to maintain their simulated cps sufficiently close to the claimed level, and hence their performances are judged to be satisfactory. In addition, three examples are given to illustrate the application of the proposed methods.
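The binomial-based limits mentioned above can be illustrated with a short sketch. The abstract does not name which alternatives to the standard textbook (Wald) interval were evaluated, so the Wilson score interval below is only one common candidate, and the conformance count of 470 out of 500 is a made-up example.

```python
import numpy as np
from scipy.stats import norm

def wald_interval(x, n, conf=0.95):
    """Standard textbook (Wald) interval for a binomial proportion."""
    z = norm.ppf(1 - (1 - conf) / 2)
    p = x / n
    half = z * np.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def wilson_interval(x, n, conf=0.95):
    """Wilson score interval, a common alternative with better coverage."""
    z = norm.ppf(1 - (1 - conf) / 2)
    p = x / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return max(0.0, centre - half), min(1.0, centre + half)

# Hypothetical sample: 470 of 500 inspected items conform to specification.
print(wald_interval(470, 500))
print(wilson_interval(470, 500))
```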
2.
ABSTRACT

The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have an enormous impact on what treatments get to patients, when they get them and how they are used. The statistical framework for supporting decisions in regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of “no treatment effect” are done routinely, and the p-value < 0.05 is often the determinant of what constitutes a “successful” trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence points blame at the deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared “unsuccessful” due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a “prior” for Phase 3 trials so that synthesized evidence across trials can be utilized to compute probability statements that are valuable for understanding the magnitude of treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
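A minimal sketch of the kind of probability statement advocated here, using a conjugate normal-normal model in which a prior built from earlier-trial evidence is combined with a Phase 3 estimate. All numbers are hypothetical and the normal approximation is an assumption for illustration, not the authors' specific construction.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical inputs: earlier-trial evidence supplies the prior, the Phase 3
# trial supplies the likelihood (normal approximation throughout).
prior_mean, prior_se = 0.30, 0.15      # synthesized evidence from earlier trials
trial_effect, trial_se = 0.25, 0.10    # observed Phase 3 effect and standard error

# Conjugate normal-normal update (precision-weighted average).
prior_prec, trial_prec = 1 / prior_se**2, 1 / trial_se**2
post_prec = prior_prec + trial_prec
post_mean = (prior_prec * prior_mean + trial_prec * trial_effect) / post_prec
post_se = np.sqrt(1 / post_prec)

# Probability statements about the magnitude of the treatment effect.
print("P(effect > 0)   =", 1 - norm.cdf(0.0, post_mean, post_se))
print("P(effect > 0.2) =", 1 - norm.cdf(0.2, post_mean, post_se))
```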
3.
Modern analytical models for anti-monopoly laws are a core element of the application of those laws. Since the Anti-Monopoly Law of the People’s Republic of China was promulgated in 2008, law enforcement and judicial authorities have applied different analytical models, leading to divergent legal and regulatory outcomes as similar cases receive different verdicts. To select a suitable analytical model for China’s Anti-Monopoly Law, we need to consider the possible contribution of both economic analysis and legal formalism and to learn from the mature systems and experience of foreign countries. It is also necessary to take into account such binding constraints as the current composition of China’s anti-monopoly legal system, the ability of implementing agencies and the supply of economic analysis, in order to ensure complementarity between the analytical model chosen and the complexity of economic analysis and between the professionalism of implementing agencies and the cost of compliance for participants in economic activities. In terms of institutional design, the models should provide a considered explanation of the legislative aims of the law’s provisions. It is necessary, therefore, to establish a processing model of behavioral classification that is based on China’s national conditions, applies analytical models using normative comprehensive analysis, makes use of the distribution rule of burden of proof, improves supporting systems related to analytical models and enhances the ability of public authorities to implement the law.
4.
Abstract

Confidence sets, p values, maximum likelihood estimates, and other results of non-Bayesian statistical methods may be adjusted to favor sampling distributions that are simple compared to others in the parametric family. The adjustments are derived from a prior likelihood function previously used to adjust posterior distributions.
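As a rough illustration of adjusting a non-Bayesian estimate with a prior likelihood that favors a simpler sampling distribution, the toy sketch below shrinks the maximum likelihood estimate of a normal mean toward the simpler value mu = 0 by maximizing the likelihood multiplied by a prior likelihood. This is an assumed, simplified analogue of the general idea, not the paper's construction; the data and the prior-likelihood scale tau are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Hypothetical data from a normal model with known unit variance.
rng = np.random.default_rng(0)
data = rng.normal(loc=0.4, scale=1.0, size=30)

def neg_adjusted_loglik(mu, tau=1.0):
    loglik = norm.logpdf(data, loc=mu, scale=1.0).sum()
    # "Prior likelihood" favoring the simpler submodel mu = 0.
    prior_loglik = norm.logpdf(mu, loc=0.0, scale=tau)
    return -(loglik + prior_loglik)

mle = data.mean()
adjusted = minimize_scalar(neg_adjusted_loglik, bounds=(-2, 2), method="bounded").x
print(f"MLE = {mle:.3f}, adjusted estimate = {adjusted:.3f}")
```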
5.
From the perspective of cultivating mathematical awareness, this paper discusses how mathematics education can foster scientific ways of thinking and help people form sound habits of thought.
6.
Summary. We model daily catches of fishing boats in the Grand Bank fishing grounds. We use data on catches per species for a number of vessels collected by the European Union in the context of the Northwest Atlantic Fisheries Organization. Many variables can be thought to influence the amount caught: a number of ship characteristics (such as the size of the ship, the fishing technique used and the mesh size of the nets) are obvious candidates, but one can also consider the season or the actual location of the catch. Our database leads to 28 possible regressors (arising from six continuous variables and four categorical variables, whose 22 levels are treated separately), resulting in a set of 177 million possible linear regression models for the log-catch. Zero observations are modelled separately through a probit model. Inference is based on Bayesian model averaging, using a Markov chain Monte Carlo approach. Particular attention is paid to the prediction of catches for single and aggregated ships.
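A brute-force sketch of Bayesian model averaging over linear models for a log-catch, using BIC weights as an approximation to posterior model probabilities. The paper itself handles roughly 177 million models with MCMC and a separate probit part for zero catches; the regressor names and data below are invented, and exhaustive enumeration is only feasible here because the toy example has four candidate regressors.

```python
import numpy as np
from itertools import combinations

# Hypothetical data: four candidate regressors for a log-catch.
rng = np.random.default_rng(1)
n, names = 200, ["ship_size", "mesh_size", "season", "latitude"]
X = rng.normal(size=(n, len(names)))
y = 1.0 + 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=n)

def bic(Xsub, y):
    """BIC of a Gaussian linear model with an intercept."""
    Xd = np.column_stack([np.ones(len(y)), Xsub])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    sigma2 = resid @ resid / len(y)
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + Xd.shape[1] * np.log(len(y))

# Enumerate all subsets of regressors and weight them by exp(-BIC/2).
models, bics = [], []
for k in range(len(names) + 1):
    for subset in combinations(range(len(names)), k):
        models.append(subset)
        bics.append(bic(X[:, list(subset)], y))

weights = np.exp(-0.5 * (np.array(bics) - min(bics)))
weights /= weights.sum()

# Posterior inclusion probability of each regressor under the averaging.
for j, name in enumerate(names):
    pip = sum(w for m, w in zip(models, weights) if j in m)
    print(f"P({name} in model | data) ~ {pip:.2f}")
```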
7.
Empirical applications of poverty measurement often have to deal with a stochastic weighting variable such as household size. Within the framework of a bivariate distribution function defined over income and weight, I derive the limiting distributions of the decomposable poverty measures and of the ordinates of stochastic dominance curves. The poverty line is allowed to depend on the income distribution. It is shown how the results can be used to test hypotheses concerning changes in poverty. The inference procedures are briefly illustrated using Belgian data. An erratum to this article can be found at
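The decomposable measures referred to here include the Foster-Greer-Thorbecke (FGT) family. The sketch below computes a weighted FGT index with household size as the stochastic weight and applies a simple normal-approximation test for a change in poverty between two samples. The data are simulated and the variance formula is a crude linearization assumed for illustration, not the limiting distribution derived in the article.

```python
import numpy as np

def fgt(income, weight, z, alpha=1.0):
    """Weighted FGT(alpha) poverty measure and a crude linearized standard error."""
    gap = np.clip((z - income) / z, 0.0, None) ** alpha * (income < z)
    w = weight / weight.sum()
    est = np.sum(w * gap)
    var = np.sum(w**2 * (gap - est) ** 2)   # rough approximation, not the paper's result
    return est, np.sqrt(var)

# Hypothetical incomes and household sizes for two periods.
rng = np.random.default_rng(2)
y1, w1 = rng.lognormal(10.00, 0.6, 2000), rng.integers(1, 6, 2000).astype(float)
y2, w2 = rng.lognormal(10.05, 0.6, 2000), rng.integers(1, 6, 2000).astype(float)
z = 0.6 * np.median(y1)                     # poverty line depending on the income distribution

p1, se1 = fgt(y1, w1, z)
p2, se2 = fgt(y2, w2, z)
t = (p2 - p1) / np.hypot(se1, se2)
print(f"P1={p1:.4f}, P2={p2:.4f}, z-statistic for change = {t:.2f}")
```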
8.
Understanding utterances whose pragmatic meaning operates at the semantic level is a major difficulty in college English teaching. To strengthen this weak link, we should start from the theories of “conversational implicature” and “relevance,” reveal the patterns and features of how communicative intention is conveyed in discourse, and use concrete examples to analyze the inferential obstacles that may arise in listening comprehension and their causes.
9.
Summary. We develop a general methodology for tilting time series data. Attention is focused on a large class of regression problems, where errors are expressed through autoregressive processes. The class has a range of important applications and in the context of our work may be used to illustrate the application of tilting methods to interval estimation in regression, robust statistical inference and estimation subject to constraints. The method can be viewed as 'empirical likelihood with nuisance parameters'.
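Tilting in its simplest form re-weights the observations so that a constraint holds. The sketch below shows empirical-likelihood weights for a mean, assuming independent data; the regression setting with autoregressive errors and the nuisance-parameter treatment of the paper are not reproduced here.

```python
import numpy as np
from scipy.optimize import brentq

def el_weights(x, mu0):
    """Weights p_i maximizing sum(log p_i) subject to sum(p_i * (x_i - mu0)) = 0."""
    d = x - mu0
    def score(lam):                  # derivative of the EL objective in lambda
        return np.sum(d / (1 + lam * d))
    lo = (-1 + 1e-8) / d.max()       # keep 1 + lam * d_i > 0 for all i
    hi = (-1 + 1e-8) / d.min()
    lam = brentq(score, lo, hi)
    return 1 / (len(x) * (1 + lam * d))

# Hypothetical sample; tilt the weights so the weighted mean equals 0.
rng = np.random.default_rng(3)
x = rng.normal(0.3, 1.0, 50)
p = el_weights(x, mu0=0.0)
elr = -2 * np.sum(np.log(len(x) * p))    # EL ratio statistic, approx. chi-square(1)
print("tilted weights sum to", round(p.sum(), 6), " ELR =", round(elr, 3))
```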
10.
Low dose risk estimation via simultaneous statistical inferences
Summary. The paper develops and studies simultaneous confidence bounds that are useful for making low dose inferences in quantitative risk analysis. Application is intended for risk assessment studies where human, animal or ecological data are used to set safe low dose levels of a toxic agent, but where study information is limited to high dose levels of the agent. Methods are derived for estimating simultaneous, one-sided, upper confidence limits on risk for end points measured on a continuous scale. From the simultaneous confidence bounds, lower confidence limits on the dose that is associated with a particular risk (often referred to as a bench-mark dose) are calculated. An important feature of the simultaneous construction is that any inferences that are based on inverting the simultaneous confidence bounds apply automatically to inverse bounds on the bench-mark dose.
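A toy sketch of how a simultaneous upper confidence band can be inverted into a lower confidence limit on a benchmark dose (BMDL), using a linear dose-response and a Working-Hotelling (Scheffe-type) band. The paper's bounds target risk for continuous endpoints rather than the mean response; the linear model, the benchmark response of 0.3 and the dose grid are all assumptions made for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical dose-response data on a continuous endpoint.
rng = np.random.default_rng(4)
dose = np.repeat([0.0, 0.5, 1.0, 2.0, 4.0], 10)
resp = 1.0 + 0.4 * dose + rng.normal(scale=0.3, size=dose.size)

# Ordinary least squares fit of a linear dose-response.
X = np.column_stack([np.ones_like(dose), dose])
beta, *_ = np.linalg.lstsq(X, resp, rcond=None)
resid = resp - X @ beta
s2 = resid @ resid / (len(resp) - 2)
XtX_inv = np.linalg.inv(X.T @ X)

def upper_band(d, conf=0.95):
    """Working-Hotelling simultaneous upper limit for the mean response at dose d."""
    x0 = np.array([1.0, d])
    se = np.sqrt(s2 * x0 @ XtX_inv @ x0)
    w = np.sqrt(2 * stats.f.ppf(conf, 2, len(resp) - 2))
    return x0 @ beta + w * se

# Benchmark response: a mean increase of 0.3 over the fitted control mean.
# Inverting the upper band gives a lower confidence limit on the benchmark dose.
bmr_level = beta[0] + 0.3
grid = np.linspace(0, 4, 401)
bmdl = grid[np.argmax([upper_band(d) >= bmr_level for d in grid])]
print("estimated BMD :", round(0.3 / beta[1], 3))
print("BMDL (lower confidence limit on the benchmark dose):", round(bmdl, 3))
```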