A total of 1,494 results were found; items 801-810 are listed below.
801.
Evidence-based quantitative methodologies have been proposed to inform decision-making in drug development, such as metrics for go/no-go decisions or predictions of success, defined in terms of the statistical significance of future clinical trials. While these methodologies appropriately address some critical questions about the potential of a drug, they either consider past evidence without predicting the outcome of future trials or focus only on efficacy, failing to account for the multifaceted aspects of successful drug development. Because quantitative benefit-risk assessments could enhance decision-making, we propose a more comprehensive approach using a composite definition of success based not only on the statistical significance of the treatment effect on the primary endpoint but also on its clinical relevance and on a favorable benefit-risk balance in the next pivotal studies. For a given drug, several development strategies can thus be studied before the pivotal trials start by comparing their predictive probabilities of success. The predictions are based on the available evidence from previous trials, to which new hypotheses on the future development can be added. The resulting predictive probability of composite success provides a useful summary to support the discussions of the decision-makers. We present a fictitious but realistic example in major depressive disorder inspired by a real decision-making case.
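To make the idea concrete, below is a minimal Monte Carlo sketch of a predictive probability of composite success, assuming a normal approximation for the treatment effect and a simplified safety criterion; the earlier-trial estimate, sample size, clinical-relevance threshold, and benefit-risk rule are hypothetical placeholders, not the authors' actual model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2024)

# Hypothetical evidence from a completed earlier trial (placeholder numbers):
# estimated treatment effect on the primary endpoint and its standard error.
effect_hat, se_hat = 2.5, 1.2

# Design of the planned pivotal trial (placeholder values).
n_per_arm = 150
sigma = 8.0                                  # assumed outcome standard deviation
se_future = sigma * np.sqrt(2.0 / n_per_arm)

# Composite success criteria (illustrative thresholds).
alpha = 0.025                                # one-sided significance level
z_crit = norm.ppf(1 - alpha)
clin_rel = 2.0                               # minimally clinically relevant effect
p_ae_ctrl, ae_excess_trt, max_ae_excess = 0.10, 0.03, 0.05  # toy benefit-risk rule

n_sim = 100_000
# 1) Draw the "true" effect from an approximate normal posterior given the earlier trial.
true_effect = rng.normal(effect_hat, se_hat, n_sim)
# 2) Simulate the future trial's estimate around each drawn true effect.
future_est = rng.normal(true_effect, se_future)
# 3) Simulate a toy safety comparison (observed adverse-event rates per arm).
ae_trt = rng.binomial(n_per_arm, p_ae_ctrl + ae_excess_trt, n_sim) / n_per_arm
ae_ctl = rng.binomial(n_per_arm, p_ae_ctrl, n_sim) / n_per_arm

significant = future_est / se_future > z_crit    # statistical significance
relevant = future_est > clin_rel                 # clinical relevance
favourable = ae_trt - ae_ctl < max_ae_excess     # favourable benefit-risk balance

print(f"Predictive probability of composite success: "
      f"{np.mean(significant & relevant & favourable):.3f}")
```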
802.
Mixed-effects models for repeated measures (MMRM) analyses using the Kenward-Roger method for adjusting standard errors and degrees of freedom with an "unstructured" (UN) covariance structure are increasingly common as primary analyses for group comparisons in longitudinal clinical trials. We evaluate the performance of MMRM-UN analysis using the Kenward-Roger method when the outcome variance differs between treatment groups. In addition, we provide alternative approaches for valid inferences within the MMRM analysis framework. Two simulations are conducted for cases with (1) unequal variance but equal correlation between the treatment groups and (2) unequal variance and unequal correlation between the groups. Our results from the first simulation indicate that MMRM-UN analysis using the Kenward-Roger method based on a common covariance matrix for the groups yields notably poor coverage probability (CP) for confidence intervals of the treatment effect when both the variance and the sample size differ between the groups. In addition, even when the randomization ratio is 1:1, the CP falls seriously below the nominal confidence level if the treatment group with a large dropout proportion also has the larger variance. MMRM analysis with the Mancl and DeRouen covariance estimator performs relatively better than the traditional MMRM-UN analysis. In the second simulation, the traditional MMRM-UN analysis leads to a biased estimate of the treatment effect and notably poor CP, whereas MMRM analysis fitting separate UN covariance structures for each group provides an unbiased estimate of the treatment effect and an acceptable CP. Although frequently seen in applications, we do not recommend MMRM-UN analysis using the Kenward-Roger method based on a common covariance matrix for the treatment groups when heteroscedasticity between the groups is apparent in incomplete longitudinal data.
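The underlying heteroscedasticity issue can be illustrated outside the MMRM framework. The sketch below is not an MMRM-UN/Kenward-Roger analysis (which requires dedicated mixed-model software) but a toy cross-sectional analogue with hypothetical sample sizes and standard deviations, showing how a common-variance interval undercovers when the smaller group has the larger variance, while a separate-variance (Welch-type) interval does not.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Unequal variances and unequal sample sizes between arms (illustrative values):
# the smaller arm has the larger standard deviation.
n1, n2 = 30, 90
sd1, sd2 = 4.0, 1.0
true_diff = 0.0
n_sim, level = 20_000, 0.95

cover_pooled = cover_welch = 0
for _ in range(n_sim):
    x = rng.normal(0.0, sd1, n1)
    y = rng.normal(true_diff, sd2, n2)
    diff = y.mean() - x.mean()

    # Pooled-variance interval (analogue of fitting one common covariance to both groups).
    sp2 = ((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1)) / (n1 + n2 - 2)
    se_p = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    t_p = stats.t.ppf(0.5 + level / 2, n1 + n2 - 2)
    cover_pooled += abs(diff - true_diff) <= t_p * se_p

    # Welch interval (analogue of allowing separate covariances per group).
    v1, v2 = x.var(ddof=1) / n1, y.var(ddof=1) / n2
    se_w = np.sqrt(v1 + v2)
    df_w = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    t_w = stats.t.ppf(0.5 + level / 2, df_w)
    cover_welch += abs(diff - true_diff) <= t_w * se_w

print(f"Coverage, common-variance interval:    {cover_pooled / n_sim:.3f}")
print(f"Coverage, separate-variance (Welch):   {cover_welch / n_sim:.3f}")
```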
803.
The problems of estimating the mean and an upper percentile of a lognormal population with nonnegative values are considered. For estimating the mean of such a population based on data that include zeros, a simple confidence interval (CI) is proposed, obtained by modifying Tian's [Inferences on the mean of zero-inflated lognormal data: the generalized variable approach. Stat Med. 2005;24:3223-3232] generalized CI. A fiducial upper confidence limit (UCL) and a closed-form approximate UCL for an upper percentile are developed. Our simulation studies indicate that the proposed methods are very satisfactory in terms of coverage probability and precision, and better than existing methods at maintaining balanced tail error rates. The proposed CI and UCL are simple and easy to calculate. All the methods considered are illustrated using samples of data involving airborne chlorine concentrations and data on diagnostic test costs.
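A minimal sketch of a simulation-based, generalized-pivot style interval for the mean of zero-inflated lognormal data is shown below; the Jeffreys-style beta draw for the zero proportion is one of several possible choices and not necessarily the modification proposed in the paper, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

def zi_lognormal_mean_ci(x, level=0.95, n_draws=50_000, rng=rng):
    """Generalized-pivot CI for the mean of zero-inflated lognormal data (sketch)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    pos = x[x > 0]
    n1, n0 = pos.size, n - pos.size
    y = np.log(pos)
    ybar, s2 = y.mean(), y.var(ddof=1)

    # Fiducial draws for the zero proportion (Jeffreys-style beta; one of several options).
    g_delta = rng.beta(n0 + 0.5, n1 + 0.5, n_draws)
    # Generalized pivots for the lognormal parameters.
    g_sigma2 = (n1 - 1) * s2 / rng.chisquare(n1 - 1, n_draws)
    g_mu = ybar - rng.standard_normal(n_draws) * np.sqrt(g_sigma2 / n1)
    # Pivot for the overall mean E[X] = (1 - delta) * exp(mu + sigma^2 / 2).
    g_mean = (1.0 - g_delta) * np.exp(g_mu + g_sigma2 / 2.0)

    lo, hi = np.quantile(g_mean, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

# Example with synthetic data: about 20% zeros, lognormal(mu=1, sigma=0.8) otherwise.
n = 80
zeros = rng.random(n) < 0.2
x = np.where(zeros, 0.0, rng.lognormal(mean=1.0, sigma=0.8, size=n))
print(zi_lognormal_mean_ci(x))
```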
804.
This R package implements three types of goodness-of-fit tests for some widely used probability distributions with unknown parameters, namely tests based on data transformations, tests based on the ratio of two estimators of a dispersion parameter, and correlation tests. Most of the considered tests have been proved to be powerful against a wide range of alternatives, and some new ones are proposed here. The package's functionality is illustrated with several examples using data sets from environmental studies, biology, and finance, among other areas.
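As a rough illustration of the correlation-test idea (in Python rather than the R package itself), the sketch below computes a probability-plot correlation statistic for normality and calibrates it by Monte Carlo, since location and scale are estimated from the data; the plotting positions and simulation settings are illustrative choices, not the package's implementation.

```python
import numpy as np
from scipy import stats

def ppcc_normal(x):
    """Correlation between ordered data and standard normal plotting-position quantiles."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    # Blom-type plotting positions approximating the expected normal order statistics.
    q = stats.norm.ppf((np.arange(1, n + 1) - 0.375) / (n + 0.25))
    return np.corrcoef(x, q)[0, 1]

def ppcc_normal_test(x, n_mc=5_000, seed=0):
    """Monte Carlo p-value: small correlations indicate departure from normality.

    The statistic is location-scale invariant, so its null distribution is obtained
    by simulating standard normal samples of the same size.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    r_obs = ppcc_normal(x)
    r_null = np.array([ppcc_normal(rng.standard_normal(n)) for _ in range(n_mc)])
    p_value = (1 + np.sum(r_null <= r_obs)) / (n_mc + 1)
    return r_obs, p_value

# Example: exponential data should be flagged as non-normal.
rng = np.random.default_rng(42)
print(ppcc_normal_test(rng.exponential(size=60)))
```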
805.
When comparing two experimental treatments with a placebo, we focus our attention on interval estimation of the proportion ratio (PR) of patient responses under a three-period crossover design. We propose a random effects exponential multiplicative risk model and derive asymptotic interval estimators in closed form for the PR between treatments and placebo. Using Monte Carlo simulations, we compare the performance of these interval estimators in a variety of situations. We use the data comparing two different doses of an analgesic with placebo for the relief of primary dysmenorrhea to illustrate the use of these interval estimators and the difference in estimates of the PR and odds ratio (OR) when the underlying relief rates are not small.
806.
Guogen Shan, Statistics, 2018, 52(5): 1086-1095
In addition to a point estimate of the probability of response in a two-stage design (e.g. Simon's two-stage design for binary endpoints), confidence limits should be computed and reported. The current method of inverting the p-value function to compute the confidence interval does not guarantee the nominal coverage probability in a two-stage setting. The existing exact approach to calculating one-sided limits orders the sample space by the overall number of responses. This approach can be conservative because many sample points share the same limits. We propose a new exact one-sided interval based on a p-value ordering of the sample space. Exact intervals are computed using binomial distributions directly, instead of a normal approximation. Both exact intervals preserve the nominal confidence level. The proposed exact interval based on the p-value ordering generally performs better than the other exact interval with regard to expected length and simple average length of confidence intervals.
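For orientation, the sketch below implements the existing exact lower limit that orders the sample space by the overall number of responses (the approach described above as potentially conservative), not the proposed p-value ordering; the Simon design parameters are illustrative placeholders.

```python
import numpy as np
from scipy import stats, optimize

def prob_total_ge(p, t, n1, r1, n2):
    """Pr(total responses >= t) under a Simon two-stage design with futility bound r1.

    The trial stops after stage 1 if X1 <= r1 (total = X1); otherwise n2 further
    patients are enrolled and the total is X1 + X2.
    """
    prob = 0.0
    for x1 in range(n1 + 1):
        px1 = stats.binom.pmf(x1, n1, p)
        if x1 <= r1:
            prob += px1 * (x1 >= t)
        else:
            prob += px1 * stats.binom.sf(t - x1 - 1, n2, p)   # Pr(X2 >= t - x1)
    return prob

def exact_lower_limit(t_obs, n1, r1, n2, alpha=0.05):
    """One-sided exact lower limit with the sample space ordered by total responses."""
    if t_obs == 0:
        return 0.0
    f = lambda p: prob_total_ge(p, t_obs, n1, r1, n2) - alpha
    return optimize.brentq(f, 1e-10, 1 - 1e-10)

# Illustrative Simon design: n1 = 10, stop for futility if <= 1 response, n2 = 19.
print(exact_lower_limit(t_obs=8, n1=10, r1=1, n2=19, alpha=0.05))
```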
807.
Diagnostic plots for determining the max domains of attraction of power-normalized partial maxima are proposed. A test to ascertain the claim that a data distribution belongs to a max domain of attraction under power normalization is given. The performance of this test is demonstrated using data simulated from many well-known distributions. Furthermore, two real-world datasets are analysed using the proposed procedure.
808.
The problem of estimating the unknown response function of a time-invariant continuous linear system is considered. The integral sample input-output cross-correlogram is taken as an estimator of the response function. The inputs are assumed to be zero-mean stationary Gaussian processes. A criterion on the shape of the impulse response function is given. For this purpose, we apply the theory of square-Gaussian random processes and estimate the probability that the supremum of a square-Gaussian process exceeds a level specified by some function.
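A discrete-time sketch of the cross-correlogram idea is given below for the special case of white Gaussian input, where the input-output cross-correlation is proportional to the impulse response; it illustrates the estimator only, not the paper's continuous-time setting or its square-Gaussian criterion, and the system and noise level are made up.

```python
import numpy as np

rng = np.random.default_rng(3)

# True impulse response of a toy discrete-time linear system (hypothetical example).
h_true = np.exp(-0.3 * np.arange(30)) * np.sin(0.5 * np.arange(30))

# Zero-mean white Gaussian input; the output is the convolution plus measurement noise.
n = 100_000
x = rng.standard_normal(n)
y = np.convolve(x, h_true)[:n] + 0.1 * rng.standard_normal(n)

# Sample cross-correlogram estimator: for white input with variance sigma^2,
# R_yx(tau) = sigma^2 * h(tau), so h_hat(tau) = sum_t y[t+tau] x[t] / (N * sigma_hat^2).
max_lag = 30
sigma2_hat = x.var()
h_hat = np.array([
    np.dot(y[tau:], x[:n - tau]) / ((n - tau) * sigma2_hat)
    for tau in range(max_lag)
])

print("max abs estimation error:", np.max(np.abs(h_hat - h_true[:max_lag])))
```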
809.
The conjunction fallacy occurs whenever probability compounds are thought to be more likely than their component probabilities alone. In the experiment we present, subjects chose between simple and compound lotteries after some practice. Depending on the condition, they were given more or less information about the nature of probability compounds. The conjunction fallacy was surprisingly robust. There was, however, a puzzling dissociation between verbal and behavioral learning: verbal responses were sensitive, but actual choices entirely insensitive, to the amount of verbal instruction provided. This might reflect a dichotomy between implicit and explicit learning. Caution must be exercised in generalizing from what people say to what people do.
810.
The widely observed preference for lotteries involving precise rather than vague or ambiguous probabilities is called ambiguity aversion. Ambiguity aversion cannot be predicted or explained by conventional expected utility models. For the subjectively weighted linear utility (SWLU) model, we define both probability and payoff premiums for ambiguity, and introduce a local ambiguity aversion function a(u) that is proportional to these ambiguity premiums for small uncertainties. We show that one individual's ambiguity premiums are globally larger than another's if and only if his a(u) function is everywhere larger. Ambiguity aversion has been observed to increase (1) when the mean probability of gain increases and (2) when the mean probability of loss decreases. We show that such behavior is equivalent to a(u) increasing in both the gain and loss domains. Increasing ambiguity aversion also explains the observed excess of sellers' over buyers' prices for insurance against an ambiguous probability of loss.