181.
Bayesian methods are increasingly used in proof-of-concept studies. An important benefit of these methods is the potential to use informative priors, thereby reducing sample size. This is particularly relevant for treatment arms where there is a substantial amount of historical information, such as placebo and active comparators. One issue with using an informative prior is the possibility of a mismatch between the informative prior and the observed data, referred to as prior-data conflict. We focus on two methods for dealing with this: a testing approach and a mixture prior approach. The testing approach assesses prior-data conflict by comparing the observed data to the prior predictive distribution and resorts to a non-informative prior if prior-data conflict is declared. The mixture prior approach uses a prior with a precise component and a diffuse component. We assess these approaches for the normal case via simulation and show that they have some attractive features compared with the standard one-component informative prior. For example, when the discrepancy between the prior and the data is sufficiently marked that one intuitively feels less certain about the results, both the testing and mixture approaches typically yield wider posterior credible intervals than when there is no discrepancy. In contrast, when there is no discrepancy, the results of these approaches are typically similar to those of the standard approach. While for any specific study the operating characteristics of any selected approach should be assessed and agreed at the design stage, we believe these two approaches are each worthy of consideration. Copyright © 2015 John Wiley & Sons, Ltd.
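The mixture prior mechanics are easiest to see for a normal mean with known sampling variance, where the posterior remains a two-component normal mixture and the component weights are updated by how well each component predicts the observed mean. The sketch below is a minimal illustration under that conjugate-normal simplification; the function name, prior settings and known sampling standard deviation are assumptions for the example, not details taken from the paper.

```python
import numpy as np
from scipy import stats

def mixture_posterior(ybar, n, sigma, components):
    """Posterior for a normal mean under a two-component (robust) mixture prior.

    components: list of (weight, prior_mean, prior_sd) tuples.
    Returns updated (weight, posterior_mean, posterior_sd) per component.
    Assumes the sampling sd `sigma` is known (illustrative simplification).
    """
    posts, marg = [], []
    for w, m0, s0 in components:
        # prior predictive density of ybar under this component drives the weight update
        marg.append(w * stats.norm.pdf(ybar, m0, np.sqrt(s0**2 + sigma**2 / n)))
        post_var = 1.0 / (1.0 / s0**2 + n / sigma**2)
        post_mean = post_var * (m0 / s0**2 + n * ybar / sigma**2)
        posts.append((post_mean, np.sqrt(post_var)))
    wts = np.array(marg) / np.sum(marg)  # updated mixture weights
    return [(w, m, s) for w, (m, s) in zip(wts, posts)]

# informative component centred at a historical mean, plus a diffuse "robustifying" component
prior = [(0.8, 0.0, 0.5), (0.2, 0.0, 10.0)]
print(mixture_posterior(ybar=1.5, n=20, sigma=2.0, components=prior))
```

When the observed mean conflicts with the informative component, its predictive density drops and the posterior weight shifts toward the diffuse component, which is what widens the credible interval.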
182.
Clinical trials are often designed to compare continuous non-normal outcomes. The conventional statistical method for such a comparison is the non-parametric Mann–Whitney test, which provides a P-value for testing the hypothesis that the distributions of the two treatment groups are identical, but does not provide a simple and straightforward estimate of the treatment effect. For that, Hodges and Lehmann proposed estimating the shift parameter between two populations and its confidence interval (CI). However, such a shift parameter does not have a straightforward interpretation, and its CI contains zero in some cases where the Mann–Whitney test produces a significant result. To overcome these problems, we introduce the use of the win ratio for analysing such data. Patients in the new and control treatments are formed into all possible pairs. For each pair, the new-treatment patient is labelled a 'winner' or a 'loser' if it is known who had the more favourable outcome. The win ratio is the total number of winners divided by the total number of losers. A 95% CI for the win ratio can be obtained using the bootstrap method. Statistical properties of the win ratio statistic are investigated using two real trial data sets and six simulation studies. Results show that the win ratio method has about the same power as the Mann–Whitney method. We recommend the use of the win ratio method for estimating the treatment effect (and its CI) and the Mann–Whitney method for calculating the P-value when comparing continuous non-normal outcomes and the number of tied pairs is small. Copyright © 2016 John Wiley & Sons, Ltd.
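As described, the point estimate only requires counting winning and losing between-group pairs, and the interval comes from resampling. The sketch below is a minimal illustration assuming that larger outcomes are more favourable and that the two arms are resampled independently in a percentile bootstrap; the simulated lognormal data and the function names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2024)

def win_ratio(new, control):
    """Win ratio over all between-group pairs; tied pairs count as neither win nor loss.
    Assumes larger outcomes are more favourable (illustrative convention)."""
    diff = np.subtract.outer(np.asarray(new), np.asarray(control))
    wins, losses = np.sum(diff > 0), np.sum(diff < 0)
    return wins / losses

def bootstrap_ci(new, control, n_boot=2000, alpha=0.05):
    """Percentile bootstrap CI, resampling each arm independently."""
    stats_ = [win_ratio(rng.choice(new, len(new), replace=True),
                        rng.choice(control, len(control), replace=True))
              for _ in range(n_boot)]
    return np.quantile(stats_, [alpha / 2, 1 - alpha / 2])

new = rng.lognormal(0.3, 1.0, 60)       # skewed, non-normal outcomes
control = rng.lognormal(0.0, 1.0, 60)
print(win_ratio(new, control), bootstrap_ci(new, control))
```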
183.
We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distributions of the standard deviation estimator and of the final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials and derive the adjusted significance level that ensures type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
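The re-estimation step itself is simple: pool the interim data without unblinding, take the one-sample variance, and plug it into a standard sample size formula. The sketch below illustrates only that step, using the usual normal-approximation formula for a two-sample comparison; the assumed effect size, α and power are invented, and it does not reproduce the exact t-statistic distribution or the adjusted significance level derived in the paper.

```python
import numpy as np
from scipy import stats

def reestimated_n(pooled_interim, delta, alpha=0.025, power=0.9, n_min=None):
    """Blinded sample size re-estimation from an internal pilot study.

    Uses the simple one-sample variance of the pooled (blinded) interim data
    and the normal-approximation formula for a two-sample comparison.
    `delta` is the assumed treatment effect; a minimal illustrative sketch.
    """
    s2 = np.var(pooled_interim, ddof=1)                     # one-sample variance estimator
    z_a, z_b = stats.norm.ppf(1 - alpha), stats.norm.ppf(power)
    n_per_arm = int(np.ceil(2 * s2 * (z_a + z_b) ** 2 / delta ** 2))
    return max(n_per_arm, n_min) if n_min else n_per_arm

rng = np.random.default_rng(7)
pilot = rng.normal(0.0, 1.3, size=40)   # blinded pooled interim data, both arms mixed
print(reestimated_n(pilot, delta=0.5))
```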
184.
In recent years, immunological science has evolved, and cancer vaccines are now approved and available for treating existing cancers. Because cancer vaccines require time to elicit an immune response, a delayed treatment effect is expected and is in fact observed in drug approval studies. Accordingly, we propose evaluating survival endpoints by weighted log-rank tests with the Fleming–Harrington class of weights. We consider group sequential monitoring, which allows early stopping for efficacy, and determine a semiparametric information fraction for the Fleming–Harrington family of weights, which is necessary for the error spending function. Moreover, we give a flexible survival model for cancer vaccine studies that accounts not only for the delayed treatment effect but also for long-term survivors. In a Monte Carlo simulation study, we illustrate that when the primary analysis is a weighted log-rank test emphasizing late differences, the proposed information fraction can be a useful alternative to the surrogate information fraction, which is proportional to the number of events. Copyright © 2016 John Wiley & Sons, Ltd.
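A Fleming–Harrington G(ρ, γ) weighted log-rank test down-weights early event times when ρ = 0 and γ > 0, which is what makes it suitable for a delayed effect. The sketch below is a minimal, self-contained version of such a test statistic (pooled Kaplan–Meier weights, standard hypergeometric variance, no group sequential machinery); the simulated exponential data are purely illustrative, and nothing here reproduces the paper's information-fraction calculation.

```python
import numpy as np

def fh_weighted_logrank(time, event, group, rho=0.0, gamma=1.0):
    """Fleming-Harrington G(rho, gamma) weighted log-rank Z statistic.

    rho=0, gamma>0 emphasises late differences. Weights use the pooled
    left-continuous Kaplan-Meier estimate S(t-); a minimal sketch.
    """
    time, event, group = map(np.asarray, (time, event, group))
    order = np.argsort(time)
    time, event, group = time[order], event[order], group[order]

    surv = 1.0                      # pooled Kaplan-Meier, S(t-)
    num = den = 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n_j, n1_j = at_risk.sum(), (at_risk & (group == 1)).sum()
        d_j = ((time == t) & (event == 1)).sum()
        d1_j = ((time == t) & (event == 1) & (group == 1)).sum()
        w = surv ** rho * (1 - surv) ** gamma
        num += w * (d1_j - d_j * n1_j / n_j)
        if n_j > 1:
            den += w**2 * d_j * (n1_j / n_j) * (1 - n1_j / n_j) * (n_j - d_j) / (n_j - 1)
        surv *= 1 - d_j / n_j       # update pooled KM after using S(t-)
    return num / np.sqrt(den)       # approximately N(0, 1) under the null

rng = np.random.default_rng(1)
time = np.concatenate([rng.exponential(1.0, 100), rng.exponential(1.3, 100)])
event = np.ones(200, int)
group = np.repeat([0, 1], 100)
print(fh_weighted_logrank(time, event, group, rho=0, gamma=1))
```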
185.
To address the problem in reverse engineering that inaccurate extraction of segmentation points during cross-section reconstruction leads to unsatisfactory reconstruction results, a method for accurately extracting segmentation points is proposed. A Gauss map is constructed from the normal-vector information of the cross-section data to localize each segmentation point within an interval, and the interval endpoints are used to divide the cross-section data into distinct feature segments. Straight-line features are reconstructed first, and free-form features are then reconstructed subject to constraints; during the reconstruction of the free-form features, a dynamic search model is established and quadratic interpolation is used to search dynamically for the precise segmentation points. Experiments show that the method improves the accuracy of both segmentation-point extraction and cross-section reconstruction, and the reconstructed results faithfully recover the original design intent of the product model.
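The step most amenable to a small illustration is the coarse segmentation: on the Gauss map, points belonging to a straight-line feature share an almost constant normal direction, whereas free-form spans do not. The sketch below only labels line-like versus free-form spans from normals estimated along an ordered 2D profile; the angle tolerance, window size and test profile are invented, and the paper's dynamic quadratic-interpolation search for the exact segmentation points is not reproduced.

```python
import numpy as np

def gauss_map_segments(points, angle_tol=np.deg2rad(3), window=5):
    """Coarse segmentation of ordered cross-section points via their Gauss map.

    Normals are estimated from neighbouring points; spans where the normal
    direction is nearly constant are labelled as line-feature candidates,
    the remaining spans as free-form candidates. Thresholds are illustrative.
    """
    pts = np.asarray(points, float)
    tangents = np.gradient(pts, axis=0)
    angles = np.arctan2(-tangents[:, 0], tangents[:, 1])   # normal direction per point
    # local spread of the normal direction; small spread => straight-line span
    spread = np.array([np.ptp(angles[max(0, i - window):i + window + 1])
                       for i in range(len(angles))])
    return spread < angle_tol        # True where the point lies on a line-like span

# toy profile: a straight segment followed by a circular arc
theta = np.linspace(0, np.pi / 2, 50)
profile = np.vstack([np.column_stack([np.linspace(0, 1, 50), np.zeros(50)]),
                     np.column_stack([1 + np.sin(theta), 1 - np.cos(theta)])])
print(gauss_map_segments(profile).astype(int))
```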
186.
Pseudo-values have proven very useful in censored data analysis in complex settings such as multi-state models. They were originally suggested by Andersen et al., Biometrika, 90, 2003, 335, who also suggested estimating standard errors using classical generalized estimating equation results. These results were studied more formally by Graw et al., Lifetime Data Anal., 15, 2009, 241, who derived some key results based on a second-order von Mises expansion. However, results concerning the large sample properties of estimates based on regression models for pseudo-values still seem unclear. In this paper, we study these large sample properties in the simple setting of survival probabilities and show that the estimating function can be written as a U-statistic of second order, giving rise to an additional term that does not vanish asymptotically. We further show that previously advocated standard error estimates will typically be too large, although in many practical applications the difference will be of minor importance. We show how to correctly estimate the variability of the estimator. This is further studied in some simulation studies.
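For the survival probability at a fixed time point, the pseudo-observation for subject i is the usual jackknife quantity n·Ŝ(t₀) − (n − 1)·Ŝ^(−i)(t₀), based on the Kaplan–Meier estimator. The sketch below computes these pseudo-values directly; the toy data and the simplistic censoring mechanism are assumptions for the example, and the subsequent GEE regression and the variance corrections discussed in the paper are not shown.

```python
import numpy as np

def km_surv(time, event, t0):
    """Kaplan-Meier estimate of the survival probability S(t0)."""
    s = 1.0
    for t in np.unique(time[(event == 1) & (time <= t0)]):
        n_at_risk = np.sum(time >= t)
        d = np.sum((time == t) & (event == 1))
        s *= 1 - d / n_at_risk
    return s

def pseudo_values(time, event, t0):
    """Jackknife pseudo-observations for S(t0):
    theta_i = n * S_hat(t0) - (n - 1) * S_hat^(-i)(t0).
    These can then be used as outcomes in a GEE-type regression."""
    time, event = np.asarray(time), np.asarray(event)
    n = len(time)
    full = km_surv(time, event, t0)
    loo = np.array([km_surv(np.delete(time, i), np.delete(event, i), t0)
                    for i in range(n)])
    return n * full - (n - 1) * loo

rng = np.random.default_rng(3)
time = rng.exponential(2.0, 80)
event = (rng.uniform(size=80) < 0.8).astype(int)   # ~20% censored (crude, illustrative)
print(pseudo_values(time, event, t0=1.0)[:5])
```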
187.
Linear increments (LI) are used to analyse repeated outcome data with missing values. Previously, two LI methods have been proposed: one allowing non-monotone missingness but not independent measurement error, and one allowing independent measurement error but only monotone missingness. In both, it was suggested that the expected increment could depend on the current outcome. We show that LI can allow non-monotone missingness and either independent measurement error of unknown variance or dependence of the expected increment on the current outcome, but not both. A popular alternative to LI is a multivariate normal model that ignores the missingness pattern. This gives consistent estimation when data are normally distributed and missing at random (MAR). We clarify the relation between MAR and the assumptions of LI and show that, for continuous outcomes, multivariate normal estimators are also consistent under (non-MAR and non-normal) assumptions not much stronger than those of LI. Moreover, when missingness is non-monotone, they are typically more efficient.
188.
The estimation of abundance from presence–absence data is an intriguing problem in applied statistics. The classical Poisson model makes strong independence and homogeneity assumptions and in practice generally underestimates the true abundance. A controversial ad hoc method based on negative-binomial counts (Am. Nat.) has been empirically successful but lacks theoretical justification. We first present an alternative estimator of abundance, based on a paired negative binomial model, that is consistent and asymptotically normally distributed. A quadruple negative binomial extension is also developed, which yields the previous ad hoc approach and resolves the controversy in the literature. We examine the performance of the estimators in a simulation study and estimate the abundance of 44 tree species in a permanent forest plot.
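The classical Poisson baseline the abstract refers to follows from P(occupied) = 1 − exp(−λ) for a homogeneous Poisson count per quadrat, so the occupancy rate alone gives λ̂ = −log(1 − p̂). The sketch below implements only that baseline and shows its downward bias on simulated clumped (negative-binomial) counts; it is not the paired or quadruple negative-binomial estimator proposed in the paper, and the simulation settings are invented.

```python
import numpy as np

def poisson_abundance(presence):
    """Classical Poisson estimator of total abundance from presence-absence data.

    Under a homogeneous Poisson model, P(occupied) = 1 - exp(-lambda), so
    lambda_hat = -log(1 - p_hat). As noted above, this typically underestimates
    the true abundance when counts are clumped.
    """
    p_hat = np.mean(presence)            # observed occupancy rate
    lam_hat = -np.log(1.0 - p_hat)       # estimated mean count per quadrat
    return lam_hat * len(presence)       # estimated total abundance

rng = np.random.default_rng(5)
true_counts = rng.negative_binomial(n=0.5, p=0.2, size=500)   # clumped counts
presence = (true_counts > 0).astype(int)
print(true_counts.sum(), poisson_abundance(presence))         # Poisson fit underestimates
```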
189.
A problem with using a non-convex penalty for sparse regression is that the penalized sum of squared residuals has multiple local minima, and it is not known which one is a good estimator. The aim of this paper is to give a guide to designing a non-convex penalty that has the strong oracle property. Here, the strong oracle property means that the oracle estimator is the unique local minimum of the objective function. We summarize three definitions of the oracle property: the global, weak and strong oracle properties. We then give sufficient conditions for the weak oracle property, which means that the oracle estimator is a local minimum. We give an example of non-convex penalties that possess the weak oracle property but not the strong oracle property. Finally, we give a necessary condition for the strong oracle property.
190.
The paper proposes a new test for detecting the umbrella pattern under a general non-parametric scheme. The alternative asserts that the umbrella ordering holds, while the null hypothesis is its complement. The main focus is on controlling the power function of the test outside the alternative. As a result, the asymptotic error of the first kind of the constructed solution is smaller than or equal to the fixed significance level α on the whole set where the umbrella ordering does not hold. Under finite sample sizes, this error is also controlled to a satisfactory extent. A simulation study shows, among other things, that the new test improves upon the solution widely recommended in the literature on the subject. A routine, written in R, is attached as the Supporting Information file.