151.
For survival endpoints in subgroup selection, a score conversion model is often used to convert each patient's set of biomarkers into a univariate score, with the median of the scores dividing the patients into biomarker-positive and biomarker-negative subgroups. However, this can bias patient subgroup identification in two respects: (1) the treatment may be equally effective for all patients, with no subgroup difference, and (2) the median of the univariate scores may be an inappropriate cutoff when the sizes of the two subgroups differ substantially. We use a univariate composite score method to convert each patient's set of candidate biomarkers into a univariate response score. To address the first issue, we propose applying a likelihood ratio test (LRT) to assess the homogeneity of the sampled patients; in the context of identifying the subgroup of responders in an adaptive design intended to demonstrate improved treatment efficacy (adaptive power), we suggest carrying out subgroup selection only if the LRT is significant. For the second issue, we use a likelihood-based change-point algorithm to find an optimal cutoff. Our simulation study shows that the type I error is generally controlled, while performing the LRT sacrifices approximately 4.5% of the overall adaptive power to detect treatment effects for the simulation designs considered; furthermore, the change-point algorithm outperforms the median cutoff considerably when the subgroup sizes differ substantially.
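A minimal sketch (hypothetical code, not the authors' implementation) of the two devices described above, using a Gaussian working model for the univariate scores and ignoring the survival endpoint: a likelihood-based change-point scan for the cutoff, and an LRT comparing the best two-group split against a single homogeneous group. The null distribution of such a change-point LRT is nonstandard, so in practice its critical value would be calibrated by simulation.

```python
import numpy as np
from scipy import stats

def loglik_normal(x):
    # Gaussian log-likelihood evaluated at the sample MLE (mean, sd)
    mu, sd = x.mean(), x.std(ddof=0) + 1e-12
    return stats.norm.logpdf(x, mu, sd).sum()

def changepoint_cutoff(scores, min_frac=0.1):
    # Scan sorted scores; keep the split maximizing the two-piece likelihood
    s = np.sort(scores)
    n = len(s)
    best_cut, best_ll = None, -np.inf
    for k in range(int(n * min_frac), int(n * (1 - min_frac))):
        ll = loglik_normal(s[:k]) + loglik_normal(s[k:])
        if ll > best_ll:
            best_ll, best_cut = ll, (s[k - 1] + s[k]) / 2
    return best_cut, best_ll

def homogeneity_lrt(scores, min_frac=0.1):
    # LRT statistic: one homogeneous group vs. the best two-group split
    cut, ll_split = changepoint_cutoff(scores, min_frac)
    return cut, 2 * (ll_split - loglik_normal(scores))

rng = np.random.default_rng(0)
# Unequal subgroups (75% / 25%), where a median cutoff would misclassify many
scores = np.concatenate([rng.normal(0, 1, 150), rng.normal(2.5, 1, 50)])
cut, lrt = homogeneity_lrt(scores)
print(f"cutoff = {cut:.2f}  (median = {np.median(scores):.2f}),  LRT = {lrt:.1f}")
```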
152.
153.
This paper focuses on computing the Bayesian reliability of components whose performance characteristics (degradation: fatigue and cracks) are observed over a specified period of time. Depending on the nature of the degradation data collected, we fit a monotone increasing or decreasing function to the data. Since the components have different lifetimes, the rate of degradation is treated as a random variable. At a critical level of degradation, the time-to-failure distribution is obtained. The exponential and power degradation models are studied, and an exponential density is assumed for the random degradation rate. The maximum likelihood and Bayesian estimators of the parameter of this exponential density, the predictive distribution, a hierarchical Bayes approach, and the robustness of the posterior mean are presented. The Gibbs sampling algorithm is used to obtain the Bayesian estimates of the parameter. Illustrations are provided for train wheel degradation data.
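As a rough illustration (not the paper's code), the sketch below treats the conjugate special case: degradation rates θᵢ are Exponential(λ), λ receives a Gamma(a, b) prior, and the predictive failure-time distribution at a critical degradation level c follows from the power model D(t) = θ·t^β. The hyperparameters a, b, β, and c are assumptions made for illustration; the paper's hierarchical Bayes and Gibbs-sampling treatment reduces to closed forms in this case.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated per-wheel degradation rates theta_i ~ Exponential(rate lam_true)
lam_true = 0.5
theta = rng.exponential(1 / lam_true, size=30)

# MLE of lam for an Exponential(lam) sample: n / sum(theta)
lam_mle = len(theta) / theta.sum()

# Conjugate Bayes: lam ~ Gamma(a, b) prior -> Gamma(a + n, b + sum(theta)) posterior
a, b = 2.0, 2.0                                # assumed prior hyperparameters
a_post, b_post = a + len(theta), b + theta.sum()
lam_bayes = a_post / b_post                    # posterior mean of lam

# Failure time when power degradation D(t) = theta * t**beta crosses level c:
# T = (c / theta)**(1/beta); predictive draws come from posterior draws of lam
beta_exp, c = 1.2, 10.0                        # assumed shape and critical level
lam_draws = rng.gamma(a_post, 1 / b_post, size=5000)
theta_new = rng.exponential(1 / lam_draws)     # predictive degradation rates
t_fail = (c / theta_new) ** (1 / beta_exp)     # predictive failure times
print(f"lam MLE = {lam_mle:.3f}, Bayes = {lam_bayes:.3f}, "
      f"median predictive T = {np.median(t_fail):.2f}")
```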
154.
The estimation of mixtures of regression models is usually based on the assumption of normally distributed components, and maximum likelihood estimation of the normal components is sensitive to noise, outliers, and high-leverage points. Missing values are also inevitable in many situations, and parameter estimates can be biased if they are not handled properly. In this article, we propose mixtures of regression models for contaminated, incomplete heterogeneous data. The proposed models provide robust estimates of regression coefficients that vary across latent subgroups, even in the presence of missing values. The methodology is illustrated through simulation studies and a real data analysis.
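For orientation, here is a minimal EM fit of a plain two-component Gaussian mixture of regressions (hypothetical code). The paper's method adds robustness to contamination and a treatment of missing values, neither of which is shown in this sketch.

```python
import numpy as np

def em_mixreg(X, y, K=2, iters=200, seed=0):
    # EM for a K-component Gaussian mixture of linear regressions
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = rng.normal(size=(K, p))
    sigma = np.ones(K)
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: component responsibilities (constants cancel)
        resid = y[:, None] - X @ beta.T
        logd = -0.5 * (resid / sigma) ** 2 - np.log(sigma) + np.log(pi)
        logd -= logd.max(axis=1, keepdims=True)
        r = np.exp(logd)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per component
        for k in range(K):
            w = r[:, k]
            Xw = X * w[:, None]
            beta[k] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
            sigma[k] = max(np.sqrt((w * (y - X @ beta[k]) ** 2).sum() / w.sum()), 1e-6)
        pi = r.mean(axis=0)
    return beta, sigma, pi

rng = np.random.default_rng(2)
n = 400
X = np.column_stack([np.ones(n), rng.uniform(-2, 2, n)])
z = rng.random(n) < 0.5                       # latent subgroup labels
y = np.where(z, 1 + 2 * X[:, 1], -1 - X[:, 1]) + rng.normal(0, 0.4, n)
beta, sigma, pi = em_mixreg(X, y)
print(np.round(beta, 2), np.round(pi, 2))     # component order may be switched
```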
155.
This paper studies the likelihood ratio ordering of parallel systems under multiple-outlier models. We introduce a partial order, the so-called θ-order, and show that the θ-order between the parameter vectors of the parallel systems implies the likelihood ratio order between the systems.
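The θ-order itself is defined in the paper, but the likelihood ratio ordering of two parallel systems can be checked numerically: the order holds exactly when the ratio of the two lifetime densities is monotone. The sketch below (with assumed rate vectors, chosen as the well-known comparison of heterogeneous exponential rates against their common average) computes the density of a parallel system of independent exponentials and inspects the ratio on a grid.

```python
import numpy as np

def parallel_pdf(t, rates):
    # Density of max(X_1, ..., X_n) for independent X_i ~ Exp(rate_i), t > 0
    t = np.asarray(t, dtype=float)
    F = np.prod([1.0 - np.exp(-r * t) for r in rates], axis=0)  # CDF of the max
    h = sum(r * np.exp(-r * t) / (1.0 - np.exp(-r * t)) for r in rates)
    return F * h

t = np.linspace(0.05, 12, 500)
# Heterogeneous rates (1, 3) vs. their average (2, 2): the first system is
# larger in the likelihood ratio order, i.e. the density ratio is increasing.
ratio = parallel_pdf(t, [1.0, 3.0]) / parallel_pdf(t, [2.0, 2.0])
print("density ratio increasing on grid:", bool(np.all(np.diff(ratio) > 0)))
```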
156.
This article analyzes a growing group of fixed-T dynamic panel data estimators with a multifactor error structure. We use a unified notational approach to describe these estimators and discuss their properties in terms of deviations from an underlying set of basic assumptions. Furthermore, we consider the extendability of these estimators to practical situations that frequently arise, such as their ability to accommodate unbalanced panels and common observed factors. Using a large-scale simulation exercise, we consider scenarios that remain largely unexplored in the literature despite their great empirical relevance. In particular, we examine (i) the effect of the presence of weakly exogenous covariates, (ii) the effect of changing the magnitude of the correlation between the factor loadings of the dependent variable and those of the covariates, (iii) the impact of the number of moment conditions on the bias and size of GMM estimators, and (iv) the effect of sample size. We apply each of these estimators to a crime application using a panel data set of local government authorities in New South Wales, Australia; the results bear substantially different policy implications from those derived from standard dynamic panel GMM estimators. Our study may thus serve as a useful guide to practitioners who wish to allow for multiplicative sources of unobserved heterogeneity in their models.
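To see why a multifactor error structure matters for fixed-T dynamic panels, the following simulation (hypothetical, and not one of the surveyed estimators) applies the simple Anderson–Hsiao IV estimator to panels with and without a common factor in the errors; with a common factor, the lagged-level instrument is no longer valid and the autoregressive parameter is estimated with error that does not average out across units.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, rho = 500, 8, 0.5

def simulate(factor_scale):
    # Dynamic panel y_it = rho*y_{i,t-1} + lam_i*f_t + eps_it with a common factor
    f = rng.normal(size=T)                    # common factor realization
    lam = factor_scale * rng.normal(size=N)   # heterogeneous loadings
    y = np.zeros((N, T))
    y[:, 0] = rng.normal(size=N)
    for t in range(1, T):
        y[:, t] = rho * y[:, t - 1] + lam * f[t] + rng.normal(size=N)
    return y

def anderson_hsiao(y):
    # IV: regress dy_t on dy_{t-1}, instrumenting with the level y_{t-2}
    dy  = (y[:, 2:] - y[:, 1:-1]).ravel()
    dy1 = (y[:, 1:-1] - y[:, :-2]).ravel()
    z   = y[:, :-2].ravel()
    return (z @ dy) / (z @ dy1)

print("no factor  :", round(anderson_hsiao(simulate(0.0)), 3))  # close to rho = 0.5
print("with factor:", round(anderson_hsiao(simulate(1.0)), 3))  # generally off rho
```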
157.
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach for incorporating prior information, such as data from historical clinical trials, into the nuisance-parameter-based sample size re-estimation of a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. Frequentist methods are used for planning and analyzing the trial, while the external information on the variance is summarized by the Bayesian meta-analytic-predictive approach. To incorporate the external information into the sample size re-estimation, we propose updating the meta-analytic-predictive prior with the results of the internal pilot study and re-estimating the sample size using an estimator derived from the posterior. By means of a simulation study, we compare the operating characteristics, such as power and the sample size distribution, of the proposed procedure with those of the traditional sample size re-estimation approach based on the pooled variance estimator. The simulation study shows that, when no prior-data conflict is present, incorporating external information into the sample size re-estimation improves the operating characteristics relative to the traditional approach. In the case of a prior-data conflict, that is, when the variance in the ongoing clinical trial differs from the prior location, the traditional sample size re-estimation procedure generally performs better, even when the prior information is robustified. When considering whether to include prior information in sample size re-estimation, the potential gains should therefore be balanced against the risks.
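A stripped-down sketch of the idea (with assumed hyperparameters; a conjugate inverse-gamma prior stands in here for the meta-analytic-predictive prior, which in practice is a mixture fitted to the historical trials): the internal pilot updates the prior on σ², and the sample size is re-estimated from the posterior mean rather than from the pooled variance estimator alone.

```python
import numpy as np
from scipy import stats

def n_per_arm(sigma2, delta, alpha=0.05, power=0.8):
    # Two-sample z-approximation sample size for a mean difference delta
    q = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return int(np.ceil(2.0 * sigma2 * q ** 2 / delta ** 2))

def reestimate(pilot, delta, a0, b0):
    # Update the inverse-gamma(a0, b0) prior for sigma^2 with the pilot SSE
    m = len(pilot)
    a1 = a0 + (m - 1) / 2.0
    b1 = b0 + 0.5 * ((pilot - pilot.mean()) ** 2).sum()
    return n_per_arm(b1 / (a1 - 1.0), delta)   # posterior mean of sigma^2

rng = np.random.default_rng(4)
pilot = rng.normal(0.0, 2.0, size=40)   # internal pilot data, true sd = 2
a0, b0 = 6.0, 20.0                      # assumed prior: mean b0/(a0-1) = 4
print("posterior-based n per arm:", reestimate(pilot, delta=1.0, a0=a0, b0=b0))
print("pooled-variance n per arm:", n_per_arm(pilot.var(ddof=1), delta=1.0))
```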
158.
In this article, an integer-valued self-exciting threshold model with a finite range based on the binomial INARCH(1) model is proposed. Important stochastic properties are derived, and approaches for parameter estimation are discussed. A real-data example about the regional spread of public drunkenness in Pittsburgh demonstrates the applicability of the new model in comparison to existing models. Feasible modifications of the model are presented, which are designed to handle special features such as zero-inflation.
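A minimal simulator for a self-exciting threshold binomial INARCH(1) process (an illustrative parameterization, not necessarily the paper's notation): the conditional distribution is Binomial(N, π_t), and π_t follows one of two linear regimes in X_{t-1}/N depending on whether the previous count exceeded the threshold R.

```python
import numpy as np

def simulate_setbinarch(T, N=20, R=8, low=(0.10, 0.30), high=(0.40, 0.20), seed=5):
    # low/high are the (intercept, slope) pairs for the two regimes;
    # a + b < 1 in each regime keeps pi_t inside (0, 1)
    rng = np.random.default_rng(seed)
    x = np.zeros(T, dtype=int)
    x[0] = rng.binomial(N, 0.2)
    for t in range(1, T):
        a, b = low if x[t - 1] <= R else high      # self-exciting regime switch
        pi_t = np.clip(a + b * x[t - 1] / N, 1e-6, 1 - 1e-6)
        x[t] = rng.binomial(N, pi_t)
    return x

x = simulate_setbinarch(500)
print(x[:20], round(x.mean(), 2))
```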
159.
The Perron test, which is based on a Dickey–Fuller test regression, is a commonly employed approach to testing for a unit root in the presence of a structural break of unknown timing. In the case of an innovational outlier (IO), the Perron test tends to exhibit spurious rejections in finite samples when the break occurs under the null hypothesis. In the present paper, a new Perron-type IO unit root test is developed. Monte Carlo experiments show that the new test does not over-reject the null hypothesis; even for the case of a simultaneous level and slope break in trending data, the empirical size is close to its nominal level. The test distribution coincides with that for a known break date. Furthermore, the test identifies the true break date very accurately, even for small breaks. The Nelson–Plosser data set serves as an application.
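For context, here is a compact version of the classic Perron-type innovational-outlier regression with an unknown break date (level shift only, no lag augmentation; the paper's modified statistic is not reproduced). The break date is selected where the unit root t-statistic is most negative, which is the mechanism behind the finite-sample spurious rejections the paper addresses.

```python
import numpy as np

def perron_io_t(y):
    # Search break dates in the central 70% of the sample; return the minimal
    # t-statistic for alpha = 1 and the corresponding break date index
    T = len(y)
    best = (np.inf, None)
    for Tb in range(int(0.15 * T), int(0.85 * T)):
        DU = (np.arange(T) > Tb).astype(float)        # post-break intercept shift
        D  = (np.arange(T) == Tb + 1).astype(float)   # one-time break dummy
        X = np.column_stack([np.ones(T - 1), np.arange(1, T), DU[1:], D[1:], y[:-1]])
        coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
        resid = y[1:] - X @ coef
        s2 = resid @ resid / (len(resid) - X.shape[1])
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[-1, -1])
        t_alpha = (coef[-1] - 1.0) / se
        if t_alpha < best[0]:
            best = (t_alpha, Tb)
    return best

rng = np.random.default_rng(6)
y = np.cumsum(rng.normal(size=300))   # random walk: unit root under the null
print(perron_io_t(y))
```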
160.
The robustness of an extended version of Colton's decision-theoretic model is considered. The extended version includes the losses due to patients who are not entered in the experiment but require treatment while the experiment is in progress. Among the topics considered are the risk of using a sample size considerably smaller than the optimum, the use of an incorrect patient horizon, the application of a modified loss function, and the use of a two-point prior distribution. It is shown that the investigated model is robust with respect to all of these changes except for the use of the modified prior density.
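A simplified numerical version of the flavor of Colton's model (not the paper's extended version, and with assumed parameter values): 2n of the N patients in the horizon are randomized in the trial, the apparently better arm then treats the remaining N - 2n, and the expected loss counts patients who end up on the inferior treatment. The flat region around the optimum illustrates the robustness to under-sized trials noted above.

```python
import numpy as np
from scipy import stats

def expected_loss(n, N=10_000, delta=0.3, sigma=1.0):
    # P(trial picks the inferior arm) under a difference-of-means selection rule:
    # the observed mean difference is N(delta, 2*sigma^2/n)
    p_wrong = stats.norm.cdf(-delta * np.sqrt(n / 2.0) / sigma)
    # loss = patients on the inferior arm in the trial + misallocated survivors
    return n * delta + (N - 2 * n) * delta * p_wrong

n_grid = np.arange(5, 2000)
loss = np.array([expected_loss(n) for n in n_grid])
n_opt = n_grid[loss.argmin()]
print(f"optimal n per arm = {n_opt}, loss = {expected_loss(n_opt):.1f}, "
      f"loss at n/2 = {expected_loss(n_opt // 2):.1f}")   # note the flat optimum
```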