91.
ABSTRACT

In a test of significance, it is common practice to report the p-value as one way of summarizing the incompatibility between a set of data and a model for the data constructed under a set of assumptions together with a null hypothesis. However, the p-value has flaws: one concerns its general definition for two-sided tests, and a related, more serious logical one is its incoherence when interpreted as a statistical measure of evidence for its null hypothesis. We address these two issues in this article.
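The definitional trouble with two-sided p-values that the abstract alludes to is easy to demonstrate numerically. The following sketch (our own illustration, not an example from the paper) contrasts a one-sided tail probability with the common "double the smaller tail" two-sided convention under an asymmetric null distribution, where the two quantities can no longer be reconciled by symmetry.

```python
from scipy import stats

# Illustrative sketch (ours, not the paper's): suppose the test statistic T
# follows a chi-square(3) law under H0 and we observe t = 7.0. The null
# distribution is asymmetric, so "two-sided" needs a convention.
null = stats.chi2(df=3)
t_obs = 7.0

# One-sided (upper-tail) p-value.
p_upper = null.sf(t_obs)

# A common two-sided definition: double the smaller tail probability.
p_double = 2 * min(null.cdf(t_obs), null.sf(t_obs))
```

For a symmetric null the doubled value coincides with P(|T| >= |t|); for a skewed null it is a convention among several, which is precisely the definitional ambiguity at issue.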
92.
In this paper, a new criterion is constructed for testing hypotheses about the covariance function of a Gaussian stationary stochastic process with an unknown mean. The criterion rests on the fact that the deviation of the covariance function from its estimator can be bounded with a given accuracy and reliability in the Lp metric.
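The basic quantity behind such a criterion can be sketched as follows (our notation and toy setup, not the paper's construction): estimate the autocovariance of a stationary series with the mean itself estimated, then measure the deviation from a hypothesized covariance function in an L^p metric over a grid of lags.

```python
import numpy as np

# Hedged sketch: sample autocovariance vs. hypothesized covariance in L^p.
rng = np.random.default_rng(0)
n, max_lag, p = 2000, 20, 2

# AR(1) with coefficient phi has covariance R(h) = sigma^2 phi^|h| / (1 - phi^2).
phi, sigma = 0.5, 1.0
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(scale=sigma)

xc = x - x.mean()  # the unknown mean is estimated, as in the paper's setting
r_hat = np.array([xc[:n - h] @ xc[h:] / n for h in range(max_lag + 1)])
r_null = sigma**2 * phi**np.arange(max_lag + 1) / (1 - phi**2)

# L^p deviation between estimator and hypothesized covariance over lags 0..max_lag
lp_dev = np.sum(np.abs(r_hat - r_null) ** p) ** (1 / p)
```

A test would reject when this deviation exceeds a threshold calibrated to the desired accuracy and reliability.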
93.
We consider a regression analysis of longitudinal data in the presence of outcome‐dependent observation times and informative censoring. Existing approaches commonly require a correct specification of the joint distribution of longitudinal measurements, the observation time process, and informative censoring time under the joint modeling framework and can be computationally cumbersome due to the complex form of the likelihood function. In view of these issues, we propose a semiparametric joint regression model and construct a composite likelihood function based on a conditional order statistics argument. As a major feature of our proposed methods, the aforementioned joint distribution is not required to be specified, and the random effect in the proposed joint model is treated as a nuisance parameter. Consequently, the derived composite likelihood bypasses the need to integrate over the random effect and offers the advantage of easy computation. We show that the resulting estimators are consistent and asymptotically normal. We use simulation studies to evaluate the finite‐sample performance of the proposed method and apply it to a study of weight-loss data that motivated our investigation.
94.
In cancer diagnosis studies, high‐throughput gene profiling has been extensively conducted, searching for genes whose expressions may serve as markers. Data generated from such studies have the ‘large d, small n’ feature, with the number of genes profiled much larger than the sample size. Penalization has been extensively adopted for simultaneous estimation and marker selection. Because of small sample sizes, markers identified from the analysis of single data sets can be unsatisfactory. A cost‐effective remedy is to conduct integrative analysis of multiple heterogeneous data sets. In this article, we investigate composite penalization methods for estimation and marker selection in integrative analysis. The proposed methods use the minimax concave penalty (MCP) as the outer penalty. Under the homogeneity model, the ridge penalty is adopted as the inner penalty. Under the heterogeneity model, the Lasso penalty and MCP are adopted as the inner penalty. Effective computational algorithms based on coordinate descent are developed. Numerical studies, including simulation and analysis of practical cancer data sets, show satisfactory performance of the proposed methods.
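The MCP used as the outer penalty above has a simple closed form, and its univariate thresholding rule is the building block of coordinate-descent algorithms of the kind the abstract mentions. A minimal sketch (parameter names lam and gam are ours; this is the standard MCP, not the paper's composite construction):

```python
import numpy as np

def mcp(t, lam, gam):
    # Minimax concave penalty: lam*|t| - t^2/(2*gam) for |t| <= gam*lam,
    # constant gam*lam^2/2 beyond that (so large coefficients are not shrunk).
    t = np.abs(t)
    return np.where(t <= gam * lam, lam * t - t**2 / (2 * gam), gam * lam**2 / 2)

def mcp_threshold(z, lam, gam):
    # Minimizer of 0.5*(z - b)^2 + mcp(b, lam, gam); requires gam > 1.
    # This is the coordinate-wise update in a coordinate-descent loop.
    if abs(z) <= gam * lam:
        return np.sign(z) * max(abs(z) - lam, 0.0) / (1 - 1 / gam)
    return z  # beyond gam*lam the estimate is left unpenalized (near-unbiasedness)
```

Soft-thresholding inflated by 1/(1 - 1/gam) inside the concave region, and no shrinkage outside it, is what gives MCP its near-unbiasedness for large signals.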
95.
This study investigates the empirical evidence on the effects of unanticipated changes in nominal money on real output in 47 countries when viewed through a window (i.e., likelihood function) that assumes the neutrality of anticipated changes. Using a Bayesian predictivist approach, it provides a pedagogical Bayesian analysis of generated regressor models in the face of specification uncertainty involving, among other things, multiple unit roots and trend stationary alternatives.
96.
This paper develops a novel weighted composite quantile regression (CQR) method for estimation of a linear model when some covariates are missing at random and the missingness mechanism can be modelled parametrically. By incorporating the unbiased estimating equations of incomplete data into empirical likelihood (EL), we obtain the EL-based weights, and then re-adjust the inverse probability weighted CQR for estimating the vector of regression coefficients. Theoretical results show that the proposed method can achieve semiparametric efficiency if the selection probability function is correctly specified; the EL-weighted CQR is therefore more efficient than the inverse probability weighted CQR. Moreover, the algorithm is computationally simple and easy to implement. Simulation studies are conducted to examine the finite sample performance of the proposed procedures. Finally, we apply the new method to analyse the US News college data.
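The inverse-probability-weighted CQR criterion that the paper re-adjusts can be sketched as follows. This is our simplification: the selection probability is taken as known and constant, whereas the paper estimates it parametrically and replaces the plain IPW weights with EL-based ones.

```python
import numpy as np
from scipy import stats

def check_loss(u, tau):
    # Quantile check loss rho_tau(u) = u * (tau - 1{u < 0}).
    return u * (tau - (u < 0))

def ipw_cqr_objective(beta, intercepts, taus, x, y, observed, pi):
    # IPW-weighted CQR criterion: observed cases get weight 1/pi_i; the check
    # losses across quantile levels share one slope but have level-specific
    # intercepts.
    w = observed / pi
    return sum(np.sum(w * check_loss(y - b0 - beta * x, tau))
               for b0, tau in zip(intercepts, taus))

# Toy data: y = 1 + 2x + N(0,1), responses missing completely at random.
rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
pi = np.full(n, 0.8)                   # known selection probability (assumption)
observed = rng.uniform(size=n) < pi

taus = np.array([0.25, 0.5, 0.75])
intercepts = 1.0 + stats.norm.ppf(taus)  # true tau-quantile intercepts

obj_true = ipw_cqr_objective(2.0, intercepts, taus, x, y, observed, pi)
obj_wrong = ipw_cqr_objective(0.0, intercepts, taus, x, y, observed, pi)
```

Minimizing this criterion over the slope and intercepts (e.g. by linear programming) gives the IPW-CQR estimator; at the true slope the criterion is markedly smaller than at a misspecified one.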
97.
In this paper, multivariate data with missing observations, where missing values could be by chance or by design, are considered for various models including the growth curve model. The likelihood equations are derived and the consistency of the estimates is established. The likelihood ratio tests are explicitly derived.
98.
Fuzzy composition operators are the core data-processing technique in fuzzy evaluation. At present their use in fuzzy evaluation is rather haphazard, with no general method. One reason lies in deficiencies of Zadeh's fuzzy set theory; another is that the properties of the operators have not been explored deeply enough, so methods for applying different fuzzy operators in fuzzy evaluation need to be studied. The Hamacher operator is a parametric operator: for the parameter γ in [1, +∞) it is multiply monotone in both the parameter and the data variables, its crisp domain does not change with the parameter, and that crisp domain is the smallest. Since the data play a crucial role in fuzzy evaluation, this paper analyses the crisp domains of fuzzy AND operators, determines the membership relation between data points and the crisp domain, and proposes an application method for the Hamacher operator based on the crisp domain and the data distribution; a worked example verifies that the method is reasonable and effective.
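The parametric Hamacher t-norm discussed above has a short closed form; the sketch below (standard textbook definition, not the paper's application method) also illustrates the point about the crisp domain: on crisp inputs 0 and 1 the output is the same for every γ.

```python
def hamacher_t(a, b, gamma):
    # Hamacher t-norm with parameter gamma >= 0 on membership degrees a, b in [0, 1];
    # gamma = 1 recovers the ordinary product a * b.
    if a == 0.0 and b == 0.0:
        return 0.0  # avoid 0/0 at the corner
    return (a * b) / (gamma + (1.0 - gamma) * (a + b - a * b))
```

Note that hamacher_t(a, 1, gamma) = a and hamacher_t(a, 0, gamma) = 0 for every gamma, so crisp (0/1) data points are mapped identically regardless of the parameter; for gamma in [1, +∞) the output is also monotone decreasing in gamma, consistent with the multiple monotonicity the abstract describes.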
99.
Recent research on finding appropriate composite endpoints for preclinical Alzheimer's disease has focused considerable effort on finding “optimized” weights in the construction of a weighted composite score. In this paper, several proposed methods are reviewed. Our results indicate no evidence that these methods will increase the power of the test statistics, and some of these weights will introduce biases to the study. Our recommendation is to focus on identifying more sensitive items from clinical practice and appropriate statistical analyses of a large Alzheimer's data set. Once a set of items has been selected, there is no evidence that adding weights will generate more sensitive composite endpoints.
100.
This article considers multiple hypotheses testing with control of the generalized familywise error rate k-FWER, the probability of at least k false rejections. We first assume the p-values corresponding to the true null hypotheses are independent and propose an adaptive generalized Bonferroni procedure with k-FWER control based on an estimate of the number of true null hypotheses. Then, assuming the p-values are dependent and satisfy block dependence, we propose an adaptive procedure with k-FWER control. Extensive simulations compare the performance of the adaptive procedures under different estimators.
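The non-adaptive building block of such procedures is the generalized Bonferroni rule: reject H_i when p_i <= k*alpha/m. A minimal sketch (our illustration; the paper's adaptive versions replace m by an estimate of the number of true nulls):

```python
def generalized_bonferroni(pvals, k, alpha, m0=None):
    # Generalized Bonferroni: reject H_i when p_i <= k * alpha / m, which
    # controls k-FWER = P(at least k false rejections) at level alpha.
    # Passing m0 (an estimate of the number of true nulls) in place of m
    # gives the adaptive variant discussed in the abstract.
    m = m0 if m0 is not None else len(pvals)
    thresh = k * alpha / m
    return [p <= thresh for p in pvals]
```

With k = 1 this reduces to the ordinary Bonferroni correction; larger k trades a few tolerated false rejections for a more generous rejection threshold.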