21.
Adaptive estimation of parameters of some failure-time distributions is considered. A new procedure, named the F-procedure, was developed by Pandey et al. (1991) for selecting an appropriate model out of two possible models. Applying this F-procedure, adaptive estimators of the parameters of the exponential, Weibull, inverse Gaussian (IG), and Wald failure-time distributions are proposed in this paper. These estimators are compared with the MLEs of the respective parameters and with some previous adaptive estimators via Monte Carlo simulation.
22.
To reduce the dimensionality of regression problems, sliced inverse regression approaches make it possible to determine linear combinations of a set of explanatory variables X related to the response variable Y in a general semiparametric regression context. From a practical point of view, determining a suitable dimension (the number of linear combinations of X) is important. In the literature, statistical tests based on the nullity of some eigenvalues have been proposed. Another approach is to consider the quality of the estimation of the effective dimension reduction (EDR) space. The square trace correlation between the true EDR space and its estimate can be used as a measure of goodness of estimation. In this article, we focus on the SIRα method and propose a naïve bootstrap estimate of the square trace correlation criterion. Moreover, this criterion can also be used to select the α parameter in the SIRα method. We indicate how it can be used in practice. A simulation study illustrates the behavior of this approach.
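The square trace correlation criterion itself is straightforward to compute once two candidate EDR bases are in hand. A minimal numpy sketch (function and variable names hypothetical, not the authors' bootstrap procedure): with projectors P1 and P2 onto the two column spaces, the criterion is trace(P1 P2)/d, equal to 1 exactly when the spans coincide.

```python
import numpy as np

def proj(B):
    # Orthogonal projector onto the column space of B.
    Q, _ = np.linalg.qr(B)
    return Q @ Q.T

def square_trace_corr(B1, B2):
    # R^2(B1, B2) = trace(P1 @ P2) / d; equals 1 iff the two spans coincide.
    d = B1.shape[1]
    return np.trace(proj(B1) @ proj(B2)) / d

# Identical spans give the maximal value 1; orthogonal directions give 0.
B = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
print(square_trace_corr(B, B))  # equals 1 when the spans coincide
```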
23.
Four methods of approximating confidence limits for the single negative binomial parameter, P, are outlined and an empirical study is presented. Some remarks on prediction intervals are also included.
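The abstract does not spell out the four approximations. As a hypothetical illustration of the simplest kind of candidate, a Wald-type interval for P: if x failures are observed before the r-th success, the MLE is p̂ = r/(r+x) with approximate variance p̂²(1−p̂)/r (delta method, all names assumed).

```python
import math

def nb_wald_ci(r, x, z=1.959964):
    # Wald interval for p when x failures occur before the r-th success.
    p_hat = r / (r + x)
    se = math.sqrt(p_hat ** 2 * (1.0 - p_hat) / r)  # delta-method std. error
    return max(0.0, p_hat - z * se), min(1.0, p_hat + z * se)

lo, hi = nb_wald_ci(r=10, x=30)  # point estimate p_hat = 0.25
```

Exact and score-based intervals generally behave better for small r; this is only the cheapest baseline.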
24.
Process regression methodology is underdeveloped relative to the frequency with which pertinent data arise. In this article, the response is a binary indicator process representing the joint event of being alive and remaining in a specific state. The process is indexed by time (e.g., time since diagnosis) and observed continuously. Data of this sort occur frequently in the study of chronic disease. A general area of application involves a recurrent event with non-negligible duration (e.g., hospitalization and associated length of hospital stay) and subject to a terminating event (e.g., death). We propose a semiparametric multiplicative model for the process version of the probability of being alive and in the (transient) state of interest. Under the proposed methods, the regression parameter is estimated through a procedure that does not require estimating the baseline probability. Unlike the majority of process regression methods, the proposed methods accommodate multiple sources of censoring. In particular, we derive a computationally convenient variant of inverse probability of censoring weighting based on the additive hazards model. We show that the regression parameter estimator is asymptotically normal, and that the baseline probability function estimator converges to a Gaussian process. Simulations demonstrate that our estimators have good finite sample performance. We apply our method to national end-stage liver disease data. The Canadian Journal of Statistics 48: 222–237; 2020 © 2019 Statistical Society of Canada
25.
A common approach taken in high-dimensional regression analysis is sliced inverse regression, which separates the range of the response variable into non-overlapping regions, called 'slices'. Asymptotic results are usually shown assuming that the slices are fixed, while in practice, estimators are computed with random slices containing the same number of observations. Based on empirical process theory, we present a unified theoretical framework to study these techniques, and revisit popular inverse regression estimators. Furthermore, we introduce a bootstrap methodology that reproduces the laws of Cramér–von Mises test statistics used to assess model dimension, the effects of specified covariates, and whether or not a sliced inverse regression estimator is appropriate. Finally, we investigate the accuracy of different bootstrap procedures by means of simulations.
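For readers unfamiliar with slicing, the classical SIR estimator with equal-count slices can be sketched in a few lines of numpy (names hypothetical; this is the textbook estimator, not the bootstrap methodology of the abstract): standardize X, average the standardized predictors within slices of the sorted response, and eigen-decompose the weighted covariance of those slice means.

```python
import numpy as np

def sir_directions(X, y, n_slices=5, d=1):
    # Classical sliced inverse regression: leading d EDR direction estimates.
    n, p = X.shape
    mu, S = X.mean(axis=0), np.cov(X, rowvar=False)
    # Whiten X via the inverse symmetric square root of its covariance.
    w, V = np.linalg.eigh(S)
    S_inv_half = V @ np.diag(w ** -0.5) @ V.T
    Z = (X - mu) @ S_inv_half
    # Equal-count slices along the ordered response.
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)  # weighted slice-mean covariance
    vals, vecs = np.linalg.eigh(M)
    # Leading eigenvectors, mapped back to the original X scale.
    return S_inv_half @ vecs[:, -d:][:, ::-1]
```

Tests based on the nullity of the trailing eigenvalues of M are the classical route to choosing d that the abstract contrasts with its bootstrap approach.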
26.
Computer models with functional output are omnipresent throughout science and engineering. Most often the computer model is treated as a black-box and information about the underlying mathematical model is not exploited in statistical analyses. Consequently, general-purpose bases such as wavelets are typically used to describe the main characteristics of the functional output. In this article we advocate for using information about the underlying mathematical model in order to choose a better basis for the functional output. To validate this choice, a simulation study is presented in the context of uncertainty analysis for a computer model from inverse Sturm-Liouville problems.
27.
Log‐normal linear regression models are popular in many fields of research. Bayesian estimation of the conditional mean of the dependent variable is problematic as many choices of the prior for the variance (on the log‐scale) lead to posterior distributions with no finite moments. We propose a generalized inverse Gaussian prior for this variance and derive the conditions on the prior parameters that yield posterior distributions of the conditional mean of the dependent variable with finite moments up to a pre‐specified order. The conditions depend on one of the three parameters of the suggested prior; the other two have an influence on inferences for small and medium sample sizes. A second goal of this paper is to discuss how to choose these parameters according to different criteria including the optimization of frequentist properties of posterior means.
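The finite-moment issue traces back to the target quantity itself: in a log-normal regression the conditional mean is exp(x'β + σ²/2), so posterior moments involve exp(kσ²/2) and can diverge if the prior on σ² has too heavy a right tail. A minimal numpy check of the underlying log-normal mean identity (parameter values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.5, 0.8                       # log-scale mean and std. deviation
y = np.exp(rng.normal(mu, sigma, size=200_000))

analytic = np.exp(mu + sigma ** 2 / 2.0)   # E[Y] for a log-normal variable
print(analytic, y.mean())                  # the two should nearly agree
```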
28.
29.
With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk under the assumption that the censoring distribution and the covariates are independent. Covariate‐dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate‐dependent censoring. We consider a covariate‐adjusted weight function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate‐adjusted weight estimator is essentially unbiased when the censoring time depends on the covariates, and that the covariate‐adjusted weight approach works well for the variance estimator as well. We illustrate our methods with bone marrow transplant data from the Center for International Blood and Marrow Transplant Research. Here, cancer relapse and death in complete remission are two competing risks.
30.
Inverse probability weighting (IPW) and multiple imputation are two widely adopted approaches for dealing with missing data. The former models the selection probability, and the latter models the data distribution. Consistent estimation requires correct specification of the corresponding models. Although the augmented IPW method provides an extra layer of protection on consistency, it is usually not sufficient in practice as the true data‐generating process is unknown. This paper proposes a method combining the two approaches in the spirit of calibration in the sampling survey literature. Multiple models for both the selection probability and the data distribution can be simultaneously accounted for, and the resulting estimator is consistent if any one model is correctly specified. The proposed method is within the framework of estimating equations and is general enough to cover regression analysis with missing outcomes and/or missing covariates. Both theoretical and numerical results are provided.
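As a hedged sketch of the IPW half of this combination (not the authors' calibration estimator; all names hypothetical), a Hájek-normalized IPW mean with a logistic selection model fitted by Newton-Raphson in plain numpy: observations are reweighted by the inverse of their estimated selection probabilities, which removes the bias of the complete-case mean when missingness depends on an observed covariate.

```python
import numpy as np

def fit_logistic(X, r, iters=25):
    # Newton-Raphson fit of a logistic selection model P(R = 1 | X).
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1 - p))[:, None])   # observed information
        beta += np.linalg.solve(H, X.T @ (r - p))
    return beta

def ipw_mean(y, X, r):
    # Hajek-normalized IPW estimate of E[Y] from observed cases (r == 1).
    p = 1.0 / (1.0 + np.exp(-X @ fit_logistic(X, r)))
    return np.sum(r * y / p) / np.sum(r / p)

rng = np.random.default_rng(2)
n = 20_000
x = rng.standard_normal(n)
y = 1.0 + x + rng.standard_normal(n)                    # true E[Y] = 1
r = rng.random(n) < 1.0 / (1.0 + np.exp(-(0.5 + x)))    # MAR: depends on x
X = np.column_stack([np.ones(n), x])
print(ipw_mean(y, X, r))   # near 1, unlike the biased complete-case mean
```

Replacing the single logistic model here with several candidate selection and outcome models, combined through calibration constraints, is the multiple-robustness idea the abstract describes.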