2,172 results found (search time: 0 ms)
101.
We give a critical synopsis of classical and recent tests for Poissonity, our emphasis being on procedures which are consistent against general alternatives. Two classes of weighted Cramér–von Mises-type test statistics, based on the empirical probability generating function process, are studied in more detail. Both of them generalize already known test statistics by introducing a weighting parameter, thus providing more flexibility with regard to power against specific alternatives. In both cases, we prove convergence in distribution of the statistics under the null hypothesis in the setting of a triangular array of rowwise independent and identically distributed random variables, as well as consistency of the corresponding test against general alternatives. Therefore, a sound theoretical basis is provided for the parametric bootstrap procedure, which is applied to obtain critical values in a large-scale simulation study. Each of the tests considered in this study, when implemented via the parametric bootstrap method, maintains the nominal level of significance very closely, even for small sample sizes. The procedures are applied to four well-known data sets.
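As a concrete illustration of how parametric-bootstrap critical values are obtained for this kind of test, the sketch below implements a generic weighted Cramér–von Mises-type distance between the empirical probability generating function and the Poisson PGF with estimated mean. The weight t**a and the grid approximation of the integral are assumptions made for illustration, not the specific statistics studied in the paper.

```python
import numpy as np

def pgf_cvm_statistic(x, a=1.0, grid=200):
    """Illustrative weighted Cramér-von Mises-type distance between the
    empirical PGF and the Poisson PGF with estimated mean (weight t**a)."""
    x = np.asarray(x)
    n = len(x)
    lam = x.mean()
    t = np.linspace(0.0, 1.0, grid)
    g_emp = np.mean(t[None, :] ** x[:, None], axis=0)   # empirical PGF at each t
    g_poi = np.exp(lam * (t - 1.0))                     # Poisson PGF at each t
    return n * np.mean((g_emp - g_poi) ** 2 * t ** a)   # Riemann approximation of the integral

def parametric_bootstrap_pvalue(x, B=500, a=1.0, seed=None):
    """Resample from Poisson(mean(x)) under the null to get a bootstrap p-value."""
    rng = np.random.default_rng(seed)
    t_obs = pgf_cvm_statistic(x, a)
    lam = np.asarray(x).mean()
    t_boot = np.array([pgf_cvm_statistic(rng.poisson(lam, size=len(x)), a)
                       for _ in range(B)])
    return (1 + np.sum(t_boot >= t_obs)) / (B + 1)
```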
102.
This paper concerns the geometric treatment of graphical models using Bayes linear methods. We introduce Bayes linear separation as a second-order generalised conditional independence relation, and Bayes linear graphical models are constructed using this property. A system of interpretive and diagnostic shadings is given, which summarises the analysis over the associated moral graph. Principles of local computation are outlined for the graphical models, and an algorithm for implementing such computation over the junction tree is described. The approach is illustrated with two examples. The first concerns sales forecasting using a multivariate dynamic linear model. The second concerns inference for the error variance matrices of the model for sales, and illustrates the generality of our geometric approach by treating the matrices directly as random objects. The examples are implemented using a freely available set of object-oriented programming tools for Bayes linear local computation and graphical diagnostic display.
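For readers unfamiliar with Bayes linear methods, the update underlying these graphical models is the standard second-order adjustment of expectations and variances. The minimal numpy sketch below shows that adjustment only; it is not the authors' local-computation algorithm or software, and all variable names are illustrative.

```python
import numpy as np

def bayes_linear_adjust(E_X, E_D, Var_X, Var_D, Cov_XD, d):
    """Second-order Bayes linear adjustment of a collection X by observed data D = d:
       adjusted expectation  E_D(X)   = E(X) + Cov(X,D) Var(D)^- (d - E(D))
       adjusted variance     Var_D(X) = Var(X) - Cov(X,D) Var(D)^- Cov(D,X)
    A pseudo-inverse is used so that a singular Var(D) is handled."""
    VD_pinv = np.linalg.pinv(Var_D)
    adj_E = E_X + Cov_XD @ VD_pinv @ (d - E_D)
    adj_Var = Var_X - Cov_XD @ VD_pinv @ Cov_XD.T
    return adj_E, adj_Var
```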
103.
This article suggests an efficient method of estimating a rare sensitive attribute, assumed to follow a Poisson distribution, by using a three-stage unrelated randomized response model instead of the model of Land, Singh, and Sedory (2011, Estimation of a rare sensitive attribute using Poisson distribution, Statistics 46(3):351–60, doi:10.1080/02331888.2010.524300) when the population consists of clusters of different sizes and the clusters are selected by probability proportional to size (PPS) sampling. The rare sensitive parameter is estimated using PPS sampling and equal-probability two-stage sampling, both when the parameter of a rare unrelated attribute is assumed to be known and when it is unknown.

We extend this method to the case of a stratified population by applying stratified PPS sampling and stratified equal-probability two-stage sampling. An empirical study is carried out to show the efficiency of the two proposed methods when the parameter of the rare unrelated attribute is assumed to be known and unknown.
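To fix ideas, the sketch below gives the simplest single-stage unrelated-question moment estimator in the Poisson limit, treating the design probability p and the unrelated prevalence as known. It is an illustration of the estimation principle only, not the three-stage, PPS-cluster, or stratified estimators proposed in the paper.

```python
import numpy as np

def rr_poisson_estimate(yes_count, n, p, pi2):
    """Moment estimator of a rare sensitive proportion pi1 under an
    unrelated-question randomized response design in the Poisson limit:
    each respondent answers the sensitive question with probability p and
    the unrelated question (known prevalence pi2) otherwise, so the number
    of 'yes' replies is approximately Poisson with mean
        lambda = n * (p * pi1 + (1 - p) * pi2)."""
    lam1_hat = (yes_count - (1.0 - p) * n * pi2) / p   # estimate of n * pi1
    pi1_hat = max(lam1_hat, 0.0) / n
    var_lam1 = yes_count / p ** 2                      # Poisson variance plug-in
    se_pi1 = np.sqrt(var_lam1) / n
    return pi1_hat, se_pi1
```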
104.
In analyzing data from unreplicated factorial designs, the half-normal probability plot is commonly used to screen for the ‘vital few’ effects. Recently, many formal methods have been proposed to overcome the subjectivity of this plot. Lawson et al. (1998) (hereafter denoted as LGB) suggested a hybrid method based on the half-normal probability plot, which is a blend of the Lenth (1989) and Loh (1992) methods. The method consists of fitting a simple least squares line to the inliers, which are determined by the Lenth method. The effects exceeding the prediction limits based on the fitted line are candidates for the vital few effects. To improve the accuracy of partitioning the effects into inliers and outliers, we propose a modified LGB method (hereafter denoted as the Mod_LGB method), in which more outliers can be classified by using both Carling's modification of the box plot (Carling, 2000) and the Lenth method. If no outlier exists, or there is a wide range in the inliers as determined by the Lenth method, more outliers can be found by the Carling method. A simulation study is conducted for unreplicated 2⁴ designs with the number of active effects ranging from 1 to 6 to compare the efficiency of the Lenth method, the original LGB method, and the proposed Mod_LGB method.
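Both LGB and Mod_LGB start from Lenth's pseudo standard error (PSE) to separate inliers from candidate outliers, so a short sketch of that step may help; the Carling box-plot refinement and the least-squares line fitted to the inliers are not shown here.

```python
import numpy as np
from scipy.stats import t as t_dist

def lenth_pse(effects):
    """Lenth's (1989) pseudo standard error for unreplicated factorial effects."""
    c = np.abs(np.asarray(effects, dtype=float))
    s0 = 1.5 * np.median(c)                      # initial robust scale estimate
    return 1.5 * np.median(c[c < 2.5 * s0])      # re-median after trimming large effects

def lenth_active_effects(effects, alpha=0.05):
    """Flag effects whose magnitude exceeds Lenth's individual margin of error."""
    effects = np.asarray(effects, dtype=float)
    m = len(effects)                             # e.g. 15 effects in a 2**4 design
    pse = lenth_pse(effects)
    me = t_dist.ppf(1 - alpha / 2, df=m / 3.0) * pse
    return np.abs(effects) > me, pse, me
```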
105.
106.
Linear increments (LI) are used to analyse repeated outcome data with missing values. Previously, two LI methods have been proposed, one allowing non-monotone missingness but not independent measurement error and one allowing independent measurement error but only monotone missingness. In both, it was suggested that the expected increment could depend on current outcome. We show that LI can allow non-monotone missingness and either independent measurement error of unknown variance or dependence of expected increment on current outcome, but not both. A popular alternative to LI is a multivariate normal model ignoring the missingness pattern. This gives consistent estimation when data are normally distributed and missing at random (MAR). We clarify the relation between MAR and the assumptions of LI and show that for continuous outcomes multivariate normal estimators are also consistent under (non-MAR and non-normal) assumptions not much stronger than those of LI. Moreover, when missingness is non-monotone, they are typically more efficient.
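As a rough sketch of the LI idea in the regime where expected increments depend on baseline covariates only (no measurement-error correction and no dependence on the current outcome), the code below estimates the mean outcome trajectory by regressing each observed increment on baseline covariates and accumulating the predicted population-mean increments. Variable names and the assumption of a fully observed baseline are illustrative.

```python
import numpy as np

def li_mean_trajectory(Y, X):
    """Y: (n, T) outcomes with np.nan for missing values; X: (n, p) baseline covariates.
    For each interval, fit OLS of the observed increment on (1, X) among subjects
    observed at both visits, then accumulate the predicted mean increments from
    the baseline mean (baseline assumed fully observed)."""
    n, T = Y.shape
    Z = np.column_stack([np.ones(n), X])
    mu = np.empty(T)
    mu[0] = np.nanmean(Y[:, 0])
    cum_inc = 0.0
    for t in range(1, T):
        inc = Y[:, t] - Y[:, t - 1]
        obs = ~np.isnan(inc)
        beta, *_ = np.linalg.lstsq(Z[obs], inc[obs], rcond=None)
        cum_inc += Z.mean(axis=0) @ beta        # predicted population-mean increment
        mu[t] = mu[0] + cum_inc
    return mu
```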
107.
108.
Usually, parametric procedures used for conditional variance modelling are associated with model risk. Model risk may affect the volatility and conditional value-at-risk (VaR) estimation process through either estimation or misspecification risk. Hence, non-parametric artificial intelligence models can be considered as alternatives, given that they do not rely on an explicit form of the volatility. In this paper, we consider the least-squares support vector regression (LS-SVR), weighted LS-SVR, and fixed-size LS-SVR models in order to handle the problem of conditional risk estimation while taking model risk into account. A simulation study and a real application show the performance of the proposed volatility and VaR models.
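The LS-SVR estimator reduces to solving a single linear system in the dual (Suykens' least-squares SVM formulation); the sketch below implements that core fit/predict step with an RBF kernel. Its use for volatility, e.g. regressing squared returns on lagged squared returns, as well as the weighted and fixed-size variants, is left aside; parameter names are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian RBF kernel matrix between row-sample matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvr_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM regression dual system
       [[0, 1^T], [1, K + I/gamma]] [b, alpha]^T = [0, y]^T."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                      # bias b, dual weights alpha

def lssvr_predict(X_train, b, alpha, X_new, sigma=1.0):
    """Predict f(x) = sum_i alpha_i k(x, x_i) + b."""
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```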
109.
With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk with the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates, and the covariate-adjusted weight approach works well for the variance estimator as well. We illustrate our methods with bone marrow transplant data from the Center for International Blood and Marrow Transplant Research. Here, cancer relapse and death in complete remission are two competing risks.
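The covariate-adjusted weights begin by modelling the censoring distribution with its own Cox regression and predicting each subject's censoring-survival curve, which then replaces the marginal (Kaplan–Meier) censoring estimator inside the Fine–Gray inverse-probability-of-censoring weights. A minimal lifelines-based sketch of that first step follows; the column names (time, event, age, score) are hypothetical and the full weight construction is omitted.

```python
import numpy as np
from lifelines import CoxPHFitter

def censoring_survival(df, duration_col="time", event_col="event",
                       covariates=("age", "score")):
    """Fit a Cox model to the censoring times (treating censoring as the event)
    and return each subject's predicted censoring-survival curve G(t | Z)."""
    d = df.copy()
    d["censored"] = (d[event_col] == 0).astype(int)   # event_col == 0 means censored
    cph = CoxPHFitter()
    cph.fit(d[[duration_col, "censored", *list(covariates)]],
            duration_col=duration_col, event_col="censored")
    times = np.sort(d[duration_col].unique())
    # returned DataFrame: rows are time points, columns are subjects
    G = cph.predict_survival_function(d[list(covariates)], times=times)
    return cph, G
```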
110.
Inverse probability weighting (IPW) and multiple imputation are two widely adopted approaches for dealing with missing data. The former models the selection probability, and the latter models the data distribution. Consistent estimation requires correct specification of the corresponding models. Although the augmented IPW method provides an extra layer of protection for consistency, it is usually not sufficient in practice because the true data-generating process is unknown. This paper proposes a method combining the two approaches in the same spirit as calibration in the survey sampling literature. Multiple models for both the selection probability and the data distribution can be accounted for simultaneously, and the resulting estimator is consistent if any one model is correctly specified. The proposed method lies within the framework of estimating equations and is general enough to cover regression analysis with missing outcomes and/or missing covariates. Results from both theoretical and numerical investigations are provided.
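For intuition, the sketch below is the standard augmented IPW (doubly robust) estimator of a mean outcome with missing responses, which the calibration idea described above generalises to several candidate models per component. The logistic response model and linear outcome model are illustrative choices, not the paper's specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

def aipw_mean(X, y, observed):
    """Augmented IPW estimate of E[Y] with missing outcomes (y may hold np.nan
    where observed == 0). Combines a logistic model for the response probability
    with a linear model for the outcome; consistent if either model is correct."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    r = np.asarray(observed, int)
    pi = LogisticRegression(max_iter=1000).fit(X, r).predict_proba(X)[:, 1]  # P(R=1 | X)
    m = LinearRegression().fit(X[r == 1], y[r == 1]).predict(X)              # E[Y | X] fit
    y_filled = np.where(r == 1, y, 0.0)          # missing y's are multiplied by r = 0 below
    return np.mean(r * y_filled / pi - (r - pi) / pi * m)
```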