11.
For many continuous distributions, a closed-form expression for the quantiles does not exist, and numerical approximations are developed on a distribution-by-distribution basis. This work develops a general approximation for quantiles using the Taylor expansion. Our method only requires that the distribution have a continuous probability density function whose derivatives exist up to a certain order (usually 3 or 4). We demonstrate the unified approach by approximating the quantiles of the normal, exponential, and chi-square distributions; the approximation works well for all three.
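As a minimal sketch of the idea (an illustration, not the authors' exact scheme), the standard normal quantile can be obtained by Taylor-expanding the inverse CDF around the current iterate; only the density and its derivative are needed, and for the normal density f'(x) = -x f(x) supplies the second-order term:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def approx_quantile(p, x0=0.0, steps=4):
    """Approximate the standard normal quantile Q(p) by expanding the
    inverse CDF in a Taylor series around the current point x:
    Q(p) ~ x + d/f + (x/2)(d/f)^2 with d = p - F(x), using f' = -x f."""
    x = x0
    for _ in range(steps):
        t = (norm_cdf(x) - p) / norm_pdf(x)  # first-order (Newton) term
        x = x - t + 0.5 * x * t * t          # second-order correction
    return x
```

Starting from the median, five steps already give roughly three-decimal accuracy at p = 0.975.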
12.
In this article, a non-iterative posterior sampling algorithm for the linear quantile regression model based on the asymmetric Laplace distribution is proposed. The algorithm combines the inverse Bayes formulae, sampling/importance resampling, and the expectation-maximization (EM) algorithm to obtain approximately independent and identically distributed samples from the observed posterior distribution, which eliminates the convergence problems of iterative Gibbs sampling and overcomes the difficulty of evaluating standard errors in the EM algorithm. Numerical results from simulations and an application to the classical Engel data show that the non-iterative sampling algorithm is more effective than the Gibbs sampling and EM algorithms.
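The sampling/importance resampling building block can be sketched generically (a toy normal target and proposal, not the article's inverse-Bayes-formulae sampler): draw from a proposal, weight each draw by target/proposal, then resample with those weights.

```python
import math
import random

def sir(log_target, sample_proposal, log_proposal, n_draws, n_keep, rng):
    """Sampling/importance resampling: approximate draws from the target
    by weighted resampling of proposal draws (weights are stabilised by
    subtracting the max log-weight before exponentiating)."""
    draws = [sample_proposal(rng) for _ in range(n_draws)]
    log_w = [log_target(x) - log_proposal(x) for x in draws]
    m = max(log_w)
    w = [math.exp(lw - m) for lw in log_w]
    return rng.choices(draws, weights=w, k=n_keep)

# Toy example: target N(2, 1), proposal N(0, 3).
rng = random.Random(0)
log_target = lambda x: -0.5 * (x - 2.0) ** 2
log_proposal = lambda x: -0.5 * (x / 3.0) ** 2
samples = sir(log_target, lambda r: r.gauss(0.0, 3.0), log_proposal,
              n_draws=20000, n_keep=2000, rng=rng)
mean = sum(samples) / len(samples)
```

The resampled mean lands near the target mean of 2, with no Markov chain and hence no convergence diagnostics needed.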
13.
Frailty models are used in survival analysis to account for unobserved heterogeneity in individual risks of disease and death. To analyze bivariate data on related survival times (e.g., matched-pairs experiments, twin or family data), shared frailty models have been suggested. In this article, we introduce shared gamma frailty models with a reversed hazard rate. We develop a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique to estimate the model parameters, present a simulation study comparing the true parameter values with their estimates, and apply the model to a real-life bivariate survival dataset.
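Under a proportional reversed hazards specification, a shared frailty Z acts as F(t | Z) = F0(t)**Z, so both members of a pair are stochastically large or small together. A simulation sketch (the exponential-type baseline F0(t) = 1 - exp(-t) is an assumption of this illustration, not taken from the article):

```python
import math
import random

def sample_pair(shape, rng):
    """Draw one bivariate lifetime pair under a shared gamma frailty Z
    acting on the reversed hazard: conditional distribution function
    F(t | Z) = F0(t)**Z with illustrative baseline F0(t) = 1 - exp(-t)."""
    z = rng.gammavariate(shape, 1.0 / shape)  # frailty with mean 1

    def draw_time():
        u = rng.random()
        # Invert F0(t)**z = u  =>  t = -log(1 - u**(1/z))
        return -math.log(1.0 - u ** (1.0 / z))

    return draw_time(), draw_time()

rng = random.Random(1)
pairs = [sample_pair(2.0, rng) for _ in range(5000)]
mx = sum(a for a, _ in pairs) / len(pairs)
my = sum(b for _, b in pairs) / len(pairs)
cov = sum((a - mx) * (b - my) for a, b in pairs) / len(pairs)
```

The sample covariance of the simulated pairs is positive, reflecting the dependence induced by the shared frailty.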
14.
孙旭 et al. 《统计研究》(Statistical Research), 2019, 36(7): 119-128
An intergenerational mobility table cross-tabulates paired data on the social positions of children and their parents, reflecting how advantages in the possession of social resources compare across the two generations. Empirical studies of the evolution of basic social characteristics such as wealth, class, and privilege all rely on quantitative analysis of mobility tables. The log-linear model is the basic tool for such analysis: by fitting the cell counts of the contingency table, it identifies strong and weak interaction effects between the table's row and column categories and characterizes the interaction structure between parents' and children's social positions. This paper uses a complex-network community detection algorithm to analyze the association structure between parents' and children's social positions. To address the limited goodness of fit of parsimonious log-linear models, we propose a new modeling strategy: apply a community detection algorithm to the residual contingency table of a parsimonious log-linear model to mine association patterns, then introduce the discovered community effect into the original log-linear model as an additional parameter constraint to improve the fit. Because the method adds only one parameter constraint to the original parsimonious log-linear model, the results remain parsimonious and theoretically interpretable, while the community effect supplements the original model's account of the empirical data structure. We apply this method to an empirical intergenerational occupational mobility table from the Chinese General Social Survey and obtain a good account of the association patterns between children's and parents' occupational classes.
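The residual-analysis step can be sketched as follows (only the independence fit and Pearson residuals; the community detection applied to the residual table is not reproduced here):

```python
def independence_fit(table):
    """Fit the independence log-linear model to a two-way contingency
    table and return the matrix of Pearson residuals, whose structure
    a community detection step could then mine."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    resid = []
    for i, row in enumerate(table):
        out = []
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n  # fitted count under independence
            out.append((obs - exp) / exp ** 0.5)
        resid.append(out)
    return resid

# A small mobility-style table with a diagonal (immobility) excess.
table = [[50, 10, 5],
         [10, 40, 10],
         [5, 10, 30]]
resid = independence_fit(table)
```

For this toy table, all diagonal residuals are positive and all off-diagonal residuals negative, the classic immobility pattern that a parsimonious model would need extra structure to capture.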
15.
Summary.  Because highly correlated data arise in many scientific fields, we investigate parameter estimation in a semiparametric regression model with a diverging number of highly correlated predictors. We first develop a distribution-weighted least squares estimator that can recover directions in the central subspace, then use it as a seed vector and project it onto a Krylov space by partial least squares to avoid computing the inverse of the covariance of the predictors. Distribution-weighted partial least squares can therefore handle high dimensional, highly correlated predictors. We also suggest an iterative algorithm for obtaining a better initial value before implementing partial least squares. For the theoretical investigation, we obtain strong consistency and asymptotic normality when the dimension p of the predictors grows at rate O{n^{1/2}/log(n)} and o(n^{1/3}) respectively, where n is the sample size. When there are no other constraints on the covariance of the predictors, the rates n^{1/2} and n^{1/3} are optimal. We also propose a Bayesian information criterion type criterion to estimate the dimension of the Krylov space in the partial least squares procedure. Illustrative examples with a real dataset and comprehensive simulations demonstrate that the method is robust to non-ellipticity and works well even in 'small n, large p' problems.
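The computational point, working in a Krylov space to avoid inverting the predictor covariance, can be illustrated with a conjugate-gradient solver (a generic Krylov method, not the authors' partial least squares routine):

```python
def conj_grad(mat, b, steps=50, tol=1e-12):
    """Solve mat @ x = b for a symmetric positive-definite matrix by
    conjugate gradients, which works entirely in the Krylov space
    span{b, mat b, mat^2 b, ...} and never forms an inverse."""
    n = len(b)
    x = [0.0] * n
    r = list(b)           # residual b - mat x (x starts at 0)
    p = list(b)           # search direction
    rs = sum(v * v for v in r)
    for _ in range(steps):
        ap = [sum(mat[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# 2x2 SPD example: [[4,1],[1,3]] x = [1,2] has solution [1/11, 7/11].
x = conj_grad([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

On an n-dimensional system, exact convergence occurs within n iterations, and in ill-conditioned high-dimensional settings a few Krylov steps often suffice for a useful approximation.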
16.
We derive estimators of the mean of a function of a quality-of-life adjusted failure time, in the presence of competing right censoring mechanisms. Our approach allows for the possibility that some or all of the competing censoring mechanisms are associated with the endpoint, even after adjustment for recorded prognostic factors, with the degree of residual association possibly different for distinct censoring processes. Our methods generalize from a single to many censoring processes and from ignorable to non-ignorable censoring processes.
17.
Summary.  We develop a general non-parametric approach to the analysis of clustered data via random effects. Assuming only that the link function is known, the regression functions and the distributions of both cluster means and observation errors are treated non-parametrically. Our argument proceeds by viewing the observation error at the cluster mean level as though it were a measurement error in an errors-in-variables problem, and using a deconvolution argument to access the distribution of the cluster mean. A Fourier deconvolution approach could be used if the distribution of the error-in-variables were known. In practice it is unknown, of course, but it can be estimated from repeated measurements, and in this way deconvolution can be achieved in an approximate sense. This argument might be interpreted as implying that large numbers of replicates are necessary for each cluster mean distribution, but that is not so; we avoid this requirement by incorporating statistical smoothing over values of nearby explanatory variables. Empirical rules are developed for the choice of smoothing parameter. Numerical simulations, and an application to real data, demonstrate the small-sample performance of this methodology. We also develop theory establishing statistical consistency.
18.
Summary.  The family of inverse regression estimators recently proposed by Cook and Ni has proven effective in dimension reduction by transforming the high dimensional predictor vector to its low dimensional projections. We propose a general shrinkage estimation strategy for the entire inverse regression estimation family that is capable of simultaneous dimension reduction and variable selection. We demonstrate that the new estimators achieve consistency in variable selection without requiring any traditional model, while retaining the root-n estimation consistency of the dimension-reduction basis. We also show the effectiveness of the new estimators through both simulation and real data analysis.
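The shrinkage mechanism behind simultaneous estimation and variable selection can be illustrated with plain coordinatewise soft-thresholding (a generic lasso-style device, not the article's estimator): coordinates whose estimated signal falls below the threshold are set exactly to zero, which is what deselects variables.

```python
def soft_threshold(v, lam):
    """Shrink each coordinate of v toward zero by lam; coordinates with
    |v_i| <= lam become exactly zero (the variable is deselected)."""
    return [max(abs(x) - lam, 0.0) * (1 if x > 0 else -1) for x in v]

# Coordinates below the threshold vanish; the rest shrink by lam.
out = soft_threshold([3.0, -0.5, 1.2], 1.0)
```

Applying this to each row of an estimated basis zeroes out predictors with uniformly weak loadings while only biasing the retained loadings by the threshold amount.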
19.
Statistical inferences for the geometric process (GP) have been derived when the distribution of the first occurrence time is assumed to be inverse Gaussian (IG). The α-series process has been introduced as a possible alternative to the GP, since the GP is sometimes inappropriate for reliability and scheduling problems. In this study, the statistical inference problem for the α-series process is considered when the distribution of the first occurrence time is IG. The estimators of the parameters α, μ, and σ² are obtained by the maximum likelihood (ML) method, and the asymptotic distributions and consistency properties of the ML estimators are derived. Monte Carlo simulations comparing the efficiencies of the ML estimators with the widely used nonparametric modified moment (MM) estimators show that the ML estimators are more efficient. Two real-life datasets are given for application purposes.
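For reference, in the common IG(μ, λ) parametrization the ML estimates from an i.i.d. sample have closed forms: μ̂ is the sample mean and 1/λ̂ = (1/n)Σ(1/x_i − 1/μ̂). (These are the standard results for a plain IG sample; the α-series scaling and the article's (α, μ, σ²) parametrization are not reproduced here.)

```python
def ig_mle(xs):
    """Closed-form maximum likelihood estimates (mu_hat, lambda_hat)
    for an i.i.d. inverse Gaussian IG(mu, lambda) sample."""
    n = len(xs)
    mu = sum(xs) / n                                  # mu_hat = sample mean
    inv_lam = sum(1.0 / x - 1.0 / mu for x in xs) / n  # 1/lambda_hat
    return mu, 1.0 / inv_lam

mu_hat, lam_hat = ig_mle([1.0, 2.0, 4.0])
```

On the toy sample above, μ̂ = 7/3 and λ̂ = 84/13, both available without any iteration.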
20.
The cumulative incidence function plays an important role in assessing treatment and covariate effects with competing risks data. In this article, we consider an additive hazard model that allows time-varying covariate effects on the subdistribution and propose a weighted estimating equation under covariate-dependent censoring, fitting a Cox-type hazard model for the censoring distribution. When there is association between the censoring time and the covariates, the proposed coefficient estimators remain unbiased, and their large-sample properties are established. The finite-sample properties of the proposed estimators are examined in a simulation study, and the proposed Cox-weighted method is applied to a competing risks dataset from a Hodgkin's disease study.