141.
In this paper, we consider dynamic panel data models in which the autoregressive parameter changes over time. We propose GMM and ML estimators for this model and conduct Monte Carlo simulations to compare their performance. The simulation results show that the ML estimator outperforms the GMM estimator.
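The Monte Carlo comparison described above can be organized with a small, generic harness. The sketch below is illustrative, not the paper's code: it compares any set of estimators on bias and root mean squared error, demonstrated here with the sample mean and median rather than the paper's GMM and ML estimators.

```python
import numpy as np

def mc_compare(estimators, simulate, true_value, reps, rng):
    """Generic Monte Carlo harness: simulate data `reps` times, apply each
    estimator, and report bias and RMSE against the true parameter."""
    results = {name: [] for name in estimators}
    for _ in range(reps):
        data = simulate(rng)
        for name, est in estimators.items():
            results[name].append(est(data))
    out = {}
    for name, vals in results.items():
        v = np.array(vals)
        out[name] = {"bias": v.mean() - true_value,
                     "rmse": np.sqrt(np.mean((v - true_value) ** 2))}
    return out
```

For the paper's setting, `simulate` would generate a dynamic panel with a time-varying autoregressive parameter, and `estimators` would hold the GMM and ML routines.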
142.
In the medical literature, there has been increased interest in evaluating the association between exposure and outcomes using nonrandomized observational studies. However, because assignment to exposure is not random in observational studies, comparisons of outcomes between exposed and nonexposed subjects must account for the effect of confounders. Propensity score methods have been widely used to control for confounding when estimating exposure effects. Previous studies have shown that conditioning on the propensity score results in biased estimation of the conditional odds ratio and hazard ratio. However, research is lacking on the performance of propensity score methods for covariate adjustment when estimating the area under the ROC curve (AUC). In this paper, the AUC is proposed as a measure of effect when outcomes are continuous. The AUC is interpreted as the probability that a randomly selected nonexposed subject has a better response than a randomly selected exposed subject. A series of simulations has been conducted to examine the performance of propensity score methods when the association between exposure and outcomes is quantified by the AUC; this includes determining the optimal choice of variables for the propensity score models. Additionally, the propensity score approach is compared with the conventional regression approach to covariate adjustment with the AUC. The best estimator is chosen on the basis of bias, relative bias, and root mean squared error. Finally, an example examining the relationship of depression/anxiety and pain intensity in people with sickle cell disease is used to illustrate the estimation of the adjusted AUC using the proposed approaches.
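The AUC interpretation given above — the probability that a randomly selected nonexposed subject has a better response than a randomly selected exposed subject — has a direct empirical estimator, the Mann–Whitney form of the AUC. A minimal numpy sketch (illustrative; the function name is mine, not the paper's):

```python
import numpy as np

def auc_concordance(nonexposed, exposed):
    """Empirical AUC: fraction of (nonexposed, exposed) pairs in which the
    nonexposed response is larger, counting ties as 1/2."""
    y0 = np.asarray(nonexposed, dtype=float)
    y1 = np.asarray(exposed, dtype=float)
    diff = y0[:, None] - y1[None, :]          # all pairwise differences
    return float(np.mean(diff > 0) + 0.5 * np.mean(diff == 0))
```

Covariate adjustment, whether by regression or by conditioning on a propensity score, would be applied to the responses before this pairwise comparison.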
143.
Nonparametric estimation and inference for conditional distribution functions with longitudinal data have important applications in biomedical studies, such as epidemiological studies and longitudinal clinical trials. Estimation approaches without any structural assumptions may lead to inadequate and numerically unstable estimators in practice. We propose in this paper a nonparametric approach based on time-varying parametric models for estimating the conditional distribution functions with a longitudinal sample. Our model assumes that the conditional distribution of the outcome variable at each given time point can be approximated by a parametric model after a local Box–Cox transformation. Our estimation is based on a two-step smoothing method, in which we first obtain raw estimators of the conditional distribution functions at a set of disjoint time points, and then compute the final estimators at any time by smoothing the raw estimators. Applications of our two-step estimation method are demonstrated through a large epidemiological study of childhood growth and blood pressure. Finite sample properties of our procedures are investigated through a simulation study. Application and simulation results show that smoothing estimation from time-varying parametric models outperforms the existing kernel smoothing estimator by producing narrower pointwise bootstrap confidence bands and smaller root mean squared errors.
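The second step of the two-step method — smoothing raw estimates obtained at a grid of distinct time points — can be sketched with a Nadaraya–Watson kernel smoother. This is a simplification: in the paper, the raw quantities are parameter estimates from local Box–Cox-transformed parametric fits, whereas here they are simply taken as given.

```python
import numpy as np

def two_step_smooth(t_grid, raw, t_eval, h):
    """Step 2 of a two-step method: Gaussian-kernel smoothing of raw
    estimates computed at a grid of time points (step 1 assumed done)."""
    t_grid = np.asarray(t_grid, dtype=float)
    raw = np.asarray(raw, dtype=float)
    out = []
    for t in np.atleast_1d(t_eval):
        w = np.exp(-0.5 * ((t - t_grid) / h) ** 2)   # Gaussian kernel weights
        out.append(np.sum(w * raw) / np.sum(w))
    return np.array(out)
```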
144.
Algebraic relationships between Hosmer–Lemeshow (HL), Pigeon–Heyse (J2), and Tsiatis (T) goodness-of-fit statistics for binary logistic regression models with continuous covariates were investigated, and their distributional properties and performance studied using simulations. Groups were formed under deciles-of-risk (DOR) and partition-covariate-space (PCS) methods. Under DOR, HL and T followed their reported null distributions, while J2 did not. Under PCS, only T followed its reported null distribution, with HL and J2 dependent on the number of model covariates and the partitioning. Generally, all three had similar power. Of the three, T performed best, maintaining Type-I error rates and having a distribution invariant to covariate characteristics, number, and partitioning.
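For reference, the Hosmer–Lemeshow statistic under deciles-of-risk grouping can be computed as below. This is an illustrative sketch only; the paper's exact grouping and the J2 and T statistics are not reproduced.

```python
import numpy as np

def hosmer_lemeshow(y, p, g=10):
    """Hosmer-Lemeshow chi-square with deciles-of-risk grouping.
    y: 0/1 outcomes, p: fitted probabilities. The statistic is usually
    referred to a chi-square distribution with g - 2 degrees of freedom.
    Assumes each group's mean fitted probability is strictly in (0, 1)."""
    order = np.argsort(p, kind="stable")          # sort subjects by risk
    y, p = np.asarray(y)[order], np.asarray(p)[order]
    groups = np.array_split(np.arange(len(y)), g)  # g near-equal groups
    stat = 0.0
    for idx in groups:
        o = y[idx].sum()          # observed events in the group
        e = p[idx].sum()          # expected events in the group
        n = len(idx)
        pbar = e / n
        stat += (o - e) ** 2 / (n * pbar * (1 - pbar))
    return stat
```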
145.
An important aspect in the modelling of biological phenomena in living organisms, whether the measurements are of blood pressure, enzyme levels, biomechanical movements or heartbeats, is time variation in the data. Thus, the recovery of a 'smooth' regression or trend function from noisy time-varying sampled data becomes a problem of particular interest. Here we use non-linear wavelet thresholding to estimate a regression or trend function in the presence of additive noise which, in contrast to most existing models, does not need to be stationary. (Here, non-stationarity means that the spectral behaviour of the noise is allowed to change slowly over time.) We develop a procedure to adapt existing threshold rules to such situations, e.g. that of a time-varying variance in the errors. Moreover, in the model of curve estimation for functions belonging to a Besov class with locally stationary errors, we derive a near-optimal rate for the L2-risk between the unknown function and our soft or hard threshold estimator, which holds in the general case of an error distribution with bounded cumulants. In the case of Gaussian errors, a lower bound on the asymptotic minimax rate in the wavelet coefficient domain is also obtained. It is also argued that a stronger adaptivity result is possible by the use of a particular location- and level-dependent threshold obtained by minimizing Stein's unbiased estimate of the risk. In this respect, our work generalizes previous results, which cover the situation of correlated but stationary errors. A natural application of our approach is the estimation of the trend function of non-stationary time series under the model of local stationarity. The method is illustrated on both a simulated example and a biostatistical data set, measurements of sheep luteinizing hormone, which exhibits a clear non-stationarity in its variance.
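The soft and hard threshold rules, together with a blockwise noise-level estimate that lets the threshold vary over time (the adaptation to non-stationary noise discussed above), can be sketched as follows. This is illustrative only; the paper's location- and level-dependent SURE threshold is not reproduced.

```python
import numpy as np

def soft_threshold(w, t):
    """Shrink wavelet coefficients toward zero by t; kill those below t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def hard_threshold(w, t):
    """Keep coefficients whose magnitude exceeds t; kill the rest."""
    return np.where(np.abs(w) > t, w, 0.0)

def local_mad_sigma(w, block=32):
    """Blockwise noise level via the median absolute deviation (MAD),
    so the threshold t = sigma * sqrt(2 log n) can change over time."""
    sig = np.empty(len(w), dtype=float)
    for start in range(0, len(w), block):
        b = w[start:start + block]
        sig[start:start + block] = np.median(np.abs(b)) / 0.6745
    return sig
```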
146.
Minimization is an alternative to stratified permuted block randomization that may be more effective at balancing treatments when there are many strata. However, its use in the regulatory setting for industry trials remains controversial, primarily because of the difficulty of interpreting conventional asymptotic statistical tests under restricted methods of treatment allocation. We argue that the use of minimization should be critically evaluated when designing the study for which it is proposed. We demonstrate by example how simulation can be used to investigate whether minimization improves treatment balance compared with stratified randomization, and how much randomness can be incorporated into the minimization before the balance advantage is lost. We also illustrate by example how the performance of the traditional model-based analysis can be assessed, by comparing the nominal test size with the observed test size over a large number of simulations. We recommend that the assignment probability for the minimization be selected using such simulations.
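A Pocock–Simon-style minimization rule with a tunable assignment probability — the quantity the authors recommend selecting by simulation — might be sketched as follows. This is illustrative only: two arms, marginal-count imbalance, and not the exact algorithm of any particular trial.

```python
import random

def minimize_assign(patient, assigned, p_best=0.8):
    """Minimization over stratification factors.
    patient: tuple of factor levels for the new patient.
    assigned: list of (factor_levels, arm) for prior patients.
    With probability p_best, assign the arm with the smaller marginal
    imbalance (sum over factors of same-level counts in that arm)."""
    imbalance = [0, 0]
    for arm in (0, 1):
        for f, level in enumerate(patient):
            count = sum(1 for fac, a in assigned
                        if fac[f] == level and a == arm)
            imbalance[arm] += count
    best = 0 if imbalance[0] <= imbalance[1] else 1
    if random.random() < p_best:
        return best
    return 1 - best
```

Simulating many trials under this rule and under stratified randomization, then comparing the achieved balance and the observed test size of the planned analysis, follows the authors' recommendation.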
147.
The authors consider Bayesian methods for fitting three semiparametric survival models, incorporating time-dependent covariates that are step functions. In particular, these are the models due to Cox [Cox (1972) Journal of the Royal Statistical Society, Series B, 34, 187–208], Prentice & Kalbfleisch [Prentice & Kalbfleisch (1979) Biometrics, 35, 25–39], and Cox & Oakes [Cox & Oakes (1984) Analysis of Survival Data, Chapman and Hall, London]. The model due to Prentice & Kalbfleisch, which has seen very limited use, is given particular consideration. The prior for the baseline distribution in each model is taken to be a mixture of Polya trees, and posterior inference is obtained through standard Markov chain Monte Carlo methods. The authors demonstrate the implementation and comparison of these three models on the celebrated Stanford heart transplant data and on a study of the timing of cerebral edema diagnosis during emergency room treatment of diabetic ketoacidosis in children. An important feature of their overall discussion is the comparison of semiparametric families and, ultimately, criterion-based selection of a family within the context of a given data set. The Canadian Journal of Statistics 37: 60–79; © 2009 Statistical Society of Canada
148.
Using Gibbs sampling and Chinese time series data on economic growth and fiscal expenditure for 1952-2006, this paper empirically tests the validity of Wagner's law in China and examines the smoothly time-varying-parameter character of the relationship between the expansion of government public expenditure and economic growth. The results show that the causal relationship between China's economic growth and the growth of government public expenditure is not clear-cut: the quantitative relationship between the two has distinct stage-specific features, and its parameters take a smoothly time-varying form.
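The Gibbs-sampling machinery behind such an estimation alternates draws from full conditional distributions. A minimal, self-contained example for a conjugate normal model is sketched below; this is illustrative only — the paper's smoothly time-varying-parameter model is more elaborate, but the alternating conditional draws are the same idea.

```python
import numpy as np

def gibbs_normal(y, iters, rng, mu0=0.0, tau2=100.0, a=2.0, b=2.0):
    """Gibbs sampler for a normal model with unknown mean and variance,
    with conjugate priors mu ~ N(mu0, tau2) and sigma2 ~ InvGamma(a, b).
    Returns an (iters, 2) array of (mu, sigma2) draws."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    mu, sig2 = y.mean(), y.var() + 1e-6
    draws = []
    for _ in range(iters):
        # mu | sigma2, y  ~  Normal (conjugate update)
        prec = n / sig2 + 1.0 / tau2
        mean = (y.sum() / sig2 + mu0 / tau2) / prec
        mu = rng.normal(mean, np.sqrt(1.0 / prec))
        # sigma2 | mu, y  ~  Inverse-Gamma, drawn as 1 / Gamma
        rate = b + 0.5 * np.sum((y - mu) ** 2)
        sig2 = 1.0 / rng.gamma(a + n / 2.0, 1.0 / rate)
        draws.append((mu, sig2))
    return np.array(draws)
```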
149.
The Total Factor Productivity Growth Effect of Proactive Fiscal Policy
Using a time-varying-parameter model and a panel data model, the total factor productivity (TFP) growth effect of proactive fiscal policy can be analyzed. The analysis shows that the proactive fiscal policy China has implemented since 1998 has strongly promoted national TFP growth, provincial TFP growth, and technical progress, but has clearly inhibited improvements in provincial economic efficiency. Overall, proactive fiscal policy has played an important role in raising the quality of China's economic growth, but its implementation still suffers from shortcomings such as an unreasonable expenditure structure and an emphasis on quantity over efficiency of use.
150.
Monte Carlo Analysis of Portfolio Return Risk Using a t-Copula with Time-Varying Degrees of Freedom
A time-varying conditional t-copula is used to describe the time-varying dependence structure between stock index return series. The difficulty of such models lies in specifying evolution equations for the time-varying dependence parameters; this paper constructs evolution equations for all time-varying parameters of the dependence model, including a time-varying degrees-of-freedom parameter. Monte Carlo simulation is then used to compute the VaR of various index portfolios, and the evolution of the risk of a Dow Jones and S&P 500 index portfolio is analyzed. Back-testing shows that the VaR estimated by simulation from the time-varying conditional t-copula covers the maximum loss risk.
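The Monte Carlo step — simulating dependent index returns with t-copula dependence and reading VaR off the simulated loss distribution — can be sketched with constant parameters. This is an illustrative simplification: in the paper the correlation and degrees of freedom evolve over time, and a full copula construction would also transform the margins; here the multivariate-t scores stand in for returns.

```python
import numpy as np

def t_copula_sample(n, corr, df, rng):
    """Draw n bivariate samples with t-copula dependence: correlated
    normals divided by a common sqrt(chi-square/df) mixing variable."""
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n, 2)) @ L.T       # correlated normals
    w = rng.chisquare(df, size=(n, 1)) / df     # shared mixing variable
    return z / np.sqrt(w)

def mc_var(returns, alpha=0.99):
    """Empirical Value-at-Risk: the alpha-quantile of equal-weight
    portfolio losses."""
    losses = -returns.mean(axis=1)
    return float(np.quantile(losses, alpha))
```

Back-testing would then compare realized losses against this VaR over a rolling window.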