Similar Documents (20 results)
1.
It is known that patients may cease participating in a longitudinal study and become lost to follow-up. The objective of this article is to present a Bayesian model to estimate the malaria transition probabilities considering individuals lost to follow-up. We consider a homogeneous population, and it is assumed that the considered period of time is small enough to avoid two or more transitions from one state of health to another. The proposed model is based on a Gibbs sampling algorithm that uses information on loss to follow-up at the end of the longitudinal study. To simulate the unknown numbers of individuals with positive and negative malaria states among those lost to follow-up at the end of the study, two latent variables were introduced into the model. We used a real data set and a simulated data set to illustrate the application of the methodology. The proposed model showed a good fit to these data sets, and the algorithm did not show problems of convergence or lack of identifiability. We conclude that the proposed model is a good alternative for estimating probabilities of transition from one state of health to the other in studies with low adherence to follow-up.
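A minimal sketch of the data-augmentation idea behind such a Gibbs sampler, reduced to a two-state version with Beta priors; the counts and prior settings below are made up and do not come from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical observed counts:
# n_obs[i, j] = individuals starting in state i (0=negative, 1=positive)
# and observed in state j at the end of the follow-up period.
n_obs = np.array([[120, 15],   # negative -> (negative, positive)
                  [ 20, 45]])  # positive -> (negative, positive)
n_lost = np.array([30, 10])    # lost to follow-up, by starting state

a, b = 1.0, 1.0                # Beta(1, 1) priors on the transition probabilities
n_iter, burn = 5000, 1000
draws = np.zeros((n_iter, 2))  # p[i] = P(end positive | start in state i)

p = np.array([0.5, 0.5])
for it in range(n_iter):
    # Data augmentation: sample latent end states of the lost individuals.
    z_pos = rng.binomial(n_lost, p)
    # Conjugate update: Beta posterior given complete (observed + latent) counts.
    pos = n_obs[:, 1] + z_pos
    neg = n_obs[:, 0] + (n_lost - z_pos)
    p = rng.beta(a + pos, b + neg)
    draws[it] = p

post = draws[burn:]
print("P(positive at end | started negative):", post[:, 0].mean())
print("P(positive at end | started positive):", post[:, 1].mean())
```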

2.
The Interim Regulations on Residence Permits, in effect since January 2016, fundamentally removed the institutional barriers preventing migrant workers from accessing basic urban public services, but the level of access migrant workers actually enjoy still depends on how far city governments open up their public services, both in the number of service areas they "commit" to and in the stringency of the "conditions" they attach. Based on the residence permit administration documents of individual cities and using data from the national migrant population survey, this paper measures the openness of basic urban public services to migrant workers. The results show: (1) more than half of all areas of basic urban public services are open to migrant workers, with an openness index of 0.5076; even in exclusionary service areas, more than one third are open, with an openness index of 0.3569, so migrant workers have begun to enjoy basic urban public services in substance, although equal rights remain some distance away; (2) across regions, openness is lower in eastern cities than in central cities and lower still than in western cities, lower in municipalities directly under the central government than in provincial capitals and lower still than in other prefecture-level cities, and lower in Pearl River Delta, Yangtze River Delta and Beijing-Tianjin-Hebei cities than in cities outside the three major economic zones. Relatively developed cities are less open, indicating that openness depends mainly on city governments' willingness to open up rather than on their capacity to provide services; (3) the restrictions on openness come mainly from "condition" constraints: city governments' commitment index in exclusionary service areas is 0.7703, with commitments covering more than three quarters of service areas, but the constraint index reaches 0.4502, so committed services are open only to the 54.98% of migrant workers who can satisfy the attached conditions, and condition-based constraints account for more than 60% of the restrictions on openness. City governments follow the residence permit system in uniformly "committing" services to the migrant population, but use "condition" constraints to selectively limit migrant workers' opportunities to obtain basic urban public services; the ultimate realization of equal rights for migrant workers still requires systematic complementary reform of policies and institutions.

3.
We review and compare existing methods for sample size calculation based on the logrank statistic and recommend the method of Lakatos for its accuracy and flexibility in allowing time-dependent rates of events, loss to follow-up, and noncompliance. We extend the Lakatos method to allow a general follow-up scheme, to handle non-inferiority tests, and to predict the number of events over calendar time. We apply the Lakatos method to the simple nonproportional-hazards situation of a delayed treatment effect to facilitate the comparison of different weighting methods and to evaluate the performance of the maximum combination tests. We use simulation studies to confirm the validity of the Lakatos method and its extensions.
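One of the extensions mentioned, predicting the number of events over calendar time, can be illustrated with a much simplified calculation: exponential event and loss-to-follow-up rates and uniform accrual. This is not the full Lakatos Markov recursion, and all rates and sizes below are hypothetical:

```python
import numpy as np

def expected_events(n, accrual, total_time, event_rate, loss_rate, step=0.1):
    """Expected event count by calendar time for one arm (single exponential rates)."""
    times = np.arange(step, total_time + step, step)
    events = np.zeros_like(times)
    for k, t in enumerate(times):
        # Patients enter uniformly over [0, accrual]; one entering at time a
        # has been on study for max(t - a, 0) at calendar time t.
        a = np.linspace(0, accrual, 200)
        follow = np.clip(t - a, 0, None)
        # P(event by follow-up u), with loss to follow-up as a competing exponential risk.
        lam, eta = event_rate, loss_rate
        p_event = lam / (lam + eta) * (1 - np.exp(-(lam + eta) * follow))
        events[k] = n * p_event.mean()
    return times, events

times, ev = expected_events(n=300, accrual=12, total_time=36,
                            event_rate=0.04, loss_rate=0.01)  # rates per month
print(f"Expected events at month 36: {ev[-1]:.1f}")
```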

4.
In modern football, various variables, such as the distance a team runs or its percentage of ball possession, are collected throughout a match. However, there is a lack of methods to make use of these on-field variables simultaneously and to connect them with the final result of the match. This paper considers data from the German Bundesliga season 2015/2016. The objective is to identify the on-field variables that are connected to the sporting success or failure of individual teams. An extended Bradley–Terry model for football matches is proposed that is able to take on-field covariates into account. Penalty terms are used to reduce the complexity of the model and to find clusters of teams with equal covariate effects. The model identifies the running distance as the on-field covariate that is most strongly connected to the match outcome.
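A minimal sketch of a covariate-extended Bradley–Terry fit on synthetic match data; for simplicity it ignores draws and uses a plain ridge penalty on the team abilities rather than the fusion/clustering penalties of the paper:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_teams, n_matches = 18, 306
h = rng.integers(0, n_teams, n_matches)                       # home team index
a = (h + rng.integers(1, n_teams, n_matches)) % n_teams       # away team (never equal to home)
true_str = rng.normal(0, 0.5, n_teams)
x = rng.normal(0, 1, n_matches)                               # covariate difference, e.g. running distance
logit = true_str[h] - true_str[a] + 0.8 * x
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))                 # 1 if the home team won

def neg_pen_loglik(par, lam=1.0):
    strength, beta = par[:n_teams], par[n_teams]
    eta = strength[h] - strength[a] + beta * x
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))
    return -loglik + lam * np.sum(strength ** 2)              # ridge penalty on abilities

fit = minimize(neg_pen_loglik, np.zeros(n_teams + 1), method="BFGS")
print("estimated covariate effect:", round(fit.x[-1], 3))
```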

5.
When statisticians are uncertain as to which parametric statistical model to use to analyse experimental data, they will often resort to a non-parametric approach. The purpose of this paper is to provide insight into a simple approach to take when the appropriate parametric model is unclear and a Bayesian analysis is planned. I introduce an approximate, or substitution, likelihood, first proposed by Harold Jeffreys in 1939, and show how to implement the approach with both a non-informative and an informative prior to provide a random sample from the posterior distribution of the median of the unknown distribution. The first example I use to demonstrate the approach is a within-patient bioequivalence design; I then show how to extend the approach to a parallel-group design.
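A minimal sketch of Jeffreys' substitution likelihood for the median, combined with a prior on a grid to produce an approximate posterior sample; the data and prior below are illustrative and are not the bioequivalence example:

```python
import numpy as np
from scipy.stats import binom, norm

rng = np.random.default_rng(7)
data = rng.lognormal(mean=0.2, sigma=0.5, size=20)        # data of unknown parametric form

grid = np.linspace(data.min(), data.max(), 2000)          # candidate values of the median
r = np.searchsorted(np.sort(data), grid, side="right")    # number of observations <= candidate
subst_lik = binom.pmf(r, n=len(data), p=0.5)              # Jeffreys substitution likelihood

prior = norm.pdf(grid, loc=1.0, scale=2.0)                # weakly informative prior
post = subst_lik * prior
post /= post.sum()                                        # normalize on the grid

# Draw an approximate random sample from the posterior of the median.
draws = rng.choice(grid, size=5000, p=post)
print("posterior 2.5%, 50%, 97.5%:", np.quantile(draws, [0.025, 0.5, 0.975]))
```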

6.
Summary.  As a part of the EUREDIT project new methods to detect multivariate outliers in incomplete survey data have been developed. These methods are the first to work with sampling weights and to be able to cope with missing values. Two of these methods are presented here. The epidemic algorithm simulates the propagation of a disease through a population and uses extreme infection times to find outlying observations. Transformed rank correlations are robust estimates of the centre and the scatter of the data. They use a geometric transformation that is based on the rank correlation matrix. The estimates are used to define a Mahalanobis distance that reveals outliers. The two methods are applied to a small data set and to one of the evaluation data sets of the EUREDIT project.
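A rough sketch of the transformed-rank-correlation idea on synthetic data; it is simplified relative to the EUREDIT methods (no sampling weights, no missing values, and a crude rotation based on the rank correlation matrix):

```python
import numpy as np
from scipy.stats import spearmanr, chi2

rng = np.random.default_rng(3)
n, p = 200, 4
X = rng.multivariate_normal(np.zeros(p), np.eye(p) + 0.5, size=n)
X[:5] += 6                                     # plant a few outliers

R, _ = spearmanr(X)                            # robust (rank) correlation matrix
_, eigvec = np.linalg.eigh(R)
Y = (X - np.median(X, axis=0)) @ eigvec        # rotate to roughly uncorrelated axes
scale = np.median(np.abs(Y - np.median(Y, axis=0)), axis=0) * 1.4826  # MAD scale per axis
d2 = np.sum((Y / scale) ** 2, axis=1)          # robust squared Mahalanobis-type distance
outliers = np.where(d2 > chi2.ppf(0.99, df=p))[0]
print("flagged observations:", outliers)
```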

7.
金勇进  刘展 《统计研究》2016,33(3):11-17
When sampling from big data, it is often difficult to construct a sampling frame, so the samples obtained are non-probability samples and traditional sampling-inference theory is hard to apply. How to carry out statistical inference from non-probability samples is therefore a serious challenge for survey sampling in the big-data era. This paper proposes a basic framework for addressing the problem. First, sampling methods: sample selection based on sample matching, link-tracing sampling and related approaches can make the resulting non-probability sample approximate a probability sample, so that probability-sampling inference theory can be applied. Second, weight construction and adjustment: base weights analogous to those of a probability sample can be obtained by pseudo-design-based, model-based or propensity-score methods. Third, estimation: hybrid probability estimation based on pseudo-designs, models and Bayesian methods can be considered. Finally, sample selection based on sample matching is used as an example to discuss concrete solutions.
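One of the weighting routes mentioned, propensity-score pseudo-weights, can be sketched by estimating each non-probability unit's inclusion odds against a reference probability sample. The data are synthetic and the reference sample is assumed to carry design weights summing to the population size:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_ref, N_np = 1000, 800
x_ref = rng.normal(0.0, 1.0, N_ref)                 # reference (probability) sample
w_ref = np.full(N_ref, 50.0)                        # its design weights
x_np = rng.normal(0.8, 1.0, N_np)                   # non-probability (e.g. web) sample
y_np = 2.0 + 1.5 * x_np + rng.normal(0, 1, N_np)    # outcome observed only in the non-probability sample

# "Membership" model: probability of belonging to the non-probability sample.
X = np.concatenate([x_np, x_ref]).reshape(-1, 1)
z = np.concatenate([np.ones(N_np), np.zeros(N_ref)])
sw = np.concatenate([np.ones(N_np), w_ref])         # weight the reference side by its design weights
ps = LogisticRegression().fit(X, z, sample_weight=sw).predict_proba(
    x_np.reshape(-1, 1))[:, 1]

pseudo_w = (1 - ps) / ps                            # inverse-odds pseudo-weights
est = np.sum(pseudo_w * y_np) / np.sum(pseudo_w)
print("naive mean:", round(y_np.mean(), 3), " pseudo-weighted mean:", round(est, 3))
```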

8.
We discuss in the present paper the analysis of heteroscedastic regression models and their applications to off-line quality control problems. It is well known that the method of pseudo-likelihood is usually preferred to full maximum likelihood, since the resulting estimators of the parameters in the regression function are more robust to misspecification of the variance function. Despite its popularity, however, existing theoretical results are difficult to apply and are of limited use in many applications. Using more recent results in estimating equations, we obtain an efficient algorithm for computing the pseudo-likelihood estimator with desirable convergence properties and also derive simple, explicit and easy-to-apply asymptotic results. These results are used to look in detail at variance minimization in off-line quality control, yielding inference techniques for the optimized design parameters. In applications of some existing approaches to off-line quality control, such as the dual response methodology, rigorous statistical inference techniques are scarce and difficult to obtain. An example of off-line quality control is presented to discuss the practical aspects involved in applying the results obtained and to address issues such as data transformation, model building and the optimization of design parameters. The analysis shows very encouraging results and is seen to be able to unveil some important information not found in previous analyses.
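A minimal sketch of the basic pseudo-likelihood iteration for a heteroscedastic regression with a log-linear variance function; the model form and data are illustrative, and this is the generic algorithm rather than the paper's refined estimating-equation version:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 200
x = rng.uniform(0, 2, n)
y = 1.0 + 2.0 * x + rng.normal(0, np.exp(0.2 + 0.5 * x) ** 0.5, n)  # variance exp(0.2 + 0.5 x)

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]          # start from OLS
gamma = np.zeros(2)

for _ in range(20):
    resid = y - X @ beta

    # Step 1: update variance parameters with the mean held fixed (pseudo-likelihood).
    def neg_pl(g):
        v = np.exp(X @ g)
        return 0.5 * np.sum(np.log(v) + resid ** 2 / v)
    gamma = minimize(neg_pl, gamma, method="BFGS").x

    # Step 2: weighted least squares for the mean with the variance held fixed.
    w = 1.0 / np.exp(X @ gamma)
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ y)

print("mean parameters:", np.round(beta, 3), "variance parameters:", np.round(gamma, 3))
```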

9.
To evaluate regional energy efficiency objectively, a new classification-based comparison method is proposed in which prefecture-level cities are grouped by uncontrollable factors, so that energy efficiency can be compared across comparable regions. The classification problem involves both determining the number of classes and selecting the classification method: the number of classes and the classification rule are determined by combining prediction strength with fundamental factors, and a k-nearest-neighbour classifier is then used to predict class membership from the remaining uncontrollable factors, which avoids the so-called self-evaluation problem. The resulting composite classification is used to evaluate the energy efficiency of a number of cities, helping them identify effective measures for improving energy efficiency.
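A minimal sketch of the k-nearest-neighbour step: cities whose class is already fixed by the fundamental factors serve as training data, and the remaining cities are assigned a class from their uncontrollable factors. All data below are synthetic placeholders:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(11)
n_labeled = 120
# Columns stand in for uncontrollable factors, e.g. climate, resource endowment, industry mix.
X_labeled = rng.normal(size=(n_labeled, 3))
# Class assigned from the fundamental factors (here a synthetic rule).
y_labeled = (X_labeled[:, 0] + 0.5 * X_labeled[:, 1] > 0).astype(int)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_labeled, y_labeled)

X_new = rng.normal(size=(10, 3))          # cities still to be classified
print("predicted classes:", knn.predict(X_new))
```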

10.
Summary.  This is a response to Stone's criticisms of the Spottiswoode report to the UK Treasury which was responding to the Treasury's request for improved methods to evaluate the efficiency and productivity of the 43 police districts in England and Wales. The Spottiswoode report recommended uses of data envelopment analysis (DEA) and stochastic frontier analysis (SFA), which Stone critiqued en route to proposing an alternative approach. Here we note some of the most serious errors in his criticism and inaccurate portrayals of DEA and SFA. Most of our attention is devoted to DEA, and to Stone's recommended alternative approach without much attention to SFA, partly because of his abbreviated discussion of the latter. In our response we attempt to be constructive as well as critical by showing how Stone's proposed approach can be joined to DEA to expand his proposal beyond limitations in his formulations.
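For reference, the basic input-oriented, constant-returns (CCR) DEA efficiency score can be computed with a small linear program; this is only the textbook envelopment model on made-up data, not the analysis debated in the report:

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[20., 30., 40., 20., 10.],            # inputs  (rows: input, cols: units)
              [300., 200., 100., 200., 400.]])
Y = np.array([[1000., 1000., 1000., 500., 800.]])   # outputs (rows: output, cols: units)

m, n = X.shape
s = Y.shape[0]

def ccr_score(o):
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, o]          # sum_j lambda_j x_ij <= theta * x_io
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y               # sum_j lambda_j y_rj >= y_ro
    b_ub[m:] = -Y[:, o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

for o in range(n):
    print(f"unit {o}: efficiency = {ccr_score(o):.3f}")
```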

11.
System characteristics of a redundant repairable system are studied from a Bayesian viewpoint with different types of priors assumed for the unknown parameters. The system consists of two primary units, one standby unit, and one repair facility which is activated when switching to standby fails. Times to failure and times to repair of the operating units are assumed to follow exponential distributions. When time to failure and time to repair have uncertain parameters, a Bayesian approach is adopted to evaluate system characteristics. Monte Carlo simulation is used to derive the posterior distribution for the mean time to system failure and steady-state availability. Some numerical experiments are performed to illustrate the results derived in this paper.
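A minimal sketch of the Bayesian Monte Carlo idea on a much simpler system than the paper's (a single repairable unit rather than two primaries plus a standby): Gamma priors on the exponential failure and repair rates, and posterior draws of MTTF and steady-state availability. All data are simulated:

```python
import numpy as np

rng = np.random.default_rng(2024)

fail_times = rng.exponential(scale=100.0, size=30)   # hypothetical observed failure times
repair_times = rng.exponential(scale=5.0, size=30)   # hypothetical observed repair times

a0, b0 = 0.01, 0.01                                   # vague Gamma(a0, b0) priors on the rates
lam = rng.gamma(a0 + len(fail_times), 1.0 / (b0 + fail_times.sum()), 20000)
mu = rng.gamma(a0 + len(repair_times), 1.0 / (b0 + repair_times.sum()), 20000)

mttf = 1.0 / lam                                      # posterior draws of MTTF
avail = mu / (lam + mu)                               # posterior draws of steady-state availability
print("posterior mean MTTF:", round(mttf.mean(), 1))
print("95% interval for availability:", np.quantile(avail, [0.025, 0.975]))
```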

12.
Clinical trials are often designed to compare several treatments with a common control arm in pairwise fashion. In this paper we study optimal designs for such studies, based on minimizing the total number of patients required to achieve a given level of power. A common approach when designing studies to compare several treatments with a control is to achieve the desired power for each individual pairwise treatment comparison. However, it is often more appropriate to characterize power in terms of the family of null hypotheses being tested, and to control the probability of rejecting all, or alternatively any, of these individual hypotheses. While all approaches lead to unbalanced designs with more patients allocated to the control arm, it is found that the optimal design and required number of patients can vary substantially depending on the chosen characterization of power. The methods make allowance for both continuous and binary outcomes and are illustrated with reference to two clinical trials, one involving multiple doses compared to placebo and the other involving combination therapy compared to mono-therapies. In one example a 55% reduction in sample size is achieved through an optimal design combined with the appropriate characterization of power.
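The contrast between the two characterizations of power (rejecting any versus all of the pairwise nulls) can be sketched by simulation with Bonferroni-adjusted z-tests on synthetic normal data; the allocation, effect size and sample sizes below are purely illustrative:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
k, delta, sigma, alpha = 3, 0.5, 1.0, 0.05
n_trt, n_ctl = 60, 100                      # unbalanced: more patients on control
crit = norm.ppf(1 - alpha / (2 * k))        # Bonferroni critical value

def simulate_power(n_sim=20000):
    rej_any = rej_all = 0
    for _ in range(n_sim):
        ctl = rng.normal(0.0, sigma, n_ctl)
        z = np.empty(k)
        for j in range(k):
            trt = rng.normal(delta, sigma, n_trt)
            se = sigma * np.sqrt(1 / n_trt + 1 / n_ctl)
            z[j] = (trt.mean() - ctl.mean()) / se
        rej = z > crit
        rej_any += rej.any()
        rej_all += rej.all()
    return rej_any / n_sim, rej_all / n_sim

p_any, p_all = simulate_power()
print(f"power to reject ANY null: {p_any:.3f}, power to reject ALL nulls: {p_all:.3f}")
```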

13.
Damage to materials of construction by atmospheric pollutants falls into three categories: corrosion, soiling, and cleaning damage. All three occur in varying degrees in the absence of pollution; to assess the impact of pollutants on materials of construction, therefore, it is necessary to determine the incremental impact attributable to the pollutants. Research to date has tended toward the collection of data on the effects of inadequately characterized pollutant complexes on inadequately characterized materials, followed by the imposition of arbitrary regression lines on the data, ignoring the underlying chemical kinetics. An outlook on the problem is given and approaches are suggested that will decompose the series of steps into manageable portions. This viewpoint should lead to increased understanding of the phenomena involved and thus to the ability to set rational thresholds of impact as an ingredient in the process of considering the possibility of basing ambient environmental standards on such data.

14.
A database of failures of many types of medical equipment was analysed, to study the dependence of failure rate on equipment age and on time since repair. The intention was to use this large dataset to assess the validity of some widely-used models of failure rate, such as the power-law and loglinear Poisson processes, and so to recommend simple and adequate models to those practitioners having little data to discriminate between rival models. The aim is also to illustrate a methodology for computing policy costs from failure databases. The power-law process model was found to fit slightly better overall than did the loglinear and linear processes. Some related models were created to fit an observed peaking of failure rate. The data showed a decreasing hazard of (first) failure after repair for some equipment types. This can be due to imperfect or hazardous repair, and also to differing failure rates among a population of machines. Two simple models of imperfect repair were used to fit the data, and an Empirical Bayes method was used to fit a model of variable failure rate between machines. Neglect of such variation can lead to an over-estimate of the hazardousness of repair.
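A minimal sketch of fitting the power-law (Crow-AMSAA) Poisson process to failure times observed over a fixed window, using the closed-form maximum-likelihood estimates for time-truncated data; the failure times are simulated rather than taken from the medical-equipment database:

```python
import numpy as np

rng = np.random.default_rng(6)
T = 1000.0                                  # length of the observation window
true_beta, true_theta = 1.4, 120.0          # intensity (beta/theta) * (t/theta)^(beta-1)

# Simulate an NHPP by drawing the count, then event times via the inverse
# cumulative intensity (t/theta)^beta.
n = rng.poisson((T / true_theta) ** true_beta)
t = true_theta * rng.uniform(0, (T / true_theta) ** true_beta, n) ** (1 / true_beta)
t.sort()

# Closed-form MLEs for time-truncated data:
beta_hat = n / np.sum(np.log(T / t))
theta_hat = T / n ** (1 / beta_hat)
print(f"n = {n}, beta_hat = {beta_hat:.2f} (beta > 1 suggests an increasing failure rate)")
print(f"theta_hat = {theta_hat:.1f}")
```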

15.
Summary.  We propose to use calibrated imputation to compensate for missing values. This technique consists of finding final imputed values that are as close as possible to preliminary imputed values and are calibrated to satisfy constraints. Preliminary imputed values, potentially justified by an imputation model, are obtained through deterministic single imputation. Using appropriate constraints, the resulting imputed estimator is asymptotically unbiased for estimation of linear population parameters such as domain totals. A quasi-model-assisted approach is considered in the sense that inferences do not depend on the validity of an imputation model and are made with respect to the sampling design and a non-response model. An imputation model may still be used to generate imputed values and thus to improve the efficiency of the imputed estimator. This approach has the characteristic of handling naturally the situation where more than one imputation method is used owing to missing values in the variables that are used to obtain imputed values. We use the Taylor linearization technique to obtain a variance estimator under a general non-response model. For the logistic non-response model, we show that ignoring the effect of estimating the non-response model parameters leads to overestimating the variance of the imputed estimator. In practice, the overestimation is expected to be moderate or even negligible, as shown in a simulation study.
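The calibration step has a simple least-distance form: move the preliminary imputed values as little as possible while forcing the imputed values to satisfy linear constraints such as benchmark totals. A minimal sketch with made-up numbers and two constraints:

```python
import numpy as np

z0 = np.array([12.0, 7.5, 9.0, 20.0, 15.5])      # preliminary imputed values
A = np.array([[1., 1., 1., 0., 0.],              # total of the first domain
              [0., 0., 0., 1., 1.]])             # total of the second domain
b = np.array([30.0, 34.0])                       # benchmark totals to hit

# Closed-form solution of: minimize ||z - z0||^2 subject to A z = b.
lam = np.linalg.solve(A @ A.T, b - A @ z0)
z = z0 + A.T @ lam

print("final imputed values:", z)
print("constraints satisfied:", np.allclose(A @ z, b))
```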

16.
Compared with the ASRF model, the GCRM model can produce a capital figure consistent with the objective of economic-capital measurement, but the existing literature does not give an explicit characterization of the yield to maturity in the GCRM model. By assuming that an asset's yield to maturity is linked to its credit economic capital, this paper derives a GCRM-based approach to measuring credit economic capital and pricing loans. The approach captures how the borrower's probability of default and loss given default, as well as the commercial bank's risk preference (target payment probability) and cost of capital funding, affect economic capital and loan pricing, providing a reference for related decisions by commercial banks.
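For context, the baseline ASRF (one-factor Vasicek) capital formula and a simple cost-plus loan rate built on it can be sketched as follows; this is the benchmark the abstract contrasts with, not the GCRM refinement it proposes, and all parameter values are illustrative:

```python
from scipy.stats import norm

def asrf_capital(pd, lgd, rho, target_prob=0.999):
    """Capital per unit exposure under the one-factor ASRF model."""
    q = (norm.ppf(pd) + rho ** 0.5 * norm.ppf(target_prob)) / (1 - rho) ** 0.5
    return lgd * (norm.cdf(q) - pd)

def loan_rate(pd, lgd, rho, funding_cost, cost_of_capital, target_prob=0.999):
    """Break-even rate = funding cost + expected loss + capital charge."""
    k = asrf_capital(pd, lgd, rho, target_prob)
    return funding_cost + pd * lgd + cost_of_capital * k

print("capital per unit exposure:", round(asrf_capital(pd=0.02, lgd=0.45, rho=0.15), 4))
print("break-even loan rate:", round(loan_rate(0.02, 0.45, 0.15,
                                               funding_cost=0.03, cost_of_capital=0.12), 4))
```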

17.
With increased costs of drug development, the need for efficient studies has become critical. A key decision point on the development pathway has become the proof-of-concept study. These studies must provide the project teams with clear information for deciding whether to develop a drug candidate further, and must also provide evidence that any effect size is sufficient to warrant this development given the current market environment. Our case study outlines one such proof-of-concept trial in which a new candidate therapy for neuropathic pain was investigated to assess dose-response and to evaluate the magnitude of its effect compared to placebo. A Normal Dynamic Linear Model was used to estimate the dose-response, enforcing some smoothness in the dose-response while allowing for the fact that it may be non-monotonic. A pragmatic, parallel-group study design was used, with interim analyses scheduled to allow the sponsor to drop ineffective doses or to stop the study. Simulations were performed to assess the operating characteristics of the study design. The study results are presented. Significant cost savings were made when it transpired that the new candidate drug did not show superior efficacy compared with placebo and the study was stopped.
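A static analogue of the smoothing idea: a Gaussian second-order random-walk prior across doses pulls the estimated dose-response towards smoothness without forcing monotonicity. This is only a sketch of the principle, not the trial's NDLM, and all doses, means and variances are made up:

```python
import numpy as np

doses = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
ybar = np.array([0.1, 0.4, 1.2, 1.1, 1.3])      # observed mean response per dose
n = np.array([40, 40, 40, 40, 40])              # patients per dose
sigma2 = 1.0                                     # within-dose variance (assumed known)
tau2 = 0.25                                      # smoothness: variance of second differences

K = len(doses)
D2 = np.zeros((K - 2, K))                        # second-difference operator
for i in range(K - 2):
    D2[i, i:i + 3] = [1.0, -2.0, 1.0]

prec_data = np.diag(n / sigma2)                  # likelihood precision
prec_prior = D2.T @ D2 / tau2                    # smoothing prior precision
post_prec = prec_data + prec_prior
post_mean = np.linalg.solve(post_prec, prec_data @ ybar)

print("observed means:  ", ybar)
print("smoothed response:", np.round(post_mean, 2))
```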

18.
This article describes a new approach to learning curve estimation. Our approach is to formulate statistical procedures that conform to alternative learning curve theories. This leads to the development of nonlinear statistical models of the learning curves. For the three data sets analyzed, autocorrelation seems to be an important problem. Parameter estimates were derived using the maximum likelihood principle in the presence of first-order autocorrelation. Nonnested tests were used to select the appropriate formulation of the learning curve. Research conclusions are to use unit data when estimating a learning curve and to be prepared to treat autocorrelation if present.
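A minimal sketch of fitting a log-linear learning curve with first-order autocorrelated errors by a Cochrane-Orcutt style iteration; this is a simpler stand-in for the full maximum-likelihood fit described in the article, on simulated unit data:

```python
import numpy as np

rng = np.random.default_rng(9)
n, true_b, true_rho = 80, -0.3, 0.6
x = np.arange(1, n + 1)                          # cumulative unit number
e = np.zeros(n)
for t in range(1, n):                            # AR(1) errors
    e[t] = true_rho * e[t - 1] + rng.normal(0, 0.05)
log_y = np.log(100.0) + true_b * np.log(x) + e   # log unit cost: log a + b log x + e

X = np.column_stack([np.ones(n), np.log(x)])
beta = np.linalg.lstsq(X, log_y, rcond=None)[0]  # OLS start
rho = 0.0

for _ in range(25):
    resid = log_y - X @ beta
    rho = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
    ys = log_y[1:] - rho * log_y[:-1]            # quasi-difference the data
    Xs = X[1:] - rho * X[:-1]
    beta = np.linalg.lstsq(Xs, ys, rcond=None)[0]

print(f"learning-curve exponent b = {beta[1]:.3f}, AR(1) coefficient = {rho:.3f}")
```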

19.
The product limit or Kaplan-Meier (KM) estimator is commonly used to estimate the survival function in the presence of incomplete time-to-event data. Application of this method inherently assumes that the occurrence of an event is known with certainty. However, the clinical diagnosis of an event is often subject to misclassification due to assay error or adjudication error, by which the event is assessed with some uncertainty. In the presence of such errors, the true distribution of the time to first event would not be estimated accurately using the KM method. We develop a method to estimate the true survival distribution by incorporating negative predictive values and positive predictive values into a KM-like method of estimation. This allows us to quantify the bias in the KM survival estimates due to the presence of misclassified events in the observed data. We present an unbiased estimator of the true survival function and its variance. Asymptotic properties of the proposed estimators are provided, and these properties are examined through simulations. We demonstrate our methods using data from the Viral Resistance to Antiviral Therapy of Hepatitis C study.

20.
The aim of this study is to investigate the early development of body mass index (BMI), a standard tool for assessing body shape and average level of adiposity for children and adults. The main aim of the study is to identify the primary trajectories of BMI development and to investigate the changes of certain growth characteristics over time. Our longitudinal data cover 4223 Finnish children, with anthropometric measurements from birth up to 15 years of age for the birth years 1974, 1981, 1991 and 1995, but only up to 11 years of age for the birth year 2001. As a statistical method, we utilized trajectory analysis combined with nonparametric regression. We identified four main trajectories of BMI growth. Two of these trajectories do not seem to follow the normal growth pattern. The highest growth track appears to lead to a track that may result in overweight, and the low birth-BMI track shows that the girls' track differs from that of the boys on the same track, as well as on the normal tracks. The so-called adiposity rebound time decreased over time and started earlier for those on the overweight track. According to our study, this kind of acceleration of growth might be a more general phenomenon that also relates to the other phases of BMI development. The major change seems to occur especially for those children on high growth tracks.
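A minimal sketch of a trajectory-style analysis: summarize each child's BMI-versus-age curve with low-order polynomial coefficients and cluster the coefficient vectors into a small number of growth tracks. The data are simulated, and k-means stands in for the nonparametric trajectory method used in the study:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(12)
n_children, ages = 500, np.linspace(0.5, 15, 12)

# Simulate three latent growth patterns plus noise.
group = rng.integers(0, 3, n_children)
base = np.array([15.5, 16.5, 18.0])[group]
slope = np.array([0.10, 0.20, 0.35])[group]
bmi = base[:, None] + slope[:, None] * ages + rng.normal(0, 0.6, (n_children, len(ages)))

# Per-child quadratic fit of BMI on age, then cluster the coefficient vectors.
coef = np.array([np.polyfit(ages, row, deg=2) for row in bmi])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coef)

for k in range(3):
    print(f"track {k}: {np.sum(labels == k)} children, "
          f"mean BMI at age 15 = {bmi[labels == k, -1].mean():.1f}")
```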
