Similar Documents
20 similar documents found (search time: 62 ms).
1.
Multiple imputation has emerged as a widely used model-based approach to incomplete data in many application areas. Gaussian and log-linear imputation models are fairly straightforward to implement for continuous and discrete data, respectively. However, in missing-data settings that include a mix of continuous and discrete variables, correct specification of the imputation model can be a daunting task owing to the lack of flexible models for the joint distribution of variables of different natures. This complication, along with ready access to software packages capable of carrying out multiple imputation under the assumption of joint multivariate normality, appears to encourage applied researchers to pragmatically treat the discrete variables as continuous for imputation purposes and subsequently round the imputed values to the nearest observed category. In this article, I introduce a distance-based rounding approach for ordinal variables in the presence of continuous ones. The first step of the proposed rounding process creates indicator variables corresponding to the ordinal levels; all variables are then jointly imputed under the assumption of multivariate normality. The imputed values are finally converted to the ordinal scale based on their Euclidean distances to the set of indicators, with minimal distance corresponding to the closest match. I compare the performance of this technique to crude rounding via commonly accepted accuracy and precision measures on simulated data sets.
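A minimal numpy sketch of the distance-based rounding step, assuming the joint imputation has already produced continuous draws for the K indicator columns of an ordinal variable; the indicator coding and all names are illustrative, not the author's code.

```python
import numpy as np

def distance_based_round(imputed_indicators, K):
    """Map imputed indicator rows back to ordinal categories 1..K.

    Each ordinal level k is represented by the indicator vector e_k
    (1 in position k, 0 elsewhere). An imputed row is assigned to the
    level whose indicator vector is closest in Euclidean distance.
    """
    levels = np.eye(K)                      # e_1, ..., e_K as rows
    # squared Euclidean distance from every imputed row to every e_k
    d2 = ((imputed_indicators[:, None, :] - levels[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1) + 1            # ordinal labels 1..K

# toy example: imputed (continuous) indicator rows for a 3-level variable
rng = np.random.default_rng(0)
imp = rng.normal(loc=[[0.9, 0.1, 0.0]], scale=0.3, size=(5, 3))
print(distance_based_round(imp, K=3))
```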

2.
A novel family of mixture models is introduced based on modified t-factor analyzers. Modified factor analyzers were recently introduced within the Gaussian context, and our work presents a more flexible and robust alternative. We introduce a family of mixtures of modified t-factor analyzers that uses this generalized version of the factor-analysis covariance structure. We apply this family within three paradigms: model-based clustering, model-based classification, and model-based discriminant analysis. In addition, we apply the recently published Gaussian analogue of this family under the model-based classification and discriminant analysis paradigms for the first time. Parameter estimation is carried out within the alternating expectation-conditional maximization framework, and the Bayesian information criterion is used for model selection. Two real data sets are used to compare our approach to other popular model-based approaches; in these comparisons, the chosen mixture of modified t-factor analyzers performs favourably. We conclude with a summary and suggestions for future work.

3.
We propose two probability-like measures of individual cluster-membership certainty that can be applied to a hard partition of the sample such as that obtained from the partitioning around medoids (PAM) algorithm, hierarchical clustering or k-means clustering. One measure extends the individual silhouette widths and the other is obtained directly from the pairwise dissimilarities in the sample. Unlike the classic silhouette, however, the measures behave like probabilities and can be used to investigate an individual's tendency to belong to a cluster. We also suggest two possible ways to evaluate the hard partition using these measures. We evaluate the performance of both measures on individuals with ambiguous cluster membership, using simulated binary datasets that have been partitioned by the PAM algorithm or continuous datasets that have been partitioned by hierarchical clustering and k-means clustering. For comparison, we also present results from soft-clustering algorithms such as fuzzy analysis clustering (FANNY) and two model-based clustering methods. Our proposed measures perform comparably to the posterior probability estimators from either FANNY or the model-based clustering methods. We also illustrate the proposed measures by applying them to Fisher's classic dataset on irises.
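One plausible instantiation of a probability-like membership measure built directly from pairwise dissimilarities (the paper's exact formula is not reproduced here): convert each point's mean dissimilarity to every cluster into normalized, probability-like weights.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances

def membership_certainty(X, labels):
    """Probability-like cluster-membership weights from pairwise
    dissimilarities: the inverse of the mean dissimilarity of point i
    to each cluster, normalized to sum to one over clusters.
    (The point's own zero distance is included; adequate for a sketch.)"""
    D = pairwise_distances(X)
    clusters = np.unique(labels)
    avg = np.column_stack([D[:, labels == c].mean(axis=1) for c in clusters])
    w = 1.0 / (avg + 1e-12)                  # closer cluster -> larger weight
    return w / w.sum(axis=1, keepdims=True)  # rows sum to 1, like probabilities

# example on a hard k-means partition
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(4, 1, (30, 2))])
labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X)
P = membership_certainty(X, labels)
print(P[:3])  # ambiguous points have weights near 0.5/0.5
```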

4.
In this article we compare some common ratio estimators for estimating the population total of a given characteristic. The sampling schemes considered are simple random sampling (S.R.S.) and S.R.S. under stratification. The comparisons are made using the Pitman Nearness (PN) criterion under the model-based approach. The error term is assumed normal with mean zero and variance σ²g(x), where g(x) is a known function of the auxiliary variable x. Special interest lies in the cases g(x) = 1 and g(x) = x. The results agree with those obtained under the MSE criterion, although the PN criterion is intrinsically quite different from MSE.
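A small simulation sketch of the PN comparison under the stated model, PN(T1, T2) = P(|T1 - θ| < |T2 - θ|); the population parameters and the second (expansion) estimator are illustrative choices, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(2)
N, n, beta, sigma, reps = 200, 30, 2.0, 1.0, 5000
x = rng.uniform(1, 10, N)

wins = 0
for _ in range(reps):
    # model-based population: y_i = beta*x_i + e_i, Var(e_i) = sigma^2*g(x_i), g(x)=x
    y = beta * x + rng.normal(0, sigma * np.sqrt(x))
    total = y.sum()                              # true population total
    s = rng.choice(N, n, replace=False)          # S.R.S. without replacement
    t_ratio = y[s].sum() / x[s].sum() * x.sum()  # classical ratio estimator
    t_mean  = y[s].mean() * N                    # expansion estimator
    wins += abs(t_ratio - total) < abs(t_mean - total)

print("PN(ratio, expansion) =", wins / reps)  # > 0.5 favours the ratio estimator
```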

5.
Communications in Statistics: Theory and Methods, 2012, 41(13-14): 2342-2355
We propose a distance-based method to relate two data sets. We define and study some measures of multivariate association based on distances between observations. The proposed approach can be used with general data sets (e.g., observations on continuous, categorical, or mixed variables). An application using the Hellinger distance reveals the relationships between two regions of hyperspectral images.
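For reference, the Hellinger distance between two discrete distributions p and q is H(p, q) = (1/√2)·‖√p − √q‖₂. A minimal sketch, with normalized band histograms as illustrative stand-ins for the paper's hyperspectral data:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    return np.sqrt(0.5 * ((np.sqrt(p) - np.sqrt(q)) ** 2).sum())

# e.g. normalized intensity histograms of two image regions
region_a = np.array([10, 40, 30, 20])
region_b = np.array([25, 25, 25, 25])
print(hellinger(region_a, region_b))  # 0 = identical, 1 = disjoint support
```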

6.
This paper describes an application of small area estimation (SAE) techniques under area-level spatial random effect models when only area (or district or aggregated) level data are available. In particular, the SAE approach is applied to produce district-level model-based estimates of crop yield for paddy in the state of Uttar Pradesh in India, using data on crop-cutting experiments supervised under the Improvement of Crop Statistics scheme and secondary data from the Population Census. Diagnostic measures are illustrated to examine the model assumptions as well as the reliability and validity of the generated model-based small area estimates. The results show a considerable gain in precision for the model-based estimates produced by applying SAE. Furthermore, the model-based estimates obtained by exploiting spatial information are more efficient than those obtained by ignoring it. However, both of these model-based estimates are more efficient than the direct survey estimate. Many districts have no survey data, so direct survey estimates cannot be produced for them; the model-based estimates generated using SAE are still reliable for such districts. These estimates will provide invaluable information to policy analysts and decision makers.

7.
Nowadays, the sensory properties of materials receive growing attention from both a hedonic and a utilitarian point of view. Hence, laying the foundations of an instrumental metrological approach that allows the characterization of visual similarities between textures of the same type has become a challenge for research on perception. In this paper, our specific objective is to link an instrumental approach to the metrology of visual-texture assessment with a metrological approach based on a softcopy experiment performed by human judges. The experiment consisted in ranking isochromatic colored textures according to visual contrast. A fixed-effects additive model is considered for the analysis of the rank data collected from the softcopy experiment, and the model is fitted to the data using a least-squares criterion. The resulting analysis gives rise to a sensory scale that shows a non-linear correlation and a monotonic functional relationship with the physical attribute on which the ranking experiment is based. Furthermore, the judges' capacity to discriminate the textures according to visual contrast varies across color ranges and texture types.

8.
The generalized estimating equation (GEE) is a popular method for analyzing correlated response data. It is important to choose a proper working correlation matrix when applying the GEE, since an improper selection can result in inefficient parameter estimates. We propose a criterion for the selection of an appropriate working correlation structure. The proposed criterion is based on a statistic for testing the hypothesis that the covariance matrix equals a given matrix, and it measures the discrepancy between the covariance matrix estimator and the specified working covariance matrix. We evaluated the performance of the proposed criterion through simulation studies assuming an equal number of observations per subject. The results revealed that with the proposed criterion, the proportion of selecting the true correlation structure was generally higher than with competing approaches. The proposed criterion was applied to longitudinal wheeze data, suggesting that the resulting correlation structure was the most accurate.
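A sketch of the selection idea with statsmodels: fit the GEE under several working structures and score each by a discrepancy between the empirical correlation of the stacked Pearson residuals and the fitted working correlation. The Frobenius-norm discrepancy below is an illustrative stand-in for the paper's test-statistic-based criterion; the balanced design and variable names are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.genmod.cov_struct import Independence, Exchangeable

rng = np.random.default_rng(3)
n, T, rho = 200, 4, 0.5
R_true = (1 - rho) * np.eye(T) + rho * np.ones((T, T))  # exchangeable truth
e = rng.multivariate_normal(np.zeros(T), R_true, size=n)
x = rng.normal(size=(n, T))
y = 1.0 + 0.5 * x + e
df = pd.DataFrame({"y": y.ravel(), "x": x.ravel(),
                   "id": np.repeat(np.arange(n), T)})

def discrepancy(cov_struct):
    model = sm.GEE.from_formula("y ~ x", groups="id", data=df,
                                cov_struct=cov_struct)
    res = model.fit()
    resid = res.resid_pearson.reshape(n, T)   # balanced: one row per subject
    R_emp = np.corrcoef(resid, rowvar=False)  # empirical residual correlation
    R_w = model.cov_struct.covariance_matrix(np.zeros(T), 0)[0]
    return np.linalg.norm(R_emp - R_w, "fro") # smaller = closer match

for cs in (Independence(), Exchangeable()):
    print(type(cs).__name__, round(discrepancy(cs), 3))
```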

9.
Model-based estimators are becoming very popular in statistical offices because governments require accurate estimates for small domains that were not planned for when the study was designed, as their inclusion would have increased the cost of the study. The sample sizes in these domains are very small or even zero; consequently, traditional direct design-based estimators lead to unacceptably large standard errors. In this regard, model-based estimators that 'borrow information' from related areas by using auxiliary information are appropriate. This paper reviews, under the model-based approach, a BLUP synthetic estimator and an EBLUP estimator. The goal is to obtain estimators of domain totals when several domains have very small sample sizes or no sampled units at all. We also provide detailed expressions for the mean squared error at different levels of aggregation. The results are illustrated with real data from the Basque Country Business Survey.
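A compact sketch of the area-level EBLUP idea in Fay-Herriot form, with a crude moment estimate of the area-effect variance (Prasad-Rao-type estimators refine this); the synthetic part is the regression prediction toward which poorly sampled domains are shrunk. All names and the toy data are illustrative.

```python
import numpy as np

def fay_herriot_eblup(y, X, D):
    """EBLUP of area means under y_i = x_i'beta + v_i + e_i,
    v_i ~ N(0, s2v), e_i ~ N(0, D_i) with known sampling variances D_i."""
    n, p = X.shape
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_ols
    s2v = max(0.0, (resid @ resid - D.sum()) / (n - p))  # crude moment estimate
    w = 1.0 / (s2v + D)                       # GLS weights given s2v
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    synth = X @ beta                          # synthetic (regression) estimate
    gamma = s2v / (s2v + D)                   # shrinkage toward the direct estimate
    return gamma * y + (1 - gamma) * synth, synth

# toy data: direct estimates y with known sampling variances D
rng = np.random.default_rng(4)
m = 25
X = np.column_stack([np.ones(m), rng.normal(size=m)])
D = rng.uniform(0.2, 1.0, m)
theta = X @ np.array([2.0, 1.0]) + rng.normal(0, 0.5, m)
y = theta + rng.normal(0, np.sqrt(D))
eblup, synth = fay_herriot_eblup(y, X, D)
# domains with huge D (tiny samples) have gamma near 0 and fall back on synth
```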

10.
In the framework of model-based cluster analysis, finite mixtures of Gaussian components represent an important class of statistical models widely employed for dealing with quantitative variables. Within this class, we propose novel models in which constraints on the component-specific variance matrices allow us to define Gaussian parsimonious clustering models. Specifically, the proposed models are obtained by assuming that the variables can be partitioned into groups that are conditionally independent within components, thus producing component-specific variance matrices with a block-diagonal structure. This approach allows us to extend the methods for model-based cluster analysis and to make them more flexible and versatile. In this paper, Gaussian mixture models are studied under the above-mentioned assumption. Identifiability conditions are proved, and the model parameters are estimated through the maximum likelihood method using the Expectation-Maximization algorithm. The Bayesian information criterion is proposed for selecting the partition of the variables into conditionally independent groups, and the consistency of this criterion is proved under regularity conditions. In order to examine and compare models with different partitions of the set of variables, a hierarchical algorithm is suggested. A wide class of parsimonious Gaussian models is also presented by parameterizing the component-variance matrices according to their spectral decomposition. The effectiveness and usefulness of the proposed methodology are illustrated with two examples based on real datasets.
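The key computational consequence of the conditional-independence assumption is that a component's Gaussian log-density factorizes over the variable blocks, so each block's covariance can be handled separately. A small sketch verifying this with scipy (the block partition is chosen arbitrarily):

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.linalg import block_diag

rng = np.random.default_rng(5)
# two blocks of variables, conditionally independent within the component
A = rng.normal(size=(2, 2)); S1 = A @ A.T + np.eye(2)   # 2x2 block
B = rng.normal(size=(3, 3)); S2 = B @ B.T + np.eye(3)   # 3x3 block
Sigma = block_diag(S1, S2)                              # component covariance
mu = np.zeros(5)
x = rng.normal(size=5)

full = multivariate_normal(mu, Sigma).logpdf(x)
split = (multivariate_normal(mu[:2], S1).logpdf(x[:2]) +
         multivariate_normal(mu[2:], S2).logpdf(x[2:]))
print(np.isclose(full, split))  # True: the log-density factorizes over blocks

# free covariance parameters drop from d(d+1)/2 = 15 to 3 + 6 = 9 here
```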

11.
There is growing interest in fully MR-based radiotherapy, for which the most important development needed is improved bone tissue estimation; the existing model-based methods perform poorly on bone tissues. This paper aims to obtain improved bone tissue estimation. A skew-Gaussian mixture model and a Gaussian mixture model were proposed to investigate CT image estimation from MR images by partitioning the data into two major tissue types. The performance of the proposed models was evaluated using leave-one-out cross-validation on real data. In comparison with existing model-based approaches, the model-based partitioning approach performed better in bone tissue estimation, especially for dense bone tissue.

12.
The p-value-based adjustment of individual endpoints and the global test for an overall inference are the two general approaches for the analysis of multiple endpoints. Statistical procedures developed for testing multivariate outcomes often assume that the multivariate endpoints are either independent or normally distributed. This paper presents a general approach for the analysis of multivariate binary data under the framework of generalized linear models. The generalized estimating equations (GEE) approach is applied to estimate the correlation matrix of the test statistics using the identity and exchangeable working correlation matrices with model-based as well as robust estimators. The objectives are the adjustment of p-values of individual endpoints to identify the affected endpoints, as well as a global test of an overall effect. A Monte Carlo simulation was conducted to evaluate the overall familywise error (FWE) rates of the single-step-down p-value adjustment approach under two adjustment methods against three global test statistics. The p-value adjustment approach seems to control the FWE better than the global approach. Applications of the proposed methods are illustrated by analyzing a carcinogenicity experiment designed to study the dose-response trend for 10 tumor sites, and a developmental toxicity experiment with three malformation types: external, visceral, and skeletal.
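For the step-down adjustment, a minimal sketch of the Holm procedure, one standard step-down method; the paper's version additionally exploits the GEE-estimated correlation of the test statistics, which this sketch does not reproduce.

```python
import numpy as np

def holm_step_down(pvals):
    """Holm step-down adjusted p-values (controls the FWE rate)."""
    p = np.asarray(pvals, float)
    m = len(p)
    order = np.argsort(p)           # step down from the smallest p-value
    adj = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, (m - rank) * p[idx])  # enforce monotonicity
        adj[idx] = min(1.0, running_max)
    return adj

# e.g. p-values for 10 tumour-site endpoints
p = [0.001, 0.008, 0.039, 0.041, 0.20, 0.27, 0.34, 0.51, 0.74, 0.98]
print(holm_step_down(p))
```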

13.
In the analysis of retrospective data or when interpreting results from a single-arm phase II clinical trial relative to historical data, it is often of interest to show plots summarizing time-to-event outcomes comparing treatment groups. If the groups being compared are imbalanced with respect to factors known to influence outcome, these plots can be misleading and seemingly incompatible with results obtained from a regression model that accounts for these imbalances. We consider ways in which covariate information can be used to obtain adjusted curves for time-to-event outcomes. We first review a common model-based method and then suggest another model-based approach that is not as reliant on model assumptions. Finally, an approach that is partially model free is suggested. Each method is applied to an example from hematopoietic cell transplantation.
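A sketch of the common "corrected group prognosis" style of model-based adjustment, assuming the lifelines package: fit a Cox model, then average each subject's predicted survival curve with treatment set to each group in turn. The column names and simulated data are illustrative, not the paper's example.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# illustrative data: time, event, treat (0/1), age
rng = np.random.default_rng(6)
n = 300
age = rng.normal(50, 10, n)
treat = rng.integers(0, 2, n)
hazard = 0.02 * np.exp(-0.5 * treat + 0.03 * (age - 50))
time = rng.exponential(1 / hazard)
event = (time < 60).astype(int)
df = pd.DataFrame({"time": np.minimum(time, 60), "event": event,
                   "treat": treat, "age": age})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")

def adjusted_curve(treat_value):
    """Average predicted survival over the observed covariate mix,
    with treatment set to the given value for everyone."""
    counterfactual = df.drop(columns=["time", "event"]).copy()
    counterfactual["treat"] = treat_value
    return cph.predict_survival_function(counterfactual).mean(axis=1)

s0, s1 = adjusted_curve(0), adjusted_curve(1)  # covariate-adjusted curves
```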

14.
Cluster analysis is one of the most widely used methods in statistical analysis, identifying homogeneous subgroups in a heterogeneous population. Because mixed continuous and discrete data arise in many applications, ordinary clustering methods such as hierarchical methods, k-means, and model-based methods have been extended to the analysis of mixed data. However, in the available model-based clustering methods the number of parameters grows with the number of continuous variables, and identifying and fitting an appropriate model may be difficult. In this paper, to reduce the number of parameters, a set of parsimonious models is introduced for model-based clustering of mixed continuous (normal) and nominal data. Models in this set use the general location model approach to model the distribution of the mixed variables and apply a factor-analyzer structure to the covariance matrices. The ECM algorithm is used to estimate the parameters of these models. To show the performance of the proposed models for clustering, results from simulation studies and analyses of two real data sets are presented.

15.
Incorporating historical data has great potential to improve the efficiency of phase I clinical trials and to accelerate drug development. For model-based designs, such as the continual reassessment method (CRM), this can be conveniently carried out by specifying a "skeleton," that is, the prior estimate of the dose limiting toxicity (DLT) probability at each dose. In contrast, little work has been done to incorporate historical data into model-assisted designs, such as the Bayesian optimal interval (BOIN), Keyboard, and modified toxicity probability interval (mTPI) designs. This has led to the misconception that model-assisted designs cannot incorporate prior information. In this paper, we propose a unified framework that allows for incorporating historical data into model-assisted designs. The proposed approach uses the well-established "skeleton" approach, combined with the concept of prior effective sample size, so it is easy to understand and use. More importantly, our approach maintains the hallmark of model-assisted designs, namely simplicity: the dose escalation/de-escalation rule can be tabulated before the trial is conducted. Extensive simulation studies show that the proposed method can effectively incorporate prior information to improve the operating characteristics of model-assisted designs, similarly to model-based designs.
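A sketch of the skeleton-plus-prior-effective-sample-size idea for an interval (Keyboard/mTPI-like) decision, assuming a conjugate beta prior at each dose: the skeleton value p_k with prior ESS n0 becomes Beta(n0·p_k, n0·(1−p_k)), and the rule compares posterior mass below, inside, and above the target interval. The thresholds, skeleton, and decision rule here are illustrative, not a published design's exact specification.

```python
import numpy as np
from scipy.stats import beta

skeleton = np.array([0.05, 0.12, 0.25, 0.40])  # prior DLT estimates per dose
n0 = 3.0                                       # prior effective sample size
target_lo, target_hi = 0.20, 0.30              # target toxicity interval

def decision(dose, n_treated, n_dlt):
    """Dose decision from posterior interval probabilities; historical
    information enters through the skeleton and the prior ESS."""
    a = n0 * skeleton[dose] + n_dlt
    b = n0 * (1 - skeleton[dose]) + (n_treated - n_dlt)
    p_under = beta.cdf(target_lo, a, b)        # P(p < 0.20 | data)
    p_over = beta.sf(target_hi, a, b)          # P(p > 0.30 | data)
    p_in = 1 - p_under - p_over                # P(p in target interval | data)
    if p_under > max(p_over, p_in):
        return "escalate"
    if p_over > max(p_under, p_in):
        return "de-escalate"
    return "stay"

# e.g. 1 DLT among 6 patients at dose level 2 (0-indexed); the same function
# tabulates the whole escalation rule for all (n_treated, n_dlt) before the trial
print(decision(2, n_treated=6, n_dlt=1))
```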

16.
Missing variances in summary-level data can be a problem when an inverse-variance weighted meta-analysis is undertaken. A wide range of approaches exists for dealing with this issue, such as excluding data without a variance measure, using a function of sample size as a weight, and imputing the missing standard errors/deviations. A nonlinear mixed-effects modelling approach was taken to describe the time-course of standard deviations across 14 studies. The model was then used to predict the missing standard deviations, thus enabling a precision-weighted model-based meta-analysis of a mean pain endpoint over time. Maximum likelihood and Bayesian approaches were implemented, with example code to illustrate how this imputation can be carried out and to compare the output from each method. The resulting imputations were nearly identical for the two approaches. This modelling approach acknowledges that standard deviations are not necessarily constant over time and can differ between treatments and across studies in a predictable way.
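A simplified sketch of the imputation step: a fixed-effects nonlinear least-squares fit of the SD time-course with scipy (the paper fits a nonlinear mixed-effects model across studies, which this does not reproduce), followed by inverse-variance weighting of the means. All data, the saturating time-course, and the arm sizes are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# observed (time, sd) pairs pooled across studies; later SDs are unreported
t_obs = np.array([0, 1, 2, 4, 8, 0, 1, 2], float)
sd_obs = np.array([2.0, 2.4, 2.7, 3.0, 3.2, 1.9, 2.3, 2.6])

def sd_model(t, sd0, sd_max, t50):
    """Simple saturating time-course for the standard deviation."""
    return sd0 + (sd_max - sd0) * t / (t50 + t)

params, _ = curve_fit(sd_model, t_obs, sd_obs, p0=[2.0, 3.5, 2.0])

# impute SDs at the unreported time points, then inverse-variance weight
t_missing = np.array([4.0, 8.0])
sd_imputed = sd_model(t_missing, *params)
n = np.array([40, 40])                         # assumed arm sizes
se2 = sd_imputed**2 / n                        # imputed squared standard errors
means = np.array([3.1, 2.8])                   # reported mean pain scores
pooled = np.sum(means / se2) / np.sum(1 / se2) # precision-weighted mean
```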

17.
A common scenario in finite population inference is that it is possible to find a working superpopulation model which explains the main features of the population but which may not capture all the fine details. In addition, there are often outliers in the population which do not follow the assumed superpopulation model. In situations like these, it is still advantageous to make use of the working model to estimate finite population quantities, provided that we do it in a robust manner. The approach that we suggest is first to fit the working model to the sample and then to fine-tune for departures from the assumed model by estimating the conditional distribution of the residuals as a function of the auxiliary variable. This is a more direct approach to handling outliers and model misspecification than the Huber approach that is currently being used. Two simple methods, stratification and nearest-neighbour smoothing, are used to estimate the conditional distributions of the residuals, resulting in two modifications to the standard model-based estimator of the population distribution function. The suggested estimators perform very well in simulation studies involving two types of model departure, with small variances due to their model-based construction and acceptable bias. The potential advantage of the proposed robustified model-based approach over direct nonparametric regression is also demonstrated.
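A sketch of the fine-tuning idea in its simplest form: fit the working model on the sample, use the empirical distribution of the sample residuals for the non-sampled units, and assemble a model-based estimate of the population distribution function. A stratified or nearest-neighbour-smoothed residual distribution, as the paper proposes, would replace the single pooled residual ECDF below; the linear working model and names are illustrative.

```python
import numpy as np

def model_based_Fhat(t, y_s, x_s, x_r):
    """Estimate F_N(t) = (1/N) * sum_i I(y_i <= t) using sampled values
    directly and model residuals for the non-sampled units."""
    X_s = np.column_stack([np.ones_like(x_s), x_s])
    beta, *_ = np.linalg.lstsq(X_s, y_s, rcond=None)   # working linear model
    resid = y_s - X_s @ beta                           # sample residuals
    pred_r = beta[0] + beta[1] * x_r                   # non-sample predictions
    # P(y_j <= t) for each non-sampled j, from the pooled residual ECDF
    F_r = (resid[None, :] <= (t - pred_r)[:, None]).mean(axis=1)
    N = len(y_s) + len(x_r)
    return ((y_s <= t).sum() + F_r.sum()) / N

rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 500)
y = 1 + 2 * x + rng.normal(0, 2, 500)
s = rng.choice(500, 80, replace=False)                 # sampled units
r = np.setdiff1d(np.arange(500), s)                    # non-sampled units
print(model_based_Fhat(11.0, y[s], x[s], x[r]), (y <= 11.0).mean())
```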

18.
A researcher using complex longitudinal survey data for event history analysis has to make several choices that affect the analysis results. These choices include the following: whether a design-based or a model-based approach is taken, which subset of the data to use and, if a design-based approach is chosen, which weights to use. We discuss the different choices and illustrate their effects using longitudinal register data linked at person level with the Finnish subset of the European Community Household Panel data. The use of register data enables us to construct an event history data set free of nonresponse and attrition. Design-based estimates from these data are used as benchmarks against design-based and model-based estimates from the subsets of data usually available to a survey data analyst. Our illustration suggests that the often-recommended way to use panel data for longitudinal analyses (data from total respondents and weights from the last wave analysed) may not be the best way to go. Instead, using all available data with weights from the first survey wave appears to be a safe choice for longitudinal analyses based on multipurpose survey data.

19.
This case study fits a variety of neural network (NN) models to the well-known airline data and compares the resulting forecasts with those obtained from the Box–Jenkins and Holt–Winters methods. Many potential problems in fitting NN models were revealed, such as the possibility that the fitting routine may not converge or may converge to a local minimum. Moreover, an NN model that fits well may still give poor out-of-sample forecasts. Thus we think it is unwise to apply NN models blindly in 'black box' mode, as has sometimes been suggested. Rather, the wise analyst needs to use traditional modelling skills to select a good NN model, e.g. to select appropriate lagged variables as the 'inputs'. The Bayesian information criterion is preferred to Akaike's information criterion for comparing different models. Methods of examining the response surface implied by an NN model are examined and compared with the results of alternative nonparametric procedures using generalized additive models and projection pursuit regression. The latter imposes less structure on the model and is arguably easier to understand.
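An illustrative sketch of the workflow, assuming sklearn, a synthetic seasonal series standing in for the log airline data, and a BIC-style penalty computed from the residual sum of squares (BIC is not defined for MLPs in sklearn itself, so the Gaussian-approximation formula below is a modelling assumption).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lags(series, lags):
    """Design matrix of lagged values to use as the NN 'inputs'."""
    L = max(lags)
    X = np.column_stack([series[L - l: len(series) - l] for l in lags])
    return X, series[L:]

def bic(y, yhat, n_params):
    n = len(y)
    rss = ((y - yhat) ** 2).sum()
    return n * np.log(rss / n) + n_params * np.log(n)

# stand-in for the log airline passenger series: trend + annual cycle + noise
t = np.arange(144, dtype=float)
series = (5 + 0.01 * t + 0.3 * np.sin(2 * np.pi * t / 12)
          + np.random.default_rng(8).normal(0, 0.05, 144))

for lags in ([1], [1, 12], [1, 2, 12, 13]):       # candidate input sets
    X, y = make_lags(series, lags)
    nn = MLPRegressor(hidden_layer_sizes=(2,), max_iter=5000,
                      random_state=0).fit(X, y)
    k = sum(c.size for c in nn.coefs_) + sum(b.size for b in nn.intercepts_)
    print(lags, round(bic(y, nn.predict(X), k), 1))  # smaller BIC preferred
```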

20.
This article is concerned with testing multiple hypotheses, one for each of a large number of small data sets. Such data are sometimes referred to as high-dimensional, low-sample-size data. Our model assumes that each observation within a randomly selected small data set follows a mixture of C shifted and rescaled versions of an arbitrary density f. A novel kernel density estimation scheme, in conjunction with clustering methods, is applied to estimate f. The Bayesian information criterion and a new criterion, the weighted mean of within-cluster variances, are used to estimate C, the number of mixture components or clusters. These results are applied to the multiple testing problem. The null sampling distribution of each test statistic is determined by f, and hence a bootstrap procedure that resamples from an estimate of f is used to approximate this null distribution.
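The resampling step has a simple form when f is estimated by a Gaussian-kernel density estimate: drawing from the KDE amounts to picking original observations with replacement and adding N(0, h²) noise (a "smoothed bootstrap"). A minimal sketch; the bandwidth rule, test statistic, and p-value centering are illustrative choices.

```python
import numpy as np

def kde_resample(data, size, rng):
    """Draw from a Gaussian-kernel KDE of `data`: pick observations with
    replacement, then perturb each by N(0, h^2)."""
    h = 1.06 * data.std(ddof=1) * len(data) ** (-1 / 5)  # Silverman-type rule
    picks = rng.choice(data, size=size, replace=True)
    return picks + rng.normal(0.0, h, size)

rng = np.random.default_rng(9)
small_sample = rng.normal(0, 1, 8)       # one small data set
# approximate the null distribution of a statistic by resampling from f-hat
stats = np.array([kde_resample(small_sample, 8, rng).mean()
                  for _ in range(2000)])
obs = small_sample.mean()
# illustrative two-sided bootstrap p-value, centred at the bootstrap mean
p_boot = (np.abs(stats - stats.mean()) >= np.abs(obs - stats.mean())).mean()
print(p_boot)
```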
