Similar Literature
20 similar documents found.
1.
The paper describes nonparametric approaches for comparing three-period, two-treatment, four-sequence crossover designs through testing the hypothesis that the treatments are interchangeable. The proposed approaches are based on a model which incorporates, along with the direct treatment effects, self and mixed carryover effects. Related asymptotic results are obtained. Comparisons among the designs are made numerically with respect to the type I error rate and power of the tests, considering compound-symmetry and autoregressive covariance structures for the response variables. Lengths of the confidence intervals of the treatment differences are also used to make a comparative study among the designs.
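As a hedged illustration of how such numerical comparisons are run, the Python sketch below estimates the empirical type I error of a simple rank-based between-sequence test under a compound-symmetry covariance. The two-group design, the within-subject contrast, and the Mann–Whitney test are simplified stand-ins for the paper's four-sequence procedures.

```python
# Minimal sketch (not the paper's method): empirical type I error of a
# rank-based test under compound-symmetry (CS) covariance.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
n_sub, n_per, rho, sigma2 = 20, 3, 0.5, 1.0
cov = sigma2 * ((1 - rho) * np.eye(n_per) + rho * np.ones((n_per, n_per)))  # CS

n_sim, rejections = 2000, 0
for _ in range(n_sim):
    # Two sequence groups, no treatment effect (null hypothesis is true)
    y1 = rng.multivariate_normal(np.zeros(n_per), cov, size=n_sub)
    y2 = rng.multivariate_normal(np.zeros(n_per), cov, size=n_sub)
    # Within-subject contrast as a crude stand-in for a treatment contrast
    d1, d2 = y1[:, 0] - y1[:, 1], y2[:, 0] - y2[:, 1]
    if mannwhitneyu(d1, d2).pvalue < 0.05:
        rejections += 1
print(f"empirical type I error: {rejections / n_sim:.3f}")  # should be near 0.05
```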

2.
For binary response models, pseudo-R² measures that are not based on residuals are typically considered, while several concepts of residuals have been developed for tests. In this paper the endogenous variable of the latent model corresponding to the binary observable model is replaced by a pseudo variable. Goodness-of-fit measures and tests can then be based on a joint concept of residuals, as for linear models. Different kinds of residuals based on probit ML estimates are employed. The analytical investigations and the simulation results lead to the recommendation to use standardized residuals, for which there is no difference between observed and generalized residuals. In none of the investigated situations is this estimator far from the best result. While in large samples all considered estimators are very similar, small-sample properties favour residuals that are modifications of those suggested in the literature. An empirical application demonstrates that it is not necessary to develop new testing procedures for observable models with dichotomous regressands: well-known approaches for linear models with continuous endogenous variables, implemented in standard econometric packages, can be used for pseudo latent models. An erratum to this article is available at .
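A minimal sketch, assuming a probit ML fit on simulated data, of the two residual types contrasted above: standardized (Pearson) residuals and generalized residuals. It illustrates the residual formulas only, not the paper's full testing procedure.

```python
# Standardized vs. generalized residuals after a probit ML fit.
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
X = sm.add_constant(rng.normal(size=(n, 2)))
y = (X @ np.array([0.5, 1.0, -1.0]) + rng.normal(size=n) > 0).astype(int)

fit = sm.Probit(y, X).fit(disp=0)
xb = X @ fit.params
p = norm.cdf(xb)

pearson = (y - p) / np.sqrt(p * (1 - p))               # standardized residuals
generalized = (y - p) * norm.pdf(xb) / (p * (1 - p))   # generalized residuals
print(pearson[:5], generalized[:5])
```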

3.
Various approaches can be used to construct a model from a null distribution and a test statistic. I prove that one such approach, originating with D. R. Cox, has the property that the p-value is never greater than the Generalized Likelihood Ratio (GLR). When combined with the general result that the GLR is never greater than any Bayes factor, we conclude that, under Cox’s model, the p-value is never greater than any Bayes factor. I also provide a generalization, illustrations for the canonical Normal model, and an alternative approach based on sufficiency. This result is relevant for the ongoing discussion about the evidential value of small p-values, and the movement among statisticians to “redefine statistical significance.”
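The inequality can be checked directly in the canonical Normal model mentioned above. In this sketch (my illustration, not the paper's proof), a single observation x ~ N(θ, 1) is used to test H0: θ = 0, so the GLR is L(0)/sup_θ L(θ) = exp(−x²/2).

```python
# Numerical check of p-value <= GLR in the canonical Normal model.
import numpy as np
from scipy.stats import norm

for x in [0.5, 1.0, 2.0, 3.0]:
    p_value = 2 * norm.cdf(-abs(x))   # two-sided p-value under H0: theta = 0
    glr = np.exp(-x**2 / 2)           # L(0) / sup_theta L(theta)
    print(f"x={x}: p={p_value:.4f}  GLR={glr:.4f}  p<=GLR: {p_value <= glr}")
```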

4.
Assessment of the time needed to attain steady state is a key pharmacokinetic objective during drug development. Traditional approaches for assessing steady state include ANOVA‐based methods for comparing mean plasma concentration values from each sampling day, with either a difference or equivalence test. However, hypothesis‐testing approaches are ill suited for assessment of steady state. This paper presents a nonlinear mixed effects modelling approach for estimation of steady state attainment, based on fitting a simple nonlinear mixed model to observed trough plasma concentrations. The simple nonlinear mixed model is developed and proposed for use under certain pharmacokinetic assumptions. The nonlinear mixed modelling estimation approach is described and illustrated by application to trough data from a multiple dose trial in healthy subjects. The performance of the nonlinear mixed modelling approach is compared to ANOVA‐based approaches by means of simulation techniques. Copyright © 2005 John Wiley & Sons, Ltd.
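As a hedged sketch of the kind of accumulation curve involved, the code below fits C(day) = Css·(1 − exp(−k·day)) to simulated trough concentrations. For brevity it fits a fixed-effects curve with scipy; the paper's approach is a nonlinear mixed model with subject-level random effects.

```python
# Fit a simple trough-accumulation curve and estimate time to steady state.
import numpy as np
from scipy.optimize import curve_fit

def trough(day, c_ss, k):
    return c_ss * (1.0 - np.exp(-k * day))

rng = np.random.default_rng(2)
days = np.tile(np.arange(1, 11), 8)                  # 8 subjects, 10 daily troughs
conc = trough(days, 100.0, 0.6) * np.exp(rng.normal(scale=0.1, size=days.size))

(c_ss_hat, k_hat), _ = curve_fit(trough, days, conc, p0=[80.0, 0.5])
t90 = -np.log(0.1) / k_hat      # day at which 90% of steady state is reached
print(f"Css={c_ss_hat:.1f}, k={k_hat:.2f}, ~90% of steady state by day {t90:.1f}")
```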

5.
Common software release procedures based on statistical techniques try to optimize the trade-off between further testing costs and costs due to remaining errors. We propose new software release procedures where the aim is to certify with a certain confidence level that the software does not contain errors. The underlying model is a discrete time model similar to the geometric Moranda model. The decisions are based on a mix of classical and Bayesian approaches to sequential testing and do not require any assumption on the initial number of errors.
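A hedged sketch in the same spirit (not the paper's exact procedure): with an assumed prior probability that the software is error-free and an assumed per-run detection probability for a remaining error, keep testing until the posterior probability of "error-free" reaches the target confidence level.

```python
# Illustrative certification-style stopping rule (assumed parameters).
def runs_to_certify(prior=0.5, p_detect=0.2, confidence=0.95):
    k, posterior = 0, prior
    while posterior < confidence:
        k += 1
        miss = (1 - p_detect) ** k            # k clean runs despite an error
        posterior = prior / (prior + (1 - prior) * miss)
    return k, posterior

k, post = runs_to_certify()
print(f"certify after {k} failure-free runs (posterior {post:.3f})")
```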

6.
The ordinary least squares (OLS) estimator of regression coefficients is implicitly based on an i.i.d. assumption, which is rarely satisfied by survey data. Many approaches proposed in the literature can be classified into two broad categories: model based and design consistent. DuMouchel and Duncan (1983) proposed a test statistic λ that helps in testing the ignorability of sampling weights. In this article a preliminary test estimator based on λ is proposed. The model-based properties of this estimator are investigated theoretically, whereas a simulation approach is adopted to study its design-based properties. The proposed estimator is observed to be a better compromise between the model-based and randomization-based inferential frameworks.
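A minimal sketch of a preliminary-test estimator of this kind: test the ignorability of the sampling weights by adding weight-by-covariate terms to the OLS fit (in the spirit of DuMouchel and Duncan's λ), then report WLS if the test rejects and OLS otherwise. The simulated data and the 5% cutoff are illustrative assumptions.

```python
# Preliminary-test estimator: OLS unless the weights are non-ignorable.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
x = rng.normal(size=n)
w = rng.uniform(0.5, 3.0, size=n)                      # survey weights
y = 1.0 + 2.0 * x + 0.5 * w * x + rng.normal(size=n)   # weights informative here

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
aug = sm.OLS(y, np.column_stack([X, w, w * x])).fit()
# F-test of the added weight terms ~ ignorability test
f_val, p_val, _ = aug.compare_f_test(ols)
final = sm.WLS(y, X, weights=w).fit() if p_val < 0.05 else ols
print(f"p-value={p_val:.4f}; using {'WLS' if p_val < 0.05 else 'OLS'}")
print(final.params)
```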

7.
We propose a criterion for selecting a capture–recapture model for closed populations, which follows the basic idea of the focused information criterion (FIC) of Claeskens and Hjort. The proposed criterion aims at selecting the model which, among the available models, leads to the smallest mean‐squared error (MSE) of the resulting estimator of the population size and is based on an index which, up to a constant term, is equal to the asymptotic MSE of the estimator. Two alternative approaches to estimate this FIC index are proposed. We also deal with multimodel inference; in this case, the population size is estimated by using a weighted average of the estimates coming from different models, with weights chosen so as to minimize the MSE of the resulting estimator. The proposed model selection approach is compared with more common approaches through a series of simulations. It is also illustrated by an application based on a dataset coming from a live‐trapping experiment.

8.
Inference in generalized linear mixed models with crossed random effects is often made cumbersome by the high-dimensional intractable integrals involved in the marginal likelihood. This article presents two inferential approaches based on the marginal composite likelihood for the normal Bradley-Terry model. The two approaches are illustrated by a simulation study to evaluate their performance. Thereafter, the asymptotic variances of the estimated variance component are compared.

9.
In this paper, Bayesian decision procedures are developed for dose-escalation studies based on binary measures of undesirable events and continuous measures of therapeutic benefit. The methods generalize earlier approaches where undesirable events and therapeutic benefit are both binary. A logistic regression model is used to model the binary responses, while a linear regression model is used to model the continuous responses. Prior distributions for the unknown model parameters are suggested. A gain function is discussed and an optional safety constraint is included.
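An illustrative sketch of the decision step, with assumed parameter values standing in for posterior summaries: a logistic model for the probability of an undesirable event, a linear model for mean benefit on log-dose, a simple gain function, and an optional safety constraint.

```python
# Choose the next dose by maximizing a gain function subject to safety.
import numpy as np

doses = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
log_d = np.log(doses)

a, b = -4.0, 1.2      # logistic event model: P(event) = expit(a + b*log d)
c0, c1 = 0.5, 1.0     # linear benefit model: E[benefit] = c0 + c1*log d

p_event = 1.0 / (1.0 + np.exp(-(a + b * log_d)))
benefit = c0 + c1 * log_d
gain = benefit * (1.0 - p_event)       # one simple gain: benefit if no event

safe = p_event <= 0.30                 # optional safety constraint
best = doses[safe][np.argmax(gain[safe])]
print(dict(zip(doses, np.round(gain, 3))), "-> next dose:", best)
```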

10.
Multiple imputation is a common approach for dealing with missing values in statistical databases. The imputer fills in missing values with draws from predictive models estimated from the observed data, resulting in multiple, completed versions of the database. Researchers have developed a variety of default routines to implement multiple imputation; however, there has been limited research comparing the performance of these methods, particularly for categorical data. We use simulation studies to compare repeated sampling properties of three default multiple imputation methods for categorical data, including chained equations using generalized linear models, chained equations using classification and regression trees, and a fully Bayesian joint distribution based on Dirichlet process mixture models. We base the simulations on categorical data from the American Community Survey. In the circumstances of this study, the results suggest that default chained equations approaches based on generalized linear models are dominated by the default regression tree and Bayesian mixture model approaches. They also suggest competing advantages for the regression tree and Bayesian mixture model approaches, making both reasonable default engines for multiple imputation of categorical data. Supplementary material for this article is available online.
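A minimal sketch of one chained-equations pass for categorical data with a pluggable conditional model, echoing two of the three defaults compared here (a GLM and a tree). Real MICE cycles over all incomplete variables many times and draws from the predictive distribution; this sketch predicts the modal class once, and the data are simulated.

```python
# One chained-equations imputation pass with swappable conditional models.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
n = 300
a = rng.integers(0, 3, n)
b = rng.integers(0, 2, n)
c = (a + 2 * b + rng.integers(0, 2, n)) % 4          # c depends on a and b
df = pd.DataFrame({"a": a, "b": b, "c": c})
df.loc[rng.random(n) < 0.2, "c"] = -1                # -1 marks missingness

def impute_once(frame, target, model):
    obs = frame[target] != -1
    X = frame.drop(columns=target)
    model.fit(X[obs], frame.loc[obs, target])
    out = frame.copy()
    out.loc[~obs, target] = model.predict(X[~obs])
    return out

glm_imp = impute_once(df, "c", LogisticRegression(max_iter=1000))
cart_imp = impute_once(df, "c", DecisionTreeClassifier(max_depth=5))
miss = df["c"] == -1
print("GLM/CART agreement on imputed cells:",
      (glm_imp["c"][miss] == cart_imp["c"][miss]).mean())
```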

11.
Motivated by the joint analysis of longitudinal quality-of-life data and recurrence-free survival times from a cancer clinical trial, we present in this paper two approaches to jointly model the longitudinal proportional measurements, which are confined to a finite interval, and the survival data. Both approaches assume a proportional hazards model for the survival times. For the longitudinal component, the first approach applies the classical linear mixed model to logit-transformed responses, while the second approach directly models the responses using a simplex distribution. A semiparametric method based on a penalized joint likelihood generated by the Laplace approximation is derived to fit the joint model defined by the second approach. The proposed procedures are evaluated in a simulation study and applied to the analysis of the breast cancer data that motivated this research.
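A minimal sketch of the first approach's longitudinal component: logit-transform proportion responses confined to (0, 1) and fit a classical linear mixed model with a random intercept. The joint survival piece and the simplex-distribution alternative are beyond this sketch.

```python
# Logit-transformed proportions fit with a random-intercept mixed model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_sub, n_obs = 40, 5
sub = np.repeat(np.arange(n_sub), n_obs)
time = np.tile(np.arange(n_obs), n_sub)
u = rng.normal(scale=0.5, size=n_sub)[sub]           # subject random intercepts
eta = 0.2 + 0.1 * time + u + rng.normal(scale=0.3, size=sub.size)
prop = 1 / (1 + np.exp(-eta))                        # proportions in (0, 1)

df = pd.DataFrame({"sub": sub, "time": time,
                   "logit_y": np.log(prop / (1 - prop))})
fit = smf.mixedlm("logit_y ~ time", df, groups=df["sub"]).fit()
print(fit.summary())
```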

12.
Fault detection and isolation occupies a strategic position in modern industrial processes, and various approaches have been proposed for it. These approaches are usually based on a consistency test between an observed state of the process provided by sensors and the expected behaviour provided by a mathematical model of the system. Such methods require a reliable model of the system to be monitored, which is complex to build. Alternatively, we propose in this paper to use blind source separation filters (BSSFs) to detect and isolate faults in a three-tank pilot plant. This technique is very beneficial, as it relies on blind identification without an explicit mathematical model of the system. Independent component analysis (ICA), which relies on the assumption of statistical independence of the extracted sources, is used as a tool for each BSSF to extract the signals of the process under consideration. The experimental results show the effectiveness and robustness of this approach in detecting and isolating sensor faults in the system.
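A hedged sketch of BSSF-style monitoring: extract independent components from multichannel sensor data with FastICA, then flag a fault when the reconstruction residual exceeds control limits set on fault-free data. The simulated signals, the injected sensor bias, and the 3-sigma limit are assumptions, not the paper's pilot-plant setup.

```python
# ICA-based sensor fault detection via reconstruction residuals.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(6)
t = np.linspace(0, 10, 1000)
S = np.column_stack([np.sin(2 * t), np.sign(np.sin(3 * t))])  # latent sources
A = np.array([[1.0, 0.5], [0.4, 1.0], [0.3, 0.8]])            # mixing, 3 sensors
X = S @ A.T + 0.05 * rng.normal(size=(t.size, 3))
X[700:, 1] += 2.0                      # bias fault on sensor 2 from t >= 7

ica = FastICA(n_components=2, random_state=0)
ica.fit(X[:700])                       # train on the fault-free segment
resid = X - ica.inverse_transform(ica.transform(X))
limit = 3 * resid[:700].std(axis=0)    # 3-sigma control limits per sensor
alarm = np.abs(resid) > limit
print("first alarm index per sensor:",
      [int(a.argmax()) if a.any() else -1 for a in alarm.T])
```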

13.
A novel family of mixture models is introduced based on modified t-factor analyzers. Modified factor analyzers were recently introduced within the Gaussian context and our work presents a more flexible and robust alternative. We introduce a family of mixtures of modified t-factor analyzers that uses this generalized version of the factor analysis covariance structure. We apply this family within three paradigms: model-based clustering; model-based classification; and model-based discriminant analysis. In addition, we apply the recently published Gaussian analogue to this family under the model-based classification and discriminant analysis paradigms for the first time. Parameter estimation is carried out within the alternating expectation-conditional maximization framework and the Bayesian information criterion is used for model selection. Two real data sets are used to compare our approach to other popular model-based approaches; in these comparisons, the chosen mixtures of modified t-factor analyzers model performs favourably. We conclude with a summary and suggestions for future work.
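The modified t-factor analyzer family is not available in standard libraries, so as a stand-in the sketch below shows the same BIC-driven selection mechanic with scikit-learn Gaussian mixtures, scanning numbers of components and covariance structures.

```python
# BIC-based selection across a grid of mixture models (Gaussian stand-in).
from sklearn.datasets import load_iris
from sklearn.mixture import GaussianMixture

X = load_iris().data
best = min(
    (GaussianMixture(n_components=g, covariance_type=c, random_state=0).fit(X)
     for g in range(1, 5) for c in ["full", "tied", "diag", "spherical"]),
    key=lambda m: m.bic(X),           # smaller BIC -> preferred model
)
print(best.n_components, best.covariance_type, round(best.bic(X), 1))
```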

14.
In applied statistical data analysis, overdispersion is a common feature. It can be addressed using both multiplicative and additive random effects. A multiplicative model for count data incorporates a gamma random effect as a multiplicative factor into the mean, whereas an additive model assumes a normally distributed random effect, entered into the linear predictor. Using Bayesian principles, these ideas are applied to longitudinal count data, based on the so-called combined model. The performance of the additive and multiplicative approaches is compared using a simulation study.
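A small sketch contrasting the two devices on simulated counts: a multiplicative gamma random effect (marginally negative binomial) versus an additive normal random effect in the linear predictor. Parameter values are illustrative; both produce variance exceeding the mean.

```python
# Two routes to overdispersed counts: gamma-multiplicative vs normal-additive.
import numpy as np

rng = np.random.default_rng(7)
n, mu = 10_000, 5.0

# multiplicative: y ~ Poisson(mu * g), g ~ Gamma(shape=a, scale=1/a), E[g] = 1
a = 2.0
g = rng.gamma(shape=a, scale=1.0 / a, size=n)
y_mult = rng.poisson(mu * g)

# additive: y ~ Poisson(exp(log mu + b)), b ~ Normal(0, sigma^2)
b = rng.normal(scale=0.5, size=n)
y_add = rng.poisson(np.exp(np.log(mu) + b))

for name, y in [("gamma/multiplicative", y_mult), ("normal/additive", y_add)]:
    print(f"{name}: mean={y.mean():.2f}, var={y.var():.2f}")  # var > mean
```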

15.
The accelerated failure time (AFT) models have proved useful in many contexts, though heavy censoring (as for example in cancer survival) and high dimensionality (as for example in microarray data) cause difficulties for model fitting and model selection. We propose new approaches to variable selection for censored data, based on AFT models optimized using regularized weighted least squares. The regularized technique uses a mixture of ℓ1 and ℓ2 norm penalties under two proposed elastic-net-type approaches: one is the adaptive elastic net and the other is the weighted elastic net. These extend the approaches originally proposed by Ghosh (Adaptive elastic net: an improvement of elastic net to achieve oracle properties, Technical Report, 2007) and Hong and Zhang (Math Model Nat Phenom 5(3):115–133, 2010), respectively. We also extend the two proposed approaches by adding censored observations as constraints in their model optimization frameworks. The approaches are evaluated on microarray data and by simulation. We compare the performance of these approaches with six other variable selection techniques: three generally used for censored data and three correlation-based greedy methods used for high-dimensional data.
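A hedged sketch of regularized weighted least squares for a log-linear AFT model: Kaplan–Meier (Stute) weights handle the censoring and an elastic net (ℓ1 + ℓ2) penalty performs selection. The adaptive and weighted elastic net refinements of the paper are omitted, and the data are simulated.

```python
# Elastic net on log survival times with Kaplan-Meier (Stute) weights.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(8)
n, p = 200, 50
X = rng.normal(size=(n, p))
log_t = X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n)
c = rng.normal(loc=1.0, scale=1.0, size=n)            # log censoring times
y, delta = np.minimum(log_t, c), (log_t <= c).astype(float)

order = np.argsort(y)                                 # Stute weights need sorting
y_s, d_s, X_s = y[order], delta[order], X[order]
i = np.arange(1, n + 1)
surv_factors = ((n - i) / (n - i + 1)) ** d_s
w = d_s / (n - i + 1) * np.concatenate([[1.0], np.cumprod(surv_factors)[:-1]])

fit = ElasticNet(alpha=0.05, l1_ratio=0.5).fit(X_s, y_s, sample_weight=w)
print("selected coefficients:", np.nonzero(fit.coef_)[0])
```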

16.
Modeling clustered categorical data based on extensions of generalized linear model theory has received much attention in recent years. The rapidly increasing number of approaches suitable for categorical data in which clusters are uncorrelated, but correlations exist within a cluster, has caused uncertainty among applied scientists as to their respective merits and demerits. When estimation is centred on solving an unbiased estimating function for the mean parameters, together with estimation of the covariance parameters describing within-cluster or among-cluster heterogeneity, many approaches can easily be related. This contribution describes a series of algorithms and their implementation in detail, based on a classification of inferential procedures for clustered data.
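A minimal sketch of the unbiased-estimating-function route described above: GEE for clustered binary outcomes with an exchangeable working correlation, fit with statsmodels on simulated clusters.

```python
# GEE for clustered binary data with an exchangeable working correlation.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n_clus, m = 50, 6
clus = np.repeat(np.arange(n_clus), m)
x = rng.normal(size=clus.size)
u = rng.normal(scale=0.8, size=n_clus)[clus]          # within-cluster correlation
y = (rng.random(clus.size) < 1 / (1 + np.exp(-(0.3 + 0.7 * x + u)))).astype(int)

df = pd.DataFrame({"y": y, "x": x, "clus": clus})
fit = smf.gee("y ~ x", groups="clus", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(fit.summary())
```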

17.
Model-based phase I dose-finding designs rely on a single model throughout the study for estimating the maximum tolerated dose (MTD). Thus, one major concern is the choice of the most suitable model. This is important because the dose allocation process and the MTD estimation depend on whether the model is reliable and gives a good fit to the toxicity data. The aim of our work was to propose a method that removes the need for a model choice prior to the trial onset, allowing it instead sequentially at each patient's inclusion. In this paper, we described a model-checking approach based on the posterior predictive check and a model-comparison approach based on the deviance information criterion, in order to identify a more reliable or better model during the course of a trial and to support clinical decision making. Further, we presented two model-switching designs for a phase I cancer trial based on the aforementioned approaches, and performed a comparison between designs with or without model switching through a simulation study. The results showed that the proposed designs had the advantage of decreasing certain risks, such as those of poor dose allocation and failure to find the MTD, which could occur if the model is misspecified. Copyright © 2013 John Wiley & Sons, Ltd.
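A hedged sketch of the model-comparison half: compute DIC for two one-parameter dose-toxicity working models (a power model and a one-parameter logistic) on the same binomial toxicity data, using a grid posterior instead of MCMC for brevity. The skeleton, prior, and counts are illustrative assumptions.

```python
# DIC comparison of two dose-toxicity working models via a grid posterior.
import numpy as np
from scipy.stats import norm

skeleton = np.array([0.05, 0.10, 0.20, 0.35, 0.50])
tox = np.array([0, 0, 1, 2, 3])          # toxicities observed per dose
pts = np.array([3, 3, 6, 6, 3])          # patients treated per dose
grid = np.linspace(0.2, 3.0, 600)        # uniform parameter grid

def dic(p_fun):
    ll = np.array([np.sum(tox * np.log(p_fun(t))
                          + (pts - tox) * np.log1p(-p_fun(t))) for t in grid])
    post = np.exp(ll - ll.max()) * norm.pdf(grid, 1.0, 0.5)
    post /= post.sum()                   # uniform grid, so sums suffice
    mean_dev = np.sum(-2 * ll * post)
    t_bar = np.sum(grid * post)
    dev_at_mean = -2 * np.sum(tox * np.log(p_fun(t_bar))
                              + (pts - tox) * np.log1p(-p_fun(t_bar)))
    return 2 * mean_dev - dev_at_mean    # smaller DIC -> preferred model

power = lambda t: skeleton ** t                       # CRM-style power model
logit = np.log(skeleton / (1 - skeleton))
logistic = lambda t: 1 / (1 + np.exp(-t * logit))     # one-parameter logistic
print("DIC power:   ", round(dic(power), 2))
print("DIC logistic:", round(dic(logistic), 2))
```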

18.
Sun W, Li H. Lifetime Data Analysis, 2004, 10(3):229–245.
The additive genetic gamma frailty model has been proposed for genetic linkage analysis of complex diseases to account for variable age of onset and possible covariate effects. To avoid ascertainment biases in parameter estimates, retrospective likelihood ratio tests are often used, which may result in a loss of efficiency due to conditioning. This paper considers the case where sibships are ascertained by having at least two sibs affected with the disease before a given age, and provides two approaches for estimating the parameters of the additive gamma frailty model. One approach is based on the likelihood function conditioning on the ascertainment event; the other is based on maximizing a full ascertainment-adjusted likelihood. Explicit forms for these likelihood functions are derived. Simulation studies indicate that when the baseline hazard function can be correctly pre-specified, both approaches give accurate estimates of the model parameters. However, when the baseline hazard function has to be estimated simultaneously, only the ascertainment-adjusted likelihood method gives an unbiased estimate of the parameters. These results imply that the ascertainment-adjusted likelihood ratio test in the context of the additive genetic gamma frailty model may be used for genetic linkage analysis.

19.
Statistical modeling of credit risk for retail clients is considered. Due to the lack of detailed, updated information about the counterparty, traditional approaches such as Merton's firm-value model are not applicable. Moreover, credit default data for retail clients typically exhibit a very small percentage of default rates. This motivates a statistical model based on survival analysis under extreme censoring for the time-to-default variable. The model incorporates the stochastic nature of default and is based on incomplete information. Consistency and asymptotic normality of the maximum likelihood estimates of the parameters characterizing the time-to-default distribution are derived. A criterion for constructing confidence ellipsoids for the parameters is obtained from the asymptotic results. An extended model with explanatory variables is also discussed. The results are illustrated by a data example with 670 mortgages.
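A minimal sketch of likelihood-based estimation under extreme censoring, assuming an exponential time-to-default for illustration (not the paper's model): only loans defaulting within the observation window are observed as events, and the MLE and its asymptotic standard error follow in closed form.

```python
# Exponential time-to-default MLE under heavy right censoring.
import numpy as np

rng = np.random.default_rng(10)
n, lam_true, window = 670, 0.002, 60.0        # monthly hazard, 5-year window
t = rng.exponential(1.0 / lam_true, size=n)
delta = t <= window                            # observed defaults (few)
obs = np.minimum(t, window)

# censored exponential MLE: lambda_hat = (# events) / (total exposure time)
lam_hat = delta.sum() / obs.sum()
se = lam_hat / np.sqrt(delta.sum())            # from asymptotic normality
print(f"{delta.sum()} defaults of {n}; "
      f"lambda_hat={lam_hat:.5f} (+/- {1.96 * se:.5f})")
```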

20.
The mean residual life (MRL) measures the remaining life expectancy and is useful in actuarial studies, biological experiments and clinical trials. To assess the covariate effect, an additive MRL regression model has been proposed in the literature. In this paper, we focus on the topic of model checking. Specifically, we develop two goodness-of-fit tests to test the additive MRL model assumption. We explore the large sample properties of the test statistics and show that both of them are based on asymptotic Gaussian processes so that resampling approaches can be applied to find the rejection regions. Simulation studies indicate that our methods work reasonably well for sample sizes ranging from 50 to 200. Two empirical data sets are analyzed to illustrate the approaches.
