Similar Documents (20 results)
1.
Choice-based conjoint experiments are used when choice alternatives can be described in terms of attributes. The objective is to infer the value that respondents attach to attribute levels. The method involves designing profiles on the basis of attributes specified at certain levels. Respondents are presented with sets of profiles and asked to select the one they consider best. However, if choice sets have too many profiles, they may be difficult to implement. In this paper we provide strategies for reducing the number of profiles in choice sets. We consider situations where only a subset of interactions is of interest, and we obtain connected main effect plans with smaller choice sets that are capable of estimating subsets of two-factor and three-factor interactions in 2^n and 3^n plans. We also provide connected main effect plans for mixed level designs.
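The profile-construction step can be illustrated with a minimal sketch. This is a naive enumeration, not the paper's connected main-effect plans, and the function names are hypothetical: a full factorial generates every profile, and paired choice sets are all two-profile subsets.

```python
from itertools import combinations, product

def full_factorial(levels):
    """All profiles for attributes given as a list of level counts, e.g. [2, 2, 2]."""
    return list(product(*(range(k) for k in levels)))

def paired_choice_sets(profiles):
    """Naive choice sets of size 2: every unordered pair of distinct profiles."""
    return list(combinations(profiles, 2))

profiles = full_factorial([2, 2, 2])       # the 2^3 factorial: 8 profiles
choice_sets = paired_choice_sets(profiles)  # 28 pairs; the paper's plans prune this
print(len(profiles), len(choice_sets))
```

The point of the paper's designs is precisely that this exhaustive set of pairs is too large to administer; the connected main-effect plans select a small subset that still keeps the target effects estimable.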

2.
This paper explores the effect of sample size, scale of parameters and size of the choice set on the maximum likelihood estimator of the parameters of the multinomial logit model. Data were generated by simulations under a three-way factorial experimental design for logit models containing three, four and five explanatory variables. Simulation data were analyzed by analysis of covariance and a regression model of the performance measure, the log root mean-squared error (LRMSE), fitted against the three factors and their interactions. Several important conclusions emerged. First, the LRMSE improves, but at a decreasing rate, with increases in the model's degrees of freedom. Second, the number of choice alternatives in the decision makers' choice sets has a significant impact on the LRMSE; however, heterogeneity in the choice sets across the sample has little or no impact. Finally, the scale of parameters and all of its two-way interactions with the other two factors significantly affect the LRMSE. Using the regression results, a family of iso-LRMSE curves is derived in the space of model degrees of freedom and scale of parameters. The implications for researchers in choosing sample size and scale of parameters are discussed.
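A hedged sketch of the data-generating step (not the paper's factorial design or its LRMSE analysis): under the multinomial logit model, choice probabilities are a softmax of utilities, and a draw can equivalently be taken by adding Gumbel noise and taking the argmax. All sizes and parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mnl_probabilities(X, beta):
    """Choice probabilities for one choice set; rows of X are alternatives."""
    u = X @ beta
    e = np.exp(u - u.max())              # numerically stabilised softmax
    return e / e.sum()

def simulate_choice(X, beta):
    """One draw via the Gumbel-max representation of the logit model."""
    u = X @ beta + rng.gumbel(size=X.shape[0])
    return int(np.argmax(u))

X = rng.normal(size=(4, 3))              # 4 alternatives, 3 explanatory variables
beta = np.array([1.0, -0.5, 0.25])       # the "scale of parameters" is the size of beta
p = mnl_probabilities(X, beta)
print(p, simulate_choice(X, beta))
```

Scaling `beta` up or down changes how deterministic the simulated choices are, which is the "scale of parameters" factor varied in the paper's experiment.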

3.
In a stated preference discrete choice experiment each subject is typically presented with several choice sets, and each choice set contains a number of alternatives. The alternatives are defined in terms of their name (brand) and their attributes at specified levels. The task for the subject is to choose from each choice set the alternative with highest utility for them. The multinomial is an appropriate distribution for the responses to each choice set since each subject chooses one alternative, and the multinomial logit is a common model. If the responses to the several choice sets are independent, the likelihood function is simply the product of multinomials. The most common and generally preferred method of estimating the parameters of the model is maximum likelihood (that is, selecting as estimates those values that maximize the likelihood function). If the assumption of within-subject independence to successive choice tasks is violated (it is almost surely violated), the likelihood function is incorrect and maximum likelihood estimation is inappropriate. The most serious errors involve the estimation of the variance-covariance matrix of the model parameter estimates, and the corresponding variances of market shares and changes in market shares.

In this paper we present an alternative method of estimation of the model parameter coefficients that incorporates a first-order within-subject covariance structure. The method involves the familiar log-odds transformation and application of the multivariate delta method. Estimation of the model coefficients after the transformation is a straightforward generalized least squares regression, and the corresponding improved estimate of the variance-covariance matrix is in closed form. Estimates of market share (and change in market share) follow from a second application of the multivariate delta method. The method and comparison with maximum likelihood estimation are illustrated with several simulated and actual data examples.

Advantages of the proposed method are: 1) it incorporates the within-subject covariance structure; 2) it is completely data driven; 3) it requires no additional model assumptions; 4) assuming asymptotic normality, it provides a simple procedure for computing confidence regions on market shares and changes in market shares; and 5) it produces results that are asymptotically equivalent to those produced by maximum likelihood when the data are independent.
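For the two-alternative case, the core of the log-odds route can be sketched as follows. This is a simplified illustration only: plain OLS stands in for the paper's generalized least squares with a first-order covariance structure, and all sizes and parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
beta_true = np.array([0.8, -0.4])

# 30 paired choice sets; each row of D is the attribute difference x1 - x2.
D = rng.normal(size=(30, 2))
p = 1.0 / (1.0 + np.exp(-D @ beta_true))        # true choice shares for alternative 1
counts = rng.binomial(200, p)                   # 200 respondents per choice set
p_hat = (counts + 0.5) / 201.0                  # smoothed observed shares
logit = np.log(p_hat / (1.0 - p_hat))           # the log-odds transformation

beta_hat, *_ = np.linalg.lstsq(D, logit, rcond=None)  # OLS stand-in for GLS
print(beta_hat)
```

After the transformation the model is linear in the coefficients, which is why the paper can obtain the improved covariance estimate in closed form via generalized least squares and the delta method.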

4.
We consider the problem of testing against trend and umbrella alternatives, with known and unknown peak, in two-way layouts with fixed effects. We consider the non-parametric two-way layout ANOVA model of Akritas and Arnold (J. Amer. Statist. Assoc. 89 (1994) 336), and use the non-parametric formulation of patterned alternatives introduced by Akritas and Brunner (Research Developments in Probability and Statistics: Festschrift in honor of Madan L. Puri, VSP, Zeist, The Netherlands, 1996, pp. 277–288). The hypotheses of no main effects and of no simple effects are both considered. New rank test statistics are developed to specifically detect these types of alternatives. For main effects, we consider two types of statistics, one using weights similar to Hettmansperger and Norton (J. Amer. Statist. Assoc. 82 (1987) 292) and one with weights which maximize the asymptotic efficacy. For simple effects, we consider in detail only statistics to detect trend or umbrella patterns with known peaks, and refer to Callegari (Ph.D. Thesis, University of Padova, Italy) for a discussion about possible statistics for umbrella alternatives with unknown peaks. The null asymptotic distributions of the new statistics are derived. A number of simulation studies investigate their finite sample behaviors and compare the achieved alpha levels and power with some alternative procedures. An application to data used in a clinical study is presented to illustrate how to utilize some of the proposed tests for main effects.

5.
Classical inferential procedures induce conclusions from a set of data to a population of interest, accounting for the imprecision resulting from the stochastic component of the model. Less attention is devoted to the uncertainty arising from (unplanned) incompleteness in the data. Through the choice of an identifiable model for non-ignorable non-response, one narrows the possible data-generating mechanisms to the point where inference only suffers from imprecision. Some proposals have been made for assessing the sensitivity to these modelling assumptions; many are based on fitting several plausible but competing models. For example, we could assume that the missing data are missing at random in one model, and then fit an additional model where non-random missingness is assumed. On the basis of data from a Slovenian plebiscite, conducted in 1991, to prepare for independence, it is shown that such an ad hoc procedure may be misleading. We propose an approach which identifies and incorporates both sources of uncertainty in inference: imprecision due to finite sampling and ignorance due to incompleteness. A simple sensitivity analysis considers a finite set of plausible models. We take this idea one step further by considering more degrees of freedom than the data support. This produces sets of estimates (regions of ignorance) and sets of confidence regions (combined into regions of uncertainty).

6.
Rethinking the Selection of Spatial Regression Models
Spatial econometrics has two basic models: the spatial lag model and the spatial error model. This paper reconsiders the choice between these two spatial regression models and reaches the following conclusions. Moran's I can be used to test whether the residuals of a regression exhibit spatial dependence. In empirical work, the Lagrange multiplier (LM) tests are the most common way to judge which of the two models is preferable; however, these tests rest purely on statistical inference and ignore the theoretical basis, and so may lead to selecting the wrong model. The spatial error model is often passed over in empirical work, even though it is more widely applicable than the spatial lag model. Most empirical studies also lack any discussion of spatial model specification; Anselin proposed three statistics for this purpose, and if the model is correctly specified they should satisfy the ordering Wald statistic > log-likelihood (LR) statistic > LM statistic.
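Moran's I for regression residuals, mentioned above, is simple to compute directly from its usual definition I = (n/S0) * (z'Wz)/(z'z). The 4-region neighbour matrix and residual vector below are made up for illustration.

```python
import numpy as np

def morans_i(z, W):
    """Moran's I of values z under spatial weight matrix W (zero diagonal)."""
    z = z - z.mean()
    s0 = W.sum()                      # sum of all weights
    n = z.size
    return (n / s0) * (z @ W @ z) / (z @ z)

# Toy rook-style contiguity matrix for 4 hypothetical regions.
W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
resid = np.array([2.0, -1.0, -1.0, 0.0])   # pretend OLS residuals
print(morans_i(resid, W))
```

A value near zero suggests no residual spatial dependence; here neighbouring residuals have opposite signs, so I is negative.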

7.
Conjoint analysis is concerned with understanding how people choose among products or services (alternatives), or combinations of products and services (choice sets), so that businesses can design new products or services that better meet customers' needs. The multinomial logit model is used to calculate the probability of choosing, within a choice set, the alternative with the highest utility. Several choice sets are then considered instead of a single one. In this article, a locally D-optimal design over the combinations of attribute levels (two attributes, each with two levels) is used to create the alternatives, giving an optimal combination of alternatives in the choice sets.

8.
Empirical Bayes methods are used to estimate cell probabilities under a multiplicative-interaction model for a two-way contingency table. The methods assign uniform and normal priors with unknown variances to the main effects and the separable scores. A priori, the analysis assumes exchangeability of sets of parameters. The unknown variance components are estimated empirically from the data via the EM algorithm, as discussed by Laird (1978) and Dempster, Laird and Rubin (1977). An example is included.

9.
A relevant problem in statistics is drawing conclusions about the shape of the distribution of an experiment from which a sample is drawn. We consider this problem when the available information from the experimental performance cannot be perceived exactly, but rather may be treated as fuzzy information (as defined by L.A. Zadeh, and by H. Tanaka, T. Okuda and K. Asai). If the hypothetical distribution is completely specified, extending the chi-square goodness of fit test on the basis of some concepts of fuzzy sets theory entails no difficulty. Nevertheless, if the hypothetical distribution involves unknown parameters, the extension of the chi-square goodness of fit test requires estimating those parameters from the fuzzy data. The aim of the present paper is to prove that, under certain natural assumptions, the minimum inaccuracy principle of estimation from fuzzy observations (which we suggested in a previous paper as an operative extension of the maximum likelihood principle) supplies a suitable method for this requirement.
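The crisp (non-fuzzy) baseline of this procedure, a chi-square goodness of fit test with parameters estimated from the data and the degrees of freedom reduced accordingly, can be sketched as follows; the paper's fuzzy extension replaces the maximum likelihood step with minimum inaccuracy estimation. Cell counts and sizes below are illustrative.

```python
import numpy as np
from scipy.stats import chisquare, norm

rng = np.random.default_rng(2)
x = rng.normal(loc=5.0, scale=2.0, size=500)

# Parameters are unknown, so estimate them from the data (plain maximum
# likelihood here; the fuzzy analogue would use minimum inaccuracy).
mu, sigma = x.mean(), x.std()

edges = np.quantile(x, np.linspace(0.0, 1.0, 9))   # 8 roughly equiprobable cells
obs, _ = np.histogram(x, bins=edges)
cdf = norm.cdf(edges, loc=mu, scale=sigma)
exp = len(x) * np.diff(cdf) / (cdf[-1] - cdf[0])   # renormalise to sum to 500

# ddof=2 because two parameters were estimated from the same data.
stat, p = chisquare(obs, f_exp=exp, ddof=2)
print(stat, p)
```

With k cells and m estimated parameters, the reference distribution is chi-square with k - 1 - m degrees of freedom, which is what `ddof=2` encodes here.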

10.
Row × column interaction is frequently assumed to be negligible in two-way classifications having one observation per cell. Absence of interaction allows the researcher to estimate experimental error and to proceed with making inferences about row and column effects. If additivity is suspect, it is conventional to test it against a structured alternative. If the structured alternative misspecifies the existing nonadditivity, then the power of the test is low, even if the magnitude of the existing nonadditivity is large. The locally best invariant (LBI) test of additivity is less subject to model misspecification because a particular structured alternative need not be hypothesized. This paper illustrates the LBI test of additivity and compares its power to that of the Johnson-Graybill likelihood ratio (LR) test. The LBI test performs as well as the LR test under a Johnson-Graybill alternative and performs better than the LR test under more general alternatives.
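Tukey's classical one-degree-of-freedom test is the best-known example of testing additivity against a structured alternative (it is not the LBI test of the paper). A minimal sketch with simulated data:

```python
import numpy as np

def tukey_nonadditivity(y):
    """Tukey's one-degree-of-freedom F statistic for nonadditivity in a
    two-way table y (a x b) with one observation per cell."""
    a, b = y.shape
    r = y.mean(axis=1) - y.mean()          # estimated row effects
    c = y.mean(axis=0) - y.mean()          # estimated column effects
    ss_n = (r @ y @ c) ** 2 / ((r @ r) * (c @ c))   # nonadditivity SS (1 df)
    resid = y - y.mean(axis=1, keepdims=True) - y.mean(axis=0) + y.mean()
    ss_rem = (resid ** 2).sum() - ss_n     # remainder of the interaction SS
    df_rem = (a - 1) * (b - 1) - 1
    return ss_n / (ss_rem / df_rem)        # F on (1, df_rem) degrees of freedom

rng = np.random.default_rng(3)
y = rng.normal(size=(5, 4))                # a purely additive (null) table
print(tukey_nonadditivity(y))
```

The test has high power only when the true interaction resembles the product of row and column effects, which is exactly the misspecification risk the abstract describes.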

11.
If an assumption, such as homoscedasticity, or some other aspect of an inference problem, such as the number of cases, is altered, our conclusions may change and different parts of the conclusions can be affected in different ways. Most diagnostic procedures measure the influence on one particular aspect of the conclusion - such as model fit or change in parameter estimates. The effect on all aspects of the conclusions can be described by the difference in two log likelihood functions and when the log likelihood functions come from an exponential family or are quasi-likelihoods, this difference can be factored into three terms: one depending only on the alteration, another depending only on the aspects of the conclusions to be considered, and a third term depending on both. The third term is interesting because it shows which aspects of the conclusions are relatively insensitive even to large alterations.

12.
Estimation in mixed linear models is, in general, computationally demanding, since applied problems may involve extensive data sets and large numbers of random effects. Existing computer algorithms are slow and/or require large amounts of memory. These problems are compounded in generalized linear mixed models for categorical data, since even approximate methods involve fitting of a linear mixed model within steps of an iteratively reweighted least squares algorithm. Only in models in which the random effects are hierarchically nested can the computations for fitting these models to large data sets be carried out rapidly. We describe a data augmentation approach to these computational difficulties in which we repeatedly fit an overlapping series of submodels, incorporating the missing terms in each submodel as 'offsets'. The submodels are chosen so that they have a nested random-effect structure, thus allowing maximum exploitation of the computational efficiency which is available in this case. Examples of the use of the algorithm for both metric and discrete responses are discussed, all calculations being carried out using macros within the MLwiN program.
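The iteratively reweighted least squares loop mentioned above can be sketched for a plain fixed-effects-only logistic GLM; mixed-model fitting wraps a loop of this shape around a linear mixed model solve. This is an illustrative sketch, not the MLwiN algorithm, and the data are simulated.

```python
import numpy as np

def irls_logistic(X, y, n_iter=25):
    """Iteratively reweighted least squares for a logistic GLM: at each step
    form working weights and a working response, then solve weighted LS."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))
        w = mu * (1.0 - mu)                  # IRLS weights
        z = eta + (y - mu) / w               # working response
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))
    return beta

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(400), rng.normal(size=400)])
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * X[:, 1])))
y = rng.binomial(1, p)
print(irls_logistic(X, y))
```

In a generalized linear mixed model, the inner weighted least squares solve above is replaced by a (much more expensive) linear mixed model fit, which is the cost the paper's submodel scheme attacks.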

13.
The proportional hazards regression model of Cox (1972) is widely used in analyzing survival data. We examine several goodness of fit tests for checking the proportionality of hazards in the Cox model with two-sample censored data, and compare the performance of these tests by a simulation study. The strengths and weaknesses of the tests are pointed out. The effects of the extent of random censoring on the size and power are also examined. Results of a simulation study demonstrate that Gill and Schumacher's test is most powerful against a broad range of monotone departures from the proportional hazards assumption, but it may fail for alternatives with a nonmonotone hazard ratio. For the latter kind of alternatives, Andersen's test may detect patterns of irregular changes in hazards.

14.
Consumer confidence is a measure of the degree of confidence exhibited by consumers as a whole, and of its changes over time. Using discrete ordered choice models, an empirical analysis of the raw survey data underlying the mainland China consumer confidence index for the first quarter of 2009 shows that consumers' confidence in future economic development is significantly influenced by their confidence in four areas: future employment, income, living conditions and investment.

15.
Iman (1974) and Conover and Iman (1976) have shown by means of simulation studies that a rank transform, whereby the ordinary F-tests are applied to the ranks of the original observations in two-way experimental designs, presents a remarkably powerful method of analysis. In the present study it is shown that this rank transform is closely related to the procedure proposed by Lemmer and Stoker (1967) for the two-way analysis of variance. The main conclusions are that the Iman tests are generally slightly more powerful than those of Lemmer and Stoker, but that in the presence of interaction the latter tests for main effects are safer to use than the former because interaction tends to make the Iman tests for main effects significant even if no main effects exist.
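The rank transform idea (replace the observations by their joint ranks, then run the ordinary F test) is easy to state in code; a one-way version is sketched here for brevity, whereas the papers concern two-way designs. Group sizes and the heavy-tailed error distribution are illustrative.

```python
import numpy as np
from scipy.stats import rankdata, f_oneway

rng = np.random.default_rng(5)

# Three groups with a location shift in the third, heavy-tailed errors.
groups = [rng.standard_t(df=3, size=20) + shift for shift in (0.0, 0.0, 1.5)]

# Rank transform: rank ALL observations jointly, then apply the ordinary F test.
ranks = rankdata(np.concatenate(groups))
ranked_groups = np.split(ranks, [20, 40])
stat, p = f_oneway(*ranked_groups)
print(stat, p)
```

With heavy-tailed errors the F test on ranks typically retains far more power than the F test on the raw observations, which is the empirical finding that motivated the rank transform literature.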

16.
In this study, our aim was to investigate the effects of different data structures and different sample sizes on structural equation modeling, and the influence of these factors on model fit measures. The structural equation models constructed under different data structures and sample sizes were examined, and the model fit measures evaluated, in a simulation study. In the simulation, optimization problems and negative variance estimates were encountered, depending on the sample size and the changing correlations. These problems disappeared either when the sample size was increased or when the correlations between the variables within a factor were increased. For future studies, the RMSEA and IFI model fit measures can be recommended for all sample sizes, provided the data sets satisfy the multivariate normal distribution assumption.
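RMSEA itself is a one-line computation from the model chi-square, its degrees of freedom and the sample size; this sketch uses one common convention (dividing by n - 1) and made-up numbers.

```python
import numpy as np

def rmsea(chi2, df, n):
    """Root mean square error of approximation; the max(..., 0) clips the
    estimate at zero when the chi-square is below its degrees of freedom."""
    return np.sqrt(max((chi2 / df - 1.0) / (n - 1.0), 0.0))

print(rmsea(chi2=48.0, df=24, n=401))  # -> 0.05, a commonly cited "close fit" value
```

Values below roughly 0.05-0.08 are conventionally read as acceptable fit, which is why RMSEA is a natural measure to track across the simulated sample sizes.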

17.
The AMMI (additive main effects-multiplicative interaction) model is often used to investigate interactions in two-way tables, in particular for genotype-environment interactions. Both Gollob (1968) and Mandel (1969, 1971) proposed methods for testing the significance of such interactions. These methods are compared using simulated data. Our results support Mandel's conclusions, but his method is conservative and we recommend a test proposed by Johnson & Graybill (1972).
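The multiplicative terms of an AMMI fit come from the singular value decomposition of the double-centred residual table; the sketch below shows that decomposition on simulated data (it does not implement the Gollob, Mandel or Johnson-Graybill significance tests).

```python
import numpy as np

rng = np.random.default_rng(7)
y = rng.normal(size=(6, 5))                  # genotypes x environments table

# Double-centre to remove additive main effects, then SVD the residual.
z = y - y.mean(axis=1, keepdims=True) - y.mean(axis=0) + y.mean()
s = np.linalg.svd(z, compute_uv=False)
ss_axis1 = s[0] ** 2                         # SS captured by the first term
ss_inter = (z ** 2).sum()                    # total interaction SS
print(ss_axis1 / ss_inter)                   # share explained by axis 1
```

The competing tests in the abstract differ mainly in what degrees of freedom and reference distribution they attach to `ss_axis1` when judging its significance.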

18.
Owing to the destructiveness of natural disasters, the limited number of disaster scenarios and various human causes, missing data usually occur in disaster decision-making problems. In order to estimate the missing values of alternatives, this paper focuses on imputing heterogeneous attribute values of disasters with an improved K nearest neighbor imputation (KNNI) method. First, some definitions of trapezoidal fuzzy numbers (TFNs) are introduced, and three types of attributes (linguistic term sets, intervals and real numbers) are converted to TFNs. Then the correlated degree model is used to extract related attributes to form the instances used in the K nearest neighbor algorithm, and a novel KNNI method merged with the correlated degree model is presented. Finally, an illustrative example is given to verify the proposed method and to demonstrate its feasibility and effectiveness.
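A plain numeric KNN imputation, without the paper's trapezoidal fuzzy numbers or correlated degree model, can be sketched as follows; the data matrix is made up.

```python
import numpy as np

def knn_impute(X, k=2):
    """Fill NaNs in each row from the k nearest complete rows, measuring
    Euclidean distance on the columns observed in the target row."""
    X = X.astype(float).copy()
    complete = X[~np.isnan(X).any(axis=1)]          # donor rows
    for row in X:
        miss = np.isnan(row)
        if not miss.any():
            continue
        d = np.linalg.norm(complete[:, ~miss] - row[~miss], axis=1)
        nearest = complete[np.argsort(d)[:k]]
        row[miss] = nearest[:, miss].mean(axis=0)   # average of the neighbours
    return X

X = np.array([[1.0, 2.0, 3.0],
              [1.1, 2.1, np.nan],
              [8.0, 9.0, 10.0],
              [7.9, 9.2, 9.8]])
print(knn_impute(X, k=1))
```

The paper's contribution is in how the neighbours are found: attributes are first unified as TFNs and the correlated degree model selects which attributes enter the distance.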

19.
The choice of the model framework in a regression setting depends on the nature of the data. The focus of this study is on changepoint data, exhibiting three phases: incoming and outgoing, both of which are linear, joined by a curved transition. Bent-cable regression is an appealing statistical tool to characterize such trajectories, quantifying the nature of the transition between the two linear phases by modeling the transition as a quadratic phase with unknown width. We demonstrate that a quadratic function may not be appropriate to adequately describe many changepoint data. We then propose a generalization of the bent-cable model by relaxing the assumption of the quadratic bend. The properties of the generalized model are discussed and a Bayesian approach for inference is proposed. The generalized model is demonstrated with applications to three data sets taken from environmental science and economics. We also consider a comparison among the quadratic bent-cable, generalized bent-cable and piecewise linear models in terms of goodness of fit in analyzing both real-world and simulated data. This study suggests that the proposed generalization of the bent-cable model can be valuable in adequately describing changepoint data that exhibit either an abrupt or gradual transition over time.
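The quadratic-bend (original) bent-cable mean function, the form the paper generalizes, can be written down directly: two linear phases joined by a quadratic bend of half-width gamma centred at the changepoint tau. Parameter values below are illustrative.

```python
import numpy as np

def bent_cable(t, b0, b1, b2, tau, gamma):
    """Bent-cable mean: f(t) = b0 + b1*t + b2*q(t), where q is 0 before the
    bend, quadratic inside it, and (t - tau) after it."""
    t = np.asarray(t, dtype=float)
    q = np.where(t < tau - gamma, 0.0,
        np.where(t > tau + gamma, t - tau,
                 (t - tau + gamma) ** 2 / (4.0 * gamma)))
    return b0 + b1 * t + b2 * q

t = np.linspace(0, 10, 101)
y = bent_cable(t, b0=1.0, b1=0.5, b2=-1.0, tau=5.0, gamma=2.0)
print(y[0], y[-1])
```

The incoming slope is b1 and the outgoing slope is b1 + b2; as gamma shrinks toward zero the curve approaches a piecewise linear (abrupt) changepoint, which is the abrupt-versus-gradual distinction in the abstract.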

20.
In this paper, asymptotic properties of the Kruskal-Wallis test in the one-way analysis of variance model and of the Friedman test in the two-way classification model are investigated under alternatives in which the treatment effects are random. It is shown that the asymptotic distribution of each statistic is the same as that of a mixture of central chi-squared variables. Asymptotic comparisons of the tests with their parametric competitors are also performed.
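Both statistics are available off the shelf; a minimal sketch with simulated treatment effects (group sizes and shifts are illustrative):

```python
import numpy as np
from scipy.stats import kruskal, friedmanchisquare

rng = np.random.default_rng(6)

# One-way layout: three independent treatment groups (Kruskal-Wallis).
g1, g2, g3 = (rng.normal(loc=m, size=15) for m in (0.0, 0.0, 1.0))
h, p_kw = kruskal(g1, g2, g3)

# Two-way layout: three treatments measured on the same 15 blocks (Friedman).
t1, t2, t3 = (rng.normal(loc=m, size=15) for m in (0.0, 0.5, 1.0))
chi2, p_fr = friedmanchisquare(t1, t2, t3)
print(h, chi2)
```

Under the paper's random-effects alternatives, the null chi-squared reference for each statistic is replaced by a mixture of central chi-squared distributions.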
