Similar Articles
 (20 results)
1.
Consider a population of n individuals that move independently among a finite set {1, 2, …, k} of states in a sequence of trials t = 0, 1, 2, …, m, each according to a Markov chain with transition probability matrix P. This paper deals with the problem of estimating P on the basis of aggregate data which record only the numbers of individuals that occupy each of the k states at times t = 0, 1, 2, …, m. Estimation is accomplished using conditional least squares, and asymptotic results are verified for the case n → ∞. A weighted least-squares estimator is introduced and compared with previous estimators. Some comments are made on estimability questions that arise when only aggregate data are available.
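
A minimal sketch of the aggregate-data setting, assuming toy sizes and a hypothetical true matrix: only the state counts at each time are kept, and P is recovered by conditional least squares (the crude clipping/renormalization step is an illustrative stand-in for a proper constrained or weighted estimator).

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, m = 3, 5000, 60                      # states, individuals, time steps (toy sizes)
P_true = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.2, 0.3, 0.5]])

# Simulate the individual chains but record only the aggregate counts N[t, j].
states = rng.integers(0, k, size=n)
N = np.zeros((m + 1, k))
N[0] = np.bincount(states, minlength=k)
for t in range(m):
    cum = P_true[states].cumsum(axis=1)    # row-wise inverse-CDF sampling
    states = (rng.random(n)[:, None] > cum).sum(axis=1)
    N[t + 1] = np.bincount(states, minlength=k)

# Conditional least squares: E[N[t+1] | N[t]] = N[t] @ P, so regress N[1:] on N[:-1].
P_hat, *_ = np.linalg.lstsq(N[:-1], N[1:], rcond=None)
P_hat = np.clip(P_hat, 0, None)
P_hat /= P_hat.sum(axis=1, keepdims=True)  # project back to a stochastic matrix
print(np.round(P_hat, 2))
```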

2.
Abstract

We present methods for modeling and estimating a concurrent functional regression when the predictors and responses are two-dimensional functional datasets. The implementations use spline basis functions, and model fitting is based on smoothing penalties and mixed-model estimation. The proposed methods are implemented in available statistical software, allow the construction of confidence intervals for the bivariate model parameters, and can be applied to completely or sparsely sampled responses. The methods are tested on simulated data and show favorable results in practice. Their usefulness is illustrated in an application to environmental data.

3.
Abstract

In this article, a procedure for comparisons between k (k ≥ 3) successive populations with respect to the variance is proposed when it is reasonable to assume that the variances satisfy a simple ordering. Critical constants required for the implementation of the proposed procedure are computed numerically, and selected values of the computed critical constants are tabulated. The proposed procedure for the normal distribution is extended to comparisons between successive exponential populations with respect to the scale parameter. A comparison between the proposed procedure and its existing competitors is carried out using Monte Carlo simulation. Finally, a numerical example is given to illustrate the proposed procedure.

4.
We consider longitudinal data arising from n individuals over m time periods. Each individual moves according to the same homogeneous Markov chain with s states. If the individual sample paths are observed, so that ‘micro-data’ are available, the transition probability matrix is estimated straightforwardly by maximum likelihood from the transition counts. If only the overall numbers in the various states at each time point are observed, we have ‘macro-data’, and the likelihood function is difficult to compute. For that case a variety of methods have been proposed in the literature. In this paper we propose methods based on generating functions and investigate their performance.
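
For contrast with the macro-data problem, the micro-data maximum likelihood estimator the abstract mentions is just normalized transition counts. A minimal sketch with toy paths (rows with no observed transitions would need special handling):

```python
import numpy as np

def mle_from_paths(paths, s):
    """ML estimate of an s-state transition matrix from fully observed sample paths."""
    C = np.zeros((s, s))
    for path in paths:
        for a, b in zip(path[:-1], path[1:]):
            C[a, b] += 1                   # count observed one-step transitions
    return C / C.sum(axis=1, keepdims=True)

paths = [[0, 1, 1, 2, 0], [1, 2, 2, 0, 1]] # toy micro-data
print(mle_from_paths(paths, s=3))
```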

5.
Abstract

Robust parameter design (RPD) is an effective tool that combines experimental design and strategic modeling to determine the optimal operating conditions of a system. The usual assumptions of RPD are that the experimental data are normally distributed and uncontaminated by outliers, and parameter uncertainties in the response models are generally neglected. However, applying normal-theory modeling methods to skewed data and ignoring parameter uncertainties can create a chain of degradation in the optimization and production phases: a misleading fit, poorly estimated optimal operating conditions, and poor-quality products. This article presents a new approach based on confidence interval (CI) response modeling for the process mean. The proposed interval robust design makes the system median unbiased for the mean and uses the midpoint of the interval as a measure of the location performance response. As an alternative robust estimator for modeling the process variance response, the biweight midvariance is proposed, which is both resistant and robust of efficiency when normality is not met. The results further show that the proposed interval robust design gives a robust solution to skewed and contaminated data. The procedure and its advantages are illustrated using two experimental design studies.
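
A sketch of the biweight midvariance under one common definition (median/MAD centering, tuning constant c = 9); the contaminated example is illustrative, not from the article:

```python
import numpy as np

def biweight_midvariance(x, c=9.0):
    """Resistant, efficiency-robust scale estimate (Tukey's biweight midvariance)."""
    x = np.asarray(x, dtype=float)
    M = np.median(x)
    u = (x - M) / (c * np.median(np.abs(x - M)))   # MAD-scaled residuals
    w = np.abs(u) < 1                              # |u| >= 1 gets zero weight
    num = len(x) * np.sum(((x - M) ** 2 * (1 - u ** 2) ** 4)[w])
    den = np.sum(((1 - u ** 2) * (1 - 5 * u ** 2))[w]) ** 2
    return num / den

clean = np.random.default_rng(1).normal(0, 2, 50)
contaminated = np.append(clean, [25.0, -30.0])     # two gross outliers
print(np.var(contaminated), biweight_midvariance(contaminated))
```

Unlike the sample variance, the biweight midvariance barely moves when the two outliers are added.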

6.
Abstract

In this paper we introduce a new two-parameter discrete distribution that may be useful for modeling count data. Its probability mass function is very simple and may have a zero vertex. We show that the new distribution is a particular solution of a multiple Poisson process and that it is infinitely divisible, and we derive various of its structural properties. We also discuss two methods (moments and maximum likelihood) for estimating the model parameters. The usefulness of the proposed distribution is illustrated by means of real data sets that demonstrate its versatility in practical applications.

7.
Market segmentation is a key concept in marketing research. Identification of consumer segments helps in setting up and improving a marketing strategy; hence there is a need to improve existing segmentation methods and to develop new ones. We introduce two new consumer indicators that can be used as segmentation bases in two-stage methods: the forces and the dfbetas. Both bases express a subject's effect on the aggregate estimates of the parameters in a conditional logit model. Further, individual-level estimates, obtained either by fitting a conditional logit model to each individual separately with maximum likelihood or by hierarchical Bayes (HB) estimation of a mixed logit choice model, and the respondents' raw choices are also used as segmentation bases. In the second stage of the methods the bases are classified into segments with cluster analysis or latent class models. All methods are applied to choice data because of the increasing popularity of choice experiments for analyzing choice behavior. To verify whether two-stage segmentation methods can compete with a one-stage approach, a latent class choice model is estimated as well. A simulation study reveals the superiority of the two-stage method that clusters the HB estimates and of the one-stage latent class choice model. Additionally, very good results are obtained for two-stage latent class cluster analysis of the choices as well as for the two-stage methods clustering the forces, the dfbetas and the choices.
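
A minimal sketch of the second stage, assuming synthetic individual-level coefficients standing in for HB posterior means (or forces/dfbetas) and k-means as the clustering step:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Hypothetical individual-level estimates: one row per respondent,
# one column per conditional-logit attribute coefficient.
betas = np.vstack([rng.normal([1.0, -0.5, 0.2], 0.3, (60, 3)),
                   rng.normal([-0.8, 0.9, -0.4], 0.3, (40, 3))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(betas)
print(np.bincount(km.labels_))            # segment sizes
print(np.round(km.cluster_centers_, 2))   # segment-level preference profiles
```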

8.
Abstract

In this paper, we propose the use of the data cloning (DC) approach to estimate parameter-driven zero-inflated Poisson and negative binomial models for time series of counts. The data cloning algorithm obtains the familiar maximum likelihood estimators and their standard errors via a fully Bayesian estimation. This provides computational ease as well as inferential tools, such as confidence intervals and diagnostic methods, which are otherwise not readily available for parameter-driven models. To illustrate the performance of the proposed method, we use Monte Carlo simulations and real data on asthma-related emergency department visits in the Canadian province of Ontario.
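
The mechanics of data cloning can be shown without MCMC on a conjugate toy model (a plain Poisson rate, not the paper's parameter-driven models): replicating the data K times makes the posterior mean approach the MLE, and K times the posterior variance approach the MLE's asymptotic variance.

```python
import numpy as np

y = np.array([2, 0, 3, 1, 4, 2, 1])      # toy counts
a, b = 0.5, 0.5                           # Gamma(a, b) prior on the Poisson rate
n, K = len(y), 50                         # K = number of clones

a_post = a + K * y.sum()                  # Gamma posterior after cloning the data K times
b_post = b + K * n
post_mean = a_post / b_post
post_var = a_post / b_post ** 2

print(post_mean, y.mean())                # posterior mean ~ MLE (sample mean)
print(K * post_var, y.mean() / n)         # K * posterior variance ~ Var of the MLE
```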

9.
Abstract

In risk assessment, it is often desired to make inferences on the minimum dose levels (benchmark doses, or BMDs) at which a specific benchmark risk (BMR) is attained. The estimation of and inference on BMDs are well understood in the case of an adverse response to a single-exposure agent. However, the theory of finding BMDs and making inferences on them is much less developed for cases where the adverse effect of two hazardous agents is studied simultaneously. Deutsch and Piegorsch [2012. Benchmark dose profiles for joint-action quantal data in quantitative risk assessment. Biometrics 68(4):1313–22] proposed a benchmark modeling paradigm in the dual-exposure setting, adapted from the single-exposure setting, and developed a strategy for conducting full benchmark analysis with joint-action quantal data; they further extended the proposed benchmark paradigm to continuous response outcomes [Deutsch, R. C., and W. W. Piegorsch. 2013. Benchmark dose profiles for joint-action continuous data in quantitative risk assessment. Biometrical Journal 55(5):741–54]. In their 2012 article, Deutsch and Piegorsch worked exclusively with the complementary log link for modeling the risk with quantal data. The focus of the current paper is on the logit link; in particular, we consider an Abbott-adjusted [Abbott, W. S. 1925. A method of computing the effectiveness of an insecticide. Journal of Economic Entomology 18(2):265–7] log-logistic model for the analysis of quantal data with nonzero background response. We discuss the estimation of the benchmark profile (BMP), a collection of benchmark points which induce the prespecified BMR, and propose different methods for building benchmark inferences in studies involving two hazardous agents. We perform Monte Carlo simulation studies to evaluate the characteristics of the confidence limits. An example is given to illustrate the use of the proposed methods.
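
For intuition, a single-exposure simplification (hypothetical parameters, extra-risk definition) of the Abbott-adjusted log-logistic model; the paper's benchmark profile extends this to two agents jointly.

```python
import numpy as np

def risk(d, gamma, b0, b1):
    """Abbott-adjusted log-logistic: background gamma plus a dose effect."""
    return gamma + (1 - gamma) / (1 + np.exp(-(b0 + b1 * np.log(d))))

def bmd(bmr, b0, b1):
    """Dose whose extra risk (R(d) - R(0)) / (1 - R(0)) equals the benchmark risk."""
    return np.exp((np.log(bmr / (1 - bmr)) - b0) / b1)

gamma, b0, b1 = 0.05, -4.0, 1.5           # hypothetical fitted values
d10 = bmd(0.10, b0, b1)                   # BMD at a 10% benchmark risk
print(d10, risk(d10, gamma, b0, b1))      # risk at the BMD = gamma + (1 - gamma) * 0.10
```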

10.
Stochastic Models, 2013, 29(2–3): 821–846
Abstract

We propose a family of finite approximations for the departure process of a BMAP/MAP/1 queue. The departure process approximations are derived via an exact aggregate solution technique (called ETAQA) applied to M/G/1-type Markov processes. The proposed approximations are indexed by a parameter n (n > 1), which determines the size of the output model as n + 1 block levels of the M/G/1-type process. This output approximation preserves exactly the marginal distribution of the true departure process and the lag correlations of the interdeparture times up to lag n − 2. Experimental results support the applicability of the proposed approximation in traffic-based decomposition of queueing networks.

11.
Abstract

The generalized extreme value (GEV) distribution arises as the limiting distribution for block maxima of size n and is used to model extreme events. However, extreme data may present an excessive number of zeros, making such events difficult to analyze and estimate with the usual GEV distribution. Zero-inflated distributions (ZIDs), in which an inflation parameter w is introduced, are widely known in the literature for modeling data with excess zeros. The present work develops a new approach to analyzing zero-inflated extreme values and applies it to monthly maximum precipitation data, where months without precipitation are recorded as zeros. Inference is carried out under the Bayesian paradigm, and parameter estimation is performed by numerical approximation of the posterior distribution using Markov chain Monte Carlo (MCMC) methods. Time series from several cities in the northeastern region of Brazil were analyzed, some with a predominance of non-rainy months. The results show that this approach yields more accurate results, with better fit measures, than the standard extreme value analysis.
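
One natural form of the zero-inflated GEV likelihood the abstract describes (the paper's exact specification and priors may differ); note that SciPy's genextreme uses shape c = −ξ relative to the usual GEV convention. An MCMC sampler would target the posterior built from this likelihood.

```python
import numpy as np
from scipy.stats import genextreme

def zigev_loglik(params, y):
    """Log-likelihood of w * (point mass at 0) + (1 - w) * GEV(mu, sigma, xi)."""
    w, mu, sigma, xi = params
    zero = (y == 0)
    ll = zero.sum() * np.log(w)
    ll += np.sum(np.log1p(-w) + genextreme.logpdf(y[~zero], -xi, loc=mu, scale=sigma))
    return ll

y = np.array([0., 0., 12.3, 40.1, 0., 55.7, 8.9, 0., 23.4])   # toy monthly maxima
print(zigev_loglik([0.4, 20.0, 15.0, 0.1], y))
```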

12.
Abstract

This paper focuses on inference based on the confidence distributions of the nonparametric regression function and its derivatives, in which dependent inferences are combined by obtaining information about their dependency structure. We first give a motivating example from a production operation system to illustrate the practical relevance of the problems studied in this paper. A goodness-of-fit test for the polynomial regression model is proposed on the basis of combined confidence distribution inference, which reduces to Fisher's combination statistic in some cases. On the basis of the test results, a combined estimator for the pth-order derivative of the nonparametric regression function is provided, together with its large-sample properties. The performance of the proposed test and estimation method is illustrated by three specific examples, and the motivating example is analyzed in detail. The simulated and real data examples illustrate the good performance and practicability of the proposed methods based on confidence distributions.
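
A sketch of Fisher's combination statistic in the independent case the test reduces to (the paper's contribution is handling the dependence among the combined inferences):

```python
import numpy as np
from scipy.stats import chi2

def fisher_combine(pvals):
    """T = -2 * sum(log p_i) ~ chi-squared with 2k df under independent nulls."""
    T = -2.0 * np.sum(np.log(np.asarray(pvals)))
    return T, chi2.sf(T, df=2 * len(pvals))

print(fisher_combine([0.03, 0.20, 0.08]))   # combined statistic and p-value
```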

13.

A computer program that performs ridge analysis on quadratic response surfaces is presented in this paper, the primary goal of which is to seek the estimated optimum operating conditions inside a spherical region of experimentation during process optimization. The computational algorithm is based on trust-region methods from nonlinear optimization and guarantees that the resulting operating conditions are globally optimal without any a priori assumption on the structure of the response function. Under a particular condition, termed the "hard case" in the trust-region literature, the conventional ridge analysis procedure fails to provide a set of acceptable optimum operating settings, yet the proposed algorithm can locate a pair of non-unique global solutions attaining an identical estimated response value. Two illustrative examples taken from the response surface methodology (RSM) literature demonstrate the effectiveness and efficiency of the proposed method.
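
The core computation is the trust-region subproblem: maximize a fitted quadratic on a sphere of radius r by solving the secular equation in the Lagrange multiplier. A sketch for the "easy case" with a hypothetical fitted surface (the paper's algorithm also resolves the hard case, which this bracket-and-root-find approach misses):

```python
import numpy as np
from scipy.optimize import brentq

b0 = 80.0
b = np.array([1.2, -0.8])                  # hypothetical first-order coefficients
B = np.array([[-1.0, 0.4],
              [0.4, -1.5]])                # hypothetical second-order matrix

def x_of_mu(mu):
    # Stationarity of b0 + b'x + x'Bx on ||x|| = r gives (mu*I - B) x = b / 2.
    return 0.5 * np.linalg.solve(mu * np.eye(len(b)) - B, b)

def ridge_point(r):
    lam_max = np.linalg.eigvalsh(B)[-1]
    # ||x(mu)|| decreases for mu > lam_max: bracket the radius and root-find.
    mu = brentq(lambda m: np.linalg.norm(x_of_mu(m)) - r,
                lam_max + 1e-8, lam_max + 1e3)
    return x_of_mu(mu)

x = ridge_point(r=1.0)
print(x, b0 + b @ x + x @ B @ x)           # maximizing settings and predicted response
```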

14.
Abstract

It is well known that ignoring heteroscedasticity in regression analysis adversely affects the efficiency of estimation and renders the usual procedure for constructing prediction intervals inappropriate. In some applications, such as off-line quality control, knowledge of the variance function is also of considerable interest in its own right. Thus the modeling of variance constitutes an important part of regression analysis. A common practice in modeling variance is to assume that a certain function of the variance can be closely approximated by a function of a known parametric form. The logarithm link function is often used even when it does not fit the observed variation satisfactorily, because other alternatives may yield negative estimated variances. In this paper we propose a rich class of link functions for more flexible variance modeling, which alleviates the major difficulty of negative variances. We also suggest an alternative analysis for heteroscedastic regression models that exploits the principle of "separation" discussed in Box (Signal-to-Noise Ratios, Performance Criteria and Transformation. Technometrics 1988, 30, 1–31). The proposed method does not require any distributional assumptions once an appropriate link function for modeling variance has been chosen. Unlike the analysis in Box (1988), the estimated variances and their associated asymptotic variances are found in the original metric (although a transformation has been applied to achieve separation on a different scale), making interpretation of the results considerably easier.

15.
The fused lasso penalizes a loss function by the L1 norm of both the regression coefficients and their successive differences, to encourage sparsity in both. In this paper, we propose a Bayesian generalized fused lasso model based on a normal-exponential-gamma (NEG) prior distribution, placed on the differences of successive regression coefficients. The proposed method enables us to construct a more versatile sparse model than the ordinary fused lasso by using a flexible regularization term. Simulation studies and real data analyses show that the proposed method outperforms the ordinary fused lasso.
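
For reference, the ordinary fused lasso objective that the NEG prior generalizes, as a convex-optimization sketch (penalty weights are arbitrary; this is not the paper's Bayesian method):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n, p = 100, 20
beta_true = np.repeat([0.0, 2.0, 0.0, -1.5], 5)    # piecewise-constant truth
X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(scale=0.5, size=n)

beta = cp.Variable(p)
lam1, lam2 = 0.5, 2.0                              # arbitrary penalty weights
objective = cp.Minimize(cp.sum_squares(y - X @ beta)
                        + lam1 * cp.norm1(beta)                    # sparse coefficients
                        + lam2 * cp.norm1(beta[1:] - beta[:-1]))   # sparse differences
cp.Problem(objective).solve()
print(np.round(beta.value, 2))
```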

16.
Abstract

Continuous-time multi-state models are commonly used to study diseases with multiple stages. Potential risk factors associated with the disease are added to the transition intensities of the model as covariates, but missing covariate measurements arise frequently in practice. We propose a likelihood-based method that deals efficiently with a missing covariate in these models. Our simulation study showed that the method performs well under both "missing completely at random" and "missing at random" mechanisms. We also applied our method to a real dataset, the Einstein Aging Study.

17.
Abstract

In actuarial applications, mixed Poisson distributions are widely used for modelling claim counts, as observed data on the number of claims often exhibit a variance noticeably exceeding the mean. In this study, a new claim number distribution is obtained by mixing the negative binomial parameter p, reparameterized as p = exp(−λ), with a gamma distribution. Basic properties of this new distribution are given. Maximum likelihood estimators of the parameters are calculated using the Newton–Raphson method and a genetic algorithm (GA), and the efficiency of these methods is compared by simulation. A numerical example is provided.
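
A sketch of the mixture construction with hypothetical parameter values: the pmf is the negative binomial pmf with p = exp(−λ), averaged over a gamma density for λ by numerical quadrature.

```python
import numpy as np
from scipy.stats import nbinom, gamma
from scipy.integrate import quad

def mixed_pmf(y, r, a, b):
    """P(Y = y) = E[NB(y; r, p = exp(-lam))] with lam ~ Gamma(shape a, rate b)."""
    integrand = lambda lam: nbinom.pmf(y, r, np.exp(-lam)) * gamma.pdf(lam, a, scale=1 / b)
    return quad(integrand, 0, np.inf)[0]

r, a, b = 2.0, 2.0, 4.0                    # hypothetical parameter values
pmf = [mixed_pmf(y, r, a, b) for y in range(30)]
print(sum(pmf))                            # ~1 (mass beyond y = 29 is negligible here)
```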

18.
Abstract

Weibull mixture models are widely used in a variety of fields for modeling phenomena caused by heterogeneous sources. We focus on circumstances in which the original observations are not available and the data come instead as a grouping of the original observations. We present an EM algorithm for fitting Weibull mixture models to grouped data and propose a bootstrap likelihood ratio test (LRT) for determining the number of subpopulations in a mixture model. The effectiveness of the LRT method is investigated via simulation, and we illustrate the utility of these methods by applying them to two grouped datasets.
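
A sketch of the grouped-data likelihood for a two-component Weibull mixture, maximized directly for brevity (the paper fits it by EM); the data and starting values are toy assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

edges = np.array([0., 1., 2., 3., 5., np.inf])     # grouping interval boundaries
counts = np.array([30, 45, 25, 15, 5])             # toy observation counts per interval

def negloglik(theta):
    w = 1 / (1 + np.exp(-theta[0]))                # mixing weight via logit
    k1, s1, k2, s2 = np.exp(theta[1:])             # shapes/scales via log for positivity
    cdf = lambda t: (w * weibull_min.cdf(t, k1, scale=s1)
                     + (1 - w) * weibull_min.cdf(t, k2, scale=s2))
    return -np.sum(counts * np.log(np.diff(cdf(edges)) + 1e-12))

fit = minimize(negloglik, x0=np.zeros(5), method="Nelder-Mead")
print(fit.x)
```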

19.
The paper reviews a number of data models for aggregate statistical data that have appeared in the computer science literature in the last ten years. After a brief introduction to the data model in general, the fundamental concepts of statistical data are introduced. These are called statistical objects because they are complex data structures (vectors, matrices, relations, time series, etc.) which may have different possible representations (e.g. tables, relations, vectors, pie charts, bar charts, graphs, and so on). For this reason a statistical object is defined by two different types of attribute: a summary attribute, with its own summary type and its own instances (called summary data), and a set of category attributes, which describe the summary attribute. Some conceptual models of statistical data (CSM, SDM4S), some semantic models (SCM, SAM*, OSAM*), and some graphical models (SUBJECT, GRASS, STORM) are also discussed.

20.
Abstract

In this paper, we analyze the SUR Tobit model for three left-censored dependent variables, modeling its nonlinear dependence structure through the one-parameter Clayton copula. For unbiased parameter estimation, we propose an extension of the Inference Function for Augmented Margins (IFAM) method to the trivariate case. Interval estimation for the model parameters using resampling procedures is also discussed. We perform simulation and empirical studies, whose satisfactory results indicate the good performance of the proposed model and methods. Our procedure is illustrated using real data on the consumption of food items (salad dressings, lettuce, tomato) by Americans.
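
A sketch of sampling from the trivariate Clayton copula via the Marshall-Olkin gamma-frailty construction (the dependence model used here; the IFAM estimation itself is not reproduced):

```python
import numpy as np

def rclayton(n, theta, dim=3, seed=0):
    """Draw n samples from a dim-variate Clayton copula, theta > 0."""
    rng = np.random.default_rng(seed)
    V = rng.gamma(1 / theta, 1.0, size=(n, 1))     # shared gamma frailty
    E = rng.exponential(size=(n, dim))
    return (1 + E / V) ** (-1 / theta)             # uniform margins, Clayton dependence

U = rclayton(10000, theta=2.0)                     # theta = 2 gives Kendall's tau = 0.5
print(np.corrcoef(U, rowvar=False).round(2))
```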
