Similar Literature
20 similar documents retrieved.
1.
In this work, we define a new method of ranked set sampling (RSS) suited to situations in which the characteristic Y of primary interest is jointly distributed with an auxiliary characteristic X that can be measured on any number of units, so that only units having record values on X are ranked and retained for measurement of Y. We call this scheme concomitant record ranked set sampling (CRRSS). Based on observations obtained by CRRSS, we propose estimators of the parameters associated with Y that are applicable to a large class of distributions, namely the Morgenstern family. We illustrate the application of CRRSS and our estimation technique when the underlying distribution is the Morgenstern-type bivariate logistic distribution. A primary data set collected by the CRRSS method is presented and used to illustrate the results developed in this work.
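A minimal sketch of the record-based selection step, assuming Morgenstern (FGM) dependence with standard logistic marginals purely for illustration; the function name and the value alpha=0.75 are hypothetical, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def rfgm_logistic(n, alpha):
    """(X, Y) from the Morgenstern (FGM) copula with standard logistic
    marginals, via conditional inversion: solve C(v | u) = w for v,
    where C(v|u) = v + a v (1 - v) with a = alpha (1 - 2u)."""
    u, w = rng.uniform(size=(2, n))
    a = alpha * (1 - 2 * u)
    safe_a = np.where(np.abs(a) < 1e-12, 1.0, a)
    disc = np.sqrt((1 + a) ** 2 - 4 * a * w)
    v = np.where(np.abs(a) < 1e-12, w, ((1 + a) - disc) / (2 * safe_a))
    quantile = lambda p: np.log(p / (1 - p))   # standard logistic inverse CDF
    return quantile(u), quantile(v)

x, y = rfgm_logistic(1000, alpha=0.75)

# CRRSS selection: retain Y only for units whose X is a new upper record
prev_max = np.maximum.accumulate(np.concatenate([[-np.inf], x[:-1]]))
records = x > prev_max
y_crrss = y[records]
print(records.sum(), "record units retained for measurement of Y")
```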

2.
In this paper, we propose several approaches to estimating the parameters of the periodic first-order integer-valued autoregressive process with period T (PINAR(1)_T) in the presence of missing data. Using the incomplete data, we propose two approaches based on the conditional expectation and the conditional likelihood to estimate the parameters of interest. We then study three kinds of imputation methods for the missing data. The performances of these approaches are compared via simulations.
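A small sketch of the process itself, assuming Poisson innovations (the abstract does not state the innovation distribution) and illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_pinar1(alpha, lam, n, x0=0):
    """Simulate a periodic INAR(1): X_t = alpha_t o X_{t-1} + eps_t,
    where o is binomial thinning and eps_t ~ Poisson(lam_t), with the
    parameters cycling with period T = len(alpha)."""
    T = len(alpha)
    x = np.empty(n, dtype=int)
    prev = x0
    for t in range(n):
        s = t % T
        surv = rng.binomial(prev, alpha[s])   # thinning alpha_s o X_{t-1}
        x[t] = surv + rng.poisson(lam[s])     # periodic innovation
        prev = x[t]
    return x

x = simulate_pinar1(alpha=[0.3, 0.6, 0.4], lam=[1.0, 2.0, 1.5], n=500)
# conditional-expectation imputation of a missing X_t given X_{t-1}:
# E[X_t | X_{t-1}] = alpha_t * X_{t-1} + lam_t
```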

3.
Consider a finite population of size N with T possible realizations for each population unit. In reality the realizations may represent temporal, geographic or physical variations of the population unit. The paper provides design-based unbiased estimates for several population parameters of interest. Both simple random sampling and stratified sampling are considered. Some comparisons are given. An empirical study is also included with natural population data.

4.
This article addresses some of the issues that arise with the Dynamic Conditional Correlation (DCC) model. It is proven that the DCC large system estimator can be inconsistent, and that the traditional interpretation of the DCC correlation parameters can result in misleading conclusions. Here, we suggest a more tractable DCC model, called the cDCC model. The cDCC model allows for a large system estimator that is heuristically proven to be consistent. Sufficient stationarity conditions for cDCC processes of interest are established. The empirical performances of the DCC and cDCC large system estimators are compared via simulations and applications to real data.
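A sketch of the two correlation recursions, assuming standardized residuals are already available; the simple moment target S used here is an illustration (the paper's large-system estimator of the target differs in detail):

```python
import numpy as np

def dcc_correlations(eps, a, b, cdcc=False):
    """Conditional correlation paths for standardized residuals eps (T x k).
    DCC : Q_t = (1-a-b) S + a e_{t-1} e_{t-1}' + b Q_{t-1}
    cDCC: same recursion, but with e*_t = diag(Q_t)^{1/2} e_t in place of
    e_t (Aielli's correction), which restores a moment interpretation."""
    T, k = eps.shape
    S = np.corrcoef(eps.T)            # simple moment target, for illustration
    Q = S.copy()
    R = np.empty((T, k, k))
    for t in range(T):
        d = np.sqrt(np.diag(Q))
        R[t] = Q / np.outer(d, d)                  # correlation from Q_t
        e = d * eps[t] if cdcc else eps[t]         # cDCC rescales before updating
        Q = (1 - a - b) * S + a * np.outer(e, e) + b * Q
    return R
```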

5.
Failure time models are considered when there is a subpopulation of individuals that is immune, or not susceptible, to the event of interest. Such models are of considerable interest in biostatistics. The most common approach is to postulate a proportion p of immunes or long-term survivors and to use a mixture model [5]. This paper introduces the defective inverse Gaussian model as a cure model and examines the use of the Gibbs sampler together with a data augmentation algorithm to study Bayesian inference for both the cured fraction and the regression parameters. The results of the Bayesian and likelihood approaches are illustrated on two real data sets.
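A minimal sketch of the standard mixture cure model the abstract mentions, with a Weibull latency distribution assumed here purely for illustration (the paper itself uses a defective inverse Gaussian instead of a mixture):

```python
import numpy as np
from scipy.stats import weibull_min

def cure_survival(t, p, shape, scale):
    """Population survival under the mixture cure model
    S(t) = p + (1 - p) * S0(t): a proportion p is immune, the rest
    follow the latency survival S0 (Weibull here, for illustration)."""
    return p + (1 - p) * weibull_min.sf(t, shape, scale=scale)

t = np.linspace(0, 10, 6)
print(cure_survival(t, p=0.3, shape=1.5, scale=2.0))  # plateaus at p as t grows
```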

6.
In this paper, we adapt recently developed simulation-based sequential algorithms to the Bayesian analysis of discretely observed diffusion processes. The estimation framework involves the introduction of m−1 latent data points between every pair of observations. Sequential MCMC methods are then used to sample the posterior distribution of the latent data and the model parameters on-line. The method is applied to the estimation of parameters in a simple stochastic volatility (SV) model of the U.S. short-term interest rate. We also provide a simulation study to validate our method, using synthetic data generated by the SV model with parameters calibrated to match weekly observations of the U.S. short-term interest rate.
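A toy sketch of the augmentation grid, assuming hypothetical mean-reverting short-rate dynamics; a real sequential-MCMC scheme would reweight or accept such proposals against the conditioning on the next observation rather than simply pinning the endpoint:

```python
import numpy as np

rng = np.random.default_rng(1)

def euler_fine_path(x0, x1, delta, m, mu, sigma):
    """A naive proposal for the m-1 latent points between observations x0 and
    x1: forward Euler steps of dX = mu(X) dt + sigma(X) dW on a grid of
    spacing delta/m, with the endpoint fixed at the actual observation x1."""
    h = delta / m
    path = [x0]
    for _ in range(m - 1):
        x = path[-1]
        path.append(x + mu(x) * h + sigma(x) * np.sqrt(h) * rng.normal())
    path.append(x1)
    return np.array(path)

# hypothetical square-root short-rate dynamics, illustrative parameter values
mu = lambda x: 0.5 * (0.05 - x)
sigma = lambda x: 0.1 * np.sqrt(max(x, 1e-12))
print(euler_fine_path(0.040, 0.045, delta=1 / 52, m=5, mu=mu, sigma=sigma))
```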

7.
In this work, we study D_s-optimal design for Kozak's tree taper model. The approximate D_s-optimal designs are found to be invariant to tree size, which provides grounds for constructing a general replication-free D_s-optimal design. Although the designs do not depend on the parameter value p of Kozak's model, they are sensitive to the values of the s×1 subset parameter vector of the model. The 12-point replication-free design suggested in this study (with 91% efficiency) is expected to reduce the cost and time of data collection and, more importantly, to estimate the subset parameters of interest precisely.
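For context, the standard D_s-optimality criterion compares candidate designs by the ratio of determinants below; this generic helper is an illustration and does not encode Kozak's model itself:

```python
import numpy as np

def ds_criterion(M, s):
    """D_s-optimality criterion for the first s parameters: a design is
    D_s-optimal when it maximizes det(M) / det(M22), where M is the full
    Fisher information matrix of the design and M22 is the block for the
    remaining (nuisance) parameters."""
    M22 = M[s:, s:]
    return np.linalg.det(M) / np.linalg.det(M22)
```

Candidate designs are then ranked by evaluating this ratio on their information matrices.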

8.
In queuing theory, a major interest of researchers is studying the formation and behavior of queues and analyzing their performance characteristics, particularly the traffic intensity, defined as the ratio between the arrival rate and the service rate. How these parameters can be estimated by statistical inference is the problem treated here. This article aims to obtain better Bayesian estimates for the traffic intensity of M/M/1 queues, which, in Kendall notation, are Markovian single-server queues with infinite capacity. The Jeffreys prior is proposed to obtain the posterior and predictive distributions of some parameters of interest. Samples are obtained through simulation and some performance characteristics are analyzed. Bayes factors indicate that the Jeffreys prior is competitive among informative and non-informative prior distributions and performs best in many of the cases tested.
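A minimal sketch of a Jeffreys-prior posterior for the traffic intensity, assuming the data are observed interarrival and service times (the paper's exact sampling scheme may differ); the rates 0.8 and 1.0 are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical observed data from an M/M/1 queue
arr = rng.exponential(1 / 0.8, size=200)   # interarrival times, true lambda = 0.8
srv = rng.exponential(1 / 1.0, size=200)   # service times, true mu = 1.0

# Jeffreys prior pi(rate) ~ 1/rate for an exponential sample gives the
# posterior rate | data ~ Gamma(n, sum of observations)  (shape, rate form)
lam = rng.gamma(len(arr), 1 / arr.sum(), size=10_000)
mu = rng.gamma(len(srv), 1 / srv.sum(), size=10_000)
rho = lam / mu                             # posterior draws of traffic intensity

print(rho.mean(), np.quantile(rho, [0.025, 0.975]))
print((rho < 1).mean())                    # posterior probability of stability
```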

9.
Extended Weibull type distribution and finite mixture of distributions
An extended form of the Weibull distribution is suggested which has two shape parameters (m and δ). The additional shape parameter δ not only allows the extended Weibull distribution to be expressed as an exact mixture of distributions under certain conditions, but also gives the density function extra flexibility over the positive range. The shape of the density function of the extended Weibull type distribution is shown for various parameter values, which may be of some interest to Bayesians. Statistical properties such as the hazard rate function, mean residual function and rth moment are given explicitly. The proposed extended Weibull distribution is used to derive exact forms of two-, three- and k-component mixtures of distributions. With the help of a real data set, the usefulness of the mixture Weibull type distribution is illustrated using a Markov chain Monte Carlo (MCMC) Gibbs sampling approach.
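For orientation, a k-component Weibull mixture log-likelihood in its standard form; this is the generic mixture, not the paper's specific extended-Weibull parametrization:

```python
import numpy as np
from scipy.stats import weibull_min

def weibull_mixture_loglik(x, weights, shapes, scales):
    """Log-likelihood of a standard k-component Weibull mixture
    f(x) = sum_j w_j * Weibull(x; m_j, theta_j), with sum_j w_j = 1."""
    dens = sum(w * weibull_min.pdf(x, m, scale=th)
               for w, m, th in zip(weights, shapes, scales))
    return np.log(dens).sum()
```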

10.
Linear mixed models are widely used when multiple correlated measurements are made on each unit of interest. In many applications, the units may form several distinct clusters, and such heterogeneity can be more appropriately modelled by a finite mixture linear mixed model. The classical estimation approach, in which both the random effects and the error parts are assumed to follow normal distribution, is sensitive to outliers, and failure to accommodate outliers may greatly jeopardize the model estimation and inference. We propose a new mixture linear mixed model using multivariate t distribution. For each mixture component, we assume the response and the random effects jointly follow a multivariate t distribution, to conveniently robustify the estimation procedure. An efficient expectation conditional maximization algorithm is developed for conducting maximum likelihood estimation. The degrees of freedom parameters of the t distributions are chosen data adaptively, for achieving flexible trade-off between estimation robustness and efficiency. Simulation studies and an application on analysing lung growth longitudinal data showcase the efficacy of the proposed approach.
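A sketch of a single mixture component under the joint-t assumption, using the familiar gamma scale-mixture representation of the multivariate t; the design (random intercept, linear time trend) and all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_t_lmm(n_units, n_obs, beta, sd_b, sd_e, df):
    """Simulate one component of a t linear mixed model: with a shared
    scale w_i ~ Gamma(df/2, rate df/2) per unit, (b_i, e_i) | w_i are
    normal divided by sqrt(w_i), so the response and random effects are
    jointly multivariate t with df degrees of freedom."""
    t_grid = np.arange(n_obs)                        # within-unit time covariate
    X = np.column_stack([np.ones(n_obs), t_grid])
    ys = []
    for _ in range(n_units):
        w = rng.gamma(df / 2, 2 / df)                # shared scale -> heavy tails
        b = rng.normal(0, sd_b) / np.sqrt(w)         # random intercept
        e = rng.normal(0, sd_e, n_obs) / np.sqrt(w)  # within-unit errors
        ys.append(X @ beta + b + e)
    return np.array(ys)

y = simulate_t_lmm(n_units=50, n_obs=6, beta=np.array([1.0, 0.5]),
                   sd_b=1.0, sd_e=0.5, df=4)
```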

11.
Multivariate normal distribution approaches for dependently truncated data
Many statistical methods for truncated data rely on the independence assumption regarding the truncation variable. In many application studies, however, the dependence between a variable X of interest and its truncation variable L plays a fundamental role in modeling data structure. For truncated data, typical interest is in estimating the marginal distributions of (L, X) and often in examining the degree of the dependence between X and L. To relax the independence assumption, we present a method of fitting a parametric model on (L, X), which can easily incorporate the dependence structure on the truncation mechanisms. Focusing on a specific example for the bivariate normal distribution, the score equations and Fisher information matrix are provided. A robust procedure based on the bivariate t-distribution is also considered. Simulations are performed to examine finite-sample performances of the proposed method. Extension of the proposed method to doubly truncated data is briefly discussed.
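A sketch of the truncated bivariate normal log-likelihood, assuming the usual left-truncation scheme in which a pair (L, X) is observed only when L ≤ X; maximizing it (e.g., with scipy.optimize.minimize) gives the dependent-truncation fit:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def trunc_bvn_loglik(theta, l, x):
    """Log-likelihood for left-truncated bivariate normal data: each density
    term is divided by the inclusion probability
    c = P(L <= X) = Phi((muX - muL) / sd(X - L))."""
    muL, muX, sL, sX, rho = theta
    cov = [[sL**2, rho * sL * sX], [rho * sL * sX, sX**2]]
    logf = multivariate_normal.logpdf(np.column_stack([l, x]), [muL, muX], cov)
    c = norm.cdf((muX - muL) / np.sqrt(sL**2 + sX**2 - 2 * rho * sL * sX))
    return logf.sum() - len(l) * np.log(c)
```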

12.
The authors consider a special case of inference in the presence of nuisance parameters. They show that when the orthogonalized score function is a function of a statistic S, no Fisher information for the interest parameter is lost by using the marginal distribution of S rather than the full distribution of the observations. Therefore, no information for the interest parameter is recovered by conditioning on an ancillary statistic, and information will be lost by conditioning on an approximate ancillary statistic. This is the case for regular multivariate exponential families when the interest parameter is a subvector of the expectation parameter and the statistic is the maximum likelihood estimate of the interest parameter. Several examples are considered, including the 2 × 2 table.
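In standard notation, with interest parameter ψ and nuisance parameter λ, the orthogonalized (efficient) score referred to here and the corresponding partial information are

```latex
\ell^{*}_{\psi} = \ell_{\psi} - i_{\psi\lambda}\, i_{\lambda\lambda}^{-1}\, \ell_{\lambda},
\qquad
i_{\psi\psi\cdot\lambda} = i_{\psi\psi} - i_{\psi\lambda}\, i_{\lambda\lambda}^{-1}\, i_{\lambda\psi}.
```

The result states that when the orthogonalized score is a function of a statistic S, the marginal distribution of S already carries the full partial information i_{ψψ·λ} for ψ.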

13.
Central to many inferential situations is the estimation of rational functions of parameters. The mainstream in statistics and econometrics estimates these quantities based on the plug-in approach without consideration of the main objective of the inferential situation. We propose the Bayesian Minimum Expected Loss (MELO) approach, focusing explicitly on the function of interest and calculating its frequentist variability. Asymptotic properties of the MELO estimator are similar to those of the plug-in approach. Nevertheless, simulation exercises show that our proposal is better in situations characterised by small sample sizes and/or noisy data sets. In addition, we observe in the applications that our approach gives lower standard errors than frequently used alternatives when data sets are not very informative.
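A minimal Monte Carlo sketch for the ratio case, using the Zellner-type weighted quadratic loss w(θ)(δ − g)² with w = θ₂², whose Bayes rule is E[wg]/E[w]; the posterior draws below are hypothetical, for illustration only:

```python
import numpy as np

def melo_ratio(theta1, theta2):
    """MELO estimate of g = theta1/theta2 from posterior draws, under the
    weighted quadratic loss theta2**2 * (d - g)**2: minimizing the posterior
    expected loss in d gives d* = E[theta1*theta2] / E[theta2**2]."""
    return np.mean(theta1 * theta2) / np.mean(theta2**2)

rng = np.random.default_rng(4)
t1 = rng.normal(2.0, 0.5, 50_000)   # hypothetical posterior draws
t2 = rng.normal(1.0, 0.4, 50_000)
print(melo_ratio(t1, t2))           # vs. naive plug-in np.mean(t1) / np.mean(t2)
```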

14.
The hazard function describes the instantaneous rate of failure at time t, given that the individual survives up to t. In applications, covariate effects produce changes in the hazard function. In survival analysis it is of interest to identify whether, and when, a change point in time has occurred. In this work, covariates and censored variables are considered in order to estimate a change point in the Weibull regression hazard model, which is a generalization of the exponential model. For this more general model, maximum likelihood estimators can be obtained for the change point and for the parameters involved. A Monte Carlo simulation study shows that the model can indeed be implemented in practice. An application with clinical trial data from a treatment of chronic granulomatous disease is also included.
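A sketch of the censored log-likelihood for the exponential special case the abstract mentions (constant hazard a before the change point τ and b after it); the Weibull regression version adds shape and covariate terms on top of this:

```python
import numpy as np

def changepoint_exp_loglik(t, delta, a, b, tau):
    """Censored log-likelihood for the change-point hazard
    h(t) = a for t <= tau, b for t > tau:
    sum of delta * log h(t) - H(t), with H the cumulative hazard.
    t: observed times; delta: 1 = event, 0 = censored."""
    H = a * np.minimum(t, tau) + b * np.maximum(t - tau, 0.0)
    h = np.where(t <= tau, a, b)
    return np.sum(delta * np.log(h) - H)
```

In practice one profiles this over a grid of τ values, since (a, b) have closed-form maximizers for fixed τ.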

15.
For right-censored data, the accelerated failure time (AFT) model is an alternative to the commonly used proportional hazards regression model. It is a linear model for the (log-transformed) outcome of interest, and is particularly useful for censored outcomes that are not time-to-event, such as laboratory measurements. We provide a general and easily computable definition of the R² measure of explained variation under the AFT model for right-censored data. We study its behavior under different censoring scenarios and under different error distributions; in particular, we also study its robustness when the parametric error distribution is misspecified. Based on Monte Carlo investigation results, we recommend the log-normal distribution as a robust error distribution to be used in practice for the parametric AFT model when the R² measure is of interest. We apply our methodology to a data set on alcohol consumption during pregnancy from Ukraine.
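For intuition only, one simple variance-decomposition measure on the log-time scale for the AFT model log T = Xβ + σε; this is a plausible uncensored-data version, not necessarily the paper's censoring-adjusted definition:

```python
import numpy as np

def aft_r2(Xb_hat, sigma_hat, err_var=1.0):
    """R^2 = Var(X beta) / (Var(X beta) + sigma^2 Var(eps)) on the log scale.
    err_var is the variance of the standardized error: 1 for normal errors
    (log-normal T), pi^2/3 for logistic errors (log-logistic T)."""
    v = np.var(Xb_hat)
    return v / (v + sigma_hat**2 * err_var)
```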

16.
This article has two objectives. The first and narrower is to formalize the p-value function, which records all possible p-values, each corresponding to a possible value of the scalar parameter of interest for the problem at hand, and to show how this p-value function directly provides full inference information for any corresponding user or scientist. The p-value function provides familiar inference objects: significance levels, confidence intervals, critical values for fixed-level tests, and the power function at all values of the parameter of interest. It thus gives an immediate, accurate and visual summary of inference information for the parameter of interest. We show that the p-value function of the key scalar interest parameter records the statistical position of the observed data relative to that parameter, and we then describe an accurate approximation to that p-value function which is readily constructed.
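A minimal sketch for the simplest case, a normal mean with known standard deviation, showing how a confidence interval is read directly off the p-value function (the numbers are illustrative):

```python
import numpy as np
from scipy.stats import norm

def pvalue_function(xbar, sd, n, mu_grid):
    """One-sided p-value function p(mu) = Phi(sqrt(n) * (xbar - mu) / sd)
    for a normal mean: p(mu) decreases from 1 to 0 as mu increases."""
    return norm.cdf(np.sqrt(n) * (xbar - mu_grid) / sd)

mu = np.linspace(0, 2, 2001)
p = pvalue_function(xbar=1.0, sd=1.0, n=25, mu_grid=mu)

# the central 95% confidence interval is the set of mu with
# 0.025 <= p(mu) <= 0.975, i.e. approximately 1 +/- 1.96/5
ci = mu[(p >= 0.025) & (p <= 0.975)]
print(ci.min(), ci.max())
```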

17.
The Dirichlet process prior allows flexible nonparametric mixture modeling. The number of mixture components is not specified in advance and can grow as new data arrive. However, analyses based on the Dirichlet process prior are sensitive to the choice of the parameters, including an infinite-dimensional distributional parameter G0. Most previous applications have either fixed G0 as a member of a parametric family or treated G0 in a Bayesian fashion, using parametric prior specifications. In contrast, we have developed an adaptive nonparametric method for constructing smooth estimates of G0. We combine this method with a technique for estimating α, the other Dirichlet process parameter, that is inspired by an existing characterization of its maximum-likelihood estimator. Together, these estimation procedures yield a flexible empirical Bayes treatment of Dirichlet process mixtures. Such a treatment is useful in situations where smooth point estimates of G0 are of intrinsic interest, or where the structure of G0 cannot be conveniently modeled with the usual parametric prior families. Analysis of simulated and real-world datasets illustrates the robustness of this approach.
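To fix the roles of α and G0, a truncated stick-breaking draw from DP(α, G0); the standard normal base measure is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def dp_stick_breaking(alpha, g0_sampler, k=500):
    """Truncated stick-breaking draw from DP(alpha, G0):
    v_j ~ Beta(1, alpha), weights w_j = v_j * prod_{i<j}(1 - v_i),
    atoms drawn i.i.d. from the base measure G0."""
    v = rng.beta(1, alpha, size=k)
    w = v * np.cumprod(np.concatenate([[1.0], 1 - v[:-1]]))
    atoms = g0_sampler(k)
    return atoms, w

atoms, w = dp_stick_breaking(alpha=2.0, g0_sampler=lambda k: rng.normal(0, 1, k))
print(w.sum())   # close to 1 for a large truncation level k
```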

18.
Absolute risk is the chance that a person with given risk factors and free of the disease of interest at age a will be diagnosed with that disease in the interval (a, a + τ]. Absolute risk is sometimes called cumulative incidence. Absolute risk is a “crude” risk because it is reduced by the chance that the person will die of competing causes of death before developing the disease of interest. Cohort studies admit flexibility in modeling absolute risk, either by allowing covariates to affect the cause-specific relative hazards or to affect the absolute risk itself. An advantage of cause-specific relative risk models is that various data sources can be used to fit the required components. For example, case–control data can be used to estimate relative risk and attributable risk, and these can be combined with registry data on age-specific composite hazard rates for the disease of interest and with national data on competing hazards of mortality to estimate absolute risk. Family-based designs, such as the kin-cohort design and collections of pedigrees with multiple affected individuals can be used to estimate the genotype-specific hazard of disease. Such analyses must be adjusted for ascertainment, and failure to take into account residual familial risk, such as might be induced by unmeasured genetic variants or by unmeasured behavioral or environmental exposures that are correlated within families, can lead to overestimates of mutation-specific absolute risk in the general population.
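A numerical sketch of the crude absolute risk integral with competing mortality; the constant hazards used in the example are hypothetical:

```python
import numpy as np

def absolute_risk(a, tau, h1, h2, step=0.01):
    """Crude absolute risk over (a, a + tau]:
    AR = int_a^{a+tau} h1(t) * exp(-int_a^t [h1(u) + h2(u)] du) dt,
    where h1 is the cause-specific hazard of the disease of interest and
    h2 the hazard of competing mortality (Riemann approximation)."""
    t = np.arange(a, a + tau, step)
    allcause = h1(t) + h2(t)
    H = np.concatenate([[0.0], np.cumsum(allcause * step)[:-1]])  # cum. hazard from a
    return float(np.sum(h1(t) * np.exp(-H) * step))

# illustrative constant hazards: disease 0.002/yr, competing mortality 0.010/yr
print(absolute_risk(50, 10, lambda t: np.full_like(t, 0.002),
                    lambda t: np.full_like(t, 0.010)))   # ~0.0188
```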

19.
In this paper, we present a Bayesian analysis of the bivariate exponential distribution of Block and Basu (1974), assuming different prior densities for the parameters of the model and using Laplace's method to obtain approximate marginal posterior distributions and posterior moments of interest. We also find approximate Bayes estimators for the reliability of two-component systems at a specified time t0, considering series and parallel systems. We illustrate the proposed methodology with a generated data set.
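A Monte Carlo sketch of the two system reliabilities, simulating from the Block–Basu distribution via its characterization as the absolutely continuous part of the Marshall–Olkin distribution; the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

def rblock_basu(n, l1, l2, l12):
    """Draws from the Block-Basu bivariate exponential: simulate Marshall-Olkin
    pairs (min(U1, U12), min(U2, U12)) from independent exponentials and
    reject the singular ties X == Y."""
    out = []
    while len(out) < n:
        u1, u2, u12 = rng.exponential([1 / l1, 1 / l2, 1 / l12])
        x, y = min(u1, u12), min(u2, u12)
        if x != y:                      # drop the singular component
            out.append((x, y))
    return np.array(out)

xy = rblock_basu(20_000, l1=1.0, l2=1.5, l12=0.5)
t0 = 0.5
print("series   R(t0):", (xy.min(axis=1) > t0).mean())   # P(min(X, Y) > t0)
print("parallel R(t0):", (xy.max(axis=1) > t0).mean())   # P(max(X, Y) > t0)
```

Feeding posterior parameter draws through such a simulator gives Monte Carlo analogues of the Laplace-approximated Bayes estimators.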

20.
The two-parameter Gamma distribution is widely used for modeling lifetime distributions in reliability theory. There is much literature on inference for the individual parameters of the Gamma distribution, namely the shape parameter k and the scale parameter θ, when the other parameter is known. Usually, however, reliability professionals are chiefly interested in statistical inference about the mean lifetime μ, which for the Gamma distribution equals the product θk. The problem of inference on the mean μ when both θ and k are unknown has received less attention in the literature. In this paper we review the existing methods for interval estimation of μ. A comparative study indicates that the existing methods are either too approximate, yielding less reliable confidence intervals, or computationally quite complicated, requiring advanced computing facilities. We propose a new simple method for interval estimation of the Gamma mean and compare its performance with the existing methods. The comparative study shows that the newly proposed, computationally simple optimum power normal approximation method works best even for small sample sizes.
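A simple parametric-bootstrap baseline for an interval on μ = kθ, shown only as a point of comparison and not as the paper's power normal approximation method; the sample below is synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.gamma(2.0, 3.0, size=30)   # hypothetical lifetime sample, true mean = 6

# fit with location fixed at 0, then bootstrap the mean k*theta parametrically
k_hat, _, theta_hat = stats.gamma.fit(x, floc=0)
boot = [np.mean(rng.gamma(k_hat, theta_hat, size=x.size)) for _ in range(4000)]
print(np.quantile(boot, [0.025, 0.975]))   # 95% interval for the Gamma mean
```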
