Similar Literature
1.
This article considers the situation where, in the most general case, each observation in a sample has been "truncated" below at a different, but known, value. Each observation is truncated in the sense that, had it been less than the truncation point, it would not have appeared in the sample. A goodness-of-fit test based on Gnedenko's F statistic is developed to test the hypothesis that the underlying distribution is Pareto against the alternative of lognormality. The chi-square and Kolmogorov tests are adapted to test the hypothesis of lognormality against an unspecified alternative. The application of these techniques to the analysis of insurance claim data is discussed.

2.
Pearson's partial correlation, Kendall's partial tau, and a partial correlation based on Spearman's rho need not be consistent estimators of zero under conditional independence. The ranges of possible limiting values of these correlations are computed under multivariate normality and lognormality. Students should exercise caution when interpreting these partial correlations as a measure of conditional independence.
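The following minimal simulation sketch illustrates the point of the abstract; the data-generating design and variable names are purely illustrative and not taken from the article. Two lognormal variables are conditionally independent given a normal variable, yet the residual-based Pearson partial correlation is clearly nonzero.

```python
# Minimal simulation sketch (not from the article): X and Y are conditionally
# independent given Z, but the data are lognormal, so Pearson's partial
# correlation computed from linear residuals need not be close to zero.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)
x = np.exp(z + rng.normal(size=n))          # lognormal, depends on Z only
y = np.exp(z + rng.normal(size=n))          # lognormal, depends on Z only

def partial_corr(a, b, c):
    """Pearson partial correlation of a and b given c (residual method)."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)   # residual of a ~ c
    rb = b - np.polyval(np.polyfit(c, b, 1), c)   # residual of b ~ c
    return np.corrcoef(ra, rb)[0, 1]

print(partial_corr(x, y, z))   # noticeably nonzero despite conditional independence
```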

3.
It is often desirable to test non-nested hypotheses. Cox (1961, 1962) proposed forming a log-likelihood ratio from their maxima and comparing this value to its expected value under the null hypothesis. Pitfalls exist when Cox's test is applied to the special case of testing normality versus lognormality. Pesaran (1981) and Kotz (1973) pointed out the slow convergence rate of Cox's test. In this paper this fact is re-emphasized; moreover, we propose an alternative likelihood ratio test that remedies the problems arising from negative estimates of the asymptotic variance of Cox's test statistic and is uniformly more powerful than the most commonly used tests.
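As a point of reference, the sketch below computes only the raw log-likelihood ratio from the two maximized fits, which is the ingredient Cox-type tests start from; it is not the corrected or alternative test proposed in the paper.

```python
# Sketch of the raw log-likelihood ratio underlying Cox-type non-nested tests
# of normality vs lognormality: fit both models to the same positive data and
# compare maximized log-likelihoods. This is only the starting ingredient,
# not the corrected procedure proposed in the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=0.7, size=200)   # positive data

# maximized log-likelihood under normality
mu, sd = x.mean(), x.std(ddof=0)
ll_norm = stats.norm.logpdf(x, loc=mu, scale=sd).sum()

# maximized log-likelihood under lognormality (normal fit on the log scale)
lx = np.log(x)
m, s = lx.mean(), lx.std(ddof=0)
ll_lognorm = stats.lognorm.logpdf(x, s=s, scale=np.exp(m)).sum()

print("log-likelihood ratio (lognormal - normal):", ll_lognorm - ll_norm)
```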

4.
The performances of six confidence intervals for estimating the arithmetic mean of a lognormal distribution are compared using simulated data. The first interval considered is based on an exact method and is recommended in U.S. EPA guidance documents for calculating upper confidence limits for contamination data. Two intervals are based on asymptotic properties due to the Central Limit Theorem, and the other three are based on transformations and maximum likelihood estimation. The effects of departures from lognormality on the performance of these intervals are also investigated; the gamma distribution is used to represent such departures. The average width and coverage of each confidence interval are reported for varying mean, variance, and sample size. In the lognormal case the exact interval gives good coverage, but for small sample sizes and large variances the intervals are too wide; in these cases an approximation that incorporates the sampling variability of the sample variance tends to perform better. When the underlying distribution is a gamma distribution, the intervals based on the Central Limit Theorem tend to perform better than those based on lognormal assumptions.
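The sketch below contrasts two of the simpler interval types mentioned here, a naive CLT interval and Cox's approximate interval computed on the log scale. Neither is the exact (Land-type) interval from the EPA guidance; the simulated data are illustrative only.

```python
# Sketch: two approximate confidence intervals for the arithmetic mean of a
# lognormal sample - a naive CLT interval and Cox's method on the log scale.
# Neither is the exact interval cited in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.lognormal(mean=1.0, sigma=1.0, size=30)
z = stats.norm.ppf(0.975)
n = len(x)

# CLT interval for E[X]
clt = (x.mean() - z * x.std(ddof=1) / np.sqrt(n),
       x.mean() + z * x.std(ddof=1) / np.sqrt(n))

# Cox's method: work with y = log(x); E[X] = exp(mu + sigma^2 / 2)
y = np.log(x)
m, s2 = y.mean(), y.var(ddof=1)
half = z * np.sqrt(s2 / n + s2 ** 2 / (2 * (n - 1)))
cox = (np.exp(m + s2 / 2 - half), np.exp(m + s2 / 2 + half))

print("CLT:", clt)
print("Cox:", cox)
```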

5.
Inequality-restricted hypothesis testing methods, which include multivariate one-sided tests, are useful in practice, especially in multiple comparison problems. In practice, multivariate and longitudinal data often contain missing values, since it may be difficult to observe all values for each variable. However, although missing values are common for multivariate data, statistical methods for multivariate one-sided tests with missing values are quite limited. In this article, motivated by a dataset from a recent collaborative project, we develop two likelihood-based methods for multivariate one-sided tests with missing values, where the missing data patterns can be arbitrary and the missing data mechanisms may be non-ignorable. Although non-ignorable missingness is not testable from observed data, statistical methods addressing it can be used for sensitivity analysis and may lead to more reliable results, since ignoring informative missingness can bias the analysis. We analyse the real dataset in detail under various possible missing data mechanisms and report interesting findings that were previously unavailable. We also derive some asymptotic results and evaluate our new tests using simulations.

6.
In this paper we discuss a new theoretical basis for perturbation methods. In developing this theoretical basis, we define ideal measures of data utility and disclosure risk. Maximum data utility is achieved when the statistical characteristics of the perturbed data are the same as those of the original data. Disclosure risk is minimized if providing users with microdata access does not result in any additional information. We show that when the perturbed values of the confidential variables are generated as independent realizations from the distribution of the confidential variables conditioned on the non-confidential variables, they satisfy both the data utility and the disclosure risk requirements. We also discuss the relationship between this theoretical basis and some commonly used methods for generating perturbed values of confidential numerical variables.
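A minimal sketch of the idea follows, under an assumed bivariate normal model that is only an illustration, not the paper's general framework: perturbed values of a confidential variable are drawn independently from its conditional distribution given the non-confidential variable.

```python
# Sketch (assuming a bivariate normal model, for illustration only): perturbed
# values of a confidential variable X are drawn independently from the
# conditional distribution of X given the non-confidential variable S.
import numpy as np

rng = np.random.default_rng(3)
n = 1000
s = rng.normal(size=n)                                # non-confidential variable
x = 2.0 + 0.8 * s + rng.normal(scale=0.5, size=n)     # confidential variable

# Fit the conditional distribution X | S under joint normality
beta = np.cov(x, s)[0, 1] / np.var(s)
alpha = x.mean() - beta * s.mean()
resid_sd = np.std(x - (alpha + beta * s), ddof=2)

# Independent draws from X | S = s_i replace the confidential values
x_perturbed = alpha + beta * s + rng.normal(scale=resid_sd, size=n)

# The released data keep S and the perturbed X: summary statistics of (X, S)
# are preserved up to sampling noise, while the original x is not released.
print(np.corrcoef(x, s)[0, 1], np.corrcoef(x_perturbed, s)[0, 1])
```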

7.
This paper compares methods of estimation for the parameters of a Pareto distribution of the first kind to determine which method provides better estimates when the observations are censored. The unweighted least squares (LS) and maximum likelihood (ML) estimates are presented for both censored and uncensored data. The MLEs are obtained using two methods. In the first, called the ML method, it is shown that the log-likelihood is maximized when the scale parameter estimate is the minimum sample value. In the second, called the modified ML (MML) method, the estimates are found by combining the maximum likelihood value of the shape parameter, expressed in terms of the scale parameter, with the equation for the mean of the first order statistic as a function of both parameters. Since censored data often occur in applications, we study the effects of two types of censoring on the methods of estimation: Type II censoring and multiple random censoring. In this study we consider different sample sizes and several values of the true shape and scale parameters.

Comparisons are made in terms of bias and mean squared error of the estimates. We propose that the LS method be generally preferred over the ML and MML methods for estimating the Pareto parameter γ for all sample sizes, all values of the parameter, and both complete and censored samples. In many cases, however, the ML estimates are comparable in efficiency, so that either estimator can effectively be used. For estimating the parameter α, the LS method is also generally preferred for smaller values of the parameter (α ≤ 4). For larger values of the parameter, and for censored samples, the MML method appears superior to the other methods, with a slight advantage over the LS method. For larger values of α and for censored samples, underestimation can be a problem for all methods.
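For reference, the classical uncensored estimates for a Pareto distribution of the first kind are sketched below. Generic names are used for the shape and scale because the abstract does not spell out which symbol (γ or α) denotes which parameter; the LS variant shown fits the log survival function and may differ in detail from the paper's version.

```python
# Sketch of classical uncensored estimates for a Pareto distribution of the
# first kind (density a * k^a / x^(a+1), x >= k). Generic names are used; the
# mapping to the abstract's gamma and alpha is not specified here.
# MLE: k_hat = min(x), a_hat = n / sum(log(x / k_hat)).
import numpy as np

rng = np.random.default_rng(4)
a_true, k_true, n = 3.0, 2.0, 500
x = k_true * (1.0 - rng.uniform(size=n)) ** (-1.0 / a_true)   # Pareto draws

# Maximum likelihood (the ML method of the abstract, complete sample)
k_ml = x.min()
a_ml = n / np.sum(np.log(x / k_ml))

# Unweighted least squares on the log survival function:
# log S(x) = a*log(k) - a*log(x), fitted to the empirical survival curve
xs = np.sort(x)
emp_surv = 1.0 - (np.arange(1, n + 1) - 0.5) / n
slope, intercept = np.polyfit(np.log(xs), np.log(emp_surv), 1)
a_ls = -slope
k_ls = np.exp(intercept / a_ls)

print((k_ml, a_ml), (k_ls, a_ls))
```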

8.
In this paper we consider the problems of estimation and prediction when the observed data from a lognormal distribution are lower record values, with and without inter-record times. We compute maximum likelihood estimates and asymptotic confidence intervals for the model parameters. We also obtain Bayes estimates and highest posterior density (HPD) intervals using noninformative and informative priors under squared error and LINEX loss functions. Furthermore, for the problem of Bayesian prediction under the one-sample and two-sample frameworks, we obtain predictive estimates and the associated predictive equal-tail and HPD intervals. Finally, for illustration, a real data set is analyzed and a simulation study is conducted to compare the methods of estimation and prediction.

9.
In this article we consider data-sharpening methods for nonparametric regression. In particular, modifications are made to existing methods in two directions. First, we introduce a new tuning parameter that controls the extent to which the data are sharpened, so that the amount of sharpening is adaptive and can be tuned to best suit the data at hand; we call this the sharpening parameter. Second, we develop automatic methods for jointly choosing the value of this sharpening parameter and the values of the other required smoothing parameters. These automatic parameter selection methods are shown to be asymptotically optimal in a well-defined sense. Numerical experiments were also conducted to evaluate their finite-sample performance. To the best of our knowledge, no bandwidth selection method for sharpened nonparametric regression had previously been developed in the literature.

10.
Approximate confidence intervals are given for the lognormal regression problem. The error in the nominal level can be reduced to O(n⁻²), where n is the sample size. An alternative procedure is given which avoids the non-robust assumption of lognormality. This amounts to finding a confidence interval based on M-estimates for a general smooth function of both β and F, where β denotes the parameters of the general (possibly nonlinear) regression problem and F is the unknown distribution function of the residuals. The derived intervals are compared using theory, simulation, and real data sets.

11.
In binary regression, imbalanced data arise when values equal to zero (or one) occur in a proportion significantly greater than the corresponding proportion of ones (or zeros). In this work we evaluate two methods developed to deal with imbalanced data and compare them with the use of asymmetric links. The results of a simulation study show that the correction methods do not adequately correct the bias in the estimation of regression coefficients, and that the models with the power and reverse-power links considered produce better results for certain types of imbalanced data. Additionally, we present an application to imbalanced data, identifying the best model among the various ones proposed. The parameters are estimated using a Bayesian approach based on the Hamiltonian Monte Carlo method with the No-U-Turn Sampler algorithm, and the models are compared using different criteria for model comparison, predictive evaluation, and quantile residuals.

12.
The analysis of extreme values is often required from short series that are biasedly sampled or contain outliers. Data for sea levels at two UK east coast sites and data on athletics records for women's 3000 m track races are shown to exhibit such characteristics. Univariate extreme value methods provide a poor quantification of the extreme values for these data. Using bivariate extreme value methods, we analyse these data jointly with related observations, from neighbouring coastal sites and from 1500 m races respectively. We show that the bivariate methods provide substantial benefits, both in these applications and more generally, with the amount of information gained determined by the degree of dependence, the lengths and amount of overlap of the two series, the homogeneity of the marginal characteristics of the variables, and the presence and type of outliers.

13.
Summary. Motivated by the autologistic model for the analysis of spatial binary data on the two-dimensional lattice, we develop efficient computational methods for calculating the normalizing constant for models for discrete data defined on the cylinder and lattice. Because the normalizing constant is generally unknown analytically, statisticians have developed various ad hoc methods to overcome this difficulty. Our aim is to provide computationally and statistically efficient methods for calculating the normalizing constant so that efficient likelihood-based statistical methods are then available for inference. We extend the so-called transition method to find a feasible computational method of obtaining the normalizing constant for the cylinder boundary condition. To extend the result to the free-boundary condition on the lattice we use an efficient path sampling Markov chain Monte Carlo scheme. The methods are generally applicable to association patterns other than spatial, such as clustered binary data, and to variables taking three or more values described by, for example, Potts models.
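A simplified one-dimensional illustration of the transition (transfer-matrix) idea is sketched below: the normalizing constant of a binary chain with autologistic-type terms is computed exactly by dynamic programming and checked against brute-force enumeration. The paper's cylinder and free-boundary lattice cases are two-dimensional and substantially more involved.

```python
# Simplified 1-D sketch of the transition idea: the normalizing constant of a
# binary chain with unnormalized probability
#   exp(alpha * sum(y_i) + beta * sum(y_i * y_{i+1}))
# computed exactly by dynamic programming, and verified by brute force.
import numpy as np
import itertools

def chain_normalizing_constant(alpha, beta, n):
    states = np.array([0.0, 1.0])
    msg = np.exp(alpha * states)                      # contribution of y_1
    for _ in range(n - 1):
        new_msg = np.zeros(2)
        for j, yj in enumerate(states):               # next site value
            new_msg[j] = np.sum(msg * np.exp(alpha * yj + beta * states * yj))
        msg = new_msg
    return msg.sum()

def brute_force(alpha, beta, n):
    total = 0.0
    for y in itertools.product([0.0, 1.0], repeat=n):
        y = np.array(y)
        total += np.exp(alpha * y.sum() + beta * np.sum(y[:-1] * y[1:]))
    return total

print(chain_normalizing_constant(0.3, 0.7, 10), brute_force(0.3, 0.7, 10))
```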

14.
Longitudinal data are commonly modeled with normal mixed-effects models. Most modeling methods are based on traditional mean regression, which yields non-robust estimates in the presence of extreme values or outliers. Median regression is also not the best choice for estimation, especially for non-normal errors. Compared with conventional modeling methods, composite quantile regression can provide robust estimates even for non-normal errors. In this paper, based on a so-called pseudo composite asymmetric Laplace distribution (PCALD), we develop a Bayesian treatment of composite quantile regression for mixed-effects models. Furthermore, using the location-scale mixture representation of the PCALD, we establish a Bayesian hierarchical model and carry out posterior inference for all unknown parameters and latent variables using a Markov chain Monte Carlo (MCMC) method. Finally, the newly developed procedure is illustrated by Monte Carlo simulations and a case analysis of an HIV/AIDS clinical data set.
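For orientation, the sketch below fits the plain frequentist composite quantile regression objective, a common slope with quantile-specific intercepts minimizing a summed check loss, by numerical optimization. It is not the abstract's Bayesian PCALD/MCMC treatment and omits random effects entirely.

```python
# Sketch of the composite quantile regression objective: common slope,
# quantile-specific intercepts, summed check loss over several quantile
# levels. This is the frequentist objective only; the Bayesian PCALD/MCMC
# method for mixed-effects models in the abstract is not reproduced.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 300
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=n)      # heavy-tailed errors

taus = np.array([0.25, 0.5, 0.75])

def check_loss(u, tau):
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

def cqr_objective(params):
    intercepts, slope = params[:-1], params[-1]
    loss = 0.0
    for b, tau in zip(intercepts, taus):
        loss += check_loss(y - b - slope * x, tau).sum()
    return loss

start = np.concatenate([np.quantile(y, taus), [0.0]])
fit = minimize(cqr_objective, start, method="Nelder-Mead")
print("slope estimate:", fit.x[-1])
```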

15.
The distribution of the maximum or minimum values (extreme values) of a dataset is used especially for natural phenomena such as sea waves, flow discharge, wind speed, and precipitation, and also in many other applied sciences such as reliability studies and the analysis of environmental extreme events. If we can describe this extremal behaviour with statistical formulas, we can estimate how it will behave in the future. In this paper we study the extreme values of maximum precipitation in Zahedan using the generalized extreme value distribution for maxima, with which all maxima of a data set are modeled. We apply four methods to estimate the distribution parameters, maximum likelihood estimation, probability weighted moments, elemental percentile, and quantile least squares, compare the estimates using the average scaled absolute error criterion, and obtain quantile estimates and confidence intervals. In addition, goodness-of-fit tests are described. Finally, the return period of the maximum precipitation is computed.
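A hedged sketch of the maximum likelihood step follows: fit a generalized extreme value distribution to annual maxima with scipy and read off a return level. The simulated data and the 50-year horizon are illustrative, not the Zahedan precipitation series, and only the MLE (not PWM, elemental percentile, or quantile least squares) is shown.

```python
# Sketch of the MLE step for annual maxima: fit a GEV with scipy and compute a
# T-year return level. Note scipy's shape parameter c is the negative of the
# usual GEV shape xi; using the fitted parameters consistently avoids issues.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
annual_max = stats.genextreme.rvs(c=-0.1, loc=20.0, scale=5.0,
                                  size=40, random_state=rng)

c_hat, loc_hat, scale_hat = stats.genextreme.fit(annual_max)

T = 50                                                # return period in years
return_level = stats.genextreme.ppf(1.0 - 1.0 / T, c_hat,
                                    loc=loc_hat, scale=scale_hat)
print("estimated 50-year return level:", return_level)
```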

16.
Missing covariate data combined with censored outcomes pose a challenge in the analysis of clinical data, especially in small-sample settings. Multiple imputation (MI) techniques are popularly used to impute missing covariates, and the data are then analyzed with methods that can handle censoring. MI-based techniques for imputing censored data are also available, but they are rarely used in practice. In the present study, we applied a method based on multiple imputation by chained equations to impute missing covariate values and also to impute censored outcomes using restricted survival time in small-sample settings. The completed data were then analyzed using linear regression models. Simulation studies and a real example of CHD data show that, for data with missing covariate values and censored outcomes, the present method produces better estimates and lower standard errors than either analyzing the data with censored outcomes while excluding cases with missing covariates, or excluding all cases with missing covariate values and censored outcomes (complete case analysis).
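A minimal chained-equations sketch is shown below using scikit-learn's IterativeImputer to fill in missing covariates before fitting a linear model; the restricted-survival-time imputation of censored outcomes used in the paper is not reproduced. For genuine multiple imputation one would repeat the imputation with sample_posterior=True and pool the fits with Rubin's rules.

```python
# Minimal chained-equations sketch: impute missing covariates, then fit a
# linear model. The paper's imputation of censored outcomes via restricted
# survival time is not shown here.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 200
X = rng.normal(size=(n, 3))
y = 1.0 + X @ np.array([0.5, -1.0, 2.0]) + rng.normal(size=n)

# Introduce missing covariate values completely at random
mask = rng.uniform(size=X.shape) < 0.15
X_obs = np.where(mask, np.nan, X)

X_imputed = IterativeImputer(max_iter=10, random_state=0).fit_transform(X_obs)
model = LinearRegression().fit(X_imputed, y)
print(model.coef_)
```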

17.
We propose an ℓ1-regularized likelihood method for estimating the inverse covariance matrix in the high-dimensional multivariate normal model in the presence of missing data. Our method is based on the assumption that the data are missing at random (MAR), which also covers the missing-completely-at-random case. The implementation of the method is non-trivial, as the observed negative log-likelihood is in general a complicated and non-convex function. We propose an efficient EM algorithm for optimization with provable numerical convergence properties. Furthermore, we extend the methodology to handle missing values in a sparse regression context. We demonstrate both methods on simulated and real data.
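The sketch below is only an EM-flavoured approximation of this idea: it alternates conditional-mean imputation of the missing entries with a graphical-lasso fit of the inverse covariance. It is not the paper's algorithm, and a full EM step would also add the conditional covariance of the missing block to the sufficient statistics.

```python
# Simplified sketch: alternate conditional-mean imputation of missing entries
# with a graphical-lasso fit. The conditional-covariance correction of a full
# EM step and the paper's convergence guarantees are omitted.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(8)
n, p = 300, 5
X_full = rng.multivariate_normal(np.zeros(p), np.eye(p) + 0.4, size=n)
X = X_full.copy()
X[rng.uniform(size=X.shape) < 0.1] = np.nan           # MCAR missingness

mu, Sigma = np.zeros(p), np.eye(p)
for _ in range(20):
    X_imp = X.copy()
    for i in range(n):
        mis = np.isnan(X[i])
        if mis.any():
            obs = ~mis
            S_oo = Sigma[np.ix_(obs, obs)]
            S_mo = Sigma[np.ix_(mis, obs)]
            X_imp[i, mis] = mu[mis] + S_mo @ np.linalg.solve(S_oo, X[i, obs] - mu[obs])
    gl = GraphicalLasso(alpha=0.1).fit(X_imp)
    mu, Sigma = X_imp.mean(axis=0), gl.covariance_

print(np.round(gl.precision_, 2))                     # sparse precision estimate
```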

18.
Using a multivariate latent variable approach, this article proposes some new general models for analyzing correlated bounded continuous and categorical (nominal and/or ordinal) responses with and without non-ignorable missing values. First, we discuss regression methods for jointly analyzing continuous, nominal, and ordinal responses, motivated by data from studies of toxicity development. Second, using the beta and Dirichlet distributions, we extend the models so that bounded continuous responses replace the continuous responses. The joint distribution of the bounded continuous, nominal, and ordinal variables is decomposed into a marginal multinomial distribution for the nominal variable and a conditional multivariate joint distribution for the bounded continuous and ordinal variables given the nominal variable. We estimate the regression parameters of the new general location models by maximum likelihood. A sensitivity analysis, based on the maximal normal curvature, is also performed to study the influence of small perturbations of the parameters of the model's missing-data mechanisms. The proposed models are applied to two data sets: BMI, steatosis, and osteoporosis data, and Tehran household expenditure budgets.

19.
Recently developed genotype imputation methods are a powerful tool for detecting untyped genetic variants that affect disease susceptibility in genetic association studies. However, existing imputation methods require individual-level genotype data, whereas in practice often only summary data are available. For example, this may occur because, for reasons of privacy or politics, only summary data are released to the research community at large, or because only summary data are collected, as in DNA pooling experiments. In this article we introduce a new statistical method that can accurately infer the frequencies of untyped genetic variants in these settings, and indeed substantially improve frequency estimates at typed variants in pooling experiments where observations are noisy. Our approach, which predicts each allele frequency using a linear combination of observed frequencies, is statistically straightforward and related to a long history of using linear methods to estimate missing values (e.g. kriging). The main statistical novelty is our approach to regularizing the covariance matrix estimates, and the resulting linear predictors, based on methods from population genetics. We find that, besides being fast and flexible, allowing new problems to be tackled that cannot be handled by existing imputation approaches purpose-built for the genetic context, these linear methods are also very accurate. Indeed, the imputation accuracy of this approach is similar to that obtained by state-of-the-art imputation methods that use individual-level data, but at a fraction of the computational cost.
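The core linear (BLUP/kriging-style) prediction step is sketched below: frequencies at untyped positions are predicted from observed frequencies through a regularized covariance matrix. The population-genetics shrinkage of the article is replaced here by a simple ridge-style regularization, and the panel data are simulated placeholders, purely for illustration.

```python
# Sketch of the linear prediction step: untyped frequencies predicted as a
# linear combination of observed frequencies via a regularized covariance.
# The population-genetics regularization of the article is not reproduced.
import numpy as np

rng = np.random.default_rng(9)
p, n_panel = 8, 50

# Reference panel used to estimate means and covariances (simulated placeholder)
panel = rng.uniform(0.05, 0.95, size=(n_panel, p))
mu = panel.mean(axis=0)
Sigma = np.cov(panel, rowvar=False) + 1e-3 * np.eye(p)   # ridge-regularized

typed = np.array([0, 2, 4, 6])            # positions with observed frequencies
untyped = np.array([1, 3, 5, 7])
f_typed = rng.uniform(0.1, 0.9, size=typed.size)          # observed frequencies

S_uo = Sigma[np.ix_(untyped, typed)]
S_oo = Sigma[np.ix_(typed, typed)]
f_untyped = mu[untyped] + S_uo @ np.linalg.solve(S_oo, f_typed - mu[typed])
print(np.round(f_untyped, 3))
```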

20.
Nonignorable nonresponse is a nonresponse mechanism that depends on the values of the variable subject to nonresponse. When observed data from a binomial distribution suffer missing values under a nonignorable nonresponse mechanism, the binomial parameters become unidentifiable without additional auxiliary information or assumptions. To address this non-identifiability, existing methods are mostly based on log-linear regression models. In this article we focus on the case where the nonresponse is nonignorable and consider using auxiliary data to improve identifiability; furthermore, we derive the maximum likelihood estimator (MLE) of the binomial proportion and its associated variance. We present results for an analysis of real-life data from the SARS study in China. Finally, a simulation study shows that the proposed method gives promising results.
