Similar documents
20 similar documents found (search time: 500 ms)
1.
This paper studies estimation in the proportional odds model based on randomly truncated data. The proposed estimators for the regression coefficients include a class of minimum distance estimators defined through a weighted empirical odds function. We investigate asymptotic properties of the proposed estimators, such as consistency and the limiting distribution, under mild conditions. Finite-sample properties are examined through a simulation study comparing several of the estimators in the class. We conclude with an illustration of the proposed method on a well-known AIDS data set.

2.
Consider the problem of estimating the mean of a p (≥3)-variate multi-normal distribution with identity variance-covariance matrix and with unweighted sum of squared error loss. A class of minimax, noncomparable (i.e. no estimate in the class dominates any other estimate in the class) estimates is proposed; the class contains rules dominating the simple James-Stein estimates. The estimates are essentially smoothed versions of the scaled, truncated James-Stein estimates studied by Efron and Morris. Explicit and analytically tractable expressions for their risks are obtained and are used to give guidelines for selecting estimates within the class.
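The plain and positive-part (truncated) James-Stein rules that this class smooths can be sketched as follows. This is a generic illustration of the shrinkage idea only, not the paper's smoothed estimators; the function names are illustrative.

```python
import numpy as np

def james_stein(x):
    # Plain James-Stein shrinkage toward the origin for a p-variate
    # normal mean with identity covariance (requires p >= 3).
    x = np.asarray(x, dtype=float)
    p = x.size
    return (1.0 - (p - 2) / np.dot(x, x)) * x

def james_stein_plus(x):
    # Positive-part (truncated) variant: the shrinkage factor is
    # clipped at zero so the estimate never reverses sign.
    x = np.asarray(x, dtype=float)
    p = x.size
    factor = max(0.0, 1.0 - (p - 2) / np.dot(x, x))
    return factor * x
```

The positive-part rule dominates the plain rule; the estimates studied in the paper smooth out the kink that the truncation introduces.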

3.
The phenotype of a quantitative trait locus (QTL) is often modeled by a finite mixture of normal distributions. If the QTL effect depends on the number of copies of a specific allele one carries, then the mixture model has three components. In this case, the mixing proportions have a binomial structure according to the Hardy–Weinberg equilibrium. In the search for QTL, a significance test of homogeneity against the Hardy–Weinberg normal mixture model alternative is an important first step. The LOD score method, a likelihood ratio test used in genetics, is a favored choice. However, there is not yet a general theory for the limiting distribution of the likelihood ratio statistic in the presence of unknown variance. This paper derives the limiting distribution of the likelihood ratio statistic, which can be described by the supremum of a quadratic form of a Gaussian process. Further, the result implies that the distribution of the modified likelihood ratio statistic is well approximated by a chi-squared distribution. Simulation results show that the approximation has satisfactory precision for the cases considered. We also give a real-data example.

4.
Pharmacokinetic studies are commonly performed using the two-stage approach. The first stage involves estimating pharmacokinetic parameters, such as the area under the concentration versus time curve (AUC), for each subject separately; the second stage uses the individual parameter estimates for statistical inference. This two-stage approach is not applicable in sparse sampling situations where only one sample is available per subject, as in non-clinical in vivo studies. In a serial sampling design, only one sample is taken from each subject. A simulation study was carried out to assess the coverage, power and type I error of seven methods for constructing two-sided 90% confidence intervals for the ratio of two AUCs assessed in a serial sampling design, which can be used to assess bioequivalence in this parameter.
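In a serial sampling design the AUC can be estimated from the mean concentration at each time point, since each time point is sampled in different subjects. The sketch below shows the linear trapezoidal version of this idea and the resulting ratio point estimate; the seven methods compared in the paper differ in how the confidence interval around this ratio is built, which is not reproduced here. Function names are illustrative.

```python
import numpy as np

def auc_from_means(times, mean_conc):
    # Linear trapezoidal AUC computed from the mean concentration at
    # each time point (each mean comes from different subjects in a
    # serial sampling design).
    t = np.asarray(times, dtype=float)
    c = np.asarray(mean_conc, dtype=float)
    return float(np.sum((c[1:] + c[:-1]) * np.diff(t)) / 2.0)

def auc_ratio(times, means_test, means_ref):
    # Point estimate of the ratio of two AUCs (test over reference),
    # the quantity whose 90% confidence interval is used to assess
    # bioequivalence.
    return auc_from_means(times, means_test) / auc_from_means(times, means_ref)
```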

5.
Conditional probability distributions have been commonly used in modeling Markov chains. In this paper we consider an alternative approach based on copulas to investigate Markov-type dependence structures. Based on the realization of a single Markov chain, we estimate the parameters using one- and two-stage estimation procedures. We derive asymptotic properties of the marginal and copula parameter estimators and compare the performance of the estimation procedures using Monte Carlo simulations. At low and moderate dependence levels, the two-stage procedure performs comparably to maximum likelihood estimation. In addition we propose a parametric pseudo-likelihood ratio test for copula model selection under the two-stage procedure. We apply the proposed methods to an environmental data set.

6.
This paper investigates statistical issues that arise in interlaboratory studies known as Key Comparisons when one has to link several comparisons to or through existing studies. An approach to the analysis of such data is proposed using Gaussian distributions with heterogeneous variances. We develop conditions for the set of sufficient statistics to be complete and for the uniqueness of uniformly minimum variance unbiased estimators (UMVUE) of the contrast parametric functions. New procedures are derived for estimating these functions together with estimates of their uncertainty. These estimates lead to associated confidence intervals for the laboratory (or study) contrasts. Several examples demonstrate statistical inference for contrasts based on linkage through the pilot laboratories. Monte Carlo simulation results on the performance of approximate confidence intervals are also reported.

7.
This paper deals with the problem of interval estimation of the scale parameter in the two-parameter exponential distribution subject to Type II double censoring. Based on a Type II doubly censored sample, we construct a class of interval estimators of the scale parameter which are better than the shortest-length affine equivariant interval both in coverage probability and in length. The procedure can be repeated to make further improvements. An extension of the method leads to a smoothly improved confidence interval which improves the interval length with probability one. All improved intervals belong to the class of scale equivariant intervals.

8.
When combining estimates of a common parameter (of dimension d ≥ 1) from independent data sets, as in stratified analyses and meta-analyses, a weighted average with weights 'proportional' to inverse variance matrices is shown to have a minimal variance matrix (a standard fact when d = 1), minimal in the sense that all convex combinations of the coordinates of the combined estimate have minimal variances. Minimum variance for the estimation of a single coordinate of the parameter can therefore be achieved by joint estimation of all coordinates using matrix weights. Moreover, if each estimate is asymptotically efficient within its own data set, then this optimally weighted average, with consistently estimated weights, is shown to be asymptotically efficient in the combined data set and avoids the need to merge the data sets and estimate the parameter in question afresh. This is so whatever additional non-common nuisance parameters may be in the models for the various data sets. A special case of this appeared in Fisher [1925. Theory of statistical estimation. Proc. Cambridge Philos. Soc. 22, 700–725.]: optimal weights are 'proportional' to information matrices, and he argued that sample information should be used as weights rather than expected information, to maintain second-order efficiency of maximum likelihood. A number of special cases have appeared in the literature; we review several of them and give additional special cases, including stratified regression analysis (proportional-hazards, logistic or linear), combination of independent ROC curves, and meta-analysis. A test for homogeneity of the parameter across the data sets is also given.
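The matrix-weighted average described above has a simple closed form: sum the inverse covariance (information) matrices, invert the total, and apply it to the information-weighted sum of the estimates. A minimal sketch, with illustrative names:

```python
import numpy as np

def combine_estimates(estimates, covariances):
    # Weighted average with matrix weights 'proportional' to inverse
    # variance matrices: the minimum-variance combination of
    # independent estimates of a common d-dimensional parameter.
    infos = [np.linalg.inv(np.asarray(V, dtype=float)) for V in covariances]
    total_info = np.sum(infos, axis=0)
    v_comb = np.linalg.inv(total_info)  # variance matrix of the combination
    combined = v_comb @ np.sum(
        [I @ np.asarray(b, dtype=float) for I, b in zip(infos, estimates)],
        axis=0)
    return combined, v_comb
```

For d = 1 this reduces to the familiar inverse-variance weighted mean: two independent estimates 1 and 3, each with variance 1, combine to 2 with variance 0.5.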

9.
We propose optimal procedures to achieve the goal of partitioning k multivariate normal populations into two disjoint subsets with respect to a given standard vector. Definition of good or bad multivariate normal populations is given according to their Mahalanobis distances to a known standard vector as being small or large. Partitioning k multivariate normal populations is reduced to partitioning k non-central Chi-square or non-central F distributions with respect to the corresponding non-centrality parameters depending on whether the covariance matrices are known or unknown. The minimum required sample size for each population is determined to ensure that the probability of correct decision attains a certain level. An example is given to illustrate our procedures.

10.
We consider the problem of testing time series linearity. Existing time domain and spectral domain tests are discussed. A new approach relying on spectral domain properties of a time series under the null hypothesis of linearity is suggested. Under linearity, the normalized bispectral density function Z is a constant. Under the null hypothesis of linearity, properly constructed estimators of 2|Z|2 have a non-central chi-squared distribution with two degrees of freedom and constant non-centrality parameter 2|Z|2. If the null hypothesis is false, the non-centrality parameter is non-constant. This suggests goodness-of-fit tests might be effective in diagnosing non-linearity. Several approaches are introduced.

11.
In this paper, we consider simple random sampling without replacement from a dichotomous finite population. We investigate accuracy of the Normal approximation to the Hypergeometric probabilities for a wide range of parameter values, including the nonstandard cases where the sampling fraction tends to one and where the proportion of the objects of interest in the population tends to the boundary values, zero and one. We establish a non-uniform Berry–Esseen theorem for the Hypergeometric distribution which shows that in the nonstandard cases, the rate of Normal approximation to the Hypergeometric distribution can be considerably slower than the rate of Normal approximation to the Binomial distribution. We also report results from a moderately large numerical study and provide some guidelines for using the Normal approximation to the Hypergeometric distribution in finite samples.
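The comparison the paper studies can be reproduced numerically: compute the exact Hypergeometric CDF and the usual Normal approximation (with continuity correction and the finite-population correction factor (N − n)/(N − 1)) and examine the gap. A minimal sketch; the specific correction choices here are the standard textbook ones, not necessarily those of the paper:

```python
import math

def hypergeom_cdf(k, N, K, n):
    # Exact Hypergeometric CDF: P(X <= k) when n objects are drawn
    # without replacement from N objects, K of which are 'successes'.
    denom = math.comb(N, n)
    lo = max(0, n - (N - K))
    return sum(math.comb(K, j) * math.comb(N - K, n - j)
               for j in range(lo, k + 1)) / denom

def normal_approx_cdf(k, N, K, n):
    # Normal approximation with continuity correction and the
    # finite-population variance correction (N - n)/(N - 1).
    p = K / N
    mu = n * p
    var = n * p * (1 - p) * (N - n) / (N - 1)
    z = (k + 0.5 - mu) / math.sqrt(var)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

For symmetric cases (e.g. N = 20, K = 10, n = 5, k = 2) both give exactly 0.5; the interesting discrepancies appear as n/N → 1 or K/N → 0 or 1, the nonstandard cases studied in the paper.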

12.
We discuss maximum likelihood and estimating equations methods for combining results from multiple studies in pooling projects and data consortia using a meta-analysis model, when the multivariate estimates with their covariance matrices are available. The estimates to be combined are typically regression slopes, often from relative risk models in biomedical and epidemiologic applications. We generalize the existing univariate meta-analysis model and investigate the efficiency advantages of the multivariate methods relative to the univariate ones. We generalize a popular univariate test for between-studies homogeneity to a multivariate test. The methods are applied to a pooled analysis of types of carotenoids in relation to lung cancer incidence from seven prospective studies. In these data, the expected gain in efficiency was evident, sometimes to a large extent. Finally, we study the finite sample properties of the estimators and compare the multivariate ones to their univariate counterparts.

13.
In this paper, we consider the maximum likelihood and Bayes estimation of the scale parameter of the half-logistic distribution based on a multiply Type II censored sample. However, neither the maximum likelihood estimator (MLE) nor the Bayes estimator of the scale parameter exists in explicit form. We consider a simple method of deriving an explicit estimator by approximating the likelihood function and discuss the asymptotic variances of the MLE and approximate MLE. Also, the Laplace approximation (Tierney & Kadane, 1986) is used to obtain the Bayes estimator. Monte Carlo simulation is used to compare the MLE, approximate MLE and Bayes estimates of the scale parameter.

14.
Let X have a gamma distribution with known shape parameter r and unknown scale parameter θ. Suppose it is known that θ ≥ a for some known a > 0. An admissible minimax estimator for scale-invariant squared-error loss is presented. This estimator is the pointwise limit of a sequence of Bayes estimators. Further, the class of truncated linear estimators C = {θ_ρ : θ_ρ(x) = max(a, ρx), ρ > 0} is studied. It is shown that each θ_ρ is inadmissible and that exactly one of them is minimax. Finally, it is shown that Katz's [Ann. Math. Statist., 32, 136–142 (1961)] estimator of θ is not minimax for this loss function. Some further properties of, and comparisons among, these estimators are also presented.
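A truncated linear estimator of this kind is a linear rule ρx clipped from below at the known bound a, so the estimate always respects the restriction θ ≥ a. A minimal sketch (the notation follows the reconstructed class above; the particular ρ in the usage line is an arbitrary illustrative choice, not the minimax one):

```python
def truncated_linear(x, a, rho):
    # Truncated linear estimate of the gamma scale parameter theta:
    # the linear rule rho * x, truncated below at the known lower
    # bound a imposed by the restriction theta >= a.
    return max(a, rho * x)
```

For example, with a = 2 and ρ = 0.5, an observation x = 10 gives the linear value 5, while x = 2 hits the truncation and returns the bound 2.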

15.
The weighted likelihood is a generalization of the likelihood designed to borrow strength from similar populations while making minimal assumptions. If the weights are properly chosen, the maximum weighted likelihood estimate may perform better than the maximum likelihood estimate (MLE). In a previous article, the minimum averaged mean squared error (MAMSE) weights were proposed, and simulations showed that they can outperform the MLE in many cases. In this paper, we study the asymptotic properties of the MAMSE weights. In particular, we prove that the MAMSE-weighted mixture of empirical distribution functions converges uniformly to the target distribution and that the maximum weighted likelihood estimate is strongly consistent. A short simulation illustrates the use of the bootstrap in this context.

16.
R. Van de Ven & N. C. Weber, Statistics, 2013, 47(3-4), 345-352
Upper and lower bounds are obtained for the mean of the negative binomial distribution. These bounds are simple functions of a percentile determined by the shape parameter. The result is then used to obtain a robust estimate of the mean when the shape parameter is known.

17.
This paper establishes consistency and asymptotic distribution theory for the least squares estimate of a vector parameter of non-linear regression with long-range dependent noise. A covariance-based estimate of the memory parameter is proposed. The consistency of the estimate is established.

18.
Recently Jammalamadaka and Mangalam [2003. Non-parametric estimation for middle censored data. J. Nonparametric Statist. 15, 253–265] introduced a general censoring scheme called the "middle-censoring" scheme in a non-parametric setup. In this paper we consider this middle-censoring scheme when the lifetime distribution of the items is exponential and the censoring mechanism is independent and non-informative. In this setup, we derive the maximum likelihood estimator and study its consistency and asymptotic normality properties. We also derive the Bayes estimate of the exponential parameter under a gamma prior. Since a theoretical construction of the credible interval is quite difficult, we propose and implement a Gibbs sampling technique to construct the credible intervals. Monte Carlo simulations are performed to evaluate the small-sample behavior of the proposed techniques. A real data set is analyzed to illustrate the practical application of the proposed methods.

19.
We introduce and study a class of rank-based estimators for the linear model. The estimate may be roughly described as being calculated in the same manner as a generalized M-estimate, but with the residual being replaced by a function of its signed rank. The influence function can thus be bounded, both as a function of the residual and as a function of the carriers. Subject to such a bound, the efficiency at a particular model distribution can be optimized by appropriate choices of rank scores and carrier weights. Such choices are given, with respect to a variety of optimality criteria. We compare our estimates with several others, in a Monte Carlo study and on a real data set from the literature.

20.
In this paper, we use a smoothed empirical likelihood method to investigate the difference of quantiles under censorship. An empirical log-likelihood ratio is derived and its asymptotic distribution is shown to be chi-squared. Approximate confidence regions based on this method are constructed. Simulation studies are used to compare the empirical likelihood and the normal approximation method in terms of coverage accuracy. It is found that the empirical likelihood method provides much better performance. The research is supported by NSFC (10231030) and RFDP.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号