Similar articles
20 similar records found (search time: 78 ms)
1.
We consider a regression of y on x given by a pair of mean and variance functions with a parameter vector θ to be estimated that also appears in the distribution of the regressor variable x. The estimation of θ is based on an extended quasi-score (QS) function. We show that the QS estimator is optimal within a wide class of estimators based on linear-in-y unbiased estimating functions. Of special interest is the case where the distribution of x depends only on a subvector α of θ, which may be considered a nuisance parameter. In general, α must be estimated simultaneously with the rest of θ, but there are cases where α can be pre-estimated. A major application of this model is the classical measurement error model, where the corrected score (CS) estimator is an alternative to the QS estimator. We derive conditions under which the QS estimator is strictly more efficient than the CS estimator.

2.
We consider a linear regression model with regression parameter β = (β1, …, βp) and independent and identically N(0, σ²) distributed errors. Suppose that the parameter of interest is θ = aᵀβ, where a is a specified vector. Define the parameter τ = cᵀβ − t, where the vector c and the number t are specified and a and c are linearly independent. Also suppose that we have uncertain prior information that τ = 0. We present a new frequentist 1 − α confidence interval for θ that utilizes this prior information. We require this confidence interval to (a) have endpoints that are continuous functions of the data and (b) coincide with the standard 1 − α confidence interval when the data strongly contradict the prior information. This interval is optimal in the sense that it has minimum weighted average expected length, where the largest weight is given to the expected length when τ = 0. This minimization leads to an interval with the following desirable properties: its expected length (a) is relatively small when the prior information about τ is correct and (b) has a maximum value that is not too large. The following problem illustrates the application of this new confidence interval. Consider a 2×2 factorial experiment with 20 replicates. Suppose that the parameter of interest θ is a specified simple effect and that we have uncertain prior information that the two-factor interaction is zero. Our aim is to find a frequentist 0.95 confidence interval for θ that utilizes this prior information.
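A minimal sketch of the standard 1 − α confidence interval for θ = aᵀβ, i.e. the interval the proposed procedure reverts to when the data strongly contradict the prior information; the new interval itself is not reproduced. The 2×2 design, the simulated response, and the contrast vector a below are illustrative assumptions.

import numpy as np
from scipy import stats

def standard_ci(X, y, a, alpha=0.05):
    """Standard 1 - alpha confidence interval for theta = a'beta in the
    normal linear model y = X beta + error, error ~ N(0, sigma^2 I)."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    resid = y - X @ beta_hat
    s2 = resid @ resid / (n - p)               # unbiased estimate of sigma^2
    theta_hat = a @ beta_hat
    se = np.sqrt(s2 * a @ XtX_inv @ a)         # standard error of a'beta_hat
    tcrit = stats.t.ppf(1 - alpha / 2, df=n - p)
    return theta_hat - tcrit * se, theta_hat + tcrit * se

# Illustrative 2x2 factorial with 20 replicates, factors coded +/-1.
rng = np.random.default_rng(0)
A = np.tile([-1, -1, 1, 1], 20)
B = np.tile([-1, 1, -1, 1], 20)
X = np.column_stack([np.ones(80), A, B, A * B])
y = 1.0 + 0.5 * A + rng.normal(size=80)
a = np.array([0.0, 1.0, 0.0, 1.0])   # contrast beta_A + beta_AB, proportional
                                     # to the simple effect of A at B = +1
print(standard_ci(X, y, a))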

3.
This article considers sample size determination methods based on Bayesian credible intervals for θ, an unknown real-valued parameter of interest. We consider clinical trials and assume that θ represents the difference in the effects of a new and a standard therapy. In this context, it is typical to identify an interval of parameter values that imply equivalence of the two treatments (range of equivalence). Then, an experiment designed to show superiority of the new treatment is successful if it yields evidence that θ is sufficiently large, i.e. if an interval estimate of θ lies entirely above the superior limit of the range of equivalence. Following a robust Bayesian approach, we model uncertainty in the prior specification with a class Γ of distributions for θ, and we say that the data yield robust evidence if, as the prior varies over Γ, the lower bound of the credible interval remains sufficiently large. The sample size criteria in the article consist of selecting the minimal number of observations such that the experiment is likely to yield robust evidence. These criteria are based on summaries of the predictive distributions of the random lower bounds of the credible intervals. The method is developed for the conjugate normal model and applied to a trial for surgery of gastric cancer.
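A Monte Carlo sketch of the sample size idea for the conjugate normal model with known σ, using a single prior as a stand-in for the class Γ (so it illustrates the predictive criterion, not the robust version); the prior settings, credibility level, and superiority limit below are illustrative assumptions.

import numpy as np
from scipy import stats

def prob_successful_trial(n, delta_sup, sigma=1.0, mu0=1.0, n0=1.0,
                          cred=0.95, n_sim=20000, seed=1):
    """Predictive probability that the one-sided 'cred' credible lower bound
    for theta exceeds delta_sup, in the conjugate normal model with known
    sigma and prior theta ~ N(mu0, sigma^2 / n0)."""
    rng = np.random.default_rng(seed)
    z = stats.norm.ppf(cred)
    theta = rng.normal(mu0, sigma / np.sqrt(n0), size=n_sim)   # theta from the prior
    ybar = rng.normal(theta, sigma / np.sqrt(n))               # then the sample mean
    post_mean = (n0 * mu0 + n * ybar) / (n0 + n)
    post_sd = sigma / np.sqrt(n0 + n)
    return np.mean(post_mean - z * post_sd > delta_sup)

# The sample size criterion picks the smallest n whose predictive
# probability of yielding such evidence exceeds a chosen target.
for n in (10, 25, 50, 100, 200):
    print(n, prob_successful_trial(n, delta_sup=0.0))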

4.
In common with other non-linear models, the optimal design for a limiting dilution assay (LDA) depends on the value of the unknown parameter, θ, in the model. Consequently, optimal designs cannot be specified unless some assumptions are made about the possible values of θ. If a prior distribution can be specified, then a Bayesian approach can be adopted. A proper specification of the Bayesian approach requires the aim of the experiment to be described and quantified through an appropriate utility function. This paper addresses the problem of finding optimal designs for LDAs when the aim is to determine whether θ is above or below a specified threshold, θ0.
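The abstract does not state the model, so as a hedged illustration assume the standard single-hit Poisson LDA model, under which a well seeded with d cells is negative with probability exp(−θd); the sketch below fits θ by maximum likelihood and frames the threshold question θ versus θ0. The dilution series, well counts, and θ0 are made up for illustration.

import numpy as np
from scipy.optimize import minimize_scalar

def lda_loglik(theta, doses, n_wells, n_negative):
    """Log-likelihood of the single-hit Poisson LDA model:
    P(well at dose d is negative) = exp(-theta * d)."""
    p_neg = np.exp(-theta * doses)
    return np.sum(n_negative * np.log(p_neg)
                  + (n_wells - n_negative) * np.log1p(-p_neg))

# Illustrative serial-dilution design with 24 wells per dose.
doses = np.array([1000.0, 3000.0, 10000.0, 30000.0])
n_wells = np.full(4, 24)
n_negative = np.array([20, 14, 4, 0])
fit = minimize_scalar(lambda t: -lda_loglik(t, doses, n_wells, n_negative),
                      bounds=(1e-7, 1e-2), method="bounded")
theta_hat = fit.x
theta0 = 1 / 5000                      # hypothetical threshold of interest
print("MLE of theta:", theta_hat, "above threshold:", theta_hat > theta0)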

5.
We determine a credible set A that is the “best” with respect to variation of the prior distribution in a neighborhood Γ of the starting prior π0(θ). Among the class of sets with credibility γ under π0, the “optimally robust” set will be the one which maximizes the minimum probability of including θ as the prior varies over Γ. This procedure is also Γ-minimax with respect to the risk function, the probability of non-inclusion. We find the optimally robust credible set for three neighborhood classes Γ: the ε-contamination class, the density ratio class and the density bounded class. A consequence of this investigation is that the maximum likelihood set is seen to be an optimal credible set from a robustness perspective.

6.
In this paper, a k-step-stress accelerated life-test with an equal step duration τ is considered. For small to moderate sample sizes, a practical modification is made to the model previously considered by Gouno et al. [2004. Optimal step-stress test under progressive Type-I censoring. IEEE Trans. Reliability 53, 383–393] in order to guarantee a feasible k-step-stress test under progressive Type-I censoring, and the optimal τ is determined under this model. Next, we discuss the determination of the optimal τ under the condition that the step-stress test proceeds to the k-th stress level, and the efficiency of this conditional inference is compared to that of the previous case. In all cases considered, censoring is allowed at each point of stress change (viz., iτ, i = 1, 2, …, k). The determination of the optimal τ is discussed under the C-optimality, D-optimality, and A-optimality criteria. We investigate in detail the case of progressively Type-I right censored data from an exponential distribution with a single stress variable.

7.
We consider the problem of estimating the scale parameter θ of the shifted exponential distribution with unknown location, based on a Type II progressively censored sample. Under a large class of bowl-shaped loss functions, a smooth estimator that dominates the minimum risk equivariant (MRE) estimator of θ is proposed. A numerical study is performed and shows that the improved estimator yields significant risk reduction over the MRE estimator.
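For orientation only, a sketch of the classical equivariant baseline in the complete-sample (uncensored) case under squared-error loss: the minimum risk equivariant estimator of the scale θ is T/n with T = Σ(Xi − X(1)). The paper's improved estimator for progressively Type II censored data and general bowl-shaped losses is not reproduced here; the simulation settings are illustrative.

import numpy as np

def mre_scale_shifted_exp(x):
    """MRE estimator (squared-error loss, complete sample) of the scale of a
    shifted exponential: sum_i (x_i - x_(1)) / n."""
    x = np.asarray(x, dtype=float)
    return np.sum(x - x.min()) / x.size

# Quick empirical risk check at theta = 2, location mu = 5, n = 15.
rng = np.random.default_rng(3)
theta, mu, n = 2.0, 5.0, 15
est = np.array([mre_scale_shifted_exp(mu + rng.exponential(theta, n))
                for _ in range(50000)])
print("empirical risk E(theta_hat - theta)^2:", np.mean((est - theta) ** 2))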

8.
In this paper, we study a random field Uε(t,x) governed by a certain class of stochastic partial differential equations with an unknown parameter θ and a small noise ε. We construct an estimator of θ based on continuous observation of N Fourier coefficients of Uε(t,x), and prove the strong convergence and asymptotic normality of the estimator as the noise ε tends to zero.

9.
Suppose all events occurring in an unknown number ν of iid renewal processes, with a common renewal distribution F, are observed for a fixed time τ, where both ν and F are unknown. The individual processes are not known a priori, but for each event, the process that generated it is identified. For example, in a software reliability application, the errors (or bugs) in a piece of software are not known a priori, but whenever the software fails, the error causing the failure is identified. We present a nonparametric method for estimating ν and investigate its properties. Our results show that the proposed estimator performs well in terms of bias and asymptotic normality, while the MLE of ν derived under the assumption that the common renewal distribution is exponential may be seriously biased if that assumption does not hold.

10.
This paper discusses a new perspective on fitting spatial point process models. Specifically, the spatial point process of interest is treated as a marked point process in which, at each observed event x, a stochastic process M(x;t), 0 < t < r, is defined. Each mark process M(x;t) is compared with its expected value, say F(t;θ), to produce a discrepancy measure at x, where θ is a set of unknown parameters. All individual discrepancy measures are combined into an overall measure, which is then minimized to estimate the unknown parameters. The proposed approach can be easily applied to data with sample sizes commonly encountered in practice. Simulations and an application to a real data example demonstrate the efficacy of the proposed approach.

11.
We consider methods for reducing the effect of fitting nuisance parameters on a general estimating function, when the estimating function depends not only on a vector of parameters of interest, θ, but also on a vector of nuisance parameters, λ. We propose a class of modified profile estimating functions with plug-in bias reduced by two orders. A robust version of the adjustment term does not require any information about the probability mechanism beyond that required by the original estimating function. An important application of this method is bias correction for the generalized estimating equation in the analysis of stratified longitudinal data, where the stratum-specific intercepts are treated as fixed nuisance parameters, the dependence of the expected outcome on the covariates is of interest, and the intracluster correlation structure is unknown. Furthermore, when the quasi-scores for θ and λ are available, we propose an additional multiplicative adjustment term such that the modified profile estimating function is approximately information unbiased. This multiplicative adjustment term can serve as an optimal weight in the analysis of stratified studies. A brief simulation study shows that the proposed method considerably reduces the impact of the nuisance parameters.

12.
A survey of research by Emanuel Parzen on how quantile functions provide elegant and applicable formulas that unify many statistical methods, especially frequentist and Bayesian confidence intervals and prediction distributions. Section 0: In honor of Ted Anderson's 90th birthday; Section 1: Quantile functions, endpoints of prediction intervals; Section 2: Extreme value limit distributions; Sections 3, 4: Confidence and prediction endpoint function: Uniform(0,θ), exponential; Sections 5, 6: Confidence quantile and Bayesian inference for normal parameters μ, σ; Section 7: Two independent samples confidence quantiles; Section 8: Confidence quantiles for proportions, Wilson's formula. We propose ways that Bayesians and frequentists can be friends!
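A minimal sketch of Wilson's formula from Section 8, the score confidence interval for a binomial proportion; the counts and confidence level below are illustrative.

import numpy as np
from scipy import stats

def wilson_interval(successes, n, conf=0.95):
    """Wilson score confidence interval for a binomial proportion."""
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    p_hat = successes / n
    centre = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z / (1 + z**2 / n)) * np.sqrt(p_hat * (1 - p_hat) / n
                                          + z**2 / (4 * n**2))
    return centre - half, centre + half

print(wilson_interval(successes=12, n=40))   # e.g. 12 successes in 40 trials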

13.
The generalized order-restricted information criterion (goric) is a model selection criterion which, up to now, could be applied solely to analysis of variance models and could only evaluate restrictions of the form Rθ ≤ 0, where θ is a vector of k group means and R a cm×k matrix. In this paper, we generalize the goric in two ways: (i) so that it can be applied to t-variate normal linear models and (ii) so that it can evaluate a more general form of order restrictions, Rθ ≤ r, where θ is a vector of length tk, r a vector of length cm, and R a cm×tk matrix of full rank (when r ≠ 0). Finally, we illustrate that the goric is easy to implement in a multivariate regression model.

14.
We consider the problem of estimating the mean θ of an Np(θ, Ip) distribution with squared error loss ∥δ − θ∥² and under the constraint ∥θ∥ ≤ m, for some constant m > 0. Using Stein's identity to obtain unbiased estimates of risk, Karlin's sign change arguments, and conditional risk analysis, we compare the risk performance of truncated linear estimators with that of the maximum likelihood estimator δmle. We obtain, for fixed (m, p), sufficient conditions for dominance. An asymptotic framework is developed, in which we demonstrate that the truncated linear minimax estimator dominates δmle, and in which we obtain simple and accurate measures of relative improvement in risk. Numerical evaluations illustrate the effectiveness of the asymptotic framework for approximating the risks for moderate or large values of p.
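A small Monte Carlo sketch, not the paper's risk analysis: the constrained MLE is the projection of X onto the ball {∥θ∥ ≤ m}, and the linear rule (m²/(m²+p))X (minimax among linear estimators over the ball) stands in here for a truncated linear estimator; the values of p, m and the boundary point θ are illustrative.

import numpy as np

def risk(estimator, theta, n_sim=200000, seed=7):
    """Monte Carlo risk E||delta(X) - theta||^2 for X ~ N_p(theta, I_p)."""
    rng = np.random.default_rng(seed)
    X = theta + rng.standard_normal((n_sim, theta.size))
    return np.mean(np.sum((estimator(X) - theta) ** 2, axis=1))

def mle_ball(m):
    # Constrained MLE: project X onto the ball of radius m.
    def delta(X):
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        return X * np.minimum(1.0, m / norms)
    return delta

def linear_shrink(m, p):
    # Linear rule c*X with c = m^2/(m^2 + p).
    c = m**2 / (m**2 + p)
    return lambda X: c * X

p, m = 20, 3.0
theta = np.r_[m, np.zeros(p - 1)]      # a boundary point with ||theta|| = m
print("risk of projection MLE  :", risk(mle_ball(m), theta))
print("risk of linear shrinkage:", risk(linear_shrink(m, p), theta))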

15.
For a loss distribution belonging to a location–scale family Fμ,σ, the risk measures Value-at-Risk and Expected Shortfall are linear functions of the parameters: μ + τσ, where τ is the corresponding risk measure of the mean-zero, unit-variance member of the family. For each risk measure, we consider a natural estimator obtained by replacing the unknown parameters μ and σ by the sample mean and (bias-corrected) sample standard deviation, respectively. Large-sample parametric confidence intervals for the risk measures are derived, relying on the asymptotic joint distribution of the sample mean and sample standard deviation. Simulation studies with the Normal, Laplace and Gumbel families illustrate that the derived asymptotic confidence intervals for Value-at-Risk and Expected Shortfall outperform those of Bahadur (1966) and Brazauskas et al. (2008), respectively. The method can also be applied effectively to log-location–scale families whose supports are the positive reals; an illustrative example is given in the area of financial credit risk.
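A sketch of the plug-in estimators μ̂ + τσ̂ for the Normal family, where τ is z_α for Value-at-Risk and φ(z_α)/(1−α) for Expected Shortfall, together with simple delta-method intervals using the Normal-family asymptotics var(x̄) ≈ σ²/n and var(s) ≈ σ²/(2n); this is an illustration in the spirit of the paper, not its general derivation, and the simulated loss sample is made up.

import numpy as np
from scipy import stats

def normal_var_es(x, alpha=0.99, conf=0.95):
    """Plug-in VaR and ES (mu_hat + tau * s) for Normal losses, with
    delta-method confidence intervals."""
    x = np.asarray(x, dtype=float)
    n = x.size
    mu_hat, s = x.mean(), x.std(ddof=1)
    z = stats.norm.ppf(alpha)
    tau = {"VaR": z, "ES": stats.norm.pdf(z) / (1 - alpha)}
    zc = stats.norm.ppf(1 - (1 - conf) / 2)
    out = {}
    for name, t in tau.items():
        est = mu_hat + t * s
        se = s * np.sqrt(1.0 / n + t**2 / (2.0 * n))   # Normal-family asymptotics
        out[name] = (est, est - zc * se, est + zc * se)
    return out

rng = np.random.default_rng(11)
losses = rng.normal(loc=10.0, scale=4.0, size=500)     # illustrative losses
for name, (est, lo, hi) in normal_var_es(losses).items():
    print(f"{name}: {est:.2f}  [{lo:.2f}, {hi:.2f}]")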

16.
We develop and study, in the framework of Pareto-type distributions, a general class of kernel estimators for the second-order parameter ρ, a parameter related to the rate of convergence of a sequence of linearly normalized maximum values towards its limit. Inspired by the kernel goodness-of-fit statistics introduced in Goegebeur et al. (2008), for which the mean of the normal limiting distribution is a function of ρ, we construct estimators for ρ using ratios of ratios of differences of such goodness-of-fit statistics, involving different kernel functions as well as power transformations. The consistency of this class of ρ estimators is established under some mild regularity conditions on the kernel function, a second-order condition on the tail function 1 − F of the underlying model, and suitably chosen intermediate order statistics. Asymptotic normality is achieved under a further condition on the tail function, the so-called third-order condition. Two specific examples of kernel statistics are studied in greater depth, and their asymptotic behavior is illustrated numerically. The finite-sample properties are examined by means of a simulation study.

17.
A representation of the transient probability functions of finite birth–death processes (with or without catastrophes) as a linear combination of exponential functions is derived using a recursive, Cayley–Hamilton approach. This method of solution allows practitioners to solve for these transient probability functions by reducing the problem to three calculations: determining the eigenvalues of the Q-matrix, raising the Q-matrix to an integer power, and solving a system of linear equations. The approach avoids Laplace transforms and permits solution for a particular transition probability function from state i to state j without determining all such functions.
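Not the paper's recursive Cayley–Hamilton scheme, but a numerical sketch of the same representation: for a finite birth–death generator Q, the transition function P_ij(t) is a linear combination of exponentials of the eigenvalues of Q; the birth and death rates below are illustrative.

import numpy as np

def bd_generator(birth, death):
    """Q-matrix of a finite birth-death process with birth rates birth[i]
    (i -> i+1) and death rates death[i] (i+1 -> i)."""
    n = len(birth) + 1
    Q = np.zeros((n, n))
    for i in range(n - 1):
        Q[i, i + 1] = birth[i]
        Q[i + 1, i] = death[i]
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

def transition_prob(Q, i, j, t):
    """P_ij(t) from the eigendecomposition Q = V diag(lam) V^{-1}, so that
    exp(Qt)_{ij} = sum_k V[i,k] * exp(lam_k * t) * Vinv[k,j]."""
    lam, V = np.linalg.eig(Q)
    Vinv = np.linalg.inv(V)
    coeffs = V[i, :] * Vinv[:, j]        # weights of the exponential terms
    return float(np.real(np.sum(coeffs * np.exp(lam * t))))

Q = bd_generator(birth=[1.5, 1.0, 0.5], death=[0.7, 0.9, 1.1])   # states 0..3
print(transition_prob(Q, i=0, j=2, t=1.3))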

18.
When studying the right tail of a distribution, one can classify distributions into three classes based on the extreme value index γ. The class γ > 0 corresponds to Pareto-type or heavy-tailed distributions, while γ < 0 indicates that the underlying distribution has a finite endpoint. The Weibull-type distributions form an important subgroup within the Gumbel class, where γ = 0. The tail behaviour can then be specified using the Weibull tail index. Classical estimators of this index show severe bias. In this paper we present a new estimation approach based on the mean excess function, which exhibits improved bias and mean squared error. The asserted properties are supported by simulation experiments and asymptotic results. Illustrations with real-life data sets are provided.
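A sketch of the empirical mean excess function that the proposed approach builds on, e_n(u) = average of (X_i − u) over the observations exceeding u; the new Weibull tail index estimator itself is not reproduced, and the Weibull sample and thresholds are illustrative.

import numpy as np

def mean_excess(x, thresholds):
    """Empirical mean excess function e_n(u) = mean(x_i - u | x_i > u)."""
    x = np.asarray(x, dtype=float)
    return np.array([np.mean(x[x > u] - u) if np.any(x > u) else np.nan
                     for u in thresholds])

rng = np.random.default_rng(5)
x = rng.weibull(0.8, size=2000)        # a Weibull-type tail (Gumbel class)
u = np.quantile(x, [0.5, 0.75, 0.9, 0.95, 0.99])
for ui, ei in zip(u, mean_excess(x, u)):
    print(f"threshold {ui:.3f}: mean excess {ei:.3f}")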

19.
20.
Consider the model where there are I independent multivariate normal treatment populations with p×1 mean vectors μi, i = 1, …, I, and covariance matrix Σ. Independently, the (I+1)st population corresponds to a control; it too is multivariate normal, with mean vector μI+1 and covariance matrix Σ. Now consider the following two multiple testing problems.
