Similar literature (20 results)
1.
In this paper, we study the estimation of unbalanced panel data partially linear models with a one-way error components structure. A weighted semiparametric least squares estimator (WSLSE) is developed using polynomial spline approximation and least squares. We show that the WSLSE is asymptotically more efficient than the corresponding unweighted estimator for both the parametric and nonparametric components of the model. This is a significant improvement over previous results in the literature, which showed that the simple weighting technique can only improve estimation of the parametric component. The asymptotic normality of the proposed WSLSE is also established.
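For intuition, here is a minimal sketch of spline-based estimation for a partially linear model, not the authors' exact WSLSE: the nonparametric component is approximated by a cubic truncated-power spline basis and the whole model is fitted by weighted least squares, with placeholder identity weights where the paper would plug in estimated error-component variances. The data-generating model, knots, and weights below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's exact WSLSE): approximate the nonparametric
# component g(z) of y = x*beta + g(z) + e by a cubic truncated-power spline
# basis and estimate everything by (weighted) least squares.
rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=(n, 1))
z = rng.uniform(0.0, 1.0, size=n)
y = x[:, 0] * 1.5 + np.sin(2 * np.pi * z) + rng.normal(scale=0.5, size=n)

# Cubic truncated-power spline basis for g(z) with a few interior knots
knots = np.linspace(0.1, 0.9, 5)
spline_basis = np.column_stack(
    [np.ones(n), z, z**2, z**3] +
    [np.clip(z - k, 0.0, None) ** 3 for k in knots]
)

# Weighted least squares on the combined design [x, basis]; the weights would
# be built from estimated error-component variances (identity used here).
design = np.column_stack([x, spline_basis])
weights = np.ones(n)                       # placeholder for estimated weights
WD = design * weights[:, None]
coef = np.linalg.solve(design.T @ WD, WD.T @ y)
print("estimated beta:", coef[0])          # should be close to 1.5
```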

2.
This paper presents a new Laplacian approximation to the posterior density of η = g(θ). It has a simpler analytical form than that described by Leonard et al. (1989). The approximation derived by Leonard et al. requires a conditional information matrix Rη to be positive definite for every fixed η; in many cases, however, not all Rη are positive definite, and the computation of their approximation fails because it cannot be normalized. The new approximation may be modified so that the corresponding conditional information matrix is positive definite for every fixed η. In addition, a Bayesian procedure for contingency-table model checking is provided. An example of the cross-classification between the educational level of a wife and the fertility-planning status of couples is used as an illustration. Various Laplacian approximations are computed and compared in this example and in an example of public school expenditures in the context of a Bayesian analysis of the multiparameter Fisher-Behrens problem.
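A hedged sketch of the general idea, on a toy model not taken from the paper: for each fixed η the nuisance parameter is profiled out with a Gaussian (Laplace) approximation, the conditional information entering as a curvature correction that must be positive. Here η is a log odds ratio from two binomial counts with flat priors on the logits; all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hedged sketch of a Laplace approximation to the posterior density of
# eta = g(theta): for each fixed eta, the nuisance parameter is profiled out
# with a Gaussian (Laplace) approximation.  Toy model (not from the paper):
# two binomial counts, eta = log odds ratio, flat priors on the logits.
y1, n1, y2, n2 = 18, 30, 9, 30

def logpost(eta, lam):
    a, b = eta + lam, lam                          # logits of p1 and p2
    return (y1 * a - n1 * np.logaddexp(0.0, a) +
            y2 * b - n2 * np.logaddexp(0.0, b))

def neg_curvature(eta, lam, h=1e-4):
    # numerical second derivative in lam ("conditional information" up to sign)
    return -(logpost(eta, lam + h) - 2 * logpost(eta, lam) + logpost(eta, lam - h)) / h**2

grid = np.linspace(-1.5, 3.5, 201)
log_dens = np.empty(grid.size)
for i, eta in enumerate(grid):
    lam_hat = minimize_scalar(lambda lam: -logpost(eta, lam)).x
    info = neg_curvature(eta, lam_hat)             # must be > 0 for the approximation to exist
    log_dens[i] = logpost(eta, lam_hat) - 0.5 * np.log(info)

dens = np.exp(log_dens - log_dens.max())
dens /= np.trapz(dens, grid)                       # normalize on the grid
print("approximate posterior mode of eta:", grid[np.argmax(dens)])
```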

3.
In this paper, we use a smoothed empirical likelihood method to investigate the difference of quantiles under censorship. An empirical log-likelihood ratio is derived and its asymptotic distribution is shown to be chi-squared. Approximate confidence regions based on this method are constructed. Simulation studies are used to compare the empirical likelihood and the normal approximation method in terms of coverage accuracy, and the empirical likelihood method is found to perform much better. The research is supported by NSFC (10231030) and RFDP.

4.
In this paper, we consider simple random sampling without replacement from a dichotomous finite population. We investigate the accuracy of the Normal approximation to the Hypergeometric probabilities for a wide range of parameter values, including the nonstandard cases where the sampling fraction tends to one and where the proportion of the objects of interest in the population tends to the boundary values, zero and one. We establish a non-uniform Berry–Esseen theorem for the Hypergeometric distribution which shows that in the nonstandard cases, the rate of Normal approximation to the Hypergeometric distribution can be considerably slower than the rate of Normal approximation to the Binomial distribution. We also report results from a moderately large numerical study and provide some guidelines for using the Normal approximation to the Hypergeometric distribution in finite samples.
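The quality of this approximation is easy to probe numerically. The sketch below compares the exact Hypergeometric CDF with a continuity-corrected Normal CDF over the whole support; the parameter values, including a large-sampling-fraction case and a rare-objects case, are illustrative choices, not the paper's study design.

```python
import numpy as np
from scipy.stats import hypergeom, norm

# Quick numerical check of the Normal approximation to the Hypergeometric
# distribution, including "nonstandard" parameter configurations.
def max_cdf_error(N, K, n):
    """Sup-distance between the Hypergeometric(N, K, n) CDF and a
    continuity-corrected Normal CDF with matching mean and variance."""
    k = np.arange(max(0, n + K - N), min(n, K) + 1)
    mean = n * K / N
    var = n * (K / N) * (1 - K / N) * (N - n) / (N - 1)
    exact = hypergeom.cdf(k, N, K, n)          # scipy order: (k, M, n, N)
    approx = norm.cdf(k + 0.5, loc=mean, scale=np.sqrt(var))
    return np.max(np.abs(exact - approx))

print("standard case   N=1000, K=400, n=50 :", max_cdf_error(1000, 400, 50))
print("large fraction  N=1000, K=400, n=950:", max_cdf_error(1000, 400, 950))
print("rare objects    N=1000, K=10,  n=50 :", max_cdf_error(1000, 10, 50))
```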

5.
The POT (peaks-over-threshold) approach consists in using the generalized Pareto distribution (GPD) to approximate the distribution of excesses over a threshold. In this paper, we consider this approximation using a generalized probability-weighted moments (GPWM) method. We study the asymptotic behaviour of our new estimators and the functional bias of the GPD as an estimate of the distribution function of the excesses. A simulation study is provided to assess the efficiency of our approach.
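For illustration, here is a sketch of the POT step using the classical Hosking–Wallis probability-weighted-moment formulas; the paper's generalized PWM method is a weighted variant of these, so this is only the baseline idea. The heavy-tailed data, threshold choice, and the comparison with maximum likelihood are illustrative assumptions.

```python
import numpy as np
from scipy.stats import genpareto

# Sketch of POT with classical PWM estimates for the GPD fitted to excesses
# over a threshold (the paper's *generalized* PWM is a weighted refinement).
rng = np.random.default_rng(1)
x = rng.standard_t(df=4, size=5000)           # heavy-tailed sample (true xi = 0.25)
u = np.quantile(x, 0.95)                      # threshold
y = np.sort(x[x > u] - u)                     # excesses over the threshold
n = y.size

# Probability-weighted moments a0 = E[Y], a1 = E[Y (1 - F(Y))]
p = (np.arange(1, n + 1) - 0.35) / n          # plotting positions
a0 = y.mean()
a1 = np.mean((1 - p) * y)

sigma_hat = 2 * a0 * a1 / (a0 - 2 * a1)       # GPD scale
xi_hat = 2 - a0 / (a0 - 2 * a1)               # GPD shape
print("PWM fit:  xi = %.3f, sigma = %.3f" % (xi_hat, sigma_hat))

# Compare with maximum likelihood from scipy (location fixed at zero)
xi_ml, _, sigma_ml = genpareto.fit(y, floc=0)
print("ML fit:   xi = %.3f, sigma = %.3f" % (xi_ml, sigma_ml))
```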

6.
Parametric mixture models are commonly used in the analysis of clustered data. Parametric families are specified for the conditional distribution of the response variable given a cluster-specific effect, and for the marginal distribution of the cluster-specific effects. This latter distribution is referred to as the mixing distribution. If the form of the mixing distribution is misspecified, then Bayesian and maximum-likelihood estimators of parameters associated with either distribution may be inconsistent. The magnitude of the asymptotic bias is investigated, using an approximation based on infinitesimal contamination of the mixing distribution. The approximation is useful when there is a closed-form expression for the marginal distribution of the response under the assumed mixing distribution, but not under the true mixing distribution. Typically this occurs when the assumed mixing distribution is conjugate, meaning that the conditional distribution of the cluster-specific parameter given the response variable belongs to the same parametric family as the mixing distribution.

7.
The estimation problem of a permutation parameter on the basis of a random sample of increasing size is considered. A necessary and sufficient condition is derived for the existence of an estimator that is asymptotically fully efficient for two different distribution families. We also study the application of this result to cyclic groups of order two and three.

8.
The number of components is an important feature of finite mixture models. Because of the irregularity of the parameter space, the log-likelihood-ratio statistic does not have a chi-square limit distribution. It is very difficult to find a test with a specified significance level, and this is especially true for testing k - 1 versus k components. Most of the existing work has concentrated on finding a comparable approximation to the limit distribution of the log-likelihood-ratio statistic. In this paper, we use a statistic similar to the usual log likelihood ratio, but with a null distribution that is asymptotically normal. A simulation study indicates that the method has good power at detecting extra components. We also discuss how to improve the power of the test, and some simulations are performed.

9.
In this paper, we propose a partially A-optimal criterion for block designs where multiple factors are arranged. The number of levels of each factor is assumed to be arbitrary and unequal block sizes are allowed. A sufficient condition is derived for a design to be partially A-optimal among all feasible designs. Then the properties of the selected design and its relation with orthogonal arrays are studied. Methods of constructing designs satisfying the sufficient condition are also given.

10.
It is shown that Strawderman's [1974. Minimax estimation of powers of the variance of a normal population under squared error loss. Ann. Statist. 2, 190–198] technique for estimating the variance of a normal distribution can be extended to estimating a general scale parameter in the presence of a nuisance parameter. Employing standard monotone likelihood ratio-type conditions, a new class of improved estimators for this scale parameter is derived under quadratic loss. By imposing an additional condition, a broader class of improved estimators is obtained. The dominating procedures are analogous in form to those in Strawderman [1974]. Application of the general results to the exponential distribution yields new sufficient conditions, other than those of Brewster and Zidek [1974. Improving on equivariant estimators. Ann. Statist. 2, 21–38] and Kubokawa [1994. A unified approach to improving equivariant estimators. Ann. Statist. 22, 290–299], for improving the best affine equivariant estimator of the scale parameter. A class of estimators satisfying the new conditions is constructed. The results shed new light on Strawderman's [1974] technique.

11.
We consider simulation-based methods for exploration and maximization of expected utility in sequential decision problems. We consider problems which require backward induction with analytically intractable expected utility integrals at each stage. We propose to use forward simulation to approximate the integral expressions, and a reduction of the allowable action space to avoid problems related to an increasing number of possible trajectories in the backward induction. The artificially reduced action space allows strategies to depend on the full history of earlier observations and decisions only indirectly through a low dimensional summary statistic. The proposed rule provides a finite-dimensional approximation to the unrestricted infinite-dimensional optimal decision rule. We illustrate the proposed approach with an application to an optimal stopping problem in a clinical trial.
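A toy sketch of the two main ingredients, using a deliberately simple stopping problem rather than the clinical-trial application of the paper: backward induction runs over a low-dimensional summary statistic, and the expected utility of continuing is approximated by forward simulation from the posterior predictive. The reward structure, cost, and horizon below are assumptions of this sketch.

```python
import numpy as np
from scipy.stats import beta

# Toy sketch of simulation-based backward induction over a low-dimensional
# summary statistic, here (t, s) = (#observations, #successes) for a Bernoulli
# parameter p with a uniform prior.  Stopping reward: posterior probability of
# a correct guess about whether p > 1/2; each further observation costs c.
rng = np.random.default_rng(2)
T, c, n_mc = 20, 0.005, 2000

def stop_utility(t, s):
    # posterior is Beta(s + 1, t - s + 1)
    pr_below = beta.cdf(0.5, s + 1, t - s + 1)
    return max(pr_below, 1 - pr_below)

value = np.zeros((T + 1, T + 1))                 # value[t, s]
for s in range(T + 1):
    value[T, s] = stop_utility(T, s)
for t in range(T - 1, -1, -1):
    for s in range(t + 1):
        # forward simulation: draw the next observation from the posterior predictive
        p_draw = rng.beta(s + 1, t - s + 1, size=n_mc)
        y_next = rng.random(n_mc) < p_draw
        cont = np.mean(np.where(y_next, value[t + 1, s + 1], value[t + 1, s])) - c
        value[t, s] = max(stop_utility(t, s), cont)

print("expected utility of the sequential procedure at t = 0:", value[0, 0])
```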

12.
A stochastic approximation procedure of the Robbins-Monro type is considered. The original idea behind the Newton-Raphson method is used as follows. Given n approximations X_1, …, X_n with observations Y_1, …, Y_n, a least squares line is fitted to the points (X_m, Y_m), …, (X_n, Y_n), where m < n may depend on n. The (n+1)st approximation is taken to be the intersection of the least squares line with y = 0. A variation of the resulting process is studied. It is shown that this process yields a strongly consistent sequence of estimates which is asymptotically normal with minimal asymptotic variance.
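A short simulation of the recursion described above; the regression function, noise level, window length, and the damping safeguard are illustrative choices of this sketch, not part of the original scheme.

```python
import numpy as np

# Sketch of the least-squares variant of Robbins-Monro: fit a line to the most
# recent (X_i, Y_i) pairs and take the next iterate as the root of that line.
rng = np.random.default_rng(3)

def noisy_response(x):
    # M(x) = 2 (x - 1); we seek the root theta = 1, observed with noise
    return 2.0 * (x - 1.0) + rng.normal(scale=1.0)

n_steps, window = 200, 25
xs = [0.0, 4.0]                           # two starting approximations
ys = [noisy_response(x) for x in xs]
for n in range(2, n_steps):
    lo = max(0, n - window)               # m < n may depend on n: last `window` points
    slope, intercept = np.polyfit(xs[lo:], ys[lo:], deg=1)
    if abs(slope) < 1e-3:                 # guard against a nearly flat fitted line
        x_next = xs[-1] - ys[-1] / (n + 1)
    else:
        x_next = -intercept / slope       # intersection of the LS line with y = 0
    x_next = float(np.clip(x_next, xs[-1] - 1.0, xs[-1] + 1.0))   # damp wild steps
    xs.append(x_next)
    ys.append(noisy_response(x_next))

print("final approximation:", xs[-1])      # should be close to the root at 1.0
```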

13.
A harmonic new better than used in expectation (HNBUE) variable is a random variable which is dominated by an exponential distribution in the convex stochastic order. We use a recently obtained condition on stochastic equality under convex domination to derive characterizations of the exponential distribution and bounds for HNBUE variables based on the mean values of the order statistics of the variable. We apply the results to generate discrepancy measures for testing whether a random variable is exponential against the alternative that it is HNBUE but not exponential.

14.
In this article the author investigates empirical-likelihood-based inference for the parameters of a varying-coefficient single-index model (VCSIM). Unlike in the usual cases, without bias correction the asymptotic distribution of the empirical likelihood ratio does not attain the standard chi-squared distribution. To this end, a bias-corrected empirical likelihood method is employed to construct confidence regions (intervals) for the regression parameters. Compared with regions based on the normal approximation, these have two advantages: (1) they do not impose prior constraints on the shape of the regions; (2) they do not require the construction of a pivotal quantity, and the regions are range preserving and transformation respecting. A simulation study is undertaken to compare the empirical likelihood with the normal approximation in terms of coverage accuracy and average areas/lengths of confidence regions/intervals. A real data example is given to illustrate the proposed approach. The Canadian Journal of Statistics 38: 434–452; 2010 © 2010 Statistical Society of Canada

15.
A Bayesian approach is presented for detecting influential observations using general divergence measures on the posterior distributions. A sampling-based approach using a Gibbs or Metropolis-within-Gibbs method is used to compute the posterior divergence measures. Four specific measures are proposed, which convey the effects of a single observation or covariate on the posterior. The technique is applied to a generalized linear model with binary response data, an overdispersed model and a nonlinear model. An asymptotic approximation using the Laplace method to obtain the posterior divergence is also briefly discussed.
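A sketch of one such sampling-based divergence diagnostic, under simplifying assumptions: in a conjugate normal-mean toy model, the Kullback-Leibler divergence between the full posterior and the case-deleted posterior is estimated from full-posterior draws via importance weights 1/f(y_i | θ). In the paper the draws would come from a Gibbs or Metropolis-within-Gibbs sampler; here direct simulation stands in, and the model and outlier are illustrative.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

# Case-influence via KL divergence estimated from full-posterior draws, using
# the identity KL(full || without case i) = log E[1/f(y_i|theta)] + E[log f(y_i|theta)],
# with both expectations taken under the full posterior.
rng = np.random.default_rng(4)
y = np.append(rng.normal(0.0, 1.0, size=30), 6.0)    # last observation is an outlier
sigma, tau = 1.0, 10.0                                # known data sd and prior sd

post_var = 1.0 / (y.size / sigma**2 + 1.0 / tau**2)   # conjugate posterior for the mean
post_mean = post_var * y.sum() / sigma**2
theta = rng.normal(post_mean, np.sqrt(post_var), size=20_000)   # posterior draws

def kl_influence(i):
    """Estimated KL(full posterior || posterior without case i)."""
    loglik_i = norm.logpdf(y[i], loc=theta, scale=sigma)
    return logsumexp(-loglik_i) - np.log(loglik_i.size) + np.mean(loglik_i)

influence = np.array([kl_influence(i) for i in range(y.size)])
print("most influential case:", influence.argmax(), " KL =", influence.max())
```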

16.
We develop a saddle-point approximation for the marginal density of a real-valued function of θ̂, where θ̂ is a general M-estimator of a p-dimensional parameter θ, that is, the solution of the system {n^{-1} Σ_l ψ_j(Y_l, θ) = 0}, j = 1, …, p. The approximation is applied to several regression problems and yields very good accuracy for small samples. This enables us to compare different classes of estimators according to their finite-sample properties and to determine when asymptotic approximations are useful in practice.

17.
A frequency domain bootstrap (FDB) is a common technique for applying Efron's independent and identically distributed resampling technique (Efron, 1979) to periodogram ordinates – especially normalized periodogram ordinates – by using spectral density estimates. The FDB method is applicable to several classes of statistics, such as estimators of the normalized spectral mean, the autocorrelation (but not the autocovariance), the normalized spectral density function, and Whittle parameters. While the FDB method has been extensively studied for short-range dependent time processes, there is a dearth of research on its use with long-range dependent time processes. We therefore propose an FDB methodology for ratio statistics under long-range dependence, using semi- and nonparametric spectral density estimates as a normalizing factor. It is shown that the FDB approximation provides valid distribution estimation for a broad class of stationary, long-range (or short-range) dependent linear processes, without any stringent assumptions on the distribution of the underlying process. The results of a large simulation study show that the FDB approximation using a semi- or nonparametric spectral density estimator is often robust for various values of a long-memory parameter reflecting the magnitude of dependence. We apply the proposed procedure to two data examples.
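A minimal sketch of the FDB for one ratio statistic, the lag-1 autocorrelation written as a ratio of spectral means, assuming a simple running-mean smoother as the nonparametric spectral density estimate. The short-memory AR(1) data, bandwidth, and Monte Carlo sizes are illustrative; a long-range dependent application would use the semiparametric estimates discussed in the paper.

```python
import numpy as np

# FDB sketch: studentize periodogram ordinates by a nonparametric spectral
# density estimate, resample them i.i.d., and recompute the ratio statistic.
rng = np.random.default_rng(5)
n = 512
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):                          # AR(1) with coefficient 0.6
    x[t] = 0.6 * x[t - 1] + rng.normal()

# Periodogram at positive Fourier frequencies
freqs = 2 * np.pi * np.arange(1, n // 2 + 1) / n
dft = np.fft.rfft(x - x.mean())[1:n // 2 + 1]
perio = np.abs(dft) ** 2 / (2 * np.pi * n)

# Nonparametric spectral density estimate: running-mean smoother (edge effects ignored)
m = 7
f_hat = np.convolve(perio, np.ones(2 * m + 1) / (2 * m + 1), mode="same")

def rho1(ordinates):                           # ratio of spectral means = lag-1 autocorrelation
    return np.sum(np.cos(freqs) * ordinates) / np.sum(ordinates)

eps = perio / f_hat                            # studentized periodogram ordinates
boot = np.empty(1000)
for b in range(boot.size):
    eps_star = rng.choice(eps, size=eps.size, replace=True)
    boot[b] = rho1(f_hat * eps_star)

print("lag-1 autocorrelation estimate:", rho1(perio))
print("FDB standard error            :", boot.std(ddof=1))
```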

18.
The phenotype of a quantitative trait locus (QTL) is often modeled by a finite mixture of normal distributions. If the QTL effect depends on the number of copies of a specific allele one carries, then the mixture model has three components. In this case, the mixing proportions have a binomial structure according to the Hardy–Weinberg equilibrium. In the search for QTL, a significance test of homogeneity against the Hardy–Weinberg normal mixture model alternative is an important first step. The LOD score method, a likelihood ratio test used in genetics, is a favored choice. However, there is not yet a general theory for the limiting distribution of the likelihood ratio statistic in the presence of unknown variance. This paper derives the limiting distribution of the likelihood ratio statistic, which can be described by the supremum of a quadratic form of a Gaussian process. Further, the result implies that the distribution of the modified likelihood ratio statistic is well approximated by a chi-squared distribution. Simulation results show that the approximation has satisfactory precision for the cases considered. We also give a real-data example.

19.
A necessary and sufficient condition, in terms of its parameters, is established for a two-associate-class PBIB design to be connected.

20.
The effects of applying the normal classificatory rule to a nonnormal population are studied here. These are assessed through the distribution of the misclassification errors in the case of an Edgeworth-type distribution. Both theoretical and empirical results are presented. An examination of the latter shows that the effects of this type of nonnormality are marginal. The probability of misclassifying an observation from Π1 under the appropriate likelihood-ratio rule is always larger than under the normal approximation (μ1 < μ2); the converse holds for misclassifying an observation from Π2. Overall error rates are not greatly affected by the skewness factor.
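A small Monte Carlo in the same spirit, with a standardized gamma standing in for the Edgeworth-type departure from normality (an assumption of this sketch, not the paper's setup): the normal rule with a midpoint cut-off is applied to skewed populations and the resulting error rates are compared with the nominal normal rate.

```python
import numpy as np
from scipy.stats import norm, gamma

# Apply the normal classificatory rule (midpoint cut-off, equal variances) to
# skewed populations and compare error rates with the nominal normal value.
rng = np.random.default_rng(6)
mu1, mu2, n_sim = 0.0, 2.0, 200_000
cut = 0.5 * (mu1 + mu2)                 # normal rule: assign to pi_1 if x < cut

def skewed_sample(mu, size, shape=4.0):
    # gamma(shape) standardized to mean mu and unit variance (skewness 2/sqrt(shape))
    g = gamma.rvs(shape, size=size, random_state=rng)
    return mu + (g - shape) / np.sqrt(shape)

err1 = np.mean(skewed_sample(mu1, n_sim) >= cut)   # P(misclassify an obs from pi_1)
err2 = np.mean(skewed_sample(mu2, n_sim) < cut)    # P(misclassify an obs from pi_2)
nominal = norm.cdf(-(mu2 - mu1) / 2)               # error rate if the data were normal

print("error from pi_1: %.4f   error from pi_2: %.4f" % (err1, err2))
print("nominal normal error rate: %.4f   overall under skewness: %.4f"
      % (nominal, 0.5 * (err1 + err2)))
```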
