Similar Articles (20 results)
1.
A precise estimator for the log-normal mean
The log-normal distribution is frequently encountered in applications. The uniformly minimum variance unbiased (UMVU) estimator for the log-normal mean is given explicitly by a formula found by Finney in 1941. In spite of this, the most commonly used estimator for a log-normal mean is the sample mean, possibly because of the complexity of Finney's formula. A modified maximum likelihood estimator which approximates the UMVU estimator is derived here. It is sufficiently simple to be implemented in elementary spreadsheet applications. An elementary approximate formula for the root-mean-square error of the suggested estimator and the UMVU estimator is presented. The suggested estimator is compared with the sample mean, the maximum likelihood, and the UMVU estimators by Monte Carlo simulation in terms of root-mean-square error.
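
As a rough illustration of the comparison described above, the following sketch contrasts the naive sample mean with the simple plug-in (ML-type) estimator exp(ȳ + s²/2) by Monte Carlo root-mean-square error; it does not implement Finney's UMVU estimator or the modified estimator proposed in the paper, and all parameter values are arbitrary.

```python
# A minimal Monte Carlo sketch (not Finney's UMVU estimator, nor the paper's
# modified ML estimator): it only reproduces the kind of RMSE comparison the
# abstract describes, between the naive sample mean and the plug-in
# estimator exp(ybar + s^2 / 2) of the log-normal mean exp(mu + sigma^2 / 2).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 0.0, 1.0, 25, 20_000        # arbitrary illustrative settings
true_mean = np.exp(mu + sigma**2 / 2)

err_sample, err_plugin = [], []
for _ in range(reps):
    x = rng.lognormal(mean=mu, sigma=sigma, size=n)
    y = np.log(x)
    ybar, s2 = y.mean(), y.var(ddof=1)
    err_sample.append(x.mean() - true_mean)               # sample mean of X
    err_plugin.append(np.exp(ybar + s2 / 2) - true_mean)  # plug-in on the log scale

rmse = lambda e: float(np.sqrt(np.mean(np.square(e))))
print(f"RMSE, sample mean      : {rmse(err_sample):.4f}")
print(f"RMSE, plug-in estimator: {rmse(err_plugin):.4f}")
```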

2.
This paper examines the problem of assessing local influence on the optimal bandwidth estimation in kernel smoothing based on cross-validation. The bandwidth for kernel smoothing plays an important role in the model fitting and is often estimated using the cross-validation criterion. Following the argument of the second-order approach to local influence suggested by Wu and Luo (1993), we develop a new diagnostic statistic to examine the local influence of the observations on the estimation of the optimal bandwidth, where the perturbation may belong to one of three schemes. These are the response perturbation, the perturbation in the explanatory variable, and the case-weight perturbation. The proposed diagnostic is nonparametric and is capable of identifying influential observations with strong influence on the bandwidth estimation. An example is presented to illustrate the application of the proposed diagnostic, and the usefulness of the nonparametric approach is illustrated in comparison with some other approaches to the assessment of local influence.
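
The quantity whose sensitivity is being diagnosed is the cross-validated bandwidth itself. A minimal sketch of leave-one-out cross-validation for a Nadaraya-Watson smoother is given below; the local-influence statistic of the paper is not reproduced, and the simulated data and Gaussian kernel are illustrative assumptions.

```python
# A minimal sketch of the quantity the diagnostic targets: the bandwidth of a
# Nadaraya-Watson smoother chosen by leave-one-out cross-validation.  The
# simulated data and the Gaussian kernel are illustrative assumptions; the
# local-influence statistic itself is not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)

def loo_cv_score(h):
    """Leave-one-out squared prediction error of the kernel smoother."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)   # Gaussian kernel weights
    np.fill_diagonal(w, 0.0)                                  # drop each point's own weight
    fitted = (w @ y) / w.sum(axis=1)
    return np.mean((y - fitted) ** 2)

grid = np.linspace(0.02, 0.3, 57)
h_opt = grid[np.argmin([loo_cv_score(h) for h in grid])]
print(f"cross-validated bandwidth: {h_opt:.3f}")
```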

3.
We address the problem of estimating the edge of a bounded set in ℝ^d given a random set of points drawn from the interior. Our method is based on a transformation of estimators dedicated to uniform point processes, obtained by smoothing some of its bias-corrected extreme points. An application to the estimation of star-shaped supports is presented.
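
A one-dimensional analogue may help fix ideas: for a uniform sample on [0, θ], the sample maximum is a biased edge estimator, and the familiar correction (n+1)/n · max removes the leading-order bias. The sketch below shows only this simple analogue, not the smoothing-based construction in ℝ^d described in the abstract.

```python
# One-dimensional sketch only: bias-correcting the extreme point of a uniform
# sample to estimate the support edge.  The set [0, theta] and the sample size
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
theta, n = 3.0, 50
x = rng.uniform(0, theta, n)

raw = x.max()                      # biased downwards as an edge estimate
corrected = (n + 1) / n * raw      # classical bias correction for a uniform sample
print(f"sample maximum : {raw:.3f}")
print(f"bias-corrected : {corrected:.3f}   (true edge = {theta})")
```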

4.
Recently, Knautz and Trenkler (1993) considered Christensen's (1987) equicorrelated linear regression model as an example to show that S² and are independent even though the disturbances are equicorrelated. This paper addresses the issue of testing for the equicorrelation coefficient in the linear regression model based on survey data. It computes exact and approximate critical values using point-optimal and F-test statistics, respectively. An empirical comparison of these critical values at the five percent nominal level is presented to demonstrate the performance of the new tests.

5.
Current statistical methods for analyzing epidemiological data with disease subtype information allow us to learn not only about risk factor-disease subtype associations but also, more profoundly, about heterogeneity in these associations across multiple disease characteristics (the so-called etiologic heterogeneity of the disease). Current interest, particularly in cancer epidemiology, lies in obtaining a valid p-value for testing whether a particular cancer is etiologically heterogeneous. We consider the two-stage logistic regression model together with the pseudo-conditional likelihood estimation method and design a testing strategy based on Rao's score test. An extensive Monte Carlo simulation study is carried out in which the false discovery rate and statistical power of the suggested test are investigated. Simulation results indicate that, with the proposed testing strategy, even a small degree of true etiologic heterogeneity can be recovered with large statistical power from the sampled data. The strategy is then applied to a breast cancer data set to illustrate its use in practice where there are multiple risk factors and multiple disease characteristics of simultaneous concern.

6.
In this paper, we examine the sampling performance of a two-stage test which consists of a pre-test for a linear hypothesis on the regression coefficients followed by a main test for the disturbance variance in a linear regression. It is shown that the actual size of the two-stage test can be kept close to the nominal size if the sizes suggested in this paper are used in the pre-test. It is also shown that, when the suggested sizes are used in the pre-test, the two-stage test is preferable in terms of power to the usual test for the disturbance variance, which incorporates no pre-test.
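
A plausible Monte Carlo check of the actual size of such a two-stage procedure is sketched below: an F pre-test of a zero restriction decides whether the restricted or unrestricted residual sum of squares enters the chi-square test for the disturbance variance. The pre-test size of 0.25 and all other settings are illustrative assumptions, not the sizes suggested in the paper.

```python
# A hedged sketch of a two-stage test of the kind described: an F pre-test of
# the restriction beta_2 = beta_3 = 0 decides whether the restricted or the
# unrestricted residual sum of squares enters the chi-square test of
# H0: sigma^2 = sigma_0^2.  The pre-test size (0.25) and all other settings
# are illustrative assumptions, not the sizes suggested in the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n, p, sigma0 = 30, 3, 1.0
alpha_pre, alpha_main, reps = 0.25, 0.05, 5_000
beta = np.array([1.0, 0.0, 0.0])          # the pre-tested restriction holds here

rejections = 0
for _ in range(reps):
    X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
    y = X @ beta + rng.normal(scale=sigma0, size=n)

    b_u = np.linalg.lstsq(X, y, rcond=None)[0]               # unrestricted fit
    rss_u = np.sum((y - X @ b_u) ** 2)
    b_r = np.linalg.lstsq(X[:, :1], y, rcond=None)[0]        # restricted fit
    rss_r = np.sum((y - X[:, :1] @ b_r) ** 2)

    F = ((rss_r - rss_u) / (p - 1)) / (rss_u / (n - p))
    if F > stats.f.ppf(1 - alpha_pre, p - 1, n - p):
        stat, df = rss_u / sigma0**2, n - p                  # pre-test rejected
    else:
        stat, df = rss_r / sigma0**2, n - 1                  # restriction imposed
    rejections += stat > stats.chi2.ppf(1 - alpha_main, df)

print(f"estimated actual size of the two-stage test: {rejections / reps:.3f}")
```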

7.
It is suggested that in some situations, observations on random variables should be collected in the form of intervals. In this paper, the unknown parameters of a bivariate normal model are estimated from a set of point and interval observations via the maximum likelihood approach. The Newton-Raphson algorithm is used to find the estimates, and asymptotic properties of the estimator are provided. Monte Carlo studies are conducted to evaluate the performance of the estimator. An example based on real-life data is presented to demonstrate the practical applicability of the method.
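
To make the likelihood construction concrete, the sketch below fits a univariate normal model to a mix of exact and interval-censored observations; intervals contribute the probability mass between their endpoints. The paper's bivariate model and Newton-Raphson iteration are not reproduced, and a general-purpose optimizer is used instead.

```python
# A hedged sketch of the likelihood for mixed point and interval observations,
# in the simpler univariate-normal case.  The paper treats a bivariate normal
# model and uses Newton-Raphson; a general-purpose optimizer is used here, and
# the data are simulated under illustrative assumptions.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
mu_true, sigma_true = 2.0, 1.5
points = rng.normal(mu_true, sigma_true, 40)        # recorded exactly
latent = rng.normal(mu_true, sigma_true, 20)        # recorded only as intervals
lower, upper = latent - 0.5, latent + 0.5           # each interval brackets its latent value

def neg_loglik(par):
    mu, log_sigma = par
    sigma = np.exp(log_sigma)                        # keep sigma positive
    ll = stats.norm.logpdf(points, mu, sigma).sum()  # point observations: density
    ll += np.log(stats.norm.cdf(upper, mu, sigma)    # interval observations:
                 - stats.norm.cdf(lower, mu, sigma)).sum()  # probability mass
    return -ll

res = optimize.minimize(neg_loglik, x0=[0.0, 0.0])
print("mu_hat    :", round(res.x[0], 3))
print("sigma_hat :", round(float(np.exp(res.x[1])), 3))
```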

8.
A method for obtaining bootstrap replicates of one-dimensional point processes is presented. The method involves estimating the conditional intensity of the process and computing residuals. The residuals are resampled using a block bootstrap and used, together with the conditional intensity, to define the bootstrap realizations. The method is applied to the estimation of the cross-intensity function for data arising from a reaction-time experiment.
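
The resampling step can be sketched with a simple moving block bootstrap, as below; the conditional-intensity estimation and the construction of point-process residuals, which precede this step in the method, are omitted, and a plain noise series stands in for the residuals.

```python
# A minimal sketch of the resampling step only: a moving block bootstrap of a
# residual series.  The conditional-intensity estimation and the construction
# of point-process residuals are omitted; plain noise stands in for them.
import numpy as np

rng = np.random.default_rng(4)

def block_bootstrap(series, block_len, rng):
    """Concatenate randomly chosen contiguous blocks until the original length is reached."""
    n = len(series)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    return np.concatenate([series[s:s + block_len] for s in starts])[:n]

residuals = rng.normal(size=200)                      # stand-in residual series
replicate = block_bootstrap(residuals, block_len=20, rng=rng)
print(replicate[:5])
```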

9.
We are concerned with nested case-control studies in this article. For the proportional hazards model, a class of overall estimators of hazard ratios is presented for the case in which simple random samples are drawn from the risk sets. These estimators have the form of the Mantel-Haenszel estimator of the odds ratio and are consistent not only for large strata but also for sparse data. Consistent estimators of the variances of the proposed hazard ratio estimators are also developed. An example is given to illustrate the proposed estimators.
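
For reference, the Mantel-Haenszel form mentioned above is a ratio of sums of within-stratum cross-products. The sketch below computes the classical Mantel-Haenszel odds ratio over a few made-up 2×2 strata; the paper's estimators apply this form to sampled risk sets rather than to ordinary strata.

```python
# The Mantel-Haenszel form referred to above, shown for the classical odds
# ratio over 2x2 strata: a ratio of sums of within-stratum cross-products.
# The counts are made up; the paper adapts this form to risk sets sampled in
# a nested case-control design.
import numpy as np

# each row: (a, b, c, d) = exposed cases, unexposed cases,
#           exposed controls, unexposed controls within one stratum
strata = np.array([
    [4.0, 16.0,  7.0, 73.0],
    [2.0, 14.0,  5.0, 79.0],
    [3.0, 12.0,  8.0, 77.0],
])

a, b, c, d = strata.T
n = strata.sum(axis=1)
or_mh = np.sum(a * d / n) / np.sum(b * c / n)
print(f"Mantel-Haenszel odds ratio: {or_mh:.3f}")
```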

10.
The performance of computationally inexpensive model selection criteria in the context of tree-structured prediction is discussed. It is shown through a simulation study that no single model selection criterion exhibits uniformly superior performance over a wide range of scenarios. Therefore, a two-stage approach to model selection is suggested and shown to perform satisfactorily. A computationally efficient method of tree-growing within the RECursive Partition and AMalgamation (RECPAM) framework is suggested; the efficient algorithm gives results identical to those of the original RECPAM tree-growing algorithm. An example of medical data analysis for developing a prognostic classification is presented.

11.
This paper studies the construction of a Bayesian confidence interval for the risk ratio (RR) in a 2 × 2 table with a structural zero. Under a Dirichlet prior distribution, the exact posterior distribution of the RR is derived, and a tail-based interval is suggested for constructing the Bayesian confidence interval. The frequentist performance of this confidence interval is investigated by simulation and compared with the score-based interval in terms of mean coverage probability and mean expected width. An advantage of the Bayesian confidence interval is that it is well defined for all data structures and has a shorter expected width. Our simulation shows that the Bayesian tail-based interval under Jeffreys' prior performs as well as or better than the score-based confidence interval.
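
A hedged sketch of the tail-based construction follows: the free cells of the table receive a Dirichlet posterior, the RR is evaluated on posterior draws, and the interval is read off the tails. The cell layout, the Jeffreys-type prior, and the particular RR definition used below (P(secondary | primary) / P(primary)) are illustrative assumptions; the paper should be consulted for the exact parametrization.

```python
# A hedged sketch of a tail-based interval under a Dirichlet prior.  The cell
# layout (structural zero in one cell, three free cells), the Jeffreys-type
# prior, and the RR definition used below -- P(secondary | primary) divided by
# P(primary) -- are illustrative assumptions; consult the paper for the exact
# parametrization and prior.
import numpy as np

rng = np.random.default_rng(5)
n11, n12, n22 = 30, 12, 58                            # observed counts in the free cells
posterior = rng.dirichlet(np.array([n11, n12, n22]) + 0.5, size=50_000)

p11, p12, _ = posterior.T
rr = (p11 / (p11 + p12)) / (p11 + p12)                # assumed RR functional
lo_q, hi_q = np.percentile(rr, [2.5, 97.5])           # tail-based 95% interval
print(f"tail-based 95% interval for RR: ({lo_q:.3f}, {hi_q:.3f})")
```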

12.
Optimal design under a cost constraint is considered, with a scalar coefficient setting the compromise between information and cost. It is shown that for suitable cost functions, by increasing the value of the coefficient one can force the support points of an optimal design measure to concentrate around points of minimum cost. An example of adaptive design in a dose-finding problem with a bivariate binary model is presented, showing the effectiveness of the approach.

13.
The proportional hazards assumption of the Cox model sometimes does not hold in practice; an example is a treatment effect that decreases with time. We study a general multiplicative intensity model allowing the influence of each covariate to vary non-parametrically with time. An efficient estimation procedure for the cumulative parameter functions is developed, and its properties are studied using the martingale structure of the problem. Furthermore, we introduce a partly parametric version of the general non-parametric model in which the influence of some of the covariates varies with time while the effects of the remaining covariates are constant. This semiparametric model has not been studied in detail before. An efficient procedure for estimating the parametric as well as the non-parametric components of this model is developed. Again, the martingale structure of the model allows us to describe the asymptotic properties of the suggested estimators. The approach is applied to two different data sets, and a Monte Carlo simulation is presented.

14.
Recognizing the desirability of simultaneously using both the goodness of fit of the model and the clustering of estimates around the true parameter values as criteria, an extended version of the balanced loss function is presented and the Bayesian estimation of regression coefficients is discussed. The resulting optimal estimator is then compared with the least squares estimator and the posterior mean vector with respect to criteria such as posterior expected loss, Bayes risk, bias vector, mean squared error matrix, and risk function.

15.
The purpose of this study is to approximate and identify infinite scale mixtures of normals (SMN). A new method for approximating any infinite SMN with a known mixing measure by a finite SMN is presented. In the new method, the modulus of continuity of the normal family as a function of the scale is used to discretize the mixing measure. This method is used to approximate univariate and multivariate SMN with mean 0. In the multivariate case, two different methods are used to approximate the infinite SMN. Several results related to SMN are proved and other known ones are presented. For example, SMN are characterized by their corresponding Laplace transforms.
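
As an informal illustration of discretizing a known mixing measure, the sketch below approximates a Student-t density (an SMN whose mixing measure on the variance is inverse-gamma) by an equal-weight quantile discretization of that measure. This is not the modulus-of-continuity construction of the paper, only a simple stand-in.

```python
# An informal illustration of discretizing a known mixing measure: the
# Student-t is an SMN whose mixing measure on the variance is inverse-gamma,
# and an equal-weight quantile discretization of that measure yields a finite
# normal mixture approximating the t density.  This is only a stand-in for the
# modulus-of-continuity construction of the paper.
import numpy as np
from scipy import stats

nu, K = 5.0, 20
mixing = stats.invgamma(a=nu / 2, scale=nu / 2)       # mixing measure on the variance
v = mixing.ppf((np.arange(K) + 0.5) / K)              # K equal-probability scale values

x = np.linspace(-6, 6, 201)
approx = np.mean([stats.norm.pdf(x, scale=np.sqrt(vi)) for vi in v], axis=0)
exact = stats.t.pdf(x, df=nu)
print(f"max absolute density error with K = {K}: {np.abs(approx - exact).max():.5f}")
```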

16.
It is widely acknowledged that the biomedical literature suffers from a surfeit of false positive results. Part of the reason for this is the persistence of the myth that observation of p < 0.05 is sufficient justification to claim that you have made a discovery. It is hopeless to expect users to change their reliance on p-values unless they are offered an alternative way of judging the reliability of their conclusions. If the alternative method is to have a chance of being adopted widely, it will have to be easy to understand and to calculate. One such proposal is based on calculation of the false positive risk (FPR). It is suggested that p-values and confidence intervals should continue to be given, but that they should be supplemented by a single additional number that conveys the strength of the evidence better than the p-value does. This number could be the minimum FPR (that calculated on the assumption of a prior probability of 0.5, the largest value that can be assumed in the absence of hard prior data). Alternatively, one could specify the prior probability that it would be necessary to believe in order to achieve an FPR of, say, 0.05.
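
One common way to compute such a number, shown below as a sketch, is the "p-less-than" formulation: among tests declared significant at level α, the expected fraction of false positives given the power and the prior probability of a real effect. The paper's minimum-FPR calculation rests on a related but more refined argument, so the figures below are only indicative.

```python
# A sketch of one common ("p-less-than") false-positive-risk calculation:
# among results declared significant at level alpha, the expected fraction of
# false positives, given the power of the test and the prior probability that
# the effect is real.  The paper's minimum-FPR argument is related but more
# refined, so these figures are only indicative.
def false_positive_risk(alpha: float, power: float, prior_real: float) -> float:
    false_pos = alpha * (1.0 - prior_real)     # significant results under true nulls
    true_pos = power * prior_real              # significant results under real effects
    return false_pos / (false_pos + true_pos)

# With alpha = 0.05, 80% power and a 50:50 prior the FPR is about 6%,
# and it grows quickly as the prior probability of a real effect falls.
for prior in (0.5, 0.1):
    print(f"prior = {prior:.1f}: FPR = {false_positive_risk(0.05, 0.8, prior):.3f}")
```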

17.
Although still modest, nonresponse rates in multipurpose household surveys have recently increased, especially in some metropolitan areas. Previous analyses have shown that the refusal risk depends on the interviewers' characteristics. The aim of this paper is to explain the differences in refusal risk among metropolitan areas by analysing the strategies adopted in the recruitment of interviewers through a multilevel approach. Our database is the Annual Survey on Living Conditions, a PAPI survey belonging to the "Multipurpose" integrated system of social surveys. For nonresponding households, data on nonresponse by reason, municipality, and characteristics of the interviewer are available. The results highlight that cities recruiting interviewers mainly among young students have a higher refusal risk. These results are particularly important as they indicate that recruitment strategies may have a substantial impact on non-sampling errors. Acknowledgements: An earlier version of this article was presented at the International Conference on Improving Surveys, University of Copenhagen, Denmark, August 25-28, 2002. We would like to thank the participants for their useful comments and suggestions. Opinions expressed are those of the authors and do not necessarily represent the official position of any of the institutions they work for.

18.
We study the persistence of intertrade durations, counts (numbers of transactions in equally spaced intervals of clock time), squared returns, and realized volatility in 10 stocks trading on the New York Stock Exchange. A semiparametric analysis reveals the presence of long memory in all of these series, with potentially the same memory parameter. We introduce a parametric latent-variable long-memory stochastic duration (LMSD) model, which is shown to fit the data better than the autoregressive conditional duration (ACD) model in a variety of ways. The empirical evidence presented here is in agreement with theoretical results on the propagation of memory from durations to counts and realized volatility presented in Deo et al. (2009).
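
The semiparametric step referred to above is typically a log-periodogram regression. A sketch of the Geweke-Porter-Hudak (GPH) estimator of the memory parameter d is given below; whether this is the exact estimator used in the paper is not stated in the abstract, and the white-noise test series is only a sanity check.

```python
# A sketch of a standard semiparametric long-memory estimate: the
# Geweke-Porter-Hudak (GPH) log-periodogram regression for the memory
# parameter d.  Whether this exact estimator is the one used in the paper is
# not stated in the abstract; the white-noise series below is only a sanity
# check (its true d is 0).
import numpy as np

def gph_estimate(series, m=None):
    """Estimate d by regressing the log periodogram on log(4 sin^2(freq / 2))."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    n = len(x)
    m = m or int(n ** 0.5)                             # number of low frequencies used
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    periodogram = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2 * np.pi * n)
    regressor = np.log(4 * np.sin(freqs / 2) ** 2)
    slope = np.polyfit(regressor, np.log(periodogram), 1)[0]
    return -slope                                      # d is minus the slope

rng = np.random.default_rng(6)
print("GPH estimate of d for white noise:", round(gph_estimate(rng.normal(size=2048)), 3))
```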

19.
A method of maximum likelihood estimation of gross flows from overlapping stratified sample data is developed. The approach taken is model-based, and the EM algorithm is used to solve the estimation problem. Inference is thus based on information from the total sample at each time period. This can be contrasted with the conventional approach to gross flows estimation, which only uses information from the overlapping sub-sample. An application to the estimation of flows of Australian cropping and livestock industry farms into and out of an "at risk" situation over the period 1979–84 is presented, as well as a discussion of extensions to more complex sampling situations.

20.
As is well known, the ordinary least-squares estimator (OLSE) is unbiased and has the minimum variance among all linear unbiased estimators. Under multicollinearity, however, the estimator is generally unstable and poor in the sense that the variances of the regression coefficients may be inflated and the absolute values of the estimates may be too large. There are several classes of biased estimators in the statistical literature designed to reduce the effect of multicollinearity in the design matrix. Here, based on the Cholesky decomposition, we propose such an estimator, which slightly distorts the data. The exact risk expressions as well as the biases are derived for the proposed estimator. Also, some results demonstrating the superiority of the suggested estimator over the OLSE are obtained. Finally, a Monte Carlo simulation study and a real data application involving the acetylene data are presented to support our theoretical discussion.
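
For a feel of the kind of comparison involved, the sketch below contrasts OLS with ridge regression (a standard biased estimator, not the Cholesky-based estimator proposed in the paper) on a nearly collinear design; the degree of collinearity and the ridge constant are arbitrary.

```python
# A hedged sketch of the kind of comparison described: OLS against a simple
# biased alternative (ridge regression, not the paper's Cholesky-based
# estimator) on a nearly collinear design.  The collinearity level and the
# ridge constant are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(7)
n, reps, k = 50, 2_000, 0.5
beta = np.array([1.0, 2.0, -1.5])

mse_ols = mse_ridge = 0.0
for _ in range(reps):
    z = rng.normal(size=n)
    X = np.column_stack([z + 0.05 * rng.normal(size=n) for _ in range(3)])  # near-collinear columns
    y = X @ beta + rng.normal(size=n)
    b_ols = np.linalg.solve(X.T @ X, X.T @ y)
    b_ridge = np.linalg.solve(X.T @ X + k * np.eye(3), X.T @ y)
    mse_ols += np.sum((b_ols - beta) ** 2) / reps
    mse_ridge += np.sum((b_ridge - beta) ** 2) / reps

print(f"Monte Carlo MSE, OLS  : {mse_ols:.3f}")
print(f"Monte Carlo MSE, ridge: {mse_ridge:.3f}")
```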
