Similar Articles
1.
In observational studies of the interaction between exposures on a dichotomous outcome, a single parameter of a regression model is usually used to describe the interaction, yielding a single interaction measure. In this article we instead describe the interaction through the conditional risk of the outcome given exposures and covariates, and obtain five different measures: the difference between the marginal risk differences, the ratio of the marginal risk ratios, the ratio of the marginal odds ratios, the ratio of the conditional risk ratios, and the ratio of the conditional odds ratios. These measures reflect different aspects of the interaction. Using only one regression model for the conditional risk, we obtain maximum-likelihood (ML) point and interval estimates of all five measures, which are asymptotically efficient by the properties of ML. The ML estimates of the measures follow from the ML estimates of the model parameters, and the approximate normal distribution of the parameter estimates yields approximate (non-normal) distributions of the measure estimates and hence confidence intervals. The method is easy to implement and is illustrated with a medical example.
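As an illustration only (not the authors' code), the sketch below fits one logistic model for the conditional risk and computes three of the marginal interaction measures by standardizing predicted risks over the covariate distribution; the simulated data, variable names, and use of statsmodels are assumptions, and the conditional odds-ratio interaction is read off the fitted A*B coefficient.

```python
# Minimal sketch: one conditional-risk (logistic) model, several interaction measures.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
A = rng.integers(0, 2, n)                      # exposure 1
B = rng.integers(0, 2, n)                      # exposure 2
X = rng.normal(size=n)                         # covariate
logit = -1.5 + 0.4 * A + 0.6 * B + 0.3 * A * B + 0.2 * X
Y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

design = sm.add_constant(np.column_stack([A, B, A * B, X]))
fit = sm.GLM(Y, design, family=sm.families.Binomial()).fit()

def marginal_risk(a, b):
    # Standardize the predicted risk over the empirical covariate distribution.
    d = sm.add_constant(
        np.column_stack([np.full(n, a), np.full(n, b), np.full(n, a * b), X]),
        has_constant="add")
    return fit.predict(d).mean()

p00, p10, p01, p11 = (marginal_risk(a, b) for (a, b) in [(0, 0), (1, 0), (0, 1), (1, 1)])
odds = lambda p: p / (1 - p)

print("difference of marginal risk differences:", (p11 - p01) - (p10 - p00))
print("ratio of marginal risk ratios:          ", (p11 / p01) / (p10 / p00))
print("ratio of marginal odds ratios:          ", (odds(p11) / odds(p01)) / (odds(p10) / odds(p00)))
# The conditional odds-ratio interaction is exp(coefficient of A*B):
print("ratio of conditional odds ratios:       ", np.exp(fit.params[3]))
```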

2.
A new method, based on estimating equations, is proposed for estimating a set of odds ratios under an order restriction. The method is applied to the estimating equations of the conditional maximum likelihood estimators and of the Mantel-Haenszel estimators. The estimators derived from the conditional likelihood estimating equations are shown to maximize the conditional likelihoods, and the restricted estimators are shown to converge almost surely to the respective odds ratios as the sample sizes grow large at comparable rates. The restricted estimators are compared with the unrestricted maximum likelihood estimators in a Monte Carlo simulation. The simulations show that the restricted estimates reduce the mean squared errors markedly, while the Mantel-Haenszel-type estimates are competitive with, though slightly worse than, the conditional maximum likelihood estimates.
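The sketch below is not the paper's estimating-equation estimator; it only illustrates the idea of an order restriction by projecting unrestricted per-group log odds ratios onto an increasing sequence with weighted isotonic regression. The 2x2 tables, the Woolf-type variances used as weights, and the use of scikit-learn are assumptions.

```python
# Minimal stand-in: impose an increasing order on sample odds ratios via isotonic regression.
import numpy as np
from sklearn.isotonic import IsotonicRegression

# One 2x2 table per group: rows (exposed, unexposed), columns (case, control).
tables = {
    "g1": np.array([[20, 80], [15, 85]]),
    "g2": np.array([[30, 70], [18, 82]]),
    "g3": np.array([[28, 72], [20, 80]]),
}

log_or, weights = [], []
for t in tables.values():
    a, b = t[0]                                   # exposed cases, exposed controls
    c, d = t[1]                                   # unexposed cases, unexposed controls
    log_or.append(np.log((a * d) / (b * c)))      # sample log odds ratio
    weights.append(1.0 / (1/a + 1/b + 1/c + 1/d)) # inverse of the Woolf variance

iso = IsotonicRegression(increasing=True)
restricted = iso.fit_transform(np.arange(len(log_or)), log_or, sample_weight=weights)
print("unrestricted ORs:     ", np.round(np.exp(log_or), 2))
print("order-restricted ORs: ", np.round(np.exp(restricted), 2))
```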

3.
Maximum likelihood (ML) estimation with spatial econometric models is a long-standing problem that finds application in several areas of economic importance. The problem is particularly challenging in the presence of missing data, since there is an implied dependence between all units, irrespective of whether they are observed or not. Out of the several approaches adopted for ML estimation in this context, that of LeSage and Pace [Models for spatially dependent missing data. J Real Estate Financ Econ. 2004;29(2):233–254] stands out as one of the most commonly used with spatial econometric models due to its ability to scale with the number of units. Here, we review their algorithm, and consider several similar alternatives that are also suitable for large datasets. We compare the methods through an extensive empirical study and conclude that, while the approximate approaches are suitable for large sampling ratios, for small sampling ratios the only reliable algorithms are those that yield exact ML or restricted ML estimates.

4.
The present study proposes a method to estimate crop yield. The proposed Gaussian quadrature (GQ) method makes it possible to estimate the yield from a smaller subsample. The plots comprising the subsample, and the weights assigned to their yields, are identified using full-sample information on auxiliary variables describing biometrical characteristics of the plant. Computational experience shows that the proposed method reduces the sample size by about 78% with an absolute percentage error of 2.7%. Its performance is compared with that of random sampling using the average absolute percentage error and the standard deviation of yield estimates obtained from 40 samples of comparable size; both the average absolute percentage error and the standard deviation are considerably smaller for the GQ estimates than for the random-sample estimates. The method is quite general and can be applied to other crops as well, provided information on auxiliary variables relating to yield-contributing biometrical characteristics is available.

5.
In this article, a Bayesian approach is proposed for estimating log odds ratios and intraclass correlations in a two-way contingency table that includes intraclass-correlated cells. The required likelihood functions of the log odds ratios are derived, and the choice of prior structures is discussed. Hypothesis testing for log odds ratios and intraclass correlations using posterior simulations is outlined. Because the approach relies on no asymptotic theory, it is useful for estimating and testing log odds ratios in the presence of various intraclass correlation patterns. A family health status and limitations data set is analyzed with the proposed approach to assess the impact of intraclass correlations on the estimates and hypothesis tests of log odds ratios. Although the intraclass correlations in this data set are small, we find that even small intraclass correlations can significantly affect the estimates and test results, and the approach remains useful for estimating and testing log odds ratios in their presence.

6.
Jennison and Turnbull (1984, 1989) proposed procedures for repeated confidence intervals for parameters of interest in a clinical trial monitored with group sequential methods. These methods are extended here for use with stochastic curtailment procedures in two-sample estimation of differences of means, differences of proportions, odds ratios, and hazard ratios. Methods are described for constructing (1) confidence intervals for these estimates at repeated times during the course of a trial, and (2) prediction intervals for the estimates anticipated at the end of the trial. Specific examples from several clinical trials are presented.

7.
For rare diseases the observed disease count may exhibit extra-Poisson variability, particularly in areas with low or sparse populations. Hence the variance of the estimates of disease risk, the standardized mortality ratios, may be highly unstable. This overdispersion must be taken into account, otherwise subsequent maps based on standardized mortality ratios will be misleading and, rather than displaying the true spatial pattern of disease risk, will highlight the most extreme values. Neighbouring areas also tend to exhibit spatial correlation, as they may share more similarities than non-neighbouring areas. The need to address overdispersion and spatial correlation has led to the proposal of Bayesian approaches for smoothing estimates of disease risk. We propose a new model for investigating the spatial variation of disease risks, in conjunction with an alternative specification for estimates of disease risk in geographical areas: the multivariate Poisson–gamma model. The main advantages of this new model lie in its simplicity and its ability to account naturally for overdispersion and spatial autocorrelation. Exact expressions for important quantities such as expectations, variances and covariances can be easily derived.
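As a simplified illustration only (the paper's model is multivariate and spatial), the sketch below computes raw standardized mortality ratios for synthetic areas and shrinks them with a basic, non-spatial Poisson-gamma (empirical Bayes) smoother; the simulated counts and the moment estimator of the gamma parameters are assumptions.

```python
# Minimal non-spatial sketch: raw SMRs are unstable for small expected counts;
# a Poisson-gamma smoother shrinks them toward the overall level.
import numpy as np

rng = np.random.default_rng(1)
expected = rng.uniform(0.5, 30.0, size=50)               # expected counts E_i
true_risk = rng.gamma(shape=8.0, scale=1 / 8.0, size=50) # relative risks around 1
observed = rng.poisson(expected * true_risk)

smr = observed / expected                                 # raw SMR = O_i / E_i

# Method-of-moments fit of a gamma prior for the relative risk, then the
# posterior mean (O_i + alpha) / (E_i + beta) for each area.
m, v = smr.mean(), smr.var()
alpha = m**2 / max(v - (m / expected).mean(), 1e-6)       # crude moment estimate
beta = alpha / m
smoothed = (observed + alpha) / (expected + beta)

print("raw SMR range:     ", smr.min().round(2), "-", smr.max().round(2))
print("smoothed SMR range:", smoothed.min().round(2), "-", smoothed.max().round(2))
```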

8.
If a subgroup of a population is of particular interest in a survey, researchers may wish to increase the yield of this special subgroup by oversampling. One procedure for oversampling through households, or other clusters, is to divide the households into two segments: the main sample and the oversample (for which only members of the special group are eligible). Members of the oversampled special group come from both segments. This paper describes three methods for weighting the members of the special group. The household method treats the segments as strata and weights according to the proportion of households in each segment. The yield method uses weights according to the yield of special-group members in the two segments. The combined probability method provides a Horvitz-Thompson estimator using the sum of the probabilities that a person will be selected through either segment. Simulations show that the yield method produces estimates with variance lower than those of the household method. The combined probability method appears to be even more efficient. The difference in precision between the methods is small for estimates from the total sample but the household method can be markedly worse than the other two methods for estimates from the oversampled special group (over 40% greater variance in one scenario). Results from a community sample illustrate the comparisons. Because the household method can be much less efficient it should not be used.
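A minimal sketch of the combined probability method only, the third scheme above: each sampled special-group member is weighted by the reciprocal of the sum of their selection probabilities through the two segments, and a weighted mean is formed. All probabilities and outcome values below are hypothetical.

```python
# Minimal sketch of combined-probability (Horvitz-Thompson style) weighting.
import numpy as np

rng = np.random.default_rng(2)
n = 500
p_main = rng.uniform(0.02, 0.10, n)    # selection probability via the main sample
p_over = rng.uniform(0.10, 0.30, n)    # selection probability via the oversample
y = rng.normal(50, 10, n)              # outcome for sampled special-group members

w = 1.0 / (p_main + p_over)            # combined probability weight per person
ht_mean = np.sum(w * y) / np.sum(w)    # weighted (Hajek-type) estimate of the mean
print(round(ht_mean, 2))
```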

9.
The affine dynamic term structure model (DTSM) is the canonical empirical finance representation of the yield curve. However, the possibility that DTSM estimates may be distorted by small-sample bias has been largely ignored. We show that conventional estimates of DTSM coefficients are indeed severely biased, and this bias results in misleading estimates of expected future short-term interest rates and of long-maturity term premia. We provide a variety of bias-corrected estimates of affine DTSMs, for both maximally flexible and overidentified specifications. Our estimates imply interest rate expectations and term premia that are more plausible from a macrofinance perspective. This article has supplementary material online.

10.
This paper describes an application of small area estimation (SAE) techniques under area-level spatial random effect models when only area-level (district or otherwise aggregated) data are available. In particular, the SAE approach is applied to produce district-level model-based estimates of paddy yield in the state of Uttar Pradesh, India, using data from crop-cutting experiments supervised under the Improvement of Crop Statistics scheme together with secondary data from the Population Census. Diagnostic measures are presented to examine the model assumptions as well as the reliability and validity of the resulting model-based small area estimates. The results show a considerable gain in precision for the model-based estimates produced by SAE. Furthermore, the model-based estimates that exploit spatial information are more efficient than those that ignore it, and both are more efficient than the direct survey estimates. Many districts have no survey data, so direct survey estimates cannot be produced for them, yet the SAE model-based estimates remain reliable there. These estimates will provide valuable information to policy analysts and decision-makers.
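The sketch below is a non-spatial, area-level (Fay-Herriot-type) illustration of the shrinkage idea behind such model-based estimates: direct survey estimates are pulled toward a regression synthetic estimate, with more shrinkage where the sampling variance is large. The data, the single auxiliary variable, and the crude moment estimate of the random effect variance are assumptions; the paper additionally models spatial correlation between districts.

```python
# Minimal non-spatial Fay-Herriot-type sketch with simulated districts.
import numpy as np

rng = np.random.default_rng(3)
m = 40
x = rng.uniform(1, 5, m)                       # census auxiliary variable
D = rng.uniform(0.05, 0.6, m)                  # known sampling variances
theta = 2.0 + 0.8 * x + rng.normal(0, 0.3, m)  # true district-level yields
y = theta + rng.normal(0, np.sqrt(D))          # direct survey estimates

X = np.column_stack([np.ones(m), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]    # fixed effects by least squares
resid = y - X @ beta
A = max((resid**2).mean() - D.mean(), 0.0)     # crude moment estimate of the random effect variance

gamma = A / (A + D)                            # shrinkage factor per district
eblup = gamma * y + (1 - gamma) * (X @ beta)
print("mean squared error, direct:", round(((y - theta) ** 2).mean(), 3))
print("mean squared error, EBLUP: ", round(((eblup - theta) ** 2).mean(), 3))
```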

11.
A method based on estimating the coefficients of a generating function is used to approximate the distribution of the maximum term of a stationary dependent sequence. In a numerical comparison of our approximation with other approximations, our method yielded uniformly closer estimates to the exact distribution. In the examples considered, satisfactory estimates of the distribution were obtained by our method from knowledge of the trivariate distribution of the underlying random sequence. Knowledge of higher-order joint distributions can be incorporated to yield even more accurate estimates.

12.
For constructing simultaneous confidence intervals for ratios of means of lognormal distributions, two approaches using a two-step method of variance estimates recovery are proposed. The first approach uses fiducial generalized confidence intervals (FGCIs) in the first step followed by the method of variance estimates recovery (MOVER) in the second step (FGCIs–MOVER). The second approach uses MOVER in both steps (MOVER–MOVER). The performance of the proposed approaches is compared with that of simultaneous fiducial generalized confidence intervals (SFGCIs). Monte Carlo simulation is used to evaluate these approaches in terms of coverage probability, average interval width, and computing time.
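As an illustration only (not the authors' code), the sketch below applies the two-step MOVER idea to a single ratio of lognormal means: step 1 builds a MOVER confidence interval for each lognormal mean exp(mu + sigma^2/2), and step 2 combines the two intervals into an interval for their ratio with the standard MOVER formula for ratios. The synthetic data and the 95% level are assumptions; the paper treats several ratios simultaneously.

```python
# Minimal MOVER-MOVER sketch for one ratio of lognormal means.
import numpy as np
from scipy import stats

def lognormal_mean_ci(x, alpha=0.05):
    """MOVER CI for exp(mu + sigma^2/2) from a lognormal sample x."""
    logx = np.log(x)
    n, xbar, s2 = len(logx), logx.mean(), logx.var(ddof=1)
    theta = xbar + s2 / 2
    # Separate CIs for mu (t-based) and for sigma^2/2 (chi-square-based).
    half = stats.t.ppf(1 - alpha / 2, n - 1) * np.sqrt(s2 / n)
    l_mu, u_mu = xbar - half, xbar + half
    l_v = (n - 1) * s2 / (2 * stats.chi2.ppf(1 - alpha / 2, n - 1))
    u_v = (n - 1) * s2 / (2 * stats.chi2.ppf(alpha / 2, n - 1))
    # MOVER for the sum mu + sigma^2/2, then exponentiate.
    low = theta - np.sqrt((xbar - l_mu) ** 2 + (s2 / 2 - l_v) ** 2)
    upp = theta + np.sqrt((u_mu - xbar) ** 2 + (u_v - s2 / 2) ** 2)
    return np.exp(theta), np.exp(low), np.exp(upp)

def mover_ratio(t1, l1, u1, t2, l2, u2):
    """MOVER CI for theta1/theta2 from point estimates and individual CIs."""
    low = ((t1 * t2 - np.sqrt((t1 * t2) ** 2 - l1 * u2 * (2 * t1 - l1) * (2 * t2 - u2)))
           / (u2 * (2 * t2 - u2)))
    upp = ((t1 * t2 + np.sqrt((t1 * t2) ** 2 - u1 * l2 * (2 * t1 - u1) * (2 * t2 - l2)))
           / (l2 * (2 * t2 - l2)))
    return low, upp

rng = np.random.default_rng(4)
x1 = rng.lognormal(mean=1.0, sigma=0.6, size=40)
x2 = rng.lognormal(mean=0.8, sigma=0.5, size=35)
t1, l1, u1 = lognormal_mean_ci(x1)
t2, l2, u2 = lognormal_mean_ci(x2)
print("ratio estimate:", round(t1 / t2, 3),
      "95% CI:", [round(v, 3) for v in mover_ratio(t1, l1, u1, t2, l2, u2)])
```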

13.
"Population estimates from the 1990 Post-Enumeration Survey (PES), used to measure decennial census undercount, were obtained from dual system estimates (DSE's) that assumed independence within strata defined by age-race-sex-geography and other variables. We make this independence assumption for females, but develop methods to avoid the independence assumption for males within strata by using national level sex ratios from demographic analysis (DA).... We consider several...alternative DSE's, and use DA results for 1990 to apply them to data from the 1990 U.S. census and PES."

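For item 13, the sketch below shows only the textbook dual system estimator that the article starts from: under independence of census and PES capture within a stratum, the population is estimated as the census count times the PES count divided by the matched count. The counts are hypothetical, and the article's refinement, which replaces the independence assumption for males using demographic-analysis sex ratios, is not reproduced.

```python
# Minimal capture-recapture (Chandrasekar-Deming) dual system estimate for one stratum.
def dual_system_estimate(n_census: float, n_pes: float, n_matched: float) -> float:
    return n_census * n_pes / n_matched

stratum = {"n_census": 9500, "n_pes": 1200, "n_matched": 1100}  # hypothetical counts
dse = dual_system_estimate(**stratum)
undercount_rate = 1 - stratum["n_census"] / dse
print(f"DSE = {dse:.0f}, implied net undercount = {undercount_rate:.1%}")
```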
14.
A blockwise shrinkage is a popular adaptive procedure for non-parametric series estimates. It possesses an impressive range of asymptotic properties, and a vast pool of blocks and shrinkage procedures is in use. Traditionally these estimates are studied via upper bounds on their risks. This article suggests studying these adaptive estimates via non-asymptotic lower bounds established for a spike underlying function, which plays a pivotal role in wavelet and minimax statistics. While upper-bound inequalities help the statistician find sufficient conditions for desirable estimation, the non-asymptotic lower bounds yield necessary conditions and shed new light on the popular method of adaptation. The suggested method complements and knits together two traditional techniques used in the analysis of adaptive estimates: numerical study and asymptotic minimax inference.

15.
We consider designs for which the treatment association matrices for the row design and for the column design commute. For these designs it is shown that the usual procedures of combined estimation yield unbiased estimates of treatment differences. For an important special class of designs, a procedure of combined estimation is proposed which assures improvement over the estimates obtained from the interaction analysis.

16.
Cubic spline smoothing of hazard rate functions is evaluated through a simulation study. The smoothing algorithm requires unsmoothed time-point estimates of the hazard rate, variances of those estimators, and a smoothing parameter. Two unsmoothed estimators (Kaplan–Meier based and Nelson–Aalen based) were compared, along with variations in the number of time-point estimates supplied to the algorithm. A cross-validated likelihood approach automated the selection of the smoothing parameter and the number of time points. The results indicate that, for a simple hazard shape, a wide range of smoothing parameter values and numbers of time points yields mean squared errors not much larger than those of parametric maximum likelihood estimators. For peaked hazards, however, it seems advisable to use the cross-validated likelihood approach in order to avoid oversmoothing.
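The sketch below is not the paper's simulation; it only illustrates the inputs the smoothing algorithm needs by computing crude occurrence/exposure hazard estimates on a time grid and smoothing them with a weighted cubic smoothing spline. The Weibull data, the bin width, and the fixed smoothing parameter s are assumptions; the paper selects the smoothing parameter and the number of time points by cross-validated likelihood.

```python
# Minimal sketch: unsmoothed hazard estimates + weighted cubic spline smoothing.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(5)
t = rng.weibull(1.5, 2000) * 10.0                 # event times (no censoring here)
bins = np.linspace(0, np.quantile(t, 0.95), 21)   # 20 time intervals
mid = (bins[:-1] + bins[1:]) / 2

deaths, exposure = np.zeros(20), np.zeros(20)
for lo, hi, k in zip(bins[:-1], bins[1:], range(20)):
    at_risk = t >= lo
    deaths[k] = np.sum((t >= lo) & (t < hi))
    exposure[k] = np.sum(np.clip(t[at_risk], lo, hi) - lo)   # person-time in the bin

hazard = deaths / exposure                        # unsmoothed time-point estimates
var = deaths / exposure**2                        # crude variance of each estimate
weights = 1.0 / np.sqrt(var + 1e-12)              # spline weights ~ 1 / standard error

spline = UnivariateSpline(mid, hazard, w=weights, k=3, s=len(mid))
true_hazard = (1.5 / 10.0) * (mid / 10.0) ** 0.5  # Weibull hazard, for comparison
print(np.round(spline(mid), 3))
print(np.round(true_hazard, 3))
```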

17.
We consider methods for analysing matched case–control data when some covariates (W) are completely observed but other covariates (X) are missing for some subjects. In matched case–control studies, the complete-record analysis discards completely observed subjects if none of their matching cases or controls are completely observed. We investigate an imputation estimate obtained by solving a joint estimating equation for log-odds ratios of disease and parameters in an imputation model. Imputation estimates for coefficients of W are shown to have smaller bias and mean-square error than do estimates from the complete-record analysis.

18.
This article introduces principal component analysis for multidimensional sparse functional data, using Gaussian basis functions. Our multidimensional model is estimated by maximizing a penalized log-likelihood function, whereas previous mixed-type models were estimated by maximum likelihood methods for one-dimensional data. The penalized estimation performs well for our multidimensional model, while maximum likelihood yields unstable parameter estimates, some of which are infinite. Numerical experiments are conducted to investigate the effectiveness of the method for several types of missing data. The proposed method is applied to handwriting data consisting of the XY coordinate values of handwriting samples.

19.
Conventional approaches to inference about efficiency in parametric stochastic frontier (PSF) models are based on percentiles of the estimated distribution of the one-sided error term, conditional on the composite error. When these percentiles are used as prediction intervals, coverage is poor when the signal-to-noise ratio is low, and it improves only slowly as the sample size increases. We show that prediction intervals estimated by bagging yield much better coverage than the conventional approach, even with low signal-to-noise ratios. We also present a bootstrap method that gives confidence interval estimates for (conditional) expectations of efficiency, with good coverage properties that improve with sample size. In addition, researchers who estimate PSF models typically reject models, samples, or both when the residuals have skewness in the "wrong" direction, i.e., in a direction that would seem to indicate an absence of inefficiency. We show that correctly specified models can generate samples with "wrongly" skewed residuals, even when the variance of the inefficiency process is nonzero. Both our bagging and bootstrap methods provide useful information about inefficiency and model parameters irrespective of whether the residuals have skewness in the desired direction.
