Similar Literature
20 similar documents retrieved.
1.
Nonlinear structural equation modeling provides many advantages over analyses based on manifest variables only. Several approaches for the analysis of latent interaction effects have been developed over the last 15 years, including the partial least squares product indicator approach (PLS-PI), the constrained product indicator approach using the LISREL software (LISREL-PI), and the distribution-analytic latent moderated structural equations approach (LMS) using the Mplus program. An assumed advantage of PLS-PI is that it can handle very large numbers of indicators, whereas LISREL-PI and LMS have not been investigated under such conditions. In a Monte Carlo study, the performance of LISREL-PI and LMS was compared to PLS-PI results previously reported by Chin et al. (2003) and Goodhue et al. (2007) for identical conditions. The latent interaction model included six indicator variables for the measurement of each latent predictor variable and the latent criterion, and the sample size was N = 100. The results showed that PLS-PI's linear and interaction parameter estimates were downward biased, whereas the estimates from LISREL-PI and LMS were unbiased. True standard errors were smallest for PLS-PI, while the power to detect the latent interaction effect was higher for LISREL-PI and LMS. In contrast to the symmetric distributions of interaction parameter estimates for LISREL-PI and LMS, PLS-PI showed a distribution that was symmetric for positive values but included outlying negative estimates. Possible explanations for these findings are discussed.
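The product-indicator approaches named above share one construction step: each indicator of one latent predictor is paired with an indicator of the other to form measurements of the latent interaction term. A minimal sketch of that step with hypothetical simulated indicators (not the full PLS-PI or LISREL-PI estimation):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100                              # sample size used in the study
x = rng.normal(size=(n, 6))          # six indicators of latent predictor xi (hypothetical data)
z = rng.normal(size=(n, 6))          # six indicators of latent moderator zeta

# Mean-center indicators, then form matched product indicators x1*z1, ..., x6*z6.
xc = x - x.mean(axis=0)
zc = z - z.mean(axis=0)
products = xc * zc                   # n x 6 matrix of product indicators

# These product indicators would serve as measurements of the latent
# interaction term xi*zeta in a PLS-PI or LISREL-PI style model.
print(products.shape)
```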

2.
Despite the popularity of high dimension, low sample size data analysis, little attention has been paid to sample integrity, in particular the possibility of outliers in the data. A new outlier detection procedure for data whose dimensionality is much larger than the sample size is presented. The proposed method is motivated by asymptotic properties of high-dimensional distance measures. Empirical studies suggest that high-dimensional outlier detection is more likely to suffer from a swamping effect than a masking effect, and thus yields more false positives than false negatives. We compare the proposed approach with existing methods using simulated data from various population settings. A real data example is presented, with consideration of the implications of the outliers found.
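The abstract does not detail the procedure, but a simple diagnostic in the same spirit illustrates how high-dimensional distance concentration can be exploited: pairwise distances among n << p Gaussian points concentrate tightly, so a point with an inflated average distance stands out. The threshold rule below is illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 1000                       # n << p: high dimension, low sample size
X = rng.normal(size=(n, p))
X[0] += 2.0                           # shift one observation to act as an outlier

# Pairwise Euclidean distances; in high dimension these concentrate tightly,
# so a point whose average distance to the others is inflated stands out.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
avg_dist = D.sum(axis=1) / (n - 1)

# Flag points whose average distance exceeds a robust threshold (illustrative rule).
med = np.median(avg_dist)
mad = np.median(np.abs(avg_dist - med))
flags = avg_dist > med + 5 * 1.4826 * mad
print(np.where(flags)[0])
```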

3.
Applying spatiotemporal scan statistics is an effective way to detect clusters of mean shifts in many application fields. Although several scan statistics based on the exponentially weighted moving average (EWMA) have been proposed, existing methods generally require a fixed scan window size or apply the weighting technique across the temporal axis only. In practice, however, the size of the shift coverage is often unknown, and a mismatched scan radius may misestimate the spatial extent of the cluster or delay detection. This research proposes an stEWMA method that applies the weighting technique across both the temporal and spatial axes with a variable scan radius. The simulation analysis showed that the stEWMA method can achieve a significantly shorter time to detection than the likelihood ratio-based scan statistic with a variable scan radius, especially when the cluster coverage is small. An application to detecting an increase in male thyroid cancer in the state of New Mexico also demonstrated the effectiveness of the proposed method.
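The temporal half of any EWMA-based scan rests on the standard EWMA recursion; the sketch below shows only that univariate core (the stEWMA method additionally weights across space with a variable radius). The smoothing constant and control-limit multiplier are illustrative choices:

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0):
    """Univariate EWMA control chart: Z_t = lam*x_t + (1-lam)*Z_{t-1}.

    Returns the EWMA statistics and the first index signalling a mean shift.
    Assumes in-control mean 0 and variance 1 (standardize beforehand).
    """
    z, signal = np.empty(len(x)), None
    prev = 0.0
    for t, xt in enumerate(x):
        prev = lam * xt + (1 - lam) * prev
        z[t] = prev
        # time-varying control limit for the EWMA statistic
        limit = L * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
        if signal is None and abs(prev) > limit:
            signal = t
    return z, signal

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(1.0, 1, 50)])  # shift at t=50
_, first_alarm = ewma_chart(x)
print("first alarm at t =", first_alarm)
```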

4.
Adaptive sample size redetermination (SSR) for clinical trials consists of examining early subsets of on-trial data to adjust prior estimates of statistical parameters and sample size requirements. Blinded SSR, in particular, while already in use, seems poised to proliferate even further because it obviates many logistical complications of unblinded methods and generally introduces little or no statistical or operational bias. On the other hand, current blinded SSR methods offer little to no new information about the treatment effect (TE); the obvious resulting problem is that the TE estimate scientists might simply 'plug in' to the sample size formulae could be severely wrong. This paper proposes a blinded SSR method that formally synthesizes sample data with prior knowledge about the TE and the within-treatment variance. The method is evaluated in terms of the type I error rate, the bias of the estimated TE, and the average deviation from the targeted power, and is shown to reduce this average deviation, in comparison with another established method, over a range of situations. The paper illustrates the use of the proposed method with an example.
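To see why a badly wrong plug-in TE estimate matters, consider the standard two-sample normal sample size formula that SSR methods feed; this is the textbook calculation, not the paper's Bayesian synthesis:

```python
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha=0.05, power=0.9):
    """Per-arm sample size for a two-sided two-sample z-test:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sigma / delta) ** 2

# Halving the assumed treatment effect quadruples the required sample size,
# which is why a severely wrong plug-in TE estimate is so damaging.
print(n_per_arm(delta=0.5, sigma=1.0))   # ~84 per arm
print(n_per_arm(delta=0.25, sigma=1.0))  # ~337 per arm
```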

5.
Adaptive sample size adjustment (SSA) for clinical trials consists of examining early subsets of on-trial data to adjust estimates of sample size requirements. Blinded SSA is often preferred over unblinded SSA because it obviates many logistical complications of the latter and generally introduces less bias. On the other hand, current blinded SSA methods for binary data offer little to no new information about the treatment effect, ignore uncertainties associated with the population treatment proportions, and/or depend on enhanced randomization schemes that risk partial unblinding. I propose an innovative blinded SSA method for use when the primary analysis is a non-inferiority or superiority test regarding a risk difference. The method incorporates evidence about the treatment effect via the likelihood function of a mixture distribution. I compare the new method with an established one and with the fixed sample size design in terms of the maximization of an expected utility function. The new method maximizes the expected utility better than the comparators under a range of assumptions. I illustrate the use of the proposed method with an example that incorporates a Bayesian hierarchical model. Lastly, I suggest topics for future study of the proposed methods.

6.
A study is presented on the robustness of adapting the sample size of a phase III trial on the basis of existing phase II data when the phase III effect size is lower than that of phase II. A criterion of clinical relevance for phase II results is applied in deciding whether to launch phase III, and phase II data cannot be included in the phase III statistical analysis. The adaptation consists in adopting a conservative approach to sample size estimation that takes into account the variability of the phase II data. Several conservative sample size estimation strategies, Bayesian and frequentist, are compared with the calibrated optimal γ conservative strategy (COS), which is the best performer when the phase II and phase III effect sizes are equal. The overall power (OP) of these strategies and the mean square error (MSE) of their sample size estimators are computed under different scenarios, in the presence of the structural bias caused by the lower phase III effect size, in order to evaluate the robustness of the strategies. When the structural bias is fairly small (i.e., the ratio of the phase III to the phase II effect size is greater than 0.8) and certain operating conditions for sample size estimation hold, COS can still provide acceptable results for planning phase III trials, even though the OP was higher in the absence of bias.

The main results concern the introduction of a correction that affects only the sample size estimates, not the launch probabilities, in order to balance the structural bias. In particular, the correction is based on a postulated value of the structural bias; it is therefore more intuitive and easier to use than corrections based on modifying the Type I and/or Type II error rates. Corrected conservative sample size estimation strategies are compared in the presence of a fairly small bias. When the postulated correction is right, COS provides good OP and the lowest MSE; indeed, the OPs of COS are even higher than those observed without bias, thanks to a higher launch probability and similar estimation performance. The structural bias can therefore be exploited to improve sample size estimation. When the postulated correction is smaller than necessary, COS remains the best performer and still works well. A larger-than-necessary correction should be avoided.

7.
A meta-analytic estimator of the proportion of positives in a sequence of screening experiments is proposed. The distribution-free estimator is based on the empirical distribution of P-values from the individual experiments, which is uniform under the global null hypothesis of no positives in the sequence of experiments performed. Under certain regularity conditions, the proportion of positives corresponds to the derivative of this distribution under the alternative hypothesis that some positives exist. The statistical properties of the estimator are established, including its bias, variance, and rate of convergence to normality. Optimal estimators with minimum mean squared error are also developed under specific alternative hypotheses. The application of the proposed methods is illustrated using data from a sequence of screening experiments on chemicals to determine their carcinogenic potential.
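The paper's derivative-based estimator is not reproduced here, but a closely related standard estimator (Storey-type) rests on the same fact that P-values are Uniform(0,1) under the null; a sketch:

```python
import numpy as np

def prop_positives(pvals, lam=0.5):
    """Storey-type estimate of the proportion of true positives.

    Under the null, P-values are Uniform(0,1), so the empirical density
    above `lam` estimates the null proportion pi0; 1 - pi0 estimates
    the proportion of positives.
    """
    pvals = np.asarray(pvals)
    pi0 = np.mean(pvals > lam) / (1 - lam)
    return max(0.0, 1.0 - min(1.0, pi0))

rng = np.random.default_rng(3)
# 80% null (uniform) p-values mixed with 20% positives (concentrated near 0)
p = np.concatenate([rng.uniform(size=800), rng.beta(0.2, 5, size=200)])
print(prop_positives(p))   # roughly 0.2
```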

8.
An alternative to using acceptance-rejection methods to generate a sample of points distributed uniformly over a region is to employ a random walk over that region. The sequence of points generated by a random walk has been shown, under certain easily satisfied conditions, to be a realization of a vector-valued, discrete-parameter Markov process whose limiting distribution is uniform. The purpose of this paper is to point out that even if the marginal distribution of each point is actually uniform, rather than merely asymptotically uniform, small samples may exhibit nonuniform characteristics due to serial autocorrelation within the sample.
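A minimal illustration of the phenomenon: a reflected Gaussian random walk on the unit square has the uniform distribution as its stationary law (the kernel is symmetric), yet consecutive points are strongly dependent. The step size is an illustrative choice:

```python
import numpy as np

def reflect(u):
    """Fold a real number into [0, 1] by reflection at the boundaries."""
    u = np.abs(u) % 2.0
    return np.where(u > 1.0, 2.0 - u, u)

rng = np.random.default_rng(4)
n, step = 500, 0.1
pts = np.empty((n, 2))
x = rng.uniform(size=2)              # start anywhere in the unit square
for i in range(n):
    x = reflect(x + rng.normal(scale=step, size=2))  # symmetric kernel: uniform is stationary
    pts[i] = x

# Each point is (asymptotically) uniform marginally, yet consecutive points
# are strongly dependent: lag-1 autocorrelation of the first coordinate.
c = np.corrcoef(pts[:-1, 0], pts[1:, 0])[0, 1]
print("lag-1 autocorrelation:", round(c, 3))   # close to 1 for small steps
```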

9.
We consider the problem of numerically approximating the quantiles of a sample statistic for a given population, a problem of interest in many applications such as bootstrap confidence intervals. The proposed Monte Carlo method can be routinely applied to complex problems that lack analytical results. Furthermore, the method yields quantile estimates for a sample statistic at any sample size, even though Monte Carlo simulations are required for only two optimally selected sample sizes. An analysis of the Monte Carlo design is performed to obtain the optimal choices of these two sample sizes and the number of simulated samples required for each. Theoretical results are presented for the bias and variance of the proposed numerical method. The results are illustrated via simulation studies of the classical problem of estimating a bivariate linear structural relationship. The simulated samples used in the Monte Carlo method do not have to be very large, and the method provides a better approximation to quantiles than approximations based on asymptotic normal theory when the sampling distribution is skewed.
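The paper's optimal design is not reproduced here; the sketch below conveys the general idea under the assumption (mine, not the paper's) that the quantile admits an expansion q(n) ≈ a + b/√n, so that simulating at two sample sizes suffices to predict quantiles at any other size:

```python
import numpy as np

rng = np.random.default_rng(5)

def mc_quantile(stat, n, reps=20000, q=0.95):
    """Monte Carlo estimate of the q-quantile of a statistic at sample size n."""
    return np.quantile([stat(rng.exponential(size=n)) for _ in range(reps)], q)

stat = np.mean                        # example statistic: the sample mean
n1, n2 = 20, 80                       # the two simulated sample sizes
q1, q2 = mc_quantile(stat, n1), mc_quantile(stat, n2)

# Assume q(n) ~ a + b / sqrt(n); solve for (a, b) from the two simulated
# sizes, then predict the quantile at any other sample size.
b = (q1 - q2) / (1 / np.sqrt(n1) - 1 / np.sqrt(n2))
a = q1 - b / np.sqrt(n1)
q40_pred = a + b / np.sqrt(40)
print(q40_pred, mc_quantile(stat, 40))  # prediction vs. direct simulation
```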

10.
The extent of bias due to measurement errors is an important problem in the context of regression and survival analysis. While research in these areas has been extensive and fruitful, investigations into the effect of measurement errors in capture–recapture models have been very limited. The contributions of this paper are to understand the effects of measurement errors in continuous-time capture–recapture models and to propose new methods to circumvent their impact. We derive asymptotic variance formulas for each method and assess their finite-sample properties via simulation studies.

11.
This paper compares methods of estimation for the parameters of a Pareto distribution of the first kind to determine which method provides the better estimates when the observations are censored. The unweighted least squares (LS) estimates and the maximum likelihood estimates (MLEs) are presented for both censored and uncensored data. The MLEs are obtained using two methods. In the first, called the ML method, it is shown that the log-likelihood is maximized when the scale parameter equals the minimum sample value. In the second, called the modified ML (MML) method, the estimates are found by combining the maximum likelihood estimate of the shape parameter, expressed in terms of the scale parameter, with the equation for the mean of the first order statistic as a function of both parameters. Since censored data often occur in applications, we study two types of censoring for their effects on the methods of estimation: Type II censoring and multiple random censoring. We consider different sample sizes and several values of the true shape and scale parameters.

Comparisons are made in terms of the bias and mean squared error of the estimates. We propose that the LS method be generally preferred over the ML and MML methods for estimating the Pareto parameter γ for all sample sizes, all parameter values, and both complete and censored samples. In many cases, however, the ML estimates are comparable in efficiency, so either estimator can be used effectively. For estimating the parameter α, the LS method is also generally preferred for smaller values of the parameter (α ≤ 4). For larger parameter values and for censored samples, the MML method appears superior to the other methods, with a slight advantage over the LS method. For larger values of α, underestimation can be a problem for censored samples under all methods.
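For the uncensored case, the ML estimates described above have closed forms; a worked sketch (parameter names are generic, since the abstract's γ/α labeling of shape and scale is not spelled out):

```python
import numpy as np

def pareto_mle(x):
    """ML estimates for a Pareto distribution of the first kind,
    f(x) = a * c**a / x**(a + 1) for x >= c (complete, uncensored sample).

    The likelihood is increasing in c for c <= min(x), so c_hat = min(x);
    the shape estimate then has the closed form a_hat = n / sum(log(x / c_hat)).
    """
    x = np.asarray(x, dtype=float)
    c_hat = x.min()
    a_hat = len(x) / np.log(x / c_hat).sum()
    return a_hat, c_hat

rng = np.random.default_rng(6)
sample = 2.0 * (1 + rng.pareto(3.0, size=200))   # Pareto with shape 3, scale 2
print(pareto_mle(sample))                         # approximately (3, 2)
```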

12.
Measurement error, the difference between the measured (observed) value of a quantity and its true value, is perceived as a possible source of estimation bias in many surveys. To correct for such bias, a validation sample can be used in addition to the original sample to adjust for measurement error. Depending on the type of validation sample, either an internal calibration approach or an external calibration approach can be used. Motivated by the Korean Longitudinal Study of Aging (KLoSA), we propose a novel application of fractional imputation to correct for measurement error in the analysis of survey data. The proposed method creates imputed values of the unobserved true variables, which are mismeasured in the main study, using the validation subsample. Furthermore, the method is directly applicable when the measurement error model is a mixture distribution. Variance estimation using Taylor linearization is developed, and results from a limited simulation study are presented.
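A heavily simplified sketch of the imputation idea, assuming a normal calibration model for the true value given its error-prone measurement, fit on the validation subsample (the paper's procedure, including the mixture error model and Taylor-linearized variances, is more general):

```python
import numpy as np

rng = np.random.default_rng(7)

# Validation subsample: both the error-prone W and the true X are observed.
n_val = 300
x_val = rng.normal(2.0, 1.0, n_val)
w_val = x_val + rng.normal(0.0, 0.5, n_val)      # classical measurement error

# Fit the calibration model X | W ~ N(b0 + b1*W, s^2) on the validation data.
b1, b0 = np.polyfit(w_val, x_val, 1)
s = np.std(x_val - (b0 + b1 * w_val), ddof=2)

# Main sample: only W is observed; create M fractional imputations per unit,
# each carrying weight 1/M, drawn from the fitted predictive distribution.
w_main = rng.normal(2.0, 1.0, 1000) + rng.normal(0.0, 0.5, 1000)
M = 10
x_imp = b0 + b1 * w_main[:, None] + rng.normal(0.0, s, (len(w_main), M))
weights = np.full_like(x_imp, 1.0 / M)

# Downstream analyses then use the (x_imp, weights) pairs in weighted
# estimating equations instead of the mismeasured W.
print(np.average(x_imp, weights=weights))        # close to E[X] = 2
```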

13.
Recently, several new robust multivariate estimators of location and scatter have been proposed that provide new and improved methods for detecting multivariate outliers. For small sample sizes, however, there are no results on how these new outlier detection techniques compare in terms of p_n, their outside rate per observation (the expected proportion of points declared outliers) under normality, nor on their ability to detect truly unusual points relative to the model that generated the data. Moreover, there are no results comparing these methods with two fairly new techniques that do not rely on a robust covariance matrix. It is found that for an approach based on the orthogonal Gnanadesikan–Kettenring estimator, p_n can be very unsatisfactory with small sample sizes, but a simple modification gives much more satisfactory results. Similar problems were found with the median ball algorithm, but in this case a modification proved unsatisfactory. The translated-biweight (TBS) estimator generally performs well with sample sizes n ≥ 20 and p-variate data with p ≤ 5, but with p = 8 it can be unsatisfactory even with n = 200. A projection method as well as the minimum generalized variance method generally perform best, but conditions with p ≤ 5 under which the TBS method is preferable are described. In terms of detecting truly unusual points, the methods can differ substantially depending on where the outliers happen to be, the number of outliers present, and the correlations among the variables.
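A sketch of how an outside rate per observation p_n can be estimated by simulation, using MCD-based robust distances as the detector (scikit-learn's MinCovDet; not one of the exact estimators compared in the paper):

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

def outside_rate(n, p, reps=200, level=0.975, seed=0):
    """Monte Carlo estimate of p_n, the expected proportion of N(0, I)
    observations declared outliers by MCD-based robust distances."""
    rng = np.random.default_rng(seed)
    cut = chi2.ppf(level, df=p)
    rates = []
    for _ in range(reps):
        X = rng.normal(size=(n, p))
        d2 = MinCovDet(random_state=0).fit(X).mahalanobis(X)  # squared robust distances
        rates.append(np.mean(d2 > cut))
    return np.mean(rates)

# Nominal outside rate is 2.5%; with small n the realized rate is inflated.
print(outside_rate(n=20, p=5))
print(outside_rate(n=200, p=5))
```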

14.
Restricted factor analysis can be used to investigate measurement bias. A prerequisite for detecting measurement bias through factor analysis is the correct specification of the measurement model. We applied restricted factor analysis to two subtests of a Dutch cognitive ability test. The two examples serve to illustrate the relationship between multidimensionality and measurement bias. We conclude that measurement bias implies multidimensionality, whereas multidimensionality shows up as measurement bias only if it is not properly accounted for in the measurement model.

15.
Non-parametric estimators of interquantile differences in moderate-sized samples are identified, including a simple method based on sample quantiles, an extension of the Harrell and Davis (1982) method, an extension of the Kaigh and Lachenbruch (1982) approach, and methods based on Yang (1985). For the estimators compared via a simulation study (Harrell-Davis and the sample quantile method), using relative bias and relative efficiency as judgment criteria, the Harrell-Davis method is judged preferable for interquantile estimation in data sets of moderate size.
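The Harrell-Davis estimator itself is a Beta-weighted average of all order statistics, so an interquantile difference is simply the difference of two such estimates:

```python
import numpy as np
from scipy.stats import beta

def harrell_davis(x, q):
    """Harrell-Davis estimator of the q-quantile: a weighted mean of the
    order statistics with Beta((n+1)q, (n+1)(1-q)) probability weights."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    a, b = (n + 1) * q, (n + 1) * (1 - q)
    cdf = beta.cdf(np.arange(n + 1) / n, a, b)
    w = np.diff(cdf)                  # w_i = I_{i/n}(a, b) - I_{(i-1)/n}(a, b)
    return np.dot(w, x)

rng = np.random.default_rng(8)
x = rng.normal(size=50)
# Interquartile difference estimated by Harrell-Davis vs. sample quantiles.
print(harrell_davis(x, 0.75) - harrell_davis(x, 0.25))
print(np.quantile(x, 0.75) - np.quantile(x, 0.25))
```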

16.
Count data with structural zeros are common in public health applications. Considerable research has focused on zero-inflated models, such as the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models, for zero-inflated count data used as a response variable. When such variables are used as predictors, however, the difference between structural and random zeros is often ignored, which may result in biased estimates. One remedy is to include an indicator of the structural zero in the model as a predictor, if it is observed. In practice, however, structural zeros are often not observed, in which case no statistical method has been available to address the bias. This paper aims to fill this methodological gap by developing parametric methods, based on the maximum likelihood approach, to model zero-inflated count data used as predictors. The response variable can be of any type, including continuous, binary, count, or even zero-inflated count responses. Simulation studies assess the numerical performance of the new approach when the sample size is small to moderate, and a real data example demonstrates its application.
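The ambiguity the paper addresses comes from the ZIP mixture: an observed zero may be structural (probability π) or a random Poisson zero, and this latent status is unobserved. A short numerical illustration:

```python
from scipy.stats import poisson

def zip_pmf(k, pi, lam):
    """Zero-inflated Poisson: P(Y=0) = pi + (1-pi)*exp(-lam),
    P(Y=k) = (1-pi)*Poisson(lam).pmf(k) for k >= 1."""
    return (1 - pi) * poisson.pmf(k, lam) + pi * (k == 0)

pi, lam = 0.3, 2.0
p0 = zip_pmf(0, pi, lam)
# Given an observed zero, the probability that it is structural rather than
# a random Poisson zero -- exactly the latent status unobserved in practice:
print(pi / p0)
```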

17.
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for them. In contrast, error-contaminated survival data under the additive hazards model have received relatively little attention. In this paper, we investigate this problem by exploring the effects of measurement error on parameter estimation and on the hazard function. New insights into measurement error effects are revealed, as opposed to the well-documented results for the Cox proportional hazards model. We propose a class of bias-correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate their finite sample performance.
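Regression calibration, mentioned above, is the classical substitution of E[X | W] for the mismeasured covariate; under a normal classical error model with known error variance it reduces to linear shrinkage toward the mean. A sketch (σ_u² assumed known for illustration):

```python
import numpy as np

def regression_calibration(w, sigma_u2):
    """Replace an error-prone covariate W = X + U (classical error with known
    error variance sigma_u2) by E[X | W] = mu + k*(W - mu), where
    k = sigma_x^2 / (sigma_x^2 + sigma_u^2) is the reliability ratio."""
    mu = w.mean()
    sigma_w2 = w.var(ddof=1)
    sigma_x2 = max(sigma_w2 - sigma_u2, 1e-12)   # Var(W) = Var(X) + Var(U)
    return mu + (sigma_x2 / sigma_w2) * (w - mu)

rng = np.random.default_rng(9)
x = rng.normal(1.0, 1.0, 5000)
w = x + rng.normal(0.0, 0.5, 5000)
x_cal = regression_calibration(w, sigma_u2=0.25)
# The calibrated covariate would then enter the additive hazards fit in place of W.
print(np.corrcoef(x, x_cal)[0, 1])
```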

18.
In this article, we introduce two monitoring schemes to sequentially detect structural changes in generalized linear models and develop their asymptotic theory. The first method is based on cumulative sums (CUSUM) of weighted residuals, in which the unknown in-control parameters are replaced by their maximum likelihood (ML) estimates from the training sample, whereas the second scheme makes use of moving sums (MOSUM) of weighted residuals. We characterize the limit distribution of the test statistics and show that the tests are consistent. Moreover, we obtain and tabulate the asymptotic critical values of the tests. Finally, we study the speed of detection under different conditions. The methods are illustrated and compared in several simulations.
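A sketch of the CUSUM scheme's structure for a logistic GLM: estimate the in-control parameters on the training sample, then cumulate standardized residuals of incoming observations. The boundary below is an illustrative placeholder, not the asymptotic critical values derived in the article:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)

# Training sample: logistic regression with in-control coefficients (1, -1).
n0 = 500
X0 = sm.add_constant(rng.normal(size=n0))
y0 = rng.binomial(1, 1 / (1 + np.exp(-(X0 @ [1.0, -1.0]))))
fit = sm.GLM(y0, X0, family=sm.families.Binomial()).fit()

# Monitoring stream: after observation 100 the slope changes (structural break).
m = 200
Xm = sm.add_constant(rng.normal(size=m))
beta = np.where(np.arange(m)[:, None] < 100, [1.0, -1.0], [1.0, 1.0])
ym = rng.binomial(1, 1 / (1 + np.exp(-np.sum(Xm * beta, axis=1))))

# CUSUM of standardized (Pearson) residuals under the trained model.
p_hat = fit.predict(Xm)
resid = (ym - p_hat) / np.sqrt(p_hat * (1 - p_hat))
cusum = np.cumsum(resid)
boundary = 3.0 * np.sqrt(np.arange(1, m + 1))   # illustrative, not the paper's critical values
alarms = np.where(np.abs(cusum) > boundary)[0]
print("first alarm at monitoring index:", alarms[0] if len(alarms) else None)
```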

19.
The authors discuss the bias of the estimated variance of an overall effect synthesized from individual studies by the variance-weighted method. This bias is proven to be negative. Furthermore, the conditions for, likelihood of, and magnitude of underestimation from this conventional estimate are studied under the assumption that the effect estimates follow normal distributions with a common mean. The likelihood of underestimation is very high (e.g., greater than 85% when the sample sizes of the two combined studies are less than 120). Alternative, less biased estimates for the cases with and without homogeneity of the variances are given in order to adjust for the sample size and the variation of the population variance. In addition, the sample-size-weighted method is suggested when the consistency of the sample variances is violated. Finally, a real example is presented to show the differences among the three estimation methods.
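The conventional variance-weighted synthesis under discussion combines study effects with inverse-variance weights; the formulas are standard, and the paper's point is that the resulting variance estimate is too small when the within-study variances are themselves estimated:

```python
import numpy as np

def variance_weighted(effects, variances):
    """Conventional inverse-variance synthesis:
    theta_hat = sum(w_i * theta_i) / sum(w_i),  w_i = 1 / v_i,
    with the usual variance estimate Var(theta_hat) = 1 / sum(w_i).

    When the v_i are estimated from small samples, this variance estimate
    tends to be too small (the negative bias the paper studies).
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    theta = np.sum(w * np.asarray(effects)) / np.sum(w)
    return theta, 1.0 / np.sum(w)

effects = np.array([0.42, 0.31, 0.55])
variances = np.array([0.04, 0.09, 0.06])   # estimated within-study variances
print(variance_weighted(effects, variances))
```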

20.
In earlier work [Kirchner, An estimation procedure for the Hawkes process. Quant Financ. 2017;17(4):571–595], we introduced a nonparametric estimation method for the Hawkes point process. In this paper, we present a simulation study that compares this nonparametric method to maximum likelihood estimation (MLE). We find that the standard deviations of both estimation methods decrease as power laws of the sample size, and that the standard deviations are proportional: for one specific Hawkes model, for example, the standard deviation of the branching coefficient estimate is roughly 20% larger than for MLE across all sample sizes considered, and this factor becomes smaller as the true underlying branching coefficient grows. In terms of runtime, our method clearly outperforms MLE. The bias present in our method can be well explained and controlled. As an incidental finding, MLE estimates also appear to be significantly biased when the underlying Hawkes model is near criticality, which calls for a more rigorous analysis of the Hawkes likelihood and its optimization.
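For reference, MLE for an exponential-kernel Hawkes process maximizes the log-likelihood below, computed with the standard recursion; the (mu, alpha, beta) parameterization with branching coefficient alpha/beta is an assumption of this sketch, not necessarily the paper's:

```python
import numpy as np
from scipy.optimize import minimize

def hawkes_loglik(params, t, T):
    """Log-likelihood of a Hawkes process with intensity
    lambda(s) = mu + sum_{t_i < s} alpha * exp(-beta * (s - t_i)) on [0, T].
    The branching coefficient is alpha / beta."""
    mu, alpha, beta = params
    # Recursion A_i = exp(-beta * (t_i - t_{i-1})) * (1 + A_{i-1}), with A_1 = 0.
    A = np.zeros(len(t))
    for i in range(1, len(t)):
        A[i] = np.exp(-beta * (t[i] - t[i - 1])) * (1 + A[i - 1])
    compensator = mu * T + (alpha / beta) * np.sum(1 - np.exp(-beta * (T - t)))
    return np.sum(np.log(mu + alpha * A)) - compensator

# Toy event times; MLE maximizes the log-likelihood over (mu, alpha, beta).
t = np.array([0.5, 1.2, 1.3, 2.8, 3.0, 3.1, 5.6, 7.4, 7.5, 9.0])
T = 10.0
res = minimize(lambda p: -hawkes_loglik(p, t, T), x0=[0.5, 0.5, 1.0],
               bounds=[(1e-6, None)] * 3)
mu, alpha, beta = res.x
print("branching coefficient estimate:", alpha / beta)
```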
