Similar Literature
20 similar documents retrieved.
1.
Determining the effectiveness of different treatments from observational data, which are characterized by imbalance between groups due to lack of randomization, is challenging. Propensity matching is often used to rectify imbalances among prognostic variables. However, there are no guidelines on how to appropriately analyze group-matched data when the outcome is a zero-inflated count. In addition, there is debate over whether to account for the correlation of responses induced by matching and/or whether to adjust for variables used in generating the propensity score in the final analysis. The aim of this research is to compare covariate-unadjusted and covariate-adjusted zero-inflated Poisson models that do and do not account for the correlation. A simulation study is conducted, demonstrating that it is necessary to adjust for potential residual confounding, but that accounting for correlation is less important. The methods are applied to a biomedical research data set.
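
As a rough illustration of the modelling choice discussed above, the following Python sketch fits a covariate-adjusted zero-inflated Poisson model with statsmodels. The simulated data, variable names, and effect sizes are hypothetical, not the authors' dataset, and the matching-induced correlation is ignored, as the simulation study suggests is often acceptable.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
n = 2000
treat = rng.integers(0, 2, n)              # treatment indicator
x = rng.normal(size=n)                     # prognostic covariate used in the propensity score

# Simulate a zero-inflated count outcome (illustrative parameter values)
p_zero = 1 / (1 + np.exp(-(-1.0 + 0.5 * x)))   # structural-zero probability
lam = np.exp(0.2 + 0.4 * treat + 0.3 * x)      # Poisson mean
y = np.where(rng.random(n) < p_zero, 0, rng.poisson(lam))

# Covariate-adjusted ZIP: adjust for x in both the count and the inflation parts
exog = sm.add_constant(np.column_stack([treat, x]))
exog_infl = sm.add_constant(x)
fit = ZeroInflatedPoisson(y, exog, exog_infl=exog_infl, inflation='logit').fit(disp=0)
print(fit.summary())
```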

2.
The paper develops methods for the design of experiments for mechanistic models when the response must be transformed to achieve symmetry and constant variance. The power transformation that is used is partially justified by a rule in analytical chemistry. Because of the nature of the relationship between the response and the mechanistic model, it is necessary to transform both sides of the model. Expressions are given for the parameter sensitivities in the transformed model and examples are given of optimum designs, not only for single-response models, but also for experiments in which multivariate responses are measured and for experiments in which the model is defined by a set of differential equations which cannot be solved analytically. The extension to designs for checking models is discussed.

3.
The procedure suggested by DerSimonian and Laird is the simplest and most commonly used method for fitting the random effects model for meta-analysis. Here it is shown that, unless all studies are of similar size, this is inefficient when estimating the between-study variance, but is remarkably efficient when estimating the treatment effect. If formal inference is restricted to statements about the treatment effect, and the sample size is large, there is little point in implementing more sophisticated methodology. However, it is further demonstrated, for a simple special case, that use of the profile likelihood results in actual coverage probabilities for 95% confidence intervals that are closer to nominal levels for smaller sample sizes. Alternative methods for making inferences for the treatment effect may therefore be preferable if the sample size is small, but the DerSimonian and Laird procedure retains its usefulness for larger samples.
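
For reference, the DerSimonian and Laird procedure itself is short enough to state in a few lines. The sketch below assumes a vector of study effect estimates with known within-study variances; the numbers in the usage line are made up.

```python
import numpy as np

def dersimonian_laird(y, v):
    """DerSimonian-Laird random-effects pooling.
    y: study effect estimates; v: within-study variances."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                  # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fe) ** 2)             # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                      # random-effects weights
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu_re, se, tau2

mu, se, tau2 = dersimonian_laird([0.3, 0.1, 0.5, 0.2], [0.04, 0.09, 0.05, 0.02])
print(f"pooled effect {mu:.3f} (SE {se:.3f}), tau^2 = {tau2:.3f}")
```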

4.
The good performance of logit confidence intervals for the odds ratio with small samples is well known, provided the actual odds ratio is not very large. In single capture–recapture estimation the odds ratio is equal to 1 because of the assumption of independence of the samples. Consequently, a transformation of the logit confidence intervals for the odds ratio is proposed in order to estimate the size of a closed population under single capture–recapture estimation. It is found that the transformed logit interval, after adding .5 to each observed count before computation, has actual coverage probabilities near the nominal level even for small populations and even for capture probabilities near 0 or 1, which is not guaranteed for the other capture–recapture confidence intervals proposed in the statistical literature. Thus, given that the .5 transformed logit interval is very simple to compute and performs well, it is well suited for adoption by most users of the single capture–recapture method.
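
A minimal sketch of the underlying 0.5-adjusted logit interval for the odds ratio of a complete 2×2 table is given below; the transformation of this interval into a population-size interval is specific to the paper and is not reproduced here. The cell counts are invented.

```python
import numpy as np
from scipy.stats import norm

def logit_or_ci(a, b, c, d, level=0.95):
    """0.5-adjusted logit confidence interval for the odds ratio of a 2x2 table."""
    a, b, c, d = (x + 0.5 for x in (a, b, c, d))   # add .5 to each observed count
    log_or = np.log(a * d / (b * c))
    se = np.sqrt(1/a + 1/b + 1/c + 1/d)
    z = norm.ppf(0.5 + level / 2)
    return np.exp(log_or - z * se), np.exp(log_or + z * se)

print(logit_or_ci(12, 5, 7, 20))
```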

5.
An estimator, λ̂, is proposed for the parameter λ of the log-zero-Poisson distribution. While it is not a consistent estimator of λ in the usual statistical sense, it is shown to be quite close to the maximum likelihood estimates for many of the 35 sets of data on which it is tried. Since obtaining maximum likelihood estimates is extremely difficult for this and other contagious distributions, this estimator can serve at least as an initial value for solving the likelihood equations iteratively. A lesson learned from this experience is that, in the area of contagious distributions, variability is so large that attention should be focused directly on the mean squared error rather than on consistency or unbiasedness, whether for small samples or for the asymptotic case. Sample sizes for some of the data considered in the paper are in the hundreds. The fact that this inconsistent estimator is closer to the maximum likelihood estimator than the consistent moment estimator shows that the variability is too large for consistency to take effect even at the large sample sizes usually available in practice.

6.
In health sciences, medicine and social sciences linear mixed effects models are often used to analyse time-structured data. The search for optimal designs for these models is often hampered by two problems. The first problem is that these designs are only locally optimal. The second problem is that an optimal design for one model may not be optimal for other models. In this paper the maximin principle is adopted to handle both problems simultaneously. The maximin criterion is formulated by means of a relative efficiency measure, which gives an indication of how much efficiency is lost when the uncertainty about the models over a prior domain of parameters is taken into account. The procedure is illustrated by means of three growth studies. Results are presented for a vocabulary growth study from education, a bone gain study from medical research and an epidemiological study of decline in height. It is shown that, for the mixed effects polynomial models that are applied to these studies, the maximin designs remain highly efficient for different sets of models and combinations of parameter values.

7.
The Birnbaum-Saunders distribution is a fatigue life distribution that was derived from a model assuming that failure is due to the development and growth of a dominant crack. This distribution has been shown to be applicable not only for fatigue analysis but also in other areas of engineering science. Because of its increasing use, it would be desirable to obtain expressions for the expected value of different powers of this distribution.

In this article, the moment-generating function for the sinh-normal distribution is derived. It is shown that this moment-generating function can be used to obtain both integer and fractional moments for the Birnbaum-Saunders distribution. Thus it is now possible to obtain an expression for the expected value of the square root of a Birnbaum-Saunders random variable. A general expression for integer noncentral moments for the Birnbaum-Saunders distribution is derived using the moment-generating function of the sinh-normal distribution. Also included is an approximation of the moment-generating function that can be used for small values of the shape parameter.
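
As a quick numerical cross-check, SciPy implements the Birnbaum-Saunders law as the fatigue-life distribution, so the first moment can be compared against the closed form E[T] = β(1 + α²/2), and a fractional moment such as E[√T] can at least be obtained by numerical integration. The closed-form fractional moments are the paper's contribution and are not reproduced here; the parameter values below are arbitrary.

```python
from scipy.stats import fatiguelife

alpha, beta = 0.5, 2.0                 # shape and scale of a Birnbaum-Saunders law
T = fatiguelife(alpha, scale=beta)     # SciPy's name for this distribution

# First moment: closed form is beta * (1 + alpha^2 / 2)
print(T.mean(), beta * (1 + alpha**2 / 2))

# Fractional moment E[sqrt(T)] by numerical integration
print(T.expect(lambda t: t ** 0.5))
```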

8.
Power analysis for cluster randomized control trials is difficult to perform when a binary response is modeled using the generalized linear mixed-effects model (GLMM). Although methods for clustered binary responses exist such as the generalized estimating equations, they do not apply to the context of GLMM. Also, because popular statistical packages such as R and SAS do not provide correct estimates of parameters for the GLMM for binary responses, Monte Carlo simulation, a popular ad-hoc method for estimating power when the power function is too complex to evaluate analytically or numerically, fails to provide correct power estimates within the current context as well. In this paper, a new approach is developed to estimate power for cluster randomized control trials when a binary response is modeled by the GLMM. The approach is easy to implement and seems to work quite well, as assessed by simulation studies. The approach is illustrated with a real intervention study to reduce suicide reattempt rates among US Veterans.

9.
A local likelihood method with constraints is developed for the case-control sample for estimating the unknown relative risk function and odds ratio. Our estimates can be reduced to simply solving two systems of estimating equations. One system of estimating equations is for estimating the relative risk function, and is identical to that based on the locally weighted logistic regression analysis under prospective sampling. Another system of estimating equations is for estimating the odds ratio, and is identical to that used in the traditional linear logistic regression analysis. Asymptotic properties of the estimators are presented. Two real examples and simulations are given to illustrate our method. The results confirm that our approach is useful for estimating the relative risk function and odds ratio in case-control studies.

10.
In this paper, tests for the skewness parameter of the two-piece double exponential distribution are derived when the location parameter is unknown. Classical tests, such as the Neyman structure test and the likelihood ratio test (LRT), that are generally used to test hypotheses in the presence of nuisance parameters are not feasible for this distribution, since the exact distributions of the test statistics become very complicated. As an alternative, we identify a set of statistics that are ancillary for the location parameter. When the scale parameter is known, Neyman–Pearson's lemma is used, and when the scale parameter is unknown, the LRT is applied to the joint density function of the ancillary statistics, in order to obtain a test for the skewness parameter of the distribution. A test for symmetry of the distribution can be deduced as a special case. It is found that the power of the proposed tests for symmetry is only marginally less than the power of the corresponding classical optimum tests when the location parameter is known, especially for moderate and large sample sizes.

11.
In this paper, the beta-binomial model is introduced as a Markov chain. It is shown that the correlated binomial model of Kupper and Haseman (1978) is identical to the additive binomial model of Altham (1978), and that both are a first-order approximation of the beta-binomial model. For small γ, the local efficiency of the moment estimators for the mean p and the extra-binomial variation γ is examined analytically. It is shown that, locally, the moment estimator for p is efficient up to the second order of γ. Exact formulae for the relative efficiency are obtained for both the case with γ known and the case with γ unknown. Generalization to the unequal sample size case is also carried out. In particular, the gain in efficiency from using the quasi-likelihood estimator instead of the ratio estimator for p is studied when γ is known. These results are in agreement with the Monte Carlo results of Kleinman (1973) and Crowder (1985).
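
A small simulation sketch of the moment estimators in question, assuming the usual parameterization Var(X) = np(1−p)[1 + (n−1)γ] with p = a/(a+b) and γ = 1/(a+b+1); the Beta parameters and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 5000                          # trials per observation, number of observations
a, b = 2.0, 6.0                          # Beta parameters: p = 0.25, gamma = 1/9
x = rng.binomial(n, rng.beta(a, b, m))   # beta-binomial draws

# Moment estimators based on E[X] = n p and the variance formula above
p_hat = x.mean() / n
s2 = x.var(ddof=1)
gamma_hat = (s2 / (n * p_hat * (1 - p_hat)) - 1) / (n - 1)
print(p_hat, a / (a + b))                # estimate vs true p
print(gamma_hat, 1 / (a + b + 1))        # estimate vs true gamma
```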

12.
A parametric modelling for interval data is proposed, assuming a multivariate Normal or Skew-Normal distribution for the midpoints and log-ranges of the interval variables. The intrinsic nature of the interval variables leads to special structures of the variance–covariance matrix, which is represented by five different possible configurations. Maximum likelihood estimation for both models under all considered configurations is studied. The proposed modelling is then considered in the context of analysis of variance and multivariate analysis of variance testing. To assess the behaviour of the proposed methodology, a simulation study is performed. The results show that, for medium or large sample sizes, the tests have good power and their true significance level approaches the nominal level when the constraints assumed for the model are respected; however, for small samples, test sizes close to the nominal level cannot be guaranteed. Applications to Chinese meteorological data in three different regions and to credit card usage variables for different card designations illustrate the proposed methodology.
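
The midpoint/log-range representation is easy to demonstrate: the sketch below builds the two derived variables from simulated interval bounds and runs a MANOVA with statsmodels. The group structure and distributions are invented, and the paper's special covariance configurations are not imposed.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(2)
g = np.repeat(['A', 'B', 'C'], 50)                 # three groups (e.g. regions)
lower = rng.normal(10 + (g == 'B') * 1.5, 1.0)     # interval lower bounds
upper = lower + rng.lognormal(0.5 + (g == 'C') * 0.3, 0.3)

df = pd.DataFrame({'group': g,
                   'mid': (lower + upper) / 2,     # interval midpoints
                   'lr': np.log(upper - lower)})   # interval log-ranges

print(MANOVA.from_formula('mid + lr ~ group', data=df).mv_test())
```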

13.
Two-stage designs offer substantial advantages for early phase II studies. The interim analysis following the first stage allows the study to be stopped for futility, or more positively, it might lead to early progression to the trials needed for late phase II and phase III. If the study is to continue to its second stage, then there is an opportunity for a revision of the total sample size. Two-stage designs have been implemented widely in oncology studies in which there is a single treatment arm and patient responses are binary. In this paper the case of two-arm comparative studies in which responses are quantitative is considered. This setting is common in therapeutic areas other than oncology. It will be assumed that observations are normally distributed, but that there is some doubt concerning their standard deviation, motivating the need for sample size review. The work reported has been motivated by a study in diabetic neuropathic pain, and the development of the design for that trial is described in detail.
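
In the simplest case, the sample size review at the interim reduces to re-evaluating the standard two-arm normal sample size formula with the updated standard deviation estimate. A sketch, with purely illustrative planning values:

```python
from scipy.stats import norm

def n_per_arm(sigma, delta, alpha=0.05, power=0.9):
    """Per-arm sample size for a two-arm comparison of normal means."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sigma / delta) ** 2

# Planning assumed sigma = 2.0 for a clinically relevant difference delta = 1.0 ...
print(n_per_arm(2.0, 1.0))     # initial per-arm target
# ... but stage-1 data suggest sigma is nearer 2.6, so the target is revised upward
print(n_per_arm(2.6, 1.0))
```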

14.
Stochastic simulation is widely used to validate procedures and provide guidance for both theoretical and practical problems. Random variate generation is the basis of stochastic simulation. Applying the ratio-of-uniforms method to generate random vectors requires the ability to generate points uniformly in a suitable region of the space. Starting from the observation that, for many multivariate distributions, the multidimensional objective region can be covered by a hyper-ellipsoid more tightly than by a hyper-rectangle, a new algorithm to generate from multivariate distributions is proposed. Due to the computational saving it can produce, this method becomes an appealing statistical tool to generate random vectors from families of standard and nonstandard multivariate distributions. It is particularly interesting to generate from densities known up to a multiplicative constant, for example, from those arising in Bayesian computation. The proposed method is applied and its efficiency is shown for some classes of distributions.
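
For orientation, here is the classical ratio-of-uniforms algorithm with the usual rectangular bounding region, for a standard normal target known only up to a constant; the paper's contribution, the tighter hyper-ellipsoidal cover, is not implemented in this sketch.

```python
import numpy as np

def rou_normal(size, rng=np.random.default_rng(3)):
    """Ratio-of-uniforms sampler for the standard normal, rectangular cover.
    Target: f(x) = exp(-x^2/2), known up to a normalizing constant."""
    u_max = 1.0                   # sup sqrt(f) = 1, attained at x = 0
    v_max = np.sqrt(2 / np.e)     # sup |x| sqrt(f(x)), attained at x = +-sqrt(2)
    out = []
    while len(out) < size:
        u = rng.uniform(0, u_max)
        v = rng.uniform(-v_max, v_max)
        if u * u <= np.exp(-0.5 * (v / u) ** 2):   # accept if (u, v) is in the region
            out.append(v / u)
    return np.array(out)

x = rou_normal(10000)
print(x.mean(), x.std())          # should be near 0 and 1
```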

15.
The union-intersection approach to multivariate test construction is used to develop an alternative to Wilks' likelihood ratio test statistic for testing for two or more outliers in multivariate normal data. It is shown that critical values of both statistics are poorly approximated by Bonferroni bounds. Simulated critical values are presented for both statistics for significance levels 1% and 5%, for sample sizes 10(5)30 (i.e. 10 to 30 in steps of 5), 40, 50, 75 and 100, for 2, 3, 4 and 5 dimensions. A power comparison of the two tests in the slippage of the mean model for generating outliers indicates that the union-intersection test is the more powerful when the slippages are close to collinear. Although Wilks' test remains the preference for general use, the union-intersection test could be valuable when such special structure in the data is suspected.
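
The need for simulated critical values can be illustrated in the single-outlier case, where Wilks' criterion is a monotone function of the maximum squared Mahalanobis distance. A sketch for one (n, p) combination, with the simulation size chosen arbitrarily:

```python
import numpy as np

def sim_crit(n=30, p=3, nsim=20000, level=0.95, rng=np.random.default_rng(4)):
    """Simulated critical value of the maximum squared Mahalanobis distance,
    the single-outlier statistic equivalent to Wilks' test."""
    stats = np.empty(nsim)
    for i in range(nsim):
        x = rng.standard_normal((n, p))            # null: multivariate normal
        xc = x - x.mean(axis=0)
        S_inv = np.linalg.inv(xc.T @ xc / (n - 1))
        d2 = np.einsum('ij,jk,ik->i', xc, S_inv, xc)   # squared distances
        stats[i] = d2.max()
    return np.quantile(stats, level)

print(sim_crit())
```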

16.
In this article, two new powerful tests for cointegration are proposed. The general idea is based on an intuitively appealing extension of the traditional, rather restrictive cointegration concept: we allow for a nonlinear and, most importantly, asymmetric convergence process to account for negative and positive changes. Using Monte Carlo simulations, we verify that the estimated size of the first test depends on the unknown value of a signal-to-noise ratio q. Our second test, which is based on the original ideas of Kanioura and Turner, is more successful and robust in the sense that it works in all of the evaluated situations; furthermore, it is shown to be more powerful than the traditional residual-based Enders and Siklos method. The new optimal test is also applied in an empirical example to test for potential nonlinear asymmetric price-transmission effects on the Swedish power market. We find that power retailers have a higher propensity to rapidly and systematically increase their retail electricity prices following increases in Nordpool's wholesale prices than to reduce their prices following a drop in wholesale spot prices.
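
For context, the residual-based Enders and Siklos comparator amounts to letting the error-correction speed differ by the sign of the lagged cointegrating residual. A rough sketch on artificial data; note the F-statistic has a non-standard null distribution, so in practice critical values must be simulated.

```python
import numpy as np
import statsmodels.api as sm

def tar_coint_test(y, x):
    """Residual-based threshold (asymmetric) cointegration test in the spirit
    of Enders and Siklos: regress y on x, then allow the residual adjustment
    speed to differ for positive and negative deviations."""
    u = sm.OLS(y, sm.add_constant(x)).fit().resid   # cointegrating residuals
    du, u1 = np.diff(u), u[:-1]
    I = (u1 >= 0).astype(float)                     # regime indicator
    X = np.column_stack([I * u1, (1 - I) * u1])
    fit = sm.OLS(du, X).fit()
    return fit.params, fit.fvalue                   # (rho1, rho2), F for rho1 = rho2 = 0

rng = np.random.default_rng(5)
x = np.cumsum(rng.standard_normal(500))             # I(1) driver (e.g. wholesale price)
y = 1.0 + 0.8 * x + rng.standard_normal(500)        # cointegrated retail price
print(tar_coint_test(y, x))
```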

17.
Methodology for Bayesian inference is considered for a stochastic epidemic model which permits mixing on both local and global scales. Interest focuses on estimation of the within- and between-group transmission rates given data on the final outcome. The model is sufficiently complex that the likelihood of the data is numerically intractable. To overcome this difficulty, an appropriate latent variable is introduced, about which asymptotic information is known as the population size tends to infinity. This yields a method for approximate inference for the true model. The methods are applied to real data, tested with simulated data, and also applied to a simple epidemic model for which exact results are available for comparison.

18.
In this article, a semiparametric approach is proposed for the regression analysis of panel count data. Panel count data commonly arise in clinical trials and demographic studies where the response variable is the number of multiple recurrences of the event of interest and the observation times are not fixed, varying from subject to subject. It is assumed that two processes underlie such data: one for the recurrent events and one for the observation times. Many studies have estimated the mean function and regression parameters under independence between the recurrent event process and the observation time process. In this article, the same statistical inference is studied, but the situation where these two processes may be related is also considered. A mixed Poisson process is applied for the recurrent event process, and a frailty intensity function is used for the observation time process. Simulation studies are conducted to study the performance of the suggested methods. The bladder tumor data are analyzed and the results are compared with those of previous studies.

19.
Let (X, A) be a measurable space and P a family of probability measures on A. Let B and C be sub-σ-algebras of A and B0 a sub-σ-algebra of B. It is shown that if B0 is prediction sufficient (adequate) for B with respect to C and P, and Y is sufficient for B0 ∨ C with respect to P, then Y is sufficient for B ∨ C with respect to P; that if P is homogeneous and (B0; B, C) is Markov for P, and B0 ∨ C is sufficient for B ∨ C with respect to P, then B0 is sufficient for B with respect to P; and, by example, that the Markov property is necessary for the latter proposition to hold.

20.
The analysis of compositional data using the log-ratio approach is based on ratios between the compositional parts. Zeros in the parts thus cause serious difficulties for the analysis. This is a particular problem in the case of structural zeros, which cannot simply be replaced by a non-zero value as is done, e.g., for values below the detection limit or for missing values. Instead, zeros need to be incorporated into further statistical processing. The focus here is on exploratory tools for identifying outliers in compositional data sets with structural zeros. For this purpose, Mahalanobis distances are estimated, either computed directly for the subcompositions determined by the zero patterns, or obtained by using imputation to improve the efficiency of the estimates before proceeding to the subcompositional and subgroup level. For this approach, new theory is formulated that allows covariances to be estimated for imputed compositional data and the estimates to be applied to subgroups using parts of this covariance matrix. Moreover, the zero pattern structure is analyzed using principal component analysis for binary data to achieve a comprehensive view of the overall multivariate data structure. The proposed tools are applied to larger compositional data sets from official statistics, where the need for an appropriate treatment of zeros is obvious.
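
A stripped-down sketch of the Mahalanobis-distance step for a single zero-free subcomposition, using a centred log-ratio representation and a robust covariance estimate from scikit-learn. The paper's imputation-based efficiency improvements and zero-pattern analysis are not reproduced, and the Dirichlet data and flagging cut-off are illustrative.

```python
import numpy as np
from sklearn.covariance import MinCovDet

def clr(x):
    """Centred log-ratio transform of strictly positive compositions."""
    lx = np.log(x)
    return lx - lx.mean(axis=1, keepdims=True)

rng = np.random.default_rng(6)
comp = rng.dirichlet([4, 3, 2, 1], size=200)   # a zero-free subcomposition, e.g.
                                               # one determined by a zero pattern
z = clr(comp)[:, :-1]                          # drop one coordinate (clr is singular)
md = MinCovDet(random_state=0).fit(z).mahalanobis(z)   # robust squared distances
outliers = md > np.quantile(md, 0.975)         # crude flagging rule for illustration
print(outliers.sum())
```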
