Similar Documents
20 similar documents found (search time: 15 ms)
1.
This paper compares the ordinary unweighted average, weighted average, and maximum likelihood methods for estimating a common bioactivity from multiple parallel line bioassays. Some of these or similar methods are also used in meta‐analysis. Based on a simulation study, these methods are assessed by comparing coverage probabilities of the true relative bioactivity and the lengths of the confidence intervals they produce. The ordinary unweighted average method outperforms the other methods by consistently giving the best coverage probability, though with somewhat wider confidence intervals. The weighted average methods give good coverage and smaller confidence intervals when combining homogeneous bioactivities. For heterogeneous bioactivities, these methods work well when a liberal significance level is used for testing homogeneity of bioactivities. The maximum likelihood methods give good coverage when homogeneous bioactivities are considered. Overall, the preferred methods are the ordinary unweighted average and two weighted average methods that were specifically developed for bioassays. Copyright © 2013 John Wiley & Sons, Ltd.
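As an illustration of the combining rules compared above, the following sketch contrasts the ordinary unweighted average with an inverse-variance weighted average of per-assay potency estimates. The data and variances are hypothetical, and the bioassay-specific weighted methods in the paper differ in how their weights are formed; this is only the generic meta-analysis-style version.

```python
import math

def unweighted_average(estimates):
    """Ordinary unweighted average of per-assay potency estimates."""
    return sum(estimates) / len(estimates)

def inverse_variance_average(estimates, variances):
    """Weighted average with weights proportional to 1/variance,
    as used when combining homogeneous bioactivities."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    se = math.sqrt(1.0 / total)  # standard error of the combined estimate
    return mean, se

# Three hypothetical assays estimating the same log relative potency
est = [0.95, 1.10, 1.02]
var = [0.010, 0.020, 0.005]

print(unweighted_average(est))
mean, se = inverse_variance_average(est, var)
print(round(mean, 4), round(se, 4))
```

The weighted mean shrinks toward the most precise assay, which is why it gives shorter intervals when the bioactivities really are homogeneous.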

2.
In this study, we reconsider weighted distributions from the perspective of the missing mechanism, since a weighted distribution is not the distribution of the whole population of interest but only the distribution of the respondents (a sub-population). After defining several weighted distributions through different mechanisms for the response indicator, we show by simulation that using weighted distributions may lead to biased parameter estimates under a non-ignorable missing mechanism. Joint modeling of the response and the selection mechanism, on the other hand, yields more efficient and valid parameter estimates: the joint modeling approach attains lower root mean squared errors than the weighted-distribution approach, as demonstrated by diverse simulation studies throughout the article. However, the two approaches give similar results when the selection mechanism is at random. Finally, the methods are applied and compared in the analysis of a well-known real dataset.

3.
In this paper, we are interested in the weighted distributions of a bivariate three-parameter logarithmic series distribution studied by Kocherlakota and Kocherlakota (1990). The weighted versions of the model are derived with weight W(x,y) = x[r] y[s]. Explicit expressions for the probability mass function and probability generating functions are derived in the case r = s = 1. The marginal and conditional distributions are derived in the general case. The maximum likelihood estimation of the parameters, in both the two-parameter and three-parameter cases, is studied. A procedure for computer generation of bivariate data from a discrete distribution is described. This enables us to present two examples illustrating the methods developed for finding the maximum likelihood estimates.

4.

We discuss the multivariate (2L-variate) correlation structure and the asymptotic distribution of group-sequential weighted logrank statistics formulated for monitoring two correlated event-time outcomes in clinical trials. The asymptotic distribution and the variance–covariance matrix of the 2L-variate weighted logrank statistic are derived for various group-sequential trial designs. These results are used to determine a group-sequential testing procedure based on calendar times or information fractions. We apply the theoretical results to a group-sequential method for monitoring a clinical trial with early stopping for efficacy, when the trial is designed to evaluate the joint effect on two correlated event-time outcomes. We illustrate the method with an application to a clinical trial and describe how to calculate the required sample sizes and numbers of events.


5.
The weighted k-out-of-n system has been widely used in various engineering areas. The performance of such a system is characterized by the total capacity of its components, so capacity evaluation is of great importance for research on the behavior of the system over time. Capacity evaluation for the binary weighted k-out-of-n system has been reported in the literature. In this paper, to shorten computational time, we first develop a multiplication method for capacity evaluation of the binary weighted k-out-of-n system. We then generalize capacity evaluation to the multi-state weighted k-out-of-n system, developing both a recursive algorithm and a multiplication algorithm for this purpose. The two methods are compared in several respects. An illustrative example of an oil transmission system demonstrates the implementation of the proposed methods.
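A minimal sketch of capacity evaluation for a binary weighted k-out-of-n system, where the system works when the total weight of working components reaches k. The component weights, reliabilities, and threshold below are hypothetical; the distribution is built by convolving components one at a time, which is the spirit of the recursive evaluation (the paper's multiplication method is a faster alternative not reproduced here).

```python
def capacity_distribution(weights, probs):
    """Distribution of total capacity (sum of weights of working
    components), built by convolving one component at a time."""
    dist = {0: 1.0}
    for w, p in zip(weights, probs):
        new = {}
        for c, pr in dist.items():
            new[c + w] = new.get(c + w, 0.0) + pr * p   # component works
            new[c] = new.get(c, 0.0) + pr * (1.0 - p)   # component fails
        dist = new
    return dist

def system_reliability(weights, probs, k):
    """P(system works) = P(total capacity >= k)."""
    dist = capacity_distribution(weights, probs)
    return sum(pr for c, pr in dist.items() if c >= k)

# Hypothetical 3-component system: weights 2, 3, 5; threshold k = 5
r = system_reliability([2, 3, 5], [0.9, 0.8, 0.95], 5)
print(round(r, 6))
```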

6.
Confidence intervals for the difference of two binomial proportions are well known; however, confidence intervals for the weighted sum of two binomial proportions are less studied. We develop and compare seven methods for constructing confidence intervals for the weighted sum of two independent binomial proportions. The interval estimates are constructed by inverting the Wald test, the score test and the likelihood ratio test. The weights can be negative, so our results generalize those for the difference between two independent proportions. We provide a numerical study showing that these confidence intervals based on large‐sample approximations perform very well, even when a relatively small amount of data is available. The intervals based on inversion of the score test showed the best performance. Finally, we show that, as for the difference of two binomial proportions, adding four pseudo‐outcomes to the Wald interval for the weighted sum of two binomial proportions improves its coverage significantly, and we provide a justification for this correction.
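A hedged sketch of the Wald-type interval for a weighted sum of two binomial proportions, with the "four pseudo-outcomes" adjustment applied in the spirit of the Agresti–Caffo correction for the difference of two proportions (one pseudo-success and one pseudo-failure added to each sample). The data and weights are hypothetical, and the exact form of the paper's adjusted interval may differ.

```python
import math

Z = 1.959963984540054  # 97.5% standard normal quantile

def wald_weighted_sum(x1, n1, x2, n2, w1, w2, adjusted=False):
    """Wald-type 95% interval for w1*p1 + w2*p2.  With adjusted=True,
    four pseudo-outcomes in total are added (one success and one
    failure per sample) before forming the interval."""
    if adjusted:
        x1, n1, x2, n2 = x1 + 1, n1 + 2, x2 + 1, n2 + 2
    p1, p2 = x1 / n1, x2 / n2
    est = w1 * p1 + w2 * p2
    se = math.sqrt(w1**2 * p1 * (1 - p1) / n1 + w2**2 * p2 * (1 - p2) / n2)
    return est - Z * se, est + Z * se

# Hypothetical data; note that negative weights are also allowed
lo, hi = wald_weighted_sum(18, 30, 22, 40, 0.6, 0.4, adjusted=True)
print(round(lo, 4), round(hi, 4))
```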

7.
Weighted methods are an important feature of multiplicity control. The weights must usually be chosen a priori, on the basis of the experimental hypotheses. Under some conditions, however, they can be chosen using information from the data (hence a posteriori) while still maintaining multiplicity control. In this paper we provide: (1) a review of weighted methods, both parametric and nonparametric, for familywise type I error rate (FWE) and false discovery rate (FDR) control; (2) a review of data-driven weighted methods for FWE control; (3) a new proposal for weighted FDR control with data-driven weights under independence among variables; (4) an extension of this proposal to any type of dependence; and (5) a simulation study assessing the performance of the procedure of point (4) under various conditions.
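The weighted FDR methods discussed here build on the weighted Benjamini–Hochberg step-up rule, in which each p-value is divided by its weight before the usual BH comparison. A minimal sketch follows; the p-values and weights are hypothetical, and the paper's data-driven proposal for choosing the weights is not reproduced.

```python
def weighted_bh(pvalues, weights, alpha=0.05):
    """Weighted Benjamini-Hochberg: with weights renormalised to
    average one, apply the BH step-up rule to p_i / w_i.
    Returns the indices of the rejected hypotheses."""
    m = len(pvalues)
    scale = m / sum(weights)  # renormalise weights to mean 1
    q = sorted((p / (w * scale), i)
               for i, (p, w) in enumerate(zip(pvalues, weights)))
    k = 0
    for rank, (qi, _) in enumerate(q, start=1):
        if qi <= rank * alpha / m:
            k = rank
    return sorted(i for _, i in q[:k])

p = [0.001, 0.009, 0.040, 0.200, 0.700]
w = [2.0, 1.0, 1.0, 0.5, 0.5]  # a priori or data-driven weights
print(weighted_bh(p, w, alpha=0.05))
```

With all weights equal to one, this reduces to the ordinary BH procedure.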

8.
Weighted distributions, as an example of informative sampling, work appropriately under the missing at random mechanism, since they neglect missing values and use only completely observed subjects in the study plan. However, length-biased distributions, a special case of weighted distributions, deliberately remove subjects with short lengths, which clearly corresponds to a missing not at random mechanism. Accordingly, applying length-biased distributions jeopardizes the results by producing biased estimates, and an alternative method is needed to obtain valid inferences. We propose methods based on weighted distributions and on a joint modelling procedure and compare them in analysing longitudinal data. After introducing the three methods in use, a set of simulation studies and analyses of two real longitudinal datasets support our claim.

9.
For testing the equality of two survival functions, the weighted logrank test and the weighted Kaplan–Meier test are the two most widely used methods. Each of these tests has advantages and drawbacks against various alternatives, and the possible types of survival differences cannot be specified in advance. Hence, how to choose a single test, or to combine a number of competitive tests, to detect differences between two survival functions without suffering a substantial loss in power is an important issue. Instead of directly using a particular test, which generally performs well in some situations and poorly in others, we consider a class of tests indexed by a weight parameter for testing the equality of two survival functions. A delete-1 jackknife method is implemented to select the weight that minimizes the variance of the test statistic. Numerical experiments under various alternatives illustrate the superiority of the proposed method. Finally, the proposed testing procedure is applied to two real-data examples.
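The delete-1 jackknife selection of a weight parameter can be sketched generically: for each candidate weight, compute the jackknife variance of the combined statistic and keep the minimiser. The toy statistics below (sample mean and midrange) are stand-ins for the survival-test components, which are not reproduced here; only the selection mechanism is illustrated.

```python
def jackknife_variance(data, statistic):
    """Delete-1 jackknife variance estimate of `statistic` on `data`."""
    n = len(data)
    loo = [statistic(data[:i] + data[i + 1:]) for i in range(n)]
    mean = sum(loo) / n
    return (n - 1) / n * sum((v - mean) ** 2 for v in loo)

def select_weight(data, stat_a, stat_b, grid=None):
    """Choose rho minimising the jackknife variance of the combined
    statistic rho*stat_a + (1 - rho)*stat_b."""
    grid = grid or [i / 20 for i in range(21)]
    def combined(rho):
        return lambda d: rho * stat_a(d) + (1 - rho) * stat_b(d)
    return min(grid, key=lambda rho: jackknife_variance(data, combined(rho)))

# Toy illustration: mean (stable) versus midrange (outlier-sensitive)
data = [1.2, 0.8, 1.1, 0.9, 1.0, 3.5]
mean = lambda d: sum(d) / len(d)
midrange = lambda d: (min(d) + max(d)) / 2
print(select_weight(data, mean, midrange))
```

Here the jackknife puts all the weight on the less variable statistic, as expected.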

10.
In this paper we consider Goodman's association models and weighted log ratio analysis (LRA). By combining these two methods, we obtain different weighted log ratio analyses, which we extend to analyse a rates matrix obtained by calculating the ratio between two initial multidimensional contingency tables. Our approach is illustrated by an empirical study. The model to be analysed through the weighted LRA plot is selected by means of Poisson regression on the rates.

11.
Jun Shao, Statistics, 2013, 47(3-4): 203-237
This article reviews the applications of three resampling methods, the jackknife, the balanced repeated replication, and the bootstrap, in sample surveys. The sampling design under consideration is a stratified multistage sampling design. We discuss the implementation of the resampling methods, including the construction of balanced repeated replications and approximately balanced repeated replication estimators, four modified bootstrap algorithms for generating bootstrap samples, and three different ways of applying the resampling methods in the presence of imputed missing values. Asymptotic properties of the resampling estimators are discussed for two important types of survey estimators: functions of weighted averages, and sample quantiles.
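One well-known modified bootstrap for stratified multistage designs is the Rao–Wu rescaling bootstrap, in which n_h - 1 PSUs are resampled with replacement in each stratum and the design weights are rescaled accordingly. A minimal single-replicate sketch follows; the strata, PSU labels, and weights are hypothetical, and this may not match the four algorithms reviewed in the article exactly.

```python
import random

def bootstrap_weights(strata, base_weights, rng):
    """One replicate of a Rao-Wu-style rescaling bootstrap: in each
    stratum resample n_h - 1 PSUs with replacement and rescale the
    design weight by (n_h / (n_h - 1)) * (times the PSU was drawn)."""
    new = {}
    for h, psus in strata.items():
        n_h = len(psus)
        picks = [rng.choice(psus) for _ in range(n_h - 1)]
        for psu in psus:
            r = picks.count(psu)
            new[psu] = base_weights[psu] * n_h / (n_h - 1) * r
    return new

# Hypothetical design: two strata with 3 and 4 PSUs, equal base weights
strata = {"h1": ["a", "b", "c"], "h2": ["d", "e", "f", "g"]}
w = {p: 10.0 for p in "abcdefg"}
rng = random.Random(0)
wstar = bootstrap_weights(strata, w, rng)
# With equal base weights, the rescaling preserves each stratum's total
print(round(sum(wstar[p] for p in strata["h1"]), 6))
```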

12.
In socioeconomic areas, functional observations may be collected with weights; such observations are called weighted functional data. In this paper, we deal with a general linear hypothesis testing (GLHT) problem in the framework of functional analysis of variance with weighted functional data. With the weights taken into account, we obtain unbiased and consistent estimators of the group mean and covariance functions. For the GLHT problem, we obtain a pointwise F-test statistic and build two global tests, respectively, by integrating the pointwise F-test statistic and by taking its supremum over an interval of interest. The asymptotic distributions of the test statistics under the null and some local alternatives are derived, and methods for approximating their null distributions are discussed. An application of the proposed methods to density function data is also presented. Intensive simulation studies and two real data examples show that the proposed tests substantially outperform existing competitors in terms of size control and power.

13.
In this article, we introduce a new weighted quantile regression method. Traditionally, the parameters of a quantile regression are estimated by minimizing a loss function based on absolute distances, with weights independent of the explanatory variables. We study a new estimation method using a weighted loss function whose weights are associated with the explanatory variables, so that the performance of the resulting estimator can be improved. In full generality, we derive the asymptotic distribution of the weighted quantile regression estimators for any uniformly bounded positive weight function independent of the response. Two practical weighting schemes are proposed, each for a certain type of data. Monte Carlo simulations compare our proposed methods with the classical approaches, and we also demonstrate the proposed methods on two real-life data sets from the literature. Both the simulation study and these examples show that our proposed method outperforms the classical approaches when relative efficiency is measured by the mean-squared errors of the estimators.
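For intuition, the check (pinball) loss minimised in quantile regression can be illustrated in the intercept-only case with observation weights, where the minimiser is simply a weighted sample quantile found from cumulative weights. The data and weights below are hypothetical; the paper's covariate-dependent weighting schemes are not reproduced.

```python
def weighted_quantile(y, w, tau):
    """Minimiser over a constant theta of sum_i w_i * rho_tau(y_i - theta):
    the point where cumulative weight first reaches tau * total weight
    (an intercept-only weighted quantile regression)."""
    pairs = sorted(zip(y, w))
    total = sum(w)
    cum = 0.0
    for yi, wi in pairs:
        cum += wi
        if cum >= tau * total:
            return yi
    return pairs[-1][0]

def check_loss(y, w, tau, theta):
    """Weighted pinball (check) loss, rho_tau(u) = u * (tau - 1{u < 0})."""
    return sum(wi * (tau - (yi < theta)) * (yi - theta)
               for yi, wi in zip(y, w))

y = [3.0, 1.0, 4.0, 1.5, 5.0]
w = [1.0, 2.0, 1.0, 1.0, 0.5]
theta = weighted_quantile(y, w, 0.5)
print(theta)
```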

14.
In longitudinal studies, subjects may potentially undergo a series of sequentially ordered events. The gap times, which are the times between two serial events, are often the outcome variables of interest. This study considers quantile regression models of gap times for censored serial-event data and adapts a weighted version of the estimating equation for the regression coefficients. The resulting estimators are uniformly consistent and asymptotically normal. Extensive simulation studies evaluate the finite-sample performance of the proposed methods. An analysis of tumor recurrence data for bladder cancer patients is also provided to illustrate the proposed methods.

15.
We propose forecasting functional time series using weighted functional principal component regression and weighted functional partial least squares regression. These approaches allow for smooth functions, assign higher weights to more recent data, and provide a modeling scheme that is easily adapted to allow for constraints and other information. We illustrate our approaches using age-specific French female mortality rates from 1816 to 2006 and age-specific Australian fertility rates from 1921 to 2006, and show that these weighted methods improve forecast accuracy in comparison to their unweighted counterparts. We also propose two new bootstrap methods to construct prediction intervals, and evaluate and compare their empirical coverage probabilities.
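Weighted functional approaches of this kind typically assign geometrically decaying weights to past curves so that recent data dominate. A minimal sketch of a weighted mean curve under such a scheme follows; the decay parameter kappa and the curves are hypothetical, and the full weighted principal component and partial least squares machinery is not reproduced.

```python
def geometric_weights(T, kappa):
    """Weights proportional to kappa * (1 - kappa)^(T - t), t = 1..T,
    normalised to sum to one, so recent curves receive more weight."""
    raw = [kappa * (1 - kappa) ** (T - t) for t in range(1, T + 1)]
    s = sum(raw)
    return [r / s for r in raw]

def weighted_mean_curve(curves, kappa):
    """Weighted mean of functional observations on a common grid."""
    T = len(curves)
    w = geometric_weights(T, kappa)
    npts = len(curves[0])
    return [sum(w[t] * curves[t][j] for t in range(T)) for j in range(npts)]

# Three hypothetical yearly curves on a 4-point grid; kappa = 0.5
curves = [[1, 1, 1, 1], [2, 2, 2, 2], [4, 4, 4, 4]]
print([round(v, 4) for v in weighted_mean_curve(curves, 0.5)])
```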

16.
The randomized response technique (RRT) is an important tool, commonly used to avoid biased answers in surveys on sensitive issues by preserving the respondents’ privacy. In this paper, we introduce a data collection method for surveys on sensitive issues that combines the unrelated-question RRT with a direct questioning design. Direct questioning is used to obtain responses to a nonsensitive question related to the innocuous question of the unrelated-question RRT. These responses serve as additional information that can be used to improve estimation of the prevalence of the sensitive behavior. Furthermore, we propose two new methods for estimating the proportion of respondents possessing the sensitive attribute under a missing data setup: a weighted estimator and a weighted conditional likelihood estimator. The performances of our estimators are studied numerically and compared with that of an existing one. Both proposed estimators are more efficient than Greenberg's estimator. We illustrate our methods using real data from a survey study on illegal use of cable TV service in Taiwan.
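The unrelated-question RRT estimator referred to here (Greenberg's estimator) follows directly from the design probabilities: each respondent answers the sensitive question with probability p and the innocuous question with probability 1 - p, so the observed "yes" rate mixes the two prevalences. The survey numbers below are hypothetical.

```python
def greenberg_estimator(yes_prop, p, pi_y):
    """Unrelated-question RRT: P(yes) = p*pi_s + (1 - p)*pi_y, where
    pi_y is the known prevalence of the innocuous attribute.
    Inverting gives the estimator of the sensitive prevalence pi_s."""
    return (yes_prop - (1 - p) * pi_y) / p

# Hypothetical survey: 32% "yes" overall, design probability p = 0.7,
# innocuous prevalence pi_y = 0.5 (e.g. born in the first half of the year)
print(round(greenberg_estimator(0.32, 0.7, 0.5), 4))
```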

17.
In many clinical studies, more than one observer may rate a characteristic measured on an ordinal scale. For example, a study may involve a group of physicians rating a feature seen on a pathology specimen or a computed tomography scan. In clinical studies of this kind, the weighted κ coefficient is a popular measure of agreement for ordinally scaled ratings. Our research stems from a study in which the severity of inflammatory skin disease was rated. The investigators wished to determine and evaluate the strength of agreement between a variable number of observers, taking into account patient-specific (age and gender) as well as rater-specific (board certification in dermatology) characteristics. This suggested modelling κ as a function of these covariates. We propose the use of generalized estimating equations to estimate the weighted κ coefficient. This approach also accommodates the unbalanced data that arise when some subjects are not judged by the same set of observers. Currently, no estimate of overall κ is available for a simple unbalanced data set involving more than two observers and no covariates. In the inflammatory skin disease study, none of the covariates were significantly associated with κ, enabling the calculation of an overall weighted κ for this unbalanced data set. In a second motivating example (multiple sclerosis), geographic location was significantly associated with κ. We also compared the results of our method with current methods, available for balanced data sets, for testing heterogeneity of weighted κ coefficients across strata (geographic locations).
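For two raters, the weighted κ coefficient itself can be computed directly from the cross-classification table of ratings. A minimal sketch with quadratic disagreement weights follows; the severity table is hypothetical, and the GEE approach for multiple raters with covariates is not reproduced here.

```python
def weighted_kappa(table, weight="quadratic"):
    """Weighted kappa for two raters from a k x k table of counts.
    Disagreement weights are (i - j)^2 (quadratic) or |i - j| (linear);
    kappa_w = 1 - sum(v * observed) / sum(v * expected)."""
    k = len(table)
    n = sum(sum(row) for row in table)
    row = [sum(table[i][j] for j in range(k)) for i in range(k)]
    col = [sum(table[i][j] for i in range(k)) for j in range(k)]
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            v = (i - j) ** 2 if weight == "quadratic" else abs(i - j)
            num += v * table[i][j] / n
            den += v * row[i] * col[j] / n ** 2
    return 1.0 - num / den

# Hypothetical 3-level severity ratings from two dermatologists
table = [[20, 5, 0],
         [4, 15, 6],
         [1, 3, 16]]
print(round(weighted_kappa(table), 4))
```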

18.
This article deals with the estimation of the lognormal-Pareto and the lognormal-generalized Pareto distributions, for which a general result concerning asymptotic optimality of maximum likelihood estimation cannot be proved. We develop a method based on probability weighted moments, showing that it can be applied straightforwardly to the first distribution only. In the lognormal-generalized Pareto case, we propose a mixed approach combining maximum likelihood and probability weighted moments. Extensive simulations analyze the relative efficiencies of the methods in various setups. Finally, the techniques are applied to two real datasets in the actuarial and operational risk management fields.

19.
A divergence measure between discrete probability distributions introduced by Csiszár (1967) generalizes the Kullback-Leibler information and several other information measures considered in the literature. We introduce a weighted divergence which generalizes the weighted Kullback-Leibler information considered by Taneja (1985). The weighted divergence between an empirical distribution and a fixed distribution, and the weighted divergence between two independent empirical distributions, are investigated here for large simple random samples; the asymptotic distributions are shown to be either normal or equal to the distribution of a linear combination of independent χ2-variables.
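A weighted Kullback-Leibler information of the kind generalized here can be sketched directly as a weight-modulated sum over outcomes. The distributions and utility weights below are hypothetical, and this is only the simplest member of the weighted divergence family, not the general Csiszár form.

```python
import math

def weighted_kl(p, q, w):
    """Weighted Kullback-Leibler information,
    sum_i w_i * p_i * log(p_i / q_i); with all w_i = 1 this reduces
    to the ordinary Kullback-Leibler divergence."""
    return sum(wi * pi * math.log(pi / qi)
               for pi, qi, wi in zip(p, q, w) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
w = [1.0, 1.0, 2.0]  # utility weights attached to the outcomes
print(round(weighted_kl(p, q, w), 6))
```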

20.
Reuse of controls in a nested case-control (NCC) study has not been considered feasible, since the controls are matched to their respective cases. In the last decade or so, however, methods have been developed that break the matching and allow for analyses where the controls are no longer tied to their cases. These methods can be divided into two groups: weighted partial likelihood (WPL) methods and full maximum likelihood methods. The weights in the WPL can be estimated in different ways, and four estimation procedures are discussed. In addition, we address the modifications needed to accommodate left truncation. A full likelihood approach is also presented, and we suggest an aggregation technique to decrease the computation time. Furthermore, we generalize calibration from case-cohort designs to NCC studies. We consider a competing risks situation and compare WPL, full likelihood and calibration through simulations and analyses of a real data example.
