Similar Documents
20 similar documents found (search time: 31 ms)
1.
Proportion differences are often used to estimate and test treatment effects in clinical trials with binary outcomes. In order to adjust for other covariates or for intra-subject correlation among repeated measures, logistic regression or longitudinal data analysis models such as generalized estimating equations or generalized linear mixed models may be used for the analyses. However, these analysis models are often based on the logit link, which yields parameter estimates and comparisons on the log-odds-ratio scale rather than on the proportion-difference scale. A two-step method has been proposed in the literature to approximate the calculation of confidence intervals for the proportion difference using a concept of effective sample sizes. However, the performance of this two-step method was not investigated in that paper. In this note, we examine the properties of the two-step method and propose an adjustment to the effective sample size formula based on Bayesian information theory. Simulations are conducted to evaluate the performance and to show that the modified effective sample size improves the coverage property of the confidence intervals.
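The effective-sample-size idea in the abstract above can be illustrated with a minimal sketch, assuming a simple Wald interval for the proportion difference in which (possibly adjusted) effective sample sizes replace the raw group sizes. The function name and all numbers are illustrative, not taken from the paper.

```python
import math

def wald_ci_prop_diff(p1, n1_eff, p2, n2_eff, z=1.96):
    """Wald confidence interval for the proportion difference p1 - p2,
    with (possibly adjusted) effective sample sizes in the variance."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1_eff + p2 * (1 - p2) / n2_eff)
    return diff - z * se, diff + z * se

# hypothetical trial: covariate adjustment has shrunk nominal group
# sizes of 100 and 95 down to effective sizes of 80 and 75
lo, hi = wald_ci_prop_diff(0.30, 80.0, 0.20, 75.0)
print(round(lo, 3), round(hi, 3))  # → -0.035 0.235
```

A smaller effective sample size widens the interval, which is how the adjustment improves coverage when the naive sample sizes are too optimistic.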

2.
In many reliability applications, there may not be a unique plausible scale in which to measure time to failure or assess performance. This is especially the case when several measures of usage are available on each unit. For example, the age, the total number of flight hours, and the number of landings are usage measures that are often considered important in aircraft reliability. Similarly, in medical or biological applications of survival analysis there are often alternative scales (e.g., Oakes, 1995). This paper considers the definition of a "good" time scale, along with methods for determining one.

3.
Nonstationary panel data analysis: an overview of some recent developments   (Total citations: 2; self-citations: 0; citations by others: 2)
This paper overviews some recent developments in panel data asymptotics, concentrating on the nonstationary panel case, and gives a new result for models with individual effects. Underlying the recent theory are asymptotics for multi-indexed processes in which both indexes may pass to infinity. We review some of the new limit theory that has been developed, show how it can be applied, and give a new interpretation of individual effects in nonstationary panel data. Fundamental to the interpretation of much of the asymptotics is the concept of a panel regression coefficient, which measures the long-run average relation across a section of the panel. This concept is analogous to the statistical interpretation of the coefficient in a classical regression relation. A variety of nonstationary panel data models are discussed, and the paper reviews the asymptotic properties of estimators in these various models. Some recent developments in panel unit root tests and stationary dynamic panel regression models are also reviewed.

5.
We consider several generic geometric properties of location measures in the multivariate setting. Our study addresses the representativity of location measures by considering possible generalizations of the so-called Cauchy mean value property in a general framework mainly based on Φ-means. Our study shows that some caution is needed when using intuitive arguments, because the mean is, in some sense, the only “intuitively well-behaved” multivariate location measure.

6.
In this article we propose a novel framework for the modelling of non-stationary multivariate lattice processes. Our approach extends the locally stationary wavelet paradigm to the multivariate two-dimensional setting. As such, the framework we develop permits the estimation of a spatially localised spectrum within a channel of interest and, more importantly, of a localised cross-covariance which describes the localised coherence between channels. Associated estimation theory is also established, which demonstrates that this multivariate spatial framework is properly defined and has suitable convergence properties. We also demonstrate how this model-based approach can be successfully used to classify a range of colour textures provided by an industrial collaborator, yielding superior results when compared against current state-of-the-art statistical image processing methods.

7.
The combined model accounts for different forms of extra-variability and has traditionally been applied in the likelihood framework, or in the Bayesian setting via Markov chain Monte Carlo. In this article, integrated nested Laplace approximation is investigated as an alternative estimation method for the combined model for count data, and compared with the former estimation techniques. Longitudinal, spatial, and multi-hierarchical data scenarios are investigated in three case studies as well as a simulation study. As a conclusion, integrated nested Laplace approximation provides fast and precise estimation, while avoiding convergence problems often seen when using Markov chain Monte Carlo.

8.
It has often been complained that the standard framework of decision theory is insufficient. In most applications, neither the maximin paradigm (relying on complete ignorance about the states of nature) nor the classical Bayesian paradigm (assuming perfect probabilistic information on the states of nature) reflects the situation under consideration adequately. Typically one possesses some, but incomplete, knowledge of the stochastic behaviour of the states of nature. In this paper first steps towards a comprehensive framework for decision making under such complex uncertainty will be provided. Common expected utility theory will be extended to interval probability, a generalized probabilistic setting which has the power to express incomplete stochastic knowledge and to take the extent of ambiguity (non-stochastic uncertainty) into account. Since two-monotone and totally monotone capacities are special cases of general interval probability, where the Choquet integral and the interval-valued expectation correspond to one another, the results also show, as a welcome by-product, how to deal efficiently with Choquet expected utility and how to perform a neat decision analysis in the case of belief functions. Received: March 2000; revised version: July 2001

9.
There is current interest in the development of new or improved outcome measures for rheumatological diseases. In the early stages of development, attention is usually directed to how well the measure distinguishes between patients and whether different observers attach similar values of the measure to the same patient. An approach, based on variance components, to the assessment of outcome measures is presented. The need to assess different aspects of variation associated with a measure is stressed. The terms ‘observer reliability’ and ‘agreement’ are frequently used in the evaluation of measurement instruments, and are often used interchangeably. In this paper, we use the terms to refer to different concepts assessing different aspects of variation. They are likely to correspond well in heterogeneous populations, but not in homogeneous populations where reliability will generally be low but agreement may well be high. Results from a real patient exercise, designed to study a set of tools for assessing myositis outcomes, are used to illustrate the approach that examines both reliability and agreement, and the need to evaluate both is demonstrated. A new measure of agreement, based on the ratio of standard deviations, is presented and inference procedures are discussed. To facilitate the interpretation of the combination of measures of reliability and agreement, a classification system is proposed that provides a summary of the performance of the tools. The approach is demonstrated for discrete ordinal and continuous outcomes.
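The reliability/agreement distinction drawn above can be sketched numerically. The snippet below uses an ICC-style variance-components ratio for reliability and a within-to-between standard-deviation ratio as a stand-in agreement index; the formulas and data are illustrative, not the paper's exact measures.

```python
import statistics

def reliability_and_agreement(scores):
    """scores: one list of observer ratings per patient.
    Reliability: between-patient variance over total variance (ICC-style).
    Ratio: within/between SD (smaller = closer agreement); this is an
    illustrative stand-in, not the paper's exact agreement measure."""
    patient_means = [statistics.mean(s) for s in scores]
    var_between = statistics.pvariance(patient_means)
    var_within = statistics.mean(statistics.pvariance(s) for s in scores)
    reliability = var_between / (var_between + var_within)
    ratio = float("inf") if var_between == 0 else (var_within / var_between) ** 0.5
    return reliability, ratio

# heterogeneous patients, mildly noisy observers: reliability is high
r_het, _ = reliability_and_agreement([[10, 11], [20, 21], [30, 29], [40, 41]])
# homogeneous patients, identical observer noise: reliability collapses
r_hom, _ = reliability_and_agreement([[10, 11], [10, 11], [11, 10], [10, 11]])
print(round(r_het, 3), round(r_hom, 3))  # → 0.998 0.0
```

The observer noise is the same in both data sets; only the patient heterogeneity changes, which is exactly why the two concepts must be evaluated separately.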

10.
In this paper, we study the relationships between weighted distributions and their parent distributions in the context of the Lorenz curve, Lorenz ordering, and inequality measures. These relationships depend on the nature of the weight functions and give rise to interesting connections. The properties of weighted distributions for general weight functions are also investigated. It is shown how to derive and determine characterizations related to the Lorenz curve and other inequality measures for the cases where the weight functions are increasing or decreasing. Some of the results are applied to special cases of weighted distributions. We express the reliability measures of weighted distributions in terms of inequality measures to obtain some results. Length-biased and equilibrium distributions are discussed as weighted distributions in the reliability context via concentration curves. We also review and extend the problem of stochastic orderings and ageing classes under weighting. Finally, the relationships between weighted distributions and transformations are discussed.

11.
In this article we consider a problem from bone marrow transplant (BMT) studies where there is interest in assessing the effect of haplotype match between donor and patient on overall survival. The BMT study we consider is based on donors and patients that are genotype matched, and this therefore leads to a missing data problem. We show how Aalen's additive risk model can be applied in this setting, with the benefit that the time-varying haplomatch effect can be easily studied. This problem has not been considered before, and the standard approach, where one would use the expectation-maximization (EM) algorithm, cannot be applied for this model because the likelihood is hard to evaluate without additional assumptions. We suggest an approach based on multivariate estimating equations that are solved using a recursive structure. This approach leads to an estimator whose large sample properties can be developed using product-integration theory. Small sample properties are investigated using simulations in a setting that mimics the motivating haplomatch problem.

12.
The recurrent-event setting, where subjects experience multiple occurrences of the event of interest, is encountered in many biomedical applications. In analyzing recurrent event data, noninformative censoring is often assumed for the implementation of statistical methods. However, when a terminating event such as death serves as part of the censoring mechanism, the validity of the censoring assumption may be violated, because recurrence can be a powerful risk factor for death. We consider joint modeling of the recurrent event process and the terminating event under a Bayesian framework in which a shared frailty is used to model the association between the intensity of the recurrent event process and the hazard of the terminating event. Our proposed model is implemented on data from a well-known cancer study.
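A small simulation sketches why the noninformative-censoring assumption fails in this setting: a shared gamma frailty that raises a subject's recurrence intensity also raises the death hazard, so heavy recurrers tend to be censored by death early. All rates and the frailty shape below are invented for illustration; this is not the paper's model fit.

```python
import random

random.seed(1)

def simulate_subject(base_rate=0.5, base_death=0.1, shape=2.0, horizon=10.0):
    """Shared gamma frailty z scales both the recurrent-event intensity
    (z * base_rate) and the terminal-event hazard (z * base_death)."""
    z = random.gammavariate(shape, 1.0 / shape)  # mean-one frailty
    follow_up = min(random.expovariate(z * base_death), horizon)
    t, events = 0.0, 0
    while True:
        t += random.expovariate(z * base_rate)  # next recurrence gap
        if t > follow_up:
            return events, follow_up
        events += 1

subjects = [simulate_subject() for _ in range(2000)]
early = [e / f for e, f in subjects if f < 5.0]   # censored early by death
late = [e / f for e, f in subjects if f >= 5.0]
# informative censoring: early deaths tend to coincide with higher rates
print(round(sum(early) / len(early), 2), round(sum(late) / len(late), 2))
```

Ignoring this dependence and treating death as noninformative censoring would understate the event rate among the sickest subjects, which is the bias the joint frailty model is designed to avoid.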

13.
High dimensional multivariate mixed models for binary questionnaire data   (Total citations: 1; self-citations: 0; citations by others: 1)
Questionnaires that are used to measure the effect of an intervention often consist of different sets of items, each set possibly measuring another concept. Mixed models with set-specific random effects are a flexible tool to model the different sets of items jointly. However, computational problems typically arise as the number of sets increases. This is especially true when the random-effects distribution cannot be integrated out analytically, as with mixed models for binary data. A pairwise modelling strategy, in which all possible bivariate mixed models are fitted and where inference follows from pseudolikelihood theory, has been proposed as a solution. This approach has been applied to assess the effect of physical activity on psychocognitive functioning, the latter measured by a battery of questionnaires.

14.
Modelling and simulation (M&S) is increasingly being applied in (clinical) drug development. It provides an opportune area for the community of pharmaceutical statisticians to pursue. In this article, we highlight useful principles behind the application of M&S. We claim that M&S should be focussed on decisions, tailored to its purpose and based in applied sciences, not relying entirely on data-driven statistical analysis. Further, M&S should be a continuous process making use of diverse information sources and applying Bayesian and frequentist methodology, as appropriate. In addition to forming a basis for analysing decision options, M&S provides a framework that can facilitate communication between stakeholders. Besides the discussion on modelling philosophy, we also describe how standard simulation practice can be ineffective and how simulation efficiency can often be greatly improved.

15.
The classical Metropolis-Hastings (MH) algorithm can be extended to generate non-reversible Markov chains. This is achieved by means of a modification of the acceptance probability, using the notion of a vorticity matrix. The resulting Markov chain is non-reversible. Results from the literature on asymptotic variance, large deviations theory, and mixing time are mentioned and, in the case of a large deviations result, adapted to explain how non-reversible Markov chains have favorable properties in these respects. We provide an application of non-reversible MH (NRMH) in a continuous setting by developing the necessary theory and applying it, as first examples, to Gaussian distributions in three and nine dimensions. The empirical autocorrelation and estimated asymptotic variance for NRMH applied to these examples show significant improvement compared to MH with identical stepsize.

16.
Let toxicity to treatment be a Bernoulli random variable for which the probability of failure increases with dose. Consider the problem of identifying a dose μ having a pre-specified probability of failure, using data from groups of subjects who arrive sequentially for treatment. There is considerable theory available in this setting for fully sequential up-and-down procedures. This paper presents asymptotic and finite-sample theoretical results for Markovian up-and-down procedures when subjects are treated in groups. Practical instructions are given on how to select the design parameters so as to cause the treatments to cluster around the unknown dose μ. Examples are given to illustrate how this group procedure behaves for small sample sizes.
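A minimal simulation of one such Markovian group design follows, with invented toxicity probabilities and cut-offs chosen so that the balance point falls mid-grid: a group moves up after at most `c_lower` toxicities, down after at least `c_upper`, and stays otherwise. The dose grid and all parameters are illustrative.

```python
import random

random.seed(0)

def group_up_and_down(p_tox, start=0, groups=200, size=3, c_lower=0, c_upper=2):
    """Treat groups of `size` subjects at the current dose level; move up
    if at most c_lower toxicities are observed, down if at least c_upper,
    stay otherwise. Returns the number of groups treated at each level."""
    level, visits = start, [0] * len(p_tox)
    for _ in range(groups):
        visits[level] += 1
        tox = sum(random.random() < p_tox[level] for _ in range(size))
        if tox <= c_lower and level < len(p_tox) - 1:
            level += 1
        elif tox >= c_upper and level > 0:
            level -= 1
    return visits

# increasing toxicity curve; with these cut-offs the chain should cluster
# around the levels with roughly 30-55% toxicity
visits = group_up_and_down([0.05, 0.15, 0.30, 0.55, 0.80])
print(visits)
```

This is the clustering behaviour the paper's design-parameter guidance aims to engineer: the balance point of the up/down transition probabilities sits near the target quantile.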

17.
The method of tail functions is applied to confidence estimation of the exponential mean in the presence of prior information. It is shown how the “ordinary” confidence interval can be generalized using a class of tail functions and then engineered for optimality, in the sense of minimizing prior expected length over that class, whilst preserving frequentist coverage. It is also shown how to derive the globally optimal interval, and how to improve on this using tail functions when criteria other than length are taken into consideration. Probabilities of false coverage are reported for some of the intervals under study, and the theory is illustrated by application to confidence estimation of a reliability coefficient based on some survival data.

18.
Since efficiency represents a measure of the average welfare level of a society, it is often used as a synonym for “mean income” in (traditional) welfare measurement theory, which neglects price information. In this note, we introduce a rather general concept of efficiency that takes price information into account. Requiring some reasonable properties for efficiency judgements, we look for real indicators, namely efficiency measures. By this axiomatic approach, we easily characterize a special class of (sequences of) functions that will turn out to be the real average income.

19.
An alternative approach is applied to the reliability analysis of standby systems on the basis of the matrix renewal function. In this regard, a single-server, two-identical-unit cold standby system with an imperfect switch is considered as a three-state semi-Markov process. Several important reliability measures, such as availability, mean time to failure, and expected number of failures, are obtained for general lifetime distributions. The main results are then specialized to the case of exponential lifetimes, for which explicit formulas are obtained, together with some numerical illustrations. This approach can easily be extended to more general standby systems with different configurations.

20.
In a two-treatment trial, a two-sided test is often used to reach a conclusion. Usually we are interested in a two-sided test because there is no prior preference between the two treatments and we want a three-decision framework. When the standard control is just as good as the new experimental treatment (which has the same toxicity and cost), we will accept both treatments. Only when the standard control is clearly worse or better than the new experimental treatment do we choose only one treatment. In this paper, we extend the concept of a two-sided test to the multiple-treatment trial where three or more treatments are involved. The procedure turns out to be a subset selection procedure; however, the theoretical framework and performance requirement differ from those of existing subset selection procedures. Two procedures (exclusion and inclusion) are developed here for the case of normal data with equal known variance. If the sample size is large, they can be applied with unknown variance, and with binomial data or survival data with random censoring.
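One simple exclusion-type rule of the kind described can be sketched as follows: with k normal arms and common known variance, retain every arm whose sample mean falls within a constant multiple of the standard error of a difference of the best observed mean. The constant d and all numbers are illustrative; in the paper it would be calibrated to the stated performance requirement, and this sketch is not the authors' exact procedure.

```python
import math

def select_subset(means, n, sigma, d):
    """Retain arm i when mean_i >= max(means) - d * sigma * sqrt(2/n).
    n subjects per arm, common known sigma; d is an illustrative constant
    that would in practice be chosen to meet a probability requirement."""
    cut = max(means) - d * sigma * math.sqrt(2.0 / n)
    return [i for i, m in enumerate(means) if m >= cut]

# three arms: the first two look comparable, the third clearly inferior
print(select_subset([5.2, 5.0, 3.1], n=25, sigma=1.0, d=2.0))  # → [0, 1]
```

Retaining a subset rather than a single winner is what makes this a three-decision-style extension: statistically indistinguishable treatments are accepted together.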


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号