1.
Daniel Bruce 《Communications in Statistics: Theory and Methods》2013,42(16):2606-2616
This article proposes a simplification of the model for dependent binary variables presented in Cox and Snell (1989). The new model, referred to as the simplified Cox model, is developed for identically distributed and dependent binary variables. Properties of the model are presented, including expressions for the log-likelihood function and the Fisher information. Under mutual independence, a general expression for the restrictions on the parameters is derived. The simplified Cox model is illustrated using a data set from a clinical trial.
3.
Kashinath Chatterjee, Angshuman Sarkar, Dennis K.J. Lin 《Journal of Statistical Planning and Inference》2008
A supersaturated design is essentially a fractional factorial design whose number of experimental variables is greater than or equal to its number of experimental runs. Under the effect sparsity assumption, a supersaturated design can be very cost-effective. In this paper, our prime objective is to compare the existing two-level supersaturated designs for the noisy case through the probability of correct searching, a powerful criterion proposed by Shirakura et al. [1996. Searching probabilities for nonzero effects in search designs for the noisy case. Ann. Statist. 24, 2560–2568]. An algorithm is proposed to construct supersaturated designs with a high probability of correct searching. Examples are given for illustration.
4.
The aim of this paper is to present new likelihood-based goodness-of-fit tests for the two-parameter Weibull distribution. These tests consist of nesting the Weibull distribution in three-parameter generalized Weibull families and testing the value of the third parameter using the Wald, score, and likelihood ratio procedures. We simplify the usual likelihood-based tests by eliminating the nuisance parameters, using three estimation methods. The proposed tests are not asymptotic. A comprehensive comparison study is presented. Among a large range of possible goodness-of-fit tests, the best ones are identified. The results depend strongly on the shape of the underlying hazard rate.
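The tests described in this abstract are exact and use three specific generalized Weibull families; as a rough sketch of the nesting idea only, the following Python snippet tests a two-parameter Weibull null against a generalized gamma alternative (which contains the Weibull as the special case a = 1) with an asymptotic likelihood-ratio test. The choice of nesting family and the chi-square calibration are assumptions for illustration, not the paper's procedure.

```python
# Sketch: likelihood-ratio test of a two-parameter Weibull null against a
# three-parameter generalized gamma alternative (which nests the Weibull at a=1).
# The asymptotic chi-square calibration is an illustration only; the tests in
# the paper are exact and use different generalized Weibull families.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = stats.weibull_min.rvs(1.5, scale=2.0, size=200, random_state=rng)

# Null fit: two-parameter Weibull (location fixed at 0).
c0, loc0, scale0 = stats.weibull_min.fit(x, floc=0)
ll0 = stats.weibull_min.logpdf(x, c0, loc0, scale0).sum()

# Alternative fit: generalized gamma (location fixed at 0); a=1 recovers the Weibull.
a1, c1, loc1, scale1 = stats.gengamma.fit(x, floc=0)
ll1 = stats.gengamma.logpdf(x, a1, c1, loc1, scale1).sum()

lr = max(2.0 * (ll1 - ll0), 0.0)          # likelihood-ratio statistic
p_value = stats.chi2.sf(lr, df=1)         # asymptotic p-value for the extra parameter
print(f"LR = {lr:.3f}, p = {p_value:.3f}")
```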
5.
6.
This article addresses parameter estimation in linear systems in the presence of Gaussian noise, for which the random-number search algorithm (the Luus-Jaakola (LJ) algorithm) is combined with the Rao-Blackwellised particle filter (RBPF) algorithm. This yields the so-called RBPF algorithm based on LJ (RBPF-LJ). Unlike mature variants of the generic particle filter, RBPF-LJ sets its parameter particles as random numbers that search within the parameter value range, which is adjusted according to the estimation results to track changes in the unknown parameter. Comparative simulations show that the proposed RBPF-LJ outperforms the RBPF as well as the particle filter based on the kernel smoothing contraction algorithm in estimating dynamic linear or nonlinear parameters, and it obtains similar estimation results for static parameters if some coefficients are adjusted.
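The full RBPF-LJ filter is not reproduced here. As a rough illustration of the Luus-Jaakola component the article builds on, the sketch below shows the basic LJ random search: candidates are drawn uniformly in a region around the current best point and the region is contracted after each iteration. The toy least-squares objective and the contraction rate are assumptions for illustration.

```python
# Sketch of the basic Luus-Jaakola (LJ) random search; this is NOT the RBPF-LJ
# filter from the article, only the random-search building block it refers to.
import numpy as np

def luus_jaakola(objective, x0, radius, n_iter=100, n_candidates=20,
                 contraction=0.95, rng=None):
    rng = np.random.default_rng(rng)
    best_x = np.asarray(x0, dtype=float)
    best_f = objective(best_x)
    radius = np.asarray(radius, dtype=float)
    for _ in range(n_iter):
        for _ in range(n_candidates):
            candidate = best_x + rng.uniform(-radius, radius)
            f = objective(candidate)
            if f < best_f:
                best_x, best_f = candidate, f
        radius *= contraction            # shrink the search region
    return best_x, best_f

# Toy example: recover the parameters of y = a*x + b from noisy data.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 0.5 + rng.normal(scale=0.1, size=x.size)
sse = lambda theta: np.sum((y - (theta[0] * x + theta[1])) ** 2)
theta_hat, _ = luus_jaakola(sse, x0=[0.0, 0.0], radius=[5.0, 5.0])
print(theta_hat)   # should be close to (2.0, 0.5)
```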
7.
8.
《Journal of Statistical Computation and Simulation》2012,82(9):643-658
The linear mixed-effects model (Verbeke and Molenberghs, 2000) has become a standard tool for the analysis of continuous hierarchical data such as, for example, repeated measures or data from meta-analyses. However, in certain situations the model does pose insurmountable computational problems. Precisely this has been the experience of Buyse et al. (2000a) who proposed an estimation- and prediction-based approach for evaluating surrogate endpoints. Their approach requires fitting linear mixed models to data from several clinical trials. In doing so, these authors built on the earlier, single-trial based, work by Prentice (1989), Freedman et al. (1992), and Buyse and Molenberghs (1998). While Buyse et al. (2000a) claim their approach has a number of advantages over the classical single-trial methods, a solution needs to be found for the computational complexity of the corresponding linear mixed model. In this paper, we propose and study a number of possible simplifications. This is done by means of a simulation study and by applying the various strategies to data from three clinical studies: Pharmacological Therapy for Macular Degeneration Study Group (1977), Ovarian Cancer Meta-analysis Project (1991) and Corfu-A Study Group (1995).
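One common way to reduce the computational burden of the full multi-trial random-effects model is a two-stage approach: fit trial-specific regressions of the surrogate and the true endpoint on treatment, then relate the estimated treatment effects across trials. The minimal sketch below illustrates that idea on simulated data; it is not claimed to be the exact set of simplification strategies studied in the paper.

```python
# Sketch of a two-stage simplification: stage 1 fits trial-specific regressions
# of the surrogate (S) and the true endpoint (T) on treatment (Z); stage 2
# regresses the estimated treatment effects on T against those on S across
# trials. Simulated data; illustrative only, not the paper's analyses.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_per_trial = 20, 50
alpha_i, beta_i = [], []                         # per-trial effects on S and on T
for _ in range(n_trials):
    a = rng.normal(3, 1)                         # trial-specific effect on S
    b = 1.5 * a + rng.normal(scale=0.5)          # effects correlated across trials
    z = rng.integers(0, 2, n_per_trial)          # treatment indicator
    s = a * z + rng.normal(size=n_per_trial)
    t = b * z + rng.normal(size=n_per_trial)
    X = np.column_stack([np.ones(n_per_trial), z])
    alpha_i.append(np.linalg.lstsq(X, s, rcond=None)[0][1])
    beta_i.append(np.linalg.lstsq(X, t, rcond=None)[0][1])

# Stage 2: trial-level association between treatment effects on S and on T.
alpha_i, beta_i = np.array(alpha_i), np.array(beta_i)
r2_trial = np.corrcoef(alpha_i, beta_i)[0, 1] ** 2
print(f"trial-level R^2 = {r2_trial:.2f}")
```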
9.
Simplified Estimating Functions for Diffusion Models with a High-dimensional Parameter
We consider estimating functions for discretely observed diffusion processes of the following type: for one part of the parameter of interest we propose to use a simple and explicit estimating function of the type studied by Kessler (2000); for the remaining part of the parameter we use a martingale estimating function. Such an approach is particularly useful in practical applications when the parameter is high-dimensional. It is also often necessary to supplement a simple estimating function by another type of estimating function because only the part of the parameter on which the invariant measure depends can be estimated by a simple estimating function. Under regularity conditions the resulting estimators are consistent and asymptotically normal. Several examples are considered in order to demonstrate the idea of the estimating procedure. The method is applied to two data sets comprising wind velocities and stock prices.
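A toy illustration of splitting the parameter in this way, for an Ornstein-Uhlenbeck process rather than the general diffusions treated in the paper: the mean and stationary variance depend only on the invariant measure and can be estimated by simple moment-type estimating functions, while the mean-reversion rate is recovered from the conditional mean (a martingale-type estimating function). The OU model and these particular moment estimators are assumptions for illustration.

```python
# Toy illustration for dX_t = theta*(mu - X_t) dt + sigma dW_t, observed at step dt:
# mu and the stationary variance sigma^2/(2*theta) come from simple estimating
# functions (sample mean and variance); theta comes from the conditional mean
# E[X_{t+dt} | X_t], a martingale-type estimating function. Illustrative only.
import numpy as np

rng = np.random.default_rng(3)
theta, mu, sigma, dt, n = 2.0, 1.0, 0.5, 0.05, 100_000

# Simulate the exact OU transition.
x = np.empty(n)
x[0] = mu
e = np.exp(-theta * dt)
sd = sigma * np.sqrt((1 - e**2) / (2 * theta))
for i in range(1, n):
    x[i] = mu + (x[i - 1] - mu) * e + sd * rng.normal()

# Invariant-measure part: simple estimating functions (sample mean and variance).
mu_hat = x.mean()
var_hat = x.var()                              # estimates sigma^2 / (2*theta)

# Remaining part: lag-one autocorrelation identifies exp(-theta*dt).
rho_hat = np.corrcoef(x[:-1], x[1:])[0, 1]
theta_hat = -np.log(rho_hat) / dt
sigma_hat = np.sqrt(2 * theta_hat * var_hat)
print(mu_hat, theta_hat, sigma_hat)            # should be near (1.0, 2.0, 0.5)
```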
10.
In this paper, we propose an estimation method when sample data are incomplete. We decompose the likelihood according to missing patterns and combine the estimators based on each likelihood, weighting by the Fisher information ratio. This approach provides a simple way of estimating parameters, especially for non-monotone missing data. Numerical examples are presented to illustrate this method.
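A simplified univariate illustration of the combining idea: estimate the parameter separately within each missing-data pattern and average the estimates with weights proportional to their Fisher information. The normal-mean setting with known variance below is an assumption for illustration, not the paper's general development.

```python
# Combine pattern-specific estimators with weights proportional to Fisher
# information. For a normal mean with known variance, the information in a
# pattern with n_k usable observations is n_k / sigma^2, so the combined
# estimator is an information-weighted average. Illustrative setting only.
import numpy as np

rng = np.random.default_rng(4)
sigma = 2.0
# Three missing-data patterns with different numbers of usable observations.
patterns = [rng.normal(10.0, sigma, size=n_k) for n_k in (50, 30, 120)]

estimates = np.array([p.mean() for p in patterns])        # pattern-wise estimators
info = np.array([p.size / sigma**2 for p in patterns])    # Fisher information per pattern
weights = info / info.sum()                                # information-ratio weights
mu_hat = np.sum(weights * estimates)
se = np.sqrt(1.0 / info.sum())
print(f"combined estimate = {mu_hat:.3f} (se = {se:.3f})")
```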
11.
12.
Sara Randall 《Serials Review》2013,39(3):181-182
With the ubiquity of federated search tools as viable solutions for students and researchers to search across external sources and internal repositories, users have become more sophisticated in their application of this technology. With the goal of optimizing the user experience of its newly introduced federated search offering, Endeavor Information Systems employed usability testing to gain insights into the critical requirements of librarians and information professionals for this solution. The results of this analysis, conducted in concert with market research and customer focus groups, and their relevance to Endeavor's federated search technology are the focus of this article.
13.
Gronnesby and Borgan (1996) propose an overall goodness-of-fit test for the Cox proportional hazards model. The basis of their test is a grouping of subjects by their estimated risk score. We show that the Gronnesby and Borgan test is algebraically identical to one obtained by adding group indicator variables to the model and testing, via the score test, the hypothesis that the coefficients of the group indicator variables are zero, thus showing that the test can be calculated using existing software. We demonstrate that the table of observed and estimated expected numbers of events within each group of the risk score is a useful adjunct to the test to help identify potential problems in fit.
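A minimal sketch of the observed-versus-expected table that accompanies the test, using the lifelines package: subjects are grouped by deciles of the estimated risk score, and the expected number of events per subject is recovered from the martingale residual (event indicator minus residual). The simulated data, the use of lifelines, and the choice of deciles are assumptions for illustration; the score test itself is not shown here.

```python
# Observed vs expected events by risk-score decile after a Cox fit.
# Expected events per subject = event indicator - martingale residual.
# Simulated data and lifelines usage are illustrative assumptions.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 500
x1, x2 = rng.normal(size=n), rng.binomial(1, 0.4, size=n)
t = rng.exponential(scale=np.exp(-(0.7 * x1 + 0.5 * x2)))   # event times
c = rng.exponential(scale=2.0, size=n)                       # censoring times
df = pd.DataFrame({"x1": x1, "x2": x2,
                   "time": np.minimum(t, c), "event": (t <= c).astype(int)})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
risk = cph.predict_partial_hazard(df)                        # exp(linear predictor)
mart = cph.compute_residuals(df, kind="martingale")["martingale"]
expected = df["event"] - mart                                # estimated cumulative hazards

group = pd.qcut(risk, 10, labels=False)                      # deciles of the risk score
table = pd.DataFrame({"observed": df["event"], "expected": expected}).groupby(group).sum()
print(table)
```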
14.
We consider robust permutation tests for a location shift in the two-sample case based on estimating equations, comparing test statistics based on a score function and on an M-estimate. First we obtain a form for both tests so that the exact tests may be carried out using the same algorithms as used for permutation tests based on the mean. Then we obtain the Bahadur slopes of the tests based on these two statistics, giving numerical results for two cases: one equivalent to a test based on Huber scores and a particular case of this related to a median test. We show that they have different Bahadur slopes, with neither exceeding the other over the whole range. Finally, we give some numerical results illustrating the robustness properties of the tests and confirming the theoretical results on Bahadur slopes.
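A minimal sketch of a two-sample permutation test based on Huber scores, in the spirit of the score-function statistic described above: the pooled observations are converted to Huber scores and the sum of scores in the first sample is compared with its permutation distribution. The tuning constant and the pooled median/MAD standardization are assumptions for illustration, not necessarily the paper's exact statistics.

```python
# Permutation test with a Huber-score statistic: scores are computed once from
# the pooled sample and only the group labels are permuted, so the same
# algorithm as a permutation test based on the mean applies. Illustrative only.
import numpy as np

def huber_psi(u, k=1.345):
    return np.clip(u, -k, k)

def permutation_test_huber(x, y, n_perm=10_000, rng=None):
    rng = np.random.default_rng(rng)
    pooled = np.concatenate([x, y])
    center = np.median(pooled)
    scale = np.median(np.abs(pooled - center)) * 1.4826      # MAD scale estimate
    scores = huber_psi((pooled - center) / scale)
    n1 = len(x)
    observed = scores[:n1].sum()
    perm_stats = np.array([rng.permutation(scores)[:n1].sum() for _ in range(n_perm)])
    null_mean = scores.mean() * n1                            # expectation under permutation
    p = np.mean(np.abs(perm_stats - null_mean) >= np.abs(observed - null_mean))
    return observed, p

rng = np.random.default_rng(6)
x = rng.standard_t(3, size=30) + 0.8          # shifted, heavy-tailed sample
y = rng.standard_t(3, size=35)
print(permutation_test_huber(x, y, rng=7))
```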
15.
A. J. Hayter 《Communications in Statistics: Theory and Methods》2013,42(20):5966-5976
This article considers the problem of choosing between two possible treatments which are each modeled with a Poisson distribution. Win-probabilities are defined as the probabilities that a single potential future observation from one of the treatments will be better than, or at least as good as, a potential future observation from the other treatment. Using historical data from the two treatments, it is shown how estimates and confidence intervals can be constructed for the win-probabilities. Extensions to situations with three or more treatments are also discussed. Some examples and illustrations are provided, and the relationship between this methodology and standard inference procedures on the Poisson parameters is discussed.
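A minimal sketch of a plug-in estimate of such a win-probability for two Poisson treatments, where "better" is taken here to mean a larger count and the Poisson means are replaced by sample estimates. Only the point estimate is shown; the confidence interval construction discussed in the article is not reproduced.

```python
# Plug-in estimate of P(X1* >= X2*) (or P(X1* > X2*)) for two Poisson treatments.
# The direction of "better" and the example data are illustrative assumptions.
import numpy as np
from scipy import stats

def poisson_win_probability(lam1, lam2, at_least_as_good=True, kmax=500):
    k = np.arange(kmax + 1)
    p1 = stats.poisson.pmf(k, lam1)                 # P(X1* = k)
    if at_least_as_good:
        p2 = stats.poisson.cdf(k, lam2)             # P(X2* <= k)
    else:
        p2 = stats.poisson.cdf(k - 1, lam2)         # P(X2* < k)
    return float(np.sum(p1 * p2))

# Historical data: total counts and exposures for the two treatments.
lam1_hat, lam2_hat = 42 / 10, 55 / 10               # sample means as estimates
print(poisson_win_probability(lam1_hat, lam2_hat))
```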
16.
This article considers the problem of choosing between two treatments that have binary outcomes with unknown success probabilities p1 and p2. The choice is based upon the information provided by two observations X1 ~ B(n1, p1) and X2 ~ B(n2, p2) from independent binomial distributions. Standard approaches to this problem utilize basic statistical inference methodologies such as hypothesis tests and confidence intervals for the difference p1 - p2 of the success probabilities. However, in this article the analysis of win-probabilities is considered. If X*1 represents a potential future observation from Treatment 1 while X*2 represents a potential future observation from Treatment 2, win-probabilities are defined in terms of the comparisons of X*1 and X*2. These win-probabilities provide a direct assessment of the relative advantages and disadvantages of choosing either treatment for one future application, and their interpretation can be combined with other factors such as costs, side-effects, and the availabilities of the two treatments. In this article, it is shown how confidence intervals for the win-probabilities can be constructed, and examples of their use are provided. Computer code for the implementation of this new methodology is available from the authors.
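For the binomial case, a plug-in sketch of the win, tie, and lose probabilities for the comparison of X*1 and X*2, with the success probabilities replaced by their sample estimates. The example counts are illustrative, and only point estimates are shown, not the confidence intervals constructed in the article.

```python
# Plug-in win/tie/lose probabilities for X1* ~ B(n1, p1_hat) vs X2* ~ B(n2, p2_hat).
# Example data are illustrative assumptions.
import numpy as np
from scipy import stats

def binomial_win_probabilities(x1, n1, x2, n2):
    p1_hat, p2_hat = x1 / n1, x2 / n2
    k1 = np.arange(n1 + 1)
    pmf1 = stats.binom.pmf(k1, n1, p1_hat)              # P(X1* = k)
    cdf2_below = stats.binom.cdf(k1 - 1, n2, p2_hat)    # P(X2* < k)
    pmf2_equal = stats.binom.pmf(k1, n2, p2_hat)        # P(X2* = k)
    win = float(np.sum(pmf1 * cdf2_below))
    tie = float(np.sum(pmf1 * pmf2_equal))
    return {"win": win, "tie": tie, "lose": 1.0 - win - tie}

print(binomial_win_probabilities(x1=18, n1=25, x2=14, n2=30))
```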
17.
Paul R Rosenbaum 《Communications in Statistics: Theory and Methods》2013,42(11):2687-2698
In many experiments where data have been collected at two points in time (pre-treatment and post-treatment), investigators wish to determine if there is a difference between two treatment groups. In recent years it has been proposed that an appropriate statistical analysis to determine if treatment differences exist is to use the post-treatment values as the primary comparison variables and the pre-treatment values as covariates. When there are several outcome variables, we propose new tests based on residuals as alternatives to existing methods and investigate how the powers of the new and existing tests are affected by various choices of covariates. The limiting distribution of the test statistic of the new test based on residuals is given. Monte Carlo simulations are employed in the power comparisons.
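A simple univariate sketch of a residual-based comparison: regress the post-treatment outcome on the pre-treatment covariate with the groups pooled, then compare the residuals between treatment groups. The paper's tests are multivariate and have their own limiting distribution; the two-sample t test on residuals below is only an illustrative stand-in on simulated data.

```python
# Univariate illustration: adjust post-treatment values for the pre-treatment
# covariate, then compare the residuals between the two treatment groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n = 40
pre = rng.normal(50, 10, size=2 * n)
group = np.repeat([0, 1], n)                          # two treatment groups
post = 5 + 0.8 * pre + 3.0 * group + rng.normal(0, 5, size=2 * n)

# Fit the covariate model without the group term, then compare residuals.
X = np.column_stack([np.ones(2 * n), pre])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
resid = post - X @ beta
print(stats.ttest_ind(resid[group == 1], resid[group == 0]))
```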
18.
Many goodness-of-fit tests for bivariate normality are not rigorous procedures because the distributions of the proposed statistics are unknown or too difficult to manipulate. Two familiar examples are the ring test and the line test. In both tests the statistic utilized is generally approximated by a chi-square distribution rather than compared to its known beta distribution. These two procedures are re-examined and re-evaluated in this paper. It is shown that the chi-square approximation can be too conservative and can lead to unnecessary rejection of normality.
19.
This paper presents a selection procedure that combines Bechhofer's indifference zone selection and Gupta's subset selection approaches by using a preference threshold. For normal populations with common known variance, a subset is selected consisting of all populations whose sample sums lie within this threshold of the largest sample sum. We derive the minimal necessary sample size and the value of the preference threshold needed to satisfy two probability requirements for correct selection, one related to indifference zone selection, the other to subset selection. Simulation studies are used to illustrate the method.
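A minimal sketch of the selection rule itself, with a Monte Carlo check of how often the best population is retained. The sample size, threshold, and configuration of means below are illustrative assumptions; the paper derives the minimal n and threshold that guarantee the two probability requirements.

```python
# Keep every population whose sample sum lies within a preference threshold d of
# the largest sample sum, and estimate P(best population retained) by simulation.
# The values of n, sigma, d, and the means are illustrative assumptions.
import numpy as np

def select(sample_sums, d):
    return np.flatnonzero(sample_sums >= sample_sums.max() - d)

rng = np.random.default_rng(9)
means = np.array([0.0, 0.0, 0.0, 0.5])      # population 3 is best
n, sigma, d = 30, 1.0, 8.0                  # illustrative values only
correct = 0
for _ in range(5_000):
    sums = rng.normal(loc=means * n, scale=sigma * np.sqrt(n))   # sample sums
    correct += 3 in select(sums, d)
print("P(best population retained) ≈", correct / 5_000)
```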
20.
This paper considers confidence intervals for the difference of two binomial proportions. Some currently used approaches are discussed and a new approach is proposed. These approaches are thoroughly compared under several commonly used criteria. The widely used Wald confidence interval (CI) is far from satisfactory, while Newcombe's CI, the new recentered CI, and the score CI perform very well. Recommendations are given for which approach is applicable in different situations.
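A minimal sketch of two of the intervals compared here: the Wald interval and Newcombe's Wilson-based hybrid interval for p1 - p2. The example counts are illustrative, and the recentered interval proposed in the paper is not shown.

```python
# Wald and Newcombe (hybrid Wilson) confidence intervals for p1 - p2.
# Example data are illustrative assumptions.
import numpy as np
from scipy import stats

def wilson(x, n, z):
    p = x / n
    center = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return center - half, center + half

def wald_ci(x1, n1, x2, n2, level=0.95):
    z = stats.norm.ppf(0.5 + level / 2)
    p1, p2 = x1 / n1, x2 / n2
    half = z * np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return p1 - p2 - half, p1 - p2 + half

def newcombe_ci(x1, n1, x2, n2, level=0.95):
    z = stats.norm.ppf(0.5 + level / 2)
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson(x1, n1, z)
    l2, u2 = wilson(x2, n2, z)
    d = p1 - p2
    return (d - np.sqrt((p1 - l1)**2 + (u2 - p2)**2),
            d + np.sqrt((u1 - p1)**2 + (p2 - l2)**2))

print(wald_ci(15, 40, 9, 45))
print(newcombe_ci(15, 40, 9, 45))
```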