Similar literature: 20 records found.
1.
In the search for the best of n candidates, two-stage procedures of the following type are in common use. In a first stage, weak candidates are removed, and the subset of promising candidates is then further examined. At a second stage, the best of the candidates in the subset is selected. In this article, optimization is aimed not at the parameter with the largest value but rather at the best performance of the selected candidates at Stage 2. Under a normal model, a new procedure based on posterior percentiles is derived using a Bayes approach, where nonsymmetric normal (proper and improper) priors are applied. Comparisons are made with two other procedures frequently used in selection decisions. The three procedures and their performances are illustrated with data from a recent recruitment process at a Midwestern university.
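A minimal Monte Carlo sketch of the generic two-stage selection scheme described above (screen by stage-1 sample means, then select by stage-2 sample means); it is not the paper's posterior-percentile rule, and all settings (candidate means, sample sizes, subset size) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_stage_select(true_means, n1, n2, subset_size, reps=5000):
    """Return how often the truly best candidate wins the second stage."""
    wins = 0
    for _ in range(reps):
        # Stage 1: screen out weak candidates by their stage-1 sample means.
        stage1 = rng.normal(true_means[:, None], 1.0,
                            (len(true_means), n1)).mean(axis=1)
        subset = np.argsort(stage1)[-subset_size:]
        # Stage 2: select the best performer among the retained candidates.
        stage2 = rng.normal(true_means[subset, None], 1.0,
                            (subset_size, n2)).mean(axis=1)
        wins += subset[np.argmax(stage2)] == np.argmax(true_means)
    return wins / reps

true_means = np.linspace(0.0, 1.0, 10)   # assumed candidate parameters
print("P(correct selection) ~", two_stage_select(true_means, n1=20, n2=40,
                                                 subset_size=3))
```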

2.
Mixed model selection is quite important in the statistical literature. To assist mixed model selection, we employ an adaptive LASSO penalty to propose a two-stage selection procedure for choosing both the random and the fixed effects. In the first stage, we utilize the penalized restricted profile log-likelihood to choose the random effects; in the second stage, after the random effects are determined, we apply the penalized profile log-likelihood to select the fixed effects. In each stage, the Newton–Raphson algorithm is performed to complete the parameter estimation. We prove that the proposed procedure is consistent and possesses the oracle properties. Simulations and a real data application are conducted to demonstrate the effectiveness of the proposed selection procedure.

3.
In many clinical trials, the assessment of the response to interventions can include a large variety of outcome variables, which are generally correlated. The use of multiple significance tests is likely to increase the chance of detecting a difference in at least one of the outcomes between two treatments. Furthermore, univariate tests do not take into account the correlation structure. A new test is proposed that uses information from the interim analysis in a two-stage design to form the rejection region boundaries at the second stage. Initially, the test uses Hotelling's T² at the end of the first stage, allowing only for early acceptance of the null hypothesis, and an O'Brien-type procedure at the end of the second stage. This test allows one to 'cheat' and look at the data at the interim analysis to form rejection regions at the second stage, provided one uses the correct distribution of the final test statistic. This distribution is derived and the power of the new test is compared to the power of three common procedures for testing multiple outcomes: Bonferroni's inequality, Hotelling's T², and O'Brien's test. O'Brien's test has the best power to detect a difference when the outcomes are affected in exactly the same direction and magnitude, or in exactly the same relative effects, as those proposed prior to data collection. However, the statistic is not robust to deviations from the alternative parameters proposed a priori, especially for correlated outcomes. The proposed new statistic and the derivation of its distribution allow investigators to consider information from the first stage of a two-stage design and consequently base the final test on the direction observed at the first stage, or modify the statistic if the direction differs significantly from what was expected a priori.
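For reference, a sketch of the single-stage two-sample Hotelling's T² statistic that the proposed test builds on, with its exact F-based p-value under normality; the data below are simulated, and this is not the new two-stage statistic itself.

```python
import numpy as np
from scipy import stats

def hotelling_t2(X, Y):
    """Two-sample Hotelling's T^2 with an F-based p-value.

    Rows are observations, columns are (correlated) outcomes.
    """
    n1, p = X.shape
    n2, _ = Y.shape
    d = X.mean(axis=0) - Y.mean(axis=0)
    S = ((n1 - 1) * np.cov(X, rowvar=False) +
         (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)  # pooled covariance
    t2 = n1 * n2 / (n1 + n2) * d @ np.linalg.solve(S, d)
    f = (n1 + n2 - p - 1) / ((n1 + n2 - 2) * p) * t2          # F transformation
    return t2, stats.f.sf(f, p, n1 + n2 - p - 1)

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, (30, 3))
Y = rng.normal(0.3, 1.0, (30, 3))   # shifted means in all three outcomes
print(hotelling_t2(X, Y))
```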

4.
In this paper, we consider a k-level step-stress accelerated life-testing (ALT) experiment with unequal duration steps τ = (τ1, …, τk). Censoring is allowed only at the change-stress point in the final stage. A general log-location-scale lifetime distribution with mean life that is a linear function of stress, along with a cumulative exposure model, is considered as the working model. Under this model, the determination of the optimal choice of τ for both Weibull and lognormal distributions is addressed using the variance-optimality criterion. Numerical results show that for general log-location-scale distributions, the optimal k-step-stress ALT model with unequal duration steps reduces to a 2-level step-stress ALT model.

5.
Non-linear renewal theory is used to derive second-order asymptotic expansions for the coverage probability of a fixed-width sequential confidence interval for an unknown parameter x in the inverse linear regression model. These expansions are obtained for a two-stage sequential procedure, proposed by Perng and Tong (1974) for the construction of a confidence interval for x.
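A hedged sketch of the classical Stein-type two-stage idea underlying such procedures, applied to the simpler problem of a fixed-width interval for a normal mean (not Perng and Tong's inverse-regression setting); the pilot data and `half_width` are illustrative.

```python
import math
import numpy as np
from scipy import stats

def two_stage_fixed_width(sample_stage1, half_width, alpha=0.05):
    """Stein-type two-stage rule: stage 1 estimates the variance, which sets
    the total sample size needed so the final interval has half-width
    `half_width` with confidence 1 - alpha."""
    n0 = len(sample_stage1)
    s = np.std(sample_stage1, ddof=1)
    t = stats.t.ppf(1 - alpha / 2, df=n0 - 1)
    n_total = max(n0, math.ceil((t * s / half_width) ** 2))
    return n_total - n0          # additional observations to draw at stage 2

rng = np.random.default_rng(2)
pilot = rng.normal(10.0, 2.0, 15)
print("extra stage-2 observations:",
      two_stage_fixed_width(pilot, half_width=0.5))
```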

6.
Polygonal distributions are a class of distributions that can be defined via mixtures of triangular distributions over the unit interval. We demonstrate that the densities of polygonal distributions are dense in the class of continuous and concave densities with bounded second derivatives. Furthermore, we prove that polygonal density functions provide O(g⁻²) approximations (where g is the number of triangular distribution components), in the supremum distance, to any density function from the hypothesized class. Parametric consistency and Hellinger consistency results for the maximum likelihood (ML) estimator are obtained. A result regarding model selection via penalized ML estimation is proved.
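A minimal sketch of a polygonal density as a weighted mixture of triangular densities on [0, 1]; the modes and weights below are hypothetical mixture settings.

```python
import numpy as np
from scipy import stats

def polygonal_pdf(x, modes, weights):
    """Density of a polygonal distribution: a weighted mixture of triangular
    densities, each supported on [0, 1] with mode modes[j].

    The weights must be nonnegative and sum to one.
    """
    x = np.asarray(x)
    return sum(w * stats.triang.pdf(x, c=m, loc=0.0, scale=1.0)
               for m, w in zip(modes, weights))

grid = np.linspace(0.0, 1.0, 5)
print(polygonal_pdf(grid, modes=[0.2, 0.5, 0.8], weights=[0.3, 0.4, 0.3]))
```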

7.
A study of the distribution of a statistic involves two major steps: (a) working out its asymptotic (large n) distribution, and (b) making the connection between the asymptotic results and the distribution of the statistic for the sample sizes used in practice. This crucial second step is not included in many studies. In this article, the second step is applied to Durbin's (1951) well-known rank test of treatment effects in balanced incomplete block designs (BIBs). We found that the asymptotic χ² distributions do not provide adequate approximations in most BIBs. Consequently, we feel that several of Durbin's recommendations should be altered.
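A sketch of Durbin's statistic and the χ² approximation the article finds inadequate in most BIBs; the design and data below are a toy example.

```python
import numpy as np
from scipy import stats

def durbin_statistic(blocks, t):
    """Durbin's rank statistic for a balanced incomplete block design.

    `blocks` is a list of (treatment_indices, responses) pairs, one per block
    of size k; each treatment appears in exactly r blocks.  Under H0 the
    statistic is referred to a chi-square with t - 1 df (the asymptotic
    approximation in question).
    """
    k = len(blocks[0][0])
    r = sum(1 for trt, _ in blocks for j in trt if j == 0)  # replications
    rank_sums = np.zeros(t)
    for trt, y in blocks:
        ranks = stats.rankdata(y)          # within-block ranks
        for j, rk in zip(trt, ranks):
            rank_sums[j] += rk
    d = 12 * (t - 1) / (r * t * (k - 1) * (k + 1)) * np.sum(
        (rank_sums - r * (k + 1) / 2) ** 2)
    return d, stats.chi2.sf(d, t - 1)

# Toy BIB: t = 4 treatments, blocks of size k = 3, r = 3 (hypothetical data).
blocks = [((0, 1, 2), [1.2, 2.3, 0.8]), ((0, 1, 3), [1.0, 2.5, 3.1]),
          ((0, 2, 3), [0.9, 1.1, 2.8]), ((1, 2, 3), [2.2, 1.0, 2.9])]
print(durbin_statistic(blocks, t=4))
```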

8.
Numerous variable selection methods rely on a two-stage procedure, where a sparsity-inducing penalty is used in the first stage to predict the support, which is then conveyed to the second stage for estimation or inference purposes. In this framework, the first stage screens variables to find a set of possibly relevant variables, and the second stage operates on this set of candidate variables to improve estimation accuracy or to assess the uncertainty associated with the selection of variables. We advocate that more information can be conveyed from the first stage to the second one: we use the magnitude of the coefficients estimated in the first stage to define an adaptive penalty that is applied at the second stage. We give the example of an inference procedure that highly benefits from the proposed transfer of information. The procedure is precisely analyzed in a simple setting, and our large-scale experiments empirically demonstrate that actual benefits can be expected in much more general situations, with sensitivity gains ranging from 50 to 100% compared to the state of the art.
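A sketch of the magnitude-transfer idea using the standard column-rescaling implementation of an adaptive penalty; the stage-2 penalty level and the data are illustrative, and this shows adaptive estimation rather than the authors' exact inference procedure.

```python
import numpy as np
from sklearn.linear_model import Lasso, LassoCV

def two_stage_adaptive(X, y, eps=1e-6):
    """Stage-1 Lasso coefficients define an adaptive penalty for stage 2,
    implemented by the usual column-rescaling trick (weight 1/|beta1_j|)."""
    beta1 = LassoCV(cv=5).fit(X, y).coef_          # stage 1: screening fit
    w = np.abs(beta1) + eps                        # transferred magnitudes
    beta2 = Lasso(alpha=0.1).fit(X * w, y).coef_   # stage 2 on rescaled design
    return beta2 * w                               # map back to original scale

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 20))
y = X[:, 0] * 3.0 - X[:, 1] * 2.0 + rng.normal(size=100)
print(np.round(two_stage_adaptive(X, y), 2))
```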

9.
Based on a generalized cumulative damage approach with a stochastic process describing degradation, new accelerated life test models are presented in which both observed failures and degradation measures can be considered for parametric inference of system lifetime. Incorporating an accelerated test variable, we provide several new accelerated degradation models for failure based on the geometric Brownian motion or gamma process. It is shown that in most cases, our models for failure can be approximated closely by accelerated test versions of Birnbaum–Saunders and inverse Gaussian distributions. Estimation of model parameters and a model selection procedure are discussed, and two illustrative examples using real data for carbon-film resistors and fatigue crack size are presented.

10.
Under an incomplete block crossover design with two periods, we derive the least-squares estimators for the period effect, treatment effects, and carry-over effects in explicit formulae based on within-patient differences. Using the common strategy of searching for a base model for making inferences in regression analysis, we define a two-stage test procedure for studying treatment effects. On the basis of Monte Carlo simulation, we evaluate the performance of the two-stage procedure for hypothesis testing and for point and interval estimation of treatment effects in a variety of situations. We note that use of the two-stage procedure can be potentially misleading, and hence one should not apply a test procedure exclusively to determine whether one needs to account for the carry-over effect in studying treatment effects. We use a double-blind crossover trial comparing two different doses of formoterol with placebo on forced expiratory volume in 1 second (FEV1) readings to illustrate the use of the two-stage procedure, as well as the distinction between the two-stage procedure and the approach that assumes no carry-over effects based on one's subjective knowledge.

11.
We consider the sequential point estimation problem of the mean of a normal distribution N(μ, σ²) when the loss function is squared error plus linear cost. It is shown that a two-stage procedure is asymptotically efficient to an order higher than the second, provided the standard deviation has a known lower bound. We also give a higher-than-second-order approximation to the risk.
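A sketch of the risk calculation behind such rules: for squared-error loss plus cost c per observation, the risk σ²/n + cn is minimized at n* = σ/√c, which a two-stage rule estimates from a pilot sample. The way the lower bound enters below is a simplification of the article's assumption, and the constants are illustrative.

```python
import math
import numpy as np

def two_stage_sample_size(pilot, cost, sigma_lower):
    """Two-stage rule for estimating a normal mean under squared error plus
    linear cost: risk sigma^2/n + cost*n is minimized at n* = sigma/sqrt(cost).
    """
    n0 = len(pilot)
    s = max(np.std(pilot, ddof=1), sigma_lower)   # lower bound guards stage 1
    return max(n0, math.ceil(s / math.sqrt(cost)))

rng = np.random.default_rng(4)
pilot = rng.normal(5.0, 2.0, 10)
print("total sample size:",
      two_stage_sample_size(pilot, cost=0.01, sigma_lower=0.5))
```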

12.
In a quantitative linear model with errors following a stationary Gaussian, first-order autoregressive or AR(1) process, Generalized Least Squares (GLS) on raw data and Ordinary Least Squares (OLS) on prewhitened data are efficient methods of estimation of the slope parameters when the autocorrelation parameter of the error AR(1) process, ρ, is known. In practice, ρ is generally unknown. In the so-called two-stage estimation procedures, ρ is then estimated first, before using the estimate of ρ to transform the data and estimate the slope parameters by OLS on the transformed data. Different estimators of ρ have been considered in previous studies. In this article, we study nine two-stage estimation procedures for their efficiency in estimating the slope parameters. Six of them (i.e., three noniterative, three iterative) are based on three estimators of ρ that have been considered previously. Two more (i.e., one noniterative, one iterative) are based on a new estimator of ρ that we propose: it is provided by the sample autocorrelation coefficient of the OLS residuals at lag 1, denoted r(1). Lastly, REstricted Maximum Likelihood (REML) represents a different type of two-stage estimation procedure whose efficiency has not yet been compared with the others. We also study the validity of the testing procedures derived from GLS and the nine two-stage estimation procedures. Efficiency and validity are analyzed in a Monte Carlo study. Three types of explanatory variable x in a simple quantitative linear model with AR(1) errors are considered in the time domain: Case 1, x is fixed; Case 2, x is purely random; and Case 3, x follows an AR(1) process with the same autocorrelation parameter value as the error AR(1) process. In a preliminary step, the number of inadmissible estimates and the efficiency of the different estimators of ρ are compared empirically, whereas their approximate expected value in finite samples and their asymptotic variance are derived theoretically. Thereafter, the efficiency of the estimation procedures and the validity of the derived testing procedures are discussed in terms of the sample size and the magnitude and sign of ρ. The noniterative two-stage estimation procedure based on the new estimator of ρ is shown to be more efficient for moderate values of ρ at small sample sizes. With the exception of small sample sizes, REML and its derived F-test perform the best overall. The asymptotic equivalence of two-stage estimation procedures, besides REML, is observed empirically. Differences related to the nature, fixed or random (uncorrelated or autocorrelated), of the explanatory variable are also discussed.
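A sketch of the noniterative two-stage procedure based on the proposed estimator: ρ is estimated by r(1), the lag-1 autocorrelation of the OLS residuals, the data are quasi-differenced, and OLS is rerun on the transformed data. The simulation settings (Case 2, purely random x) are illustrative.

```python
import numpy as np

def two_stage_ar1_ols(x, y):
    """Noniterative two-stage estimation for y = b0 + b1*x + AR(1) errors:
    estimate rho by r(1) from the OLS residuals, then prewhiten and refit
    by OLS (a Cochrane-Orcutt-type transformation)."""
    X = np.column_stack([np.ones_like(x), x])
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b_ols
    r1 = np.sum(e[1:] * e[:-1]) / np.sum(e ** 2)    # the r(1) estimator of rho
    # Prewhitening: quasi-differences remove the AR(1) dependence.
    y_t = y[1:] - r1 * y[:-1]
    X_t = X[1:] - r1 * X[:-1]
    return r1, np.linalg.lstsq(X_t, y_t, rcond=None)[0]

rng = np.random.default_rng(5)
n, rho = 200, 0.6
e = np.zeros(n)
for t in range(1, n):                               # simulate AR(1) errors
    e[t] = rho * e[t - 1] + rng.normal()
x = rng.normal(size=n)                              # Case 2: purely random x
y = 1.0 + 2.0 * x + e
print(two_stage_ar1_ols(x, y))
```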

13.
The problem of selecting the normal population with the largest population mean when the populations have a common known variance is considered. A two-stage procedure is proposed which guarantees the same probability requirement using the indifference-zone approach as does the single-stage procedure of Bechhofer (1954). The two-stage procedure has the highly desirable property that the expected total number of observations required by the procedure is always less than the total number of observations required by the corresponding single-stage procedure, regardless of the configuration of the population means. The saving in expected total number of observations can be substantial, particularly when the configuration of the population means is favorable to the experimenter. The saving is accomplished by screening out “non-contending” populations in the first stage, and concentrating sampling only on “contending” populations in the second stage.

The two-stage procedure can be regarded as a composite one which uses a screening subset-type approach (Gupta (1956), (1965)) in the first stage, and an indifference-zone approach (Bechhofer (1954)) applied to all populations retained in the selected subset in the second stage. Constants to implement the procedure for various k and P* are provided, as are calculations giving the saving in expected total sample size if the two-stage procedure is used in place of the corresponding single-stage procedure.
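A Monte Carlo sketch of the composite screen-then-select idea; the screening rule, sample sizes, and threshold h below are illustrative stand-ins for the article's tabulated constants.

```python
import numpy as np

rng = np.random.default_rng(6)

def screen_then_select(means, n1, n2, h, reps=2000):
    """Gupta-type screening at stage 1 (keep populations within h of the
    stage-1 leader), then Bechhofer-type sampling and selection within the
    retained subset.  Returns P(correct selection) and mean sampling effort.
    """
    correct, total_n = 0, 0
    k = len(means)
    for _ in range(reps):
        xbar1 = rng.normal(means, 1.0 / np.sqrt(n1))
        keep = np.flatnonzero(xbar1 >= xbar1.max() - h)   # screening subset
        xbar2 = rng.normal(means[keep], 1.0 / np.sqrt(n2))
        correct += keep[np.argmax(xbar2)] == np.argmax(means)
        total_n += k * n1 + len(keep) * n2                # observations used
    return correct / reps, total_n / reps

means = np.array([0.0, 0.0, 0.0, 0.5])   # last population is best
print(screen_then_select(means, n1=20, n2=40, h=0.4))
```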

14.
Optimal accelerated degradation test (ADT) plans are developed assuming that the constant-stress loading method is employed and the degradation characteristic follows a Wiener process. Unlike previous works on planning ADTs based on stochastic process models, this article determines the test stress levels and the proportion of test units allocated to each stress level such that the asymptotic variance of the maximum-likelihood estimator of the qth quantile of the lifetime distribution at the use condition is minimized. In addition, compromise plans are developed for checking the validity of the relationship between the model parameters and the stress variable. Finally, using an example, sensitivity analysis procedures are presented for evaluating the robustness of optimal and compromise plans against uncertainty in the pre-estimated parameter value, and the importance of optimally determining the test stress levels and the proportion of units allocated to each stress level is illustrated.

15.
This article considers nonparametric regression problems and develops a model-averaging procedure for smoothing spline regression. Unlike most smoothing-parameter selection studies, which aim at an optimal smoothing parameter, our focus here is on the prediction accuracy for the true conditional mean of Y given a predictor X. Our method consists of two steps. The first step is to construct a class of smoothing spline regression models based on nonparametric bootstrap samples, each with an appropriate smoothing parameter. The second step is to average the bootstrap smoothing spline estimates of different smoothness to form a final improved estimate. To minimize the prediction error, we estimate the model weights using a delete-one-out cross-validation procedure. A simulation study was performed using a program written in R, comparing the well-known cross-validation (CV) and generalized cross-validation (GCV) methods with the proposed method. The new method is straightforward to implement and performs reliably in simulations.
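A simplified sketch of the two-step averaging idea; for brevity it uses a residual bootstrap and equal weights, whereas the article resamples nonparametrically and estimates the weights by delete-one-out cross-validation. All settings are illustrative.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def averaged_spline(x, y, smoothing_grid, n_boot=50, seed=0):
    """Step 1: fit smoothing splines of varying smoothness to bootstrap
    samples.  Step 2: average the fits to form the final estimate."""
    rng = np.random.default_rng(seed)
    base = UnivariateSpline(x, y, s=np.median(smoothing_grid))
    resid = y - base(x)
    fits = []
    for b in range(n_boot):
        yb = base(x) + rng.choice(resid, size=len(x), replace=True)
        s = smoothing_grid[b % len(smoothing_grid)]   # vary the smoothness
        fits.append(UnivariateSpline(x, yb, s=s))
    return lambda t: np.mean([f(t) for f in fits], axis=0)

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0, 1, 80))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 80)
fhat = averaged_spline(x, y, smoothing_grid=np.array([2.0, 5.0, 10.0]))
print(fhat(np.array([0.25, 0.5, 0.75])))
```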

16.
In order to explore and compare a finite number T of data sets by applying functional principal component analysis (FPCA) to the T associated probability density functions, we estimate these density functions by using the multivariate kernel method. The data set sizes being fixed, we study the behaviour of this FPCA under the assumption that all the bandwidth matrices used in the estimation of densities are proportional to a common parameter h and proportional to either the variance matrices or the identity matrix. In this context, we propose a selection criterion of the parameter h which depends only on the data and the FPCA method. Then, on simulated examples, we compare the quality of approximation of the FPCA when the bandwidth matrices are selected using either the previous criterion or two other classical bandwidth selection methods, that is, a plug-in or a cross-validation method.

17.
The equivalence of some tests of hypothesis and confidence limits is well known. When, however, the confidence limits are computed only after rejection of a null hypothesis, the usual unconditional confidence limits are no longer valid. This refers to a strict two-stage inference procedure: first test the hypothesis of interest and, if the test rejects, proceed with estimating the relevant parameter. Under such a situation, confidence limits should be computed conditionally on the specified outcome of the test under which estimation proceeds. Conditional confidence sets will be longer than unconditional confidence sets and may even contain values of the parameter previously rejected by the test of hypothesis. Conditional confidence limits for the mean of a normal population with known variance are used to illustrate these results. In many applications, these results indicate that conditional estimation is probably not good practice.
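A Monte Carlo sketch of the phenomenon: the usual interval's coverage, computed only on samples where a two-sided z-test rejected H0: μ = 0, falls below the nominal level. The parameter values are illustrative.

```python
import numpy as np
from scipy import stats

def conditional_coverage(mu, n=20, sigma=1.0, alpha=0.05,
                         reps=100_000, seed=8):
    """Coverage of the standard (1 - alpha) interval for a normal mean with
    known variance, conditional on the z-test having rejected H0: mu = 0."""
    rng = np.random.default_rng(seed)
    z = stats.norm.ppf(1 - alpha / 2)
    se = sigma / np.sqrt(n)
    xbar = rng.normal(mu, se, reps)
    rejected = np.abs(xbar / se) > z          # stage 1: test H0
    covered = np.abs(xbar - mu) <= z * se     # stage 2: usual interval
    return covered[rejected].mean()           # conditional coverage

print(conditional_coverage(mu=0.3))   # below the nominal 0.95
```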

18.
In this paper we study the procedures of Dudewicz and Dalal (1975), and the modifications suggested by Rinott (1978), for selecting the largest mean from k normal populations with unknown variances. We look at the case k = 2 in detail, because there is an optimal allocation scheme here. We do not really allocate the total number of samples into two groups; rather, we estimate this optimal sample size as well, so as to guarantee a probability of correct selection (written as P(CS)) of at least P*, 1/2 < P* < 1. We prove that the procedure of Rinott is "asymptotically inefficient" (to be defined below) in the sense of Chow and Robbins (1965) for any k ≥ 2. Next, we propose two-stage procedures having all the properties of Rinott's procedure, together with the property of "asymptotic efficiency", which is highly desirable.

19.
We propose a new class of two-stage parameter estimation methods for semiparametric ordinary differential equation (ODE) models. In the first stage, state variables are estimated using a penalized spline approach; in the second stage, the form of the numerical discretization algorithm of an ODE solver is used to formulate estimating equations. Estimated state variables from the first stage are used to obtain more data points for the second stage. Asymptotic properties of the proposed estimators are established. Simulation studies show that the method performs well, especially for small samples. Real-life use of the method is illustrated using an influenza-specific cell-trafficking study.
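A toy sketch of the two-stage idea for the scalar ODE x' = -θx: a smoothing spline estimates the state (stage 1), and the smoothed state and its derivative are plugged into a least-squares estimating equation (stage 2). The model, noise level, and smoothing parameter are illustrative.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def two_stage_ode(t, y, s=0.2):
    """Stage 1: smooth the noisy states with a penalized (smoothing) spline.
    Stage 2: choose theta minimizing sum (dxhat + theta * xhat)^2, which has
    the closed form below; extra grid points come from the stage-1 fit."""
    spline = UnivariateSpline(t, y, s=s)          # stage 1: state estimation
    grid = np.linspace(t[0], t[-1], 200)          # more points than the data
    xhat, dxhat = spline(grid), spline.derivative()(grid)
    return -np.sum(dxhat * xhat) / np.sum(xhat ** 2)

rng = np.random.default_rng(9)
t = np.linspace(0.0, 5.0, 50)
y = np.exp(-0.8 * t) + rng.normal(0, 0.05, 50)    # true theta = 0.8
print("theta_hat ~", two_stage_ode(t, y))
```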

20.
By running life tests at higher stress levels than normal operating conditions, accelerated life testing quickly yields information on the lifetime distribution of a test unit. The lifetime at the design stress is then estimated through extrapolation using a regression model. In constant-stress testing, a unit is tested at a fixed stress level until failure or the termination time point of the test, while step-stress testing allows the experimenter to gradually increase the stress levels at pre-fixed time points during the test. In this article, the optimal k-level constant-stress and step-stress accelerated life tests are compared for exponential failure data under Type-I censoring. The objective is to quantify the advantage of using step-stress testing relative to constant-stress testing. A log-linear relationship between the mean lifetime parameter and the stress level is assumed, and the cumulative exposure model holds for the effect of changing stress in step-stress testing. The optimal design point is then determined under the C-optimality, D-optimality, and A-optimality criteria. The efficiency of step-stress testing compared to constant-stress testing is discussed in terms of the ratio of optimal objective functions based on the information matrix.
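A sketch of simple (2-level) step-stress data generation and the exponential MLEs under the cumulative exposure model with Type-I censoring; for exponential lifetimes, memorylessness makes the cumulative exposure model equivalent to restarting the clock at τ. All parameter values are illustrative.

```python
import numpy as np

def step_stress_mle(failure_times, tau, t_end, n_units):
    """MLEs for a simple step-stress test with exponential means theta1
    (stress 1, up to tau) and theta2 (stress 2, up to censoring at t_end):
    theta_i = total exposure at stress i / number of failures at stress i."""
    ft = np.asarray(failure_times)
    n1 = np.sum(ft <= tau)
    n2 = np.sum((ft > tau) & (ft <= t_end))
    survivors_tau = n_units - n1                       # units entering stage 2
    exposure1 = np.sum(ft[ft <= tau]) + survivors_tau * tau
    exposure2 = (np.sum(ft[(ft > tau) & (ft <= t_end)] - tau)
                 + (n_units - n1 - n2) * (t_end - tau))
    return exposure1 / n1, exposure2 / n2

# Simulate under the cumulative exposure model (equivalent, for the
# exponential, to drawing a fresh stage-2 lifetime for survivors of tau).
rng = np.random.default_rng(10)
theta1, theta2, tau, t_end, n = 10.0, 4.0, 8.0, 20.0, 200
t = rng.exponential(theta1, n)
late = t > tau
t[late] = tau + rng.exponential(theta2, late.sum())    # stage-2 lifetimes
print(step_stress_mle(t[t <= t_end], tau, t_end, n))
```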
