Similar Articles
20 similar articles found (search time: 31 ms)
1.
This paper defends the fiducial argument. In particular, it defends an interpretation in which fiducial probability is treated as subjective and the role taken by pivots in the more standard interpretation is played by what are called primary random variables, which in fact form a special class of pivots. The resulting methodology, referred to as subjective fiducial inference, is outlined in the first part of the paper. This is followed by a defence of the methodology arranged as a series of criticisms and responses. These criticisms reflect objections often raised against standard fiducial inference and incorporate more specific concerns likely to arise with respect to subjective fiducial inference. It is hoped that the responses to these criticisms clarify the contribution that a system of fiducial reasoning can make to statistical inference.

2.
In early drug development, especially when studying new mechanisms of action or new disease areas, little is known about the anticipated treatment effect or its variability. Adaptive designs that allow for early stopping but also use interim data to adapt the sample size have been proposed as a practical way of dealing with these uncertainties. Predictive power and conditional power are two commonly used techniques for predicting, from the interim data, what will happen at the end of the trial; decisions about stopping or continuing the trial can then be based on these predictions. However, unless the user of these statistics has a deep understanding of their characteristics, important pitfalls may be encountered, especially with predictive power. The aim of this paper is to highlight these potential pitfalls. It is critical that statisticians understand the fundamental differences between predictive power and conditional power, as they can have dramatic effects on decision making at the interim stage, especially if used to re-evaluate the sample size. The use of predictive power can lead to much larger sample sizes than either conditional power or standard sample size calculations. One crucial difference is that predictive power accounts for all uncertainty, parts of which are ignored by standard sample size calculations and by conditional power. By comparing the characteristics of these statistics, we highlight important properties of predictive power that experimenters need to be aware of when using this approach.
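To make the distinction concrete, here is a minimal sketch (not from the paper) contrasting the two quantities for a one-sided test under the usual Brownian-motion approximation, with a flat prior on the drift for predictive power; the function names and settings are illustrative assumptions.

```python
# Conditional vs. predictive power at information fraction t, given an
# interim z-statistic z_t. Brownian-motion approximation, flat prior.
import numpy as np
from scipy.stats import norm

def conditional_power(z_t, t, theta, alpha=0.025):
    """P(reject at the end | interim data), for a fixed drift theta."""
    z_crit = norm.ppf(1 - alpha)
    b_t = z_t * np.sqrt(t)                  # Brownian value B(t) = z_t * sqrt(t)
    return norm.cdf((b_t + theta * (1 - t) - z_crit) / np.sqrt(1 - t))

def predictive_power(z_t, t, alpha=0.025):
    """Conditional power averaged over the flat-prior posterior of the drift."""
    z_crit = norm.ppf(1 - alpha)
    return norm.cdf((z_t / np.sqrt(t) - z_crit) * np.sqrt(t / (1 - t)))

z_t, t = 1.5, 0.5
# CP plugs in the observed trend; PP also integrates over drift uncertainty.
print(conditional_power(z_t, t, theta=z_t / np.sqrt(t)))  # ~0.59
print(predictive_power(z_t, t))                           # ~0.56, pulled toward 1/2
```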

3.
In this paper, we present several resampling methods for interval estimation of the common intraclass correlation coefficient. Comparisons are made of the coverage probabilities and average lengths against confidence intervals estimated using generalized pivots. Most of the methods proposed in this article produce confidence intervals with better coverage probabilities and shorter average lengths than those produced using generalized pivots.
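As a concrete illustration of the resampling idea (a generic percentile bootstrap over groups, not necessarily any of the paper's specific methods), one might proceed as below; the ANOVA estimator and all settings are assumptions for the sketch.

```python
# Percentile-bootstrap confidence interval for the intraclass correlation,
# resampling whole groups of a balanced one-way layout.
import numpy as np

rng = np.random.default_rng(0)

def icc_anova(data):
    """One-way ANOVA ICC for an (n_groups, k) array of balanced data."""
    n, k = data.shape
    grand = data.mean()
    msb = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

def bootstrap_ci(data, b=2000, level=0.95):
    n = data.shape[0]
    stats = [icc_anova(data[rng.integers(0, n, n)]) for _ in range(b)]
    lo, hi = np.percentile(stats, [100 * (1 - level) / 2, 100 * (1 + level) / 2])
    return lo, hi

# Simulated balanced data: 30 groups of size 4 with true ICC = 0.5.
groups = rng.normal(size=(30, 1))
data = groups + rng.normal(size=(30, 4))
print(icc_anova(data), bootstrap_ci(data))
```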

4.
In pharmaceutical research, clinical trials are the usual means of identifying valuable treatments and comparing their efficacy with that of a standard control therapy. Although clinical trials are essential for ensuring the efficacy and postmarketing safety of a drug, conducting them is usually costly and time-consuming. Moreover, allocating patients to treatments with little therapeutic effect is inappropriate for ethical and cost reasons. Hence, several two-stage designs in the literature use the conditional power obtained from interim analysis results to decide whether less efficacious treatments should be carried into the next stage, thereby reducing cost and shortening trial duration. However, the literature lacks discussion of the factors that influence the conditional power of a trial at the design stage. In this article, we calculate the optimal conditional power via the receiver operating characteristic curve method to assess its impact on the quality of a two-stage design with multiple treatments, and we propose an optimal design that minimizes the expected sample size for choosing the best or most promising treatment(s) among several treatments under an optimal conditional power constraint. We provide tables of the two-stage design subject to the optimal conditional power constraint for various combinations of design parameters and use an example to illustrate our methods.

5.
A new function for the competing risks model, the conditional cumulative hazard function, is introduced, from which the conditional distribution of failure times of individuals failing due to cause j can be studied. The standard Nelson–Aalen estimator is not appropriate in this setting, as population membership (mark) information may be missing for some individuals owing to random right-censoring. We propose the use of imputed population marks for the censored individuals through fractional risk sets. Some asymptotic properties, including uniform strong consistency, are established. We study the practical performance of this estimator through simulation studies and apply it to a real data set for illustration.

6.
In this article, we propose an approach for estimating a confidence interval for the common intraclass correlation coefficient based on the profile likelihood. Comparisons are made with a procedure using the concept of generalized pivots. The method presented is less computationally demanding than the method using generalized pivots. It also provides better coverage and shorter confidence intervals when the value of the common intraclass correlation coefficient is low; the lengths of the confidence intervals given by the two methods are quite comparable for high but less realistic values of the coefficient.
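The generic profile-likelihood construction behind such intervals can be sketched on a much simpler model (a normal mean with the variance profiled out, not the ICC model itself); the interval is the set of parameter values whose profile log-likelihood lies within half a chi-square quantile of the maximum. All names below are illustrative.

```python
# Profile-likelihood interval: {mu : 2 * (lp(mu_hat) - lp(mu)) <= chi2_{1,0.95}}.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(7)
x = rng.normal(1.0, 2.0, size=40)
n = x.size

def profile_loglik(mu):
    """Normal log-likelihood with sigma^2 replaced by its MLE given mu."""
    return -0.5 * n * np.log(np.mean((x - mu) ** 2))

grid = np.linspace(x.mean() - 3, x.mean() + 3, 2000)
lp = np.array([profile_loglik(m) for m in grid])
keep = 2 * (lp.max() - lp) <= chi2.ppf(0.95, df=1)
print(grid[keep].min(), grid[keep].max())   # close to the usual t-interval
```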

7.
《Econometric Reviews》2007, 26(6): 609-641
The main contribution of this paper is a proof of the asymptotic validity of the application of the bootstrap to AR(∞) processes with unmodelled conditional heteroskedasticity. We first derive the asymptotic properties of the least-squares estimator of the autoregressive sieve parameters when the data are generated by a stationary linear process with martingale difference errors that are possibly subject to conditional heteroskedasticity of unknown form. These results are then used to establish that a suitably constructed bootstrap estimator has the same limit distribution as the least-squares estimator. Our results provide theoretical justification for the use of either the conventional asymptotic approximation based on robust standard errors or the bootstrap approximation of the distribution of autoregressive parameters. A simulation study suggests that the bootstrap approach tends to be more accurate in small samples.
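The kind of bootstrap validated here can be illustrated with a minimal sketch: a fixed-design wild bootstrap for an AR(1) slope estimated by least squares under conditional heteroskedasticity. This is one variant among several; the data-generating process and all settings are assumptions for illustration, not the paper's exact procedure.

```python
# Fixed-design wild bootstrap for the OLS slope of an AR(1) with
# heteroskedastic (ARCH-type) innovations.
import numpy as np

rng = np.random.default_rng(1)

n, phi = 500, 0.6
e = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    sigma = np.sqrt(0.5 + 0.5 * y[t - 1] ** 2)   # variance depends on the past
    y[t] = phi * y[t - 1] + sigma * e[t]

x, yy = y[:-1], y[1:]
phi_hat = (x @ yy) / (x @ x)                     # least-squares estimate
resid = yy - phi_hat * x

boot = np.empty(999)
for b in range(boot.size):
    # Rademacher multipliers preserve the residuals' heteroskedasticity.
    y_star = phi_hat * x + resid * rng.choice([-1.0, 1.0], size=resid.size)
    boot[b] = (x @ y_star) / (x @ x)             # re-estimate on bootstrap data

print(phi_hat, boot.std(ddof=1))                 # estimate and bootstrap SE
```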

8.
A general methodology is presented for non-parametric testing of independence, location and dispersion in multiple regression. The proposed testing procedures are based on the concepts of conditional distribution function, conditional quantile, and conditional shortest t-fraction. Techniques involved come from empirical process and extreme-value theory. The asymptotic distributions are standard Gumbel.

10.
Confidence intervals for a single parameter are spanned by quantiles of a confidence distribution, and one-sided p-values are cumulative confidences. Confidence distributions are thus a unifying format for representing frequentist inference for a single parameter. The confidence distribution, which depends on the data, is exact (unbiased) when its cumulative distribution function evaluated at the true parameter is uniformly distributed over the unit interval. A new version of the Neyman–Pearson lemma is given, showing that the confidence distribution based on the natural statistic in exponential models with continuous data is less dispersed than all other confidence distributions, regardless of how dispersion is measured. Approximations are necessary for discrete data, and also in many models with nuisance parameters. Approximate pivots might then be useful. A pivot based on a scalar statistic determines a likelihood in the parameter of interest along with a confidence distribution. This proper likelihood is reduced of all nuisance parameters and is appropriate for meta-analysis and for updating of information. The reduced likelihood is generally different from the confidence density. Confidence distributions and reduced likelihoods are rooted in Fisher–Neyman statistics. This frequentist methodology has many of the attractions of the Bayesian approach, and the two approaches are briefly compared. Concepts, methods and techniques of this brand of Fisher–Neyman statistics are presented. Asymptotics and bootstrapping are used to find pivots and their distributions, and hence reduced likelihoods and confidence distributions. A simple form of inverting bootstrap distributions to approximate pivots of the abc type is proposed. The material is illustrated in a number of examples and in an application to multiple capture data for bowhead whales.
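The core objects are easy to exhibit in the textbook case of a normal mean, where the exact t-pivot yields the confidence distribution in closed form; its quantiles reproduce the usual t-intervals and its value at a hypothesized mean is a one-sided p-value. This is a minimal sketch with illustrative names, not the paper's general machinery.

```python
# Confidence distribution for a normal mean via the exact t-pivot.
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(2)
x = rng.normal(loc=1.0, scale=2.0, size=25)
n, xbar, s = x.size, x.mean(), x.std(ddof=1)

def confidence_cdf(mu):
    """C(mu) = F_{t,n-1}(sqrt(n) * (mu - xbar) / s)."""
    return t.cdf(np.sqrt(n) * (mu - xbar) / s, df=n - 1)

def confidence_quantile(p):
    """Inverse of C: its quantiles span confidence intervals."""
    return xbar + s / np.sqrt(n) * t.ppf(p, df=n - 1)

print(confidence_quantile(0.025), confidence_quantile(0.975))  # 95% t-interval
print(confidence_cdf(0.0))  # one-sided p-value for H0: mu <= 0
```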

11.
Commonly used standard statistical procedures for means and variances (such as the t-test for means or the F-test for variances and the related confidence procedures) require observations from independent and identically normally distributed variables. These procedures are often routinely applied to financial data, such as asset or currency returns, which do not share these properties: they are nonnormal and show conditional heteroskedasticity, and hence are dependent. We investigate the effect of conditional heteroskedasticity (as modelled by GARCH(1,1)) on the level of these tests and the coverage probability of the related confidence procedures. Conditional heteroskedasticity turns out to have no effect on procedures for means (at least in large samples). There is, however, a strong effect on procedures for variances, which should therefore not be used if conditional heteroskedasticity is prevalent in the data.
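The paper's point is easy to reproduce by simulation: under GARCH(1,1) returns, the nominal 5% chi-square test for the variance over-rejects badly, while the t-test for the mean keeps roughly its level in large samples. The parameter values below are illustrative assumptions.

```python
# Empirical size of the chi-square variance test and the t-test for the
# mean when the data follow a GARCH(1,1) process.
import numpy as np
from scipy.stats import chi2, t as t_dist

rng = np.random.default_rng(3)
omega, alpha, beta = 0.1, 0.1, 0.85           # unconditional variance = 2.0
n, reps, var0 = 1000, 500, omega / (1 - alpha - beta)
rej_var = rej_mean = 0

for _ in range(reps):
    r, s2 = np.empty(n), var0
    for tt in range(n):
        r[tt] = np.sqrt(s2) * rng.standard_normal()
        s2 = omega + alpha * r[tt] ** 2 + beta * s2
    # Chi-square test of H0: variance = var0 (two-sided, nominal 5%).
    q = (n - 1) * r.var(ddof=1) / var0
    rej_var += q < chi2.ppf(0.025, n - 1) or q > chi2.ppf(0.975, n - 1)
    # t-test of H0: mean = 0 (two-sided, nominal 5%).
    tstat = r.mean() / (r.std(ddof=1) / np.sqrt(n))
    rej_mean += abs(tstat) > t_dist.ppf(0.975, n - 1)

print(rej_var / reps, rej_mean / reps)        # variance test far above 0.05
```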

12.
We use the additive risk model of Aalen (1980) as a model for the rate of a counting process. Rather than specifying the intensity, that is, the instantaneous probability of an event conditional on the entire history of the relevant covariates and counting processes, we present a model for the rate function, i.e., the instantaneous probability of an event conditional on only a selected set of covariates. When the rate function for the counting process is of Aalen form, we show that the usual Aalen estimator can be used and gives almost unbiased estimates. The usual martingale-based variance estimator is incorrect, however, and an alternative estimator should be used. We also consider the semi-parametric version of the Aalen model as a rate model (McKeague and Sasieni, 1994), show that the standard errors computed under an assumption of intensities are incorrect, and give a different estimator. Finally, we introduce and implement a test statistic for the hypothesis of a time-constant effect in both the non-parametric and semi-parametric models. A small simulation study evaluates the performance of the new estimator of the standard error.
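The nonparametric Aalen least-squares estimator the abstract builds on can be sketched as follows: at each event time, the increment of the cumulative regression functions is a least-squares solve over the current risk set. The variance formulas (the paper's focus) are omitted, and the data-generating process is an assumption for illustration.

```python
# Aalen additive hazard: cumulative B(t) estimated by summing
# (X'X)^{-1} X' dN over event times, over the at-risk rows.
import numpy as np

rng = np.random.default_rng(10)
n = 300
z = rng.random(n)                        # one covariate
haz = 0.5 + 1.0 * z                      # additive hazard: b0 + b1 * z
time = rng.exponential(1 / haz)
cens = rng.exponential(2.0, n)
t_obs, event = np.minimum(time, cens), time <= cens

B = np.zeros(2)                          # cumulative (B0, B1)
for t in np.sort(t_obs[event]):
    at_risk = t_obs >= t
    if at_risk.sum() < 5:                # stop once the risk set is too small
        break
    X = np.column_stack([np.ones(at_risk.sum()), z[at_risk]])
    dN = ((t_obs == t) & event)[at_risk].astype(float)
    B += np.linalg.solve(X.T @ X, X.T @ dN)
print(B)   # roughly (0.5, 1.0) times the time span covered
```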

13.
14.
Models for multiple-test screening data generally require the assumption that the tests are independent conditional on disease state. This assumption may be unreasonable, especially when the biological basis of the tests is the same. We propose a model that allows for correlation between two diagnostic test results. Since models that incorporate test correlation involve more parameters than can be estimated with the available data, posterior inferences will depend more heavily on prior distributions, even with large sample sizes. If we have reasonably accurate information about one of the two screening tests (perhaps the standard currently used test) or about the prevalences of the populations tested, accurate inferences about all the parameters, including the test correlation, are possible. We present a model for evaluating dependent diagnostic tests and analyse real and simulated data sets. Our analysis shows that, when the tests are correlated, a model that assumes conditional independence can perform very poorly. We recommend that, if the tests are only moderately accurate and measure the same biological responses, researchers use the dependence model for their analyses.

15.
The statistical literature on the analysis of discrete variate time series has concentrated mainly on parametric models, that is, the conditional probability mass function is assumed to belong to a parametric family. Generally, these parametric models impose strong assumptions on the relationship between the conditional mean and variance. To relax these implausible assumptions, this paper instead considers a more realistic semiparametric model, called the random rounded integer-valued autoregressive conditional heteroskedastic (RRINARCH) model, in which there are essentially no assumptions on the relationship between the conditional mean and variance. The new model has several advantages: (a) it provides a coherent semiparametric framework for discrete variate time series, in which the conditional mean and variance can be modeled separately; (b) it allows negative values both for the series and its autocorrelation function; (c) its autocorrelation structure is the same as that of a standard autoregressive (AR) process; (d) standard software for its estimation is directly applicable. For the new model, conditions for stationarity, ergodicity and the existence of moments are established, and the consistency and asymptotic normality of the conditional least squares estimator are proved. Simulation experiments are carried out to assess the performance of the model. Analyses of real data sets illustrate the flexibility and usefulness of the RRINARCH model for obtaining more realistic forecast means and variances.
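The conditional least squares idea for a count time series can be sketched on a standard Poisson INARCH(1) (not the RRINARCH model itself): with conditional mean lambda_t = d + a * X_{t-1}, CLS reduces to a regression of X_t on X_{t-1}. All settings are illustrative assumptions.

```python
# Conditional least squares for the conditional mean of a count series.
import numpy as np

rng = np.random.default_rng(8)
n, d, a = 2000, 2.0, 0.4
x = np.empty(n, dtype=int)
x[0] = 3
for t in range(1, n):
    x[t] = rng.poisson(d + a * x[t - 1])     # INARCH(1): mean d + a * X_{t-1}

X = np.column_stack([np.ones(n - 1), x[:-1]])
d_hat, a_hat = np.linalg.lstsq(X, x[1:], rcond=None)[0]
print(d_hat, a_hat)   # close to (2.0, 0.4); inference needs robust SE formulas
```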

16.
Existing synthetic exponential control charts are based on the assumption of a known in-control parameter. In practice, however, the in-control parameter has to be estimated from a Phase I dataset. In this article, we use the exact probability distribution, especially the percentiles, mean, and standard deviation, of the conditional average run length (ARL) to evaluate the effect of parameter estimation on the performance of Phase II synthetic exponential charts. This approach accounts for the variability in the conditional ARL values of the synthetic chart obtained by different practitioners. Since parameter estimation results in more false alarms than expected, we develop an exact method to design adjusted synthetic charts with the desired conditional in-control performance. Results for the known and unknown in-control parameter cases show that the control limit of the conforming run length sub-chart of the synthetic chart should be as small as possible.
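The conditional-ARL phenomenon can be shown on a much simpler one-sided Shewhart-type exponential chart (not the synthetic chart itself): because each practitioner's Phase I estimate differs, the in-control ARL is a random variable whose percentiles reveal the spread. Everything below is an illustrative assumption.

```python
# Distribution of the conditional in-control ARL when the exponential
# chart's limit is set from an estimated in-control mean.
import numpy as np

rng = np.random.default_rng(4)
theta0, m = 1.0, 50                  # true in-control mean, Phase I sample size
target_arl = 370.0
# Lower limit tuned for the known-parameter case:
# P(X < LCL) = 1 - exp(-LCL / theta0) = 1 / ARL0.
lcl_factor = -np.log(1 - 1 / target_arl)

# Phase I mean of m exponentials: Gamma(m, theta0 / m).
theta_hat = rng.gamma(shape=m, scale=theta0 / m, size=100_000)
p_signal = 1 - np.exp(-lcl_factor * theta_hat / theta0)
cond_arl = 1 / p_signal              # conditional in-control ARL given theta_hat

print(np.percentile(cond_arl, [5, 50, 95]))  # wide spread around 370
```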

17.
In this paper we consider the problem of constructing exact confidence intervals for the common location of several truncated exponentials with unknown and unequal scale parameters. These intervals are based on combining suitable pivots as well as so-called P-values.
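One concrete way to combine per-sample evidence into an exact confidence set (a sketch, not necessarily the paper's construction) treats each sample as a two-parameter (shifted) exponential, uses the exact pivot n(X_(1) - mu)/(S/(n-1)) ~ F(2, 2n-2) per sample, and combines the resulting p-values by Fisher's method; all names and settings are illustrative.

```python
# Confidence set for a common location mu of several shifted exponential
# samples with unequal scales, by inverting Fisher's combined test.
import numpy as np
from scipy.stats import f, chi2

rng = np.random.default_rng(5)
mu_true = 2.0
samples = [mu_true + rng.exponential(scale=s, size=n)
           for s, n in [(1.0, 15), (3.0, 10), (0.5, 20)]]

def combined_pvalue(mu):
    logs = 0.0
    for x in samples:
        n = x.size
        t = n * (x.min() - mu) / ((x.sum() - n * x.min()) / (n - 1))
        if t < 0:                      # mu exceeds a sample minimum: impossible
            return 0.0
        p = f.sf(t, 2, 2 * (n - 1))    # exact per-sample p-value
        logs += np.log(p)
    return chi2.sf(-2 * logs, 2 * len(samples))   # Fisher's combination

grid = np.linspace(1.0, min(x.min() for x in samples), 400)
ci = grid[[combined_pvalue(m) > 0.05 for m in grid]]
print(ci.min(), ci.max())              # approximate 95% confidence set for mu
```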

18.
Estimated associations between an outcome variable and misclassified covariates tend to be biased when estimation methods that ignore the classification error are applied. Available methods to account for misclassification often require the use of a validation sample (i.e. a gold standard). In practice, however, such a gold standard may be unavailable or impractical. We propose a Bayesian approach to adjust for misclassification of a binary covariate in the random effect logistic model when a gold standard is not available. This Markov chain Monte Carlo (MCMC) approach uses two imperfect measures of a dichotomous exposure under the assumptions of conditional independence and non-differential misclassification. A simulated numerical example and a real clinical example illustrate the proposed approach. Our results suggest that the estimated log odds of inpatient care and the corresponding standard deviation are much larger under our proposed method than under models that ignore misclassification. Ignoring misclassification produces downwardly biased estimates and underestimates uncertainty.
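The attenuation the paper corrects for is easy to demonstrate by simulation (this sketch shows the bias from ignoring nondifferential misclassification, not the paper's MCMC adjustment); all numbers are illustrative assumptions.

```python
# Nondifferential misclassification of a binary exposure attenuates the
# log odds ratio toward zero when analysed naively.
import numpy as np

rng = np.random.default_rng(9)
n, beta = 100_000, 1.0                   # true log odds ratio
xtrue = rng.random(n) < 0.3
p = 1 / (1 + np.exp(-(-1.0 + beta * xtrue)))
y = rng.random(n) < p

sens, spec = 0.8, 0.9                    # imperfect measure of the exposure
xobs = np.where(xtrue, rng.random(n) < sens, rng.random(n) > spec)

def log_or(x, y):
    a = np.sum(x & y); b = np.sum(x & ~y)
    c = np.sum(~x & y); d = np.sum(~x & ~y)
    return np.log(a * d / (b * c))

print(log_or(xtrue, y), log_or(xobs, y))  # naive estimate is pulled toward 0
```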

19.
We study the properties of the quasi-maximum likelihood estimator (QMLE) and related test statistics in dynamic models that jointly parameterize conditional means and conditional covariances, when a normal log-likelihood is maximized but the assumption of normality is violated. Because the score of the normal log-likelihood has the martingale difference property when the first two conditional moments are correctly specified, the QMLE is generally consistent and has a limiting normal distribution. We provide easily computable formulas for asymptotic standard errors that are valid under nonnormality. Further, we show how robust LM tests for the adequacy of the jointly parameterized mean and variance can be computed from simple auxiliary regressions. An appealing feature of these robust inference procedures is that only first derivatives of the conditional mean and variance functions are needed. A Monte Carlo study indicates that the asymptotic results carry over to finite samples. Estimation of several AR and AR-GARCH time series models reveals that in most situations the robust test statistics compare favorably to the two standard (nonrobust) formulations of the Wald and LM tests. Also, for the GARCH models and the sample sizes analyzed here, the bias in the QMLE appears to be relatively small. An empirical application to stock return volatility illustrates the potential importance of computing robust statistics in practice.
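The robust standard errors in question have the familiar sandwich form A^{-1} B A^{-1} / n. A minimal sketch on a toy case (a normal quasi-likelihood fitted to heavy-tailed data, not the paper's dynamic models) shows the robust standard error for the variance parameter departing from the naive inverse-Hessian one when the kurtosis differs from 3; all settings are illustrative.

```python
# QMLE sandwich standard errors for (mu, sigma^2) under a normal
# quasi-likelihood, with nonnormal (Student-t) data.
import numpy as np

rng = np.random.default_rng(6)
x = rng.standard_t(df=5, size=5000)          # heavy-tailed data
mu, s2 = x.mean(), x.var()                   # normal QMLE of (mu, sigma^2)

# Per-observation scores of the normal log-likelihood at the QMLE.
u = x - mu
scores = np.column_stack([u / s2, -0.5 / s2 + u**2 / (2 * s2**2)])

A = np.array([[1 / s2, 0.0],                 # minus the expected Hessian
              [0.0, 1 / (2 * s2**2)]])
B = scores.T @ scores / x.size               # outer product of scores

naive = np.sqrt(np.diag(np.linalg.inv(A)) / x.size)
robust = np.sqrt(np.diag(np.linalg.inv(A) @ B @ np.linalg.inv(A)) / x.size)
print(naive, robust)   # robust SE for sigma^2 is roughly double the naive one
```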
