Similar Articles
 20 similar articles found (search time: 15 ms)
1.
An extension to the class of conventional numerical probability models for nondeterministic phenomena has been identified by Dempster and Shafer in the class of belief functions. We were originally stimulated by this work, but have since come to believe that the bewildering diversity of uncertainty and chance phenomena cannot be encompassed within either the conventional theory of probability, its relatively minor modifications (e.g., not requiring countable additivity), or the theory of belief functions. In consequence, we have been examining the properties of, and prospects for, the generalization of belief functions that is known as upper and lower, or interval-valued, probability. After commenting on what we deem to be problematic elements of common personalist/subjectivist/Bayesian positions that employ either finitely or countably additive probability to represent strength of belief and that are intended to be normative for rational behavior, we sketch some of the ways in which the set of lower envelopes, a subset of the set of lower probabilities that contains the belief functions, enables us to preserve the core of Bayesian reasoning while admitting a more realistic (e.g., in its reduced insistence upon an underlying precision in our beliefs) class of probability-like models. Particular advantages of lower envelopes are identified in the area of the aggregation of beliefs.

The focus of our own research is in the area of objective probabilistic reasoning about time series generated by physical or other empirical (e.g., societal) processes. As it is not the province of a general mathematical methodology such as probability theory to rule empirical phenomena out of existence a priori, we are concerned by the constraint imposed by conventional probability theory that an empirical process of bounded random variables that is believed to have a time-invariant generating mechanism must then exhibit long-run stable time averages. We have shown that lower probability models that allow for unstable time averages can only lie in the class of undominated lower probabilities, a subset of lower probability models disjoint from the lower envelopes and having the weakest relationship to conventional probability measures. Our research has been devoted to exploring and developing the theory of undominated lower probabilities so that it can be applied to model and understand nondeterministic phenomena, and we have also been interested in identifying actual physical processes (e.g., flicker noises) that exhibit behavior requiring such novel models.
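
As a hedged numerical illustration of the lower-envelope construction described above (a toy sketch over a hypothetical finite credal set, not material from the article), the lower and upper probabilities of an event are the pointwise infimum and supremum of its probability over a set of conventional probability measures:

```python
import itertools
import numpy as np

# Hypothetical credal set: three probability vectors over the outcomes {0, 1, 2}.
credal_set = np.array([
    [0.20, 0.50, 0.30],
    [0.30, 0.40, 0.30],
    [0.25, 0.45, 0.30],
])

outcomes = range(3)

def lower_upper(event, credal_set):
    """Lower/upper envelope of P(event) over the credal set."""
    idx = list(event)
    values = credal_set[:, idx].sum(axis=1)
    return values.min(), values.max()

# Envelopes for every non-trivial event.
for r in range(1, 3):
    for event in itertools.combinations(outcomes, r):
        lo, hi = lower_upper(event, credal_set)
        print(f"event {event}: lower={lo:.3f}, upper={hi:.3f}")
```

Envelopes obtained this way are, by construction, dominated by every member of the credal set; the undominated lower probabilities discussed in the abstract lie outside this construction.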


2.
Identifiability is a primary assumption in virtually all classical statistical theory. However, this assumption may be violated in a variety of statistical models. We consider parametric models in which the assumption of identifiability is violated but which otherwise satisfy standard assumptions. We propose an analytic method for constructing new parameters under which the model is at least locally identifiable. The method is based on solving a system of linear partial differential equations involving the Fisher information matrix. Some consequences of, and valid inference procedures under, non-identifiability are also discussed. The method of reparametrization is illustrated with an example.
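
As a hedged sketch of the underlying idea (a toy example rather than the paper's PDE-based construction), a singular Fisher information matrix reveals the directions in which the parameter cannot be identified, and an eigen-decomposition suggests a locally identifiable reparametrization:

```python
import numpy as np

# Toy non-identifiable model: y ~ N(theta1 + theta2, 1).
# Only the sum theta1 + theta2 enters the likelihood, so the Fisher
# information matrix is singular.
fisher = np.array([[1.0, 1.0],
                   [1.0, 1.0]])

eigvals, eigvecs = np.linalg.eigh(fisher)
print("eigenvalues:", eigvals)                 # one eigenvalue is (numerically) zero

# Zero-information direction: theta1 - theta2 is not identifiable.
null_dir = eigvecs[:, np.argmin(eigvals)]
# Positive-information direction: theta1 + theta2 is (locally) identifiable.
ident_dir = eigvecs[:, np.argmax(eigvals)]
print("non-identifiable direction:", np.round(null_dir, 3))
print("identifiable new parameter:", np.round(ident_dir, 3))
```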

3.
The paper is intended to give non-initiates some idea of the nature of stochastic hydrology. After a brief historical review it concentrates on three recent examples of stochastic modelling procedures that have aroused interest among hydrologists, namely the Hurst Effect, Short-Term Runoff Models, and Stochastic Reservoir Theory.
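
To make the Hurst Effect concrete, here is a minimal rescaled-range (R/S) sketch of Hurst-exponent estimation; it is a generic illustration, not a procedure taken from the paper:

```python
import numpy as np

def rs_statistic(x):
    """Rescaled range R/S of a 1-D series."""
    z = np.cumsum(x - x.mean())
    r = z.max() - z.min()
    s = x.std(ddof=1)
    return r / s

def hurst_exponent(x, window_sizes):
    """Estimate H from the slope of log(R/S) against log(window size)."""
    log_n, log_rs = [], []
    for n in window_sizes:
        chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        rs = np.mean([rs_statistic(c) for c in chunks])
        log_n.append(np.log(n))
        log_rs.append(np.log(rs))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

rng = np.random.default_rng(0)
white_noise = rng.standard_normal(4096)
# Roughly 0.5 for white noise (finite-sample R/S estimates are biased slightly upward).
print("H estimate:", round(hurst_exponent(white_noise, [16, 32, 64, 128, 256]), 2))
```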

4.
In the estimation of a population mean or total from a random sample, certain methods based on linear models are known to be automatically design consistent, regardless of how well the underlying model describes the population. A sufficient condition is identified for this type of robustness to model failure; the condition, which we call 'internal bias calibration', relates to the combination of a model and the method used to fit it. Included among the internally bias-calibrated models, in addition to the aforementioned linear models, are certain canonical link generalized linear models and nonparametric regressions constructed from them by a particular style of local likelihood fitting. Other models can often be made robust by using a suboptimal fitting method. Thus the class of model-based, but design consistent, analyses is enlarged to include more realistic models for certain types of survey variable such as binary indicators and counts. Particular applications discussed are the estimation of the size of a population subdomain, as arises in tax auditing for example, and the estimation of a bootstrap tail probability.
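
As a hedged sketch of the kind of model-assisted estimator that stays design consistent whether or not the working model holds (a generalized difference estimator on simulated data; the internal-bias-calibration condition itself is not checked here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite population with an auxiliary variable x known for every unit.
N = 10_000
x = rng.uniform(0, 10, N)
y = 3 + 0.5 * x + rng.normal(0, 2, N)          # the working model need not be exactly true
true_total = y.sum()

# Simple random sample with equal inclusion probabilities pi_i = n / N.
n = 500
sample = rng.choice(N, size=n, replace=False)
pi = n / N

# Fit the working linear model on the sample and predict for the whole population.
beta = np.polyfit(x[sample], y[sample], 1)
m = np.polyval(beta, x)

# Generalized difference estimator: model predictions for all units,
# plus design-weighted residuals from the sample.
t_hat = m.sum() + np.sum((y[sample] - m[sample]) / pi)
print("true total:", round(true_total), " estimate:", round(t_hat))
```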

5.
The problem of the logical implication between two hierarchical log-linear models is proved to be equivalent to the problem of deciding whether their generating classes satisfy the graph-theoretic condition of hinging. Moreover, a polynomial-time procedure is worked out to test implication.

6.

In this article, we consider inverse probability weighted estimators for a single-index model with missing covariates when the selection probabilities are known or unknown. It is shown that the estimator for the index parameter that uses estimated selection probabilities has a smaller asymptotic variance than the one that uses the true selection probabilities, and is thus more efficient. The important Horvitz-Thompson property is therefore verified for the index parameter in the single-index model. However, this difference disappears for the estimators of the link function. Numerical examples and a real data application are also presented to illustrate the performance of the estimators.
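
As a hedged sketch of inverse probability weighting with known versus estimated selection probabilities (a simplified missing-response setting on hypothetical simulated data, not the single-index estimator of the article):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5_000
x = rng.normal(size=n)
y = 1 + 2 * x + rng.normal(size=n)

# Missingness depends only on the fully observed covariate x (missing at random).
true_p = 1 / (1 + np.exp(-(0.5 + x)))          # true selection probabilities
observed = rng.uniform(size=n) < true_p

# IPW (Horvitz-Thompson style) mean of y using the true weights.
ipw_true = np.sum(observed * y / true_p) / np.sum(observed / true_p)

# IPW mean using selection probabilities estimated by logistic regression.
fit = LogisticRegression().fit(x.reshape(-1, 1), observed.astype(int))
est_p = fit.predict_proba(x.reshape(-1, 1))[:, 1]
ipw_est = np.sum(observed * y / est_p) / np.sum(observed / est_p)

print("true mean of y  :", round(y.mean(), 3))
print("IPW, true p     :", round(ipw_true, 3))
print("IPW, estimated p:", round(ipw_est, 3))
```

The efficiency gain from estimated weights that the abstract establishes for the index parameter is an asymptotic-variance statement, so it would show up across repeated simulations rather than in a single run such as this.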

7.
8.
Probability forecasting models can be estimated using weighted score functions that (by definition) capture the performance of the estimated probabilities relative to arbitrary “baseline” probability assessments, such as those produced by another model, by a bookmaker or betting market, or by a human probability assessor. Maximum likelihood estimation (MLE) is interpretable as just one such method of optimum score estimation. We find that when MLE-based probabilities are themselves treated as the baseline, forecasting models estimated by optimizing any of the proven families of power and pseudospherical economic score functions yield the very same probabilities as MLE. The finding that probabilities estimated by optimum score estimation respond to MLE-baseline probabilities by mimicking them supports reliance on MLE as the default form of optimum score estimation.
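
As a hedged numerical sketch of the connection between MLE and optimum score estimation (only the logarithmic score is shown; the weighted power and pseudospherical score families and the baseline probabilities of the paper are not implemented):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
outcomes = rng.binomial(1, 0.3, size=200)       # Bernoulli data

def neg_log_score(p):
    """Average negative logarithmic score of the constant forecast p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.mean(outcomes * np.log(p) + (1 - outcomes) * np.log(1 - p))

res = minimize_scalar(neg_log_score, bounds=(0.0, 1.0), method="bounded")
print("optimum-log-score estimate :", round(res.x, 4))
print("maximum-likelihood estimate:", round(outcomes.mean(), 4))   # the same value
```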

9.
10.
11.
The bivariate plane is symmetrically partitioned into fine rectangular regions, and a symmetric uniform association model is used to represent the resulting discretized bivariate normal probabilities. A new algorithm is developed that uses a quadrature together with this association model to approximate the diagonal probabilities; the off-diagonal probabilities are then approximated using the model. The method is an alternative to Wang's (1987) approach that is computationally advantageous and relatively easy to extend to higher dimensions. Bivariate and trivariate normal probabilities approximated by our method are observed to agree very closely with the corresponding known results.
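
As a hedged reference computation (not the association-model algorithm of the article), the following sketch shows the kind of bivariate normal rectangle probabilities being approximated, using scipy's numerical CDF and a closed-form orthant check:

```python
import numpy as np
from scipy.stats import multivariate_normal

rho = 0.5
dist = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def rect_prob(a, b):
    """P(a1 < X <= b1, a2 < Y <= b2) for the bivariate normal, via inclusion-exclusion."""
    (a1, a2), (b1, b2) = a, b
    return (dist.cdf([b1, b2]) - dist.cdf([a1, b2])
            - dist.cdf([b1, a2]) + dist.cdf([a1, a2]))

# Probability of one fine rectangular cell of the partition.
print(round(rect_prob((-0.1, -0.1), (0.1, 0.1)), 5))
# Orthant probability P(X <= 0, Y <= 0) = 1/4 + arcsin(rho)/(2*pi) for standard normals.
print(round(dist.cdf([0.0, 0.0]), 5), round(0.25 + np.arcsin(rho) / (2 * np.pi), 5))
```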

12.
In this paper, we introduce the mixed Liu estimator (MLE) for the vector of parameters in linear measurement error models by combining the sample information with prior information. The MLE is a generalization of the mixed estimator (ME) and the Liu estimator (LE). In particular, asymptotic normality properties of the estimators are discussed, and the performance of the MLE is compared with that of the LE and the ME in terms of the mean squared error matrix (MSEM). Finally, a Monte Carlo simulation and a numerical example are presented for illustration.
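
As a hedged numerical sketch of the ordinary Liu estimator that the mixed Liu estimator generalizes (the stochastic prior restrictions of the mixed estimator and the measurement-error structure are not included; the collinear design below is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 100, 4

# Strongly collinear design, to motivate shrinkage.
z = rng.normal(size=(n, 1))
X = z + 0.05 * rng.normal(size=(n, p))
beta_true = np.array([1.0, 0.5, -0.5, 2.0])
y = X @ beta_true + rng.normal(size=n)

XtX = X.T @ X
Xty = X.T @ y
beta_ols = np.linalg.solve(XtX, Xty)

def liu(d):
    """Liu estimator: (X'X + I)^{-1} (X'y + d * beta_OLS), with 0 < d < 1."""
    return np.linalg.solve(XtX + np.eye(p), Xty + d * beta_ols)

for d in (0.2, 0.5, 0.8):
    print(f"d = {d}:", np.round(liu(d), 3))
print("OLS    :", np.round(beta_ols, 3))
```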

13.
Biomarkers play a key role in the monitoring of disease progression. The time taken for an individual's biomarker to exceed or fall below a meaningful threshold is often of interest. Due to the inherent variability of biomarkers, persistence criteria are sometimes included in the definitions of progression, such that only two consecutive measurements above or below the relevant threshold signal that “true” progression has occurred. In previous work, a novel approach was developed that allowed estimation of the time to threshold using the parameters from a linear mixed model in which the residual variance was assumed to be pure measurement error. In this paper, we extend this methodology so that serial correlation can be accommodated. Assuming that the Markov property holds and applying the chain rule of probabilities, we find that the probability of progression at each timepoint can be expressed simply as the product of conditional probabilities. The methodology is applied to a cohort of HIV-positive individuals, where the time to reach a CD4 count threshold is estimated. The second application is based on a study of abdominal aortic aneurysms, where the time taken for an individual to reach a diameter exceeding 55 mm is studied. We observed that erroneously ignoring the residual correlation when it is strong may result in substantial overestimation of the time to threshold. The estimated probability of the biomarker reaching a threshold of interest, the expected time to threshold, and confidence intervals are presented for selected patients in both applications.
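
As a hedged toy calculation (not the authors' mixed-model estimator; the numbers are hypothetical), the following sketch shows how serial correlation changes the probability that two consecutive measurements fall below a threshold, i.e. the persistence criterion described above, via the chain rule P(both below) = P(first below) × P(second below | first below):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Two consecutive biomarker measurements around the same expected value,
# with residual standard deviation sigma and serial correlation rho.
mu, sigma, threshold = 420.0, 60.0, 350.0       # hypothetical CD4-like numbers
p_single = norm.cdf(threshold, loc=mu, scale=sigma)

for rho in (0.0, 0.4, 0.8):
    cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
    joint = multivariate_normal(mean=[mu, mu], cov=cov)
    p_both = joint.cdf([threshold, threshold])
    # Chain rule: P(both below) = P(first below) * P(second below | first below).
    p_cond = p_both / p_single
    print(f"rho={rho}: P(both below)={p_both:.4f}, "
          f"independence approximation={p_single**2:.4f}, "
          f"P(second | first)={p_cond:.4f}")
```

Ignoring a strong residual correlation understates the joint probability of two consecutive sub-threshold values, which is consistent with the overestimation of the time to threshold noted in the abstract.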

14.
When constructing models to summarize clinical data for use in simulations, it is good practice to evaluate the models for their capacity to reproduce the data. This can be done by means of Visual Predictive Checks (VPC), which consist of several reproductions of the original study by simulation from the model under evaluation, calculation of estimates of interest for each simulated study, and comparison of the distribution of those estimates with the estimate from the original study. The procedure is generic and, in general, straightforward to apply. Here we consider its application to time-to-event data, in the special case where a time-varying covariate is not known or cannot be approximated after the event time. In this case, simulations cannot be conducted beyond the end of the follow-up time (event or censoring time) in the original study, so the simulations must be censored at the end of the follow-up time. Since this censoring is not random, the standard Kaplan-Meier (KM) estimates from the simulated studies, and the resulting VPC, will be biased. We propose to use the inverse probability of censoring weighting (IPoC) method to correct the KM estimator for the simulated studies and obtain unbiased VPCs. For the analysis of the Cantos study, the IPoC weighting described here proved valuable and enabled the generation of VPCs to qualify PKPD models for simulations. Here, we use a generated data set, which allows illustration of the different situations and evaluation against the known truth.
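
As a hedged sketch of one ingredient of the correction, the following weighted Kaplan-Meier function accepts per-subject weights such as the inverse probability of censoring weights described above; estimation of those weights and the surrounding VPC machinery are not shown, and the simulated data are hypothetical:

```python
import numpy as np

def weighted_km(times, events, weights):
    """Weighted Kaplan-Meier survival curve.

    times   : follow-up time per subject
    events  : 1 if the event was observed, 0 if censored
    weights : per-subject weights (e.g., inverse probability of censoring weights)
    """
    order = np.argsort(times)
    times, events, weights = times[order], events[order], weights[order]
    surv, curve = 1.0, []
    for t in np.unique(times[events == 1]):
        at_risk = weights[times >= t].sum()
        d = weights[(times == t) & (events == 1)].sum()
        surv *= 1.0 - d / at_risk
        curve.append((t, surv))
    return curve

rng = np.random.default_rng(5)
n = 200
t_event = rng.exponential(10.0, n)
t_cens = rng.exponential(15.0, n)
times = np.minimum(t_event, t_cens)
events = (t_event <= t_cens).astype(int)

# Unit weights reproduce the standard KM estimator.
for t, s in weighted_km(times, events, np.ones(n))[:5]:
    print(f"t={t:6.2f}  S(t)={s:.3f}")
```

With unit weights the function reduces to the standard KM estimator, which is the baseline that the IPoC correction modifies.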

15.
It is shown that, when exposure variables are continuous, the odds ratios are functions of exposure differences if and only if Cox's binary logistic models hold in a prospective framework, and if and only if the underlying distribution belongs to a family of exponential type distributions in a retrospective framework.
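
As a hedged illustration of the prospective direction of this equivalence (a standard logistic-model calculation rather than material from the paper): under Cox's binary logistic model with a continuous exposure,

\[
\log\frac{P(Y=1\mid x)}{P(Y=0\mid x)} = \alpha + \beta x
\quad\Longrightarrow\quad
\mathrm{OR}(x_1,x_2)
= \frac{P(Y=1\mid x_1)/P(Y=0\mid x_1)}{P(Y=1\mid x_2)/P(Y=0\mid x_2)}
= e^{\beta(x_1-x_2)},
\]

so the odds ratio depends on the exposures only through their difference \(x_1 - x_2\).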

16.
17.
18.
19.
We investigate mixed models for repeated measures data from cross-over studies in general, but in particular for data from thorough QT studies. We extend both the conventional random effects model and the saturated covariance model for univariate cross-over data to repeated measures cross-over (RMC) data; the resulting models we call the RMC model and Saturated model, respectively. Furthermore, we consider a random effects model for repeated measures cross-over data previously proposed in the literature. We assess the standard errors of point estimates and the coverage properties of confidence intervals for treatment contrasts under the various models. Our findings suggest: (i) Point estimates of treatment contrasts from all models considered are similar; (ii) Confidence intervals for treatment contrasts under the random effects model previously proposed in the literature do not have adequate coverage properties; the model therefore cannot be recommended for analysis of marginal QT prolongation; (iii) The RMC model and the Saturated model have similar precision and coverage properties; both models are suitable for assessment of marginal QT prolongation; and (iv) The Akaike Information Criterion (AIC) is not a reliable criterion for selecting a covariance model for RMC data in the following sense: the model with the smallest AIC is not necessarily associated with the highest precision for the treatment contrasts, even if the model with the smallest AIC value is also the most parsimonious model.
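
As a hedged sketch of the simplest of the models discussed, the following fits a conventional random-effects (random subject intercept) model to hypothetical repeated-measures cross-over data with statsmodels; the RMC and Saturated covariance structures compared in the abstract require richer covariance specifications than this minimal example, and all column names are invented for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical repeated-measures cross-over data: QT interval measured at several
# timepoints per period, two treatments, subjects crossed over between periods.
rng = np.random.default_rng(6)
subjects, periods, timepoints = 20, 2, 4
rows = []
for s in range(subjects):
    u = rng.normal(0, 5)                       # subject-level random effect
    for p in range(periods):
        trt = (s + p) % 2                      # simple AB / BA allocation
        for t in range(timepoints):
            qt = 400 + 3 * trt + u + rng.normal(0, 4)
            rows.append(dict(subject=s, period=p, treatment=trt, time=t, QT=qt))
df = pd.DataFrame(rows)

# Random-intercept model; the treatment contrast is the quantity of interest.
model = smf.mixedlm("QT ~ C(treatment) + C(period) + C(time)",
                    data=df, groups=df["subject"])
fit = model.fit()
print(fit.summary())
```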

20.
Summary. We use cumulants to derive Bayesian credible intervals for wavelet regression estimates. The first four cumulants of the posterior distribution of the estimates are expressed in terms of the observed data and integer powers of the mother wavelet functions. These powers are closely approximated by linear combinations of wavelet scaling functions at an appropriate finer scale. Hence, a suitable modification of the discrete wavelet transform allows the posterior cumulants to be found efficiently for any given data set. Johnson transformations then yield the credible intervals themselves. Simulations show that these intervals have good coverage rates, even when the underlying function is inhomogeneous, where standard methods fail. In the case where the curve is smooth, the performance of our intervals remains competitive with established nonparametric regression methods.
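
As a hedged sketch of the wavelet regression step on which the credible-interval construction operates (simple soft-threshold shrinkage with PyWavelets; the posterior cumulants and Johnson transformation of the paper are not implemented, and the test function is invented):

```python
import numpy as np
import pywt

rng = np.random.default_rng(7)
n = 1024
x = np.linspace(0, 1, n)
signal = np.piecewise(x, [x < 0.5, x >= 0.5],
                      [lambda t: np.sin(4 * np.pi * t),
                       lambda t: np.sin(4 * np.pi * t) + 1.0])
y = signal + 0.3 * rng.standard_normal(n)       # inhomogeneous (jump) test function

# Discrete wavelet transform, soft-threshold the detail coefficients, invert.
coeffs = pywt.wavedec(y, "db4", level=6)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # robust noise estimate
thresh = sigma * np.sqrt(2 * np.log(n))                  # universal threshold
shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
estimate = pywt.waverec(shrunk, "db4")[:n]

print("RMSE of wavelet estimate:", round(np.sqrt(np.mean((estimate - signal) ** 2)), 3))
```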
