Similar Documents
20 similar documents found (search time: 170 ms)
1.
A phenomenon that I call “adaptive percolation” commonly arises in biology, business, economics, defense, finance, manufacturing, and the social sciences. Here one wishes to select a handful of entities from a large pool of entities via a process of screening through a hierarchy of sieves. The process is not unlike the percolation of a liquid through a porous medium. The probability model developed here is based on a nested and adaptive Bayesian approach that results in the product of beta-binomial distributions with common parameters. The common parameters happen to be the observed data. I call this the percolated beta-binomial distribution. The model turns out to be a slight generalization of the probabilistic model used in percolation theory. The generalization is a consequence of using a subjectively specified likelihood function to construct a probability model. The notion of using likelihoods for constructing probability models is not a part of the conventional toolkit of applied probabilists. To the best of my knowledge, the use of a product of beta-binomial distributions as a probability model for Bernoulli trials appears to be new. The material of this article is illustrated via data from the 2009 astronaut selection program, which motivated this work.
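
As a rough illustration of the building block named above (a product of beta-binomial pmfs with common parameters), here is a minimal Python sketch; the stage sizes and beta parameters are invented and are not taken from the article or the astronaut-selection data.

```python
from scipy.stats import betabinom

# illustrative (pool size n, number passing k) at each sieve -- made up
stages = [(3500, 120), (120, 40), (40, 9)]
a, b = 2.0, 5.0  # common beta parameters, chosen arbitrarily

likelihood = 1.0
for n, k in stages:
    likelihood *= betabinom.pmf(k, n, a, b)

print(f"product of beta-binomial pmfs: {likelihood:.3e}")
```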

2.
The author is concerned with log-linear estimators of the size N of a population in a capture-recapture experiment featuring heterogeneity in the individual capture probabilities and a time effect. He also considers models where the first capture influences the probability of subsequent captures. He derives several results from a new inequality associated with a dispersive ordering for discrete random variables. He shows that in a log-linear model with inter-individual heterogeneity, the estimator of N is an increasing function of the heterogeneity parameter. He also shows that the inclusion of a time effect in the capture probabilities decreases the estimator in models without heterogeneity. He further argues that a model featuring heterogeneity can accommodate a time effect through a small change in the heterogeneity parameter. He demonstrates these results using an inequality for the estimators of the heterogeneity parameters and illustrates them in a Monte Carlo experiment.
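
The article's log-linear estimators are not reproduced here; as a hedged stand-in, the sketch below applies the classical Lincoln-Petersen estimator to simulated two-occasion data with heterogeneous capture probabilities, which shows the qualitative effect the abstract discusses: heterogeneity pulls the estimate of N away from the truth.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
p = rng.beta(2, 8, size=N)        # heterogeneous capture probabilities
cap1 = rng.random(N) < p          # occasion 1 capture indicators
cap2 = rng.random(N) < p          # occasion 2 capture indicators

n1, n2, m = cap1.sum(), cap2.sum(), (cap1 & cap2).sum()
N_hat = n1 * n2 / m               # Lincoln-Petersen estimate
print(f"true N = {N}, estimated N = {N_hat:.0f}")
```

Because the same individuals tend to be caught on both occasions, the recapture count m is inflated and the estimate is biased downward.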

3.
A doubly nonstationary cylinder-based model is built to describe the dispersal of a population from a point source. In this model, each cylinder represents a fraction of the population, i.e., a group. Two contexts are considered: the dispersal can occur in a uniform habitat or in a fragmented habitat described by a conditional Boolean model. After the construction of the models, we investigate their properties: the first- and second-order moments, the probability that the population vanishes, and the distribution of the spatial extent of the population.

4.
We consider the problem of estimation of a finite population variance related to a sensitive character under a randomized response model and prove (i) the admissibility of an estimator for a given sampling design in a class of quadratic unbiased estimators and (ii) the admissibility of a sampling strategy in a class of comparable quadratic unbiased strategies.

5.
Summary. Standard goodness-of-fit tests for a parametric regression model against a series of nonparametric alternatives are based on residuals arising from a fitted model. When a parametric regression model is compared with a nonparametric model, goodness-of-fit testing can be naturally approached by evaluating the likelihood of the parametric model within a nonparametric framework. We employ the empirical likelihood for an α-mixing process to formulate a test statistic that measures the goodness of fit of a parametric regression model. The technique is based on a comparison with kernel smoothing estimators. The empirical likelihood formulation of the test has two attractive features. One is its automatic consideration of the variation associated with the nonparametric fit, due to empirical likelihood's ability to Studentize internally. The other is that the asymptotic distribution of the test statistic is free of unknown parameters, avoiding plug-in estimation. We apply the test to a discretized diffusion model which has recently been considered in financial market analysis.

6.
Many research fields increasingly involve analyzing data with a complex structure. Models investigating the dependence of a response on a predictor have moved beyond ordinary scalar-on-vector regression. We propose a regression model for a scalar response and a surface (or bivariate function) predictor. The predictor has a random component, and the regression model falls in the framework of linear random effects models. We estimate the model parameters by maximizing the log-likelihood with the ECME (Expectation/Conditional Maximization Either) algorithm. We use the approach to analyze a data set where the response is the neuroticism score and the predictor is the resting-state brain function image. In our simulations, the approach performed better than two alternatives: a functional principal component regression approach and a smooth scalar-on-image regression approach.

7.
We present a mathematical theory of objective, frequentist chance phenomena that uses a set of probability measures as its model. In this work, sets of measures are not viewed as a statistical compound hypothesis or as a tool for modeling imprecise subjective behavior. Instead, we use sets of measures to model stable (although not stationary in the traditional stochastic sense) physical sources of finite time series data that have highly irregular behavior. Such models give a coarse-grained picture of the phenomena, keeping track of the range of the possible probabilities of the events. We present methods to simulate finite data sequences coming from a source modeled by a set of probability measures, and to estimate the model from finite time series data. The estimation of the set of probability measures is based on the analysis of a set of relative frequencies of events taken along subsequences selected by a collection of rules. In particular, we provide a universal methodology for finding a family of subsequence selection rules that can estimate any set of probability measures with high probability.
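
A toy version of the estimation idea (relative frequencies along rule-selected subsequences), with made-up selection rules and a synthetic regime-switching source; the article's universal family of selection rules is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
# toy non-stationary binary source: two regimes with different rates
x = np.concatenate([rng.random(500) < 0.2,
                    rng.random(500) < 0.7]).astype(int)

rules = {
    "all indices": np.arange(x.size),
    "even indices": np.arange(0, x.size, 2),
    "first half": np.arange(x.size // 2),
    "second half": np.arange(x.size // 2, x.size),
}
# the spread of these frequencies hints at the set of measures
for name, idx in rules.items():
    print(f"{name:>12}: {x[idx].mean():.3f}")
```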

8.
This article studies the dispatch of consolidated shipments. Orders, following a batch Markovian arrival process, are received in discrete quantities by a depot at discrete time epochs. Instead of immediate dispatch, all outstanding orders are consolidated and shipped together at a later time. The decision of when to send out the consolidated shipment is made based on a “dispatch policy,” which is a function of the system state and/or the costs associated with that state. First, a tree structured Markov chain is constructed to record specific information about the consolidation process; the effectiveness of any dispatch policy can then be assessed by a set of long-run performance measures. Next, the effect on shipment consolidation of varying the order-arrival process is demonstrated through numerical examples and proved mathematically under some conditions. Finally, a heuristic algorithm is developed to determine a favorable parameter of a special set of dispatch policies, and the algorithm is proved to yield the overall optimal policy under certain conditions.
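
A hedged sketch of one simple dispatch policy, a quantity threshold rule that ships whenever outstanding orders reach Q; arrivals are simplified to Poisson here, whereas the article works with a batch Markovian arrival process, and the cost parameters are invented.

```python
import numpy as np

def avg_cost(Q, rate, hold, ship, periods, seed=0):
    rng = np.random.default_rng(seed)
    outstanding, total = 0, 0.0
    for _ in range(periods):
        outstanding += rng.poisson(rate)   # orders arriving this epoch
        total += hold * outstanding        # holding cost on the backlog
        if outstanding >= Q:               # threshold policy fires
            total += ship
            outstanding = 0
    return total / periods

for Q in (5, 10, 20, 40):
    print(Q, round(avg_cost(Q, rate=3.0, hold=1.0, ship=50.0,
                            periods=50_000), 2))
```

Sweeping Q this way mimics, very loosely, the search for a favorable policy parameter described in the abstract.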

9.
This paper considers the problem of selecting a robust threshold for wavelet shrinkage. Previous approaches reported in the literature for handling the presence of outliers mainly focus on developing a robust procedure for a given threshold, which amounts to solving a nontrivial optimization problem. The drawback of this approach is that the selection of a robust threshold, which is crucial for the resulting fit, is ignored. This paper points out that the best fit can be achieved by robust wavelet shrinkage with a robust threshold. We propose data-driven selection methods for a robust threshold. These approaches are based on coupling classical wavelet thresholding rules with pseudo data. The concept of pseudo data has influenced the implementation of the proposed methods and provides a fast and efficient algorithm. Results from a simulation study and a real example demonstrate the promising empirical properties of the proposed approaches.
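
For orientation, the following is the classical (non-robust) wavelet shrinkage baseline with the universal threshold, using PyWavelets; the robust, pseudo-data-based threshold selection proposed in the article is not reproduced here.

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)
n = 1024
t = np.linspace(0, 1, n)
signal = np.sin(8 * np.pi * t ** 2)
noisy = signal + 0.3 * rng.standard_normal(n)

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # MAD noise estimate
thr = sigma * np.sqrt(2 * np.log(n))             # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                        for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")
print(f"RMSE after shrinkage: {np.sqrt(np.mean((denoised - signal) ** 2)):.3f}")
```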

10.
Abstract.  A Markov property associates a set of conditional independencies to a graph. Two alternative Markov properties are available for chain graphs (CGs), the Lauritzen–Wermuth–Frydenberg (LWF) and the Andersson–Madigan–Perlman (AMP) Markov properties, which are different in general but coincide for the subclass of CGs with no flags. Markov equivalence induces a partition of the class of CGs into equivalence classes, and every equivalence class contains a, possibly empty, subclass of CGs with no flags, itself containing a, possibly empty, subclass of directed acyclic graphs (DAGs). LWF-Markov equivalence classes of CGs can be naturally characterized by means of the so-called largest CGs, whereas a graphical characterization of equivalence classes of DAGs is provided by the essential graphs. In this paper, we show the existence of largest CGs with no flags that provide a natural characterization of equivalence classes of CGs of this kind, with respect to both the LWF- and the AMP-Markov properties. We propose a procedure for the construction of the largest CGs, the largest CGs with no flags and the essential graphs, thereby providing a unified approach to the problem. As by-products we obtain a characterization of graphs that are largest CGs with no flags and an alternative characterization of graphs which are largest CGs. Furthermore, a known characterization of the essential graphs is shown to be a special case of our more general framework. The three graphical characterizations have a common structure: they use two versions of a locally verifiable graphical rule. Moreover, in the case of DAGs, an immediate comparison of the three characterizing graphs is possible.

11.
A composition is a vector of positive components summing to a constant. The sample space of a composition is the simplex, and the sample space of two compositions, a bicomposition, is a Cartesian product of two simplices. We present a way of generating random variates from a bicompositional Dirichlet distribution defined on the Cartesian product of two simplices using the rejection method. We derive a general solution for finding a dominating density function and a rejection constant and also compare this solution to using a uniform dominating density function. Finally, some examples of generated bicompositional random variates, with varying numbers of components, are presented.
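
A generic rejection-method sketch on the product of two simplices. The coupling kernel below is hypothetical and merely stands in for the bicompositional Dirichlet density; the dominating density is a product of two independent Dirichlet distributions, echoing the comparison described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = np.array([2.0, 3.0, 2.0])   # Dirichlet parameters, first simplex
beta = np.array([1.5, 1.5, 4.0])    # Dirichlet parameters, second simplex

def coupling(x, y):
    # hypothetical coupling kernel linking the two compositions;
    # the target density is proportional to coupling * Dir(x) * Dir(y)
    return np.exp(-np.sum((x - y) ** 2))

M = 1.0  # valid envelope constant since the kernel never exceeds 1

def draw_pair():
    while True:
        x, y = rng.dirichlet(alpha), rng.dirichlet(beta)
        if rng.random() * M <= coupling(x, y):   # accept/reject step
            return x, y

for x, y in (draw_pair() for _ in range(3)):
    print(np.round(x, 3), np.round(y, 3))
```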

12.
Extreme Value Theory (EVT) studies the tails of probability distributions in order to measure and quantify extreme events, both maxima and minima. In river flow data, an extreme level of a river may be related to the level of a neighboring river that flows into it. In this type of data, it is very common for flooding at a location to have been caused by a very large flow from a tributary tens or hundreds of kilometers away. In this sense, an interesting approach is to consider a conditional model for the estimation of a multivariate model. Inspired by this idea, we propose a Bayesian model to describe the dependence of exceedances between rivers, in which we consider a conditionally independent structure. In this model, the dependence between rivers is captured by modeling the marginal excesses of one river as linear functions of the excesses of the other rivers. The results show a strong and positive connection between the excesses in one river and the excesses of the other rivers.
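
Standard univariate peaks-over-threshold machinery, shown as context for the excess modeling above; this sketch fits a generalized Pareto distribution to synthetic flows and is not the article's Bayesian conditional model.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(4)
flows = rng.gamma(shape=2.0, scale=50.0, size=20_000)  # synthetic flows

u = np.quantile(flows, 0.95)        # high threshold
excess = flows[flows > u] - u       # exceedances over the threshold

xi, _, sigma = genpareto.fit(excess, floc=0.0)
p_u = (flows > u).mean()
# m-observation return level via the standard POT formula
m = 1000
level = u + genpareto.ppf(1 - 1 / (m * p_u), xi, loc=0, scale=sigma)
print(f"xi = {xi:.3f}, sigma = {sigma:.1f}, {m}-obs return level = {level:.1f}")
```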

13.
Non-coding deoxyribonucleic acid (DNA) can typically be modelled by a sequence of Bernoulli random variables by coding one base, e.g. T, as 1 and the other bases as 0. If a segment of a sequence is functionally important, the probability of a 1 in this changed segment will differ from that in the surrounding DNA. It is important to be able to see whether such a segment occurs in a particular DNA sequence and to pinpoint it so that a molecular biologist can investigate its possible function. Here we discuss methods for testing for the occurrence of such a changed segment and for estimating its end points. Maximum-likelihood-based methods are not very tractable, and so a nonparametric method based on the approach of Pettitt has been developed. The problem and its solution are illustrated by a specific DNA example.
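
A naive brute-force stand-in for locating a changed segment in a Bernoulli sequence (not Pettitt's nonparametric procedure used in the article): scan candidate segments and score each by its Bernoulli log-likelihood gain over a constant-probability model.

```python
import numpy as np

def loglik(k, n):
    # Bernoulli log-likelihood at the MLE p = k/n (0 log 0 := 0)
    if k in (0, n):
        return 0.0
    p = k / n
    return k * np.log(p) + (n - k) * np.log(1 - p)

def best_segment(x, min_len=20):
    n = len(x)
    cum = np.concatenate([[0], np.cumsum(x)])
    best, arg = -np.inf, None
    for i in range(n - min_len):
        for j in range(i + min_len, n + 1):
            k_in = cum[j] - cum[i]
            k_out = cum[n] - k_in
            score = (loglik(k_in, j - i) + loglik(k_out, n - (j - i))
                     - loglik(cum[n], n))
            if score > best:
                best, arg = score, (i, j)
    return arg, best

rng = np.random.default_rng(5)
x = rng.random(300) < 0.25
x[120:180] = rng.random(60) < 0.7    # planted changed segment
print(best_segment(x.astype(int)))   # end points should be near (120, 180)
```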

14.
In estimating the shape parameter of a two-parameter Weibull distribution from a failure-censored sample, a recently popular procedure is to employ a testimator, a shrinkage estimator based on a preliminary hypothesis test for a guessed value of the parameter. Such an adaptive testimator is a linear compound of the guessed value and a statistic. A new compounding coefficient is numerically shown to yield higher efficiency in many situations compared with some of the existing ones.
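
A minimal sketch of the testimator idea under stated assumptions: the preliminary test rule and the compounding weight k below are arbitrary illustrative choices, not the article's new coefficient, and the sample here is complete rather than failure-censored.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(6)
data = weibull_min.rvs(c=1.8, scale=100.0, size=40, random_state=rng)

c_hat, _, _ = weibull_min.fit(data, floc=0)   # shape MLE (location fixed at 0)
c0 = 2.0                                      # guessed shape value

# crude preliminary test (illustrative): accept the guess if the MLE
# falls within 20% of it, then shrink toward the guess
if abs(c_hat - c0) / c0 < 0.2:
    k = 0.5                                   # arbitrary compounding weight
    c_testimator = k * c0 + (1 - k) * c_hat   # linear compound
else:
    c_testimator = c_hat
print(f"shape MLE = {c_hat:.3f}, testimator = {c_testimator:.3f}")
```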

15.
We introduce and study a class of rank-based estimators for the linear model. The estimate may be roughly described as being calculated in the same manner as a generalized M-estimate, but with the residual replaced by a function of its signed rank. The influence function can thus be bounded, both as a function of the residual and as a function of the carriers. Subject to such a bound, the efficiency at a particular model distribution can be optimized by appropriate choices of rank scores and carrier weights. Such choices are given with respect to a variety of optimality criteria. We compare our estimates with several others in a Monte Carlo study and on a real data set from the literature.

16.
A distribution function is estimated by a kernel method with a pointwise mean squared error criterion at a point x. Relationships between the mean squared error, the point x, the sample size and the required kernel smoothing parameter are investigated for several distributions treated by Azzalini (1981). In particular, it is noted that the kernel method breaks down at a centre of symmetry or near a mode of the distribution. Pointwise estimation of a distribution function is motivated as a more useful technique than a reference range for preliminary medical diagnosis.
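
A minimal kernel estimate of a distribution function at a point x, averaging Gaussian kernel CDFs centred at the observations; the bandwidth rule below is an ad hoc choice for illustration, not the MSE-optimal smoothing parameter studied above.

```python
import numpy as np
from scipy.stats import norm

def kernel_cdf(x, sample, h):
    # average of Gaussian kernel CDFs centred at the observations
    return norm.cdf((x - sample) / h).mean()

rng = np.random.default_rng(7)
sample = rng.normal(loc=50.0, scale=10.0, size=200)
h = 1.06 * sample.std() * sample.size ** (-1 / 5)  # ad hoc bandwidth

for x in (40.0, 50.0, 60.0):
    est = kernel_cdf(x, sample, h)
    true = norm.cdf((x - 50.0) / 10.0)
    print(f"x = {x}: estimate {est:.3f}, true F(x) {true:.3f}")
```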

17.
In the case where non-experimental data are available from an industrial process and a directed graph describing how various factors affect a response variable is known from a substantive understanding of the process, we consider the problem of conducting a control plan involving multiple treatment variables in order to bring a response variable close to a target value while reducing its variation. Using statistical causal analysis with linear (recursive and non-recursive) structural equation models, we configure an optimal control plan involving multiple treatment variables through causal parameters. Based on this formulation, we clarify the causal mechanism by which the variance of a response variable changes when the control plan is conducted. The results enable us to evaluate the effect of a control plan on the variance of a response variable from non-experimental data and provide a new application of linear structural equation models to engineering science.

18.
Flexible Class of Skew-Symmetric Distributions
Abstract.  We propose a flexible class of skew-symmetric distributions for which the probability density function has the form of a product of a symmetric density and a skewing function. By constructing an enumerable dense subset of skewing functions on a compact set, we are able to consider a family of distributions which can capture skewness, heavy tails and multimodality systematically. We present three illustrative examples for the fibreglass data, simulated data from a mixture of two normal distributions and the Swiss bills data.
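
The construction in its simplest form is f(x) = 2 g(x) w(x), with g a symmetric density and w a skewing function satisfying w(-x) = 1 - w(x); taking w(x) = Φ(αx) with a standard normal g recovers the skew-normal density, a single member of the flexible class proposed in the article. A quick sanity check in Python:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def skew_density(x, alpha):
    # f(x) = 2 * g(x) * w(x): symmetric g, skewing function w
    return 2.0 * norm.pdf(x) * norm.cdf(alpha * x)

# sanity check: the construction integrates to 1 for any valid w
total, _ = quad(skew_density, -np.inf, np.inf, args=(3.0,))
print(f"integral of the alpha = 3 density: {total:.6f}")
```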

19.
Methods for a sequential test of a dose-response effect in pre-clinical studies are investigated. The objective of the test procedure is to compare several dose groups with a zero-dose control. The sequential testing is conducted within a closed family of one-sided tests. The procedures investigated are based on a monotonicity assumption. These closed procedures strongly control the familywise error rate while providing information about the shape of the dose-response relationship. The performance of the sequential testing procedures is compared via a Monte Carlo simulation study. We illustrate the procedures by application to a real data set.

20.
This article considers testing the significance of a regressor with a near unit root in a predictive regression model. The procedures discussed in this article are nonparametric, so one can test the significance of a regressor without specifying a functional form. The results are used to test the null hypothesis that the entire regression function takes the value zero. We show that the standardized test has a normal limiting distribution regardless of whether there is a near unit root in the regressor. This is in contrast to tests based on linear regression for this model, where tests have a nonstandard limiting distribution that depends on nuisance parameters. Our results have practical implications for testing the significance of a regressor, since there is no need to conduct pretests for a unit root in the regressor and the same procedure can be used whether or not the regressor has a unit root. A Monte Carlo experiment explores the performance of the test for various levels of persistence of the regressors and for various linear and nonlinear alternatives. The test has superior performance against certain nonlinear alternatives. An application to stock returns shows how the test can improve inference about predictability.
