Similar Documents
20 similar documents found (search time: 15 ms)
1.
Summary. In New Testament studies, the synoptic problem concerns the relationships between the gospels of Matthew, Mark and Luke. In an earlier paper, a careful probabilistic specification of Honoré's triple-link model was set up. In the present paper, a modification of Honoré's model is proposed. As previously, counts of the numbers of verbal agreements between the gospels are examined to investigate which of the possible triple-link models appears to give the best fit to the data, but now using the modified version of the model and additional sets of data.

2.
A particular case of Jain and Consul's (1971) generalized negative binomial distribution is studied. The name inverse binomial is suggested because of its close relation with the inverse Gaussian distribution. We develop statistical properties, including conditional inference for a parameter. An application using real data is given.

3.
Summary. The primary goal of multivariate statistical process performance monitoring is to identify deviations from normal operation within a manufacturing process. The basis of the monitoring schemes is historical data that have been collected when the process is running under normal operating conditions. These data are then used to establish confidence bounds to detect the onset of process deviations. In contrast with the traditional approaches that are based on the Gaussian assumption, this paper proposes the application of the infinite Gaussian mixture model (GMM) for the calculation of the confidence bounds, thereby relaxing the previous restrictive assumption. The infinite GMM is a special case of Dirichlet process mixtures and is introduced as the limit of the finite GMM, i.e. when the number of mixtures tends to ∞. On the basis of the estimation of the probability density function, via the infinite GMM, the confidence bounds are calculated by using the bootstrap algorithm. The methodology proposed is demonstrated through its application to a simulated continuous chemical process, and a batch semiconductor manufacturing process.
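The final bootstrap step described above can be sketched independently of the infinite GMM. The sketch below is a minimal stand-in, assuming a generic one-dimensional monitoring statistic: it resamples the data with replacement and takes the empirical 99% quantile of each resample to form an upper confidence bound. The function name and parameters are illustrative, not from the paper.

```python
import random
import statistics

def bootstrap_upper_bound(data, q=0.99, n_boot=500, rng=None):
    """Percentile-bootstrap estimate of an upper control limit:
    resample the data with replacement, take the q-quantile of each
    resample, and return the median of those bootstrap quantiles."""
    rng = rng or random.Random(0)
    n = len(data)
    boot_quantiles = []
    for _ in range(n_boot):
        resample = sorted(rng.choice(data) for _ in range(n))
        boot_quantiles.append(resample[int(q * (n - 1))])
    return statistics.median(boot_quantiles)

rng = random.Random(42)
data = [rng.gauss(0.0, 1.0) for _ in range(1000)]  # "normal operation" history
limit = bootstrap_upper_bound(data, q=0.99)
```

A new observation exceeding `limit` would then signal a possible process deviation.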

4.
A method is proposed to model individual patterns of growth over time by linear combinations of optimally chosen weighted orthogonal vectors. The goal is to distinguish individuals who track from nontrackers. Nontrackers are defined as those who follow different, usually more complex, growth patterns than trackers. Thus, nontrackers require more vectors than do trackers in modeling their longitudinal observations. A method of specifying the class-specific vectors and individual weights is demonstrated. When the proportion of nontrackers in the population is small, a modified form of the Akaike maximum entropy criterion is used to select the number of vectors appropriate for each person and also to classify each person into a tracking category. When the proportion of nontrackers is large, the modified Akaike criterion together with scatterplots of the growth curve weights are needed to distinguish trackers from nontrackers. The approach is illustrated with longitudinal observations of height measured in an epidemiologic survey of children.
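A minimal sketch of the vector-counting idea, under the simplifying assumption that the class-specific vectors are orthonormalized polynomials in time (the paper's vectors are chosen optimally from the data): a linear "tracker" is fully captured by two vectors, while a quadratic "nontracker" needs a third.

```python
import math

def gram_schmidt(vectors):
    """Orthonormalize a list of vectors (lists of floats) over fixed
    observation times via classical Gram-Schmidt."""
    basis = []
    for v in vectors:
        w = v[:]
        for b in basis:
            c = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - c * bi for wi, bi in zip(w, b)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        basis.append([wi / norm for wi in w])
    return basis

def project(y, basis, k):
    """Model y with the first k orthonormal vectors; return the
    individual weights and the residual sum of squares."""
    weights = [sum(yi * bi for yi, bi in zip(y, b)) for b in basis[:k]]
    fit = [sum(w * b[i] for w, b in zip(weights, basis)) for i in range(len(y))]
    rss = sum((yi - fi) ** 2 for yi, fi in zip(y, fit))
    return weights, rss

times = [float(t) for t in range(8)]
poly = [[t ** d for t in times] for d in range(3)]          # 1, t, t^2
basis = gram_schmidt(poly)
tracker = [2.0 + 0.5 * t for t in times]                    # linear growth
nontracker = [1.0 + 0.3 * t + 0.2 * t * t for t in times]   # quadratic growth
_, rss_tracker = project(tracker, basis, 2)
_, rss_nontracker2 = project(nontracker, basis, 2)
_, rss_nontracker3 = project(nontracker, basis, 3)
```

The residual left after two vectors is what flags the nontracker; a model-selection criterion such as AIC would formalize that comparison.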

5.
The random parking problem has been of interest to investigators in several disciplines. Physical chemists have investigated such models in two and three dimensions. Because of analytical difficulties, one-dimensional analogues have been explored, and these are referred to as the parking problem. A number of results are explored and attempts are made to tie them together. Applications are also highlighted.
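Rényi's one-dimensional parking model can be simulated directly: unit cars are placed uniformly at random without overlap until no gap can hold another car, and the expected jamming density tends to Rényi's constant ≈ 0.7476. A minimal sketch:

```python
import random

def park(a, b, rng):
    """Randomly park unit-length cars on the interval [a, b]: place one
    car uniformly, then fill the two remaining gaps recursively.
    Returns the number of cars parked."""
    if b - a < 1.0:
        return 0
    x = rng.uniform(a, b - 1.0)        # left end of the new car
    return 1 + park(a, x, rng) + park(x + 1.0, b, rng)

rng = random.Random(7)
L = 500.0
densities = [park(0.0, L, rng) / L for _ in range(20)]
mean_density = sum(densities) / len(densities)  # ~ 0.7476 for large L
```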

6.
Periodically, the pyramid or “chain letter” scheme is offered to Americans under the guise of a business dealership. Recently, the FTC ordered Glen Turner's “Dare to be Great” firm to repay 44 million dollars to participants. In order to demonstrate that the potential gains are misrepresented by promoters, a probability model of the pyramid scheme is developed. The major implications are that the vast majority of participants have less than a ten percent chance of recouping their initial investment when a small profit is achieved as soon as they recruit three people and that, on the average, half of the participants will recruit no one else and lose all their money.
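The "half recruit no one" conclusion can be illustrated with a simplified model (an assumption here, not the paper's exact scheme) in which each newcomer is signed up by a uniformly chosen existing participant. The resulting structure is a random recursive tree, in which about half of all participants end up with no recruits at all:

```python
import random

def simulate_pyramid(n, rng):
    """Each newcomer is recruited by a uniformly chosen existing
    participant; return the fraction who never recruit anyone."""
    recruits = [0]                       # participant 0 founds the scheme
    for newcomer in range(1, n):
        recruiter = rng.randrange(newcomer)
        recruits[recruiter] += 1
        recruits.append(0)
    return sum(1 for r in recruits if r == 0) / n

rng = random.Random(1)
frac = simulate_pyramid(20000, rng)      # close to 1/2 for large n
```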

7.
An appealing, but invalid, derivation of the probability that at least one of n events occurs is justified, using a particular definition of subtraction of events. The probabilities that exactly m and at least m of the n events occur are derived similarly.
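The standard route to the same probability is inclusion-exclusion. A small numeric check, assuming independent events so that P(at least one) also has the closed form 1 − ∏(1 − p_i):

```python
from itertools import combinations
from math import prod

p = [0.3, 0.5, 0.2, 0.4]  # marginal probabilities of independent events

# Inclusion-exclusion: P(at least one) = sum over nonempty subsets S of
# (-1)^(|S|+1) * P(intersection of the events in S)
incl_excl = sum(
    (-1) ** (k + 1)
    * sum(prod(p[i] for i in S) for S in combinations(range(len(p)), k))
    for k in range(1, len(p) + 1)
)

# Complement formula, valid under independence
direct = 1.0 - prod(1.0 - pi for pi in p)
```

Both routes give 0.832 for these probabilities.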

8.
The problem of testing the equality of the medians of several populations is considered. Standard distribution-free procedures for this problem require that the populations have the same shape in order to maintain their nominal significance level, even asymptotically, under the null hypothesis of equal medians. A modification of the Kruskal-Wallis test statistic is proposed which is exactly distribution-free under the usual nonparametric assumption that the continuous populations are identical with any shape. It is asymptotically distribution-free when the continuous populations are assumed to be symmetric with equal medians.
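The unmodified Kruskal-Wallis statistic that the proposal builds on is H = 12/(N(N+1)) Σ nᵢ(R̄ᵢ − (N+1)/2)², computed from pooled midranks. A minimal implementation (the paper's modification itself is not reproduced here):

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H = 12/(N(N+1)) * sum n_i (Rbar_i - (N+1)/2)^2,
    using midranks for tied observations."""
    pooled = sorted((x, gi) for gi, g in enumerate(groups) for x in g)
    n_total = len(pooled)
    rank_sum = [0.0] * len(groups)
    i = 0
    while i < n_total:                     # walk runs of tied values
        j = i
        while j < n_total and pooled[j][0] == pooled[i][0]:
            j += 1
        midrank = (i + 1 + j) / 2.0        # average of ranks i+1 .. j
        for k in range(i, j):
            rank_sum[pooled[k][1]] += midrank
        i = j
    h = 0.0
    for gi, g in enumerate(groups):
        rbar = rank_sum[gi] / len(g)
        h += len(g) * (rbar - (n_total + 1) / 2.0) ** 2
    return 12.0 * h / (n_total * (n_total + 1))

h = kruskal_wallis_h([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```

For these three fully separated groups the mean ranks are 2, 5 and 8, giving H = 7.2.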

9.
Self-reported income information particularly suffers from an intentional coarsening of the data, which is called heaping or rounding. If it does not occur completely at random – which is usually the case – heaping and rounding have detrimental effects on the results of statistical analysis. Conventional statistical methods do not consider this kind of reporting bias, and thus might produce invalid inference. We describe a novel statistical modeling approach that allows us to deal with self-reported heaped income data in an adequate and flexible way. We suggest modeling heaping mechanisms and the true underlying model in combination. To describe the true net income distribution, we use the zero-inflated log-normal distribution. Heaping points are identified from the data by applying a heuristic procedure comparing a hypothetical income distribution and the empirical one. To determine heaping behavior, we employ two distinct models: either we assume piecewise constant heaping probabilities, or heaping probabilities are considered to increase steadily with proximity to a heaping point. We validate our approach by some examples. To illustrate the capacity of the proposed method, we conduct a case study using income data from the German National Educational Panel Study.
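A toy version of such a heaping mechanism (the parameters are illustrative, not from the paper) rounds a reported income to the nearest grid point with a fixed probability, which makes the heap points heavily overrepresented in the reported data:

```python
import random

def heap_income(income, rng, p_heap=0.7, grid=500):
    """With probability p_heap, report income rounded to the nearest
    grid point; otherwise report it exactly (a crude heaping model)."""
    if rng.random() < p_heap:
        return grid * round(income / grid)
    return income

rng = random.Random(3)
true_incomes = [rng.lognormvariate(7.5, 0.5) for _ in range(10000)]
reported = [heap_income(x, rng) for x in true_incomes]

# Exact multiples of 500 are (almost surely) only the heaped reports,
# so their share recovers roughly p_heap.
frac_on_grid = sum(1 for x in reported if x % 500 == 0) / len(reported)
```

An estimation method that ignored this mechanism would treat the spikes at 500-euro multiples as genuine features of the income distribution.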

10.
A variance components model with response variable depending on both fixed effects of explanatory variables and random components is specified to model longitudinal circular data, in order to study the directional behaviour of small animals, such as insects, crustaceans and amphipods. Unknown parameter estimators are obtained using a simulated maximum likelihood approach. Issues concerning log-likelihood variability and the related problems in the optimization algorithm are also addressed. The procedure is applied to the analysis of directional choices under full natural conditions of Talitrus saltator from Castiglione della Pescaia (Italy) beaches.
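For intuition, circular data of this kind can be simulated from a von Mises distribution (a common circular model, assumed here; the paper's variance-components structure is omitted), and the preferred direction recovered from the circular mean:

```python
import math
import random

rng = random.Random(11)
mu, kappa = 1.0, 5.0  # mean direction (radians) and concentration
angles = [rng.vonmisesvariate(mu, kappa) for _ in range(2000)]

# Circular mean direction: atan2 of the mean sine and mean cosine,
# which handles the wrap-around at 2*pi correctly.
s = sum(math.sin(a) for a in angles) / len(angles)
c = sum(math.cos(a) for a in angles) / len(angles)
mean_direction = math.atan2(s, c)
```

A naive arithmetic mean of the angles would be badly biased whenever the data straddle the 0/2π cut.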

11.
Asymptotic theory for the Cox semi-Markov illness-death model
Irreversible illness-death models are used to model disease processes and in cancer studies to model disease recovery. In most applications, a Markov model is assumed for the multistate model. When there are covariates, a Cox (1972, J Roy Stat Soc Ser B 34:187–220) model is used to model the effect of covariates on each transition intensity. Andersen et al. (2000, Stat Med 19:587–599) proposed a Cox semi-Markov model for this problem. In this paper, we study the large-sample theory for that model and provide the asymptotic variances of various probabilities of interest. A Monte Carlo study is conducted to investigate the robustness and efficiency of the Markov and semi-Markov estimators. A real data example from the PROVA (1991, Hepatology 14:1016–1024) trial is used to illustrate the theory.

12.
Numerical methods are needed to obtain maximum-likelihood estimates (MLEs) in many problems. Computation time can be an issue for some likelihoods even with modern computing power. We consider one such problem where the assumed model is a random-clumped multinomial distribution. We compute MLEs for this model in parallel using the Toolkit for Advanced Optimization software library. The computations are performed on a distributed-memory cluster with low latency interconnect. We demonstrate that for larger problems, scaling the number of processes improves wall clock time significantly. An illustrative example shows how parallel MLE computation can be useful in a large data analysis. Our experience with a direct numerical approach indicates that more substantial gains may be obtained by making use of the specific structure of the random-clumped model.
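The parallel pattern itself, splitting the log-likelihood sum over data chunks and combining the partial sums, can be sketched with a cheap stand-in likelihood (an exponential term here, not the random-clumped multinomial, and threads rather than a distributed-memory cluster):

```python
import math
from concurrent.futures import ThreadPoolExecutor

def chunk_loglik(xs, lam=2.0):
    """Exponential log-likelihood contribution of one chunk of data,
    a stand-in for a more expensive per-observation likelihood term."""
    return math.fsum(math.log(lam) - lam * x for x in xs)

data = [0.1 * (i % 37) + 0.05 for i in range(100000)]
chunks = [data[i::8] for i in range(8)]          # split across 8 workers

with ThreadPoolExecutor(max_workers=8) as pool:
    parallel_ll = math.fsum(pool.map(chunk_loglik, chunks))

serial_ll = chunk_loglik(data)                   # same total, one worker
```

Because the log-likelihood is a sum over independent observations, the chunked total agrees with the serial one, which is exactly what makes this family of problems embarrassingly parallel.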

13.
14.
15.
Let X1,…,Xr−1, Xr, Xr+1,…,Xn be independent, continuous random variables such that Xi, i = 1,…,r, has distribution function F(x), and Xi, i = r+1,…,n, has distribution function F(x−Δ), with −∞ < Δ < ∞. When the integer r is unknown, this is referred to as a changepoint problem with at most one change. The unknown parameter Δ represents the magnitude of the change, and r is called the changepoint. In this paper we present a general review of several nonparametric approaches for making inferences about r and Δ.
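One standard nonparametric approach of this kind estimates r by maximizing a centered Mann-Whitney statistic over candidate split points. A minimal sketch (this generic estimator is an illustration, not a specific method from the review):

```python
import random

def estimate_changepoint(x):
    """Nonparametric changepoint estimate: for each split r, compute the
    Mann-Whitney count U_r between the two segments, center it by its
    null mean r*(n-r)/2, and take the split maximizing |U_r - mean|."""
    n = len(x)
    best_r, best_stat = 1, -1.0
    for r in range(1, n):
        u = sum(1 for i in range(r) for j in range(r, n) if x[i] < x[j])
        stat = abs(u - r * (n - r) / 2.0)
        if stat > best_stat:
            best_r, best_stat = r, stat
    return best_r

rng = random.Random(5)
# Shift of Delta = 4 after the true changepoint r = 30
x = ([rng.gauss(0.0, 1.0) for _ in range(30)]
     + [rng.gauss(4.0, 1.0) for _ in range(30)])
r_hat = estimate_changepoint(x)
```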

16.
We consider here a class of test statistics based on exceeding observations and develop exceedance-type tests for the two-sample hypothesis testing problem. The exact distributions of the statistics are derived under the null hypothesis as well as under the Lehmann alternative, and a comparative power study is then carried out.
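A classical exceedance statistic counts how many Y-observations exceed every X-observation. Under the null hypothesis (all m + n observations iid and continuous) its tail probability has the closed form P(A ≥ a) = C(n,a)/C(m+n,a), since the top a order statistics must all come from the Y-sample; a quick Monte Carlo run confirms it:

```python
from math import comb
import random

def p_at_least(m, n, a):
    """P(at least a of the n Y-observations exceed every X-observation)
    under H0: the top a ranks are all Y's, so P = C(n,a)/C(m+n,a)."""
    return comb(n, a) / comb(m + n, a)

# Monte Carlo check of the exact formula
rng = random.Random(2)
m, n, a = 5, 5, 2
reps, hits = 20000, 0
for _ in range(reps):
    xs = [rng.random() for _ in range(m)]
    ys = [rng.random() for _ in range(n)]
    top_x = max(xs)
    if sum(1 for y in ys if y > top_x) >= a:
        hits += 1
mc = hits / reps   # should be near C(5,2)/C(10,2) = 2/9
```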

17.
We introduce a semi-parametric Bayesian approach based on skewed Dirichlet process priors for location parameters in the ordinal calibration problem. This approach allows the modeling of asymmetrical error distributions. Conditional posterior distributions are implemented, thus allowing the use of Markov chain Monte Carlo methods to generate the posterior distributions. The methodology is applied to both simulated and real data.

18.
Sarjinder Singh, Statistics 47(3) (2013), 566–574
In this note, a dual problem to the calibration of design weights in the Deville and Särndal [Calibration estimators in survey sampling, J. Amer. Statist. Assoc. 87 (1992), pp. 376–382] method is considered. We conclude that the chi-squared distance between the design weights and the calibrated weights equals the square of the standardized Z-score formed from the difference between the known population total of the auxiliary variable and its Horvitz and Thompson [A generalization of sampling without replacement from a finite universe, J. Amer. Statist. Assoc. 47 (1952), pp. 663–685] estimator, divided by the sample standard deviation of the auxiliary variable; this yields the linear regression estimator in survey sampling.
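The flavor of the identity can be checked numerically: chi-squared-distance calibration on one auxiliary variable gives wᵢ = dᵢ(1 + λxᵢ), and the resulting distance equals the squared difference between the known total and its Horvitz-Thompson estimator, standardized here by Σ dᵢxᵢ² (a simplifying normalization assumed for this sketch, not the paper's exact scaling):

```python
from math import isclose

# Design weights d_i, auxiliary values x_i, known population total X
d = [10.0, 12.0, 8.0, 15.0, 9.0]
x = [3.0, 1.5, 4.0, 2.5, 5.0]
X_pop = 170.0

# Horvitz-Thompson estimator of the auxiliary total
x_ht = sum(di * xi for di, xi in zip(d, x))

# Minimize sum (w_i - d_i)^2 / d_i subject to sum w_i x_i = X_pop,
# which gives w_i = d_i * (1 + lam * x_i) with the Lagrange multiplier:
lam = (X_pop - x_ht) / sum(di * xi * xi for di, xi in zip(d, x))
w = [di * (1.0 + lam * xi) for di, xi in zip(d, x)]

chi_sq = sum((wi - di) ** 2 / di for wi, di in zip(w, d))
z_sq = (X_pop - x_ht) ** 2 / sum(di * xi * xi for di, xi in zip(d, x))
```

The calibrated weights reproduce the known total exactly, and the chi-squared distance coincides with the squared standardized difference.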

19.
20.
The exponential failure model is studied from the hierarchical point of view. The parameter of the exponential is treated as a random variable with a gamma distribution as a prior. Furthermore, the scale parameter of the gamma prior is assumed to be a random variable with a known hyperprior. Under these assumptions, estimators are derived for the exponential parameter and the reliability function. Monte Carlo simulation is used to compare the various estimators.
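The first, non-hierarchical stage is standard conjugate analysis: with exponential data and a gamma(α, β) prior on the rate (shape α, rate β), the posterior is gamma(α + n, β + Σx), giving closed forms for both the rate estimator and the posterior mean reliability. The hyperprior stage of the paper is omitted in this sketch:

```python
import random

rng = random.Random(9)
true_lam = 2.0
data = [rng.expovariate(true_lam) for _ in range(500)]

alpha, beta = 2.0, 1.0                        # gamma prior on the rate
n, s = len(data), sum(data)
post_mean_lam = (alpha + n) / (beta + s)      # Bayes estimator of the rate
mle_lam = n / s                               # maximum-likelihood estimate

# Posterior mean of the reliability function R(t) = exp(-lam * t):
# E[exp(-lam * t) | data] = (1 + t / (beta + s)) ** -(alpha + n),
# the Laplace transform of the gamma posterior.
t = 0.5
post_mean_reliability = (1.0 + t / (beta + s)) ** (-(alpha + n))
```

With n = 500 observations the Bayes and maximum-likelihood estimators of the rate nearly coincide, as expected.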
