Similar Documents
20 similar documents found.
1.
Process regression methodology is underdeveloped relative to the frequency with which pertinent data arise. In this article, the response is a binary indicator process representing the joint event of being alive and remaining in a specific state. The process is indexed by time (e.g., time since diagnosis) and observed continuously. Data of this sort occur frequently in the study of chronic disease. A general area of application involves a recurrent event with non-negligible duration (e.g., hospitalization and associated length of hospital stay) and subject to a terminating event (e.g., death). We propose a semiparametric multiplicative model for the process version of the probability of being alive and in the (transient) state of interest. Under the proposed methods, the regression parameter is estimated through a procedure that does not require estimating the baseline probability. Unlike the majority of process regression methods, the proposed methods accommodate multiple sources of censoring. In particular, we derive a computationally convenient variant of inverse probability of censoring weighting based on the additive hazards model. We show that the regression parameter estimator is asymptotically normal, and that the baseline probability function estimator converges to a Gaussian process. Simulations demonstrate that our estimators have good finite sample performance. We apply our method to national end-stage liver disease data. The Canadian Journal of Statistics 48: 222–237; 2020 © 2019 Statistical Society of Canada
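For intuition about the inverse-probability-of-censoring weighting (IPCW) step, here is a minimal Python sketch. It is not the authors' method: it weights by a Kaplan–Meier estimate of the censoring survival rather than the additive-hazards-based weights derived in the paper, and the function names and data layout are invented for illustration.

```python
import numpy as np

def censoring_survival(obs_time, is_censored, grid):
    # Kaplan-Meier estimate of the censoring survival G(t) = P(C > t),
    # treating censoring as the "event" of interest.
    out = np.empty(len(grid))
    for j, g in enumerate(grid):
        prod = 1.0
        for t in np.unique(obs_time[is_censored & (obs_time <= g)]):
            n_risk = np.sum(obs_time >= t)
            n_cens = np.sum((obs_time == t) & is_censored)
            prod *= 1.0 - n_cens / n_risk
        out[j] = prod
    return out

def state_probability(obs_time, is_censored, in_state, grid):
    # IPCW estimate of P(alive and in state at t): each subject still
    # under observation at t is weighted by 1 / G(t). in_state[i, j] is
    # the observed alive-and-in-state indicator for subject i at grid[j].
    G = np.clip(censoring_survival(obs_time, is_censored, grid), 1e-8, None)
    under_obs = obs_time[:, None] > grid[None, :]
    return (in_state * under_obs / G).mean(axis=0)
```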

2.
Process monitoring in the presence of data correlation is one of the most discussed issues in the statistical process control literature of the past decade. However, retrospective analysis in the presence of data correlation with various common-cause sigma estimators has received little attention in the literature. Maragah et al. (1992), in an early paper on retrospective analysis in the presence of data correlation, address only a single common-cause sigma estimator. This paper studies the effect of data correlation on the retrospective X-chart with various common-cause sigma estimates in the stable period of an AR(1) process. The study aims to identify standard deviation statistics that are robust to data correlation. The paper also discusses the robustness of common-cause sigma estimates for monitoring data following other time series models, namely ARMA(1,1) and AR(p), and examines the bias characteristics of the robust standard deviation estimates for these models. It further studies the performance of the retrospective X-chart on forecast residuals from various forecasting methods for the AR(1) process. These studies were carried out by simulating the stable periods of the AR(1) and AR(2) processes and the stable and invertible period of the ARMA(1,1) process. The average number of false alarms has been used as the measure of performance, and the results of the simulation studies are discussed.
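As a quick numerical illustration of the false-alarm effect the paper studies, the sketch below (ours, not the authors' code; all parameter values are arbitrary) simulates the stable period of an AR(1) process and compares the false-alarm rate of a retrospective individuals chart under two common-cause sigma estimators: the mean moving range and the overall sample standard deviation.

```python
import numpy as np

rng = np.random.default_rng(1)

def false_alarm_rate(phi, n=100, reps=2000, estimator="mr"):
    # Simulate the stable period of an AR(1) process and count points
    # outside the retrospective 3-sigma limits of an individuals chart.
    alarms = 0
    for _ in range(reps):
        e = rng.standard_normal(n)
        x = np.empty(n)
        x[0] = e[0] / np.sqrt(1.0 - phi ** 2)      # stationary start
        for t in range(1, n):
            x[t] = phi * x[t - 1] + e[t]
        if estimator == "mr":                      # mean moving range / d2
            sigma = np.mean(np.abs(np.diff(x))) / 1.128
        else:                                      # overall sample std. dev.
            sigma = x.std(ddof=1)
        xbar = x.mean()
        alarms += np.sum(np.abs(x - xbar) > 3.0 * sigma)
    return alarms / (reps * n)

for phi in (0.0, 0.4, 0.8):
    print(phi, false_alarm_rate(phi, estimator="mr"),
          false_alarm_rate(phi, estimator="s"))
```

As the correlation phi grows, the moving-range estimator shrinks relative to the true process spread and the false-alarm rate inflates, which is the behaviour the paper quantifies across estimators.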

3.
Stochastic Models, 2013, 29(2–3): 799–820
ABSTRACT

We investigate the tail probability of the queue length of the low-priority class for a discrete-time priority BMAP/PH/1 queue that consists of two priority classes, with BMAP (Batch Markovian Arrival Process) arrivals of the high-priority class and MAP (Markovian Arrival Process) arrivals of the low-priority class. A sufficient condition under which this tail probability has the asymptotically geometric property is derived. A method is designed to compute the asymptotic decay rate if the asymptotically geometric property holds. For the case when the BMAP for the high-priority class is the superposition of a number of MAPs, though the parameter matrices representing the BMAP are huge in dimension, the sufficient condition is numerically easy to verify and the asymptotic decay rate can be computed efficiently.
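A minimal numerical illustration of the asymptotically geometric property: when P(L = n) ≈ c·η^n for large n, the decay rate η can be read off the ratio of successive tail probabilities or the slope of the log tail. The M/M/1 distribution below is only a stand-in; the paper computes η from the BMAP/PH/1 structure.

```python
import numpy as np

rho = 0.7                                 # true decay rate for M/M/1
n = np.arange(60)
p = (1 - rho) * rho ** n                  # queue-length distribution
print((p[1:] / p[:-1])[-3:])              # successive ratios -> 0.7
print(np.polyfit(n[30:], np.log(p[30:]), 1)[0])  # log-slope -> log(0.7)
```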

4.
Point processes are the stochastic models most suitable for describing physical phenomena that appear at irregularly spaced times, such as earthquakes. These processes are uniquely characterized by their conditional intensity, that is, by the probability that an event will occur in the infinitesimal interval (t, t+Δt), given the history of the process up to t. The seismic phenomenon displays different behaviours on different time and size scales; in particular, the occurrence of destructive shocks over some centuries in a seismogenic region may be explained by the elastic rebound theory. This theory has inspired the so-called stress release models: their conditional intensity translates the idea that an earthquake produces a sudden decrease in the amount of strain accumulated gradually over time along a fault, and the subsequent event occurs when the stress exceeds the strength of the medium. This study has a double objective: the formulation of these models in the Bayesian framework, and the assignment to each event of a mark, that is, its magnitude, modelled through a distribution that depends at time t on the stress level accumulated up to that instant. The resulting parameter space is constrained and dependent on the data, complicating Bayesian computation and analysis. We have resorted to Monte Carlo methods to solve these problems.
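To make the stress release intensity concrete, here is a hedged simulation sketch using Ogata-style thinning with the commonly used form λ(t) = exp{a + b(ρt − S(t))}, where S(t) is the accumulated stress release. The parameter values and the exponential stress drops are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ogata-style thinning for a stress release process with intensity
# lam(t) = exp(a + b * (rho * t - released)); the bound on each window
# [t, t + w] assumes no intervening event (intensity only rises then).
a, b, rho, horizon, w = -1.0, 0.5, 1.0, 50.0, 5.0
events, t, released = [], 0.0, 0.0
while t < horizon:
    lam_max = np.exp(a + b * (rho * (t + w) - released))
    dt = rng.exponential(1.0 / lam_max)
    if dt > w:            # no candidate in this window; slide forward
        t += w
        continue
    t += dt
    lam_t = np.exp(a + b * (rho * t - released))
    if rng.uniform() < lam_t / lam_max:      # accept with prob lam/lam_max
        events.append(t)
        released += rng.exponential(1.0)     # illustrative stress drop
print(len(events), "events by time", horizon)
```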

5.
This paper considers a linear regression model with regression parameter vector β. The parameter of interest is θ = a^T β, where a is specified. When, as a first step, a data-based variable selection procedure (e.g. minimum Akaike information criterion) is used to select a model, it is common statistical practice to then carry out inference about θ, using the same data, based on the (false) assumption that the selected model had been provided a priori. The paper considers a confidence interval for θ with nominal coverage 1 − α constructed on this (false) assumption, and calls this the naive 1 − α confidence interval. The minimum coverage probability of this confidence interval can be calculated for simple variable selection procedures involving only a single variable. However, the kinds of variable selection procedures used in practice are typically much more complicated. For the real-life data presented in this paper, there are 20 variables, each of which is to be either included or not, leading to 2^20 different models. The coverage probability at any given value of the parameters provides an upper bound on the minimum coverage probability of the naive confidence interval. This paper derives a new Monte Carlo simulation estimator of the coverage probability, which uses conditioning for variance reduction. For these real-life data, the gain in efficiency of this Monte Carlo simulation due to conditioning ranged from 2 to 6. The paper also presents a simple one-dimensional search strategy for parameter values at which the coverage probability is relatively small. For these real-life data, this search leads to parameter values for which the coverage probability of the naive 0.95 confidence interval is 0.79 for variable selection using the Akaike information criterion and 0.70 for variable selection using the Bayes information criterion, showing that these confidence intervals are completely inadequate.
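The following toy Monte Carlo (our sketch, with a two-regressor design in place of the paper's 20 variables, and without the conditioning variance-reduction device) shows how the naive confidence interval can undercover after AIC-based selection.

```python
import numpy as np

rng = np.random.default_rng(3)

def naive_coverage(beta2, n=50, reps=4000, z=1.959964):
    # Toy version of the coverage experiment: select between the models
    # {x1} and {x1, x2} by AIC, then report the naive 95% CI for the
    # coefficient of x1 as if the selected model were fixed a priori.
    x1 = rng.standard_normal(n)
    x2 = 0.7 * x1 + 0.5 * rng.standard_normal(n)   # correlated regressors
    hits = 0
    for _ in range(reps):
        y = 1.0 * x1 + beta2 * x2 + rng.standard_normal(n)
        fits = []
        for X in (x1[:, None], np.column_stack([x1, x2])):
            b, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = np.sum((y - X @ b) ** 2)
            aic = n * np.log(rss / n) + 2 * X.shape[1]
            se = np.sqrt(rss / (n - X.shape[1]) * np.linalg.inv(X.T @ X)[0, 0])
            fits.append((aic, abs(b[0] - 1.0) <= z * se))
        hits += min(fits)[1]              # CI from the AIC-selected model
    return hits / reps

for b2 in (0.0, 0.3, 0.6):
    print(b2, naive_coverage(b2))
```

For intermediate values of the omitted coefficient, the selected small model carries omitted-variable bias and the naive interval's coverage drops below its nominal 0.95, mirroring the paper's 0.79 and 0.70 findings.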

6.
This study demonstrates that the location parameter of an exponential distribution significantly influences normalization of the exponential. The Kullback–Leibler information number is shown to be an appropriate index for measuring data normality in the presence of a location parameter. Control charts based on probability limits and on transformation are compared for known and estimated location parameters. The probabilities of type II error (β-risks) and the average run length (ARL) without a location parameter indicate that an individual chart using a power transformation detects an out-of-control signal about as well as one using probability limits. The β-risks and ARL of control charts with an estimated location parameter deviate significantly from their theoretical values when a small sample size of n ≤ 50 is used. Therefore, without taking into account the existence of a location parameter, the control charts detect out-of-control signals inaccurately, regardless of whether a power or natural logarithmic transformation is used. The effects of a location parameter should be eliminated before transformation. Two examples are presented to illustrate these findings.
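A small sketch of the transformation issue: the power transform x^0.2777 often used to normalize exponential data (assumed here; the paper may use a different exponent) works only after the location parameter has been removed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Exponential data shifted by a location (threshold) parameter mu: the
# power transform normalizes the exponential only once mu is subtracted.
mu, scale, n = 5.0, 1.0, 200
x = mu + rng.exponential(scale, size=n)

for label, y in [("raw power", x ** 0.2777),
                 ("shifted power", (x - mu) ** 0.2777)]:
    stat, p = stats.shapiro(y)
    print(f"{label:14s} Shapiro-Wilk p = {p:.4f}")
```

The raw transform typically fails the normality test badly, while the location-corrected transform passes, which is the practical point of the abstract.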

7.
Abstract

In this paper we suppose that the intensity parameter of the Pólya-Aeppli process is a function of time t and call the resulting process a non-homogeneous Pólya-Aeppli process (NHPAP). The NHPAP can be represented as a compound non-homogeneous Poisson process with a geometric compounding distribution, as well as a pure birth process. We give two definitions of this process and show their equivalence. We also derive some interesting properties of the NHPAP and use simulation to illustrate the process for particular intensity functions. In addition, we introduce the standard risk model based on the NHPAP, analyze the ruin probability for this model and include an example of the process under exponentially distributed claims.
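A hedged simulation sketch of the NHPAP via its compound non-homogeneous Poisson representation: Poisson event times generated by thinning, with geometric batch sizes of parameter ρ. The intensity function and all numerical values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_nhpap(lam, lam_max, rho, horizon):
    # Thinning for the non-homogeneous Poisson event times; each event
    # contributes a geometric batch (P(k) = (1 - rho) * rho**(k - 1)).
    t, total, times, counts = 0.0, 0, [], []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > horizon:
            break
        if rng.uniform() < lam(t) / lam_max:
            total += rng.geometric(1.0 - rho)   # batch size >= 1
            times.append(t)
            counts.append(total)
    return np.array(times), np.array(counts)

# Illustrative intensity lam(t) = 2 + sin(t), bounded above by 3.
times, counts = simulate_nhpap(lambda t: 2.0 + np.sin(t), 3.0, 0.4, 20.0)
print(len(times), "arrival epochs; N(20) =", counts[-1] if len(counts) else 0)
```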

8.
For a confidence interval (L(X), U(X)) of a parameter θ in one-parameter discrete distributions, the coverage probability is a variable function of θ. The confidence coefficient is the infimum of the coverage probabilities, inf_θ P_θ(θ ∈ (L(X), U(X))). Since we do not know at which point in the parameter space the infimum coverage probability occurs, exact confidence coefficients are unknown. Besides the confidence coefficient, evaluation of a confidence interval can be based on the average coverage probability. Usually the exact average coverage probability is also unknown, and it has been approximated by taking the mean of the coverage probabilities at some randomly chosen points in the parameter space. In this article, methodologies for computing the exact average coverage probabilities as well as the exact confidence coefficients of confidence intervals for one-parameter discrete distributions are proposed. With these methodologies, both exact values can be derived.
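To see why exact confidence coefficients are hard to pin down, the sketch below evaluates the coverage of the Wald binomial interval on a fine grid of p. A grid scan only approximates the infimum, which is precisely the gap the article's exact methodology closes.

```python
import numpy as np
from scipy import stats

# Coverage probability of the Wald interval for a binomial proportion,
# scanned over a fine grid of p; the confidence coefficient is the
# infimum over p, which a grid can only approximate from above.
n, z = 20, 1.959964
x = np.arange(n + 1)
phat = x / n
half = z * np.sqrt(phat * (1 - phat) / n)
lo, hi = phat - half, phat + half

grid = np.linspace(0.001, 0.999, 9999)
cover = [stats.binom.pmf(x[(lo <= p) & (p <= hi)], n, p).sum() for p in grid]
print("approximate confidence coefficient:", min(cover))
```

The Wald interval's notoriously poor behaviour near the boundary makes the scanned minimum far below the nominal 0.95, and whether the grid actually catches the infimum is exactly the question at issue.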

9.
In this paper, intervention time series models were developed to examine the effectiveness of the voluntary counselling and testing (VCT) programme in the northern and southern sectors of Ghana. Pre-intervention data on reported HIV cases in the northern and southern sectors were first modelled as Box–Jenkins univariate time series. Second, the models adopted from the pre-intervention data were extended to include the intervention variable. The intervention variable was coded as zero for the pre-intervention period (1 January 1996–31 December 2002) and one for the post-intervention period (1 January 2003–31 December 2007). The models developed were applied to the entire data for the two sectors to estimate the effect of the VCT programme. Our findings indicate that the VCT programme was associated with the detection of 20 and 40 new HIV infections per 100,000 persons per month in the northern and southern sectors, respectively (p?).
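As a stand-in for the Box–Jenkins intervention models (the sketch below fits a simple AR(1)-with-step regression by OLS rather than a full ARIMA intervention model), this illustrates how a step-coded intervention variable enters the fit; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy intervention analysis: fit y_t = c + phi*y_{t-1} + w*step_t + e_t
# by OLS, where step_t switches from 0 to 1 at the intervention date.
n, t0 = 144, 84                      # 12 years of monthly data, break at t0
step = (np.arange(n) >= t0).astype(float)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 2.0 + 0.6 * y[t - 1] + 1.5 * step[t] + rng.standard_normal()

X = np.column_stack([np.ones(n - 1), y[:-1], step[1:]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
print("intercept, AR(1), intervention effect:", coef.round(3))
```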

10.
In this article, an attempt is made to settle the question of the existence of an unbiased estimator of the key parameter p of the quasi-binomial distributions of Type I (QBD I) and Type II (QBD II), with or without knowledge of the other parameter φ appearing in the expressions for the probability functions of the QBDs. This is studied with reference to a single observation, a random sample of finite size m, and samples drawn by suitably defined sequential sampling rules.

11.
This article considers fixed effects (FE) estimation for linear panel data models under possible model misspecification when both the number of individuals, n, and the number of time periods, T, are large. We first clarify the probability limit of the FE estimator and argue that this probability limit can be regarded as a pseudo-true parameter. We then establish the asymptotic distributional properties of the FE estimator around the pseudo-true parameter when n and T jointly go to infinity. Notably, we show that the FE estimator suffers from the incidental parameters bias, of which the top order is O(T^{-1}), and that even after the incidental parameters bias is completely removed, the rate of convergence of the FE estimator depends on the degree of model misspecification and is either (nT)^{-1/2} or n^{-1/2}. Second, we establish asymptotically valid inference on the (pseudo-true) parameter. Specifically, we derive the asymptotic properties of the clustered covariance matrix (CCM) estimator and the cross-section bootstrap, and show that they are robust to model misspecification. This establishes a rigorous theoretical ground for the use of the CCM estimator and the cross-section bootstrap when model misspecification and the incidental parameters bias (in the coefficient estimate) are present. We conduct Monte Carlo simulations to evaluate the finite sample performance of the estimators and inference methods, together with a simple application to unemployment dynamics in the U.S.
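A minimal sketch of the FE (within) estimator with a clustered covariance matrix, clustering by individual; one regressor, no finite-sample correction, and a purely illustrative data-generating process.

```python
import numpy as np

rng = np.random.default_rng(7)

# Within (FE) estimator and the cluster-robust (CCM) standard error,
# clustering by individual.
n, T, beta = 200, 20, 1.0
alpha = rng.standard_normal(n)                      # individual effects
x = rng.standard_normal((n, T)) + alpha[:, None]
y = alpha[:, None] + beta * x + rng.standard_normal((n, T))

xd = x - x.mean(axis=1, keepdims=True)              # within transformation
yd = y - y.mean(axis=1, keepdims=True)
b = (xd * yd).sum() / (xd ** 2).sum()               # FE estimate

u = yd - b * xd                                     # FE residuals
score = (xd * u).sum(axis=1)                        # per-cluster scores
var_b = (score ** 2).sum() / (xd ** 2).sum() ** 2   # CCM sandwich variance
print(f"beta_hat = {b:.4f}, clustered s.e. = {np.sqrt(var_b):.4f}")
```

A cross-section bootstrap, as studied in the article, would resample whole individuals (rows of x and y) with replacement and re-run the same within regression on each draw.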

12.
In Statistical Process Control (SPC) there is a need to model the run-length distribution of a Q-chart that monitors the process mean when measurements are from an exponential distribution with an unknown parameter. To develop exact expressions for the run-length probabilities, the joint distribution of the charting statistics is needed. This gives rise to a new distribution that can be regarded as a generalized multivariate beta distribution. An overview of the problem statement as identified in the field of SPC is given and the newly developed generalized multivariate beta distribution is proposed. Statistical properties of this distribution are studied, and the effect of the parameters of this generalized multivariate beta distribution on the correlation between two variables is discussed.

13.
A non-Bayesian predictive approach for statistical calibration is introduced. It is based on particularizing to the calibration setting the general definition of a non-Bayesian (or frequentist) predictive probability density proposed by Harris [Predictive fit for natural exponential families, Biometrika 76 (1989), pp. 675–684]. The new method is elaborated in detail in the case of Gaussian linear univariate calibration. Through asymptotic analysis and simulation results with moderate sample sizes, it is shown that the non-Bayesian predictive estimator of the unknown parameter of interest in calibration (commonly, a substance concentration) compares favourably with previous estimators such as the classical and inverse estimators, especially for extrapolation problems. A further advantage of the non-Bayesian predictive approach is that it provides not only point estimates but also a predictive likelihood function that allows the researcher to explore the plausibility of any possible parameter value, which is also briefly illustrated. Furthermore, the introduced approach offers a general framework that can be applied to calibration on the basis of any parametric statistical model, making it potentially useful for nonlinear and non-Gaussian calibration problems.
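For context, the sketch below computes the two baseline estimators the predictive estimator is compared against, the classical and inverse estimators, in Gaussian linear univariate calibration; the predictive estimator itself is more involved and is not reproduced here. All numerical values are invented.

```python
import numpy as np

rng = np.random.default_rng(8)

# Calibrate y = a + b*x from training data, then estimate the unknown
# x0 behind a new response y0 by the classical and inverse estimators.
a, b, sigma, n = 1.0, 2.0, 0.3, 30
x = np.linspace(0, 10, n)
y = a + b * x + rng.normal(0, sigma, n)

b1, a1 = np.polyfit(x, y, 1)      # classical: regress y on x, invert
d1, c1 = np.polyfit(y, x, 1)      # inverse: regress x on y directly

x0 = 12.0                                          # extrapolation point
y0 = a + b * x0 + rng.normal(0, sigma)
print("classical:", (y0 - a1) / b1, " inverse:", c1 + d1 * y0)
```

At extrapolation points like this one, the inverse estimator is pulled toward the centre of the training data, which is the regime where the paper reports the predictive estimator doing better.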

14.
Kh. Fazli, Statistics, 2013, 47(5): 407–428
We observe a realization of an inhomogeneous Poisson process whose intensity function depends on an unknown multidimensional parameter. We consider the asymptotic behaviour of the Rao score test for a simple null hypothesis against the multilateral alternative. By using an Edgeworth-type expansion (under the null hypothesis) for a vector of stochastic integrals with respect to the Poisson process, we refine the classic threshold of the test (obtained by the central limit theorem), which improves the type I error probability. The expansion also allows us to describe the power of the test under local alternatives, i.e. a sequence of alternatives converging to the null hypothesis at a certain rate. The rates can differ across components of the parameter.

15.
Stochastic Models, 2013, 29(4): 541–554
In this paper, we show that the discrete GI/G/1 system can be analysed as a QBD process with infinite blocks. Most importantly, we show that the matrix-geometric method can be used to analyse this general queueing system, including establishing its stability criterion and obtaining the explicit stationary probability and waiting time distributions. This also dispels the unwritten myth that the matrix-geometric method is limited to cases with at least one Markov-based characterizing parameter, i.e. either the interarrival or the service times, in the case of queueing systems.
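A generic matrix-geometric computation for a discrete-time QBD, shown on a small illustrative block structure (not the infinite blocks the paper constructs for GI/G/1): solve R = A0 + R·A1 + R²·A2 by fixed-point iteration, after which the stationary vectors satisfy π_{k+1} = π_k R.

```python
import numpy as np

# Small illustrative discrete-time QBD blocks (rows of A0+A1+A2 sum to 1
# and the mean drift is downward, so the chain is stable).
A0 = np.array([[0.2, 0.1], [0.0, 0.2]])   # up one level
A1 = np.array([[0.3, 0.1], [0.2, 0.3]])   # same level
A2 = np.array([[0.2, 0.1], [0.1, 0.2]])   # down one level

R = np.zeros_like(A0)
for _ in range(500):                      # fixed-point iteration for R
    R = A0 + R @ A1 + R @ R @ A2
print("R =\n", R.round(4))
print("spectral radius:", max(abs(np.linalg.eigvals(R))))
```

The spectral radius of R below 1 confirms stability, playing the role of the stability criterion established in the paper.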

16.
Abstract. The modelling process in Bayesian statistics constitutes the fundamental stage of the analysis, since depending on the chosen probability laws the inferences may vary considerably. This is particularly true when conflicts arise between two or more sources of information. For instance, inference in the presence of an outlier (which conflicts with the information provided by the other observations) can be highly dependent on the assumed sampling distribution. When heavy-tailed (e.g. t) distributions are used, outliers may be rejected, whereas this kind of robust inference is not available when light-tailed (e.g. normal) distributions are used. A long literature has established sufficient conditions on location-parameter models to resolve conflict in various ways. In this work, we consider a location–scale parameter structure, which is more complex than the single-parameter cases because conflicts can arise between three sources of information, namely the likelihood, the prior distribution for the location parameter and the prior for the scale parameter. We establish sufficient conditions on the distributions in a location–scale model to resolve conflicts in different ways as a single observation tends to infinity. In addition, for each case, we explicitly give the limiting posterior distributions as the conflict becomes more extreme.
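A quick numerical illustration of conflict resolution in the location case (the paper's location–scale setting is richer): under a flat prior, a Student-t likelihood discounts a single outlier while a normal likelihood does not. The data and degrees of freedom are arbitrary.

```python
import numpy as np
from scipy import stats

# Posterior mean for a location parameter on a grid, flat prior:
# the normal likelihood chases the outlier, the t likelihood rejects it.
data = np.array([0.1, -0.3, 0.2, 0.0, 12.0])   # last point is the outlier
theta = np.linspace(-5, 15, 4001)

def post_mean(logpdf):
    loglik = sum(logpdf(x - theta) for x in data)
    w = np.exp(loglik - loglik.max())           # unnormalized posterior
    return (theta * w).sum() / w.sum()

print("normal likelihood:", post_mean(stats.norm.logpdf))
print("t_3 likelihood   :", post_mean(lambda u: stats.t.logpdf(u, df=3)))
```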

17.
By representing fair betting odds according to one or more pairs of confidence set estimators, dual parameter distributions called confidence posteriors secure the coherence of actions without any prior distribution. This theory reduces to the maximization of expected utility when the pair of posteriors is induced by an exact or approximate confidence set estimator or when a reduction rule is applied to the pair. Unlike the p-value, the confidence posterior probability of an interval hypothesis is suitable as an estimator of the indicator of hypothesis truth, since it converges to 1 if the hypothesis is true and to 0 otherwise.
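A minimal sketch, assuming the textbook case where the confidence distribution for a normal mean induced by the z-interval is N(x̄, se²): the confidence posterior probability of an interval hypothesis is then a difference of CDF values. The data and hypothesis bounds are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
x = rng.normal(0.3, 1.0, size=40)
xbar, se = x.mean(), x.std(ddof=1) / np.sqrt(len(x))

lo, hi = -0.1, 0.1                       # interval null hypothesis
p_hyp = stats.norm.cdf(hi, xbar, se) - stats.norm.cdf(lo, xbar, se)
print(f"confidence posterior probability of [{lo}, {hi}]: {p_hyp:.3f}")
```

As the sample size grows, this probability goes to 1 when the true mean lies in [lo, hi] and to 0 otherwise, which is the consistency property the abstract contrasts with the p-value.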

18.
Multinomial logit (also termed multi-logit) models permit the analysis of the statistical relation between a categorical response variable and a set of explanatory variables (called covariates or regressors). Although multinomial logit is widely used in both the social and economic sciences, the interpretation of regression coefficients may be tricky, as the effect of covariates on the probability distribution of the response variable is nonconstant and difficult to quantify. The ternary plots illustrated in this article aim at facilitating the interpretation of regression coefficients and permit the effect of covariates (considered singly or jointly) on the probability distribution of the dependent variable to be quantified. Ternary plots can be drawn for both ordered and unordered categorical dependent variables when the number of possible outcomes equals three (trinomial response variable); these plots make it possible not only to represent the covariate effects over the whole parameter space of the dependent variable but also to compare the covariate effects for any given individual profile. The method is illustrated and discussed through the analysis of a dataset concerning the transition of master's graduates of the University of Trento (Italy) from university to employment.
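A hedged sketch of the ternary-plot idea for a trinomial logit: map predicted probability triples to simplex coordinates and trace the path traced out as one covariate varies. The coefficients and coordinate mapping below are generic illustrations, not the article's dataset.

```python
import numpy as np
import matplotlib.pyplot as plt

def trinomial_probs(x, b1=(0.0, 1.2), b2=(0.5, -0.8)):
    # Baseline-category logit with made-up coefficients b1, b2.
    eta = np.array([0.0, b1[0] + b1[1] * x, b2[0] + b2[1] * x])
    e = np.exp(eta - eta.max())
    return e / e.sum()

def ternary_xy(p):
    # Map (p1, p2, p3) on the simplex to 2-D triangle coordinates.
    return p[1] + 0.5 * p[2], np.sqrt(3) / 2 * p[2]

pts = np.array([ternary_xy(trinomial_probs(x)) for x in np.linspace(-3, 3, 61)])
corners = np.array([[0, 0], [1, 0], [0.5, np.sqrt(3) / 2], [0, 0]])
plt.plot(corners[:, 0], corners[:, 1], "k-")     # simplex boundary
plt.plot(pts[:, 0], pts[:, 1], "b.-")            # covariate-effect path
plt.axis("equal")
plt.axis("off")
plt.show()
```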

19.
We propose a new procedure for the multinomial selection problem to solve a real problem of any modern air force: the development of better air-to-air tactics for Beyond Visual Range air-to-air combat that maximize aircraft survival probability H(θ, ω) as well as enemy aircraft downing probability G(θ, ω). In this study, using a low-resolution simulator with generic parameters for the aircraft and missiles, we increased the average success rates for H(θ, ω) and G(θ, ω) from 16.69% and 16.23%, respectively, to 76.85% and 79.30%. We can assert, with a low probability of being wrong, that the selected tactic has a greater probability of yielding higher success rates in both H(θ, ω) and G(θ, ω) than any other simulated tactic.

20.
Dimitrov and Khalil (1992) introduced a class of new probability distributions for modelling environmental evolution with periodic behaviour. One of the key parameters in these distributions is α, the probability that the event being studied does not occur. In that article the authors derive an estimator for this parameter under a series of conditions. In this article it is shown that the estimator is valid under more general conditions, i.e. some of the assumptions are not necessary. Under the assumption that the elapsed time, measured from the starting point of a period until the first occurrence time of the event given that the event occurred in that cycle, is related to α, an approximate maximum likelihood estimator of α is proposed. The large sample properties of the estimator are discussed, and a Monte Carlo study is carried out to support the theoretical results.
