Similar Articles
1.
Consider an inhomogeneous Poisson process X on [0, T] whose unknown intensity function "switches" from a lower function g* to an upper function h* at some unknown point θ* that has to be identified. We consider two known continuous functions g and h such that g*(t) ≤ g(t) < h(t) ≤ h*(t) for 0 ≤ t ≤ T. We describe the large-sample behavior of the generalized likelihood ratio and Wald tests constructed on the basis of a misspecified model. The power functions are studied under local alternatives and compared numerically by simulation. We also establish the following robustness result: the Type I error rate is preserved even though a misspecified model is used to construct the tests.
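As an illustration (not the authors' code), the switching-intensity process in this abstract can be simulated by Lewis–Shedler thinning. The constant intensities, the change point, and the horizon below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_switching_poisson(g, h, theta, T, lam_max):
    """Simulate an inhomogeneous Poisson process on [0, T] whose intensity
    is g(t) before the change point theta and h(t) after it, by thinning a
    homogeneous process with rate lam_max >= sup intensity."""
    n = rng.poisson(lam_max * T)
    cand = np.sort(rng.uniform(0.0, T, n))      # candidate event times
    intensity = np.where(cand < theta, g(cand), h(cand))
    keep = rng.uniform(0.0, lam_max, n) < intensity
    return cand[keep]

g = lambda t: 1.0 + 0 * t    # lower intensity before the change point (assumed)
h = lambda t: 4.0 + 0 * t    # upper intensity after it (assumed)
X = simulate_switching_poisson(g, h, theta=5.0, T=10.0, lam_max=4.0)

# The empirical event rate after the change point should exceed the rate before.
rate_before = np.sum(X < 5.0) / 5.0
rate_after = np.sum(X >= 5.0) / 5.0
print(rate_before, rate_after)
```

Estimating θ* would then proceed by maximizing the (possibly misspecified) likelihood over candidate change points.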

2.
The generalized AR(1) process y_t = a_t y_{t-1} + v_t is considered, where the parameter a_t itself follows the AR(1) process a_t = G a_{t-1} + w_t. Assuming that v_t and w_t are Gaussian and independent, the first six exact predictors for future values of y_t are derived. These exact predictors are compared with Box–Jenkins-type approximations. MACSYMA, a computer algebra program, is used in the derivation of the predictors.
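A minimal simulation sketch of this random-coefficient AR(1) model, assuming illustrative parameter values. The one-step predictor shown is the elementary conditional expectation given the full state, not one of the six exact predictors derived in the paper (those condition on the observed y's only).

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_rc_ar1(n, G, sigma_v=1.0, sigma_w=0.1):
    """Simulate y_t = a_t * y_{t-1} + v_t with a_t = G * a_{t-1} + w_t,
    where v_t and w_t are independent Gaussian white noise."""
    y = np.zeros(n)
    a = np.zeros(n)
    for t in range(1, n):
        a[t] = G * a[t - 1] + sigma_w * rng.standard_normal()
        y[t] = a[t] * y[t - 1] + sigma_v * rng.standard_normal()
    return y, a

y, a = simulate_rc_ar1(500, G=0.9)

# If the full state (y_t, a_t) were observed, E[y_{t+1} | y_t, a_t]
# = G * a_t * y_t, since w_{t+1} and v_{t+1} are mean zero and
# independent of the past.
pred = 0.9 * a[:-1] * y[:-1]
resid = y[1:] - pred
print(np.mean(resid))   # prediction residuals should be near mean zero
```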

3.
In partly linear models, the dependence of the response y on (x^T, t) is modeled through the relationship y = x^T β + g(t) + ε, where ε is independent of (x^T, t). We are interested in developing an estimation procedure that combines the flexibility of partly linear models, studied by several authors, while allowing some variables to belong to a non-Euclidean space. The motivating application of this paper deals with explaining atmospheric SO2 pollution incidents using these models when some of the predictive variables lie on a cylinder. In this paper, estimators of β and g are constructed when the explanatory variables t take values on a Riemannian manifold, and the asymptotic properties of the proposed estimators are obtained under suitable conditions. We illustrate this estimation approach using an environmental data set and explore the performance of the estimators through a simulation study.

4.
This article considers the problem of choosing between two treatments that have binary outcomes with unknown success probabilities p1 and p2. The choice is based upon the information provided by two observations X1 ~ B(n1, p1) and X2 ~ B(n2, p2) from independent binomial distributions. Standard approaches to this problem utilize basic statistical inference methodologies such as hypothesis tests and confidence intervals for the difference p1 − p2 of the success probabilities. In this article, however, the analysis of win-probabilities is considered. If X1* represents a potential future observation from Treatment 1 while X2* represents a potential future observation from Treatment 2, win-probabilities are defined in terms of comparisons of X1* and X2*. These win-probabilities provide a direct assessment of the relative advantages and disadvantages of choosing either treatment for one future application, and their interpretation can be combined with other factors such as costs, side-effects, and the availability of the two treatments. It is shown how confidence intervals for the win-probabilities can be constructed, and examples of their use are provided. Computer code implementing this new methodology is available from the authors.
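A plug-in point estimate of one such win-probability is easy to compute. Here I take the win-probability to be P(X1* > X2*), an assumption for illustration; the article may treat ties or the reverse comparison separately, and it constructs confidence intervals rather than the point value shown here.

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability mass function P(X = k) for X ~ B(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def win_probability(n1, p1, n2, p2):
    """P(X1* > X2*) for independent X1* ~ B(n1, p1), X2* ~ B(n2, p2)."""
    return sum(
        binom_pmf(k1, n1, p1) * binom_pmf(k2, n2, p2)
        for k1 in range(n1 + 1)
        for k2 in range(n2 + 1)
        if k1 > k2
    )

# With equal success probabilities the two treatments are symmetric, so the
# win-probability is below 1/2 (ties absorb the remaining mass).
print(win_probability(10, 0.5, 10, 0.5))
# A clearly better Treatment 1 gives a win-probability near one.
print(win_probability(5, 0.9, 5, 0.1))
```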

5.
The usual confidence set for p (p ≥ 3) coefficients of a linear model is known to be dominated by the James–Stein confidence sets under the assumption of spherically symmetric errors with known variance (Hwang and Chen 1986). For the same confidence-set problem in the unknown-variance case, one naturally replaces the unknown variance by an estimator. For the normal case, many previous studies have shown numerically that the resulting James–Stein confidence sets dominate the resulting usual confidence sets, i.e., the F confidence sets. In this paper we provide a further asymptotic justification, and we find the same advantage of the James–Stein confidence sets for normal errors as well as spherically symmetric errors.

6.
Consider the model y_t = ρ_n y_{t-1} + u_t, t = 1, …, n, with ρ_n = 1 + c/k_n and u_t = σ1 ε_t I{t ≤ k0} + σ2 ε_t I{t > k0}, where c is a non-zero constant, σ1 and σ2 are two positive constants, I{·} denotes the indicator function, k_n is a sequence of positive constants increasing to ∞ such that k_n = o(n), and {ε_t, t ≥ 1} is a sequence of i.i.d. random variables with mean zero and variance one. We derive the limiting distributions of the least squares estimator of ρ_n and of its t-ratio for this model. Some pivotal limit theorems are also obtained. Moreover, Monte Carlo experiments are conducted to examine the estimators in finite samples; our theoretical results are supported by these experiments.
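A small Monte Carlo sketch of this mildly explosive model with an innovation-variance break, in the spirit of the experiments mentioned in the abstract. All parameter values (c, k_n, k0, σ1, σ2) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def ls_rho(n, c, kn, k0, s1, s2):
    """Simulate y_t = rho_n * y_{t-1} + u_t with rho_n = 1 + c/kn and an
    innovation standard deviation that switches from s1 to s2 at k0,
    then return the least squares estimate of rho_n."""
    rho = 1.0 + c / kn
    eps = rng.standard_normal(n)
    sigma = np.where(np.arange(1, n + 1) <= k0, s1, s2)
    u = sigma * eps
    y = np.zeros(n + 1)
    for t in range(1, n + 1):
        y[t] = rho * y[t - 1] + u[t - 1]
    return np.sum(y[:-1] * y[1:]) / np.sum(y[:-1] ** 2)

n, kn = 2000, 200    # kn grows with n but kn = o(n)
rho_hat = ls_rho(n, c=1.0, kn=kn, k0=n // 2, s1=1.0, s2=2.0)
print(rho_hat, 1.0 + 1.0 / kn)   # LS estimate vs. true rho_n
```

The explosive component makes the least squares estimator converge very quickly, so a single replication already lands close to ρ_n.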

7.
We consider the simple linear calibration problem where only the response y of the regression line y = β0 + β1 t is observed with error; the experimental conditions t are observed without error. For the errors of the observations y we assume that there may be gross errors producing outlying observations. This situation can be modeled by a conditionally contaminated regression model. In this model the classical calibration estimator based on the least squares estimator has an unbounded asymptotic bias. We therefore introduce calibration estimators based on robust one-step M-estimators, which have bounded asymptotic bias. For this class of estimators we discuss two problems: the optimal estimators and their corresponding optimal designs. We derive the locally optimal solutions and show that the maximin efficient designs for non-robust and robust estimation coincide.

8.
Stochastic Models, 2013, 29(1):139–157
We consider the one-sided and the two-sided first-exit problem for a compound Poisson process with linear deterministic decrease between positive and negative jumps. This process (X(t))_{t≥0} occurs as the workload process of a single-server queueing system with random workload removal, which we denote by M/G^u/G^d/1, where G^u (G^d) stands for the distribution of the upward (downward) jumps; other applications are to cash management, dams, and several related fields. Under various conditions on G^u and G^d (assuming, e.g., that one of them is hyperexponential, Erlang, or Coxian), we derive the joint distribution of τ_y = inf{t ≥ 0 | X(t) ∉ (0, y)}, y > 0, and X(τ_y), as well as that of T = inf{t ≥ 0 | X(t) ≤ 0} and X(T). We also determine the distribution of sup{X(t) | 0 ≤ t ≤ T}.
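The one-sided exit time T can be approximated by direct simulation when closed forms are unavailable. This sketch assumes exponential jumps in both directions and an initial workload of 5; all rates and the truncation horizon are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def first_exit_time(x0, lam, p_up, mean_up, mean_down, t_max=1000.0):
    """Simulate X(t): start at x0, decrease at unit rate between jumps,
    and at Poisson(lam) epochs jump up (prob p_up, Exp(mean_up) size)
    or down (Exp(mean_down) size).  Return T = inf{t : X(t) <= 0},
    truncated at t_max as a safeguard."""
    t, x = 0.0, x0
    while t < t_max:
        dt = rng.exponential(1.0 / lam)   # waiting time to the next jump
        if x - dt <= 0.0:                 # linear decrease reaches zero first
            return t + x
        t, x = t + dt, x - dt
        if rng.uniform() < p_up:
            x += rng.exponential(mean_up)
        else:
            x -= rng.exponential(mean_down)
            if x <= 0.0:                  # downward jump crosses zero
                return t
    return t_max

times = [first_exit_time(x0=5.0, lam=1.0, p_up=0.5,
                         mean_up=1.0, mean_down=1.0) for _ in range(2000)]
print(np.mean(times))
```

With symmetric jumps the jump drift is zero, so the net drift is the linear decrease of −1 and the mean exit time is a little above x0 = 5 (the excess reflects overshoot when zero is crossed by a downward jump).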

9.
The bootstrap variance estimate is widely used in semiparametric inference. However, its theoretical validity is a well-known open problem. In this paper, we provide a first theoretical study of bootstrap moment estimates in semiparametric models. Specifically, we establish the bootstrap moment consistency of the Euclidean parameter, which immediately implies the consistency of the t-type bootstrap confidence set. It is worth pointing out that the only additional cost to achieve bootstrap moment consistency, in contrast with distribution consistency, is to strengthen the L1 maximal inequality condition required in the latter to an Lp maximal inequality condition for p ≥ 1. The general Lp multiplier inequality developed in this paper is also of independent interest. These conclusions hold for bootstrap methods with exchangeable bootstrap weights, for example, the nonparametric bootstrap and the Bayesian bootstrap. The general theory is illustrated in the celebrated Cox regression model.

10.
The equivalence of certain tests of hypothesis and confidence limits is well known. When, however, the confidence limits are computed only after rejection of a null hypothesis, the usual unconditional confidence limits are no longer valid. This refers to a strict two-stage inference procedure: first test the hypothesis of interest and, if the test results in a rejection decision, proceed with estimating the relevant parameter. In such a situation, confidence limits should be computed conditionally on the specified outcome of the test under which estimation proceeds. Conditional confidence sets will be longer than unconditional confidence sets and may even contain values of the parameter previously rejected by the test of hypothesis. Conditional confidence limits for the mean of a normal population with known variance are used to illustrate these results. In many applications, these results indicate that conditional estimation is probably not good practice.
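The failure of unconditional limits after a test is easy to see by simulation in the normal-mean setting the abstract mentions. The true mean, sample size, and nominal levels below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

mu, n, z = 0.2, 25, 1.96
# Sample means of n = 25 observations from N(mu, 1): xbar ~ N(mu, 1/n).
xbar = rng.normal(mu, 1.0 / np.sqrt(n), 100_000)
reject = np.abs(xbar) * np.sqrt(n) > z          # 5% two-sided z-test of mu = 0
covered = np.abs(xbar - mu) * np.sqrt(n) <= z   # usual 95% CI contains mu?

print(covered.mean())           # unconditional coverage: about 0.95
print(covered[reject].mean())   # coverage conditional on rejection: well below 0.95
```

Conditioning on rejection selects the samples whose means are far from zero, which drags the usual interval's coverage below its nominal level; hence the need for conditionally valid limits.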

11.
In this paper we consider the joint modeling of an observable time series y_t and an unobservable process S_t capturing possible changes of regime, when (y_t, S_t) is not necessarily jointly Markovian. For instance, models with moving-average components in switching-regime models, for which Hamilton's algorithm fails, are particular cases. We introduce a general switching state-space model and, in this framework, propose a combination of the partial Kalman filter and importance sampling techniques to compute the likelihood function. Moreover, various variance-reduction methods based on sequentially optimal approaches are given. These approaches are computationally simpler when S_t is a qualitative variable, i.e., takes only a finite number of values, and in this setting Monte Carlo studies show the practical feasibility and efficiency of the proposed methods. The filtering and smoothing problems are also dealt with.

12.
Stuart's (1953) measure of association in contingency tables, τ_C, based on Kendall's (1962) τ, is compared with Goodman and Kruskal's (1954, 1959, 1963, 1972) measure G. First, it is proved that |G| ≥ |τ_C|; then it is shown that the upper bound for the asymptotic variance of G is not necessarily smaller than the upper bound for the asymptotic variance of τ_C. It is proved, however, that the upper bound for the coefficient of variation of G cannot be larger in absolute value than that of τ_C. The asymptotic variance of τ_C is also derived, yielding an upper bound for this asymptotic variance which is sharper than Stuart's (1953).

13.
The basic model in this paper is an AR(1) model with a structural break in the AR parameter β at an unknown time k0. That is, y_t = β1 y_{t-1} I{t ≤ k0} + β2 y_{t-1} I{t > k0} + ε_t, t = 1, 2, …, T, where I{·} denotes the indicator function. Suppose |β1| < 1, |β2| < 1, and {ε_t, t ≥ 1} is a sequence of i.i.d. random variables in the domain of attraction of the normal law with zero mean and possibly infinite variance. The limiting distributions of the least squares estimators of β1 and β2 are studied in the present paper, extending some results of Chong, T.L. (2001), Structural change in AR(1) models, Econometric Theory 17:87–155.
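A minimal sketch of the break model with the least squares estimators of β1 and β2, assuming Gaussian innovations (the paper's setting allows heavier tails) and assuming the break date k0 is known, which sidesteps the break-date estimation the full analysis would involve.

```python
import numpy as np

rng = np.random.default_rng(5)

def break_ls(n, k0, b1, b2):
    """Simulate y_t = b1*y_{t-1}*I{t<=k0} + b2*y_{t-1}*I{t>k0} + eps_t
    and return the least squares estimates of (b1, b2) given k0."""
    y = np.zeros(n + 1)
    eps = rng.standard_normal(n)
    for t in range(1, n + 1):
        beta = b1 if t <= k0 else b2
        y[t] = beta * y[t - 1] + eps[t - 1]
    # Separate regressions of y_t on y_{t-1} within each regime.
    x1, z1 = y[0:k0], y[1:k0 + 1]
    x2, z2 = y[k0:n], y[k0 + 1:n + 1]
    return (np.sum(x1 * z1) / np.sum(x1 ** 2),
            np.sum(x2 * z2) / np.sum(x2 ** 2))

b1_hat, b2_hat = break_ls(n=4000, k0=2000, b1=0.5, b2=-0.7)
print(b1_hat, b2_hat)   # should be near the true values 0.5 and -0.7
```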

14.
Let H(x, y) be a continuous bivariate distribution function with known marginal distribution functions F(x) and G(y). Suppose the values of H are given at several points, H(x_i, y_i) = θ_i, i = 1, 2, …, n. We first discuss conditions for the existence of a distribution satisfying these constraints, and present a procedure for checking whether such a distribution exists. We then consider finding lower and upper bounds for such distributions; these bounds may be used to establish bounds on the values of Spearman's ρ and Kendall's τ. For n = 2, we present necessary and sufficient conditions for the existence of such a distribution function and derive best-possible upper and lower bounds for H(x, y). As shown by a counterexample, these bounds need not be proper distribution functions, and we find conditions under which they are. We also present some results for the general case, where the values of H(x, y) are known at more than two points. For notational simplicity, our results are presented in terms of copulas, but they may easily be expressed in terms of distribution functions.

15.
Consider k (≥ 2) normal populations with unknown means μ1, …, μk and a common known variance σ². Let μ[1] ≤ ⋯ ≤ μ[k] denote the ordered μi. The populations associated with the t (1 ≤ t ≤ k − 1) largest means are called the t best populations. Hsu and Panchapakesan (2004) proposed and investigated a procedure R_HP for selecting a nonempty subset of the k populations, of size at most m (1 ≤ m ≤ k − t), so that at least one of the t best populations is included in the selected subset with a minimum guaranteed probability P* whenever μ[k−t+1] − μ[k−t] ≥ δ*, where P* and δ* are specified in advance of the experiment. This probability requirement is known as the indifference-zone probability requirement. In the present article, we investigate the same procedure R_HP for the same goal as before but with k − t < m ≤ k − 1, so that at least one of the t best populations is included in the selected subset with a minimum guaranteed probability P* whatever the configuration of the unknown μi. The probability requirement in this latter case is termed the subset selection probability requirement. Santner (1976) proposed and investigated a different procedure (R_S) based on samples of size n from each of the populations, considering both cases, 1 ≤ m ≤ k − t and k − t < m ≤ k. The special case of t = 1 was studied earlier by Gupta and Santner (1973) and Hsu and Panchapakesan (2002) for their respective procedures.

16.
The purpose of this article is threefold. First, variance components testing for ANOVA-type mixed models is considered in which the response need not be divisible into independent sub-vectors, whereas most existing methods require such a division. Second, tests that a certain subset of variance components is zero are developed. Third, as normality is often violated in practice, the tests are constructed under very mild assumptions. To achieve these goals, an adaptive difference-based test and an adaptive trace-based test are constructed. The test statistics are asymptotically normal under the null hypothesis, are consistent against all global alternatives, and can detect local alternatives distinct from the null at a rate as close to n^{−1/2} as possible, with n being the sample size. Moreover, when the dimensions of the variance components in different sets are bounded, we develop a test with a chi-square limiting null distribution. The finite sample performance of the tests is examined via simulations, and a real data set is analysed for illustration.

17.
In the study of the stochastic behaviour of the lifetime of an element as a function of its length, it is often observed that the failure time (or lifetime) decreases as the length increases. In probabilistic terms, this idea can be expressed as follows. Let T be the lifetime of a specimen of length x; then the survival function, which denotes the probability that an element of length x survives till time t, is given by S_T(t, x) = P(T > t/α(x)), where α(x) is a monotonically decreasing function. In particular, it is often assumed that T has a Weibull distribution. In this paper, we propose a generalization of this Weibull model by assuming that the distribution of T is generalized gamma (GG). Since the GG model contains the Weibull, gamma and lognormal models as special and limiting cases, a GG regression model is an appropriate tool for describing the size effect on the lifetime and for selecting among the embedded models. Maximum likelihood estimates are obtained for the GG regression model with α(x) = cx^b. As a special case, this provides an alternative to the usual approach to estimation for the GG distribution, which involves reparametrization. Related parametric inference issues are addressed and illustrated using two experimental data sets. Some discussion of censored data is also provided.

18.

Let {y_t} be a Poisson-like process with mean μ_t, a periodic function of time t. We discuss how to fit this type of data using the quasi-likelihood method. Our method provides a new avenue for fitting a time series when the usual assumptions of stationarity and homogeneous residual variances are invalid. We show that the estimators obtained are strongly consistent and asymptotically normal.
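One way to realize such a quasi-likelihood fit is Fisher scoring on a Poisson-type quasi-score with harmonic regressors for the periodic mean. The log link, the single-harmonic form of μ_t, and all parameter values below are illustrative assumptions, not the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated "Poisson-like" counts with periodic mean
# mu_t = exp(b0 + b1*sin(2*pi*t/period) + b2*cos(2*pi*t/period)).
n, period = 600, 12
t = np.arange(n)
Xd = np.column_stack([np.ones(n),
                      np.sin(2 * np.pi * t / period),
                      np.cos(2 * np.pi * t / period)])
beta_true = np.array([1.0, 0.5, -0.3])
y = rng.poisson(np.exp(Xd @ beta_true))

# Quasi-likelihood estimation with log link: solve the quasi-score
# equations X'(y - mu) = 0 by Fisher scoring (IRLS).
beta = np.zeros(3)
for _ in range(50):
    mu = np.exp(Xd @ beta)
    W = Xd * mu[:, None]                     # X weighted by var(y_t) ∝ mu_t
    beta = beta + np.linalg.solve(Xd.T @ W, Xd.T @ (y - mu))

print(beta)   # should be close to beta_true
```

The same quasi-score remains consistent under over- or under-dispersion, which is what makes the approach suitable for merely "Poisson-like" data.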

19.
The present paper aims at an accurate quantification of the robustness of the two-sample t-test over an extensive practical range of distributions. The method is a major Monte Carlo study over the Pearson system of distributions, and the details indicate that the results are quite accurate. The study was conducted over the range β1 = 0.0(0.4)2.0 (negative and positive skewness) and β2 = 1.4(0.4)7.8, with equal sample sizes and for both the one- and two-tail t-tests. The significance level and power levels (for nominal values of 0.05, 0.50, and 0.95, respectively) were evaluated for each underlying distribution and each sample size, with each probability evaluated from 100,000 generated values of the test statistic. The results precisely quantify the degree of robustness inherent in the two-sample t-test and indicate to a user the degree of confidence one can have in this procedure over various regions of the Pearson system. They indicate that the equal-sample-size two-sample t-test is quite robust with respect to departures from normality, perhaps even more so than most people realize.
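A scaled-down version of such a Monte Carlo size study can be run in a few lines. Instead of the full Pearson system, this sketch uses a chi-square(4) parent (β1 = 2.0, β2 = 6.0, inside the study's range); the sample size and replication count are illustrative assumptions far below the paper's 100,000.

```python
import numpy as np

rng = np.random.default_rng(7)

def two_sample_t(x, y):
    """Equal-variance (pooled) two-sample t statistic."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(sp2 * (1 / nx + 1 / ny))

# Monte Carlo size of the nominal 5% two-tail test with equal sample
# sizes when both samples come from the same skewed chi-square(4) parent.
n, reps, crit = 15, 20_000, 2.048   # 2.048 = t critical value, 28 df
rejections = 0
for _ in range(reps):
    x = rng.chisquare(4, n)
    y = rng.chisquare(4, n)
    if abs(two_sample_t(x, y)) > crit:
        rejections += 1
print(rejections / reps)   # empirical size, close to the nominal 0.05
```

With equal sample sizes the parent skewness largely cancels in the difference of means, which is the mechanism behind the robustness the paper quantifies.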

20.
Although the t-type estimator is a kind of M-estimator with scale optimization, it has some advantages over the M-estimator. In this article, we first propose a t-type joint generalized linear model as a robust extension of the classical joint generalized linear models for data containing extreme or outlying observations. Next, we develop a t-type pseudo-likelihood (TPL) approach, which can be viewed as a robust version of the existing pseudo-likelihood (PL) approach. To determine which variables significantly affect the variance of the response, we then propose a unified penalized maximum TPL method to simultaneously select significant variables for the mean and dispersion models in t-type joint generalized linear models. The proposed method thus performs parameter estimation and variable selection in the mean and dispersion models simultaneously. With appropriate selection of the tuning parameters, we establish the consistency and oracle property of the regularized estimators. Simulation studies illustrate the proposed methods.

