Similar Articles
20 similar articles found (search time: 15 ms)
1.
Consider the general linear model Y = Xβ + ε, where E[εε'] = σ²I and the rank of X is less than or equal to the number of columns of X. It is well known that the linear parametric function λ'β is estimable if and only if λ' is in the row space of X. This paper characterizes all orthogonal matrices P such that the row space of XP is equal to the row space of X, i.e. the estimability of λ'β is invariant under P. An additional property of these matrices is the invariance of the spectrum of the information matrix X'X. An application of the results is also given.
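A minimal numerical sketch of the estimability condition (the rank-deficient design matrix below is an arbitrary illustrative choice, not from the paper): λ'β is estimable exactly when appending λ' to X leaves the rank unchanged, and an orthogonal P preserves estimability exactly when it preserves the row space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-deficient design: column 3 = column 1 + column 2 (illustrative choice).
X = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

def is_estimable(lam, X):
    # lam' beta is estimable iff appending lam' to X leaves the rank unchanged.
    return np.linalg.matrix_rank(np.vstack([X, lam])) == np.linalg.matrix_rank(X)

print(is_estimable(X[0] + X[1], X))                 # True: in the row space
print(is_estimable(np.array([1.0, 0.0, 0.0]), X))   # False for this X

def same_row_space(A, B):
    rA, rB = np.linalg.matrix_rank(A), np.linalg.matrix_rank(B)
    return rA == rB == np.linalg.matrix_rank(np.vstack([A, B]))

Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # a generic orthogonal matrix
print(same_row_space(X, X @ np.eye(3)))       # True: identity preserves the row space
print(same_row_space(X, X @ Q))               # typically False for rank-deficient X
```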

2.
We consider estimation of β in the semiparametric regression model y(i) = x^T(i)β + f(i/n) + ε(i), where x(i) = g(i/n) + e(i), f and g are unknown smooth functions and the processes ε(i) and e(i) are stationary with short- or long-range dependence. For the case of i.i.d. errors, Speckman (1988) proposed a √n-consistent estimator of β. In this paper it is shown that, under suitable regularity conditions, this estimator is asymptotically unbiased and √n-consistent even if the errors exhibit long-range dependence. The orders of the finite sample bias and of the required bandwidth depend on the long-memory parameters. Simulations and a data example illustrate the method.
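A rough simulation sketch of this model with a simple partialling-out step standing in for Speckman's kernel-based estimator; the moving-average smoother, its window, and the functions f and g below are illustrative assumptions, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 500, 2.0
t = np.arange(1, n + 1) / n
f = np.sin(2 * np.pi * t)        # illustrative smooth trend f(i/n)
g = t ** 2                       # illustrative smooth mean g(i/n) of x
x = g + rng.normal(size=n)
y = x * beta + f + rng.normal(size=n)

def smooth(v, h=25):
    # Crude moving-average smoother with window 2h+1 (stand-in for a kernel).
    kernel = np.ones(2 * h + 1) / (2 * h + 1)
    return np.convolve(v, kernel, mode="same")

# Partial out the smooth components from x and y, then regress residual on residual.
xr, yr = x - smooth(x), y - smooth(y)
beta_hat = np.sum(xr * yr) / np.sum(xr * xr)
print(beta_hat)                  # close to 2.0
```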

3.
4.
The object of this paper is the statistical analysis of several closely related models arising in water quality analysis. In particular, concern is with the autoregressive scheme X_r = ρX_{r-1} + Y_r, where 0 < ρ < 1 and the Y's are i.i.d. and non-negative. The estimation and testing problem is considered for three parametric models - Gaussian, uniform and exponential - as well as for the nonparametric case where it is assumed that the Y's have a positive continuous distribution.
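A quick simulation sketch of this scheme for the exponential case (one of the three parametric models considered); ρ and the sample size are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(2)
n, rho = 1000, 0.7
Y = rng.exponential(scale=1.0, size=n)   # i.i.d. non-negative innovations

X = np.empty(n)
X[0] = Y[0] / (1 - rho)                  # rough stationary-looking start
for r in range(1, n):
    X[r] = rho * X[r - 1] + Y[r]

# Moment check: the stationary mean of this scheme is E(Y) / (1 - rho).
print(X.mean(), 1.0 / (1 - rho))
```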

5.
We introduce a modified version f̂_n of the piecewise linear histogram of Beirlant et al. (1998) which is a true probability density, i.e., f̂_n ≥ 0 and ∫ f̂_n = 1. We prove that f̂_n estimates the underlying density f strongly consistently in the L1 norm, derive large deviation inequalities for the L1 error ||f̂_n - f|| and prove that E||f̂_n - f|| tends to zero with the rate n^(-1/3). We also show that the derivative f̂'_n consistently estimates, in the expected L1 error, the derivative f' of a sufficiently smooth density, and we evaluate the rate of convergence n^(-1/5) for E||f̂'_n - f'||. The estimator f̂_n thus enables one to approximate f in the Besov space with a guaranteed rate of convergence. Optimization of the smoothing parameter is also studied. The theoretical or experimentally approximated values of the expected errors E||f̂_n - f|| and E||f̂'_n - f'|| are compared with the errors achieved by the histogram of Beirlant et al. and other nonparametric methods.
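A crude sketch of one way to turn histogram counts into a piecewise linear, properly normalized density (interpolate the bin-midpoint heights, clip at zero, renormalize); this illustrates the idea of a true-density modification, not the authors' exact construction.

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(size=1000)

counts, edges = np.histogram(data, bins=30, density=True)
mids = 0.5 * (edges[:-1] + edges[1:])

# Piecewise linear interpolation of the bin-midpoint heights, clipped at 0
# and renormalized so the estimate integrates to 1 (a true probability density).
grid = np.linspace(edges[0], edges[-1], 2000)
mass = np.trapz(np.maximum(np.interp(grid, mids, counts), 0.0), grid)

def f_hat(x):
    return np.maximum(np.interp(x, mids, counts), 0.0) / mass

print(np.round(f_hat(np.linspace(-2, 2, 5)), 3))   # density estimates at a few points
```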

6.
Let X1, X2, … be i.i.d. observations from a mixture density. The support of the unknown prior distribution is the union of two unknown intervals. The paper deals with an empirical Bayes testing approach (λ ≤ c against λ > c, where c is an unknown parameter to be estimated) in order to classify the observed variables as coming from one population or the other, according as λ belongs to one or the other unknown interval. Two methods are proposed in which asymptotically optimal decision rules are constructed avoiding the estimation of the unknown prior. The first method deals with the case of exponential families and is a generalization of the method of Johns and Van Ryzin (1971, 1972), whereas the second one deals with families that are closed under convolution and is a Fourier method. The application of the Fourier method to some densities (e.g. contaminated Gaussian distributions, the exponential distribution, the double-exponential distribution) which are interesting in view of applications and which cannot be studied by means of the direct method is also considered herein.

7.
The mark variogram [Cressie, 1993. Statistics for Spatial Data. Wiley, New York] is a useful tool to analyze data from marked point processes. In this paper, we investigate the asymptotic properties of its estimator. Our main findings are that the sample mark variogram is a consistent estimator for the true mark variogram and is asymptotically normal under some mild conditions. These results hold for both the geostatistical marking case (i.e., the case where the marks and points are independent) and the non-geostatistical marking case (i.e., the case where the marks and points are dependent). As an application we develop a general test for spatial isotropy and study our methodology through a simulation study and an application to a data set on longleaf pine trees.
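A bare-bones sketch of a sample mark variogram under assumed notation: for pairs of points in a given distance bin, average half the squared mark difference. The simulated pattern, the marks, and the binning are illustrative choices.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(4)
pts = rng.uniform(0, 10, size=(300, 2))                   # illustrative point pattern
marks = np.sin(pts[:, 0]) + 0.3 * rng.normal(size=300)    # spatially structured marks

d = pdist(pts)                              # pairwise inter-point distances
half_sq = 0.5 * pdist(marks[:, None]) ** 2  # 0.5 * (m_i - m_j)^2 for each pair

bins = np.linspace(0, 5, 11)
which = np.digitize(d, bins)
gamma = [half_sq[which == b].mean() for b in range(1, len(bins))]
print(np.round(gamma, 3))                   # sample mark variogram per distance bin
```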

8.
This paper presents two simple non-Gaussian first-order autoregressive Markovian processes which are easy to simulate via a computer. The autoregressive Gamma process {X_n} is constructed according to the stochastic difference equation X_n = V_n X_{n-1} + ε_n, where {ε_n} is an i.i.d. exponential sequence and {V_n} is i.i.d. with power-function distribution defined on the interval [0, 1). The autoregressive Weibull process {X_n} is constructed from the probabilistic model X_n = k·min(X_{n-1}, Y_n), where {Y_n} is an i.i.d. Weibull sequence and k > 1.
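A direct simulation sketch of both schemes; the shape parameters and k are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000

# Autoregressive Gamma process: X_n = V_n * X_{n-1} + eps_n, with
# V_n ~ power-function on [0, 1) and eps_n ~ exponential.
theta = 2.0                          # illustrative power-function shape
V = rng.power(theta, size=n)
eps = rng.exponential(size=n)
Xg = np.empty(n)
Xg[0] = eps[0]
for i in range(1, n):
    Xg[i] = V[i] * Xg[i - 1] + eps[i]

# Autoregressive Weibull process: X_n = k * min(X_{n-1}, Y_n), k > 1.
k, shape = 1.3, 1.5                  # illustrative constants
Y = rng.weibull(shape, size=n)
Xw = np.empty(n)
Xw[0] = Y[0]
for i in range(1, n):
    Xw[i] = k * min(Xw[i - 1], Y[i])

print(Xg.mean(), Xw.mean())
```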

9.
This paper provides the theoretical explanation and Monte Carlo experiments of using a modified version of the Durbin-Watson (DW) statistic to test an I(1) process against I(d) alternatives, that is, integrated processes of order d, where d is a fractional number. We provide the exact order of magnitude of the modified DW test when the data generating process is an I(d) process with d ∈ (0, 1.5). Moreover, the consistency of the modified DW statistic as a unit root test against I(d) alternatives with d ∈ (0, 1) ∪ (1, 1.5) is proved in this paper. In addition to the theoretical analysis, Monte Carlo experiments on the performance of the modified DW statistic show that it can be used as a unit root test against I(d) alternatives.
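For orientation only, a sketch of the classical (unmodified) DW statistic, which the paper's test builds on: the sum of squared first differences over the sum of squares. Applying it to a demeaned series gives a value near zero under a unit root; the paper's modification is not reproduced here.

```python
import numpy as np

def durbin_watson(e):
    # Classical Durbin-Watson statistic: sum of squared first differences
    # divided by the sum of squares.
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(6)
rw = np.cumsum(rng.normal(size=500))    # an I(1) random walk
print(durbin_watson(rw - rw.mean()))    # near 0 for a unit-root series
```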

10.
In this paper we consider a binary, monotone system whose component states are dependent through the possible occurrence of independent common shocks, i.e. shocks that destroy several components at once. The individual failure of a component is also thought of as a shock. Such systems can be used to model common cause failures in reliability analysis. The system may be a technological one, or a human being. It is observed until it fails or dies. At this instant, the set of failed components and the failure time of the system are noted. The failure times of the components are not known. These are the so-called autopsy data of the system. For the case of independent components, i.e. no common shocks, Meilijson (1981), Nowik (1990), Antoine et al. (1993) and Gåsemyr (1998) discuss the corresponding identifiability problem, i.e. whether the component life distributions can be determined from the distribution of the observed data. Assuming a model where autopsy data is known to be enough for identifiability, Meilijson (1994) goes beyond the identifiability question and into maximum likelihood estimation of the parameters of the component lifetime distributions based on empirical autopsy data from a sample of several systems. He also considers life-monitoring of some components and conditional life-monitoring of some others. Here a corresponding Bayesian approach is presented for the shock model. Due to prior information, one advantage of this approach is that the identifiability problem represents no obstacle. The motivation for introducing the shock model is that the autopsy model is of special importance when components cannot be tested separately because it is difficult to reproduce the conditions prevailing in the functioning system. In Gåsemyr & Natvig (1997) we treat the Bayesian approach to life-monitoring and conditional life-monitoring of components.

11.
This paper concerns a family of univariate distributions suggested by Topp & Leone in 1955. Topp & Leone provided no motivation for this new family, and by way of properties they derived only the first four integer-order moments, i.e. E(X^n) for n = 1, 2, 3, 4. In this paper we provide a motivation for the family of distributions and derive explicit algebraic expressions for: (1) the hazard rate function; (2) E(X^n) when n ≥ 1 is any integer; (3) E(X^n) for n = 1, 2, …, 10; and (4) E[{X - E(X)}^n] for n = 2, 3, 4. We also give an expression for the characteristic function and discuss issues on estimation and simulation. The main calculations of this paper use properties of the Gauss hypergeometric function.
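A small simulation sketch for this family by inversion, taking the Topp-Leone CDF to be F(x) = (x(2 - x))^ν on [0, 1] (stated here as an assumption), with Monte Carlo moments as a check on the algebraic expressions.

```python
import numpy as np

rng = np.random.default_rng(7)
nu = 2.0                              # illustrative shape parameter

# Assumed CDF F(x) = (x(2-x))^nu on [0,1]; inverting gives
# x = 1 - sqrt(1 - u^(1/nu)) for u ~ Uniform(0,1).
u = rng.uniform(size=100_000)
x = 1.0 - np.sqrt(1.0 - u ** (1.0 / nu))

for n in range(1, 5):
    print(n, (x ** n).mean())         # Monte Carlo estimate of E(X^n)
```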

12.
In an earlier paper it was recommended that an experimental design for the study of a mixture system in which the components had lower and upper limits should consist of a subset of the vertices and centroids of the region defined by the limits on the components. This paper extends this methodology to the situation where linear combinations of two or more components (e.g., liquid content = x3 + x4 ≤ 0.35) are subject to lower and upper constraints. The CONSIM algorithm, developed by R. E. Wheeler, is recommended for computing the vertices of the resulting experimental region. Procedures for developing linear and quadratic mixture model designs are discussed. A five-component example which has two multiple-component constraints is included to illustrate the proposed methods of mixture experimentation.

13.
The authors consider the problem of estimating the density g of independent and identically distributed variables X_i from a sample Z_1, …, Z_n such that Z_i = X_i + σε_i for i = 1, …, n, where ε is noise independent of X, with ε having a known distribution. They present a model selection procedure allowing one to construct an adaptive estimator of g and to find nonasymptotic risk bounds. The estimator achieves the minimax rate of convergence in most cases where lower bounds are available. A simulation study gives an illustration of the good practical performance of the method.
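A sketch of the deconvolution setting with a simple cut-off estimator standing in for the paper's model selection procedure (the Gaussian noise, the target density, and the cut-off T are illustrative assumptions): divide the empirical characteristic function of Z by the known noise characteristic function and invert over a truncated frequency band.

```python
import numpy as np

rng = np.random.default_rng(8)
n, sigma = 2000, 0.3
X = rng.normal(1.0, 0.5, size=n)       # illustrative target density g
Z = X + sigma * rng.normal(size=n)     # observed data; noise distribution known

# Cut-off deconvolution: g_hat(x) = (1/2pi) * int_{|t|<T} e^{-itx} phi_Z(t)/phi_eps(t) dt
T = 1.0 / sigma                        # illustrative cut-off, not the paper's rule
t = np.linspace(-T, T, 401)
phi_Z = np.exp(1j * np.outer(t, Z)).mean(axis=1)   # empirical c.f. of Z
phi_eps = np.exp(-0.5 * (sigma * t) ** 2)          # known Gaussian noise c.f.
ratio = phi_Z / phi_eps

xs = np.linspace(-1, 3, 9)
g_hat = [np.trapz(np.exp(-1j * t * x) * ratio, t).real / (2 * np.pi) for x in xs]
print(np.round(g_hat, 3))              # rough density estimates on a grid
```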

14.
We present a concise summary of recent progress in developing algorithms for restricted least absolute value (LAV) estimation (i.e. ℓ1 approximation subject to linear constraints). The emphasis is on our own new algorithm, and we provide some numerical results obtained with it.
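This is not the authors' algorithm; as a sketch of the same problem, restricted LAV can be posed as a generic linear program by splitting each residual into positive parts, here solved with scipy's linprog. The data and the linear restriction are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Restricted LAV as an LP: minimize sum(u + v)
# subject to X beta + u - v = y, u, v >= 0, and A beta <= b.
rng = np.random.default_rng(9)
n, p = 60, 2
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
y = X @ np.array([1.0, 0.5]) + rng.laplace(scale=0.5, size=n)

A, b = np.array([[0.0, 1.0]]), np.array([0.4])   # illustrative restriction: slope <= 0.4

c = np.concatenate([np.zeros(p), np.ones(2 * n)])
A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
A_ub = np.hstack([A, np.zeros((1, 2 * n))])
bounds = [(None, None)] * p + [(0, None)] * (2 * n)

res = linprog(c, A_ub=A_ub, b_ub=b, A_eq=A_eq, b_eq=y, bounds=bounds)
print(res.x[:p])   # LAV estimate of beta with the slope capped at 0.4
```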

15.
Existing models for ring recovery and recapture data analysis treat temporal variations in annual survival probability (S) as fixed effects. Often there is no explainable structure to the temporal variation in S_1, …, S_k; random effects can then be a useful model: S_i = E(S) + ε_i. Here, the temporal variation in survival probability is treated as random with average value E(ε_i²) = σ². This random effects model can now be fitted in program MARK. Resultant inferences include point and interval estimation for the process variation σ², and estimation of E(S) and var(Ê(S)), where the latter includes a component for σ² as well as the traditional component for var(Ŝ|S). Furthermore, the random effects model leads to shrinkage estimates S̃_i as improved (in mean square error) estimators of S_i compared to the MLEs Ŝ_i from the unrestricted time-effects model. Appropriate confidence intervals based on the S̃_i are also provided. In addition, AIC has been generalized to random effects models. This paper presents results of a Monte Carlo evaluation of inference performance under the simple random effects model. Examined by simulation, under the simple one-group Cormack-Jolly-Seber (CJS) model, are issues such as bias of σ̂², confidence interval coverage on σ², coverage and mean square error comparisons for inference about S_i based on shrinkage versus maximum likelihood estimators, and performance of AIC model selection over three models: S_i = S (no effects), S_i = E(S) + ε_i (random effects), and S_1, …, S_k (fixed effects). For the cases simulated, the random effects methods performed well and were uniformly better than fixed effects MLE for the S_i.
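A simplified moments-based sketch of the random-effects idea (not MARK's likelihood machinery; the shrinkage weight below is the standard variance-ratio form, stated as an assumption): estimate σ² by subtracting the average sampling variance from the between-year variance of the MLEs, then shrink each Ŝ_i toward the overall mean.

```python
import numpy as np

rng = np.random.default_rng(10)
k, ES, sigma2, samp_var = 20, 0.6, 0.01, 0.004   # illustrative values

S_true = ES + rng.normal(0, np.sqrt(sigma2), size=k)        # annual survival (random effects)
S_mle = S_true + rng.normal(0, np.sqrt(samp_var), size=k)   # MLEs with sampling error

# Moment estimate of process variance: between-year variance minus sampling variance.
sigma2_hat = max(S_mle.var(ddof=1) - samp_var, 0.0)

# Shrink toward the mean with weight sigma2_hat / (sigma2_hat + sampling variance).
w = sigma2_hat / (sigma2_hat + samp_var)
S_shrunk = S_mle.mean() + w * (S_mle - S_mle.mean())

def mse(est):
    return np.mean((est - S_true) ** 2)

print(sigma2_hat, mse(S_mle), mse(S_shrunk))   # shrinkage typically lowers MSE
```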

16.
In pattern classification of sampled vector-valued random variables it is often essential, due to computational and accuracy considerations, to consider certain measurable transformations of the random variable. These transformations are generally of a dimension-reducing nature. In this paper we consider the class of linear dimension-reducing transformations, i.e., the k × n matrices of rank k where k < n and n is the dimension of the range of the sampled vector random variable.

In this connection, we use certain results (Decell and Quirein, 1973) that guarantee, relative to various class separability criteria, the existence of an extremal transformation. These results also guarantee that the extremal transformation can be expressed in the form (I_k | Z)U, where I_k is the k × k identity matrix and U is an orthogonal n × n matrix. These results actually limit the search for the extremal linear transformation to a search over the obviously smaller class of k × n matrices of the form (I_k | Z)U. In this paper these results are refined in the sense that any extremal transformation can be expressed in the form (I_k | Z)H_p ⋯ H_1, where p ≤ min{k, n−k} and H_i is a Householder transformation, i = 1, …, p. The latter result allows one to construct a sequence of transformations (I_k | Z)H_1, (I_k | Z)H_2H_1, … such that the class separability criterion values evaluated along this sequence form a bounded, monotone sequence of real numbers. The construction of the i-th element of the sequence of transformations requires the solution of an n-dimensional optimization problem. The solution, for various class separability criteria, of the optimization problem will be the subject of later papers. We have conjectured (with supporting theorems and empirical results) that, since the bounded monotone sequence of real class separability values converges to its least upper bound, this least upper bound is an extremal value of the class separability criterion.
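A small sketch of the algebra referenced here: build a Householder reflector H = I − 2vv'/(v'v), confirm it is orthogonal, and form a transformation of the shape (I_k | Z)H. The dimensions and the block Z are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(11)
n, k = 5, 2

def householder(v):
    # Householder reflector H = I - 2 v v' / (v'v); orthogonal and symmetric.
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

H1 = householder(rng.normal(size=n))
print(np.allclose(H1 @ H1.T, np.eye(n)))   # True: H1 is orthogonal

Z = rng.normal(size=(k, n - k))            # arbitrary k x (n-k) block
B = np.hstack([np.eye(k), Z])              # the (I_k | Z) block form
T = B @ H1                                 # a k x n transformation (I_k | Z)H1
print(T.shape, np.linalg.matrix_rank(T))   # (2, 5), rank 2
```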

Several open questions are stated and the practical implications of the results are discussed.

17.
Traditionally, sphericity (i.e., independence and homoscedasticity for raw data) is put forward as the condition to be satisfied by the variance–covariance matrix of at least one of the two observation vectors analyzed for correlation, for the unmodified t test of significance to be valid under the Gaussian and constant population mean assumptions. In this article, the author proves that the sphericity condition is too strong and a weaker (i.e., more general) sufficient condition for valid unmodified t testing in correlation analysis is circularity (i.e., independence and homoscedasticity after linear transformation by orthonormal contrasts), to be satisfied by the variance–covariance matrix of one of the two observation vectors. Two other conditions (i.e., compound symmetry for one of the two observation vectors; absence of correlation between the components of one observation vector, combined with a particular pattern of joint heteroscedasticity in the two observation vectors) are also considered and discussed. When both observation vectors possess the same variance–covariance matrix up to a positive multiplicative constant, the circularity condition is shown to be necessary and sufficient. “Observation vectors” may designate partial realizations of temporal or spatial stochastic processes as well as profile vectors of repeated measures. From the proof, it follows that an effective sample size appropriately defined can measure the discrepancy from the more general sufficient condition for valid unmodified t testing in correlation analysis with autocorrelated and heteroscedastic sample data. The proof is complemented by a simulation study. Finally, the differences between the role of the circularity condition in the correlation analysis and its role in the repeated measures ANOVA (i.e., where it was first introduced) are scrutinized, and the link between the circular variance–covariance structure and the centering of observations with respect to the sample mean is emphasized.
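A short sketch of checking circularity numerically: transform a variance–covariance matrix by orthonormal contrasts (Helmert rows) and test whether the result is proportional to the identity. The compound-symmetric Σ below is an illustrative example that satisfies circularity; the AR(1) matrix does not.

```python
import numpy as np
from scipy.linalg import helmert

def is_circular(Sigma, tol=1e-10):
    # Circularity: C Sigma C' proportional to I for orthonormal contrasts C.
    C = helmert(Sigma.shape[0], full=False)   # (n-1) x n matrix of orthonormal contrasts
    M = C @ Sigma @ C.T
    lam = np.trace(M) / M.shape[0]
    return np.allclose(M, lam * np.eye(M.shape[0]), atol=tol)

n, rho = 4, 0.3
cs = (1 - rho) * np.eye(n) + rho * np.ones((n, n))                   # compound symmetry
ar1 = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))   # AR(1) structure
print(is_circular(cs), is_circular(ar1))   # True False
```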

18.
In this paper we consider the problem of testing for a scale change in the infinite-order moving average process X_j = Σ_{i=0}^∞ a_i ε_{j-i}, where the ε_j are i.i.d. random variables with E|ε_1|^κ < ∞ for some κ > 0. In performing the test, a cusum of squares test statistic analogous to Inclan & Tiao's (1994) statistic is considered. It is well known from the literature that outliers affect test procedures, leading to false conclusions. In order to remedy this, a cusum of squares test based on trimmed observations is considered. It is demonstrated that this test is robust against outliers and is valid for infinite variance processes as well. Simulation results are given for illustration.
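A compact sketch of a cusum of squares statistic in the spirit of Inclan & Tiao (the √(n/2) scaling and the simple quantile trimming rule here are assumptions for illustration): with C_k the cumulative sum of squares, take the maximum of |C_k/C_n − k/n|, optionally on trimmed observations.

```python
import numpy as np

def cusum_of_squares(x, trim=0.0):
    # Inclan & Tiao-style statistic: sqrt(n/2) * max_k |C_k/C_n - k/n|,
    # computed on trimmed observations when trim > 0 (outlier guard).
    if trim > 0.0:
        x = x[np.abs(x) <= np.quantile(np.abs(x), 1.0 - trim)]
    n = len(x)
    C = np.cumsum(x ** 2)
    k = np.arange(1, n + 1)
    return np.sqrt(n / 2.0) * np.max(np.abs(C / C[-1] - k / n))

rng = np.random.default_rng(12)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(0, 2, 300)])  # scale change mid-sample
print(cusum_of_squares(x), cusum_of_squares(x, trim=0.05))          # large values signal a change
```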

19.
A 2 × 2 contingency table can often be analysed in an exact fashion by using Fisher's exact test and in an approximate fashion by using the chi-squared test with Yates' continuity correction, and it is traditionally held that the approximation is valid when the minimum expected quantity E satisfies E ≥ 5. Unfortunately, little research has been carried out into this belief, other than that it is necessary to establish a bound E > E*, that the condition E ≥ 5 may not be the most appropriate (Martín Andrés et al., 1992), and that E* is not a constant, but usually increases with the sample size (Martín Andrés & Herranz Tejedor, 1997). In this paper, the authors conduct a theoretical experimental study from which they ascertain that the E* value (which is very variable and frequently quite a lot greater than 5) is strongly related to the magnitude of the skewness of the underlying hypergeometric distribution, and that bounding the skewness is equivalent to bounding E (which is the best control procedure). The study enables the expression for the above-mentioned E* (which in turn depends on the number of tails in the test, the alpha error used, the total sample size, and the minimum marginal imbalance) to be estimated. The authors also show that E* generally increases with the sample size and with the marginal imbalance, although it does reach a maximum. Some general and very conservative validity conditions are E ≥ 35.53 (one-tailed test) and E ≥ 7.45 (two-tailed test) for nominal alpha errors between 1% and 10%. The traditional condition E ≥ 5 is only valid when the samples are small and one of the marginals is very balanced; alternatively, the condition E ≥ 5.5 is valid for small samples or a very balanced marginal. Finally, it is proved that the chi-squared test is always valid in tables where both marginals are balanced, and that the maximum skewness permitted is related to the maximum value of the bound E*, to its value for tables with at least one balanced marginal and to the minimum value that those marginals must have (in non-balanced tables) for the chi-squared test to be valid.
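A sketch of the two analyses being compared, using scipy: compute the minimum expected count E under independence, the Yates-corrected chi-squared p-value, and Fisher's exact p-value. The table entries are arbitrary illustrative counts.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

table = np.array([[12, 5],
                  [ 7, 20]])   # illustrative 2x2 table

# Minimum expected count E under independence: row total * column total / N.
N = table.sum()
expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / N
E = expected.min()

chi2, p_yates, _, _ = chi2_contingency(table, correction=True)  # Yates' correction
_, p_fisher = fisher_exact(table)                               # exact reference

print(E, p_yates, p_fisher)   # compare the approximation against the exact test
```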

20.
In biomedical research and diagnostic practice it is common to classify objects dichotomously based on continuous observations (x) measuring some form of biological activity, where some proportion of the objects have a level of activity above background. In this paper, we consider the problem of estimating the proportion of positive objects for a typical assay where: (i) the distribution of x for positive objects is unknown, although (ii) the risk of positivity is known to be a monotonic function of x; and (iii) x has been measured for a set of negative control objects. Monte Carlo simulations evaluating four alternative estimators of the positive proportion, including novel non-parametric mixture decompositions, indicate that where the positives and negatives have distributions of x with a moderate degree of overlap, a non-parametric decomposition using a latent class model provides precise and close to unbiased estimates. The methods are illustrated using data from an autoradiography assay used in cell biology.
