Similar Documents (20 results)
1.
Abstract. A right-censored version of a U-statistic with a kernel of degree m ≥ 1 is introduced by the principle of a mean-preserving reweighting scheme, which is also applicable when the dependence between failure times and the censoring variable is explainable through observable covariates. Its asymptotic normality and an expression for its standard error are obtained through a martingale argument. We study the performance of our U-statistic by simulation and compare it with theoretical results. A doubly robust version of this reweighted U-statistic is also introduced to gain efficiency under correct models while preserving consistency in the face of model mis-specification. Using a Kendall's kernel, we obtain a test statistic for testing homogeneity of failure times for multiple failure causes in a multiple decrement model. The performance of the proposed test is studied through simulations. Its usefulness is also illustrated by applying it to a real data set on graft-versus-host disease.

2.
In this paper, we study the effects of noise on bipower variation, realized volatility (RV) and testing for co-jumps in high-frequency data under the small noise framework. We first establish asymptotic properties of bipower variation in this framework. In the presence of small noise, RV is asymptotically biased, and an additional asymptotic conditional variance term appears in its limit distribution. We also propose consistent estimators for the asymptotic variances of RV. Second, we derive the asymptotic distribution of the test statistic proposed in (Ann. Stat. 37, 1792-1838) in the presence of small noise for testing the presence of co-jumps in a two-dimensional Itô semimartingale. In contrast to the setting in (Ann. Stat. 37, 1792-1838), we show that additional asymptotic variance terms appear, and we propose consistent estimators for these variances in order to make the test feasible. Simulation experiments show that our asymptotic results give reasonable approximations in finite samples.
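As a point of reference (these are the standard noise-free definitions, not the paper's noise-adjusted estimators), the two baseline summary measures above can be sketched in Python; the function names are illustrative:

```python
import numpy as np

def realized_volatility(returns):
    """Realized volatility (RV): the sum of squared intraday returns."""
    r = np.asarray(returns, dtype=float)
    return np.sum(r ** 2)

def bipower_variation(returns):
    """Bipower variation: (pi/2) * sum_i |r_i| |r_{i-1}|.

    Unlike RV, it stays consistent for integrated variance in the
    presence of jumps, which is why the two are compared in jump tests.
    """
    r = np.abs(np.asarray(returns, dtype=float))
    return (np.pi / 2.0) * np.sum(r[1:] * r[:-1])
```

On a jump-free sample the two quantities are close; a large gap between RV and bipower variation is the usual jump-test signal.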

3.
Small-area estimation techniques have typically relied on plug-in estimation based on models containing random area effects. More recently, regression M-quantiles have been suggested for this purpose, thus avoiding conventional Gaussian assumptions, as well as problems associated with the specification of random effects. However, the plug-in M-quantile estimator for the small-area mean can be shown to be the expected value of this mean with respect to a generally biased estimator of the small-area cumulative distribution function of the characteristic of interest. To correct this problem, we propose a general framework for robust small-area estimation, based on representing a small-area estimator as a functional of a predictor of this small-area cumulative distribution function. Key advantages of this framework are that it naturally leads to integrated estimation of small-area means and quantiles and is not restricted to M-quantile models. We also discuss mean squared error estimation for the resulting estimators, and demonstrate the advantages of our approach through model-based and design-based simulations, with the latter using economic data collected in an Australian farm survey.

4.
Directional testing of vector parameters, based on higher order approximations of likelihood theory, can ensure extremely accurate inference, even in high-dimensional settings where standard first order likelihood results can perform poorly. Here we explore examples of directional inference where the calculations can be simplified, and prove that in several classical situations, the directional test reproduces exact results based on F-tests. These findings give a new interpretation of some classical results and support the use of directional testing in general models, where exact solutions are typically not available. The Canadian Journal of Statistics 47: 619–627; 2019 © 2019 Statistical Society of Canada

5.
The Poisson binomial distribution of the number of successes in n independent trials with possibly different success probabilities p1, p2, ..., pn is frequently approximated by a Poisson distribution with parameter λ = p1 + p2 + ... + pn. LeCam's bound p1² + p2² + ... + pn² for the total variation distance between both distributions is particularly useful provided the success probabilities are small. The paper presents an improved version of LeCam's bound if a generalized d-dimensional Poisson binomial distribution is to be approximated by a compound Poisson distribution. Received: May 10, 2000; revised version: January 15, 2001
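The classical LeCam bound is easy to check numerically. The sketch below, using illustrative probabilities, builds the Poisson binomial pmf by convolution and verifies that the total variation distance to the matching Poisson law stays below p1² + ... + pn² (this is the classical one-dimensional bound, not the improved compound-Poisson version developed in the paper):

```python
import numpy as np
from math import exp, factorial

def poisson_binomial_pmf(probs, kmax):
    """PMF of the Poisson binomial distribution on {0, ..., kmax} via convolution."""
    pmf = np.zeros(kmax + 1)
    pmf[0] = 1.0
    for p in probs:
        # RHS is evaluated with the old pmf before assignment, so the
        # recursion new[k] = old[k]*(1-p) + old[k-1]*p is applied correctly.
        pmf[1:] = pmf[1:] * (1 - p) + pmf[:-1] * p
        pmf[0] *= 1 - p
    return pmf

def poisson_pmf(lam, kmax):
    return np.array([exp(-lam) * lam ** k / factorial(k) for k in range(kmax + 1)])

def tv_distance(p, q):
    """Total variation distance between two pmfs on a common support."""
    return 0.5 * np.abs(p - q).sum()

probs = [0.1, 0.2, 0.05]          # illustrative small success probabilities
kmax = 30                         # truncation; Poisson mass beyond is negligible here
tv = tv_distance(poisson_binomial_pmf(probs, kmax), poisson_pmf(sum(probs), kmax))
lecam = sum(p ** 2 for p in probs)  # LeCam's bound: p1^2 + ... + pn^2
```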

6.
We discuss the problem of selecting among alternative parametric models within the Bayesian framework. For model selection problems involving non-nested models, the common objective choice of a prior on the model space is the uniform distribution. The same applies to situations where the models are nested. It is our contention that assigning equal prior probability to each model is over-simplistic. Consequently, we introduce a novel approach to objectively determine model prior probabilities, conditionally on the choice of priors for the parameters of the models. The idea is based on the notion of the worth of having each model within the selection process. At the heart of the procedure is the measurement of this worth using the Kullback–Leibler divergence between densities from different models.

7.
The estimation of abundance from presence–absence data is an intriguing problem in applied statistics. The classical Poisson model makes strong independence and homogeneity assumptions and in practice generally underestimates the true abundance. A controversial ad hoc method based on negative-binomial counts (Am. Nat.) has been empirically successful but lacks theoretical justification. We first present an alternative estimator of abundance based on a paired negative binomial model that is consistent and asymptotically normally distributed. A quadruple negative binomial extension is also developed, which yields the previous ad hoc approach and resolves the controversy in the literature. We examine the performance of the estimators in a simulation study and estimate the abundance of 44 tree species in a permanent forest plot.
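For context, the classical Poisson model criticized above inverts occupancy into abundance through P(present) = 1 − exp(−λ). A minimal sketch of this baseline estimator (the function name is hypothetical; this is the estimator the paper improves upon, not the paper's paired negative binomial method):

```python
import numpy as np

def poisson_abundance(presence):
    """Classical Poisson estimator of total abundance from presence-absence data.

    Assumes counts in each quadrat are Poisson(lambda), so the occupancy
    probability is 1 - exp(-lambda); inverting gives lambda from the
    observed fraction of occupied quadrats.
    """
    presence = np.asarray(presence, dtype=bool)
    occ = presence.mean()
    if occ >= 1.0:
        raise ValueError("all quadrats occupied: lambda is not identifiable")
    lam_hat = -np.log(1.0 - occ)       # estimated mean count per quadrat
    return lam_hat * presence.size     # estimated total abundance
```

Because real counts are typically overdispersed relative to Poisson, this inversion understates λ, which matches the underestimation the abstract describes.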

8.
In this paper, we study an inference problem for a stochastic model where k deterministic Lotka–Volterra systems of ordinary differential equations (ODEs) are perturbed with k pairs of random errors. The k deterministic systems describe the ecological interaction between k predator–prey populations and depend on unknown parameters. We consider the testing problem concerning the homogeneity between the k pairs of interaction parameters of the ODEs. We assume that the k pairs of random errors are independent and that each pair follows correlated Ornstein–Uhlenbeck processes. Thus, we extend the stochastic model suggested in Froda and Colavita [2005. Estimating predator–prey systems via ordinary differential equations with closed orbits. Aust. N.Z. J. Stat. 2, 235–254] as well as in Froda and Nkurunziza [2007. Prediction of predator–prey populations modeled by perturbed ODE. J. Math. Biol. 54, 407–451], where k = 1. Under this statistical model, we propose a likelihood ratio test and study the asymptotic properties of this test. Finally, we highlight the performance of our method through some simulation studies.

9.
Liu and Singh (1993, 2006) introduced a depth-based d-variate extension of the nonparametric two-sample scale test of Siegel and Tukey (1960). Liu and Singh (2006) generalized this depth-based test for scale homogeneity of k ≥ 2 multivariate populations. Motivated by the work of Gastwirth (1965), we propose k-sample percentile modifications of Liu and Singh's proposals. The test statistic is shown to be asymptotically normal when k = 2, and compares favorably with Liu and Singh (2006) if the underlying distributions are either symmetric with light tails or asymmetric. In the case of the skewed distributions considered in this paper, the power of the proposed tests can attain twice the power of the Liu–Singh test for d ≥ 1. Finally, in the k-sample case, it is shown that the asymptotic distribution of the proposed percentile-modified Kruskal–Wallis type test is χ² with k − 1 degrees of freedom. Power properties of this k-sample test are similar to those of the proposed two-sample one. The Canadian Journal of Statistics 39: 356–369; 2011 © 2011 Statistical Society of Canada

10.
We propose a new summary statistic for inhomogeneous, intensity-reweighted moment stationary spatio-temporal point processes. The statistic is defined in terms of the n-point correlation functions of the point process, and it generalizes the J-function when stationarity is assumed. We show that our statistic can be represented in terms of the generating functional and that it is related to the spatio-temporal K-function. We further discuss its explicit form under some specific model assumptions and derive ratio-unbiased estimators. We finally illustrate the use of our statistic in practice. © 2014 Board of the Foundation of the Scandinavian Journal of Statistics

11.
Large O and small o approximations of the expected value of a class of functions (modified K-functional and Lipschitz class) of the normalized partial sums of dependent random variables by the expectation of the corresponding functions of infinitely divisible random variables have been established. As a special case, we have obtained rates of convergence to the Stable Limit Laws and to the Weak Laws of Large Numbers. The technique used is the conditional version of the operator method of Trotter and the Taylor expansion.

12.
CVX-based numerical algorithms are widely and freely available for solving convex optimization problems but their applications to solve optimal design problems are limited. Using the CVX programs in MATLAB, we demonstrate their utility and flexibility over traditional algorithms in statistics for finding different types of optimal approximate designs under a convex criterion for nonlinear models. They are generally fast and easy to implement for any model and any convex optimality criterion. We derive theoretical properties of the algorithms and use them to generate new A-, c-, D- and E-optimal designs for various nonlinear models, including multi-stage and multi-objective optimal designs. We report properties of the optimal designs and provide sample CVX program codes for some of our examples that users can amend to find tailored optimal designs for their problems. The Canadian Journal of Statistics 47: 374–391; 2019 © 2019 Statistical Society of Canada

13.
Let X1, X2, … be an independently and identically distributed sequence with E X1 = 0, E exp(tX1) < ∞ (t ≥ 0) and partial sums Sn = X1 + … + Xn. Consider the maximum increment D1(N, K) = max{Sn+K − Sn : 0 ≤ n ≤ N − K} of the sequence (Sn) in (0, N) over a time K = KN, 1 ≤ KN ≤ N. Under appropriate conditions on (KN) it is shown that in the case KN/log N → 0, but KN/(log N)^(1/2) → ∞, there exists a sequence (αN) such that K^(−1/2) D1(N, K) − αN converges to 0 w.p. 1. This result provides a small-increment analogue to the improved Erdős–Rényi-type laws stated by Csörgő and Steinebach (1981).
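The maximum increment D1(N, K) is straightforward to compute from the cumulative sums; a small illustrative sketch (not from the paper, function name hypothetical):

```python
import numpy as np

def max_increment(x, K):
    """D1(N, K) = max over 0 <= n <= N-K of (S_{n+K} - S_n),

    where S_0 = 0 and S_n = x_1 + ... + x_n are the partial sums of x.
    """
    x = np.asarray(x, dtype=float)
    S = np.concatenate(([0.0], np.cumsum(x)))  # S_0, S_1, ..., S_N
    return float(np.max(S[K:] - S[:-K]))       # vectorized sliding window
```

For example, with x = [1, −1, 2, 3, −2] and K = 2, the partial sums are [0, 1, 0, 2, 5, 3] and the largest increment over a window of length 2 is S4 − S2 = 5.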

14.
This paper considers a connected Markov chain for sampling 3 × 3 × K contingency tables having fixed two-dimensional marginal totals. Such sampling arises in performing various tests of the hypothesis of no three-factor interactions. A Markov chain algorithm is a valuable tool for evaluating P-values, especially for sparse datasets where large-sample theory does not work well. To construct a connected Markov chain over high-dimensional contingency tables with fixed marginals, algebraic algorithms have been proposed. These algorithms involve computations in polynomial rings using Gröbner bases. However, algorithms based on Gröbner bases do not incorporate symmetry among variables and are very time-consuming when the contingency tables are large. We construct a minimal basis for a connected Markov chain over 3 × 3 × K contingency tables. The minimal basis is unique. Some numerical examples illustrate the practicality of our algorithms.
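For intuition, the simplest Markov moves for three-way tables with fixed two-dimensional margins are the degree-8 "basic" moves sketched below; the index pairs and function name are illustrative, and such basic moves alone need not connect the fiber for 3 × 3 × K tables (which is why the paper constructs a larger minimal basis):

```python
import numpy as np

def apply_basic_move(table, i, j, k, sign=1):
    """Apply a degree-8 basic move on a 3-way table.

    Given a row pair i = (i1, i2), column pair j = (j1, j2) and layer
    pair k = (k1, k2), add +sign to four cells and -sign to the four
    complementary cells; every two-dimensional margin is preserved.
    """
    i1, i2 = i
    j1, j2 = j
    k1, k2 = k
    t = table.copy()
    for a, b, c in [(i1, j1, k1), (i1, j2, k2), (i2, j1, k2), (i2, j2, k1)]:
        t[a, b, c] += sign
    for a, b, c in [(i1, j1, k2), (i1, j2, k1), (i2, j1, k1), (i2, j2, k2)]:
        t[a, b, c] -= sign
    return t
```

A Markov chain sampler would repeatedly propose such moves (rejecting any that produce a negative cell); invariance of all three two-way margins is easy to verify directly.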

15.
The authors develop consistent nonparametric estimation techniques for the directional mixing density. Classical spherical harmonics are used to adapt Euclidean techniques to this directional environment. Minimax rates of convergence are obtained for rotationally invariant densities satisfying various smoothness conditions. It is found that the differences in smoothness between the Laplace, the Gaussian and the von Mises–Fisher distributions lead to contrasting inferential conclusions.

16.
In this paper, we consider non-parametric copula inference under bivariate censoring. Based on an estimator of the joint cumulative distribution function, we define a discrete and two smooth estimators of the copula. The construction that we propose is valid for a large range of estimators of the distribution function, and therefore for a large range of bivariate censoring frameworks. Under some conditions on the tails of the distributions, the weak convergence of the corresponding copula processes is obtained in ℓ∞([0,1]²). We derive the uniform convergence rates of the copula density estimators deduced from our smooth copula estimators. Investigation of the practical behaviour of these estimators is performed through a simulation study and two real data applications corresponding to different censoring settings. We use our non-parametric estimators to define a goodness-of-fit procedure for parametric copula models. A new bootstrap scheme is proposed to compute the critical values.

17.
Model summaries based on the ratio of fitted and null likelihoods have been proposed for generalised linear models, reducing to the familiar R2 coefficient of determination in the Gaussian model with identity link. In this note I show how to define the Cox–Snell and Nagelkerke summaries under arbitrary probability sampling designs, giving a design-consistent estimator of the population model summary. It is also shown that for logistic regression models under case–control sampling the usual Cox–Snell and Nagelkerke R2 are not design-consistent, but are systematically larger than would be obtained with a cross-sectional or cohort sample from the same population, even in settings where the weighted and unweighted logistic regression estimators are similar or identical. Implementation of the new estimators is straightforward and code is provided in R.
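Both summaries are simple functions of the fitted and null log-likelihoods; a minimal unweighted sketch in Python (the design-consistent versions proposed in the note additionally involve the sampling weights, which this sketch omits):

```python
from math import exp

def cox_snell_r2(ll_null, ll_fit, n):
    """Cox-Snell R^2 = 1 - (L0/L1)^(2/n), computed from log-likelihoods."""
    return 1.0 - exp(2.0 * (ll_null - ll_fit) / n)

def nagelkerke_r2(ll_null, ll_fit, n):
    """Nagelkerke R^2 rescales Cox-Snell by its maximum attainable
    value 1 - L0^(2/n), so a saturated fit reaches exactly 1."""
    max_r2 = 1.0 - exp(2.0 * ll_null / n)
    return cox_snell_r2(ll_null, ll_fit, n) / max_r2
```

For instance, with null log-likelihood −10, fitted log-likelihood −5 and n = 20, Cox–Snell gives 1 − e^(−0.5) ≈ 0.393 and Nagelkerke rescales this to ≈ 0.622.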

18.
In the existing statistical literature, the almost default choice for inference on inhomogeneous point processes is the best-known model class for such processes: reweighted second-order stationary processes. In particular, the K-function related to this type of inhomogeneity is presented as the inhomogeneous K-function. In the present paper, we put a number of inhomogeneous model classes (including the class of reweighted second-order stationary processes) into the common general framework of hidden second-order stationary processes, allowing for a transfer of statistical inference procedures for second-order stationary processes based on summary statistics to each of these model classes for inhomogeneous point processes. In particular, a general method to test the hypothesis that a given point pattern can be ascribed to a specific inhomogeneous model class is developed. Using the new theoretical framework, we reanalyse three inhomogeneous point patterns that have earlier been analysed in the statistical literature and show that the conclusions concerning an appropriate model class must be revised for some of the point patterns.

19.
In the 1960s, W. B. Rosen conducted some remarkable experiments on unidirectional fibrous composites that gave seminal insights into their failure under increasing tensile load. These insights led him to a grid system where the nodes in the grid were ineffective-length fibers, and to model the composite as what he called a chain-of-bundles model (i.e., a series system of parallel subsystems of horizontal nodes that he referred to as bundles), where the chain fails when one of the bundles fails. A load-sharing rule was used to quantify how the load is borne among the nodes. Here, Rosen's experiments are analyzed to determine the shape of a bundle. The analysis suggests that the bundles are not horizontal collections of nodes but rather small rectangular grid systems of nodes where the load-sharing between nodes is local in its form. In addition, a Gibbs measure representation for the joint distribution of binary random variables is given. This is used to show how the system reliability for a reliability structure can be obtained from the partition function for the Gibbs measure, and to illustrate how to assess the risk of failure of a bundle in the chain-of-bundles model.

20.
Abstract. The Hirsch index (commonly referred to as h-index) is a bibliometric indicator which is widely recognized as effective for measuring the scientific production of a scholar, since it summarizes size and impact of the research output. In a formal setting, the h-index is actually an empirical functional of the distribution of the citation counts received by the scholar. Under this approach, the asymptotic theory for the empirical h-index has been recently exploited when the citation counts follow a continuous distribution and, in particular, variance estimation has been considered for the Pareto-type and the Weibull-type distribution families. However, in bibliometric applications, citation counts display a distribution supported by the integers. Thus, we provide general properties for the empirical h-index under the small- and large-sample settings. In addition, we also introduce consistent non-parametric variance estimation, which allows for the implementation of large-sample set estimation for the theoretical h-index.
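The empirical h-index itself is a simple functional of the observed citation counts: the largest h such that at least h papers have at least h citations each. A minimal sketch:

```python
def h_index(citations):
    """Empirical h-index: the largest h such that at least h papers
    have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:   # the rank-th most cited paper still has >= rank citations
            h = rank
        else:
            break
    return h
```

For example, citation counts [10, 8, 5, 4, 3] give h = 4: four papers have at least 4 citations, but not five papers have at least 5.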


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号