Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
The popular empirical likelihood method not only has a convenient chi-square limiting distribution but is also Bartlett correctable, leading to high-order coverage precision of the resulting confidence regions. Meanwhile, it is one of many nonparametric likelihoods in the Cressie–Read power divergence family. The other likelihoods share many attractive properties but are not Bartlett correctable. In this paper, we develop a new technique to achieve the effect of being Bartlett correctable. Our technique is generally applicable to pivotal quantities with chi-square limiting distributions. Numerical experiments and an example reveal that the method is successful for several important nonparametric likelihoods.
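As a concrete illustration of the kind of pivotal quantity involved, the sketch below computes the standard empirical likelihood ratio statistic for a univariate mean and calibrates it against its chi-square limit; this is the classical construction, not the paper's correction technique, and the function names are ours.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_ratio_stat(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu.

    Solves the Lagrange-multiplier equation
        sum_i (x_i - mu) / (1 + lam * (x_i - mu)) = 0
    so that p_i = 1 / (n * (1 + lam * (x_i - mu))) are valid weights.
    """
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                    # mu outside the convex hull
    eps = 1e-10
    lo = -1.0 / z.max() + eps            # keep 1 + lam * z_i > 0 for all i
    hi = -1.0 / z.min() - eps
    g = lambda lam: np.sum(z / (1.0 + lam * z))   # monotone decreasing in lam
    lam = brentq(g, lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(0)
x = rng.exponential(size=50)
stat = el_ratio_stat(x, mu=1.0)
print(stat, stat < chi2.ppf(0.95, df=1))  # True: H0 not rejected at the 5% level
```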

2.
For many environmental processes, recent studies have shown that dependence strength decreases as quantile levels increase. This implies that the popular max-stable models are inadequate to capture the rate of joint tail decay and to estimate joint extremal probabilities beyond observed levels. We here develop a more flexible modeling framework based on the class of max-infinitely divisible processes, which extend max-stable processes while retaining dependence properties that are natural for maxima. We propose two parametric constructions for max-infinitely divisible models, which relax the max-stability property but remain close to some popular max-stable models obtained as special cases. The first model considers maxima over a finite, random number of independent observations, while the second model generalizes the spectral representation of max-stable processes. Inference is performed using a pairwise likelihood. We illustrate the benefits of our new modeling framework on Dutch wind gust maxima calculated over different time units. Results strongly suggest that our proposed models outperform other natural models, such as the Student-t copula process and its max-stable limit, even for large block sizes.

3.
Gaussian mixture model-based clustering is now a standard tool to uncover hypothetical underlying structure in continuous data. However, many usual parsimonious models, despite either their appealing geometrical interpretation or their ability to deal with high-dimensional data, suffer from major drawbacks due to scale dependence or unsustainability of the constraints after projection. In this work we present a new family of parsimonious Gaussian models based on a variance-correlation decomposition of the covariance matrices. These new models are stable when projected into the canonical planes and so are faithfully representable in low dimension. They are also stable under changes to the measurement units of the data, and such a change does not affect model selection based on likelihood criteria. We highlight all these stability properties with a specific graphical representation of each model. A detailed Generalized EM (GEM) algorithm is also provided for inference in every model. On biological and geological data, we then compare our stable models to standard ones (geometrical models and factor-analyzer models), which underlines the benefits of unit-free models.

4.
The identification of the out-of-control variable, or variables, after a multivariate control chart signals has attracted many researchers in recent years. In this paper we propose a new method for approaching this problem based on principal components analysis. Theoretical control limits are derived and a detailed investigation of the properties and the limitations of the new method is given. A graphical technique which can be applied in some of these limiting situations is also provided.
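For orientation, here is a generic PCA-based monitoring sketch: a Hotelling T² statistic on the leading principal components together with a simple contribution decomposition that points at suspect variables. This is a standard diagnostic under assumed in-control reference data, not the authors' specific identification method or control limits.

```python
import numpy as np

def t2_contributions(X_ref, x_new, k):
    """Hotelling T^2 on the first k PCs, with per-variable contributions.

    A generic contribution-plot diagnostic: c_j = sum_i (t_i / lam_i) v_ji z_j,
    which satisfies sum_j c_j = T^2 exactly.
    """
    mu, sd = X_ref.mean(0), X_ref.std(0)
    Z = (X_ref - mu) / sd
    lam, V = np.linalg.eigh(np.cov(Z, rowvar=False))
    lam, V = lam[::-1][:k], V[:, ::-1][:, :k]     # top-k eigenpairs
    z = (x_new - mu) / sd
    t = V.T @ z                                    # PC scores of the new point
    t2 = float(np.sum(t**2 / lam))
    contrib = (V * z[:, None]) @ (t / lam)         # per-variable split of T^2
    return t2, contrib

rng = np.random.default_rng(8)
X_ref = rng.multivariate_normal([0, 0, 0],
                                [[1, .8, 0], [.8, 1, 0], [0, 0, 1]], 500)
x_new = np.array([0.0, 0.0, 4.0])                  # shift in variable 3
t2, contrib = t2_contributions(X_ref, x_new, k=3)
print(t2, contrib.argmax())                        # largest contribution: index 2
```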

5.
Competing risks data often occur in many medical follow-up studies. When the survival time is the outcome variable, the restricted mean survival time has a heuristic and clinically meaningful interpretation. In this article, we propose a class of regression models for the restricted mean survival time in the competing risks setting. We adopt a technique of pseudo-observations to develop estimating equation approaches for the model parameters and establish asymptotic properties of the resulting estimators. The finite-sample behavior of the proposed method is evaluated through simulation studies, and an application to the Women's Interagency HIV Study is provided.
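To make the pseudo-observation technique concrete, the sketch below computes leave-one-out pseudo-observations for the restricted mean survival time in the ordinary single-risk, right-censored case; these can then be regressed on covariates with standard estimating equations. The competing-risks version in the paper replaces the Kaplan–Meier functional accordingly; function names are ours.

```python
import numpy as np

def km_rmst(time, event, tau):
    """Restricted mean survival time up to tau from the Kaplan-Meier curve."""
    ts = np.unique(time[event == 1])
    s, grid, surv = 1.0, [0.0], [1.0]
    for t in ts[ts < tau]:
        at_risk = np.sum(time >= t)
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk
        grid.append(t)
        surv.append(s)
    grid.append(tau)
    # integrate the step function S(t) over [0, tau]
    return float(np.sum(np.array(surv) * np.diff(np.array(grid))))

def rmst_pseudo_obs(time, event, tau):
    """Leave-one-out pseudo-observations for the RMST functional."""
    n = len(time)
    full = km_rmst(time, event, tau)
    keep = np.ones(n, dtype=bool)
    po = np.empty(n)
    for i in range(n):
        keep[i] = False
        po[i] = n * full - (n - 1) * km_rmst(time[keep], event[keep], tau)
        keep[i] = True
    return po

rng = np.random.default_rng(1)
t_true = rng.exponential(2.0, 200)
c = rng.exponential(4.0, 200)
time, event = np.minimum(t_true, c), (t_true <= c).astype(int)
po = rmst_pseudo_obs(time, event, tau=3.0)
# regress po on covariates (e.g., OLS or GEE) to fit the RMST regression model
print(po.mean(), km_rmst(time, event, 3.0))   # the two agree approximately
```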

6.
This article approaches the problem of selecting significant principal components from a Bayesian model selection perspective. The resulting Bayes rule provides a simple graphical technique that can be used instead of (or together with) the popular scree plot to determine the number of significant components to retain. We study the theoretical properties of the new method and show, by examples and simulation, that it provides more clear-cut answers than the scree plot in many interesting situations.

7.
The starship procedure for transformations to normality described by Owen (1988) is implemented using the Johnson (1949) system of transformations, the Slifker–Shapiro (1980) technique for choosing a transformation, and the Shapiro–Wilk (1965) test for normality. This procedure was applied to obtain maximum likelihood point estimates of a mean, to obtain confidence intervals on a mean, and to estimate percentiles of a distribution based on a sample. Simulations of three distributions show that the starship has many desirable properties and compares very favorably with the bootstrap procedures of Efron (1987).
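A minimal sketch of the starship idea under simplifying assumptions of ours: grid-search the shape parameters of a Johnson SU transformation for the values that make the transformed sample look most normal according to the Shapiro–Wilk statistic. Since the W statistic is location/scale invariant, only the two shape-determining parameters need to be searched; the grid and ranges are illustrative, not Owen's exact implementation.

```python
import numpy as np
from scipy.stats import shapiro

def starship_su(x, n_grid=25):
    """Grid-search Johnson SU shape parameters by the Shapiro-Wilk W.

    z = gamma + delta * arcsinh((x - xi) / lam) should look N(0, 1); W is
    location/scale invariant, so only (xi, lam) affect it, and (gamma, delta)
    can be recovered afterwards by standardizing z.
    """
    best_w, best_par = -np.inf, None
    for xi in np.linspace(x.min() - x.std(), x.max(), n_grid):
        for lam in np.linspace(0.05 * x.std(), 3 * x.std(), n_grid):
            w, _ = shapiro(np.arcsinh((x - xi) / lam))
            if w > best_w:
                best_w, best_par = w, (xi, lam)
    return best_w, best_par

rng = np.random.default_rng(2)
x = rng.lognormal(size=100)
w, (xi, lam) = starship_su(x)
print(f"best W = {w:.4f} at xi = {xi:.2f}, lam = {lam:.2f}")
```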

8.
The properties of a parameterized form of generalized simulated annealing for function minimization are investigated by studying the properties of repeated minimizations from random starting points. This leads to the comparison of distributions of function values and of numbers of function evaluations. Parameter values which yield searches repeatedly terminating close to the global minimum may require unacceptably many function evaluations. If computational resources are a constraint, the total number of function evaluations may be limited. A sensible strategy is then to restart at a random point any search which terminates, until the total allowable number of function evaluations has been exhausted. The response is now the minimum of the function values obtained. This strategy yields a surprisingly stable solution for the parameter values of the simulated annealing algorithm. The algorithm can be further improved by segmentation in which each search is limited to a maximum number of evaluations, perhaps no more than a fifth of the total available. The main tool for interpreting the distributions of function values is the boxplot. The application is to the optimum design of experiments.
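The restart strategy is straightforward to express in code. The sketch below uses SciPy's dual_annealing as a stand-in for the parameterized annealer studied here: each search is capped at a fifth of the total evaluation budget, terminated searches are restarted from random points until the budget is exhausted, and the response is the minimum function value over all runs.

```python
import numpy as np
from scipy.optimize import dual_annealing

def segmented_annealing(func, bounds, total_evals=10_000, segments=5, seed=0):
    """Restart annealing runs, each capped at total_evals / segments."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    cap = total_evals // segments
    best_x, best_f, used = None, np.inf, 0
    while used < total_evals:
        x0 = rng.uniform(lo, hi)                 # random restart point
        budget = min(cap, total_evals - used)
        res = dual_annealing(func, bounds, x0=x0, maxfun=budget,
                             seed=int(rng.integers(2**31)))
        used += res.nfev
        if res.fun < best_f:                     # response: min over all runs
            best_x, best_f = res.x, res.fun
    return best_x, best_f, used

rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
x, f, used = segmented_annealing(rastrigin, [(-5.12, 5.12)] * 4)
print(f, used)
```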

9.
Stochastic gradient descent (SGD) provides a scalable way to compute parameter estimates in applications involving large-scale or streaming data. As an alternative version, averaged implicit SGD (AI-SGD) has been shown to be more stable and more efficient. Although the asymptotic properties of AI-SGD have been well established, statistical inference based on it, such as interval estimation, remains unexplored. The bootstrap method is not computationally feasible because it requires repeatedly resampling from the entire data set. In addition, the plug-in method is not applicable when there is no explicit covariance matrix formula. In this paper, we propose a scalable statistical inference procedure that can be used for conducting inference based on the AI-SGD estimator. The proposed procedure updates the AI-SGD estimate, as well as many randomly perturbed AI-SGD estimates, upon the arrival of each observation. We derive some large-sample theoretical properties of the proposed procedure and examine its performance via simulation studies.
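A minimal sketch of this style of online inference for linear regression, under assumptions of ours: the implicit SGD update has a closed form for squared-error loss, averaging the iterates gives the AI-SGD estimate, and B randomly perturbed copies (exponential observation weights here, one plausible perturbation scheme, not necessarily the authors') are updated in the same single pass to approximate the sampling variability.

```python
import numpy as np

def ai_sgd_perturbed(X, y, B=50, gamma0=1.0, alpha=0.6, seed=0):
    """One pass of averaged implicit SGD plus B perturbed replicates.

    Implicit update for squared loss (closed form):
        theta <- theta + w*g / (1 + w*g*||x||^2) * (y - x'theta) * x
    with learning rate g_t = gamma0 * t^(-alpha) and random weight w
    (w = 1 for the main track, w ~ Exp(1) for each perturbed track).
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    theta = np.zeros((B + 1, p))    # row 0: main track; rows 1..B: perturbed
    avg = np.zeros_like(theta)
    for t in range(n):
        x, yt = X[t], y[t]
        g = gamma0 * (t + 1) ** (-alpha)
        w = np.concatenate(([1.0], rng.exponential(size=B)))
        resid = yt - theta @ x
        step = (w * g) / (1.0 + w * g * (x @ x))
        theta += (step * resid)[:, None] * x
        avg += (theta - avg) / (t + 1)           # running average of iterates
    est, reps = avg[0], avg[1:]
    se = reps.std(axis=0)                        # perturbation-based std errors
    return est, se

rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=5000)
est, se = ai_sgd_perturbed(X, y)
print(est, est - 1.96 * se, est + 1.96 * se)     # estimate and rough 95% CIs
```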

10.
The stable distribution, in its many parametrizations, is central to many stochastic processes. Many random variables that occur in the study of Lévy processes are related to it. Good progress has been made recently for simulating various quantities related to the stable law. In this note, we survey exact random variate generators for these distributions. Many distributional identities are also reviewed.
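One such exact generator, sketched for the symmetric case β = 0, is the classical Chambers–Mallows–Stuck transformation (SciPy's levy_stable covers the general parametrizations):

```python
import numpy as np

def sym_stable(alpha, size, rng=None):
    """Symmetric alpha-stable variates via Chambers-Mallows-Stuck (beta = 0).

    X = sin(alpha*U) / cos(U)^(1/alpha)
        * (cos((1 - alpha)*U) / E)^((1 - alpha)/alpha),
    U ~ Uniform(-pi/2, pi/2), E ~ Exp(1).  alpha = 1 gives Cauchy,
    alpha = 2 gives N(0, 2).
    """
    rng = rng or np.random.default_rng()
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    e = rng.exponential(size=size)
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos((1 - alpha) * u) / e) ** ((1 - alpha) / alpha))

x = sym_stable(1.5, 100_000, np.random.default_rng(4))
print(np.median(np.abs(x)))   # heavy tails: compare quantiles, not moments
```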

11.
Efron's biased coin design is a well-known randomization technique that helps to neutralize selection bias in sequential clinical trials for comparing treatments, while keeping the experiment fairly balanced. Extensions of the biased coin design have been proposed by several researchers, who have focused mainly on the large-sample properties of their designs. We modify Efron's procedure by introducing an adjustable biased coin design, which is more flexible than his original design. We compare it with other existing coin designs; in terms of balance and lack of predictability, its performance for small samples appears in many cases to be an improvement with respect to the other sequential randomized allocation procedures.
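For concreteness, a sketch of Efron's original rule together with one illustrative "adjustable" variant in which the bias toward the under-represented arm grows with the current imbalance; the specific adjustment function below is our assumption, not the authors' design.

```python
import numpy as np

def biased_coin_trial(n, p=2/3, adjustable=False, seed=0):
    """Sequential allocation to arms A/B with a biased coin.

    If the trial is balanced, toss a fair coin; otherwise favour the
    under-represented arm with probability p (Efron's rule).  The adjustable
    variant lets the bias depend on the absolute imbalance d (toy choice).
    """
    rng = np.random.default_rng(seed)
    d, alloc = 0, []                    # d = N_A - N_B
    for _ in range(n):
        if d == 0:
            prob_a = 0.5
        else:
            bias = (abs(d) + 1) / (abs(d) + 2) if adjustable else p
            prob_a = 1 - bias if d > 0 else bias   # favour the lagging arm
        a = rng.random() < prob_a
        alloc.append("A" if a else "B")
        d += 1 if a else -1
    return alloc, d

alloc, final_imbalance = biased_coin_trial(100, adjustable=True)
print("".join(alloc[:20]), final_imbalance)
```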

12.
Kernel smoothing methods are widely used in many research areas in statistics. However, kernel estimators suffer from boundary effects when the support of the function to be estimated has finite endpoints. Boundary effects seriously affect the overall performance of the estimator. In this article, we propose a new method of boundary correction for univariate kernel density estimation. Our technique is based on a data transformation that depends on the point of estimation. The proposed method possesses desirable properties such as local adaptivity and non-negativity. Furthermore, unlike many other transformation methods available, the proposed estimator is easy to implement. In a Monte Carlo study, the accuracy of the proposed estimator is numerically analyzed and compared with existing methods of boundary correction. We find that it performs well for most shapes of densities. The theory behind the new methodology, along with the bias and variance of the proposed estimator, is presented. Results of a data analysis are also given.
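The paper's correction is a point-dependent data transformation; as a simple baseline of the kind such proposals are compared against, here is the classical reflection correction for a density supported on [0, ∞), which also preserves non-negativity.

```python
import numpy as np

def kde_reflect(x, grid, h):
    """Gaussian KDE on [0, inf) with boundary correction by reflection:
    the sample is mirrored across 0 so mass is not lost below the boundary."""
    k = lambda u: np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    g = grid[:, None]
    fhat = (k((g - x) / h) + k((g + x) / h)).mean(axis=1) / h
    return np.where(grid >= 0, fhat, 0.0)

rng = np.random.default_rng(5)
x = rng.exponential(size=500)
grid = np.linspace(0, 5, 101)
fhat = kde_reflect(x, grid, h=0.25)
print(fhat[0])   # near f(0) = 1; an uncorrected KDE would roughly halve this
```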

13.
It is often desirable to use Gray codes with properties different from those of the standard binary reflected code. Previously, only short codes (32- to 256-element) could be systematically generated or found with a given set of properties. This paper describes a technique whereby long codes (65000-element or longer) as well as short codes can be systematically generated with desired properties. The technique is described and demonstrated by generating codes of various lengths with the desired property of equal column change counts. Several examples of the use of the technique for generating codes with other desired properties are outlined.
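For orientation, the standard binary reflected code and its column change counts; the strongly unequal counts below are exactly what motivates searching for codes with (nearly) equal column change counts.

```python
def reflected_gray(bits):
    """Standard binary reflected Gray code: g(i) = i XOR (i >> 1)."""
    return [i ^ (i >> 1) for i in range(1 << bits)]

def column_change_counts(code, bits):
    """Number of transitions in each bit column over the full cycle."""
    counts = [0] * bits
    for a, b in zip(code, code[1:] + code[:1]):   # cyclic code
        changed = a ^ b                           # exactly one bit is set
        counts[changed.bit_length() - 1] += 1
    return counts

code = reflected_gray(5)
print(column_change_counts(code, 5))   # [16, 8, 4, 2, 2]: far from balanced
```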

14.
Analysis of randomly censored lifetime data along with some related stochastic covariables is of great importance in many applied sciences. The parametric estimation technique commonly used under this set-up is based on the efficient but non-robust likelihood approach. In this paper, we propose a robust parametric estimator for censored data with stochastic covariates based on the minimum density power divergence approach. The resulting estimator also has competitive efficiency with respect to the maximum likelihood estimator under pure data. The strong robustness of the proposed estimator in the presence of outliers is examined and illustrated through an appropriate real data example and simulation studies. Further, the theoretical asymptotic properties of the proposed estimator are derived in terms of a general class of M-estimators based on the estimating equation.
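To fix ideas, here is the minimum density power divergence estimator in its simplest setting, a normal model without the censoring or covariates treated in the paper; it already exhibits the outlier resistance described above, and the helper names are ours.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def mdpde_normal(x, a=0.5):
    """Minimum density power divergence estimate of (mu, sigma), tuning a > 0.

    Objective (Basu et al. 1998):
        H(theta) = Int f^(1+a) - (1 + 1/a) * mean(f(x_i)^a),
    with Int f^(1+a) = (2*pi*sigma^2)^(-a/2) / sqrt(1 + a) for the normal.
    a -> 0 recovers maximum likelihood; larger a is more robust.
    """
    def obj(theta):
        mu, log_s = theta
        s = np.exp(log_s)                         # keep sigma positive
        integral = (2 * np.pi * s**2) ** (-a / 2) / np.sqrt(1 + a)
        return integral - (1 + 1 / a) * np.mean(norm.pdf(x, mu, s) ** a)
    res = minimize(obj, x0=[np.median(x), np.log(x.std())],
                   method="Nelder-Mead")
    mu, log_s = res.x
    return mu, np.exp(log_s)

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0, 1, 95), rng.normal(10, 1, 5)])  # 5% outliers
print(mdpde_normal(x), (x.mean(), x.std()))   # robust fit vs. shifted moments
```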

15.
The performance of different information criteria – namely Akaike (AIC), corrected Akaike (AICC), Schwarz–Bayesian (SBC), and Hannan–Quinn (HQ) – is investigated so as to choose the optimal lag length in stable and unstable vector autoregressive (VAR) models, both when autoregressive conditional heteroscedasticity (ARCH) is present and when it is not. The investigation covers both large and small sample sizes. The Monte Carlo simulation results show that SBC has relatively better lag-choice accuracy in many situations. It is also generally the least sensitive to ARCH regardless of the stability or instability of the VAR model, especially in large samples. These appealing properties make SBC the optimal criterion for choosing lag length in many situations, especially for financial data, which are usually characterized by occasional periods of high volatility. SBC also has the best forecasting abilities in the majority of situations in which we vary sample size, stability, variance structure (ARCH or not), and forecast horizon (one period or five). Frequently, AICC also has good lag-choosing and forecasting properties. However, when ARCH is present, the five-period forecast performance of all criteria worsens in all situations.
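The criteria themselves are simple functions of the residual covariance of an OLS-fitted VAR. A self-contained sketch using the standard textbook formulas (statsmodels' VAR.select_order offers the same comparison):

```python
import numpy as np

def var_lag_select(Y, max_p):
    """Choose the VAR lag length with AIC / SBC / HQ from OLS residuals.

    Y: (T, K) array.  Each criterion is ln|Sigma_hat| plus its penalty;
    the intercept is fitted but left out of the penalty (common to all p).
    """
    T_all, K = Y.shape
    T = T_all - max_p                    # same effective sample for every p
    scores = {"AIC": [], "SBC": [], "HQ": []}
    for p in range(1, max_p + 1):
        Z = np.hstack([np.ones((T, 1))] +
                      [Y[max_p - l : T_all - l] for l in range(1, p + 1)])
        B, *_ = np.linalg.lstsq(Z, Y[max_p:], rcond=None)
        resid = Y[max_p:] - Z @ B
        logdet = np.linalg.slogdet(resid.T @ resid / T)[1]
        n_par = p * K * K                # coefficients entering the penalty
        scores["AIC"].append(logdet + 2 * n_par / T)
        scores["SBC"].append(logdet + n_par * np.log(T) / T)
        scores["HQ"].append(logdet + 2 * n_par * np.log(np.log(T)) / T)
    return {c: int(np.argmin(v)) + 1 for c, v in scores.items()}

rng = np.random.default_rng(7)
Y = np.zeros((500, 2))
for t in range(1, 500):                  # a simple stable VAR(1)
    Y[t] = np.array([[0.5, 0.1], [0.0, 0.4]]) @ Y[t - 1] + rng.normal(size=2)
print(var_lag_select(Y, max_p=6))        # SBC typically picks p = 1
```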

16.
This article studies the problem of model identification and estimation for a stable autoregressive process observed in a symmetric stable noise environment. A new tool called the partial auto-covariation function is introduced to identify the stable autoregressive signals. The signal and noise parameters are estimated using a modified version of the generalized Yule–Walker method and the method of moments. The proposed methods are illustrated on data simulated from autoregressive signals with symmetric stable innovations. The new technique is applied to analyze the time series of sea surface temperature anomalies and is compared with its Gaussian counterpart.
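For comparison, the Gaussian counterpart mentioned at the end is the classical Yule–Walker fit, sketched below; the paper's modification replaces the autocovariances with auto-covariations to cope with the infinite variance of stable innovations.

```python
import numpy as np
from scipy.linalg import toeplitz, solve

def yule_walker(x, p):
    """Classical Yule-Walker AR(p) fit from sample autocovariances."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    r = np.array([x[: n - k] @ x[k:] / n for k in range(p + 1)])
    phi = solve(toeplitz(r[:p]), r[1 : p + 1])   # solve R * phi = r
    sigma2 = r[0] - phi @ r[1 : p + 1]           # innovation variance
    return phi, sigma2

rng = np.random.default_rng(10)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()
phi, s2 = yule_walker(x, 2)
print(np.round(phi, 2), round(s2, 2))            # approx [0.6, -0.3], 1.0
```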

17.
The quantification of peptides by matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry coupled with stable isotope standards has been used to quantify native peptides under many experimental conditions. This approach has difficulty quantifying samples containing peptides whose ion currents fall in overlapping (convolved) spectra. In a previous article we proposed a reparametrized Gaussian mixture model (GMM), based on the known characteristics of the peptides, that can also accommodate overlapping spectra. We demonstrated the application of our model in a series of single and overlapping peptide quantification experiments. Here, we focus solely on studying the properties of our approach, examine the characteristics of the GMM approach for convolved peptides using simulated spectra, and provide a method for simulating these spectra.

18.
One of the standard variable selection procedures in multiple linear regression is to use a penalisation technique in least-squares (LS) analysis. In this setting, many different types of penalties have been introduced to achieve variable selection. It is well known that LS analysis is sensitive to outliers, and consequently outliers can present serious problems for the classical variable selection procedures. Since rank-based procedures have desirable robustness properties compared to LS procedures, we propose a rank-based adaptive lasso-type penalised regression estimator and a corresponding variable selection procedure for linear regression models. The proposed estimator and variable selection procedure are robust against outliers in both response and predictor space. Furthermore, since rank regression can yield unstable estimators in the presence of multicollinearity, in order to provide inference that is robust against multicollinearity, we adjust the penalty term in the adaptive lasso function by incorporating the standard errors of the rank estimator. The theoretical properties of the proposed procedures are established and their performances are investigated by means of simulations. Finally, the estimator and variable selection procedure are applied to the Plasma Beta-Carotene Level data set.
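The adaptive lasso step itself reduces to an ordinary lasso after rescaling the design columns by the initial-estimate weights. The sketch below uses an OLS initial fit for brevity; the procedure above would plug in the robust rank-based estimator and adjust the penalty by its standard errors.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def adaptive_lasso(X, y, lam=0.05, gamma=1.0):
    """Adaptive lasso via column rescaling.

    The weighted penalty sum_j w_j |b_j| with w_j = 1/|b0_j|^gamma becomes a
    plain L1 penalty after substituting X_j -> X_j / w_j.  Here b0 is OLS;
    a rank-based initial fit would make the weights outlier-robust.
    """
    b0 = LinearRegression().fit(X, y).coef_
    w = 1.0 / (np.abs(b0) ** gamma + 1e-8)
    fit = Lasso(alpha=lam).fit(X / w, y)
    return fit.coef_ / w

rng = np.random.default_rng(9)
X = rng.normal(size=(200, 8))
y = X[:, 0] * 2.0 - X[:, 1] + rng.normal(size=200)
print(np.round(adaptive_lasso(X, y), 2))   # zeros outside the first two slots
```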

19.
Estimation of the scale and index parameters of positive stable laws is considered. Maximum likelihood estimation is known to be efficient but very difficult to compute, while methods based on the sample characteristic function are computationally easy but have uncertain efficiency properties.
In this paper an estimation method is presented which is reasonably easy to compute and which has good efficiency properties, at least when the index α ∈ (0, 0.5). The method is based on an expression for the characteristic function of the logarithm of a positive stable random variable, and is derived by relating the stable estimation problem to that of location/scale estimation in extreme-value-distribution families, for which efficient methods are known.
The proposed method has efficiency tending to 1 as α → 0; on the other hand, efficiencies deteriorate for α > 0.5 and in fact appear to tend to 0 as α → 1.

20.
In many biomedical studies, covariates are subject to measurement error. Although it is well known that regression coefficient estimators can be substantially biased if measurement error is not accommodated, there has been little study of the effect of covariate measurement error on the estimation of the dependence between bivariate failure times. We show that the dependence parameter estimator in the Clayton–Oakes model can be considerably biased if the measurement error in the covariate is not accommodated. In contrast with the typical bias towards the null for marginal regression coefficients, the dependence parameter can be biased in either direction. We introduce a bias-reduction technique for the bivariate survival function in copula models, assuming an additive measurement error model and replicated measurements for the covariates, and we study the large- and small-sample properties of the proposed dependence parameter estimator.
