Similar Articles
20 similar articles found.
1.
The point availability of a repairable system is the probability that the system is operating at a specified time. As time increases, the point availability converges to a positive constant called the limiting availability. Baxter and Li (1994a) developed a technique for constructing nonparametric confidence intervals for the point availability. However, nonparametric estimators of the limiting availability have not previously been studied in the literature. In this paper, we consider two separate cases: (1) the data are complete and (2) the data are subject to right censorship. For each case, a nonparametric confidence interval for the limiting availability is derived. Applications and simulation studies are presented.
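For the complete-data case, the limiting availability is E[uptime] / (E[uptime] + E[downtime]), which suggests a simple plug-in estimator from observed uptimes and downtimes. The sketch below is our own illustration (the function name and the delta-method interval are assumptions, not necessarily the paper's construction):

```python
import math
import random

def limiting_availability_ci(uptimes, downtimes):
    """Plug-in estimate and 95% delta-method CI for the limiting
    availability A = E[X] / (E[X] + E[Y]) from complete data.
    Illustrative sketch only; the paper's interval may differ."""
    n = len(uptimes)
    xbar = sum(uptimes) / n
    ybar = sum(downtimes) / n
    a_hat = xbar / (xbar + ybar)
    # Sample variances of uptimes and downtimes
    vx = sum((x - xbar) ** 2 for x in uptimes) / (n - 1)
    vy = sum((y - ybar) ** 2 for y in downtimes) / (n - 1)
    # Delta method: Var(a_hat) ~ (ybar^2 Vx + xbar^2 Vy) / (n (xbar+ybar)^4)
    se = math.sqrt((ybar ** 2 * vx + xbar ** 2 * vy) / (n * (xbar + ybar) ** 4))
    z = 1.96  # standard normal 97.5% quantile
    return a_hat, (a_hat - z * se, a_hat + z * se)

random.seed(1)
ups = [random.expovariate(1.0) for _ in range(200)]    # mean uptime 1.0
downs = [random.expovariate(4.0) for _ in range(200)]  # mean downtime 0.25
a_hat, (lo, hi) = limiting_availability_ci(ups, downs)
# True limiting availability here is 1.0 / 1.25 = 0.8
```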

2.
Sometimes, in industrial quality control experiments and destructive stress testing, only values smaller than all previous ones are observed. Here we consider nonparametric quantile estimation, both the ‘sample quantile function’ and kernel-type estimators, from such record-breaking data. For a single record-breaking sample, consistent estimation is not possible except in the extreme tails of the distribution. Hence replication is required, and for m such independent record-breaking samples the quantile estimators are shown to be strongly consistent and asymptotically normal as m → ∞. Also, for small m, the mean-squared errors, biases and smoothing parameters (for the smoothed estimators) are investigated through computer simulations.
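A toy illustration of why replication is needed: extracting the lower records from a sequence discards almost all of it, leaving only about log(n) values per sequence (function names are our own; this is not the paper's estimator, just the data structure it works with):

```python
import random

def record_minima(sample):
    """Keep only the lower records: values smaller than everything before them."""
    recs, cur = [], float("inf")
    for x in sample:
        if x < cur:
            recs.append(x)
            cur = x
    return recs

# A single sequence of length 30 yields only ~log(30) + gamma ~ 4 usable values,
# concentrated in the lower tail -- hence m independent sequences are required.
random.seed(2)
m = 500
counts = [len(record_minima([random.random() for _ in range(30)])) for _ in range(m)]
avg_records = sum(counts) / m
```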

3.
The plant ‘Heat Rate’ (HR) is a measure of the overall efficiency of a thermal power generating system. It depends on a large number of factors, some of which are non-measurable, while data relating to others are seldom available and recorded. However, coal quality (expressed in terms of ‘effective heat value’ (EHV) as kcal/kg) turns out to be one of the important factors that influences HR values, and data on EHV are available in any thermal power generating system. In the present work, we propose a prediction interval for HR values on the basis of EHV alone, keeping in mind that coal quality is one of the important (but not the only) factors that have a pronounced effect on the combustion process and hence on HR. The underlying theory borrows the idea of providing a simultaneous confidence interval (SCI) for the coefficients of a pth-order (p ≥ 1) autoregressive model (AR(p)). The theory has been substantiated with the help of real-life data from a power utility (after suitable base and scale transformation of the data to maintain the confidentiality of the classified document). Scope for formulating strategies to enhance the economy of a thermal power generating system has also been explored.

4.
This paper deals with the problem of estimating the multivariate version of the Conditional-Tail-Expectation, proposed by Di Bernardino et al. [(2013), ‘Plug-in Estimation of Level Sets in a Non-Compact Setting with Applications in Multivariable Risk Theory’, ESAIM: Probability and Statistics, (17), 236–256]. We propose a new nonparametric estimator for this multivariate risk-measure, which is essentially based on Kendall's process [Genest and Rivest, (1993), ‘Statistical Inference Procedures for Bivariate Archimedean Copulas’, Journal of American Statistical Association, 88(423), 1034–1043]. Using the central limit theorem for Kendall's process, proved by Barbe et al. [(1996), ‘On Kendall's Process’, Journal of Multivariate Analysis, 58(2), 197–229], we provide a functional central limit theorem for our estimator. We illustrate the practical properties of our nonparametric estimator on simulations and on two real test cases. We also propose a comparison study with the level sets-based estimator introduced in Di Bernardino et al. [(2013), ‘Plug-In Estimation of Level Sets in A Non-Compact Setting with Applications in Multivariable Risk Theory’, ESAIM: Probability and Statistics, (17), 236–256] and with (semi-)parametric approaches.

5.
This article provides alternative circular smoothing methods in nonparametric estimation of periodic functions. By treating the data as ‘circular’, we solve the ‘boundary issue’ in the nonparametric estimation treating the data as ‘linear’. By redefining the distance metric and signed distance, we modify many estimators used in the situations involving periodic patterns. In the perspective of ‘nonparametric estimation of periodic functions’, we present the examples in nonparametric estimation of (1) a periodic function, (2) multiple periodic functions, (3) an evolving function, (4) a periodically varying-coefficient model and (5) a generalized linear model with periodically varying coefficient. In the perspective of ‘circular statistics’, we provide alternative approaches to calculate the weighted average and evaluate the ‘linear/circular–linear/circular’ association and regression. Simulation studies and an empirical study of electricity price index have been conducted to illustrate and compare our methods with other methods in the literature.
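The core idea of treating periodic data as circular can be sketched with a Nadaraya–Watson smoother that replaces the linear distance by the shortest distance around the circle, so observations near the two ‘ends’ of the period become neighbours (a minimal sketch of the idea under our own parameterisation; the paper's estimators are more general):

```python
import math
import random

def circ_dist(t1, t2, period=1.0):
    """Shortest distance between two time points on a circle of given period."""
    d = abs(t1 - t2) % period
    return min(d, period - d)

def circular_nw(t0, ts, ys, h, period=1.0):
    """Nadaraya-Watson estimate of a periodic mean function using a Gaussian
    kernel in the circular distance, avoiding the 'linear' boundary issue."""
    ws = [math.exp(-0.5 * (circ_dist(t0, t, period) / h) ** 2) for t in ts]
    sw = sum(ws)
    return sum(w * y for w, y in zip(ws, ys)) / sw

random.seed(6)
ts = [i / 200 for i in ts] if False else [i / 200 for i in range(200)]
ys = [math.sin(2 * math.pi * t) + random.gauss(0, 0.1) for t in ts]
est_peak = circular_nw(0.25, ts, ys, h=0.03)   # true value sin(pi/2) = 1
est_bndry = circular_nw(0.0, ts, ys, h=0.03)   # boundary point, true value 0
```

At t = 0 the circular kernel borrows strength from observations near t = 0.99, which a linear smoother would ignore.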

6.
The two-component mixture cure rate model is popular in cure rate data analysis, with the proportional hazards and accelerated failure time (AFT) models being the major competitors for modelling the latency component. [Wang, L., Du, P., and Liang, H. (2012), ‘Two-Component Mixture Cure Rate Model with Spline Estimated Nonparametric Components’, Biometrics, 68, 726–735] first proposed a nonparametric mixture cure rate model where the latency component assumes proportional hazards with nonparametric covariate effects in the relative risk. Here we consider a mixture cure rate model where the latency component assumes AFTs with nonparametric covariate effects in the acceleration factor. Besides the more direct physical interpretation than the proportional hazards, our model has an additional scalar parameter which adds more complication to the computational algorithm as well as the asymptotic theory. We develop a penalised EM algorithm for estimation together with confidence intervals derived from the Louis formula. Asymptotic convergence rates of the parameter estimates are established. Simulations and the application to a melanoma study show the advantages of our new method.

7.
This paper presents a method of fitting factorial models to recidivism data consisting of the (possibly censored) time to ‘fail’ of individuals, in order to test for differences between groups. Here ‘failure’ means rearrest, reconviction or reincarceration, etc. A proportion P of the sample is assumed to be ‘susceptible’ to failure, i.e. to fail eventually, while the remaining 1-P are ‘immune’, and never fail. Thus failure may be described in two ways: by the probability P that an individual ever fails again (‘probability of recidivism’), and by the rate of failure Λ for the susceptibles. Related analyses have been proposed previously: this paper argues that a factorial approach, as opposed to regression approaches advocated previously, offers simplified analysis and interpretation of these kinds of data. The methods proposed, which are also applicable in medical statistics and reliability analyses, are demonstrated on data sets in which the factors are Parole Type (released to freedom or on parole), Age group (≤ 20 years, 20–40 years, > 40 years), and Marital Status. The outcome (failure) is a return to prison following first or second release.
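The split-population likelihood behind such models is easy to write down for the simplest case of an exponential failure rate for susceptibles. The following is our own minimal sketch (crude grid-search maximisation, invented parameter values; real analyses use Newton-type or EM fits and factorial structure on P and the rate):

```python
import math
import random

def split_pop_loglik(P, lam, times, failed):
    """Log-likelihood of a split-population model: a fraction P is susceptible
    with exponential failure rate lam, the rest are immune.
    Observed failure at t contributes  P * lam * exp(-lam t);
    a censored time t contributes      (1 - P) + P * exp(-lam t)."""
    ll = 0.0
    for t, d in zip(times, failed):
        if d:
            ll += math.log(P * lam) - lam * t
        else:
            ll += math.log((1 - P) + P * math.exp(-lam * t))
    return ll

# Simulate data with invented truth P = 0.6, lam = 1.0, censoring at t = 5
random.seed(3)
true_P, true_lam, cens = 0.6, 1.0, 5.0
times, failed = [], []
for _ in range(1000):
    if random.random() < true_P:
        t = random.expovariate(true_lam)
        times.append(min(t, cens)); failed.append(t <= cens)
    else:
        times.append(cens); failed.append(False)

# Crude grid-search MLE over (P, lam), for illustration only
grid = [(p / 20, l / 10) for p in range(2, 20) for l in range(2, 31)]
P_hat, lam_hat = max(grid, key=lambda g: split_pop_loglik(g[0], g[1], times, failed))
```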

8.
A model is introduced here for multivariate failure time data arising from heterogeneous populations. In particular, we consider a situation in which the failure times of individual subjects are often temporally clustered, so that many failures occur during a relatively short age interval. The clustering is modelled by assuming that the subjects can be divided into ‘internally homogenous’ latent classes, each such class being then described by a time-dependent frailty profile function. As an example, we reanalysed the dental caries data presented earlier in Härkänen et al. [Scand. J. Statist. 27 (2000) 577], as it turned out that our earlier model could not adequately describe the observed clustering.

9.
Cui, Ruifei, Groot, Perry, and Heskes, Tom, Statistics and Computing (2019) 29(2): 311–333

We consider the problem of causal structure learning from data with missing values, assumed to be drawn from a Gaussian copula model. First, we extend the ‘Rank PC’ algorithm, designed for Gaussian copula models with purely continuous data (so-called nonparanormal models), to incomplete data by applying rank correlation to pairwise complete observations and replacing the sample size with an effective sample size in the conditional independence tests to account for the information loss from missing values. When the data are missing completely at random (MCAR), we provide an error bound on the accuracy of ‘Rank PC’ and show its high-dimensional consistency. However, when the data are missing at random (MAR), ‘Rank PC’ fails dramatically. Therefore, we propose a Gibbs sampling procedure to draw correlation matrix samples from mixed data that still works correctly under MAR. These samples are translated into an average correlation matrix and an effective sample size, resulting in the ‘Copula PC’ algorithm for incomplete data. Simulation study shows that: (1) ‘Copula PC’ estimates a more accurate correlation matrix and causal structure than ‘Rank PC’ under MCAR and, even more so, under MAR and (2) the usage of the effective sample size significantly improves the performance of ‘Rank PC’ and ‘Copula PC.’ We illustrate our methods on two real-world datasets: riboflavin production data and chronic fatigue syndrome data.

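The first ingredient of the ‘Rank PC’ extension described above — rank correlation computed from pairwise complete observations, with the complete-pair count serving as an effective sample size — can be sketched as follows (our own minimal illustration with Spearman correlation; the actual algorithm converts rank correlations for use in Gaussian conditional-independence tests):

```python
def spearman(xs, ys):
    """Spearman rank correlation of two equal-length lists, assuming no ties
    (so the two rank vectors have identical variance)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    m = (n - 1) / 2  # mean of ranks 0..n-1
    cov = sum((a - m) * (b - m) for a, b in zip(rx, ry)) / n
    var = sum((a - m) ** 2 for a in rx) / n
    return cov / var

def pairwise_complete_corr(x, y):
    """Rank correlation from pairwise complete observations (None = missing),
    together with the number of complete pairs, used as an effective sample
    size in 'Rank PC'-style conditional-independence tests (sketch)."""
    pairs = [(a, b) for a, b in zip(x, y) if a is not None and b is not None]
    xs, ys = zip(*pairs)
    return spearman(list(xs), list(ys)), len(pairs)

x = list(range(30)); x[4] = None; x[17] = None
y = [i * i for i in range(30)]; y[8] = None
rho, n_eff = pairwise_complete_corr(x, y)  # monotone relation -> rho = 1
```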

10.
In this paper, we are interested in the estimation of the reliability parameter R = P(X > Y) where X, a component strength, and Y, a component stress, are independent power Lindley random variables. The point and interval estimation of R, based on maximum likelihood, nonparametric and parametric bootstrap methods, are developed. The performance of the point estimate and confidence interval of R under the considered estimation methods is studied through extensive simulation. A numerical example, based on real data, is presented to illustrate the proposed procedure.
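The nonparametric point estimate of R = P(X > Y) is the Mann–Whitney-type proportion of strength–stress pairs with X > Y, and a percentile bootstrap gives an interval. A minimal sketch (exponential data are used here purely for illustration; the paper works with power Lindley samples and also studies likelihood-based intervals):

```python
import random

def r_hat(xs, ys):
    """Nonparametric (Mann-Whitney type) estimate of R = P(X > Y)."""
    return sum(x > y for x in xs for y in ys) / (len(xs) * len(ys))

def bootstrap_ci(xs, ys, B=500, level=0.95, rng=random):
    """Percentile nonparametric-bootstrap CI for R (sketch)."""
    stats = []
    for _ in range(B):
        bx = [rng.choice(xs) for _ in xs]   # resample strengths
        by = [rng.choice(ys) for _ in ys]   # resample stresses
        stats.append(r_hat(bx, by))
    stats.sort()
    lo_i = int((1 - level) / 2 * B)
    return stats[lo_i], stats[B - 1 - lo_i]

random.seed(4)
xs = [random.expovariate(0.5) for _ in range(60)]  # strength, mean 2
ys = [random.expovariate(1.0) for _ in range(60)]  # stress, mean 1
R = r_hat(xs, ys)       # true R here is 1/1.5 = 2/3
lo, hi = bootstrap_ci(xs, ys)
```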

11.
A variant of a sexual Galton–Watson process is considered. At each generation the population is partitioned among n ‘hosts’ (population patches) and individual members mate at random only with others within the same host. This is appropriate for many macroparasite systems, and at low parasite loads it gives rise to a depressed rate of reproduction relative to an asexual system, due to the possibility that females are unmated. It is shown that stochasticity mitigates this effect, so that for small initial populations the probability of ultimate extinction (the complement of an ‘epidemic’) displays a trade-off, as a function of n, between the strength of the fluctuations which overcome this ‘mating’ probability and the probability of the subpopulation in one host being ‘rescued’ by that in another. Complementary approximations are developed for the extinction probability: an asymptotically exact approximation at large n, and for small n a short-time probability that is exact in the limit where the mean number of offspring per parent is large.
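A Monte Carlo caricature of such a process makes the mating effect concrete: a lone individual can never reproduce, so extinction is certain, while a larger founding population in a supercritical regime survives with positive probability. Everything below (sex ratio, offspring law, redistribution rule, population cap) is our own invented toy model, not the paper's process:

```python
import random

def extinction_prob(n_hosts, n_init, gens=20, reps=200, cap=500, seed=0):
    """Monte Carlo extinction probability for a toy two-sex branching process
    over n_hosts hosts: each individual is male/female with prob 1/2, and
    females reproduce only if at least one male shares their host.
    Populations exceeding `cap` are treated as surviving (supercritical)."""
    rng = random.Random(seed)
    ext = 0
    for _ in range(reps):
        hosts = [n_init] * n_hosts  # parasites per host
        for _ in range(gens):
            total = sum(hosts)
            if total == 0 or total > cap:
                break
            pool = 0
            for k in hosts:
                males = sum(rng.random() < 0.5 for _ in range(k))
                females = k - males
                if males > 0:  # unmated females produce nothing
                    for _ in range(females):
                        pool += rng.choice((0, 1, 2, 3, 4, 5, 6))  # mean 3
            # offspring redistribute at random among the hosts
            hosts = [0] * n_hosts
            for _ in range(pool):
                hosts[rng.randrange(n_hosts)] += 1
        if sum(hosts) == 0:
            ext += 1
    return ext / reps

p_single = extinction_prob(1, 1, reps=50)   # a lone individual cannot mate
p_many = extinction_prob(2, 10, reps=100)   # large founding population
```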

12.
This article reviews semiparametric estimators for limited dependent variable (LDV) models with endogenous regressors, where nonlinearity and nonseparability pose difficulties. We first introduce six main approaches in the linear equation system literature to handle endogenous regressors with linear projections: (i) ‘substitution’ replacing the endogenous regressors with their projected versions on the system exogenous regressors x, (ii) instrumental variable estimator (IVE) based on E{(error) × x} = 0, (iii) ‘model-projection’ turning the original model into a model in terms of only x-projected variables, (iv) ‘system reduced form (RF)’ finding RF parameters first and then the structural form (SF) parameters, (v) ‘artificial instrumental regressor’ using instruments as artificial regressors with zero coefficients, and (vi) ‘control function’ adding an extra term as a regressor to control for the endogeneity source. We then check if these approaches are applicable to LDV models using conditional mean/quantiles instead of linear projection. The six approaches provide a convenient forum on which semiparametric estimators in the literature can be categorized, although there are a few exceptions. The pros and cons of the approaches are discussed, and a small-scale simulation study is provided for some reviewed estimators.
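Approach (vi), the control function, is the easiest to demonstrate in the linear benchmark case: include the first-stage residual as an extra regressor, and the endogeneity bias of plain OLS disappears. This is our own minimal numerical illustration (invented data-generating design, linear rather than LDV):

```python
import random

def ols(X, y):
    """OLS coefficients via normal equations (tiny Gaussian elimination)."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):                       # forward elimination
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            for c in range(i, k):
                A[j][c] -= f * A[i][c]
            b[j] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):             # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

random.seed(5)
n = 5000
z = [random.gauss(0, 1) for _ in range(n)]                   # instrument
u = [random.gauss(0, 1) for _ in range(n)]                   # structural error
x = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]   # endogenous regressor
y = [1.0 + 2.0 * xi + ui for xi, ui in zip(x, u)]            # true slope 2.0

naive = ols([[1.0, xi] for xi in x], y)[1]                   # biased upward
g = ols([[1.0, zi] for zi in z], x)                          # first stage: x on z
v = [xi - g[0] - g[1] * zi for xi, zi in zip(x, z)]          # control function
cf = ols([[1.0, xi, vi] for xi, vi in zip(x, v)], y)[1]      # consistent
```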

13.
We consider nonparametric estimation problems in the presence of dependent data, notably nonparametric regression with random design and nonparametric density estimation. The proposed estimation procedure is based on a dimension reduction. The minimax optimal rate of convergence of the estimator is derived assuming a sufficiently weak dependence characterised by fast decreasing mixing coefficients. We illustrate these results by considering classical smoothness assumptions. However, the proposed estimator requires an optimal choice of a dimension parameter depending on certain characteristics of the function of interest, which are not known in practice. The main issue addressed in our work is an adaptive choice of this dimension parameter combining model selection and Lepski's method. It is inspired by the recent work of Goldenshluger and Lepski [(2011), ‘Bandwidth Selection in Kernel Density Estimation: Oracle Inequalities and Adaptive Minimax Optimality’, The Annals of Statistics, 39, 1608–1632]. We show that this data-driven estimator can attain the lower risk bound up to a constant provided a fast decay of the mixing coefficients.

14.
Pretest–posttest studies are an important and popular method for assessing the effectiveness of a treatment or an intervention in many scientific fields. While the treatment effect, measured as the difference between the two mean responses, is of primary interest, testing the difference of the two distribution functions for the treatment and the control groups is also an important problem. The Mann–Whitney test has been a standard tool for testing the difference of distribution functions with two independent samples. We develop empirical likelihood-based (EL) methods for the Mann–Whitney test to incorporate the two unique features of pretest–posttest studies: (i) the availability of baseline information for both groups; and (ii) the structure of the data with missing by design. Our proposed methods combine the standard Mann–Whitney test with the EL method of Huang, Qin and Follmann [(2008), ‘Empirical Likelihood-Based Estimation of the Treatment Effect in a Pretest–Posttest Study’, Journal of the American Statistical Association, 103(483), 1270–1280], the imputation-based empirical likelihood method of Chen, Wu and Thompson [(2015), ‘An Imputation-Based Empirical Likelihood Approach to Pretest–Posttest Studies’, The Canadian Journal of Statistics, accepted for publication], and the jackknife empirical likelihood method of Jing, Yuan and Zhou [(2009), ‘Jackknife Empirical Likelihood’, Journal of the American Statistical Association, 104, 1224–1232]. Theoretical results are presented and finite sample performances of proposed methods are evaluated through simulation studies.
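The standard Mann–Whitney building block that these EL methods extend can be sketched in a few lines, using the usual normal approximation without tie correction (our own minimal implementation, not the paper's EL procedure):

```python
import math
import random

def mann_whitney(xs, ys):
    """Mann-Whitney U statistic with the large-sample normal approximation
    (two-sided p-value, no tie correction)."""
    u = sum(x > y for x in xs for y in ys)
    n, m = len(xs), len(ys)
    mu = n * m / 2
    sd = math.sqrt(n * m * (n + m + 1) / 12)
    z = (u - mu) / sd
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u, z, p

random.seed(7)
xs = [random.gauss(1, 1) for _ in range(100)]  # shifted treatment sample
ys = [random.gauss(0, 1) for _ in range(100)]  # control sample
u, z, p = mann_whitney(xs, ys)
```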

15.
‘… if we are prepared to assume that the unknown density has k derivatives, then … the optimal mean integrated squared error is of order n^{-2k/(2k+1)} …’ The citation is from Silverman [(1986), Density Estimation for Statistics and Data Analysis, London: Chapman & Hall] and its assertion is based on a classical minimax lower bound which is a pillar of modern nonparametric statistics. This paper proposes a new minimax methodology that implies a faster decreasing minimax lower bound that is attainable by a data-driven estimator, and the same estimator is also minimax under the classical approach. The recommendation is to test performance of estimators via the new and classical minimax approaches.

16.
This article deals with the construction of an X̄ control chart using the Bayesian perspective. We obtain new control limits for the X̄ chart for exponentially distributed data-generating processes through the sequential use of Bayes’ theorem and credible intervals. Construction of the control chart is illustrated using a simulated data example. The performance of the proposed, standard, tolerance interval, exponential cumulative sum (CUSUM) and exponential exponentially weighted moving average (EWMA) control limits are examined and compared via a Monte Carlo simulation study. The proposed Bayesian control limits are found to perform better than standard, tolerance interval, exponential EWMA and exponential CUSUM control limits for exponentially distributed processes.
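The credible-interval idea can be sketched for exponential data with a conjugate gamma prior on the rate: the posterior is again gamma, and control limits for the process mean come from posterior quantiles. A sketch in the spirit of the abstract (prior parameters, the simulation-based quantiles, and the one-shot rather than sequential construction are our own simplifications):

```python
import random

def bayes_exponential_limits(data, a0=1.0, b0=1.0, level=0.99, draws=20000, seed=0):
    """Credible-interval 'control limits' for the mean of an exponential
    process: Gamma(a0, b0) prior on the rate lambda gives a
    Gamma(a0 + n, b0 + sum(x)) posterior; limits are posterior quantiles
    of the mean 1/lambda, obtained here by Monte Carlo."""
    rng = random.Random(seed)
    n, s = len(data), sum(data)
    # random.gammavariate takes (shape, scale); posterior rate has
    # shape a0 + n and rate b0 + s, i.e. scale 1 / (b0 + s)
    means = sorted(1.0 / rng.gammavariate(a0 + n, 1.0 / (b0 + s))
                   for _ in range(draws))
    lo_i = int((1 - level) / 2 * draws)
    return means[lo_i], means[draws - 1 - lo_i]

random.seed(8)
data = [random.expovariate(0.5) for _ in range(100)]  # in-control mean 2.0
xbar = sum(data) / len(data)
lcl, ucl = bayes_exponential_limits(data)
```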

17.
For a nonparametric regression model y = m(x) + e with n independent observations, we analyze a robust method of finding the root of m(x) based on M-estimation, first discussed by Härdle & Gasser (1984). It is shown here that the robustness properties (minimaxity and breakdown function) of such an estimate are quite analogous to those of an M-estimator in the simple location model, but the rate of convergence is somewhat limited due to the nonparametric nature of the problem.

18.

For monitoring systemic risk from regulators’ point of view, this article proposes a relative risk measure, which is sensitive to the market comovement. The asymptotic normality of a nonparametric estimator and its smoothed version is established when the observations are independent. To effectively construct an interval without complicated asymptotic variance estimation, a jackknife empirical likelihood inference procedure based on the smoothed nonparametric estimation is provided with a Wilks type of result in case of independent observations. When data follow from AR-GARCH models, the relative risk measure with respect to the errors becomes useful and so we propose a corresponding nonparametric estimator. A simulation study and real-life data analysis show that the proposed relative risk measure is useful in monitoring systemic risk.

19.
The Enigma was a cryptographic (enciphering) machine used by the German military during WWII. The German navy changed part of the Enigma keys every other day. One of the important cryptanalytic attacks against the naval usage was called Banburismus, a sequential Bayesian procedure (anticipating sequential analysis) which was used from the spring of 1941 until the middle of 1943. It was invented mainly by A. M. Turing and was perhaps the first important sequential Bayesian procedure; it is unnecessary to describe it here. Before Banburismus could be started on a given day it was necessary to identify which of nine ‘bigram’ (or ‘digraph’) tables was in use on that day. In Turing’s approach to this identification he had to estimate the probabilities of certain ‘trigraphs’. (These trigraphs were used, as described below, for determining the initial wheel settings of messages.) For estimating the probabilities, Turing invented an important special case of the nonparametric (non-hyperparametric) Empirical Bayes method independently of Herbert Robbins. The technique is the surprising form of Empirical Bayes in which a physical prior is assumed to exist but no approximate functional form is assumed for it.
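Turing's frequency-estimation idea survives today as the Good–Turing estimator: adjust the count of an item seen r times using how many items were seen r and r + 1 times, and assign the singletons' mass to unseen items. A minimal, unsmoothed sketch (real implementations smooth the frequency-of-frequencies first; the function name is our own):

```python
from collections import Counter

def good_turing_adjusted_counts(counts):
    """Simple Good-Turing: adjusted count r* = (r + 1) * N_{r+1} / N_r,
    where N_r is the number of distinct items seen exactly r times, and
    P0 = N_1 / N is the total probability mass given to unseen items."""
    freq_of_freq = Counter(counts.values())
    total = sum(counts.values())
    adjusted = {}
    for item, r in counts.items():
        if freq_of_freq.get(r + 1):
            adjusted[item] = (r + 1) * freq_of_freq[r + 1] / freq_of_freq[r]
        else:
            adjusted[item] = r  # no N_{r+1} to adjust with; keep the raw count
    p_unseen = freq_of_freq.get(1, 0) / total
    return adjusted, p_unseen

# Toy 'trigraph' counts: three singletons, one pair, one triple (8 tokens)
counts = {"the": 3, "of": 2, "a": 1, "b": 1, "c": 1}
adjusted, p_unseen = good_turing_adjusted_counts(counts)
```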

20.
Doubly truncated data appear in a number of applications, including astronomy and survival analysis. For doubly truncated data, the lifetime T is observable only when U ≤ T ≤ V, where U and V are the left-truncation and right-truncation times, respectively. In some situations, the lifetime T also suffers interval censoring. Using the EM algorithm of Turnbull [The empirical distribution function with arbitrarily grouped censored and truncated data, J. R. Stat. Soc. Ser. B 38 (1976), pp. 290–295] and the iterative convex minorant algorithm [P. Groeneboom and J.A. Wellner, Information Bounds and Nonparametric Maximum Likelihood Estimation, Birkhäuser, Basel, 1992], we study the performance of the nonparametric maximum-likelihood estimates (NPMLEs) of the distribution function of T. Simulation results indicate that the NPMLE performs adequately for the finite sample.
