Similar literature
20 similar documents found (search time: 46 ms)
1.

In this article, the residual Rényi entropy (RRE) of k-record values arising from an absolutely continuous distribution is considered. A representation of the RRE of k-records arising from an arbitrary distribution in terms of the RRE of k-record values arising from the uniform distribution is given. Some properties of the RRE of k-records are also discussed.

2.
This paper addresses the largest and the smallest observations at the times when a new record of either kind (upper or lower) occurs; these are called the current upper and lower records, respectively. We examine the entropy properties of these statistics, especially the difference between the entropies of the upper and lower bounds of the record coverage. Results are presented for some common parametric families of distributions. Several upper and lower bounds for the entropy of current records, in terms of the entropy of the parent distribution, are obtained. It is shown that the mutual information, as well as the Kullback–Leibler distance between the endpoints of the record coverage and the Kullback–Leibler distance between the data distribution and the current records, are all distribution-free.

3.
Record values can be viewed as order statistics from a sample whose size is determined by the values and the order of occurrence of the observations. They are closely connected with the occurrence times of a corresponding non-homogeneous Poisson process and with reliability theory. In this paper, the information properties of record values are presented based on Shannon information. Several upper and lower bounds for the entropy of record values are obtained. It is shown that the mutual information between record values is distribution-free and is computable using the distribution of the record values of a sequence from the uniform distribution.
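As a concrete illustration of the record-value construction used throughout these abstracts, here is a minimal sketch (function and variable names are ours, not the papers'). The empirical check uses the classical distribution-free fact that the expected number of upper records in n iid continuous draws equals the harmonic number 1 + 1/2 + … + 1/n, whatever the parent distribution:

```python
import random

def upper_records(seq):
    """Return the upper record values of a sequence:
    each observation strictly larger than all previous ones."""
    recs = []
    for x in seq:
        if not recs or x > recs[-1]:
            recs.append(x)
    return recs

# The *number* of records in n iid continuous draws is distribution-free:
# compare a uniform and an exponential parent against the harmonic number.
random.seed(0)
n, reps = 50, 2000
avg_unif = sum(len(upper_records([random.random() for _ in range(n)]))
               for _ in range(reps)) / reps
avg_expo = sum(len(upper_records([random.expovariate(1.0) for _ in range(n)]))
               for _ in range(reps)) / reps
harmonic = sum(1.0 / k for k in range(1, n + 1))

print(upper_records([3, 1, 4, 1, 5, 9, 2, 6]))  # [3, 4, 5, 9]
print(round(avg_unif, 2), round(avg_expo, 2), round(harmonic, 2))
```

Both Monte Carlo averages land close to the harmonic number, consistent with the distribution-free property.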

4.
According to the law of likelihood, statistical evidence for one (simple) hypothesis against another is measured by their likelihood ratio. When the experimenter can choose between two or more experiments (of approximately the same cost) to obtain data, they would want to know which experiment provides (on average) stronger true evidence for one hypothesis against another. In this article, after defining a pre-experimental criterion for the potential strength of evidence provided by an experiment, based on entropy distance, we compare the potential statistical evidence in lower record values with that in the same number of iid observations from the same parent distribution. We also establish a relation between Fisher information and Kullback–Leibler distance.
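The entropy-distance criterion mentioned above can be made concrete with the standard closed form of the Kullback–Leibler distance between two exponential laws; this is a textbook formula stated as an illustration, not the article's own derivation (names are ours):

```python
from math import log

def kl_exp(a, b):
    """Kullback–Leibler distance KL(Exp(a) || Exp(b)) for rate
    parameters a, b > 0: integrating p*log(p/q) for exponential
    densities gives log(a/b) + b/a - 1."""
    return log(a / b) + b / a - 1.0

print(kl_exp(2.0, 2.0))  # 0.0 — identical distributions carry no evidence
print(kl_exp(1.0, 2.0))  # ≈ 0.307
```

Larger KL distance between the two hypothesised distributions corresponds to stronger expected evidence from the experiment.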

5.
We compare the Fisher information (FI) contained in the first n record values and record times with the FI in n i.i.d. observations. General results are established for exponential-family and Weibull-type setups, and a summary table is provided listing several common distributions. We show that the FI in record data improves notably once the record times are included, often changing from being less than to being equal to or greater than the FI in a random sample of the same size. The behavior in the Weibull case is surprising: there, whether the records or the i.i.d. observations carry more FI depends on n. We propose new estimators based on record data. The results may be of interest in some life-testing situations. Supported in part by Fondo Nacional de Desarrollo Científico y Tecnológico (FONDECYT) grant # 1010222 of Chile.

6.
In recent years, several attempts have been made to characterize the generalized Pareto distribution (GPD) based on properties of order statistics and record values. In the present article, we give a characterization of the GPD based on spacings of generalized order statistics.

7.
Sometimes, in industrial quality-control experiments and destructive stress testing, only values smaller than all previous ones are observed. Here we consider nonparametric quantile estimation, both the ‘sample quantile function’ and kernel-type estimators, from such record-breaking data. For a single record-breaking sample, consistent estimation is not possible except in the extreme tails of the distribution. Hence replication is required, and for m such independent record-breaking samples the quantile estimators are shown to be strongly consistent and asymptotically normal as m → ∞. Also, for small m, the mean-squared errors, biases and smoothing parameters (for the smoothed estimators) are investigated through computer simulations.

8.
We consider the problem of estimating the stress-strength reliability when the available data are in the form of record values. One-parameter and two-parameter exponential distributions are considered. For the two-parameter exponential distributions, we consider both the case where the location parameter is common and the case where the scale parameter is common. The maximum likelihood estimators and the associated confidence intervals are derived.
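A minimal sketch of the one-parameter exponential case, built only from two standard facts (not the paper's own derivation, and all names are ours): the joint likelihood of the first m upper records from an Exp(rate λ) law is λ^m·exp(−λ·r_m), so the rate MLE is m/r_m; and for independent exponentials, R = P(stress < strength) = λ_stress/(λ_stress + λ_strength):

```python
def rate_mle_from_records(records):
    """MLE of the exponential rate from the first m upper record
    values r_1 < ... < r_m: the likelihood lam**m * exp(-lam * r_m)
    is maximised at lam = m / r_m (a standard result)."""
    m = len(records)
    return m / records[-1]

def stress_strength(stress_records, strength_records):
    """Plug-in estimate of R = P(stress < strength) for two
    independent exponentials, each observed through record values."""
    lx = rate_mle_from_records(stress_records)
    ly = rate_mle_from_records(strength_records)
    return lx / (lx + ly)

# Equal estimated rates (3 records, same maximal record) give R-hat = 1/2.
print(stress_strength([0.5, 1.0, 2.0], [0.4, 1.1, 2.0]))  # 0.5
```

The paper's two-parameter cases (common location, common scale) require more care than this plug-in sketch conveys.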

9.
Science looks to statistics for an objective measure of the strength of evidence in a given body of observations. In this paper, we use a criterion defined as a combination of the probabilities of weak and strong misleading evidence to compare record values alone with the same number of record values together with their inter-record times. A simulation is also presented to illustrate the results.

10.
In this article, we present several latest record schemes. Some basic properties of these new record schemes are discussed, and various comparisons with the classical records are shown.

11.
The record scheme is a method to reduce the total time on test of an experiment. In this scheme, items are sequentially observed and only values smaller than all previous ones are recorded. In some situations, when the experiments are time-consuming and items may be lost during the experiment, the record scheme dominates the usual random sample scheme [M. Doostparast and N. Balakrishnan, Optimal sample size for record data and associated cost analysis for exponential distribution, J. Statist. Comput. Simul. 80(12) (2010), pp. 1389–1401]. Estimation of the mean of an exponential distribution based on record data has been treated by Samaniego and Whitaker [On estimating population characteristics from record breaking observations I. Parametric results, Naval Res. Logist. Q. 33 (1986), pp. 531–543] and Doostparast [A note on estimation based on record data, Metrika 69 (2009), pp. 69–80]. The lognormal distribution is used in a wide range of applications when the multiplicative scale is appropriate and the log-transformation removes the skew and brings about symmetry of the data distribution [N.T. Longford, Inference with the lognormal distribution, J. Statist. Plann. Inference 139 (2009), pp. 2329–2340]. In this paper, point estimates as well as confidence intervals for the unknown parameters are obtained; these are also addressed from the Bayesian point of view. To assess the performance of the estimators obtained, a simulation study is conducted. For illustrative purposes, a real data set, due to Lawless [Statistical Models and Methods for Lifetime Data, 2nd ed., John Wiley & Sons, New York, 2003], is analysed using the proposed procedures.
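One reason the log-transformation interacts cleanly with record data: extracting lower records commutes with any strictly increasing transform, so the lower records of a lognormal sample are exactly the exponentials of the lower records of the underlying normal data. A minimal check (names are ours, not the paper's):

```python
import math
import random

def lower_records(seq):
    """Keep only observations strictly smaller than all previous ones."""
    recs = []
    for x in seq:
        if not recs or x < recs[-1]:
            recs.append(x)
    return recs

random.seed(1)
normal = [random.gauss(0.0, 1.0) for _ in range(200)]
lognormal = [math.exp(z) for z in normal]

# Record extraction commutes with the strictly increasing map exp(),
# so the log-transform may be applied before or after recording minima.
a = [math.exp(r) for r in lower_records(normal)]
b = lower_records(lognormal)
print(len(a) == len(b) and all(abs(x - y) < 1e-12 for x, y in zip(a, b)))
```

This is why inference for the lognormal based on record minima can be carried out on the log scale, where the normal theory applies.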

12.
The question of how much information is contained in an ordered observation was studied by Tukey (1964) in terms of a linear sensitivity measure. This paper deals with the exact Fisher information for censored data. The concept of the hazard rate function is extended, and some fundamental moment relations are established between these functions and the score functions. Some new moment equalities are obtained for the normal and gamma distributions.

13.
In this paper, we consider the prediction problem of the future nth record value based on the first m (m < n) observed record values from a one-parameter exponential distribution. We introduce four procedures for obtaining prediction intervals for the nth record value. The performance of the resulting intervals is assessed through numerical and simulation studies. In these studies, we provide the means and standard errors of the lower limits, upper limits and lengths of the prediction intervals. Further, we check the validity of these intervals based on some point predictors.

14.
An Opial-type inequality is applied to obtain relations for expectations of functions of m-generalized order statistics (m-gOSs), their distribution functions, as well as moment-generating functions. Respective inequalities for common order statistics and record values are contained as particular cases.

15.
Keeping scores     
Consider a sports competition in which participants alternately perform a scored task (such as the distance a discus is thrown), and a list of the top m scores is updated throughout. We consider the average number and distribution of records among the top m throughout and at the end of the competition. We also answer questions concerning the number of times the list changes, when each change occurs, and the waiting times between changes. We touch on results concerning changes in l-records and the values within the list.
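The top-m list dynamics described above can be replayed directly; the following sketch counts how many performances change the running list (an illustration in our own notation, not the paper's):

```python
import heapq

def top_m_changes(scores, m):
    """Replay a competition and count the performances that change
    the running top-m list: a score enters while the list is not yet
    full, or when it strictly beats the current m-th best score."""
    best = []          # min-heap holding the current top-m scores
    changes = 0
    for s in scores:
        if len(best) < m:
            heapq.heappush(best, s)
            changes += 1
        elif s > best[0]:
            heapq.heapreplace(best, s)
            changes += 1
    return changes, sorted(best, reverse=True)

# Five throws, top-2 list: 3 enters, 1 enters, 4 enters, 5 enters,
# 2 fails to beat the second-best (4) — four changes in all.
print(top_m_changes([3, 1, 4, 5, 2], 2))  # (4, [5, 4])
```

With m = 1 this reduces to counting the classical upper records of the score sequence.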

16.
In this paper, we study the Kullback–Leibler (KL) information of a censored variable, which we simply call the censored KL information. The censored KL information is shown to have the necessary monotonicity property in addition to the inherent properties of nonnegativity and characterization. We also present a representation of the censored KL information in terms of the relative risk and study its relation with the Fisher information in censored data. Finally, we evaluate the estimated censored KL information as a goodness-of-fit test statistic.

17.
Prediction of records plays an important role in many applications, such as meteorology, hydrology, industrial stress testing and athletic events. In this paper, based on the observed current records of an iid sequence drawn from an arbitrary unknown distribution, we develop distribution-free prediction intervals as well as prediction upper and lower bounds for current records from another iid sequence. We also present sharp upper bounds for the expected lengths of the resulting prediction intervals. Numerical computations of the coverage probabilities are presented for choosing appropriate limits of the prediction intervals.

18.
Estimation of the mean of an exponential distribution based on record data has been treated by Samaniego and Whitaker [F.J. Samaniego and L.R. Whitaker, On estimating population characteristics from record breaking observations I. Parametric results, Naval Res. Logist. Quart. 33 (1986), pp. 531–543] and Doostparast [M. Doostparast, A note on estimation based on record data, Metrika 69 (2009), pp. 69–80]. When a random sample Y1, …, Yn is examined sequentially and successive minimum values are recorded, Samaniego and Whitaker obtained a maximum likelihood estimator of the mean of the population and showed its convergence in probability. We establish here its convergence in mean square error, which is stronger than convergence in probability. Next, we discuss the optimal sample size for estimating the mean based on a criterion involving a cost function as well as the Fisher information based on records arising from a random sample. Finally, a comparison between complete data and record data is carried out, and some special cases are discussed in detail.
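The claim that mean-square convergence is stronger than convergence in probability is a one-line consequence of Chebyshev's (Markov's) inequality; the following standard derivation is included only to make that step explicit:

```latex
% For any fixed eps > 0, Markov's inequality applied to the squared error:
P\bigl(|\hat\theta_n - \theta| > \varepsilon\bigr)
  \;\le\; \frac{E\bigl[(\hat\theta_n - \theta)^2\bigr]}{\varepsilon^2}
  \;\longrightarrow\; 0
\qquad\text{whenever}\qquad
E\bigl[(\hat\theta_n - \theta)^2\bigr] \to 0 .
```

Hence establishing convergence in mean square error immediately recovers the convergence in probability shown by Samaniego and Whitaker.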

19.
Doostparast and Balakrishnan (Pareto record-based analysis, Statistics, under review) recently developed optimal confidence intervals as well as uniformly most powerful tests for one- and two-sided hypotheses concerning shape and scale parameters, for the two-parameter Pareto distribution based on record data. In this paper, on the basis of record values and inter-record times from the two-parameter Pareto distribution, maximum-likelihood and Bayes estimators as well as credible regions are developed for the two parameters of the Pareto distribution. For illustrative purposes, a data set on annual wages of a sample of production-line workers in a large industrial firm is analysed using the proposed procedures.

20.
Despite its importance, little attention has been paid in the recent past to the modeling of time series data of a categorical nature. In this paper, we present a framework based on Pegram's [An autoregressive model for multilag Markov chains. Journal of Applied Probability 17, 350–362] operator, which was originally proposed only to construct discrete AR(p) processes. We extend Pegram's operator to accommodate categorical processes with ARMA representations. We observe that the concept of correlation is not always suitable for categorical data. As a sensible alternative, we use the concept of mutual information, and introduce auto-mutual information to define the time series process of categorical data. Some model selection and inferential aspects are also discussed. We implement the developed methodologies to analyze a time series data set on infant sleep status.
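For the AR(1) case, Pegram's operator mixes the previous state with a fresh draw from the stationary marginal; a minimal simulation sketch under that reading of the operator (parameter and function names are ours, and the paper's ARMA extension is not attempted here):

```python
import random

def pegram_ar1(categories, probs, phi, n, seed=42):
    """Simulate a categorical AR(1) series via Pegram's mixing
    operator: with probability phi keep the previous state,
    otherwise draw afresh from the stationary marginal (probs)."""
    rng = random.Random(seed)
    x = [rng.choices(categories, probs)[0]]
    for _ in range(n - 1):
        if rng.random() < phi:
            x.append(x[-1])          # persistence branch
        else:
            x.append(rng.choices(categories, probs)[0])  # innovation branch
    return x

# Illustrative categories loosely echoing the infant-sleep application.
series = pegram_ar1(["awake", "light", "deep"], [0.2, 0.5, 0.3], 0.7, 10)
print(series)
```

Larger phi produces longer runs of the same category, which is the dependence that auto-mutual information is designed to measure where autocorrelation is undefined.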
