Similar Documents
20 similar documents found.
1.
2.
3.
4.
We study the gambler’s ruin problem with a general distribution of the payoffs in each game. Assuming the expected value of the payoff distribution is negative, so that eventual ruin occurs with probability 1, we are interested in the distribution of the duration to ruin, also known as the first-passage time distribution. A generating function for this distribution is obtained. Exact expressions for the expected value and variance of this distribution, as well as asymptotic expressions for the case of large initial wealth, are derived.
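As a concrete illustration of this setting, the sketch below simulates ruin durations under a simple two-point payoff distribution with negative mean and compares the empirical mean duration with the large-initial-wealth approximation x0/|mu|; the particular payoff distribution, the parameter values, and the use of plain Monte Carlo (rather than the paper's generating-function approach) are illustrative assumptions.

```python
import random

def ruin_time(x0, payoff, max_steps=10_000_000):
    """Simulate one gambler's path starting at wealth x0; return the number
    of games played until the wealth first drops to zero or below."""
    wealth, t = x0, 0
    while wealth > 0 and t < max_steps:
        wealth += payoff()
        t += 1
    return t

# Illustrative payoff distribution with negative mean: +1 w.p. 0.45, -1 w.p. 0.55.
def payoff():
    return 1 if random.random() < 0.45 else -1

random.seed(0)
x0 = 50
durations = [ruin_time(x0, payoff) for _ in range(2000)]
mean_T = sum(durations) / len(durations)
var_T = sum((d - mean_T) ** 2 for d in durations) / (len(durations) - 1)

mu = 0.45 * 1 + 0.55 * (-1)          # expected payoff per game (negative)
print(f"empirical E[T] = {mean_T:.1f}, sample Var[T] = {var_T:.1f}")
print(f"large-x0 approximation x0/|mu| = {x0 / abs(mu):.1f}")
```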

5.
6.
Khatri (1968) has extended Cochran’s theorem (1934) to matrices which are not necessarily symmetric. An alternative proof of the theorem is furnished here with some generalization with respect to one of Khatri’s conditions.
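For reference, a standard statement of the classical (symmetric-matrix) Cochran theorem that Khatri generalizes is sketched below; formulations vary between texts, so this should be read as a typical textbook version rather than the exact form used in either paper.

```latex
% Classical (symmetric) form of Cochran's theorem -- the starting point
% that Khatri (1968) extends to not-necessarily-symmetric matrices.
Let $X \sim N_n(0, I_n)$ and let $A_1, \dots, A_k$ be symmetric matrices with
$\operatorname{rank}(A_i) = r_i$ and $\sum_{i=1}^{k} A_i = I_n$, so that
\[
  X^{\top}X \;=\; \sum_{i=1}^{k} X^{\top} A_i X .
\]
Then the quadratic forms $X^{\top}A_1X, \dots, X^{\top}A_kX$ are independent with
$X^{\top}A_iX \sim \chi^2_{r_i}$ if and only if $\sum_{i=1}^{k} r_i = n$.
```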

7.

This article provides a concise overview of the main mathematical theory of Benford’s law in a form accessible to scientists and students who have had first courses in calculus and probability. In particular, one of the main objectives here is to aid researchers who are interested in applying Benford’s law, and need to understand general principles clarifying when to expect the appearance of Benford’s law in real-life data and when not to expect it. A second main target audience is students of statistics or mathematics, at all levels, who are curious about the mathematics underlying this surprising and robust phenomenon, and may wish to delve more deeply into the subject. This survey of the fundamental principles behind Benford’s law includes many basic examples and theorems, but does not include the proofs or the most general statements of the theorems; rather it provides precise references where both may be found.
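As a quick numerical illustration of the law surveyed above, the sketch below tabulates the leading digits of the first 1000 powers of 2 (a sequence known to follow Benford's law) against the Benford probabilities log10(1 + 1/d); the choice of sequence and sample size are arbitrary.

```python
import math
from collections import Counter

def leading_digit(x):
    """First significant decimal digit of a positive integer."""
    return int(str(x)[0])

# Empirical leading-digit frequencies for 2^1, ..., 2^1000.
counts = Counter(leading_digit(2 ** n) for n in range(1, 1001))

print("digit  empirical  Benford")
for d in range(1, 10):
    benford = math.log10(1 + 1 / d)
    print(f"  {d}      {counts[d] / 1000:.3f}    {benford:.3f}")
```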


8.
Repeated neuropsychological measurements, such as mini-mental state examination (MMSE) scores, are frequently used in Alzheimer’s disease (AD) research to study change in cognitive function of AD patients. A question of interest among dementia researchers is whether some AD patients exhibit transient “plateaus” of cognitive function in the course of the disease. We consider a statistical approach to this question, based on irregularly spaced repeated MMSE scores. We propose an algorithm that formalizes the measurement of an apparent cognitive plateau, and a procedure to evaluate the evidence for plateaus in AD by applying the algorithm both to the observed data and to data sets simulated from a linear mixed model. We apply these methods to repeated MMSE data from the Michigan Alzheimer’s Disease Research Center, finding a high rate of apparent plateaus and also a high rate of false discovery. Simulation studies are also conducted to assess the performance of the algorithm. In general, the false discovery rate of the algorithm is high unless the rate of decline is high compared with the measurement error of the cognitive test. It is argued that the results are not a problem of the specific algorithm chosen, but reflect a lack of information concerning the presence of plateaus in the data.
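Since the abstract does not spell out the plateau algorithm itself, the sketch below uses a deliberately simple stand-in rule (flag a plateau when some run of k consecutive visits stays within delta points) and mirrors the general procedure of applying the same rule to trajectories simulated from a linear model with measurement error; the rule, its parameters, and the simulation settings are assumptions for illustration only, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)

def has_plateau(scores, k=3, delta=2.0):
    """Stand-in plateau rule (an assumption, not the authors' algorithm):
    flag a plateau if some run of k consecutive visits stays within delta points."""
    for i in range(len(scores) - k + 1):
        window = scores[i:i + k]
        if window.max() - window.min() <= delta:
            return True
    return False

def simulate_patient(n_visits=6, slope=-2.0, sd_error=2.0):
    """Linear decline plus measurement error: a trajectory with no true plateau."""
    t = np.arange(n_visits)
    return 25 + slope * t + rng.normal(0, sd_error, n_visits)

# Rate at which the stand-in rule flags plateaus in purely linear-decline data.
n_patients = 5000
flagged = sum(has_plateau(simulate_patient()) for _ in range(n_patients))
print(f"apparent plateau rate under pure linear decline: {flagged / n_patients:.2f}")
```

The printed rate shows how often the stand-in rule flags a plateau in data with no true plateau, i.e., a false-discovery rate driven entirely by measurement error relative to the rate of decline.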

9.
Simple nonparametric estimates of the conditional distribution of a response variable given a covariate are often useful for data exploration purposes or to help with the specification or validation of a parametric or semi-parametric regression model. In this paper we propose such an estimator in the case where the response variable is interval-censored and the covariate is continuous. Our approach consists in adding weights that depend on the covariate value in the self-consistency equation proposed by Turnbull (J R Stat Soc Ser B 38:290–295, 1976), which results in an estimator that is no more difficult to implement than Turnbull’s estimator itself. We show the convergence of our algorithm and that our estimator reduces to the generalized Kaplan–Meier estimator (Beran, Nonparametric regression with randomly censored survival data, 1981) when the data are either complete or right-censored. We demonstrate by simulation that the estimator, bootstrap variance estimation and bandwidth selection (by rule of thumb or cross-validation) all perform well in finite samples. We illustrate the method by applying it to a dataset from a study on the incidence of HIV in a group of female sex workers from Kinshasa.
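To make the idea concrete, here is a minimal sketch of a kernel-weighted self-consistency iteration for interval-censored data in the spirit of the estimator described above; the Gaussian kernel weights, the bandwidth, the use of the observed interval endpoints as the support grid, and the toy data are simplifying assumptions rather than the authors' exact construction.

```python
import numpy as np

def weighted_turnbull(L, R, X, x0, bandwidth=1.0, n_iter=200):
    """Kernel-weighted self-consistency iteration for interval-censored data.
    L[i] < T_i <= R[i] is the observed interval and X[i] the covariate.
    Returns support points s and estimated conditional masses p(s | x0).
    (Sketch: the support is taken as the set of finite interval endpoints.)"""
    L, R, X = map(np.asarray, (L, R, X))
    s = np.unique(np.concatenate([L, R[np.isfinite(R)]]))          # support grid
    A = (s[None, :] > L[:, None]) & (s[None, :] <= R[:, None])     # alpha_{ij}
    w = np.exp(-0.5 * ((X - x0) / bandwidth) ** 2)                 # Gaussian kernel weights
    w = w / w.sum()
    p = np.full(len(s), 1 / len(s))                                # uniform start
    for _ in range(n_iter):
        denom = A @ p                                              # P(T_i in (L_i, R_i])
        p = (w[:, None] * A * p[None, :] / denom[:, None]).sum(axis=0)
        p = p / p.sum()
    return s, p

# Toy data: unit-interval censoring around a covariate-dependent mean.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, 200)
T = rng.normal(2 + 0.5 * X, 1.0)
L_obs, R_obs = np.floor(T), np.floor(T) + 1
s, p = weighted_turnbull(L_obs, R_obs, X, x0=5.0, bandwidth=1.5)
print("estimated conditional mean at x = 5:", float(np.sum(s * p)))
```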

10.

Technical services staff, along with programmers, supervisors, and frontline librarians, participate in all sorts of systems. Whether they recognize it or not, they are used to interacting with the world through the lens of the systems they work with. In this presentation from the North Carolina Serials Conference, Andreas Orphanides looks at some of the challenges of interacting with the world in terms of systems, discusses the human costs of failing to recognize the limitations of systems, and provides a framework for thinking about systems to help ensure that our systems respect the humanity of their human participants.

11.
Nonparametric estimates of the conditional distribution of a response variable given a covariate are important for data exploration purposes. In this article, we propose a nonparametric estimator of the conditional distribution function in the case where the response variable is subject to interval censoring and double truncation. Following the approach of Dehghan and Duchesne (2011), the proposed method adds weights that depend on the covariate value to the self-consistency equation of Turnbull (1976). We demonstrate by simulation that the estimator, bootstrap variance estimation and bandwidth selection all perform well in finite samples.

12.
Suppliers and retailers typically do not have identical incentives to avoid stockouts (lost sales due to the lack of product availability on the shelf). Thus, the supplier needs to monitor the retailer’s restocking efforts with the available data. We empirically assess stockout levels using only shipment and sales data that are readily available to the supplier. The model distinguishes between store stockouts (zero inventory in the store) and shelf stockouts (an empty shelf but some inventory in other parts of the store), thereby identifying the cause of a stockout as either a supply-chain issue or a restocking issue. We find that, as suspected by the supplier, the average stockout rate is much higher than published averages. In addition, stockout rates vary widely between stores. Moreover, almost all stockouts are shelf stockouts. The model identifies stores that may have restocking issues.

13.

In this work, we establish exponential inequalities for the Robbins–Monro algorithm with ψ-mixing variables, and we give a result on the rate of almost complete convergence.
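For context, the Robbins–Monro procedure referred to above searches for a root of a regression function M(theta) = E[Y(theta)] from noisy evaluations via the recursion theta_{n+1} = theta_n - a_n * Y_n. The sketch below shows this basic recursion with step sizes a_n = c/n; the target function, the i.i.d. Gaussian noise (the paper's inequalities concern ψ-mixing noise), and the constants are illustrative assumptions.

```python
import random

def robbins_monro(noisy_m, theta0=0.0, c=1.0, n_steps=20000):
    """Basic Robbins-Monro recursion for the root of M(theta) = E[Y | theta]:
    theta_{n+1} = theta_n - a_n * Y_n with step sizes a_n = c / n."""
    theta = theta0
    for n in range(1, n_steps + 1):
        a_n = c / n
        theta -= a_n * noisy_m(theta)
    return theta

# Illustrative target: M(theta) = theta - 3 observed with additive noise
# (i.i.d. here, whereas the paper studies psi-mixing noise).
random.seed(0)
noisy_m = lambda theta: (theta - 3.0) + random.gauss(0.0, 1.0)
print("Robbins-Monro estimate of the root:", robbins_monro(noisy_m))
```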

14.
We assume that $x_1, \dots, x_{n+r}$ can be treated as the sample values of a Markov chain of order $r$ or less (a chain in which the dependence extends over $r+1$ consecutive variables only), and consider the problem of testing the hypothesis $H_0$ that a chain of order $r-1$ is sufficient, using the tools of statistical information theory: $\phi$-divergences. More precisely, if $p_{a_1, \dots, a_r;\, a_{r+1}}$ denotes the transition probability for an $r$th-order Markov chain, the hypothesis to be tested is $H_0\colon p_{a_1, \dots, a_r;\, a_{r+1}} = p_{a_2, \dots, a_r;\, a_{r+1}}$ for all $a_i \in \{1, \dots, s\}$, $i = 1, \dots, r+1$. The tests given in this paper, for the first time, include as particular cases the likelihood ratio test and the test based on the chi-squared statistic.
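As a worked illustration of the hypothesis H_0 above, the sketch below computes the likelihood-ratio statistic for testing that order r-1 suffices against order r from raw transition counts; the likelihood ratio is only one member of the phi-divergence family discussed in the paper, and the chi-squared reference with s^(r-1)(s-1)^2 degrees of freedom is quoted from the classical Anderson-Goodman theory as an assumption rather than taken from the paper.

```python
import math
import random
from collections import Counter
from itertools import product

def lr_order_test(x, s, r):
    """Likelihood-ratio statistic for H0: the chain has order r-1, against order r.
    x is a sequence of states in {0, ..., s-1}."""
    # Transition counts for histories of length r and of length r-1.
    n_full = Counter(tuple(x[i:i + r + 1]) for i in range(len(x) - r))
    n_hist_full = Counter(tuple(x[i:i + r]) for i in range(len(x) - r))
    n_red = Counter(tuple(x[i + 1:i + r + 1]) for i in range(len(x) - r))
    n_hist_red = Counter(tuple(x[i + 1:i + r]) for i in range(len(x) - r))
    g2 = 0.0
    for hist in product(range(s), repeat=r):
        for a in range(s):
            n = n_full[hist + (a,)]
            if n == 0:
                continue
            p_full = n / n_hist_full[hist]
            p_red = n_red[hist[1:] + (a,)] / n_hist_red[hist[1:]]
            g2 += 2 * n * math.log(p_full / p_red)
    df = s ** (r - 1) * (s - 1) ** 2   # classical Anderson-Goodman degrees of freedom
    return g2, df

# Example: an i.i.d. binary sequence should not reject H0 (order 0 suffices vs order 1).
random.seed(0)
seq = [random.randrange(2) for _ in range(5000)]
g2, df = lr_order_test(seq, s=2, r=1)
print(f"G^2 = {g2:.2f} on {df} df")
```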

15.
In dose-response studies, Wadley’s problem occurs when the number of organisms that survive exposure to varying doses of a treatment is observed but the number initially present is unknown. The unknown number of organisms initially treated has traditionally been modelled by a Poisson distribution, resulting in a Poisson distribution for the number of survivors with parameter proportional to the probability of survival. Data in this setting are often overdispersed. This study revisits the beta-Poisson distribution and considers its effectiveness in modelling overdispersed data from a Wadley’s problem setting.
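A small simulation of the Wadley setting described above: the unobserved initial count is Poisson, survivors are binomial given that count, and letting the survival probability vary according to a beta distribution produces the overdispersion (variance exceeding the mean) that motivates a beta-Poisson model; the parameter values below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(42)

def survivors(n_reps, lam=100, a=2.0, b=8.0, heterogeneity=True):
    """Simulate survivor counts in Wadley's setting: the initial number of
    organisms N ~ Poisson(lam) is unobserved; survivors ~ Binomial(N, p)."""
    N = rng.poisson(lam, n_reps)
    if heterogeneity:
        p = rng.beta(a, b, n_reps)        # survival probability varies between replicates
    else:
        p = np.full(n_reps, a / (a + b))  # fixed p with the same mean
    return rng.binomial(N, p)

for het in (False, True):
    y = survivors(20000, heterogeneity=het)
    print(f"heterogeneous p = {het}:  mean = {y.mean():.1f}, variance = {y.var():.1f}")
# With fixed p the survivor count is Poisson(lam * p), so mean and variance agree;
# with beta-distributed p the variance clearly exceeds the mean (overdispersion).
```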

16.
Serials Review, 2012, 38(4), 219–226

This study uses systematic random sampling to compare the content of “Beall’s List of Predatory Journals and Publishers” and “Cabell’s Blacklist” of journals. The Beall’s List data were generated from its new site, which maintains a new list alongside the original list. The study found that 28.5% of the sampled Beall’s List publishers are out of business and that some Cabell’s Blacklist journals have ceased publication. The main takeaway is that, among the sampled Beall’s List publishers with a working journal-publishing website, only 31.8% can be found on Cabell’s Blacklist.

17.

In some clinical, environmental, or economic studies, researchers are interested in a semi-continuous outcome variable that takes the value zero with a discrete probability and has a continuous distribution for the non-zero values. Due to the measuring mechanism, it is not always possible to fully observe some outcomes, and only an upper bound is recorded. We call these data left-censored and observe only the maximum of the outcome and an independent censoring variable, together with an indicator. In this article, we introduce a mixture semi-parametric regression model. We consider a parametric model to investigate the influence of covariates on the discrete probability of the value zero. For the non-zero part of the outcome, a semi-parametric Cox regression model is used to study the conditional hazard function. The different parameters in this mixture model are estimated using a likelihood method, in which the infinite-dimensional baseline hazard function is estimated by a step function. We establish the identifiability of the model and the consistency of the estimators of its parameters. We study the finite-sample behaviour of the estimators through a simulation study and illustrate the model on a practical data example.

18.
19.
In sample surveys and many other areas of application, the ratio of variables is often of great importance. This often occurs when one variable is available at the population level while another variable of interest is available for sample data only. In this case, using the sample ratio, we can often gather valuable information on the variable of interest for the unsampled observations. In many other studies, the ratio itself is of interest, for example when estimating proportions from a random number of observations. In this note we compare three confidence intervals for the population ratio: a large-sample interval, a log-based version of the large-sample interval, and Fieller’s interval. This is done through data analysis and through a small simulation experiment. The Fieller method has often been proposed as a superior interval for small sample sizes. We show through a data example and simulation experiments that Fieller’s method often gives nonsensical and uninformative intervals when the observations are noisy relative to the mean of the data. The large-sample interval does not suffer in this way and thus can be a more reliable method for both small and large samples.
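To make the comparison concrete, the sketch below computes the three intervals for the ratio of two independent sample means: the delta-method large-sample interval, its log-based version (meaningful when both means are positive), and Fieller's interval obtained from the usual quadratic; independence of the samples, a normal critical value, and the toy data are simplifying assumptions.

```python
import math

Z = 1.96  # normal critical value for a 95% interval

def ratio_intervals(y, x):
    """Three 95% confidence intervals for the ratio mean(y) / mean(x),
    assuming the two samples are independent."""
    m, n = len(y), len(x)
    ybar, xbar = sum(y) / m, sum(x) / n
    vy = sum((v - ybar) ** 2 for v in y) / (m - 1) / m   # Var(ybar)
    vx = sum((v - xbar) ** 2 for v in x) / (n - 1) / n   # Var(xbar)
    R = ybar / xbar

    # 1. Delta-method ("large sample") interval.
    se = math.sqrt(vy + R ** 2 * vx) / abs(xbar)
    large = (R - Z * se, R + Z * se)

    # 2. Log-based version (requires positive means).
    se_log = math.sqrt(vy / ybar ** 2 + vx / xbar ** 2)
    logged = (R * math.exp(-Z * se_log), R * math.exp(Z * se_log))

    # 3. Fieller: solve (ybar - t*xbar)^2 <= Z^2 * (vy + t^2 * vx) for t.
    a = xbar ** 2 - Z ** 2 * vx
    b = -2 * xbar * ybar
    c = ybar ** 2 - Z ** 2 * vy
    disc = b ** 2 - 4 * a * c
    if a <= 0 or disc < 0:
        fieller = None        # unbounded or empty: the "nonsensical" case noted above
    else:
        fieller = ((-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a))
    return large, logged, fieller

y = [12.1, 9.8, 11.4, 10.9, 13.2, 10.1]
x = [5.0, 4.6, 5.9, 5.2, 4.8, 5.5]
print(ratio_intervals(y, x))
```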

20.
In 1945, George Alfred Barnard presented an unconditional exact test to compare two independent proportions. Critical regions for this test are, by construction, Barnard convex sets, a very useful property. Moreover, empirical findings suggest that Barnard’s test is generally the most powerful. Calculating critical regions for Barnard’s test is complicated because they are constructed iteratively until a test size is obtained that is as close as possible to the nominal significance level without exceeding it. In this article we present a non-inferiority extension of this leading test. The extension is constructed for any dissimilarity measure, and tables are provided for the difference between proportions. We also calculate the critical regions of the extended test for sample sizes less than or equal to 30, nominal significance levels 0.01, 0.025, 0.05, and 0.10, and non-inferiority margins 0.05, 0.10, 0.15, and 0.20, and we compute test sizes for these configurations. To perform the calculations, we wrote a program in the R environment.
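For readers unfamiliar with the baseline test being extended, here is a compact sketch of Barnard's unconditional exact test for a one-sided comparison of two independent proportions, using the pooled-score ordering that is one common implementation choice; the non-inferiority margin, the general dissimilarity measures, and the Barnard-convex construction of critical regions described above are not implemented in this sketch.

```python
import numpy as np
from scipy.stats import binom

def barnard_one_sided(x1, n1, x2, n2, grid=1000):
    """One-sided Barnard unconditional exact p-value for H0: p1 = p2 against
    H1: p2 > p1, ordering tables by a pooled score statistic and maximizing
    over the common nuisance probability on a grid."""
    def z_stat(a, b):
        p_hat = (a + b) / (n1 + n2)
        denom = np.sqrt(p_hat * (1 - p_hat) * (1 / n1 + 1 / n2))
        safe = np.where(denom > 0, denom, 1.0)
        return np.where(denom > 0, (b / n2 - a / n1) / safe, 0.0)

    A, B = np.meshgrid(np.arange(n1 + 1), np.arange(n2 + 1), indexing="ij")
    extreme = z_stat(A, B) >= z_stat(np.array(x1), np.array(x2)) - 1e-12

    p_values = []
    for p in np.linspace(1e-6, 1 - 1e-6, grid):
        table_prob = binom.pmf(A, n1, p) * binom.pmf(B, n2, p)
        p_values.append(table_prob[extreme].sum())
    return max(p_values)

# Example: 3/15 successes in group 1 versus 9/15 in group 2.
print(f"Barnard one-sided p-value: {barnard_one_sided(3, 15, 9, 15):.4f}")
```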
