Similar Documents (20 results)
1.
Pseudo maximum likelihood estimation (PML) for the Dirichlet-multinomial distribution is proposed and examined in this paper. The procedure is compared with the method of moments (MM) in terms of asymptotic relative efficiency (ARE) relative to the maximum likelihood estimate (ML). It is found that PML, requiring much less computational effort than ML and possessing considerably higher ARE than MM, constitutes a good compromise between ML and MM. PML is also found to have very high ARE when an estimate of the scale parameter in the Dirichlet-multinomial distribution is all that is needed.
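The abstract does not spell out the PML construction; a common variant fixes the category proportions at their moment estimates and maximizes the Dirichlet-multinomial likelihood over the scale parameter alone. A minimal Python sketch under that assumption (the data-generating values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

def dm_loglik_scale(s, counts, p):
    """Dirichlet-multinomial log-likelihood as a function of the scale s,
    with category proportions p held fixed (the PML variant assumed here).
    The multinomial coefficient, constant in s, is dropped."""
    n = counts.sum(axis=1)
    alpha = s * p
    return np.sum(gammaln(s) - gammaln(n + s)
                  + gammaln(counts + alpha).sum(axis=1) - gammaln(alpha).sum())

rng = np.random.default_rng(0)
true_p, true_s = np.array([0.5, 0.3, 0.2]), 5.0
probs = rng.dirichlet(true_s * true_p, size=50)          # cluster-level probs
counts = np.array([rng.multinomial(20, pr) for pr in probs])

p_hat = counts.sum(axis=0) / counts.sum()                # moment estimate
res = minimize_scalar(lambda ls: -dm_loglik_scale(np.exp(ls), counts, p_hat),
                      bounds=(-5, 10), method="bounded")
print("PML-style scale estimate:", np.exp(res.x))        # near true_s = 5
```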

2.
Summary.  Local polynomial regression is a useful non-parametric regression tool to explore fine data structures and has been widely used in practice. We propose a new non-parametric regression technique called local composite quantile regression smoothing to improve local polynomial regression further. Sampling properties of the estimation procedure proposed are studied. We derive the asymptotic bias, variance and normality of the estimate proposed. The asymptotic relative efficiency of the estimate with respect to local polynomial regression is investigated. It is shown that the estimate can be much more efficient than the local polynomial regression estimate for various non-normal errors, while being almost as efficient as the local polynomial regression estimate for normal errors. Simulation is conducted to examine the performance of the estimates proposed. The simulation results are consistent with our theoretical findings. A real data example is used to illustrate the method proposed.
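The check loss is piecewise linear, so a local composite quantile regression fit can be computed exactly as a linear program. The sketch below is one plausible reading of the estimator described in the summary; the quantile levels, Gaussian kernel, and bandwidth are illustrative choices, and averaging the local intercepts to estimate the regression function assumes roughly symmetric errors:

```python
import numpy as np
from scipy.optimize import linprog

def local_cqr(x, y, x0, h, q=5):
    """Local-linear composite quantile regression at x0, solved as an
    exact LP: minimize sum_k sum_i w_i * rho_{tau_k}(y_i - a_k - b*(x_i - x0)).
    Returns the average of the q local intercepts as m_hat(x0)."""
    taus = np.arange(1, q + 1) / (q + 1)
    d = x - x0
    w = np.exp(-0.5 * (d / h) ** 2)              # Gaussian kernel weights
    keep = w > 1e-8
    d, yy, w = d[keep], y[keep], w[keep]
    n = len(yy)
    nv = q + 1 + 2 * n * q                       # [a_1..a_q, b, u, v]
    c = np.zeros(nv)
    A = np.zeros((n * q, nv)); rhs = np.zeros(n * q)
    for k, tau in enumerate(taus):
        for i in range(n):
            r = k * n + i                        # a_k + b*d_i + u - v = y_i
            A[r, k] = 1.0; A[r, q] = d[i]
            A[r, q + 1 + r] = 1.0                # u_{ik} >= 0
            A[r, q + 1 + n * q + r] = -1.0       # v_{ik} >= 0
            rhs[r] = yy[i]
            c[q + 1 + r] = w[i] * tau            # check-loss weights
            c[q + 1 + n * q + r] = w[i] * (1 - tau)
    bounds = [(None, None)] * (q + 1) + [(0, None)] * (2 * n * q)
    sol = linprog(c, A_eq=A, b_eq=rhs, bounds=bounds, method="highs")
    return sol.x[:q].mean()

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_t(df=3, size=200)  # heavy tails
print(local_cqr(x, y, x0=0.25, h=0.1))           # true value sin(pi/2) = 1
```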

3.
Circular data – data whose values lie in the interval [0,2π) – are important in a number of application areas. In some, there is a suspicion that a sequence of circular readings may contain two or more segments following different models. An analysis may then seek to decide whether there are multiple segments, and if so, to estimate the changepoints separating them. This paper presents an optimal method for segmenting sequences of data following the von Mises distribution. It is shown by example that the method is also successful in data following a distribution with much heavier tails.
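The paper presents an optimal segmentation method; the sketch below covers only the simplest case, a single changepoint found by scanning the two-segment von Mises profile likelihood, with the concentration estimated by Fisher's standard approximation:

```python
import numpy as np
from scipy.special import i0

def vm_mle_loglik(theta):
    """Maximized von Mises log-likelihood for one segment: closed-form
    mean direction, Fisher's approximation for kappa."""
    n = len(theta)
    C, S = np.cos(theta).sum(), np.sin(theta).sum()
    R = np.hypot(C, S); rbar = R / n
    if rbar < 0.53:
        k = 2 * rbar + rbar**3 + 5 * rbar**5 / 6
    elif rbar < 0.85:
        k = -0.4 + 1.39 * rbar + 0.43 / (1 - rbar)
    else:
        k = 1.0 / (rbar**3 - 4 * rbar**2 + 3 * rbar)
    return k * R - n * np.log(2 * np.pi * i0(k))

def best_split(theta, min_seg=5):
    """Scan all single changepoints; return the split maximizing the
    two-segment profile log-likelihood."""
    n = len(theta)
    return max((vm_mle_loglik(theta[:t]) + vm_mle_loglik(theta[t:]), t)
               for t in range(min_seg, n - min_seg))

rng = np.random.default_rng(2)
theta = np.concatenate([rng.vonmises(0.0, 4.0, 60),
                        rng.vonmises(2.0, 4.0, 40)]) % (2 * np.pi)
print(best_split(theta))   # changepoint detected near t = 60
```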

4.
Using a sample of medical malpractice insurance claims closed between 1 October 1985 and 1 October 1989 in the USA, we estimate the impact of legal reforms on the longevity of disputes, via a competing risks model that accounts for length-biased sampling and a finite sampling horizon. We find that only the 'English rule' (a rule which requires the loser at trial to pay all legal expenses) shortens the duration of disputes. Our results for this law also show that failure to correct for length-biased sampling can incorrectly imply that the English rule lengthens the time needed for settlement and litigation. Our estimates also suggest that tort reforms that place additional procedural hurdles in the plaintiffs' paths tend to lengthen the time to disposition. Here, correction for a finite sampling horizon substantially changes the inferences with regard to the effect of this reform on duration.

5.
Abstract.  Let π denote an intractable probability distribution that we would like to explore. Suppose that we have a positive recurrent, irreducible Markov chain that satisfies a minorization condition and has π as its invariant measure. We provide a method of using simulations from the Markov chain to construct a statistical estimate of π from which it is straightforward to sample. We show that this estimate is 'strongly consistent' in the sense that the total variation distance between the estimate and π converges to 0 almost surely as the number of simulations grows. Moreover, we use some recently developed asymptotic results to provide guidance as to how much simulation is necessary. Draws from the estimate can be used to approximate features of π or as intelligent starting values for the original Markov chain. We illustrate our methods with two examples.
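The authors' estimate of π is built from a minorization condition, which is not reproduced here; the sketch below uses a kernel density estimate of the chain's draws purely as a stand-in with the same "estimate π, then sample from the estimate" role (the target and all tuning values are illustrative):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

def log_pi(x):
    """Toy intractable target: an unnormalized two-component normal mixture."""
    return np.logaddexp(-0.5 * (x + 2)**2, -0.5 * (x - 2)**2)

# Random-walk Metropolis chain with pi as its invariant distribution.
x, chain = 0.0, []
for _ in range(20000):
    prop = x + rng.normal(0, 1.5)
    if np.log(rng.uniform()) < log_pi(prop) - log_pi(x):
        x = prop
    chain.append(x)

# A density estimate of pi that is trivial to sample from (a KDE stands in
# for the paper's minorization-based estimator).
kde = gaussian_kde(chain[5000:])            # discard burn-in
starts = kde.resample(10)                   # e.g. intelligent starting values
print(starts.round(2))
```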

6.
To design a phase III study with a final endpoint and calculate the required sample size for the desired probability of success, we need a good estimate of the treatment effect on that endpoint. It is prudent to fully utilize all available information, including the historical and phase II information on the treatment as well as external data on other treatments. It is not uncommon for a phase II study to use a surrogate endpoint as the primary endpoint and to have little or no data on the final endpoint. On the other hand, external information from other studies of other treatments on the surrogate and final endpoints may be available to establish a relationship between the treatment effects on the two endpoints. Through this relationship, making full use of the surrogate information can enhance the estimate of the treatment effect on the final endpoint. In this research, we propose a bivariate Bayesian analysis approach to deal with the problem comprehensively. A dynamic borrowing approach is considered to regulate the amount of historical data and surrogate information borrowed, based on the level of consistency. A much simpler frequentist method is also discussed. Simulations are conducted to compare the performance of the different approaches. An example is used to illustrate the applications of the methods.
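The bivariate surrogate-final model is not specified in the abstract; the sketch below illustrates only the dynamic-borrowing ingredient, with a simple normal-normal posterior whose historical prior is discounted as the current and historical estimates disagree. The discount rule and all numbers are illustrative, not the paper's:

```python
import numpy as np

def dynamic_borrow(est_cur, se_cur, est_hist, se_hist):
    """Normal-normal posterior for a treatment effect, with the historical
    prior variance inflated by a consistency weight (one simple choice of
    many; not the paper's exact rule)."""
    diff = est_cur - est_hist
    w = np.exp(-0.5 * diff**2 / (se_cur**2 + se_hist**2))   # in (0, 1]
    prior_var = se_hist**2 / w              # borrow less when inconsistent
    post_var = 1.0 / (1.0 / prior_var + 1.0 / se_cur**2)
    post_mean = post_var * (est_hist / prior_var + est_cur / se_cur**2)
    return post_mean, np.sqrt(post_var)

# Hypothetical effect estimates (log hazard ratios, say):
print(dynamic_borrow(est_cur=-0.20, se_cur=0.10,
                     est_hist=-0.25, se_hist=0.08))  # consistent: borrow a lot
print(dynamic_borrow(est_cur=-0.20, se_cur=0.10,
                     est_hist=+0.15, se_hist=0.08))  # inconsistent: borrow little
```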

7.
The most commonly studied generalized normal distribution is the well-known skew-normal of Azzalini. In this paper, a new generalized normal distribution is defined and studied. The distribution is unimodal and can be skewed right or left. The relationships between the parameters and the mean, variance, skewness, and kurtosis are discussed. It is observed that the new distribution has a much wider range of skewness and kurtosis than the skew-normal distribution. The method of maximum likelihood is proposed to estimate the distribution parameters. Two real data sets are used to illustrate the flexibility of the distribution.
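The paper's new generalized normal distribution is not defined in the abstract, so the sketch below instead fits the skew-normal benchmark it mentions by maximum likelihood with scipy (simulated data, illustrative parameters):

```python
import numpy as np
from scipy.stats import skewnorm

rng = np.random.default_rng(5)
data = skewnorm.rvs(a=4.0, loc=0.0, scale=2.0, size=500, random_state=rng)

# ML fit of Azzalini's skew-normal, the benchmark named in the abstract.
a_hat, loc_hat, scale_hat = skewnorm.fit(data)
print(f"shape={a_hat:.2f}, loc={loc_hat:.2f}, scale={scale_hat:.2f}")
print("skewness, kurtosis:", skewnorm.stats(a_hat, moments="sk"))
```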

8.
Existing research on mixtures of regression models is limited to directly observed predictors. The estimation of mixtures of regressions for measurement error data poses challenges for statisticians. For linear regression models with measurement error data, the naive ordinary least squares method, which directly substitutes the observed surrogates for the unobserved error-prone variables, yields an inconsistent estimate of the regression coefficients. The same inconsistency affects the naive mixtures-of-regressions estimate, which is based on the traditional maximum likelihood estimator and simply ignores the measurement error. To remove this inconsistency, we propose to use the deconvolution method to estimate the mixture likelihood of the observed surrogates. Our proposed estimate is then found by maximizing the estimated mixture likelihood. In addition, a generalized EM algorithm is developed to find the estimate. The simulation results demonstrate that the proposed estimation procedures work well and perform much better than the naive estimates.
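As a baseline, the sketch below implements the naive EM for a two-component mixture of linear regressions, the estimator the abstract shows is inconsistent when the covariate is an error-prone surrogate; the paper's deconvolution step would replace the component likelihood used here. Starting values and data are illustrative:

```python
import numpy as np
from scipy.stats import norm

def mixreg_em(x, y, n_iter=200):
    """Naive EM for a two-component mixture of linear regressions: it
    maximizes the usual mixture likelihood and simply ignores measurement
    error in x -- the inconsistent baseline described in the abstract."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.array([[0.0, 1.0], [0.0, -1.0]])   # crude illustrative starts
    sigma = np.array([1.0, 1.0]); pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        dens = np.stack([pi[k] * norm.pdf(y, X @ beta[k], sigma[k])
                         for k in range(2)])     # E-step: responsibilities
        r = dens / dens.sum(axis=0)
        for k in range(2):                       # M-step: weighted LS
            W = r[k]; XtW = X.T * W
            beta[k] = np.linalg.solve(XtW @ X, XtW @ y)
            resid = y - X @ beta[k]
            sigma[k] = np.sqrt((W * resid**2).sum() / W.sum())
        pi = r.mean(axis=1)
    return beta, sigma, pi

rng = np.random.default_rng(6)
n = 400
z = rng.uniform(-2, 2, n)                        # true (unobserved) covariate
comp = rng.random(n) < 0.5
y = np.where(comp, 1 + 2 * z, -1 - z) + 0.3 * rng.normal(size=n)
x = z + 0.5 * rng.normal(size=n)                 # observed surrogate
print(mixreg_em(x, y)[0])                        # slopes attenuated vs 2 and -1
```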

9.
Per the 2019 NASIG Core Competencies for electronic resources librarians (ERL), ERLs “work with concepts and methods that are very much in flux … [they are] knowledgeable about the legal framework within which libraries and information agencies operate… [including] laws relating to… equal rights (e.g., the Americans with Disabilities Act)”. However, the Core Competencies do not define the level to which an ERL is responsible for determining the accessibility of an electronic resource. This article aims to create a better understanding of the steps an ERL can take to develop an accessibility statement pertaining to procuring accessible content. This article synthesizes key laws and policies that ERLs should be aware of in order to draft an accessibility procurement statement for their institution. It will also discuss licensing strategies, documentation collection, and conducting potential audits of electronic purchases.

10.
We investigate a generalized semiparametric regression model. Such a model can avoid the risk of wrongly choosing the base measure function. We propose a profile likelihood to estimate both the parameter and the nonparametric function efficiently. The main difference from the classical profile likelihood is that the proposed profile likelihood is a functional of the base measure function instead of a function of a real variable. By making the most of the structural information of the semiparametric exponential family, we obtain an explicit expression for the estimator of the least favorable curve. This ensures that the new profile likelihood is computationally simple. Owing to the use of the least favorable curve, semiparametric efficiency is achieved and the estimation bias is reduced significantly. Simulation studies illustrate that our proposal performs much better than existing methodologies in most cases under study and is robust to different model conditions.
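The paper's profile likelihood is a functional of the base measure; as a reminder of the ordinary scalar version it generalizes, the sketch below profiles the Weibull scale out in closed form and maximizes over the shape (distribution and parameters are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def weibull_profile_loglik(k, x):
    """Ordinary (scalar) profile log-likelihood for a Weibull shape k:
    the scale lambda is profiled out in closed form, lambda_hat(k) =
    (mean(x^k))^(1/k), mirroring 'maximize over the nuisance first'."""
    lam = np.mean(x**k) ** (1.0 / k)
    return np.sum(np.log(k) - k * np.log(lam) + (k - 1) * np.log(x)
                  - (x / lam)**k)

rng = np.random.default_rng(7)
x = rng.weibull(2.5, size=300) * 1.7        # shape 2.5, scale 1.7
res = minimize_scalar(lambda k: -weibull_profile_loglik(k, x),
                      bounds=(0.1, 10), method="bounded")
print("profile-MLE shape:", res.x)          # near 2.5
```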

11.
There is considerable question about how a Bayesian might provide a point estimate for a parameter when no loss function is specified. The mean, median, and mode of the posterior distribution have all been suggested. This article considers a natural Bayesian estimator based on the predictive distribution of future observations. It is shown that for the set of parameters that admit an unbiased estimate, this predictive estimate coincides with the posterior mean of the parameter. It is argued that this result provides some justification for use of the posterior mean as a Bayesian point estimate when there is no loss structure.
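A small Beta-Binomial check of the coincidence result: since a future Bernoulli observation is unbiased for p, the mean of the posterior predictive distribution matches the posterior mean (prior and data values are arbitrary):

```python
import numpy as np
from scipy.stats import beta

# Beta(2, 3) prior on a Bernoulli success probability; 7 successes in 10 trials.
a, b, s, n = 2.0, 3.0, 7, 10
post = beta(a + s, b + n - s)

# Predictive estimate: the mean of a future observation under the posterior
# predictive. Since E[Y | p] = p is unbiased for p, it equals the posterior mean.
rng = np.random.default_rng(8)
p_draws = post.rvs(100000, random_state=rng)
y_future = rng.binomial(1, p_draws)          # posterior predictive draws
print(y_future.mean(), post.mean())          # both near (a+s)/(a+b+n) = 0.6
```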

12.
The joint-risk estimate of the survival function, used for censored survival data grouped into fixed intervals, is shown to be the geometric mean of the product-limit estimates corresponding to all possible orderings of the failure times and censoring times in the group. The joint-risk estimate is proposed as a more appropriate means of dealing with ties for data containing tied failure times and censoring times. It is also applicable to competing risks problems with tied failure times involving different causes. It could be used as a substitute for the product-limit estimate in discrete failure time analysis.
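A brute-force illustration of the stated characterization on a tiny data set: enumerate every ordering of the tied failure and censoring times, compute the product-limit estimate under each, and take the geometric mean (real implementations would use a closed form, and the paper works with grouped intervals):

```python
import numpy as np
from itertools import permutations, product

def pl_survival(order, t):
    """Product-limit (Kaplan-Meier) estimate of S(t) for one fully
    tie-broken ordering of (time, status) events; status 1 = failure."""
    at_risk, S = len(order), 1.0
    for time, status in order:
        if time > t:
            break
        if status == 1:
            S *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return S

def joint_risk(times, status, t):
    """Geometric mean of the product-limit estimates of S(t) over all
    orderings of tied failure/censoring times."""
    events = sorted(zip(times, status))
    groups, cur = [], [events[0]]            # group events tied at one time
    for e in events[1:]:
        if e[0] == cur[0][0]:
            cur.append(e)
        else:
            groups.append(cur); cur = [e]
    groups.append(cur)
    ests = []
    for combo in product(*(set(permutations(g)) for g in groups)):
        order = [e for g in combo for e in g]
        ests.append(pl_survival(order, t))
    return float(np.exp(np.mean(np.log(ests))))

# A failure and a censoring tied at t = 2 (and again at t = 3):
times, status = [1, 2, 2, 3, 3], [1, 1, 0, 0, 1]
print(joint_risk(times, status, t=2))        # geometric mean of 3/5 and 8/15
```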

13.
A generalised regression estimation procedure is proposed that can lead to much improved estimation of population characteristics, such as quantiles, variances and coefficients of variation. The method involves conditioning on the discrepancy between an estimate of an auxiliary parameter and its known population value. The key distributional assumption is joint asymptotic normality of the estimates of the target and auxiliary parameters. This assumption implies that the relationship between the estimated target and the estimated auxiliary parameters is approximately linear with coefficients determined by their asymptotic covariance matrix. The main contribution of this paper is the use of the bootstrap to estimate these coefficients, which avoids the need for parametric distributional assumptions. First-order correct conditional confidence intervals based on asymptotic normality can be improved upon using quantiles of a conditional double bootstrap approximation to the distribution of the studentised target parameter estimate.
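A minimal sketch of the conditioning idea, assuming the target is a coefficient of variation and the auxiliary is a mean with known population value (both choices illustrative): the bootstrap supplies the regression coefficient of the asymptotic linear relationship, which then adjusts the target estimate:

```python
import numpy as np

rng = np.random.default_rng(9)
pop_mean = 10.0                        # known population value of the auxiliary
x = rng.gamma(4.0, 2.5, size=200)      # sample; true mean 10, true CV 0.5

def cv(a):                             # target: coefficient of variation
    return a.std(ddof=1) / a.mean()

theta_hat, mu_hat = cv(x), x.mean()

# Bootstrap the joint distribution of (target, auxiliary) estimates to get
# the slope of the assumed asymptotic-linear relationship between them.
B = 2000
boot = np.array([(cv(s), s.mean())
                 for s in (rng.choice(x, size=len(x)) for _ in range(B))])
beta = np.cov(boot.T)[0, 1] / np.var(boot[:, 1], ddof=1)

theta_adj = theta_hat - beta * (mu_hat - pop_mean)  # conditional adjustment
print(theta_hat, theta_adj)
```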

14.
A household budget survey often suffers from a high nonresponse rate and a selective response. The bias that nonresponse may introduce into the estimation of budget shares can affect the estimate of a consumer price index, which is a weighted sum of partial price index numbers (weighted with the estimated budget shares). The bias is especially important when compared with the standard error of the estimate. Because it is impossible to subsample nonrespondents to the budget survey, no exact information on the bias can be obtained. To evaluate the nonresponse bias, bounds for the bias are calculated using linear programming methods under several assumptions. The impact on a price index of a high nonresponse rate among people with a high income can also be assessed by using the elasticity with respect to total expenditure. Attention is also given to the possible nonresponse bias in a time series of price index numbers. The possible nonresponse bias is much larger than the standard error of the estimate.
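A toy version of the linear-programming bound: with the nonrespondents' budget shares unknown apart from simplex and external-data constraints, the price index is bounded by minimizing and maximizing a linear objective. All numbers are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical inputs: 4 expenditure categories.
p = np.array([1.04, 1.02, 1.10, 0.99])        # partial price index numbers
w_resp = np.array([0.30, 0.25, 0.25, 0.20])   # respondents' budget shares
r = 0.6                                       # response rate

# Unknown nonrespondent shares w_nr lie on the simplex; suppose external data
# bound the first (say, food) share to [0.20, 0.50]. Bound the index
# I = sum_k (r*w_resp + (1-r)*w_nr)_k * p_k by LP over w_nr.
A_eq, b_eq = np.ones((1, 4)), [1.0]           # shares sum to one
bounds = [(0.20, 0.50)] + [(0.0, 1.0)] * 3
c = (1 - r) * p                               # only the w_nr part varies

lo = linprog(c,  A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
hi = linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
base = r * w_resp @ p
print("index bounds:", base + lo.fun, base - hi.fun)
```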

15.
This article develops a method to estimate search frictions as well as preference parameters in differentiated product markets. Search costs are nonparametrically identified, which means our method can be used to estimate search costs in differentiated product markets that lack a suitable search cost shifter. We apply our model to the U.S. Medigap insurance market. We find that search costs are substantial: the estimated median cost of searching for an insurer is $30. Using the estimated parameters we find that eliminating search costs could result in price decreases of as much as $71 (or 4.7%), along with increases in average consumer welfare of up to $374.

16.
The kernel function method developed by Yamato (1971) to estimate a probability density function is essentially a way of smoothing the empirical distribution function. This paper shows how one can generalize this method to estimate signals for a semimartingale model. A recursive convolution-smoothed estimate is used to obtain an absolutely continuous estimate for an absolutely continuous signal of a semimartingale model. It is also shown that the estimator obtained has a smaller asymptotic variance than the one obtained in Thavaneswaran (1988).
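Yamato's estimate replaces each jump of the empirical distribution function with a smooth kernel CDF; a minimal sketch of that starting point (the semimartingale generalization itself is not attempted here):

```python
import numpy as np
from scipy.stats import norm

def smoothed_ecdf(x_grid, sample, h):
    """Kernel distribution function estimate: the empirical CDF with each
    jump replaced by a Gaussian CDF, F_hat(x) = mean_i Phi((x - X_i)/h)."""
    return norm.cdf((x_grid[:, None] - sample[None, :]) / h).mean(axis=1)

rng = np.random.default_rng(10)
sample = rng.normal(size=100)
grid = np.linspace(-3, 3, 7)
print(np.round(smoothed_ecdf(grid, sample, h=0.3), 3))
```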

17.
Comparing two time series is an important problem in many applications. In this paper, a computational bootstrap procedure is proposed to test whether two dependent stationary time series have the same autocovariance structure. The blocks-of-blocks bootstrap on the bivariate time series is employed to estimate the covariance matrix needed to construct the proposed test statistic. Without much additional effort, the bootstrap critical values can also be computed as a byproduct of the same bootstrap procedure. The asymptotic distribution of the test statistic under the null hypothesis is obtained. A simulation study is conducted to examine the finite sample performance of the test. The simulation results show that the proposed procedure with the bootstrap critical values performs well empirically and is especially useful when time series are short and non-normal. The proposed test is applied to the analysis of a real data set to understand the relationship between the input and output signals of a chemical process.
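A hedged sketch of the test's mechanics: a plain moving block bootstrap of the bivariate series (resampling blocks of pairs, preserving cross-dependence) stands in for the paper's blocks-of-blocks scheme, supplying both the covariance matrix for a quadratic-form statistic and bootstrap critical values. Block length and lags are illustrative:

```python
import numpy as np

def acov(x, L):
    """Sample autocovariances of x at lags 0..L."""
    xc = x - x.mean(); n = len(x)
    return np.array([(xc[:n - l] * xc[l:]).sum() / n for l in range(L + 1)])

def equal_acov_test(x, y, L=3, block=20, B=1000, seed=11):
    """Bootstrap test of equal autocovariance structure: moving block
    bootstrap of the bivariate series estimates the covariance matrix of
    the autocovariance differences; centered replicates give the p-value."""
    rng = np.random.default_rng(seed)
    n = len(x)
    d = acov(x, L) - acov(y, L)
    reps = np.empty((B, L + 1))
    for b in range(B):
        starts = rng.integers(0, n - block + 1, size=int(np.ceil(n / block)))
        idx = np.concatenate([np.arange(s, s + block) for s in starts])[:n]
        reps[b] = acov(x[idx], L) - acov(y[idx], L)
    cov_inv = np.linalg.inv(np.cov(reps.T))
    stat = d @ cov_inv @ d
    centered = reps - reps.mean(axis=0)
    stat_boot = np.einsum("bi,ij,bj->b", centered, cov_inv, centered)
    return stat, np.mean(stat_boot >= stat)     # statistic, bootstrap p-value

rng = np.random.default_rng(12)
e = rng.normal(size=500)
x = np.convolve(e, [1.0, 0.6], mode="same")     # MA(1)-type series
y = 0.5 * x + 0.9 * rng.normal(size=500)        # dependent, different acov
print(equal_acov_test(x, y))
```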

18.
赵俊康 《统计研究》1997,14(1):34-36
Measurement of Response Error in Statistical Surveys (统计调查中回答误差的计量). ABSTRACT: How to estimate response error in statistical surveys is an important problem demanding a prompt solution. This...

19.
This article shows how to use any correlation coefficient to produce an estimate of location and scale. It is part of a broader system, called a correlation estimation system (CES), that uses correlation coefficients as the starting point for estimation. The method is illustrated using the well-known normal distribution. This article shows that any correlation coefficient can be used to fit a simple linear regression line to bivariate data, and the slope and intercept are then estimates of standard deviation and location. Because a robust correlation will produce robust estimates, this CES can be recommended as a tool for everyday data analysis. Simulations indicate that with a robust correlation coefficient, the median obtained by this method is nearly as efficient as the mean for good data and much better when there are a few errant data points. Hypothesis testing and confidence intervals are discussed for the scale parameter; both normal and Cauchy distributions are covered.
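One plausible reading of the CES recipe (details here are assumptions, not the article's exact definitions): pair the sorted sample with normal scores, choose the slope that makes the chosen correlation between residuals and scores zero, and read scale off the slope and location off the residuals:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm, spearmanr

def ces_fit(y, corr):
    """CES-style location/scale fit: regress the sorted sample on normal
    scores; the slope b solving corr(y_sorted - b*scores, scores) = 0 is
    the scale estimate, and the median residual the location estimate."""
    n = len(y)
    scores = norm.ppf((np.arange(1, n + 1) - 0.5) / n)
    ys = np.sort(y)
    b = brentq(lambda b: corr(ys - b * scores, scores), 1e-6, 10 * ys.std())
    return np.median(ys - b * scores), b         # (location, scale)

rng = np.random.default_rng(13)
y = rng.normal(5.0, 2.0, size=200)
y[:3] = [50.0, -40.0, 60.0]                      # a few errant points

pearson = lambda u, v: np.corrcoef(u, v)[0, 1]
spearman = lambda u, v: spearmanr(u, v)[0]
print(ces_fit(y, pearson))    # scale inflated by the outliers
print(ces_fit(y, spearman))   # robust: close to (5, 2)
```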

20.
In this paper, a ranked set sampling procedure with ranking based on a length-biased concomitant variable is proposed. An estimate of the population mean based on this sample is given. It is proved that the estimate based on ranked set samples is asymptotically more efficient than the estimate based on simple random samples. Simulation studies are conducted to present the properties of the proposed estimate for finite sample sizes. Moreover, the consequence of ignoring the length bias is also addressed through simulation studies and a real data analysis.
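A simulation sketch of the efficiency claim, using a plain (not length-biased) concomitant for ranking, so only the core ranked-set-sampling mechanism is illustrated: rank each set of k units by the concomitant X and measure the i-th ranked unit of the i-th set:

```python
import numpy as np

def rss_mean(rng, k, m, rho=0.8):
    """One ranked-set-sample mean of Y with ranking by a concomitant X
    (correlation rho): in each of m cycles, draw k sets of k units, rank
    each set by X, and measure the i-th ranked unit of the i-th set."""
    vals = []
    for _ in range(m):
        for i in range(k):
            x = rng.normal(size=k)
            y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=k)
            vals.append(y[np.argsort(x)[i]])     # judgment ranking via X
    return np.mean(vals)

rng = np.random.default_rng(14)
k, m, reps = 4, 10, 4000                         # n = k*m measured units
rss = [rss_mean(rng, k, m) for _ in range(reps)]
srs = [rng.normal(size=k * m).mean() for _ in range(reps)]
print("var RSS:", np.var(rss), " var SRS:", np.var(srs))  # RSS smaller
```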
