1.
Terry E. Dielman, The American Statistician, 2013, 67(2): 111–122
A data base that provides a multivariate statistical history for each of a number of individual entities is called a pooled cross-sectional and time series data base in the econometrics literature. In marketing and survey literature the terms panel data or longitudinal data are often used. In management science a convenient term might be management data base. Such a data base provides a particularly rich environment for statistical analysis. This article reviews methods for estimating multivariate relationships particular to each individual entity and for summarizing these relationships for a number of individuals. Inference to a larger population when the data base is viewed as a sample is also considered.
2.
Eugene Demidenko, Scandinavian Journal of Statistics, 2017, 44(3): 636–665
The exact density of the non-linear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and to non-linear regression with an arbitrary number of linear parameters and one intrinsically non-linear parameter. For a very special non-linear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables obtained by Fieller almost a century ago, unlike other approximations suggested previously by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with non-linear least squares, such as non-existence and/or multiple solutions, are illustrated as major factors contributing to poor density approximation. The non-linear Gauss–Markov theorem is formulated on the basis of the near-exact EE density approximation.
3.
In this article, we consider the problem of estimating the shape and scale parameters and predicting the unobserved removed data based on a progressively Type-II censored sample from the Weibull distribution. Maximum likelihood and Bayesian approaches are used to estimate the scale and shape parameters. A sampling-based method is used to draw Monte Carlo (MC) samples, which are used to estimate the model parameters and to predict the removed units in the multiple stages of the censored sample. Two real datasets are presented and analyzed for illustrative purposes, and Monte Carlo simulations are performed to study the behavior of the proposed methods.
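For orientation, a minimal complete-sample Weibull maximum-likelihood fit in Python (scipy) is sketched below; the progressive Type-II censoring and the Bayesian machinery of the article are not reproduced.

```python
# Complete-sample Weibull ML fit as a hedged baseline (the article works with
# progressively Type-II censored data, which scipy's fit does not handle).
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
data = stats.weibull_min.rvs(c=1.8, scale=2.5, size=150, random_state=rng)

shape_hat, _, scale_hat = stats.weibull_min.fit(data, floc=0)
print(f"shape: {shape_hat:.3f}, scale: {scale_hat:.3f}")
```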
4.
John M. Antle, Journal of Business & Economic Statistics, 2013, 31(3): 192–201
Conventional production function specifications are shown to impose restrictions on the probability distribution of output that cannot be tested with the conventional models. These restrictions have important implications for firm behavior under uncertainty. A flexible representation of a firm's stochastic technology is developed based on the moments of the probability distribution of output. These moments are a unique representation of the technology and are functions of inputs. Large-sample estimators are developed for a linear moment model that is sufficiently flexible to test the restrictions implied by conventional production function specifications. The flexible moment-based approach is applied to milk production data. The first three moments of output are statistically significant functions of inputs. The cross-moment restrictions implied by conventional models are rejected.
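A toy two-stage version of the linear moment idea can be sketched in a few lines: estimate the mean-output regression, then regress powers of the residuals on the inputs to model the higher moments. The data-generating process and estimator below are illustrative assumptions, not the article's exact specification.

```python
# Sketch of a linear moment-based stochastic production model in the spirit
# of the moment approach (simplified illustration, not the paper's estimator).
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.uniform(1, 3, n)])  # intercept + one input

# Simulated output whose variance and skewness depend on the input level.
eps = rng.gamma(shape=2.0, scale=0.5 * X[:, 1], size=n)
y = X @ np.array([1.0, 2.0]) + (eps - eps.mean())

# Stage 1: first moment (mean output) by least squares.
beta1, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta1

# Stage 2: regress residual powers on inputs to model higher central moments.
beta2, *_ = np.linalg.lstsq(X, resid**2, rcond=None)  # variance function
beta3, *_ = np.linalg.lstsq(X, resid**3, rcond=None)  # third-moment function
print(beta1, beta2, beta3)
```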
5.
Ryan Martin, The American Statistician, 2017, 71(2): 128–136
Introductory statistical inference texts and courses treat the point estimation, hypothesis testing, and interval estimation problems separately, with primary emphasis on large-sample approximations. Here, I present an alternative approach to teaching this course, built around p-values, emphasizing provably valid inference for all sample sizes. Details about computation and marginalization are also provided, with several illustrative examples, along with a course outline. Supplementary materials for this article are available online.
6.
Yi-Ting Chen, Journal of Business & Economic Statistics, 2018, 36(3): 438–455
We propose a unified approach that is flexibly applicable to various types of grouped data for estimating and testing parametric income distributions. To simplify the use of our approach, we also provide a parametric bootstrap method and show its asymptotic validity. We also compare this approach with existing methods for grouped income data, and assess their finite-sample performance by a Monte Carlo simulation. For empirical demonstrations, we apply our approach to recovering China's income/consumption distributions from a sequence of income/consumption share tables and the U.S. income distributions from a combination of income shares and sample quantiles. Supplementary materials for this article are available online.
7.
We discuss the maximum likelihood estimates (MLEs) of the parameters of the log-gamma distribution based on progressively Type-II censored samples. We use the profile likelihood approach to tackle the estimation of the shape parameter κ. We derive approximate maximum likelihood estimators of the parameters μ and σ and use them as initial values in the determination of the MLEs through the Newton–Raphson method. Next, we discuss the EM algorithm and propose a modified EM algorithm for the determination of the MLEs. A simulation study is conducted to evaluate the bias and mean square error of these estimators and to examine their behavior as the progressive censoring scheme and the shape parameter vary. We also discuss the interval estimation of the parameters μ and σ and show that the intervals based on the asymptotic normality of the MLEs have very poor coverage probabilities for small values of m. Finally, we present two examples to illustrate all the methods of inference discussed in this paper.
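As a rough illustration of the profile-likelihood step, the sketch below scans a grid of shape values and maximizes over the location and scale at each point, for a complete (uncensored) sample; the grid and variable names are assumptions, and the censoring is ignored.

```python
# Minimal profile-likelihood sketch for the shape parameter of a log-gamma
# model, ignoring censoring (complete-sample simplification of the article's
# progressively Type-II censored setting).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = stats.loggamma.rvs(c=2.0, loc=1.0, scale=0.5, size=200, random_state=rng)

kappas = np.linspace(0.5, 5.0, 40)
profile = []
for k in kappas:
    # Fit mu (loc) and sigma (scale) by ML with the shape kappa held fixed.
    c, loc, scale = stats.loggamma.fit(data, fc=k)
    profile.append(stats.loggamma.logpdf(data, c, loc, scale).sum())

k_hat = kappas[int(np.argmax(profile))]
print("profile-ML estimate of kappa:", k_hat)
```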
8.
Statistical Analysis for Products with Geometrically Distributed Lifetimes under Incomplete Data
The geometric distribution is one of the most important discrete lifetime distributions; the lifetimes of many products (such as switches) can be described by it. Owing to its memoryless property, the geometric distribution occupies a very important position in reliability theory and applied probability models. Statistical analysis of the geometric distribution's parameter under complete samples, censored samples, and accelerated life tests has already been studied extensively and is of substantial theoretical and applied value. In this paper, the geometric-distribution problem under incomplete data is transformed into an exponential-distribution problem, and existing results for the exponential distribution are then used to obtain, for the first time, approximate point estimates of the geometric parameter under missing data and under grouped data. Monte Carlo simulation results are satisfactory, indicating that the method is feasible.
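The transformation the abstract alludes to rests on the survival-function match P(X > k) = (1 − p)^k = e^{−λk}, i.e. λ = −ln(1 − p), so an exponential rate estimate can be back-transformed into an estimate of p. A crude complete-data sketch (not the paper's missing/grouped-data estimators) is:

```python
# Toy illustration of the geometric-to-exponential correspondence: estimate
# the exponential rate, then back-transform to the geometric p. This is a
# crude approximation; the article's construction is more careful.
import numpy as np

rng = np.random.default_rng(2)
p_true = 0.3
x = rng.geometric(p_true, size=1000)       # failure counts, support {1, 2, ...}

lam_hat = 1.0 / x.mean()                   # exponential-style rate estimate
p_hat = 1.0 - np.exp(-lam_hat)             # back-transform to the geometric p
print(p_hat)
```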
9.
Inference from Accelerated Degradation and Failure Data Based on Gaussian Process Models
An important problem in reliability and survival analysis is that of modeling degradation together with any observed failures in a life test. Here, based on a continuous cumulative damage approach with a Gaussian process describing degradation, a general accelerated test model is presented in which failure times and degradation measures can be combined for inference about system lifetime. Some specific models when the drift of the Gaussian process depends on the acceleration variable are discussed in detail. Illustrative examples using simulated data as well as degradation data observed in carbon-film resistors are presented.
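A minimal simulation of such a model, assuming a Wiener process whose drift scales linearly with the acceleration variable (an assumption made here purely for illustration), looks like this:

```python
# Simulated sketch of a Wiener (Gaussian) degradation process with a
# stress-dependent drift; failure is first passage of a critical level.
import numpy as np

rng = np.random.default_rng(3)
dt, horizon, level = 0.1, 100.0, 10.0
sigma = 0.5

def first_passage(stress, drift0=0.05):
    drift = drift0 * stress          # drift increases with the stress level
    t, w = 0.0, 0.0
    while w < level and t < horizon:
        w += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t if w >= level else np.nan   # NaN marks a censored unit

times = [first_passage(stress=s) for s in (1.0, 2.0, 4.0) for _ in range(5)]
print(times)
```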
10.
The POT (Peaks-Over-Threshold) approach consists of using the generalized Pareto distribution (GPD) to approximate the distribution of excesses over thresholds. In this article, we establish the asymptotic normality of the well-known extreme quantile estimators based on this POT method, under very general assumptions. As an illustration, from this result, we deduce the asymptotic normality of the POT extreme quantile estimators in the case where the maximum likelihood (ML) or the generalized probability-weighted moments (GPWM) methods are used. Simulations are provided in order to compare the efficiency of these estimators based on ML or GPWM methods with classical ones proposed in the literature.
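The POT quantile estimator itself is standard: fit the GPD to the excesses over a threshold and invert. A sketch with scipy, using an arbitrary 95% threshold:

```python
# POT sketch: fit a GPD to threshold excesses and invert for an extreme
# quantile (standard estimator; the threshold choice here is illustrative).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = stats.pareto.rvs(b=3.0, size=2000, random_state=rng)

u = np.quantile(x, 0.95)                 # threshold
exc = x[x > u] - u
xi, _, beta = stats.genpareto.fit(exc, floc=0)

p = 0.999                                # target quantile level
n, nu = len(x), len(exc)
q_hat = u + (beta / xi) * (((n / nu) * (1 - p)) ** (-xi) - 1)
print(q_hat, np.quantile(x, p))          # POT estimate vs empirical quantile
```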
11.
A bootstrap algorithm is provided for obtaining a confidence interval for the mean of a probability distribution when sequential data are considered. For this kind of data the empirical distribution can be biased, but its bias is bounded by the coefficient of variation of the stopping rule associated with the sequential procedure. When using this distribution for resampling, the validity of the bootstrap approach is established by means of a series expansion of the corresponding pivotal quantity. A simulation study is carried out using Wang and Tsiatis type tests and considering the normal and exponential distributions to generate the data. This study confirms that for moderate coefficients of variation of the stopping rule, the bootstrap method allows adequate confidence intervals for the parameters to be obtained, whatever the distribution of the data.
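For reference, a plain percentile bootstrap for the mean is sketched below; the article's analysis of the bias induced by the stopping rule is not reproduced here.

```python
# Percentile-bootstrap confidence interval for a mean (the sequential-sampling
# refinements of the article are beyond this sketch).
import numpy as np

rng = np.random.default_rng(5)
data = rng.exponential(scale=2.0, size=80)   # stand-in for sequential data

boot = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(4000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for the mean: ({lo:.3f}, {hi:.3f})")
```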
12.
High leverage points can induce or disrupt multicollinearity patterns in data. Observations responsible for this problem are generally known as collinearity-influential observations. A significant amount of published work on the identification of collinearity-influential observations exists; however, we show in this article that all commonly used detection techniques display greatly reduced sensitivity in the presence of multiple high leverage collinearity-influential observations. We propose a new measure based on a diagnostic robust group deletion approach. Some practical cutoff points for existing and developed diagnostics measures are also introduced. Numerical examples and simulation results show that the proposed measure provides significant improvement over the existing measures.
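A simplified stand-in for a group-deletion diagnostic: flag the largest-leverage observations as a group and compare the condition number of the design matrix with and without them. The planted-outlier setup below is an illustrative assumption, not the article's robust measure.

```python
# Group-deletion sketch: condition number of X with and without a candidate
# group of high-leverage, collinearity-inducing points.
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 3))
X[:3] = [[10, 10.1, 9.9]] * 3            # planted high-leverage group

H = X @ np.linalg.inv(X.T @ X) @ X.T     # hat matrix
lev = np.diag(H)
suspects = np.argsort(lev)[-3:]          # largest leverages, taken as a group

cond_full = np.linalg.cond(X)
cond_del = np.linalg.cond(np.delete(X, suspects, axis=0))
print(f"condition number: full={cond_full:.1f}, group deleted={cond_del:.1f}")
```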
13.
A New Kernel Distribution Function Estimator Based on a Non-parametric Transformation of the Data
A new kernel distribution function (df) estimator based on a non-parametric transformation of the data is proposed. It is shown that the asymptotic bias and mean squared error of the estimator are considerably smaller than those of the standard kernel df estimator. For the practical implementation of the new estimator a data-based choice of the bandwidth is proposed. Two possible areas of application are the non-parametric smoothed bootstrap and survival analysis. In the latter case new estimators for the survival function and the mean residual life function are derived.
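The baseline being improved upon, the standard kernel df estimator F̂(x) = n⁻¹ Σᵢ Φ((x − Xᵢ)/h) with a Gaussian kernel, is easy to sketch (the bandwidth rule below is a generic assumption, not the article's data-based choice):

```python
# Standard kernel distribution function estimator with a Gaussian kernel:
# F_hat(x) = (1/n) * sum Phi((x - X_i) / h).
import numpy as np
from scipy.stats import norm

def kernel_cdf(x, sample, h):
    x = np.atleast_1d(x)
    return norm.cdf((x[:, None] - sample[None, :]) / h).mean(axis=1)

rng = np.random.default_rng(7)
sample = rng.normal(size=300)
h = 1.06 * sample.std() * sample.size ** (-1 / 5)   # rule-of-thumb bandwidth
print(kernel_cdf([-1.0, 0.0, 1.0], sample, h))
print(norm.cdf([-1.0, 0.0, 1.0]))                   # true df for comparison
```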
14.
It is important for educational planners to estimate the likelihood and time-scale of graduation of students enrolled on a curriculum. The particular case we are concerned with emerges when studies are not completed in the prescribed interval of time. Under these circumstances we use a framework of survival analysis applied to lifetime-type educational data to examine the distribution of the duration of undergraduate studies for 10,313 students enrolled in a Greek university during ten consecutive academic years. Non-parametric and parametric survival models have been developed for handling this distribution, as well as a modified procedure for testing the goodness-of-fit of the models. Data censoring was taken into account in the statistical analysis, and the problems of thresholding of graduation and of perpetual students are also addressed. We found that the proposed parametric model adequately describes the empirical distribution provided by non-parametric estimation. We also found a significant difference between the durations of studies of men and women students. The proposed methodology could be useful for analysing data from any other type and level of education, or general lifetime data with similar characteristics.
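A hand-rolled Kaplan–Meier estimate on made-up right-censored study durations illustrates the non-parametric side of such an analysis (the article's parametric fits and tests are beyond this sketch):

```python
# Kaplan-Meier estimate for right-censored durations of study.
import numpy as np

def kaplan_meier(time, observed):
    order = np.argsort(time, kind="stable")
    time, observed = time[order], observed[order]
    at_risk = len(time)
    surv, s = [], 1.0
    for t, d in zip(time, observed):
        if d:                       # graduation event (not censored)
            s *= 1.0 - 1.0 / at_risk
        at_risk -= 1
        surv.append((t, s))
    return surv

years = np.array([4.0, 4.0, 5.0, 5.5, 6.0, 7.0, 8.0, 8.0])
grad = np.array([1, 1, 1, 0, 1, 1, 0, 1])   # 0 = still enrolled (censored)
print(kaplan_meier(years, grad)[-1])
```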
15.
In this article, an EM algorithm approach for obtaining the maximum likelihood estimates of the parameters for analyzing bivariate skew-normal data with non-monotone missing values is presented. A simulation study is implemented to investigate the performance of the presented algorithm. Results of an application are also reported, where a bootstrap approach is used to find the variances of the parameter estimates.
16.
A new two-parameter distribution over the unit interval, called the Unit-Inverse Gaussian distribution, is introduced and studied in detail. The proposed distribution shares many properties with other known distributions on the unit interval, such as the Beta, Johnson SB, Unit-Gamma, and Kumaraswamy distributions. Estimates of the parameters of the proposed distribution are obtained by transforming the data to the inverse Gaussian distribution. Unlike most distributions on the unit interval, the maximum likelihood and method-of-moments estimators of the parameters of the proposed distribution are expressed in simple closed forms that do not require iterative methods to compute. Application of the proposed distribution to a real data set shows a better fit than many known two-parameter distributions on the unit interval.
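Assuming the construction maps inverse-Gaussian variates to (0, 1) via y = e^{−x} (an assumption made here for illustration; see the article for the exact definition), the transformation-based closed-form estimation step can be sketched as:

```python
# Sketch of transformation-based fitting for a unit-interval model: map the
# data back to the positive half-line and apply the closed-form
# inverse-Gaussian MLEs. The mapping y -> -log(y) is an assumption here.
import numpy as np

rng = np.random.default_rng(8)
x = rng.wald(mean=1.5, scale=2.0, size=500)   # inverse-Gaussian draws
y = np.exp(-x)                                # data on (0, 1)

z = -np.log(y)                                # back to the IG scale
mu_hat = z.mean()                             # closed-form IG MLEs
lam_hat = len(z) / np.sum(1.0 / z - 1.0 / mu_hat)
print(mu_hat, lam_hat)
```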
17.
The potency of antiretroviral agents in AIDS clinical trials can be assessed on the basis of a viral response such as viral decay rate or change in viral load (number of HIV RNA copies in plasma). Linear, nonlinear, and nonparametric mixed-effects models have been proposed to estimate such parameters in viral dynamic models. However, two critical questions stand out: whether these models achieve consistent estimates for viral decay rates, and which model is more appropriate for use in practice. Moreover, one often assumes that the model random error is normally distributed, but this assumption may be unrealistic, obscuring important features of within- and among-subject variations. In this article, we develop a skew-normal (SN) Bayesian linear mixed-effects (SN-BLME) model, an SN Bayesian nonlinear mixed-effects (SN-BNLME) model, and an SN Bayesian semiparametric nonlinear mixed-effects (SN-BSNLME) model that relax the normality assumption by allowing the model random error to have an SN distribution. We compare the performance of these SN models, and also compare their performance with that of the corresponding normal models. An AIDS dataset is used to test the proposed models and methods. It was found that there is a significant incongruity in the estimated viral decay rates. The results indicate that the SN-BSNLME model is preferred to the other models, implying that an arbitrary data truncation is not necessary. The findings also suggest that it is important to assume a model with an SN distribution in order to achieve reasonable results when the data exhibit skewness.
18.
There is an increasing number of goodness-of-fit tests whose test statistics measure deviations between the empirical characteristic function and an estimated characteristic function of the distribution in the null hypothesis. With the aim of overcoming certain computational difficulties with the calculation of some of these test statistics, a transformation of the data is considered. To apply such a transformation, the data are assumed to be continuous with arbitrary dimension, but we also provide a modification for discrete random vectors. Practical considerations leading to analytic formulas for the test statistics are studied, as well as theoretical properties such as the asymptotic null distribution, validity of the corresponding bootstrap approximation, and consistency of the test against fixed alternatives. Five applications are provided in order to illustrate the theory. These applications also include numerical comparison with other existing techniques for testing goodness-of-fit.
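An unweighted toy version of an ECF-based discrepancy for normality, without the article's weighting, transformation, or bootstrap calibration, can be written as:

```python
# Toy empirical-characteristic-function discrepancy: compare the ECF of
# standardized data with the standard normal CF on a grid.
import numpy as np

rng = np.random.default_rng(9)
x = rng.normal(size=200)
z = (x - x.mean()) / x.std(ddof=1)

t = np.linspace(-3, 3, 61)
ecf = np.exp(1j * t[:, None] * z[None, :]).mean(axis=1)
ncf = np.exp(-t**2 / 2)                     # N(0,1) characteristic function

dt = t[1] - t[0]
stat = np.sum(np.abs(ecf - ncf) ** 2) * dt  # unweighted L2 discrepancy
print(stat)
```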
19.
In this article, we propose a novel approach for testing the equality of two log-normal populations using a computational approach test (CAT) that does not require explicit knowledge of the sampling distribution of the test statistic. Simulation studies demonstrate that the proposed approach performs hypothesis testing with satisfactory actual size even at small sample sizes; overall, it is superior to other existing methods. A CAT is also proposed for testing the reliability of two log-normal populations when the means are the same. Simulations show that the actual size of this new approach is close to the nominal level and better than that of the score test. Finally, the proposed methods are illustrated using two examples.
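The CAT idea — simulate the test statistic under the fitted null instead of deriving its sampling distribution — can be sketched as follows; the statistic and the null fit below are simplified placeholders, not the authors' exact construction:

```python
# Minimal CAT-style p-value: fit the null model, simulate the statistic's
# null distribution, and compare with the observed value.
import numpy as np

rng = np.random.default_rng(10)
x1 = rng.lognormal(mean=0.0, sigma=1.0, size=40)
x2 = rng.lognormal(mean=0.2, sigma=1.0, size=50)

def stat(a, b):
    la, lb = np.log(a), np.log(b)
    return abs(la.mean() - lb.mean())

obs = stat(x1, x2)
pooled = np.log(np.concatenate([x1, x2]))
mu0, sd0 = pooled.mean(), pooled.std(ddof=1)   # fitted null: equal populations

sims = np.array([
    stat(rng.lognormal(mu0, sd0, x1.size), rng.lognormal(mu0, sd0, x2.size))
    for _ in range(5000)
])
print("CAT p-value:", (sims >= obs).mean())
```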
20.
Ming-Hung Shu, Communications in Statistics – Theory and Methods, 2014, 43(14): 2907–2922
Quality has become a major business strategy: organizations that successfully improve the quality of their products can gain productivity, enhance market penetration, achieve greater profitability, and sustain their competitive advantages. The quality of materials received from suppliers determines not only the quality of the assembled products but also the satisfaction and loyalty of downstream customers. In this article, we employ stochastic-dominance decision-making processes based on loss-based capability indices to compare potential suppliers. In view of the compared results of the first-order and second-order stochastic dominances, each supplier is categorized as a superior supplier, a weakly superior supplier, a strongly non-dominated supplier, or a non-dominated supplier. We develop a general computational procedure to select the preferable suppliers in an analytical way. To assist decision-makers in selecting preferable suppliers, quantile-quantile plots of loss-based capability indices presenting the results of the first-order stochastic dominance of the indices' estimators are developed, so that decision-makers can simultaneously visualize pair-wise comparisons of the suppliers and make appropriate decisions. Finally, a practical example invoking stochastic dominance with the loss-based capability indices to carry out quality-based supplier evaluation and selection is presented to demonstrate the applicability of the proposed methodology.
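A first-order stochastic dominance check between two suppliers' index estimates reduces to comparing empirical CDFs pointwise; the supplier samples below are hypothetical stand-ins for the article's loss-based index estimators.

```python
# First-order stochastic dominance check via empirical CDFs (a simplified
# sketch of the pair-wise comparison the article visualizes with Q-Q plots).
import numpy as np

def fosd(a, b):
    """True if sample a first-order dominates b: F_a <= F_b everywhere."""
    grid = np.union1d(a, b)
    Fa = np.searchsorted(np.sort(a), grid, side="right") / a.size
    Fb = np.searchsorted(np.sort(b), grid, side="right") / b.size
    return bool(np.all(Fa <= Fb))

rng = np.random.default_rng(11)
supplier_A = rng.normal(1.4, 0.1, 200)   # hypothetical index estimates
supplier_B = rng.normal(1.1, 0.1, 200)
print("A dominates B:", fosd(supplier_A, supplier_B))
```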