Found 20 similar documents (search time: 171 ms)
1.
The complete expectation of life of an individual gives an intuitive and interesting perspective on the ageing process and is an important concept in the insurance sector for the determination of premiums. We propose a new test for the equality of complete expectations of life of two groups/populations. The power of the new test is calculated through simulations and compared with the power of the tests given by Berger, Boos, and Guess (1988) and Aly (1997). The proposed test statistic is observed to be more powerful than the competing tests for the cases considered in this paper. A real-life illustration is included.
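To make the quantity being tested concrete, here is a minimal numerical sketch (assuming a hypothetical exponential survival curve; the function name is ours): the complete expectation of life e(x) is the expected remaining lifetime given survival to age x, i.e. the integral of the survival function beyond x divided by the survival probability at x.

```python
import numpy as np

def complete_expectation_of_life(x, survival, grid):
    """e(x) = E[T - x | T > x] = (integral_x^inf S(t) dt) / S(x),
    approximated by the trapezoid rule on a finite grid."""
    t = grid[grid >= x]
    s = survival(t)
    integral = np.sum((s[:-1] + s[1:]) * np.diff(t)) / 2.0
    return integral / survival(x)

# Exponential lifetimes, S(t) = exp(-lam*t): memorylessness gives e(x) = 1/lam.
lam = 0.05
grid = np.linspace(0.0, 400.0, 40001)
e0 = complete_expectation_of_life(0.0, lambda t: np.exp(-lam * t), grid)
e30 = complete_expectation_of_life(30.0, lambda t: np.exp(-lam * t), grid)
print(round(e0, 2), round(e30, 2))  # both close to 1/lam = 20
```

A test of equality of complete expectations of life between two groups compares exactly this functional of the two survival curves.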
2.
Steven Cook, Journal of Applied Statistics, 2006, 33(2): 233-240
In recent research, Elliott et al. (1996) have shown that local-to-unity detrending via generalized least squares (GLS) substantially increases the power of the Dickey-Fuller (1979) unit root test. In this paper the relationship between the extent of detrending undertaken, determined by the detrending parameter ᾱ, and the power of the resulting GLS-based Dickey-Fuller (DF-GLS) test is examined. Using Monte Carlo simulation it is shown that the values of ᾱ suggested by Elliott et al. (1996) on the basis of a limiting power function seldom maximize the power of the DF-GLS test for the finite samples encountered in applied research. This result holds for the DF-GLS test including either an intercept or an intercept and a trend term. An empirical examination of the order of integration of the UK household savings ratio illustrates these findings, with the unit root hypothesis rejected using values of ᾱ other than that proposed by Elliott et al. (1996).
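The GLS detrending step underlying the DF-GLS test can be sketched as follows (demeaning case only; cbar = -7 is the noncentrality value Elliott et al. suggest for the intercept-only model, and the function name is ours): quasi-difference the series and the deterministic term at ᾱ = 1 + c̄/T, fit by OLS, and subtract the fitted deterministics.

```python
import numpy as np

def gls_detrend(y, cbar=-7.0):
    """Local-to-unity GLS detrending (demeaning case), a sketch of the
    Elliott-Rothenberg-Stock step: quasi-difference y and the constant
    at abar = 1 + cbar/T, fit by OLS, subtract the fitted deterministic."""
    T = len(y)
    abar = 1.0 + cbar / T
    z = np.ones(T)                            # deterministic term: intercept only
    yq = np.r_[y[0], y[1:] - abar * y[:-1]]   # quasi-differenced series
    zq = np.r_[z[0], z[1:] - abar * z[:-1]]
    beta = np.dot(zq, yq) / np.dot(zq, zq)    # OLS on the quasi-differences
    return y - z * beta

rng = np.random.default_rng(0)
y = 5.0 + np.cumsum(rng.standard_normal(200))  # random walk around a level
yd = gls_detrend(y)
print(yd.shape)
```

The Dickey-Fuller regression is then run on the detrended series yd; varying cbar varies the extent of detrending studied in the paper.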
3.
In this article, we propose a new, more efficient generalized difference-cum-exponential type estimator and generalized-difference-cum-generalized-exponential type estimators for estimating the mean of a sensitive variable using auxiliary information. We also show theoretically that the proposed generalized estimators are more efficient than the estimators of Sousa et al. (2010), Gupta et al. (2012), and Koyuncu, Gupta, and Sousa (2014). Results from a real-life application and a simulation study demonstrate the performance of the proposed mean estimators relative to some existing mean estimators.
4.
Genetic pleiotropy occurs when a single gene influences two or more seemingly unrelated phenotypic traits, and it is important both to detect pleiotropy and to understand its causes. However, most current statistical methods for discovering pleiotropy mainly test the null hypothesis that none of the traits is associated with a variant, with departures from the null tested against just one associated trait or k associated traits. Schaid et al. (2016) first proposed a sequential testing framework to analyze pleiotropy based on a linear model and a multivariate normal distribution. In this paper, we analyze economic pleiotropy, which occurs when an economic action or policy influences two or more economic phenomena. We extend the linear model to a Box-Cox transformation model and propose a new decision method that improves the efficiency of the hypothesis test and controls the Type I error. We then apply the method to economic data on multivariate sectoral employment in response to governmental expenditures and provide a quantitative assessment and some insights into the different impacts of economic policy.
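The Box-Cox transformation used to extend the linear model is standard; a minimal sketch (function name ours):

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform for positive y: (y**lam - 1)/lam for lam != 0,
    and the log limit for lam = 0."""
    y = np.asarray(y, float)
    if lam == 0.0:
        return np.log(y)
    return (y ** lam - 1.0) / lam

print(box_cox([1.0, np.e], 0.0))  # log case: approximately [0, 1]
```

With lam = 1 the transform reduces to a shift of the identity, recovering the original linear model up to a constant.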
5.
We propose an efficient numerical integration-based nonparametric entropy estimator for serial dependence and show that the new entropy estimator has a smaller asymptotic variance than Hong and White’s (2005) sample average-based estimator. This delivers an asymptotically more efficient test for serial dependence. In particular, the uniform kernel gives the smallest asymptotic variance for the numerical integration-based entropy estimator over a class of positive kernel functions. Moreover, the naive bootstrap can be used to obtain accurate inferences for our test, whereas it is not applicable to Hong and White’s (2005) sample averaging approach. A simulation study confirms the merits of our approach.
6.
The K-nearest-neighbor (Knn) method is known to be more suitable than the kernel method (with a globally fixed smoothing parameter) for fitting nonparametrically specified curves when data are highly unevenly distributed. In this paper, we propose estimating a nonparametric regression function subject to a monotonicity restriction using the Knn method. We also propose a new convergence criterion to measure the closeness between the unconstrained and the (monotone) constrained Knn-estimated curves. This method is an alternative to the monotone kernel methods proposed by Hall and Huang (2001) and Du et al. (2013). We use a bootstrap procedure for testing the validity of the monotonicity restriction. We apply our method to the “Job Market Matching” data taken from Gan and Li (2016) and find that the unconstrained/constrained Knn estimators work better than kernel estimators for this type of highly unevenly distributed data.
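A minimal unconstrained Knn regression sketch (no monotonicity constraint; function names ours) illustrates why the method adapts to uneven designs: each fitted value averages the k nearest responses, so the effective bandwidth shrinks where the data are dense and widens where they are sparse.

```python
import numpy as np

def knn_regress(x_train, y_train, x_eval, k):
    """k-nearest-neighbor regression: the fitted value at each evaluation
    point is the average response of its k nearest training points."""
    x_train = np.asarray(x_train, float)
    y_train = np.asarray(y_train, float)
    fits = []
    for x0 in np.atleast_1d(x_eval):
        idx = np.argsort(np.abs(x_train - x0))[:k]  # k closest design points
        fits.append(y_train[idx].mean())
    return np.array(fits)

x = np.linspace(0.0, 1.0, 101)
y = x ** 2
print(knn_regress(x, y, [0.5], k=1))  # close to 0.25
```

A monotone constrained version would additionally force the fitted values to be nondecreasing, e.g. by reweighting as in the cited kernel approaches.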
7.
Yougui Wu, Communications in Statistics - Theory and Methods, 2020, 49(6): 1446-1461
In diagnostic trials, clustered data are obtained when several subunits of the same patient are observed. Intracluster correlations need to be taken into account when analyzing such clustered data. A nonparametric method was proposed by Obuchowski (1997) to estimate the area under the Receiver Operating Characteristic curve (AUC) for such clustered data. However, Obuchowski’s estimator is not efficient, as it gives equal weight to all pairwise rankings within and between clusters. In this paper, we propose a more efficient nonparametric AUC estimator with two sets of optimal weights. Simulation results show that the loss of efficiency of Obuchowski’s estimator for a single AUC or an AUC difference can be substantial when there is moderate intracluster test correlation and the cluster size is large. The efficiency gain of our weighted AUC estimator is further illustrated using data from a study of screening tests for neonatal hearing.
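The pairwise-ranking idea behind such nonparametric AUC estimators can be sketched as follows (an illustrative Mann-Whitney computation with every pair weighted equally, not Obuchowski's clustered estimator itself; names ours):

```python
import numpy as np

def pairwise_auc(cases, controls):
    """Nonparametric AUC as the Mann-Whitney probability
    P(case score > control score) + 0.5 * P(tie), averaged over
    all case/control pairs with equal weight."""
    cases = np.asarray(cases, float)[:, None]
    controls = np.asarray(controls, float)[None, :]
    return np.mean((cases > controls) + 0.5 * (cases == controls))

auc = pairwise_auc([3.1, 2.8, 4.0], [1.2, 2.8, 2.5])
print(auc)  # 8.5 of 9 pairs correctly ordered (one tie counts 0.5)
```

The weighted estimator proposed in the paper replaces this uniform weighting of pairs with optimal weights that account for intracluster correlation.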
8.
This paper presents a new variable-weight method, called the singular value decomposition (SVD) approach, for Kohonen competitive learning (KCL) algorithms, based on the concept of Varshavsky et al. [18]. Integrating the weighted fuzzy c-means (FCM) algorithm with KCL, we propose a weighted fuzzy KCL (WFKCL) algorithm. The goal of the proposed WFKCL algorithm is to reduce the clustering error rate when the data contain noise variables. Compared with k-means, FCM, and KCL with existing variable-weight methods, the proposed WFKCL algorithm with the SVD weight method provides better clustering performance under the error rate criterion. Furthermore, the complexity of the proposed SVD approach is lower than that of Pal et al. [17], Wang et al. [19], and Hung et al. [9].
9.
To develop estimators with greater efficiency than trimmed means based on the empirical quantile, Kim (1992) and Chen & Chiang (1996) implicitly or explicitly used the symmetric quantile, introducing new trimmed means for location and linear regression models, respectively. This study further investigates the properties of the symmetric quantile and extends its application in several respects. (a) The symmetric quantile is more efficient than the empirical quantile in asymptotic variance when the quantile percentage α is either small or large; this reveals that for any proposal involving the αth quantile with small or large α, the symmetric quantile is the right choice. (b) A trimmed mean based on it has asymptotic variance achieving the Cramér-Rao lower bound for one heavy-tailed distribution. (c) An improvement of the quantiles-based control chart of Grimshaw & Alt (1997) is discussed. (d) Monte Carlo simulations of two new scale estimators based on symmetric quantiles also support this new quantile.
10.
Bruce E. Hansen, Econometric Reviews, 2017, 36(6-9): 840-852
Maasoumi (1978) proposed a Stein-like estimator for simultaneous equations and showed that his Stein shrinkage estimator has bounded finite sample risk, unlike the three-stage least squares estimator. We revisit his proposal by investigating Stein-like shrinkage in the context of two-stage least squares (2SLS) estimation of a structural parameter. Our estimator follows Maasoumi (1978) in taking a weighted average of the 2SLS and ordinary least squares estimators, with the weight depending inversely on the Hausman (1978) statistic for exogeneity. Using a local-to-exogenous asymptotic theory, we derive the asymptotic distribution of the Stein estimator and calculate its asymptotic risk. We find that if the number of endogenous variables exceeds two, the shrinkage estimator has strictly smaller risk than the 2SLS estimator, extending the classic result of James and Stein (1961). In a simple simulation experiment, we show that the shrinkage estimator has substantially reduced finite sample median squared error relative to the standard 2SLS estimator.
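The shrinkage idea can be sketched as follows (the specific weight rule min(1, tau/H) and all names are illustrative assumptions, not the paper's exact formula): the weight on OLS shrinks toward zero as the Hausman exogeneity statistic H grows.

```python
import numpy as np

def stein_combine(beta_ols, beta_2sls, hausman_stat, tau):
    """Stein-like weighted average of OLS and 2SLS, with the weight on OLS
    inversely related to the Hausman exogeneity statistic. tau is a
    user-chosen shrinkage constant (illustrative)."""
    w = min(1.0, tau / hausman_stat) if hausman_stat > 0 else 1.0
    return w * np.asarray(beta_ols) + (1.0 - w) * np.asarray(beta_2sls)

# Large Hausman statistic (strong evidence of endogeneity):
# the combined estimate stays near the 2SLS value.
b = stein_combine([1.0], [2.0], hausman_stat=100.0, tau=1.0)
print(b)  # weight on OLS is 0.01, so the result is 1.99
```

When H is small (exogeneity plausible), the weight hits 1 and the estimator collapses to the more precise OLS.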
11.
Guillaume Chevillon, Econometric Reviews, 2017, 36(5): 514-545
Standard tests for the cointegration rank of a vector autoregressive process have distributions that are affected by the presence of deterministic trends. We consider the recent approach of Demetrescu et al. (2009), who recommend testing a composite null. We assess this methodology in the presence of trends (linear or broken) whose magnitude is small enough not to be always detectable at conventional significance levels. We model them using local asymptotics and derive the properties of the test statistics. We show that whether the trend is orthogonal to the cointegrating vector has a major impact on the distributions, but that the test combination approach remains valid. We apply the methodology to the study of cointegration between global temperatures and the radiative forcing of human gas emissions, and find new evidence of Granger causality.
12.
This article studies the heavy-traffic (HT) behavior of queueing networks with a single roving server. External customers arrive at the queues according to independent renewal processes, and after completing service a customer either leaves the system or is routed to another queue. This type of customer routing arises naturally in many application areas (production systems, computer and communication networks, maintenance, etc.). In these networks, the single most important characteristic of system performance is often the path time, i.e., the total time spent in the system by an arbitrary customer traversing a specific path. This article presents the first HT asymptotic for the path-time distribution in queueing networks with a roving server under general renewal arrivals. In particular, we provide a strong conjecture for the system’s behavior under HT, extending the conjecture of Coffman et al. [8, 9] to the roving server setting of the current article. By combining this result with novel light-traffic asymptotics, we derive an approximation of the mean path time for arbitrary values of the load under renewal arrivals. This approximation is not only highly accurate for a wide range of parameter settings, but is also exact in various limiting cases.
13.
Yuttana Ratibenyakool, Communications in Statistics - Theory and Methods, 2020, 49(14): 3537-3556
14.
This article studies the estimation of the change point in panel models. We extend Bai (2010) and Feng et al. (2009) to the case of stationary or nonstationary regressors and error terms, with or without a change point present. We prove consistency and derive the asymptotic distributions of the Ordinary Least Squares (OLS) and First Difference (FD) estimators. We find that the FD estimator is robust in all cases considered.
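A single-series least squares change point sketch conveys the estimation idea (this is an illustrative OLS break estimator for one mean-shift series, not the panel OLS/FD estimators studied in the paper; names ours): the break date is the split minimizing the total within-segment sum of squares.

```python
import numpy as np

def ols_change_point(y):
    """Least squares change point estimator for a one-time mean shift:
    return the split index k minimizing the sum of squared deviations
    around the two segment means."""
    y = np.asarray(y, float)
    T = len(y)
    best_k, best_ssr = 1, np.inf
    for k in range(1, T):                 # break after observation k
        ssr = np.sum((y[:k] - y[:k].mean()) ** 2) \
            + np.sum((y[k:] - y[k:].mean()) ** 2)
        if ssr < best_ssr:
            best_k, best_ssr = k, ssr
    return best_k

rng = np.random.default_rng(1)
y = np.r_[rng.normal(0, 0.1, 60), rng.normal(3, 0.1, 40)]
print(ols_change_point(y))  # should recover the break at k = 60
```

The FD estimator studied in the paper instead works with first differences of the data, which is what makes it robust to nonstationary regressors and errors.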
15.
Karlis and Santourian [14] proposed a model-based clustering algorithm, the expectation-maximization (EM) algorithm, to fit mixtures of the multivariate normal-inverse Gaussian (NIG) distribution. However, the EM algorithm for the multivariate NIG mixture requires a set of initial values to begin the iterative process, and the number of components has to be given a priori. In this paper, we present a learning-based EM algorithm whose aim is to overcome these weaknesses of Karlis and Santourian's EM algorithm [14]. The proposed learning-based EM algorithm was inspired by Yang et al. [24], whose self-clustering process it emulates. Numerical experiments showed promising results compared to Karlis and Santourian's EM algorithm. Moreover, the methodology is applicable to the analysis of extrasolar planets. Our analysis provides an understanding of the clustering results in the ln P - ln M and ln P - e spaces, where M is the planetary mass, P is the orbital period and e is the orbital eccentricity. Our identified groups capture two phenomena: (1) the characteristics of the two clusters in ln P - ln M space might be related to tidal and disc interactions (see [9]); and (2) there are two clusters in ln P - e space.
16.
The log-normal distribution is widely used to model non-negative data in many areas of applied research. In this paper, we introduce and study a family of distributions with support on the non-negative reals, termed the log-epsilon-skew-normal (LESN), which includes the log-normal distributions as a special case. It is related to the epsilon-skew normal developed in Mudholkar and Hutson (2000) the way the log-normal is related to the normal distribution. We study its main properties, hazard function, moments, and skewness and kurtosis coefficients, and discuss maximum likelihood estimation of the model parameters. We summarize the results of a simulation study examining the behavior of the maximum likelihood estimates, and we illustrate maximum likelihood estimation of the LESN distribution parameters using two real-world data sets.
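A sketch of the LESN density under the epsilon-skew-normal construction of Mudholkar and Hutson (2000), which glues two differently scaled normal halves at theta and then transforms by exp (the exact parameterization may differ from the paper's; names ours):

```python
import numpy as np

SQRT2PI = np.sqrt(2.0 * np.pi)

def lesn_pdf(y, theta=0.0, sigma=1.0, eps=0.0):
    """Density of Y = exp(X) where X is epsilon-skew normal: the half
    below theta uses scale sigma*(1+eps), the half above uses
    sigma*(1-eps); eps = 0 recovers the log-normal density."""
    y = np.asarray(y, float)
    x = np.log(y)
    scale = np.where(x < theta, sigma * (1.0 + eps), sigma * (1.0 - eps))
    phi = np.exp(-0.5 * ((x - theta) / scale) ** 2) / SQRT2PI
    return phi / (sigma * y)  # 1/sigma prefactor plus the Jacobian 1/y

# With eps = 0 this matches the standard log-normal pdf.
y = np.array([0.5, 1.0, 2.0])
lognormal = np.exp(-0.5 * np.log(y) ** 2) / (y * SQRT2PI)
print(np.allclose(lesn_pdf(y), lognormal))  # True
```

Nonzero eps shifts probability mass between the two halves of the log scale, producing the extra skewness the family is designed to capture.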
17.
Guangyu Mao, Communications in Statistics - Theory and Methods, 2020, 49(14): 3572-3584
This paper investigates a class of statistics based on Pearson’s correlation coefficient for testing the mutual independence of a random vector in high dimensions. Two existing statistics, proposed by Schott (2005) and Mao (2014) respectively, are special cases of the class. A generic testing theory for the class of statistics is developed, which clarifies under what conditions the class can be employed for testing. By virtue of the theory, three new tests are introduced and their statistical properties discussed. Simulation studies are conducted to examine the theoretical findings and check the performance of the new tests; the results justify the theoretical findings and show that the newly introduced tests perform well as long as both the dimension and the sample size of the data are moderately large.
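The common building block of such statistics, the sum of squared off-diagonal Pearson correlations, can be sketched as follows (the centering and scaling constants of the cited papers are deliberately omitted; names ours):

```python
import numpy as np

def sum_squared_correlations(X):
    """Sum of squared off-diagonal Pearson correlations of the columns
    of X (n samples x p variables). Under mutual independence each
    squared correlation is small, so large values signal dependence."""
    R = np.corrcoef(X, rowvar=False)
    p = R.shape[0]
    iu = np.triu_indices(p, k=1)      # strictly upper-triangular entries
    return np.sum(R[iu] ** 2)

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 10))    # independent columns
print(sum_squared_correlations(X))    # small: roughly p*(p-1)/(2*(n-1))
```

Schott-type tests center this quantity at its null expectation and scale it to obtain an asymptotically normal statistic.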
18.
In this paper, two bivariate models based on the methods of Marshall and Olkin are introduced. In the first model, a new bivariate distribution is constructed via the method of Marshall and Olkin (1967); it has natural interpretations and can be applied in fatal shock models or in competing risks models. In the second model, the method of Marshall and Olkin (1997) is generalized to the bivariate case, yielding another new bivariate distribution. We call these new distributions the bivariate Gompertz (BGP) distribution and the bivariate Gompertz-geometric (BGPG) distribution, respectively; the BGP model can be obtained as a special case of the BGPG model. We then present various properties of the new bivariate models: the joint and conditional density functions and the joint cumulative distribution function can be obtained in compact form, and the aging properties and the bivariate hazard gradient are discussed. The model has five unknown parameters, and the maximum likelihood estimators cannot be obtained in explicit form. We propose using the EM algorithm to compute the maximum likelihood estimators of the unknown parameters, which is computationally quite tractable. Monte Carlo simulations are performed to investigate the effectiveness of the proposed algorithm. Finally, we analyze three real data sets for illustrative purposes.
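The Marshall-Olkin (1967) fatal-shock construction mentioned above can be sketched by simulation (the Gompertz parameterization and all names are our assumptions, not the paper's exact model): three independent shocks arrive, the third hits both components, and each component fails at its first shock.

```python
import numpy as np

def rgompertz(rng, n, b, c):
    """Inverse-transform Gompertz sample with survival
    S(t) = exp(-(b/c) * (exp(c*t) - 1))  (one common parameterization)."""
    u = rng.random(n)
    return np.log1p(-(c / b) * np.log(u)) / c

def rbivariate_mo_gompertz(rng, n, b1, b2, b3, c):
    """Marshall-Olkin fatal-shock construction: independent Gompertz
    shock times U1, U2, U3 with common shape c; the shared shock U3
    hits both components, so X = min(U1, U3), Y = min(U2, U3)."""
    u1 = rgompertz(rng, n, b1, c)
    u2 = rgompertz(rng, n, b2, c)
    u3 = rgompertz(rng, n, b3, c)
    return np.minimum(u1, u3), np.minimum(u2, u3)

rng = np.random.default_rng(3)
x, y = rbivariate_mo_gompertz(rng, 10000, b1=1.0, b2=1.0, b3=0.5, c=0.8)
print(np.corrcoef(x, y)[0, 1] > 0)  # shared shock induces positive dependence
```

Because Gompertz hazards with a common shape add, each marginal here is again Gompertz, which is what makes the bivariate Gompertz construction tractable.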
19.
Buffered Autoregressive Models With Conditional Heteroscedasticity: An Application to Exchange Rates
This article introduces a new model called the buffered autoregressive model with generalized autoregressive conditional heteroscedasticity (BAR-GARCH). The proposed model, an extension of the BAR model in Li et al. (2015), can capture the buffering phenomenon of time series in both the conditional mean and the conditional variance, and thus provides a new way to study the nonlinearity of time series. An application to several exchange rates highlights the importance of the BAR-GARCH model relative to the existing AR-GARCH and threshold AR-GARCH models.
20.
In this paper, the adaptive estimation for varying coefficient models proposed by Chen, Wang, and Yao (2015) is extended to allow for nonstationary covariates. The asymptotic properties of the estimator are obtained, showing different convergence rates for integrated and stationary covariates. The nonparametric estimator of the functional coefficient with integrated covariates has a faster convergence rate than the estimator with stationary covariates, and its asymptotic distribution is mixed normal. Moreover, the adaptive estimation is more efficient than least squares estimation for non-normal errors. A simulation study illustrates our theoretical results.