Similar Articles
Found 20 similar articles.
1.
One of the well-known problems with testing sharp null hypotheses against two-sided alternatives is that, as sample sizes diverge, every consistent test rejects the null with probability converging to one, even when it is true. This problem emerges in practically all applications of traditional two-sided tests. The main purpose of the present paper is to overcome this impasse by providing a general solution to the problem of testing an equivalence null interval against two one-sided alternatives. Our goal is to go beyond the limitations of likelihood-based methods by working in a nonparametric permutation framework. The solution requires the nonparametric combination of dependent permutation tests, which is the methodological tool that implements Roy’s union–intersection principle. To obtain practical solutions, the related algorithm is presented. To demonstrate its effectiveness in practice, a simple example and some simulation results are also presented. In addition, for every pair of consistent partial test statistics it is proved that, as sample sizes diverge, the rejection probability (RP) converges to zero when the effect lies in the open equivalence interval, and converges to one when the effect lies outside that interval.
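A minimal sketch of the union–intersection idea in a permutation setting, shown for a two-sample mean difference with equivalence interval (-delta, delta). This illustrates the principle only, not the paper's nonparametric combination algorithm; the function names, the min-p combination rule, and all parameter values are choices made for the example.

```python
import numpy as np

def one_sided_perm_p(x, y, shift, tail, n_perm=4000, seed=0):
    """One-sided two-sample permutation p-value for the boundary null
    mu_x - mu_y = shift; shifting x makes the groups exchangeable
    under that boundary."""
    rng = np.random.default_rng(seed)
    xs = x - shift
    pooled = np.concatenate([xs, y])
    n = len(xs)
    obs = xs.mean() - y.mean()
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(pooled)
        d = p[:n].mean() - p[n:].mean()
        count += d >= obs if tail == "greater" else d <= obs
    return (count + 1) / (n_perm + 1)

def ui_equivalence_test(x, y, delta, alpha=0.05):
    """Union-intersection test of H0: |mu_x - mu_y| <= delta against
    H1: |mu_x - mu_y| > delta. Reject when either partial one-sided
    test rejects; at an equivalence boundary only one partial test is
    active, so min(p_hi, p_lo) <= alpha keeps the level near alpha."""
    p_hi = one_sided_perm_p(x, y, +delta, tail="greater")  # effect > +delta?
    p_lo = one_sided_perm_p(x, y, -delta, tail="less")     # effect < -delta?
    return min(p_hi, p_lo), min(p_hi, p_lo) <= alpha

rng = np.random.default_rng(42)
x = rng.normal(1.5, 1.0, 50)   # true effect 1.5 lies outside (-1, 1)
y = rng.normal(0.0, 1.0, 50)
print(ui_equivalence_test(x, y, delta=1.0))
```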

2.
We address the approximation of functionals depending on a system of particles, described by stochastic differential equations (SDEs), in the mean-field limit as the number of particles approaches infinity. This problem is equivalent to estimating the weak solution of the limiting McKean–Vlasov SDE. To that end, our approach uses systems with finite numbers of particles and a time-stepping scheme. In this case, there are two discretization parameters: the number of time steps and the number of particles. Based on these two parameters, we consider different variants of the Monte Carlo and Multilevel Monte Carlo (MLMC) methods and show that, in the best case, the optimal work complexity of MLMC, to estimate the functional in one typical setting with an error tolerance of \(\mathrm{TOL}\), is \(\mathcal{O}(\mathrm{TOL}^{-3})\) when using the partitioning estimator and the Milstein time-stepping scheme. We also consider a method that uses the recent Multi-index Monte Carlo method and show an improved work complexity in the same typical setting of \(\mathcal{O}(\mathrm{TOL}^{-2}\log(\mathrm{TOL}^{-1})^{2})\). Our numerical experiments are carried out on the so-called Kuramoto model, a system of coupled oscillators.
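The following sketch simulates the stochastic Kuramoto particle system with a plain Euler–Maruyama scheme and forms a crude single-level Monte Carlo estimate of a typical functional (the order parameter). It illustrates the setting only, not the MLMC/MIMC estimators or the Milstein scheme analysed in the paper; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def kuramoto_particles(n_particles=256, n_steps=100, T=1.0, coupling=1.0, sigma=0.4):
    """Euler-Maruyama simulation of the stochastic Kuramoto system
    d(theta_i) = (omega_i + (K/N) * sum_j sin(theta_j - theta_i)) dt + sigma dW_i.
    Returns terminal phases; omega_i are i.i.d. natural frequencies."""
    dt = T / n_steps
    theta = rng.uniform(0, 2 * np.pi, n_particles)
    omega = rng.normal(0.0, 1.0, n_particles)
    for _ in range(n_steps):
        # Mean-field interaction: pairwise sine coupling, O(N^2) per step.
        drift = omega + coupling * np.mean(np.sin(theta[None, :] - theta[:, None]), axis=1)
        theta = theta + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=n_particles)
    return theta

def order_parameter(theta):
    # |N^{-1} sum_j exp(i theta_j)|: a typical functional of the system.
    return np.abs(np.mean(np.exp(1j * theta)))

# Crude Monte Carlo over independent particle systems.
est = np.mean([order_parameter(kuramoto_particles()) for _ in range(20)])
print(f"MC estimate of E[r(T)]: {est:.3f}")
```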

3.
4.
Bilgehan Güven, Statistics, 2013, 47(4): 802–814
We consider the Fuller–Battese model in which the random effects are allowed to come from non-normal populations. The asymptotic distribution of the F-statistic in this model is derived as the number of groups tends to infinity while the sample size from any group is either fixed or large. The result is used to establish an approximate test for the significance of the random-effect variance component. The robustness of the proposed approximate test is also examined.
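The Fuller–Battese model itself is more involved, but the flavor of the variance-component F test can be seen in the simpler balanced one-way random-effects model. A sketch with assumed non-normal effect and error distributions, not the paper's construction:

```python
import numpy as np
from scipy import stats

def oneway_random_effects_F(y):
    """F statistic for H0: sigma_a^2 = 0 in the balanced one-way
    random-effects model y_ij = mu + a_i + e_ij (one group per row)."""
    k, n = y.shape
    grand = y.mean()
    msb = n * ((y.mean(axis=1) - grand) ** 2).sum() / (k - 1)   # between groups
    msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))
    F = msb / msw
    return F, stats.f.sf(F, k - 1, k * (n - 1))

rng = np.random.default_rng(3)
k, n = 30, 8
a = rng.exponential(1.0, size=(k, 1)) - 1.0        # non-normal random effects
y = 2.0 + a + rng.standard_t(df=5, size=(k, n))    # non-normal errors as well
print(oneway_random_effects_F(y))
```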

5.
This paper considers testing the null hypothesis that a time series is serially uncorrelated when the series, though uncorrelated, may be statistically dependent. This case is of interest in economics and finance applications. The GARCH(1, 1) model is a leading example of a model that generates serially uncorrelated but statistically dependent data. The tests of serial correlation introduced by Andrews and Ploberger (1996, hereafter AP) are generalized for the purpose of testing this null. The rationale for generalizing the AP tests is that they have attractive properties in the setting for which they were originally designed: they are consistent against all non-white-noise alternatives and have good all-round power against nonseasonal alternatives compared with several widely used tests in the literature. These properties are inherited by the generalized AP tests.
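A short simulation makes the key point concrete: GARCH(1, 1) data are serially uncorrelated, yet their squares are autocorrelated, so the series is statistically dependent. This is a generic illustration with arbitrary parameter values.

```python
import numpy as np

def simulate_garch11(n, omega=0.05, alpha=0.1, beta=0.85, seed=0):
    """GARCH(1,1): x_t = sigma_t * z_t with
    sigma_t^2 = omega + alpha * x_{t-1}^2 + beta * sigma_{t-1}^2.
    The x_t are serially uncorrelated but statistically dependent."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    sig2 = omega / (1 - alpha - beta)     # start at the stationary variance
    for t in range(n):
        x[t] = np.sqrt(sig2) * rng.standard_normal()
        sig2 = omega + alpha * x[t] ** 2 + beta * sig2
    return x

def acf_lag1(z):
    z = z - z.mean()
    return (z[1:] * z[:-1]).sum() / (z ** 2).sum()

x = simulate_garch11(20000)
print(f"lag-1 ACF of x:   {acf_lag1(x):+.3f}")       # ~ 0: uncorrelated
print(f"lag-1 ACF of x^2: {acf_lag1(x ** 2):+.3f}")  # > 0: dependent
```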

6.
In this paper we consider some non-parametric goodness-of-fit statistics for testing the partial Koziol–Green regression model. In this model, the response at a given covariate value is subject to random right censoring by two independent censoring times. One of these censoring times is informative in the sense that its survival function is some power of the survival function of the response. The goodness-of-fit statistics are based on an underlying empirical process for which large-sample theory is obtained.
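A small generator illustrates the censoring structure under assumed exponential margins, for which the survival-power relationship holds exactly. It illustrates the model, not the paper's goodness-of-fit statistics; all rates are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(7)

def koziol_green_sample(n, beta=0.5, noninf_rate=0.2):
    """Right-censored data in the spirit of the partial Koziol-Green
    model: the response T is Exp(1), the informative censoring time C1
    has survival S_T(t)**beta (here Exp(beta)), and C2 is an
    independent non-informative Exp(noninf_rate) censoring time."""
    T = rng.exponential(1.0, n)                 # response, S_T(t) = exp(-t)
    C1 = rng.exponential(1.0 / beta, n)         # informative: S_C1 = S_T**beta
    C2 = rng.exponential(1.0 / noninf_rate, n)  # non-informative censoring
    obs = np.minimum.reduce([T, C1, C2])
    status = (T <= np.minimum(C1, C2)).astype(int)  # 1 = uncensored
    return obs, status

obs, status = koziol_green_sample(1000)
print("observed failure fraction:", status.mean())
# Informative censoring alone would leave P(T <= C1) = 1 / (1 + beta).
```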

7.
Pretest–posttest studies are an important and popular method for assessing the effectiveness of a treatment or an intervention in many scientific fields. While the treatment effect, measured as the difference between the two mean responses, is of primary interest, testing the difference between the two distribution functions for the treatment and control groups is also an important problem. The Mann–Whitney test has been a standard tool for testing the difference between distribution functions with two independent samples. We develop empirical likelihood (EL) based methods for the Mann–Whitney test to incorporate the two unique features of pretest–posttest studies: (i) the availability of baseline information for both groups; and (ii) the structure of the data with missingness by design. Our proposed methods combine the standard Mann–Whitney test with the EL method of Huang, Qin and Follmann [(2008), ‘Empirical Likelihood-Based Estimation of the Treatment Effect in a Pretest–Posttest Study’, Journal of the American Statistical Association, 103(483), 1270–1280], the imputation-based empirical likelihood method of Chen, Wu and Thompson [(2015), ‘An Imputation-Based Empirical Likelihood Approach to Pretest–Posttest Studies’, The Canadian Journal of Statistics, accepted for publication], and the jackknife empirical likelihood method of Jing, Yuan and Zhou [(2009), ‘Jackknife Empirical Likelihood’, Journal of the American Statistical Association, 104, 1224–1232]. Theoretical results are presented and the finite-sample performance of the proposed methods is evaluated through simulation studies.
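For reference, a sketch of the classical Mann–Whitney building block that the proposed EL methods extend; the empirical-likelihood and imputation machinery of the paper is not reproduced here, and the data are simulated placeholders.

```python
import numpy as np
from scipy import stats

def mann_whitney_U(x, y):
    """Classical Mann-Whitney statistic U = #{(i, j): x_i > y_j} and its
    normal-approximation p-value for H0: F_x = F_y (no ties assumed)."""
    n, m = len(x), len(y)
    U = np.sum(x[:, None] > y[None, :])
    mu, var = n * m / 2, n * m * (n + m + 1) / 12
    z = (U - mu) / np.sqrt(var)
    return U, 2 * stats.norm.sf(abs(z))

rng = np.random.default_rng(11)
post_treat = rng.normal(0.4, 1, 60)   # posttest responses, treatment arm
post_ctrl = rng.normal(0.0, 1, 60)    # posttest responses, control arm
print(mann_whitney_U(post_treat, post_ctrl))
```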

8.
Marshall–Olkin extended distributions offer a wider range of behaviour than the basic distributions from which they are derived, and may therefore find applications in modeling lifetime data, especially within proportional odds models, and elsewhere. The present paper carries out a simulation study of the likelihood ratio, Wald and score tests for the parameter that distinguishes the extended distribution from the basic one, in the Weibull and exponential cases, allowing for right-censored data. The likelihood ratio test is found to perform better than the others. The test is shown to have sufficient power to detect alternatives that correspond to interesting departures from the basic model and can be useful in modeling.
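A sketch of the likelihood ratio test for the distinguishing parameter in the Marshall–Olkin extended Weibull case, for uncensored data only (the paper also allows right censoring). The log-parametrization, starting values, and data-generating settings are arbitrary choices for the example.

```python
import numpy as np
from scipy import stats, optimize

def mo_weibull_logpdf(x, alpha, shape, scale):
    """Marshall-Olkin extended Weibull: survival
    Gbar = alpha * Fbar / (1 - (1 - alpha) * Fbar) and density
    g = alpha * f / (1 - (1 - alpha) * Fbar)**2, with Weibull base (f, Fbar);
    alpha = 1 recovers the basic Weibull."""
    z = (x / scale) ** shape
    log_f = np.log(shape / scale) + (shape - 1) * np.log(x / scale) - z
    Fbar = np.exp(-z)
    return np.log(alpha) + log_f - 2 * np.log(1 - (1 - alpha) * Fbar)

def neg_ll(params, x, fix_alpha=None):
    a = fix_alpha if fix_alpha is not None else np.exp(params[0])
    k, lam = np.exp(params[-2]), np.exp(params[-1])   # log-parametrized
    return -np.sum(mo_weibull_logpdf(x, a, k, lam))

rng = np.random.default_rng(5)
x = rng.weibull(1.5, 300) * 2.0                    # data from the basic model
full = optimize.minimize(neg_ll, [0.0, 0.0, 0.0], args=(x,))
null = optimize.minimize(neg_ll, [0.0, 0.0], args=(x, 1.0))
lr = 2 * (null.fun - full.fun)                     # LR statistic for H0: alpha = 1
print(lr, stats.chi2.sf(lr, df=1))
```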

9.
The aim of this paper is to compare the parameter estimates of the Marshall–Olkin extended Lindley distribution obtained by six estimation methods: maximum likelihood, ordinary least squares, weighted least squares, maximum product of spacings, Cramér–von Mises and Anderson–Darling. The bias, root mean-squared error, and the average and maximum absolute differences between the true and estimated distribution functions are used as comparison criteria. Although the maximum product of spacings method is not widely used, the simulation study concludes that it is highly competitive with the maximum likelihood method.
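A sketch of the maximum product of spacings method for the Marshall–Olkin extended Lindley distribution. The data-generating shortcut uses the standard exponential/gamma mixture representation of the Lindley distribution; all settings are illustrative.

```python
import numpy as np
from scipy import optimize

def moe_lindley_cdf(x, alpha, theta):
    """Marshall-Olkin extended Lindley cdf: G = F / (1 - (1 - alpha) * (1 - F)),
    where F is the Lindley(theta) cdf."""
    Fbar = (1 + theta * x / (1 + theta)) * np.exp(-theta * x)
    return (1 - Fbar) / (1 - (1 - alpha) * Fbar)

def mps_objective(log_params, x_sorted):
    """Negative mean log-spacing; minimizing it is the MPS criterion."""
    alpha, theta = np.exp(log_params)       # log-parametrization keeps both > 0
    u = moe_lindley_cdf(x_sorted, alpha, theta)
    spacings = np.diff(np.concatenate([[0.0], u, [1.0]]))
    return -np.mean(np.log(np.clip(spacings, 1e-300, None)))

# Illustrative data: ordinary Lindley (alpha = 1) with theta = 0.8, drawn
# via the mixture Exp(theta) w.p. theta/(1+theta), else Gamma(2, theta).
rng = np.random.default_rng(9)
theta = 0.8
mix = rng.random(400) < theta / (1 + theta)
x = np.where(mix, rng.exponential(1 / theta, 400), rng.gamma(2, 1 / theta, 400))
res = optimize.minimize(mps_objective, [0.0, 0.0], args=(np.sort(x),))
print("MPS estimates (alpha, theta):", np.exp(res.x))
```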

10.
Inverted (or inverse) distributions are sometimes very useful for exploring additional properties of a phenomenon that non-inverted distributions cannot capture. We introduce a new inverted model called the inverted Nadarajah–Haghighi distribution, which exhibits a decreasing or unimodal (right-skewed) density, while its hazard rate can be decreasing or upside-down bathtub shaped. Our main focus is the estimation (from both frequentist and Bayesian points of view) of the unknown parameters, along with some mathematical properties of the new model. The Bayes estimators and the associated credible intervals are obtained using Markov chain Monte Carlo techniques under the squared error loss function. Gamma priors are adopted for both the scale and shape parameters. The potential of the distribution is demonstrated by means of two real data sets: it is found to be superior in its ability to model the data compared with the inverted Weibull, inverted Rayleigh, inverted exponential, inverted gamma, inverted Lindley and inverted power Lindley models.
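A compact random-walk Metropolis sketch for the Bayesian side: gamma priors on both parameters and posterior means as the Bayes estimates under squared error loss. The prior hyperparameters, step size, and chain length are assumptions for the example, not the paper's settings.

```python
import numpy as np

def inh_loglik(x, alpha, lam):
    """Log-likelihood of the inverted Nadarajah-Haghighi density
    f(x) = alpha*lam*x**-2 * (1 + lam/x)**(alpha-1) * exp(1 - (1 + lam/x)**alpha)."""
    t = 1 + lam / x
    return np.sum(np.log(alpha * lam) - 2 * np.log(x)
                  + (alpha - 1) * np.log(t) + 1 - t ** alpha)

def log_post(x, alpha, lam, a=1.0, b=1.0):
    # Independent Gamma(a, b) priors on both parameters (assumed shapes).
    prior = (a - 1) * np.log(alpha) - b * alpha + (a - 1) * np.log(lam) - b * lam
    return inh_loglik(x, alpha, lam) + prior

def metropolis(x, n_iter=20000, step=0.15, seed=13):
    """Random-walk Metropolis on (log alpha, log lambda); the posterior
    means are the Bayes estimates under squared error loss."""
    rng = np.random.default_rng(seed)
    th = np.array([0.0, 0.0])            # state on the log scale
    cur = log_post(x, *np.exp(th))
    draws = []
    for _ in range(n_iter):
        prop = th + step * rng.standard_normal(2)
        lp = log_post(x, *np.exp(prop))
        # Include the log-scale Jacobian: + sum(prop) - sum(th).
        if np.log(rng.random()) < lp - cur + prop.sum() - th.sum():
            th, cur = prop, lp
        draws.append(np.exp(th))
    return np.array(draws[n_iter // 2:])  # drop burn-in

# Illustrative data by inversion of the cdf G(x) = exp(1 - (1 + lam/x)**alpha).
rng = np.random.default_rng(1)
alpha0, lam0 = 2.0, 1.5
u = rng.random(300)
x = lam0 / ((1 - np.log(u)) ** (1 / alpha0) - 1)
print("posterior means (alpha, lambda):", metropolis(x).mean(axis=0))
```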

11.
We present a new method for imposing and testing the concavity of cost functions using asymptotic least squares, which can be easily implemented even for nonlinear cost functions. We provide an illustration for a (generalized) Box–Cox cost function with six inputs: capital, labor disaggregated into three skill levels, energy, and intermediate materials. We present a parametric concavity test and compare price elasticities when curvature conditions are imposed versus when they are not. Although concavity is statistically rejected, the estimates are not very sensitive to its imposition. We find stronger substitution between the different types of labor than between any other two inputs.
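The paper's asymptotic least squares machinery is not reproduced here, but the curvature condition itself is easy to check numerically: concavity in prices means the Hessian of the cost function has no positive eigenvalues. A sketch using a six-input Cobb–Douglas cost function, which is known to be concave in prices; the shares and evaluation point are invented for the example.

```python
import numpy as np

def num_hessian(f, p, h=1e-4):
    """Central finite-difference Hessian of f at the price vector p."""
    k = len(p)
    H = np.empty((k, k))
    for i in range(k):
        for j in range(k):
            pp = p.copy(); pp[i] += h; pp[j] += h; fpp = f(pp)
            pm = p.copy(); pm[i] += h; pm[j] -= h; fpm = f(pm)
            mp = p.copy(); mp[i] -= h; mp[j] += h; fmp = f(mp)
            mm = p.copy(); mm[i] -= h; mm[j] -= h; fmm = f(mm)
            H[i, j] = (fpp - fpm - fmp + fmm) / (4 * h * h)
    return H

# Cobb-Douglas cost with shares summing to one is concave in prices,
# so every Hessian eigenvalue should be <= 0 (up to numerical error).
shares = np.array([0.2, 0.3, 0.1, 0.15, 0.15, 0.1])
cost = lambda p: 10.0 * np.prod(p ** shares)

p0 = np.array([1.0, 2.0, 0.5, 1.5, 1.2, 0.8])
eig = np.linalg.eigvalsh(num_hessian(cost, p0))
print("max Hessian eigenvalue (<= 0 for concavity):", eig.max())
```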

12.
Age–period–cohort decomposition requires an identification assumption because there is an exact linear relationship between age, survey period, and birth cohort (age + cohort = period). This paper proposes new decomposition methods based on factor models such as the principal components and partial least squares models. Although factor models are usually applied to handle many observed variables with possible collinearity, here they are applied to overcome the perfect collinearity among the age, period, and cohort dummy variables. Since any unobserved factor in the factor model is represented as a linear combination of the observed variables, the parameter estimates for the age, period, and cohort effects are automatically obtained after the application of these factor models. Simulation results suggest that in almost all cases the performance of the proposed method is better than that of a conventional econometric method. Empirical examples are also provided.
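A minimal sketch of the principal-components route: regress the outcome on all (perfectly collinear) APC dummies using only the components with nonzero singular values, which yields the minimum-norm coefficient vector. This illustrates the mechanics, not necessarily the paper's exact estimator; the toy data are invented.

```python
import numpy as np

def apc_pcr(y, age, period):
    """Age-period-cohort decomposition via principal components:
    project onto the principal components of the dummy matrix with
    nonzero singular values; mapping back gives the minimum-norm
    least-squares coefficients (the identification rule used here)."""
    cohort = period - age
    def dummies(v):
        levels = np.unique(v)
        return (v[:, None] == levels[None, :]).astype(float)
    X = np.hstack([np.ones((len(y), 1)), dummies(age), dummies(period), dummies(cohort)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    keep = s > 1e-8 * s[0]                 # drop the null space of X
    beta = Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])
    return beta

# Toy data: 5 age groups observed in 6 survey periods.
rng = np.random.default_rng(17)
age, period = np.meshgrid(np.arange(5), np.arange(6), indexing="ij")
age, period = age.ravel(), period.ravel()
y = 0.3 * age - 0.1 * period + 0.2 * (period - age) + rng.normal(0, 0.1, age.size)
print(apc_pcr(y, age, period).round(2))
```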

13.
The Durbin–Watson (DW) test for lag-1 autocorrelation has been generalized (DWG) to test for autocorrelation at higher lags; this includes the Wallis test for lag-4 autocorrelation. These tests are also applicable to the important hypothesis of randomness. It is found that, for small sample sizes, a normal distribution or a scaled beta distribution matched to the first two moments approximates the null distribution of the DW and DWG statistics well. The approximations appear adequate even when the samples come from non-normal distributions. These approximations require the first two moments of the statistics, and expressions for these moments are derived.
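The generalized statistic is simple to compute; a sketch, with the usual convention that values near 2 indicate no autocorrelation at the given lag:

```python
import numpy as np

def dw_general(e, k=1):
    """Generalized Durbin-Watson statistic at lag k:
    d_k = sum_t (e_t - e_{t-k})^2 / sum_t e_t^2. k = 1 is the classical
    DW statistic; k = 4 gives the Wallis test for quarterly data."""
    return np.sum((e[k:] - e[:-k]) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(2)
e = rng.standard_normal(200)     # residuals under the randomness hypothesis
for k in (1, 4):
    print(f"d_{k} = {dw_general(e, k):.3f}")   # ~ 2 under no autocorrelation
```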

14.
Algebraic relationships between the Hosmer–Lemeshow (HL), Pigeon–Heyse (J2), and Tsiatis (T) goodness-of-fit statistics for binary logistic regression models with continuous covariates were investigated, and their distributional properties and performance studied using simulations. Groups were formed under the deciles-of-risk (DOR) and partition-covariate-space (PCS) methods. Under DOR, HL and T followed their reported null distributions, while J2 did not. Under PCS, only T followed its reported null distribution, with HL and J2 depending on the number of model covariates and the partitioning. Generally, all three had similar power. Of the three, T performed best, maintaining Type I error rates and having a null distribution invariant to covariate characteristics, number, and partitioning.
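A sketch of the HL statistic under deciles-of-risk grouping. For brevity it is evaluated at the true probabilities; in practice the fitted probabilities from the logistic regression would be used.

```python
import numpy as np
from scipy import stats

def hosmer_lemeshow(y, p_hat, g=10):
    """Hosmer-Lemeshow statistic with deciles-of-risk groups: sort on the
    fitted probabilities, split into g groups, and compare observed and
    expected counts; HL ~ chi2(g - 2) approximately under a correctly
    specified model."""
    order = np.argsort(p_hat)
    groups = np.array_split(order, g)      # deciles of risk when g = 10
    hl = 0.0
    for idx in groups:
        o, e, n = y[idx].sum(), p_hat[idx].sum(), len(idx)
        hl += (o - e) ** 2 / (e * (1 - e / n))   # denominator n*pbar*(1-pbar)
    return hl, stats.chi2.sf(hl, g - 2)

rng = np.random.default_rng(21)
x = rng.normal(size=500)
p = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))    # true logistic model
y = rng.binomial(1, p)
print(hosmer_lemeshow(y, p))
```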

15.
16.
This paper deals with a testing problem for each of the interaction parameters of the Lotka–Volterra ordinary differential equation (ODE) system. In short, when the birth and death rates are fixed, we would like to test whether each interaction parameter is higher or lower than a fixed reference rate. We choose a statistical model in which the actual population sizes are modelled as random perturbations of the solutions to this ODE. By assuming that the random perturbations follow correlated Ornstein–Uhlenbeck processes, we propose the uniformly most powerful test concerning each interaction parameter of the ODE and establish the asymptotic properties of the test. Further, we illustrate the suggested test on the Canadian mink–muskrat data set. This research received financial support from the Natural Sciences and Engineering Research Council of Canada and the Institut des Sciences Mathématiques.
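A sketch of the observation model: an Euler-discretized Lotka–Volterra ODE whose observed population sizes are perturbed by Ornstein–Uhlenbeck processes (taken independent here for brevity, whereas the paper allows correlation). All rates and initial values are illustrative.

```python
import numpy as np

def simulate_lv_ou(T=20.0, dt=0.01, a=1.0, b=0.5, c=0.8, d=0.3,
                   ou_theta=1.0, ou_sigma=0.05, seed=4):
    """Lotka-Volterra ODE (Euler scheme) with observations modelled as
    the ODE path plus Ornstein-Uhlenbeck perturbations, one OU process
    per species."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x, y = 2.0, 1.0                        # prey, predator
    u = np.zeros(2)                        # OU perturbations
    path = np.empty((n, 2))
    for t in range(n):
        dx = x * (a - b * y)               # prey:     dx = x(a - b*y) dt
        dy = y * (-c + d * x)              # predator: dy = y(-c + d*x) dt
        x, y = x + dt * dx, y + dt * dy
        u += -ou_theta * u * dt + ou_sigma * np.sqrt(dt) * rng.standard_normal(2)
        path[t] = (x + u[0], y + u[1])     # observed sizes = ODE + OU noise
    return path

path = simulate_lv_ou()
print("mean observed prey/predator sizes:", path.mean(axis=0).round(2))
```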

17.
Competing risks models are of great importance in reliability and survival analysis. In the literature they are often assumed to have independent causes of failure, which may be unreasonable. In this article, dependent causes of failure are considered by using the Marshall–Olkin bivariate Weibull distribution. After deriving some useful results for the model, we use maximum likelihood (ML), fiducial, and Bayesian methods to estimate the unknown model parameters with a parameter transformation. Simulation studies are carried out to assess the performance of the three methods. Compared with the maximum likelihood method, the fiducial and Bayesian methods provide better parameter estimates.
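A sketch of the shock construction of the Marshall–Olkin bivariate Weibull and the resulting competing-risks data: the shared shock induces dependence between the latent failure times (and a positive probability of ties). All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

def mo_bivariate_weibull(n, lam=(0.6, 0.4, 0.3), shape=1.5):
    """Marshall-Olkin bivariate Weibull via the shock construction:
    with independent Z_k ~ Exp(lam_k), set U1 = min(Z1, Z3) and
    U2 = min(Z2, Z3), then T_j = U_j**(1/shape). The common shock Z3
    makes the two latent failure times dependent."""
    z1 = rng.exponential(1 / lam[0], n)
    z2 = rng.exponential(1 / lam[1], n)
    z3 = rng.exponential(1 / lam[2], n)
    t1 = np.minimum(z1, z3) ** (1 / shape)
    t2 = np.minimum(z2, z3) ** (1 / shape)
    return t1, t2

# Competing risks: observe the first failure time and its cause.
t1, t2 = mo_bivariate_weibull(10000)
t = np.minimum(t1, t2)
cause = np.where(t1 < t2, 1, 2)   # ties (both hit by Z3) land in cause 2 here
print("P(cause 1), P(cause 2):", (cause == 1).mean(), (cause == 2).mean())
```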

18.
The aim of this article is to compare, via Monte Carlo simulations, the finite-sample properties of the parameter estimates of the Marshall–Olkin extended exponential distribution obtained by ten estimation methods: maximum likelihood, modified moments, L-moments, maximum product of spacings, ordinary least squares, weighted least squares, percentile, Cramér–von Mises, Anderson–Darling, and right-tail Anderson–Darling. The bias, root mean-squared error, and the absolute and maximum absolute differences between the true and estimated distribution functions are used as comparison criteria. The simulation study reveals that the L-moments and maximum product of spacings methods are highly competitive with the maximum likelihood method in small as well as large samples.

19.
The asymptotic variance plays an important role in inference based on interval estimates of the attributable risk. This paper compares the asymptotic variances of the attributable risk estimate obtained using the delta method and the Fisher information matrix for a 2×2 case–control study, a design of great practical relevance. The expressions for these two asymptotic variance estimates are shown to be equivalent. Because the asymptotic variance usually underestimates the standard error, the bootstrap standard error has also been utilized in constructing interval estimates of the attributable risk and compared with those using asymptotic estimates. A simulation study shows that the bootstrap interval estimate performs well in terms of coverage probability and confidence length. An exact test procedure for testing independence between the risk factor and the disease outcome using the attributable risk is proposed and is illustrated with real-life examples for small-sample situations where inference using the asymptotic variance may not be valid.
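A sketch of one standard case–control attributable risk estimator (assuming a rare disease, so that the control exposure distribution approximates the population's) together with a percentile bootstrap interval; the table entries are invented for the example.

```python
import numpy as np

def attributable_risk(a, b, c, d):
    """AR estimate for a 2x2 case-control table via
    AR = 1 - P(unexposed | case) / P(unexposed), with P(unexposed)
    estimated from the controls: AR = 1 - b*(c + d) / (d*(a + b)).
    (a, b) = exposed/unexposed cases; (c, d) = exposed/unexposed controls."""
    return 1 - b * (c + d) / (d * (a + b))

def bootstrap_ci(a, b, c, d, n_boot=5000, level=0.95, seed=6):
    """Percentile bootstrap CI, resampling cases and controls separately
    since their totals are fixed by the case-control design."""
    rng = np.random.default_rng(seed)
    n1, n0 = a + b, c + d
    ars = np.empty(n_boot)
    for i in range(n_boot):
        ab = rng.binomial(n1, a / n1)          # exposed cases in resample
        cb = rng.binomial(n0, c / n0)          # exposed controls in resample
        ars[i] = attributable_risk(ab, n1 - ab, cb, max(n0 - cb, 1))
    return tuple(np.quantile(ars, [(1 - level) / 2, (1 + level) / 2]))

a, b, c, d = 40, 60, 20, 80                    # illustrative 2x2 table
print(attributable_risk(a, b, c, d), bootstrap_ci(a, b, c, d))
```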

20.
In this paper, we consider the bootstrap procedure for the augmented Dickey–Fuller (ADF) unit root test, implementing the modified divergence information criterion (MDIC; Mantalos et al. [An improved divergence information criterion for the determination of the order of an AR process, Commun. Statist. Comput. Simul. 39(5) (2010a), pp. 865–879; Forecasting ARMA models: A comparative study of information criteria focusing on MDIC, J. Statist. Comput. Simul. 80(1) (2010b), pp. 61–73]) for selecting the optimal number of lags in the estimated model. The asymptotic distribution of the resulting bootstrap ADF/MDIC test is established and its finite-sample performance is investigated through Monte Carlo simulations. The proposed bootstrap tests are found to have empirical sizes that are generally much closer to their nominal values than tests relying on other information criteria, such as the Akaike information criterion [H. Akaike, Information theory and an extension of the maximum likelihood principle, in Proceedings of the 2nd International Symposium on Information Theory, B.N. Petrov and F. Csáki, eds., Akadémiai Kiadó, Budapest, 1973, pp. 267–281]. The simulations reveal that the proposed procedure is quite satisfactory even for models with large negative moving-average coefficients.
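A simplified residual-resampling bootstrap of the ADF statistic gives the flavor of the procedure. MDIC is not available in standard libraries, so AIC-based lag selection stands in for it, and the i.i.d. resampling below is cruder than the paper's scheme.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def bootstrap_adf(y, n_boot=499, seed=12):
    """Residual-based bootstrap of the ADF test: impose the unit-root
    null, resample the centred first differences, rebuild bootstrap
    series, and recompute the ADF statistic each time. Lag order is
    chosen by AIC as a stand-in for MDIC."""
    rng = np.random.default_rng(seed)
    stat = adfuller(y, autolag="AIC")[0]          # observed ADF statistic
    resid = np.diff(y) - np.diff(y).mean()        # centred innovations under H0
    stats_b = np.empty(n_boot)
    for i in range(n_boot):
        eps = rng.choice(resid, size=len(resid), replace=True)
        yb = np.concatenate([[y[0]], y[0] + np.cumsum(eps)])  # unit-root series
        stats_b[i] = adfuller(yb, autolag="AIC")[0]
    return stat, np.mean(stats_b <= stat)         # left-tailed bootstrap p-value

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(200))           # a true unit-root process
print(bootstrap_adf(y))
```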
