Similar articles: 20 results found
1.
Conditional mean independence (CMI) is one of the most widely used assumptions in the treatment effect literature for achieving model identification. We propose a Kolmogorov–Smirnov-type statistic to test CMI under a specific symmetry condition, together with a bootstrap procedure for obtaining the p-values and critical values required to carry out the test. Results from a simulation study suggest that our test can work very well even in small to moderately sized samples. As an empirical illustration, we apply our test to a dataset that has been used in the literature to estimate the return on college education in China, checking whether the assumption of CMI is supported by the data and whether the extra symmetry condition required by this new test is plausible.

2.
This paper studies a functional coefficient time series model with trending regressors, where the coefficients are unknown functions of time and random variables. We propose a local linear estimation method to estimate the unknown coefficient functions, and establish the corresponding asymptotic theory under mild conditions. We also develop a test procedure to see whether the functional coefficients take particular parametric forms. For practical use, we further propose a Bayesian approach to select the bandwidths, and conduct several numerical experiments to examine the finite sample performance of our proposed local linear estimator and the test procedure. The results show that the local linear estimator works well and that the proposed test has satisfactory size and power. In addition, our simulation studies show that the Bayesian bandwidth selection method performs better than the cross-validation method. Furthermore, we use the functional coefficient model to study the relationship between consumption per capita and income per capita in the United States, and show that the functional coefficient model with our proposed local linear estimator and Bayesian bandwidth selection method performs well in both in-sample fitting and out-of-sample forecasting.
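The local linear idea used above can be sketched generically: at each point t0, fit a kernel-weighted straight line and take its intercept as the estimate of the coefficient function. This is a minimal sketch assuming an Epanechnikov kernel and a hand-picked bandwidth; it is not the authors' trending-regressor estimator, and the paper's Bayesian bandwidth selection is not reproduced.

```python
def local_linear(t, y, t0, h):
    """Local linear estimate of m(t0) in y_i = m(t_i) + error.

    Minimises sum_i K((t_i - t0)/h) * (y_i - a - b*(t_i - t0))^2
    over (a, b) and returns the intercept a.  Epanechnikov kernel;
    the bandwidth h here is an illustrative choice.
    """
    S0 = S1 = S2 = T0 = T1 = 0.0
    for ti, yi in zip(t, y):
        u = (ti - t0) / h
        if abs(u) >= 1.0:
            continue
        w = 0.75 * (1.0 - u * u)        # Epanechnikov kernel weight
        d = ti - t0
        S0 += w
        S1 += w * d
        S2 += w * d * d
        T0 += w * yi
        T1 += w * d * yi
    det = S0 * S2 - S1 * S1             # normal-equations determinant
    return (S2 * T0 - S1 * T1) / det

# noiseless example: m(t) = t^2 is recovered up to O(h^2) bias
n = 201
t = [i / (n - 1) for i in range(n)]
y = [ti ** 2 for ti in t]
est = local_linear(t, y, 0.5, 0.1)
```

On this noiseless design the estimate at t0 = 0.5 is 0.25 plus a small smoothing bias of order h².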

3.
In this paper we address the evaluation of measurement process quality. We mainly focus on the evaluation procedure, insofar as it is based on the numerical outcomes of measurements of a single physical quantity. We challenge the approach in which the ‘exact’ value of the observed quantity is compared with the error interval obtained from the measurements under test, and we propose a procedure in which reference measurements are used as a ‘gold standard’. To this purpose, we designed a specific t-test procedure, explained here. We also describe and discuss a numerical simulation experiment demonstrating the behaviour of our procedure.

4.
Given the very large amount of data obtained every day through population surveys, much new research could reuse this information instead of collecting new samples. Unfortunately, relevant data are often disseminated across different files obtained through different sampling designs. Data fusion is a set of methods used to combine information from different sources into a single dataset. In this article, we are interested in a specific problem: the fusion of two data files, one of which is quite small. We propose a model-based procedure combining a logistic regression with an Expectation-Maximization algorithm. Results show that despite the lack of data, this procedure can perform better than standard matching procedures.

5.
We propose a method to maximize the accuracy of the estimation of piecewise constant and piecewise smooth variance functions in a nonparametric heteroscedastic fixed design regression model. Difference-based initial estimates are obtained from the given observations. An estimator is then constructed by an iterative regularization method with an analysis-prior undecimated three-level Haar transform as the regularization term. This method outperforms an existing adaptive estimation procedure in the mean square sense on all the standard test functions considered, as well as on the functions we target. Some simulations and comparisons with other methods are conducted to assess the performance of the proposed method.
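The difference-based initial estimate mentioned above can be illustrated with the classical first-order difference estimator for a constant variance: successive differences cancel a smooth trend, leaving roughly twice the noise variance. This is a generic sketch; the paper's piecewise variant and its Haar-regularised refinement are not reproduced here.

```python
import random

def diff_variance(y):
    """First-order difference-based variance estimate for the model
    y_i = f(x_i) + sigma * e_i on an ordered design: differencing
    cancels the smooth trend f, and E[(y_{i+1} - y_i)^2] ~ 2*sigma^2."""
    n = len(y)
    return sum((y[i + 1] - y[i]) ** 2 for i in range(n - 1)) / (2.0 * (n - 1))

random.seed(0)
n = 2000
# smooth quadratic trend plus unit-variance Gaussian noise
y = [(i / n) ** 2 + random.gauss(0.0, 1.0) for i in range(n)]
sigma2_hat = diff_variance(y)
```

On this simulated design the estimate lands close to the true noise variance of 1; the trend's contribution to the differences is negligible.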

6.
Bootstrap tests: how many bootstraps?
In practice, bootstrap tests must use a finite number of bootstrap samples. This means that the outcome of the test will depend on the sequence of random numbers used to generate the bootstrap samples, and it necessarily results in some loss of power. We examine the extent of this power loss and propose a simple pretest procedure for choosing the number of bootstrap samples so as to minimize experimental randomness. Simulation experiments suggest that this procedure will work very well in practice.
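How the Monte Carlo p-value depends on the number of bootstrap samples B can be seen in a minimal sketch of a generic bootstrap test of a zero mean (the paper's pretest rule for choosing B is not reproduced; the statistic and resampling scheme here are illustrative assumptions):

```python
import random

def bootstrap_pvalue(x, B, rng):
    """Bootstrap p-value for H0: E[X] = 0, using |sample mean| as the
    test statistic and resampling from the null-centred data."""
    n = len(x)
    xbar = sum(x) / n
    centred = [xi - xbar for xi in x]          # impose the null
    t_obs = abs(xbar)
    exceed = 0
    for _ in range(B):
        boot = [rng.choice(centred) for _ in range(n)]
        if abs(sum(boot) / n) >= t_obs:
            exceed += 1
    return (exceed + 1) / (B + 1)              # standard Monte Carlo correction

rng = random.Random(42)
x = [rng.gauss(1.0, 1.0) for _ in range(50)]   # true mean is 1, so H0 is false
p = bootstrap_pvalue(x, B=199, rng=rng)
```

With a clearly false null the p-value is small, but rerunning with a different random seed would give a (slightly) different p: that seed-dependence is exactly the experimental randomness the article's pretest procedure is designed to control.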

7.
Consider the standard treatment-control model with a time-to-event endpoint. We propose a novel, interpretable test statistic from a quantile function point of view. The large-sample consistency of our estimator is proven theoretically for fixed bandwidth values and validated empirically. A Monte Carlo simulation study also shows that, for small sample sizes, using a tuning parameter through a smooth quantile function estimator improves efficiency in terms of MSE compared with direct application of the classic Kaplan–Meier survival function estimator. The procedure is finally illustrated via an application to epithelial ovarian cancer data.
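The Kaplan–Meier baseline referred to above, and the step-function quantiles read off from it, can be sketched as follows. This is the generic (unsmoothed) estimator, not the smooth quantile estimator the abstract proposes.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.  events[i] is 1 for an observed
    event, 0 for right censoring.  Returns a list of (t, S(t)) at the
    distinct event times."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = drop = 0
        while i < len(data) and data[i][0] == t:
            d += data[i][1]           # events at time t
            drop += 1                 # subjects leaving the risk set
            i += 1
        if d > 0:
            s *= 1.0 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= drop
    return curve

def km_quantile(curve, q):
    """Smallest t with S(t) <= 1 - q (q = 0.5 gives the median)."""
    for t, s in curve:
        if s <= 1.0 - q:
            return t
    return None

curve = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 1, 1, 1])
median = km_quantile(curve, 0.5)
```

With no censoring, the curve drops by 1/n at each event time, so the estimated median of this five-point sample is 3.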

8.
We consider a Bayesian approach to the study of independence in a two-way contingency table which has been obtained from a two-stage cluster sampling design. If a procedure based on single-stage simple random sampling (rather than the appropriate cluster sampling) is used to test for independence, the p-value may be too small, resulting in a conclusion that the null hypothesis is false when it is, in fact, true. For many large complex surveys the Rao–Scott corrections to the standard chi-squared (or likelihood ratio) statistic provide appropriate inference. For smaller surveys, though, the Rao–Scott corrections may not be accurate, partly because the chi-squared test is inaccurate. In this paper, we use a hierarchical Bayesian model to convert the observed cluster samples to simple random samples. This provides surrogate samples which can be used to derive the distribution of the Bayes factor. We demonstrate the utility of our procedure using an example and also provide a simulation study which establishes our methodology as a viable alternative to the Rao–Scott approximations for relatively small two-stage cluster samples. We also show the additional insight gained by displaying the distribution of the Bayes factor rather than simply relying on a summary of the distribution.

9.
Quantile-based reliability analysis has received much attention recently. We propose new quantile-based tests for exponentiality against the decreasing mean residual quantile function (DMRQ) and new better than used in expectation (NBUE) classes of alternatives. The exact null distribution of the test statistic is derived when the alternative class is DMRQ. The asymptotic properties of both test statistics are studied. The performance of the proposed tests is compared with that of other existing tests in the literature through a simulation study. Finally, we illustrate our test procedure using real data sets.

10.
The Wilcoxon–Mann–Whitney (WMW) test is a popular rank-based two-sample testing procedure for the strong null hypothesis that the two samples come from the same distribution. A modified WMW test, the Fligner–Policello (FP) test, has been proposed for comparing the medians of two populations. A fact that may be under-appreciated among some practitioners is that the FP test can also be used to test the strong null, like the WMW. In this article, we compare the power of the WMW and FP tests for testing the strong null. Our results show that neither test is uniformly better than the other and that there can be substantial differences in power between the two choices. We propose a new, modified WMW test that combines the WMW and FP tests. Monte Carlo studies show that the combined test has good power compared with either the WMW or the FP test. We provide a fast implementation of the proposed test in open-source software. Supplementary materials for this article are available online.  
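The WMW building block can be sketched with the Mann–Whitney U statistic, which counts concordant pairs across the two samples (ties count one half). This is only the classical statistic; the FP studentization and the article's combined test are not reproduced.

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x against sample y:
    the number of pairs (x_i, y_j) with x_i > y_j, counting ties
    as one half.  Under the strong null its mean is len(x)*len(y)/2."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

u_x = mann_whitney_u([1, 2, 3], [4, 5, 6])   # every y exceeds every x
u_y = mann_whitney_u([4, 5, 6], [1, 2, 3])   # the mirror-image count
```

The two directed counts always sum to len(x)*len(y), so a value far from that midpoint in either direction signals a distributional difference.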

11.
We propose a new test for the equality of the location parameters of two populations based on the empirical cumulative distribution function (ECDF). The test statistic is obtained as a power divergence between two ECDFs. The test is shown to be distribution free, and its null distribution is obtained. We conducted an empirical power comparison of the proposed test with several other available tests in the literature, and found that the proposed test performs better than the competitors considered here under several population structures. We also use two real datasets to illustrate the procedure.
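The abstract does not specify its power divergence, so as a hedged stand-in here is the simplest distance between two ECDFs, the Kolmogorov–Smirnov sup distance; the statistic below illustrates the ECDF-comparison idea only, not the proposed test.

```python
def ecdf(sample):
    """Return the empirical CDF of `sample` as a callable F(t)."""
    s = sorted(sample)
    n = len(s)
    def F(t):
        # proportion of observations <= t (linear scan; fine for small n)
        return sum(1 for v in s if v <= t) / n
    return F

def ks_distance(x, y):
    """Sup-norm distance between the two ECDFs, evaluated at the
    pooled sample points, where the supremum is attained."""
    Fx, Fy = ecdf(x), ecdf(y)
    return max(abs(Fx(t) - Fy(t)) for t in x + y)

d_same = ks_distance([1, 2, 3], [1, 2, 3])   # identical samples
d_apart = ks_distance([1, 2], [3, 4])        # completely separated samples
```

Identical samples give distance 0 and fully separated samples give distance 1, the two extremes any ECDF-based location test discriminates between.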

12.
We propose a nonparametric procedure to test for changes in correlation matrices at an unknown point in time. The new test requires constant expectations and variances, but only mild assumptions on the serial dependence structure, and has considerable power in finite samples. We derive the asymptotic distribution under the null hypothesis of no change as well as local power results and apply the test to stock returns.

13.
Software packages usually report the results of statistical tests using p-values. Users often interpret these values by comparing them with standard thresholds, for example, 0.1, 1, and 5%, which is sometimes reinforced by a star rating (***, **, and *, respectively). We consider an arbitrary statistical test whose p-value p is not available explicitly, but can be approximated by Monte Carlo samples, for example, by bootstrap or permutation tests. The standard implementation of such tests usually draws a fixed number of samples to approximate p. However, the probability that the exact and the approximated p-value lie on different sides of a threshold (the resampling risk) can be high, particularly for p-values close to a threshold. We present a method to overcome this. We consider a finite set of user-specified intervals that cover [0, 1] and that can be overlapping. We call these p-value buckets. We present algorithms that, with arbitrarily high probability, return a p-value bucket containing p. We prove that for both a bounded resampling risk and a finite runtime, overlapping buckets need to be employed, and that our methods both bound the resampling risk and guarantee a finite runtime for such overlapping buckets. To interpret decisions with overlapping buckets, we propose an extension of the star rating system. We demonstrate that our methods are suitable for use in standard software, including for low p-value thresholds occurring in multiple testing settings, and that they can be computationally more efficient than standard implementations.
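The bucket idea can be sketched with a much simpler sequential scheme: keep drawing Monte Carlo exceedance indicators until a Hoeffding-style confidence band for the true p-value fits inside some bucket. This is an illustrative simplification under assumed parameters, not the authors' algorithm, and it uses non-overlapping buckets, so it can fail to terminate when p sits on a bucket boundary, which is precisely the problem the paper's overlapping buckets solve.

```python
import math

def mc_pvalue_bucket(exceeds, buckets, eps=1e-3, max_n=100000):
    """Draw indicators of 'resampled statistic exceeds the observed
    one' from exceeds() until a Hoeffding confidence interval for the
    true p-value lies inside one bucket (lo, hi); return that bucket."""
    hits = 0
    for n in range(1, max_n + 1):
        hits += exceeds()
        half = math.sqrt(math.log(2.0 / eps) / (2.0 * n))   # Hoeffding radius
        lo = max(0.0, hits / n - half)
        hi = min(1.0, hits / n + half)
        for b in buckets:
            if b[0] <= lo and hi <= b[1]:
                return b
    return None   # boundary case: no bucket certified within max_n draws

# degenerate sampler: the resampled statistic always exceeds (true p = 1),
# so the top bucket is certified after only a handful of draws
buckets = [(0.0, 0.01), (0.01, 0.05), (0.05, 1.0)]
bucket = mc_pvalue_bucket(lambda: 1, buckets)
```

The stopping rule adapts the number of samples to how close p is to a bucket boundary, which is the key contrast with drawing a fixed number of resamples.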

14.
We propose a weighted empirical likelihood approach to inference with multiple samples, including stratified sampling, the estimation of a common mean using several independent and non-homogeneous samples and inference on a particular population using other related samples. The weighting scheme and the basic result are motivated and established under stratified sampling. We show that the proposed method can ideally be applied to the common mean problem and problems with related samples. The proposed weighted approach not only provides a unified framework for inference with multiple samples, including two-sample problems, but also facilitates asymptotic derivations and computational methods. A bootstrap procedure is also proposed in conjunction with the weighted approach to provide better coverage probabilities for the weighted empirical likelihood ratio confidence intervals. Simulation studies show that the weighted empirical likelihood confidence intervals perform better than existing ones.

15.
The multinomial selection problem is considered under the formulation of comparison with a standard, where each system is required to be compared to a single system, referred to as a “standard,” as well as to other alternative systems. The goal is to identify systems that are better than the standard, or to retain the standard when it is equal to or better than the other alternatives, in terms of the probability of generating the largest or smallest performance measure. We derive new multinomial selection procedures for comparison with a standard to be applied in different scenarios, including an exact small-sample procedure and an approximate large-sample procedure. Empirical results and proofs are presented to demonstrate the statistical validity of our procedures. Tables of the procedure parameters and the corresponding exact probability of correct selection are also provided.

16.
We use the forward search to provide robust Mahalanobis distances to detect the presence of outliers in a sample of multivariate normal data. Theoretical results on order statistics and on estimation in truncated samples provide the distribution of our test statistic. We also introduce several new robust distances with associated distributional results. Comparisons of our procedure with tests using other robust Mahalanobis distances show the good size and high power of our procedure. We also provide a unification of results on correction factors for estimation from truncated samples.
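As a baseline for the robust distances discussed above, here is the classical (non-robust) squared Mahalanobis distance in two dimensions; the forward-search robustification itself is not reproduced, and the 2-D restriction is only to keep the matrix inverse explicit.

```python
def mahalanobis2(points):
    """Squared Mahalanobis distance of each 2-D point from the sample
    mean, using the classical (non-robust) sample covariance."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # unbiased sample covariance entries
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    det = sxx * syy - sxy * sxy
    # explicit inverse of the 2x2 covariance matrix
    ixx, iyy, ixy = syy / det, sxx / det, -sxy / det
    out = []
    for p in points:
        dx, dy = p[0] - mx, p[1] - my
        out.append(ixx * dx * dx + 2.0 * ixy * dx * dy + iyy * dy * dy)
    return out

# four corners of a square: by symmetry, every point is equally distant
d2 = mahalanobis2([(0, 0), (1, 0), (0, 1), (1, 1)])
```

The forward search orders observations by such distances computed from a clean subset, so that a few gross outliers cannot inflate the mean and covariance they are measured against, which is exactly the masking problem of this classical version.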

17.
Large, family-based imaging studies can provide a better understanding of the interactions of environmental and genetic influences on brain structure and function. The interpretation of imaging data from large family studies, however, has been hindered by the paucity of well-developed statistical tools that permit the analysis of complex imaging data together with behavioral and clinical data. In this paper, we propose two methods for these analyses. First, a variance components model along with score statistics is used to test linear hypotheses of unknown parameters, such as the associations of brain measures (e.g., cortical and subcortical surfaces) with their potential genetic determinants. Second, we develop a test procedure based on a resampling method to assess simultaneously the statistical significance of linear hypotheses across the entire brain. The value of these methods lies in their computational simplicity and in their applicability to a wide range of imaging data. Simulation studies show that our test procedure can accurately control the family-wise error rate. We apply our methods to the detection of statistical significance of gender-by-age interactions and of the effects of genetic variation on the thickness of the cerebral cortex in a family study of major depressive disorder.

18.
We propose a procedure to identify the lowest dose whose effect exceeds that of a threshold dose, under the assumption that the dose mean response is monotone in a dose-response test. We use statistics based on contrasts among sample means and apply a group sequential procedure to identify the dose efficiently. If the dose can be identified at an early step of the sequential test, the procedure can be terminated with few observations, which makes it attractive from an economic point of view. In simulation studies, we compare procedures based on three contrasts.

19.
Standard econometric methods can overlook individual heterogeneity in empirical work, generating inconsistent parameter estimates in panel data models. We propose the use of methods that allow researchers to easily identify, quantify, and address estimation issues arising from individual slope heterogeneity. We first characterize the bias in the standard fixed effects estimator when the true econometric model allows for heterogeneous slope coefficients. We then introduce a new test to check whether the fixed effects estimation is subject to heterogeneity bias. The procedure tests the population moment conditions required for fixed effects to consistently estimate the relevant parameters in the model. We establish the limiting distribution of the test and show that it is very simple to implement in practice. Examining firm investment models to showcase our approach, we show that heterogeneity bias-robust methods identify cash flow as a more important driver of investment than previously reported. Our study demonstrates analytically, via simulations, and empirically the importance of carefully accounting for individual specific slope heterogeneity in drawing conclusions about economic behavior.
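The fixed effects (within) estimator whose bias the abstract characterizes can be sketched as follows: demean x and y within each unit to sweep out the individual intercepts, then pool and run no-intercept OLS. This is the textbook estimator under homogeneous slopes, not the paper's heterogeneity test; the toy panel is fabricated for illustration.

```python
def fixed_effects_slope(panel):
    """Within (fixed effects) estimator of a common slope b in
    y_it = a_i + b * x_it + e_it: demean within each unit i,
    then pool the demeaned data into a no-intercept OLS slope."""
    num = den = 0.0
    for unit in panel:                       # unit = list of (x, y) pairs
        xbar = sum(x for x, _ in unit) / len(unit)
        ybar = sum(y for _, y in unit) / len(unit)
        for x, y in unit:
            num += (x - xbar) * (y - ybar)   # demeaned cross-product
            den += (x - xbar) ** 2           # demeaned sum of squares
    return num / den

# two units with very different intercepts but a common slope b = 2;
# in this noiseless case the within estimator recovers b exactly
panel = [
    [(x, 10.0 + 2.0 * x) for x in (0.0, 1.0, 2.0)],
    [(x, -5.0 + 2.0 * x) for x in (1.0, 2.0, 3.0)],
]
b_hat = fixed_effects_slope(panel)
```

When the true slopes differ across units, this pooled estimate converges to a weighted average of them rather than any structural parameter, which is the heterogeneity bias the article's test is designed to detect.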

20.
We propose a Bayesian procedure to sample from the distribution of the multi-dimensional effective dose. This effective dose is the set of dose levels of multiple predictive factors that produce a binary response with a fixed probability. We apply our algorithms to parametric and semiparametric logistic regression models, respectively. The graphical display of random samples obtained through Markov chain Monte Carlo can provide some insight into the predictive distribution.
