Similar articles
20 similar articles were retrieved.
1.
The Ames Salmonella test is a widely used bioassay method for assessing the mutagenic potency of a potential carcinogen. The test is quick and reliable, and exploits the correlation that exists between mutagenic potential and carcinogenic potential. The data for this case study came from an international study involving 20 laboratories in nine countries. The laboratories participated in a designed experiment in which substances (complex chemical mixtures of the type encountered in the environment) were evaluated for mutagenicity using the Ames test. A stringent protocol was followed. The study's principal aim was to investigate intra- and inter-laboratory variation in test results. The data consist of counts of revertant Salmonella colonies at each of six dose levels of a substance. The data were obtained for each of five test substances from each participating laboratory. The bioassays were carried out according to a prescribed factorial experimental design. Three sets of analysts participated in this case study. They were asked to model the dose-response relationship for two substances, to develop an index of the strength of the relationship, and to assess intra- and inter-laboratory variation in bioassay results.
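A minimal sketch (in Python, with made-up dose levels and counts rather than the study's data) of the kind of dose-response fit asked of the analysts: a Poisson log-linear regression of revertant-colony counts on dose and dose squared, fitted by iteratively reweighted least squares.

import numpy as np

# Hypothetical dose levels and revertant-colony counts (illustrative only).
dose = np.array([0.0, 0.3, 1.0, 3.0, 10.0, 30.0])
counts = np.array([18.0, 24.0, 35.0, 61.0, 110.0, 162.0])

# Poisson log-linear model: log E[count] = b0 + b1*dose + b2*dose^2.
X = np.column_stack([np.ones_like(dose), dose, dose ** 2])
beta = np.linalg.lstsq(X, np.log(counts + 0.5), rcond=None)[0]   # starting values

# Iteratively reweighted least squares for the Poisson GLM.
for _ in range(50):
    mu = np.exp(X @ beta)                      # fitted mean counts
    z = X @ beta + (counts - mu) / mu          # working response
    W = np.diag(mu)                            # working weights
    beta_new = np.linalg.solve(X.T @ W @ X, X.T @ W @ z)
    if np.max(np.abs(beta_new - beta)) < 1e-10:
        beta = beta_new
        break
    beta = beta_new

print("coefficients (intercept, dose, dose^2):", np.round(beta, 4))
print("fitted mean counts:", np.round(np.exp(X @ beta), 1))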

2.
3.
ABSTRACT

Two-dimensionally indexed random coefficient autoregressive (2D-RCAR) models and the corresponding statistical inference are important tools for the analysis of spatial lattice data. The study of such models is motivated by their second-order properties, which are similar to those of 2D-(G)ARCH models that play an important role in spatial econometrics. In this article, we study the asymptotic properties of the two-stage generalized method of moments (2S-GMM) estimator under a general asymptotic framework for 2D-RCAR models. The efficiency, strong consistency, asymptotic normality, and hypothesis tests of the 2S-GMM estimator are derived. A simulation experiment is presented to highlight the theoretical results.
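A toy illustration, not the article's 2S-GMM procedure: simulating one plausible first-order 2D-RCAR specification on a lattice, in which the autoregressive coefficients are perturbed by independent noise at every site, and recovering the mean coefficients with a naive least-squares first stage. The model form and parameter values are assumptions made only for this sketch.

import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 60, 60
a, b = 0.3, 0.25          # mean autoregressive coefficients (illustrative)
tau = 0.05                # std. dev. of the random coefficient perturbations
X = np.zeros((n1, n2))

# Simulate X[i,j] = (a + u)X[i-1,j] + (b + v)X[i,j-1] + e, scanning the lattice.
for i in range(1, n1):
    for j in range(1, n2):
        u, v = rng.normal(0, tau, size=2)
        X[i, j] = (a + u) * X[i - 1, j] + (b + v) * X[i, j - 1] + rng.normal()

# Naive first-stage estimate of (a, b): least squares of X[i,j] on its "past"
# neighbours; a 2S-GMM procedure would refine this with moment weighting.
y = X[1:, 1:].ravel()
Z = np.column_stack([X[:-1, 1:].ravel(), X[1:, :-1].ravel()])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
print("naive estimates of (a, b):", np.round(coef, 3))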

4.
The author considers density estimation from contaminated data where the measurement errors come from two very different sources. A first error, of Berkson type, is incurred before the experiment: the variable X of interest is unobservable and only a surrogate can be measured. A second error, of classical type, is incurred after the experiment: the surrogate can only be observed with measurement error. The author develops two nonparametric estimators of the density of X, valid whenever Berkson errors, classical errors, or a mixture of both are present. Rates of convergence of the estimators are derived and a fully data-driven procedure is proposed. Finite sample performance is investigated via simulations and on a real data example.
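A small sketch that merely contrasts how the two error types enter the data; it does not implement the author's deconvolution estimators. With Berkson error the unobserved variable of interest scatters around the assigned surrogate, while with classical error the surrogate itself is only recorded with additive noise.

import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Berkson error: a surrogate S is assigned first, and the true X scatters around it.
S = rng.uniform(0, 10, n)              # assigned surrogate (e.g. nominal exposure)
X = S + rng.normal(0, 0.8, n)          # unobservable variable of interest

# Classical error: the surrogate is only measured with additive noise.
W = S + rng.normal(0, 0.5, n)          # what is actually recorded

# The density of X must be recovered from W alone, which carries both error types.
print("var(X):", round(X.var(), 3), " var(W):", round(W.var(), 3))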

5.
A cluster methodology, motivated by a robust similarity matrix, is proposed for identifying likely multivariate outlier structure and for estimating weighted least-squares (WLS) regression parameters in linear models. The proposed method is an agglomeration of procedures that runs from clustering the n observations, through a test of the 'no-outlier hypothesis' (TONH), to a weighted least-squares regression estimation. The cluster phase partitions the n observations into a main cluster of size h and a minor cluster of size n−h. A robust distance is derived from the main cluster, upon which the test of the no-outlier hypothesis is conducted. An initial WLS regression estimate is computed from the robust distance obtained from the main cluster. Until convergence, a re-weighted least-squares (RLS) regression estimate is updated with weights based on the normalized residuals. The proposed procedure blends an agglomerative hierarchical cluster analysis with complete linkage, through the TONH, into the re-weighted regression estimation phase; hence we call it cluster-based re-weighted regression (CBRR). The CBRR is compared with three existing procedures using two data sets known to exhibit masking and swamping, and its performance is further examined through a simulation experiment. The results obtained from the data set illustrations and the Monte Carlo study show that the CBRR is effective in detecting multivariate outliers in situations where other methods are susceptible to masking and swamping. The CBRR does not require enormous computation and is largely insusceptible to masking and swamping.
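A rough sketch in the spirit of the procedure just described, with arbitrary choices of cut-offs and weight function, so it should not be read as the authors' exact CBRR: complete-linkage clustering splits the observations into a main and a minor cluster, Mahalanobis-type distances from the main cluster flag potential outliers, and a re-weighted least-squares fit then down-weights large normalized residuals.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cbrr_sketch(X, y, n_iter=20):
    n, p = X.shape
    # 1. Complete-linkage clustering of the observations into two groups;
    #    the larger group plays the role of the "main cluster".
    labels = fcluster(linkage(X, method="complete"), 2, criterion="maxclust")
    main = labels == (np.bincount(labels)[1:].argmax() + 1)

    # 2. Mahalanobis-type distances based on the main cluster only.
    mu, S = X[main].mean(axis=0), np.cov(X[main], rowvar=False)
    d2 = np.einsum("ij,jk,ik->i", X - mu, np.linalg.inv(S), X - mu)

    # 3. Initial weights from the distances, then iterate re-weighted least squares.
    w = 1.0 / (1.0 + d2)
    Xd = np.column_stack([np.ones(n), X])
    for _ in range(n_iter):
        W = np.diag(w)
        beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
        r = y - Xd @ beta
        s = np.median(np.abs(r)) / 0.6745              # robust residual scale
        w = np.where(np.abs(r / s) <= 2.5, 1.0, 2.5 / np.abs(r / s))
    return beta, d2

# Example with a few planted outliers.
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2))
y = 1 + X @ np.array([2.0, -1.0]) + rng.normal(0, 0.5, 100)
X[:5] += 6
y[:5] += 15                                            # contaminated observations
beta, d2 = cbrr_sketch(X, y)
print("coefficients:", np.round(beta, 3))
print("largest distances at indices:", np.argsort(d2)[-5:])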

6.
We propose a general class of Markov-switching ARFIMA (MS-ARFIMA) processes in order to combine the long-memory and Markov-switching strands of the literature. Although the coverage of this class of models is broad, we show that these models can be easily estimated with the proposed Durbin–Levinson–Viterbi algorithm, which combines the Durbin–Levinson and Viterbi procedures. A Monte Carlo experiment reveals that the finite sample performance of the proposed algorithm for a simple mixture model of a Markov-switching mean and an ARFIMA(1, d, 1) process is satisfactory. We apply the MS-ARFIMA models to the US real interest rates and the Nile river level data, respectively. The results are highly consistent with the conjectures made and empirical results found in the literature. In particular, we confirm the conjecture in Beran and Terrin [J. Beran and N. Terrin, Testing for a change of the long-memory parameter, Biometrika 83 (1996), pp. 627–638] that observations 1 to about 100 of the Nile river data seem to be more independent than the subsequent observations, and that the value of the differencing parameter is lower for the first 100 observations than for the subsequent data.
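A minimal sketch of the Durbin–Levinson half of the algorithm, using the closed-form autocovariances of an ARFIMA(0, d, 0) process; the Viterbi step for the Markov-switching mean is omitted and the parameter values are illustrative.

import numpy as np
from math import gamma as G

def arfima_acvf(d, n, sigma2=1.0):
    """Autocovariances gamma(0..n-1) of an ARFIMA(0, d, 0) process."""
    g = np.empty(n)
    g[0] = sigma2 * G(1 - 2 * d) / G(1 - d) ** 2
    for k in range(1, n):
        g[k] = g[k - 1] * (k - 1 + d) / (k - d)
    return g

def durbin_levinson(g):
    """Prediction coefficients and innovation variances from autocovariances."""
    n = len(g) - 1
    phi = np.zeros((n + 1, n + 1))
    v = np.zeros(n + 1)
    v[0] = g[0]
    for k in range(1, n + 1):
        acc = g[k] - phi[k - 1, 1:k] @ g[1:k][::-1]
        phi[k, k] = acc / v[k - 1]
        phi[k, 1:k] = phi[k - 1, 1:k] - phi[k, k] * phi[k - 1, 1:k][::-1]
        v[k] = v[k - 1] * (1 - phi[k, k] ** 2)
    return phi, v

g = arfima_acvf(d=0.3, n=20)
phi, v = durbin_levinson(g)
print("partial autocorrelations:", np.round(np.diag(phi)[1:6], 4))
print("innovation variances:", np.round(v[:6], 4))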

7.
This paper proposes various double unit root tests for cross-sectionally dependent panel data. The cross-sectional correlation is handled by the projection method [P.C.B. Phillips and D. Sul, Dynamic panel estimation and homogeneity testing under cross section dependence, Econom. J. 6 (2003), pp. 217–259; H.R. Moon and B. Perron, Testing for a unit root in panels with dynamic factors, J. Econom. 122 (2004), pp. 81–126] or the subtraction method [J. Bai and S. Ng, A PANIC attack on unit roots and cointegration, Econometrica 72 (2004), pp. 1127–1177]. Pooling or averaging is applied to combine results from different panel units. To estimate the autoregressive parameters, either ordinary least squares estimation [D.P. Hasza and W.A. Fuller, Estimation for autoregressive processes with unit roots, Ann. Stat. 7 (1979), pp. 1106–1120] or symmetric estimation [D.L. Sen and D.A. Dickey, Symmetric test for second differencing in univariate time series, J. Bus. Econ. Stat. 5 (1987), pp. 463–473] is used, and to adjust mean functions, either ordinary mean adjustment or recursive mean adjustment is used. Combining the different methods for defactoring to eliminate the cross-sectional dependency, integrating results from panel units, estimating the parameters, and adjusting mean functions yields a variety of tests for double unit roots in panel data. Simple asymptotic distributions of the proposed test statistics are derived, which can be used to find critical values of the test statistics.

We perform a Monte Carlo experiment to compare the performance of these tests and to suggest optimal tests for a given panel data set. Application of the proposed tests to real data, the yearly export panel data sets of several Latin American countries over the past 50 years, illustrates the usefulness of the proposed tests for panel data, in that they reveal stronger evidence of double unit roots than the componentwise double unit root tests of Hasza and Fuller [Estimation for autoregressive processes with unit roots, Ann. Stat. 7 (1979), pp. 1106–1120] or Sen and Dickey [Symmetric test for second differencing in univariate time series, J. Bus. Econ. Stat. 5 (1987), pp. 463–473].


8.
The Wisconsin Epidemiologic Study of Diabetic Retinopathy is a population-based epidemiological study carried out in Southern Wisconsin during the 1980s. The resulting data were analysed by different statisticians and ophthalmologists during the last two decades. Most of the analyses were carried out on the baseline data, although there were two follow-up studies on the same population. A Bayesian analysis of the first follow-up data, taken four years after the baseline study, was carried out by Angers and Biswas [Angers, J.-F. and Biswas, A., 2004, A Bayesian analysis of the four-year follow-up data of the Wisconsin epidemiologic study of diabetic retinopathy. Statistics in Medicine, 23, 601–615], in which the best model in terms of covariate inclusion was chosen and estimates of the associated covariate effects were obtained, with the baseline data used to set the prior for the parameters. In the present article we consider a univariate transformation of the bivariate ordinal data, and a parallel analysis with the much simpler univariate data is carried out. The results are then compared with those of Angers and Biswas (2004). In conclusion, our analyses suggest that the univariate analysis fails to detect features of the data found by the bivariate analysis. Even a univariate transformation of our data with quite high correlation with both the left and right eyes is inadequate.

9.
In this article we propose a nonparametric test for autoregressive conditional heteroscedasticity based on finite-state Markov chains. A simple Monte Carlo experiment suggests that in finite samples it performs comparably to the Lagrange multiplier test under conditional normality and is superior for the t, lognormal, and exponential distributions. As an illustration, we apply both tests to Canadian/U.S. forward foreign exchange data.
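An illustrative version of the idea (the article's exact statistic may differ): discretize the squared, centred series into a few states by quantiles and apply a likelihood-ratio test of independence against a first-order Markov chain, since serial dependence in the squares is the hallmark of ARCH effects.

import numpy as np
from scipy.stats import chi2

def markov_arch_test(x, n_states=3):
    """LR test of serial independence of the discretized squared series."""
    s2 = (x - x.mean()) ** 2
    # Assign each squared observation to a quantile-based state 0..n_states-1.
    edges = np.quantile(s2, np.linspace(0, 1, n_states + 1)[1:-1])
    states = np.searchsorted(edges, s2)
    # Transition counts of the induced finite-state chain.
    T = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        T[a, b] += 1
    n = T.sum()
    expected = T.sum(axis=1, keepdims=True) * T.sum(axis=0, keepdims=True) / n
    mask = T > 0
    lr = 2 * np.sum(T[mask] * np.log(T[mask] / expected[mask]))
    df = (n_states - 1) ** 2
    return lr, chi2.sf(lr, df)

# Example: an ARCH(1)-type series versus i.i.d. noise.
rng = np.random.default_rng(3)
e = rng.normal(size=2000)
x = np.empty(2000)
x[0] = e[0]
for t in range(1, 2000):
    x[t] = e[t] * np.sqrt(0.2 + 0.7 * x[t - 1] ** 2)
print("ARCH series (stat, p-value):", markov_arch_test(x))
print("i.i.d. noise (stat, p-value):", markov_arch_test(rng.normal(size=2000)))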

10.
Counting by weighing is widely used in industry and is often more efficient than counting manually, which is time-consuming and prone to human error, especially when the number of items is large. Lower confidence bounds on the numbers of items in infinitely many future bags, based on the weights of the bags, were proposed recently in Liu et al. [Counting by weighing: Know your numbers with confidence, J. Roy. Statist. Soc. Ser. C 65(4) (2016), pp. 641–648]. These confidence bounds are constructed using the data from one calibration experiment and apply to different parameters (or numbers), but they have a frequency interpretation similar to that of a usual confidence set for a single parameter. In this paper, the more challenging problem of constructing two-sided confidence intervals is studied. A simulation-based method for computing the critical constant is proposed. This method is proven to give the required critical constant as the number of simulations goes to infinity, and it is shown to be easily implemented on an ordinary computer to compute the critical constant accurately and quickly. The methodology is illustrated with a real data example.
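A much-simplified sketch of the general idea rather than the authors' procedure: item weights are taken as i.i.d. normal, a calibration experiment with known counts yields estimates of the per-item mean and standard deviation, a two-sided interval for a future count collects the integers whose implied bag weight lies within c standard deviations of the observed weight, and the critical constant c is then calibrated by simulation.

import numpy as np

rng = np.random.default_rng(4)
mu_true, sd_true = 2.0, 0.1          # illustrative per-item weight distribution

# Calibration experiment: bags with known item counts and observed total weights.
counts = np.repeat([20, 50, 100], 10)
weights = np.array([rng.normal(n * mu_true, np.sqrt(n) * sd_true) for n in counts])

# Estimate the per-item mean and standard deviation from the calibration data.
mu_hat = np.sum(counts * weights) / np.sum(counts ** 2)
sig_hat = np.sqrt(np.mean((weights - counts * mu_hat) ** 2 / counts))

def interval(w, c, n_max=400):
    """All integer counts n whose implied weight lies within c sd's of w."""
    n = np.arange(1, n_max)
    ok = np.abs(w - n * mu_hat) <= c * np.sqrt(n) * sig_hat
    if not ok.any():                       # fall back to the nearest integer count
        n0 = int(round(w / mu_hat))
        return n0, n0
    return int(n[ok].min()), int(n[ok].max())

def coverage(c, reps=500):
    """Estimated coverage of the interval over simulated future bags."""
    hits = 0
    for _ in range(reps):
        n = int(rng.integers(10, 200))
        w = rng.normal(n * mu_true, np.sqrt(n) * sd_true)
        lo, hi = interval(w, c)
        hits += (lo <= n <= hi)
    return hits / reps

for c in (1.5, 2.0, 2.5, 3.0):
    print(f"c = {c}: estimated coverage {coverage(c):.3f}")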

11.
12.
ABSTRACT

In this paper, we propose modified spline estimators for nonparametric regression models with right-censored data, especially when the censored response observations are converted to synthetic data. Efficient implementation of these estimators depends on the set of knot points and an appropriate smoothing parameter. We use three algorithms, the default selection method (DSM), the myopic algorithm (MA), and the full search algorithm (FSA), to select the optimal set of knots in a penalized spline method with a smoothing parameter chosen according to different criteria, including the improved version of the Akaike information criterion (AICc), generalized cross validation (GCV), restricted maximum likelihood (REML), and the Bayesian information criterion (BIC). We also consider the smoothing spline (SS), which uses all the data points as knots. The main goal of this study is to compare the performance of the algorithm and criteria combinations in the suggested penalized spline fits under censored data. A Monte Carlo simulation study is performed and a real data example is presented to illustrate the ideas in the paper. The results confirm that the FSA slightly outperforms the other methods, especially for high censoring levels.
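A compact sketch of a penalized spline fit with a fixed set of knots and a GCV-chosen smoothing parameter, so closest in spirit to the DSM variant above; the myopic and full-search knot-selection algorithms and the censored-data synthetic responses are not reproduced here.

import numpy as np

def penalized_spline_gcv(x, y, n_knots=15, degree=3, lambdas=np.logspace(-4, 4, 40)):
    """Truncated-power-basis penalized spline; smoothing parameter chosen by GCV."""
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    # Basis: polynomial part plus truncated power functions at the knots.
    B = np.column_stack([x ** j for j in range(degree + 1)] +
                        [np.maximum(x - k, 0) ** degree for k in knots])
    # Penalize only the truncated-power coefficients.
    D = np.diag([0.0] * (degree + 1) + [1.0] * n_knots)
    n = len(y)
    best = None
    for lam in lambdas:
        A = np.linalg.solve(B.T @ B + lam * D, B.T)
        H = B @ A                                  # hat matrix
        rss = np.sum((y - H @ y) ** 2)
        gcv = n * rss / (n - np.trace(H)) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, lam, A @ y)
    return best                                    # (GCV score, lambda, coefficients)

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)
gcv, lam, coef = penalized_spline_gcv(x, y)
print(f"chosen lambda = {lam:.4g}, GCV = {gcv:.4g}")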

13.
Current studies using the decision-making trial and evaluation laboratory (DEMATEL) method assume that the survey results obtained from a group of people are reliable. However, this assumption might not always hold. In this study, we propose an integrated approach using the corrected item-total correlation and split-half methods to evaluate the consistency of the survey data. A case is presented to show that the proposed approach can identify respondents whose opinions differ from those of the others. As a managerial implication, the decision maker can further examine whether these differing opinions should be taken into consideration when the DEMATEL method is used.
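A small sketch of the two consistency checks mentioned, written as generic reliability calculations rather than any particular DEMATEL implementation: each respondent's flattened influence ratings are correlated with the average of everyone else's ratings, and a split-half correlation with the Spearman-Brown correction summarizes overall agreement. The survey data below are randomly generated for illustration.

import numpy as np

rng = np.random.default_rng(6)
# Hypothetical survey: 12 experts rate pairwise influence among 6 factors (scale 0-4).
base = rng.uniform(0, 4, 36)                    # a shared "consensus" pattern
ratings = np.clip(base + rng.normal(0, 0.5, size=(12, 36)), 0, 4)
ratings[3] = rng.uniform(0, 4, 36)              # one expert answers unlike the others

def corrected_correlations(R):
    """Correlation of each expert's ratings with the mean of the other experts."""
    out = []
    for i in range(len(R)):
        others = np.delete(R, i, axis=0).mean(axis=0)
        out.append(np.corrcoef(R[i], others)[0, 1])
    return np.array(out)

def split_half(R, rng):
    """Split-half consistency of the panel with the Spearman-Brown correction."""
    idx = rng.permutation(len(R))
    a = R[idx[: len(R) // 2]].mean(axis=0)
    b = R[idx[len(R) // 2:]].mean(axis=0)
    r = np.corrcoef(a, b)[0, 1]
    return 2 * r / (1 + r)

print("corrected expert-total correlations:", np.round(corrected_correlations(ratings), 2))
print("split-half reliability:", round(split_half(ratings, rng), 2))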

14.
A regression model with a possible structural change and with a small number of measurements is considered. A priori information about the shape of the regression function is used to formulate the model as a linear regression model with inequality constraints, and a likelihood ratio test for the presence of a change-point is constructed. The exact null distribution of the test statistic is given. Consistency of the test is proved when the noise level goes to zero. Numerical approximations to the powers against various alternatives are given and compared with the powers of the k-linear-r-ahead recursive residuals tests and CUSUM tests. The performance of four different estimators of the change-point is studied in a Monte Carlo experiment. An application of the procedures to some real data is also presented.
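A generic illustration of a likelihood-ratio-type change-point test in a regression with Gaussian noise; it imposes no inequality constraints and does not reproduce the paper's exact null distribution. The statistic compares the residual sum of squares of a single line with that of the best broken line over a grid of candidate change points.

import numpy as np

def changepoint_lrt(x, y):
    """LRT-type statistic for a broken-line alternative over candidate change points."""
    n = len(y)
    X0 = np.column_stack([np.ones(n), x])
    rss0 = np.sum((y - X0 @ np.linalg.lstsq(X0, y, rcond=None)[0]) ** 2)
    best_rss, best_cp = np.inf, None
    for c in x[2:-2]:                       # keep a few points on each side
        X1 = np.column_stack([X0, np.maximum(x - c, 0.0)])
        rss = np.sum((y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]) ** 2)
        if rss < best_rss:
            best_rss, best_cp = rss, c
    return n * np.log(rss0 / best_rss), best_cp

rng = np.random.default_rng(7)
x = np.linspace(0, 1, 25)                   # a small number of measurements
y = 1 + 0.5 * x + 2.0 * np.maximum(x - 0.6, 0) + rng.normal(0, 0.1, 25)
stat, cp = changepoint_lrt(x, y)
print(f"LRT-type statistic {stat:.2f}, estimated change point {cp:.2f}")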

15.
The record scheme is a method for reducing the total time on test of an experiment. In this scheme, items are observed sequentially and only values smaller than all previous ones are recorded. In some situations, when the experiments are time-consuming and items are sometimes lost during the experiment, the record scheme dominates the usual random sample scheme [M. Doostparast and N. Balakrishnan, Optimal sample size for record data and associated cost analysis for exponential distribution, J. Statist. Comput. Simul. 80(12) (2010), pp. 1389–1401]. Estimation of the mean of an exponential distribution based on record data has been treated by Samaniego and Whitaker [On estimating population characteristics from record breaking observations I. Parametric results, Naval Res. Logist. Q. 33 (1986), pp. 531–543] and Doostparast [A note on estimation based on record data, Metrika 69 (2009), pp. 69–80]. The lognormal distribution is used in a wide range of applications when the multiplicative scale is appropriate and the log-transformation removes the skew and brings about symmetry of the data distribution [N.T. Longford, Inference with the lognormal distribution, J. Statist. Plann. Inference 139 (2009), pp. 2329–2340]. In this paper, point estimates as well as confidence intervals for the unknown parameters are obtained; the problem is also addressed from the Bayesian point of view. To assess the performance of the estimators obtained, a simulation study is conducted. For illustration purposes, a real data set, due to Lawless [Statistical Models and Methods for Lifetime Data, 2nd ed., John Wiley & Sons, New York, 2003], is analysed using the proposed procedures.
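A tiny sketch of the record scheme itself; the exponential and lognormal point and interval estimators discussed above are not reproduced. Items are inspected sequentially and a value is recorded only when it falls below every value seen so far.

import numpy as np

def lower_records(x):
    """Return the lower record values and the indices at which they occur."""
    values, times = [], []
    current_min = np.inf
    for i, v in enumerate(x):
        if v < current_min:
            current_min = v
            values.append(v)
            times.append(i)
    return np.array(values), np.array(times)

rng = np.random.default_rng(8)
lifetimes = rng.lognormal(mean=2.0, sigma=0.5, size=200)   # illustrative lognormal data
rec, when = lower_records(lifetimes)
print("number of lower records:", len(rec))
print("record values:", np.round(rec, 3))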

16.
This article considers the estimation and testing of a within-group two-stage least squares (TSLS) estimator for instruments with varying degrees of weakness in a longitudinal (panel) data model. We show that adding the repeated cross-sectional information to a regression model can improve estimation under weak instruments. Moreover, the consistency and limiting distribution of the TSLS estimator are established when both N and T tend to infinity. Some asymptotically pivotal tests are extended to a longitudinal data model and their asymptotic properties are examined. A Monte Carlo experiment is conducted to evaluate the finite sample performance of the proposed estimators.
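A minimal sketch of a within-group two-stage least squares estimator on simulated panel data; it illustrates the estimator only and says nothing about the weak-instrument asymptotics studied in the article.

import numpy as np

def within_tsls(y, X, Z, groups):
    """Within-transform y, X, Z by group, then two-stage least squares."""
    def demean(A):
        A = A.reshape(len(A), -1).astype(float).copy()
        for g in np.unique(groups):
            A[groups == g] -= A[groups == g].mean(axis=0)
        return A
    yt, Xt, Zt = demean(y), demean(X), demean(Z)
    Xhat = Zt @ np.linalg.lstsq(Zt, Xt, rcond=None)[0]    # first stage
    beta = np.linalg.lstsq(Xhat, yt, rcond=None)[0]       # second stage
    return beta.ravel()

# Simulated panel: N units, T periods, endogenous regressor with one instrument.
rng = np.random.default_rng(9)
N, T = 50, 8
groups = np.repeat(np.arange(N), T)
alpha = np.repeat(rng.normal(size=N), T)        # unit fixed effects
z = rng.normal(size=N * T)                      # instrument
u = rng.normal(size=N * T)                      # structural error
x = 0.6 * z + 0.5 * u + rng.normal(size=N * T)  # endogenous regressor
y = alpha + 1.5 * x + u
print("within-group TSLS estimate of the slope:", np.round(within_tsls(y, x, z, groups), 3))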

17.
ABSTRACT

Background: Many exposures in epidemiological studies have nonlinear effects, and the problem is to choose an appropriate functional relationship between such exposures and the outcome. One common approach is to investigate several parametric transformations of the covariate of interest and to select a posteriori the function that fits the data best. However, such an approach may result in an inflated Type I error. Methods: Through a simulation study, we generated data from Cox models with different transformations of a single continuous covariate. We investigated the Type I error rate and the power of the likelihood ratio test (LRT) corresponding to three different procedures that considered the same set of parametric dose-response functions. The first, unconditional, approach did not involve any model selection, while the second, conditional, approach was based on a posteriori selection of the parametric function. The proposed third approach was similar to the second except that it used a corrected critical value for the LRT to ensure a correct Type I error. Results: The Type I error rate of the second approach was twice the nominal size. For simple monotone dose-response relationships, the corrected test had power similar to the unconditional approach, while for non-monotone dose-response relationships it had higher power. A real-life application, which focused on the effect of body mass index on the risk of coronary heart disease death, illustrated the advantage of the proposed approach. Conclusion: Our results confirm that a posteriori selection of the functional form of the dose-response relationship induces Type I error inflation. The corrected procedure, which can be applied in a wide range of situations, may provide a good trade-off between Type I error and power.
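A simplified sketch of the correction idea, transplanted to a Gaussian linear model instead of the Cox model used in the article: under the null of no covariate effect, simulate the maximum of the likelihood-ratio statistics over the candidate transformations and use its upper quantile as the corrected critical value.

import numpy as np
from scipy.stats import chi2

transforms = [lambda x: x, np.log, np.sqrt, lambda x: x ** 2]

def max_lrt(x, y):
    """Largest LRT statistic over the candidate transformations of x."""
    n = len(y)
    rss0 = np.sum((y - y.mean()) ** 2)                 # null model: intercept only
    stats = []
    for g in transforms:
        X = np.column_stack([np.ones(n), g(x)])
        rss1 = np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
        stats.append(n * np.log(rss0 / rss1))
    return max(stats)

rng = np.random.default_rng(10)
n = 200
x = rng.uniform(0.5, 5.0, n)                           # a positive exposure

# Null distribution of the maximal statistic, obtained by simulation.
null_stats = [max_lrt(x, rng.normal(size=n)) for _ in range(2000)]
corrected_crit = np.quantile(null_stats, 0.95)

# Naive critical value that ignores the a posteriori selection (chi-square, 1 df).
print("naive 5% critical value:", round(chi2.ppf(0.95, 1), 2))
print("corrected 5% critical value:", round(corrected_crit, 2))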

18.
Clustering gene expression data is an important step in providing information to biologists. A Bayesian clustering procedure using Fourier series with a Dirichlet process prior for clusters was developed. As an optimal computational tool for this Bayesian approach, Gibbs sampling of a normal mixture with a Dirichlet process was implemented to calculate the posterior probabilities when the number of clusters was unknown. Monte Carlo study results showed that the model was useful for suitable clustering. The proposed method was applied to the budding yeast Saccharomyces cerevisiae and provided biologically interpretable results.
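A stripped-down sketch of Gibbs sampling for a Dirichlet-process mixture of normals with known observation variance, in one dimension and ignoring the Fourier-series representation of the expression profiles: each observation is reassigned to an existing cluster or to a new one according to Chinese-restaurant-process weights times the predictive density.

import numpy as np

def dp_normal_gibbs(y, alpha=1.0, sigma2=1.0, mu0=0.0, tau2=25.0, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    z = np.zeros(n, dtype=int)                    # start with a single cluster

    def predictive(yi, members):
        # Posterior-predictive density of yi given the current members of a cluster
        # (normal prior N(mu0, tau2) on the cluster mean, known variance sigma2).
        m = len(members)
        prec = 1.0 / tau2 + m / sigma2
        mean = (mu0 / tau2 + members.sum() / sigma2) / prec
        var = 1.0 / prec + sigma2
        return np.exp(-0.5 * (yi - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

    for _ in range(n_iter):
        for i in range(n):
            z[i] = -1                              # remove observation i from its cluster
            labels = [c for c in np.unique(z) if c >= 0]
            probs = [np.sum(z == c) * predictive(y[i], y[z == c]) for c in labels]
            probs.append(alpha * predictive(y[i], np.array([])))   # open a new cluster
            probs = np.array(probs) / np.sum(probs)
            pick = rng.choice(len(probs), p=probs)
            z[i] = labels[pick] if pick < len(labels) else (max(labels, default=-1) + 1)
    return z

# Two well-separated groups of "expression profiles", each summarized by one score.
rng = np.random.default_rng(11)
y = np.concatenate([rng.normal(-3, 1, 40), rng.normal(3, 1, 40)])
z = dp_normal_gibbs(y)
print("cluster sizes in the final sweep:", np.bincount(z))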

19.
Ranked set sampling (RSS) is a sampling procedure that can be used to improve the cost efficiency of selecting sample units for an experiment or a study. In this paper, RSS is considered for estimating the location and scale parameters a and b>0, as well as the population mean, from the family F((x−a)/b). Modified best linear unbiased estimators (BLUEs) and best linear invariant estimators (BLIEs) are considered. Numerical computations with different location-scale distributions and different sample sizes are conducted to assess the efficiency of the suggested estimators. It is found that the efficiencies of the modified BLIEs are uniformly higher than those of the BLUEs for all distributions considered in this study. The modified BLUEs and BLIEs are more efficient when the underlying distribution is symmetric.
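A short sketch of how a ranked set sample is drawn and used for a simple mean estimate; the modified BLUE and BLIE calculations of the article are not reproduced. In each cycle, k sets of k units are drawn, each set is ranked (here by the actual values, i.e. perfect ranking), and only the i-th smallest unit of the i-th set is measured.

import numpy as np

def ranked_set_sample(draw, k=4, cycles=10, rng=None):
    """Generate a ranked set sample of size k*cycles from a sampler draw(size)."""
    if rng is None:
        rng = np.random.default_rng()
    sample = []
    for _ in range(cycles):
        for i in range(k):
            candidates = np.sort(draw(k))      # rank one set of k units
            sample.append(candidates[i])       # measure only the i-th order statistic
    return np.array(sample)

rng = np.random.default_rng(12)
draw = lambda size: rng.normal(10.0, 2.0, size)    # illustrative location-scale family

rss = ranked_set_sample(draw, k=4, cycles=250, rng=rng)
srs = draw(4 * 250)                                # simple random sample of equal size
print("RSS mean estimate:", round(rss.mean(), 3))
print("SRS mean estimate:", round(srs.mean(), 3))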

20.
For right-censored data, the accelerated failure time (AFT) model is an alternative to the commonly used proportional hazards regression model. It is a linear model for the (log-transformed) outcome of interest, and is particularly useful for censored outcomes that are not time-to-event, such as laboratory measurements. We provide a general and easily computable definition of the R2 measure of explained variation under the AFT model for right-censored data. We study its behavior under different censoring scenarios and under different error distributions; in particular, we also study its robustness when the parametric error distribution is misspecified. Based on Monte Carlo investigation results, we recommend the log-normal distribution as a robust error distribution to be used in practice for the parametric AFT model when the R2 measure is of interest. We apply our methodology to a data set on alcohol consumption during pregnancy from Ukraine.

