Similar Articles
20 similar articles found (search time: 15 ms)
1.
Forecasting in economic data analysis is dominated by linear prediction methods, where predicted values are calculated from a fitted linear regression model. For settings with multiple predictor variables, multivariate nonparametric models have been proposed in the literature. However, empirical studies indicate that the prediction performance of multi-dimensional nonparametric models may be unsatisfactory. We propose a new semiparametric model average prediction (SMAP) approach to analyse panel data and investigate its prediction performance with numerical examples. Estimation of each individual covariate effect requires only univariate smoothing and thus may be more stable than previous multivariate smoothing approaches. The estimation of the optimal weight parameters incorporates the longitudinal correlation, and the asymptotic properties of the estimated results are carefully studied in this paper.
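The weighting idea can be illustrated with a minimal sketch: fit one univariate smoother per covariate, then choose combination weights by least squares. The Nadaraya–Watson smoother, the non-negativity constraint on the weights, and all names below are illustrative assumptions, not the authors' actual SMAP estimator (which also accounts for longitudinal correlation).

```python
# Hypothetical sketch of model-average prediction from univariate smoothers.
import numpy as np
from scipy.optimize import nnls

def nw_fit(x, y, grid, h=0.15):
    """Nadaraya-Watson estimate of E[y | x] at the points in `grid`."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h)**2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.uniform(-1, 1, size=(n, p))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1]**2 + rng.normal(0, 0.2, n)

# Candidate predictions: one univariate smoother per covariate.
M = np.column_stack([nw_fit(X[:, j], y, X[:, j]) for j in range(p)])

# Non-negative least-squares weights for the model average.
weights, _ = nnls(M, y)
print("model-average weights:", weights.round(3))
```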

2.
Interval-grouped data arise, in general, when the event of interest cannot be directly observed and it is only known to have occurred within an interval. In this framework, a nonparametric kernel density estimator is proposed and studied. The approach is based on the classical Parzen–Rosenblatt estimator and on a generalisation of the binned kernel density estimator. The asymptotic bias and variance of the proposed estimator are derived under the usual assumptions, and the effect of using non-equally spaced grouped data is analysed. Additionally, a plug-in bandwidth selector is proposed. A comprehensive simulation study shows the behaviour of both the estimator and the plug-in bandwidth selector under different scenarios of data grouping. An application to real data confirms the simulation results, revealing the good performance of the estimator whenever the data are not heavily grouped.
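A minimal sketch of the binning idea: each interval's count is placed at its midpoint and smoothed with a Gaussian kernel. The midpoint placement, kernel choice, and fixed bandwidth are simplifying assumptions; the paper's estimator and plug-in bandwidth selector are more refined.

```python
# Binned kernel density estimate from interval-grouped counts (illustrative).
import numpy as np

def binned_kde(edges, counts, grid, h):
    """KDE from grouped data given bin edges and per-bin counts."""
    edges, counts = np.asarray(edges, float), np.asarray(counts, float)
    mids = 0.5 * (edges[:-1] + edges[1:])           # interval midpoints
    n = counts.sum()
    u = (grid[:, None] - mids[None, :]) / h         # standardized distances
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)    # Gaussian kernel
    return (k @ counts) / (n * h)

edges = [0, 1, 2, 4, 7]            # non-equally spaced intervals
counts = [5, 12, 20, 8]            # observed frequencies per interval
grid = np.linspace(0, 7, 141)
f_hat = binned_kde(edges, counts, grid, h=0.8)
print(f_hat.max().round(3))
```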

3.
Segmentation of the mean of heteroscedastic data via cross-validation
This paper tackles the problem of detecting abrupt changes in the mean of a heteroscedastic signal by model selection, without prior knowledge of how the noise varies. A new family of change-point detection procedures is proposed, showing that cross-validation methods can succeed in the heteroscedastic framework, whereas most existing procedures are not robust to heteroscedasticity. The robustness to heteroscedasticity of the proposed procedures is supported by an extensive simulation study, together with recent partial theoretical results. An application to Comparative Genomic Hybridization (CGH) data is provided, showing that robustness to heteroscedasticity can indeed be required for the analysis of such data.
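To make the setting concrete, the sketch below segments a signal's mean into K piecewise-constant pieces by dynamic programming and picks K with a simple even/odd-index cross-validation split. The split and the least-squares criterion are illustrative assumptions; the paper's CV procedures are more sophisticated.

```python
# Least-squares change-point segmentation with CV choice of K (toy sketch).
import numpy as np

def best_segmentation(y, K):
    """Optimal K-segment piecewise-constant fit (minimum SSE) via DP."""
    n = len(y)
    s1 = np.concatenate([[0.0], np.cumsum(y)])
    s2 = np.concatenate([[0.0], np.cumsum(y**2)])
    def sse(i, j):  # SSE of y[i:j] around its own mean
        return s2[j] - s2[i] - (s1[j] - s1[i])**2 / (j - i)
    cost = np.full((K + 1, n + 1), np.inf)
    arg = np.zeros((K + 1, n + 1), dtype=int)
    cost[0, 0] = 0.0
    for k in range(1, K + 1):
        for j in range(k, n + 1):
            cands = [cost[k - 1, i] + sse(i, j) for i in range(k - 1, j)]
            best = int(np.argmin(cands))
            cost[k, j], arg[k, j] = cands[best], best + (k - 1)
    bps, j = [], n                       # backtrack the segment starts
    for k in range(K, 0, -1):
        j = arg[k, j]
        bps.append(j)
    return sorted(bps[:-1])              # drop the leading 0

def cv_error(y, K):
    """Fit on even-indexed points, validate on odd-indexed points."""
    tr, va = y[::2], y[1::2]
    bps = best_segmentation(tr, K) + [len(tr)]
    err, prev = 0.0, 0
    for b in bps:
        mu = tr[prev:b].mean()
        err += ((va[prev:min(b, len(va))] - mu)**2).sum()
        prev = b
    return err

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1, 60), rng.normal(3, 0.3, 60)])
best_K = min(range(1, 5), key=lambda K: cv_error(y, K))
print("selected number of segments:", best_K)
```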

4.
The article considers Bayesian analysis of hierarchical models for count, binomial and multinomial data using efficient MCMC sampling procedures. To this end, an improved method of auxiliary mixture sampling is proposed. In contrast to previously proposed samplers, the method uses a bounded number of latent variables per observation, independent of the intensity of the underlying Poisson process in the case of count data, and of the number of experiments in the case of binomial and multinomial data. The bounded number of latent variables results in a more general error distribution, namely a negative log-Gamma distribution with arbitrary integer shape parameter. The required approximations of these distributions by Gaussian mixtures have been computed. Overall, the improvement leads to a substantial increase in the efficiency of auxiliary mixture sampling for highly structured models. The method is illustrated for finite mixtures of generalized linear models and an epidemiological case study.

5.
Single-index models are natural extensions of linear models that overcome the so-called curse of dimensionality, and they are very useful for longitudinal data analysis. In this paper, we develop a new efficient estimation procedure for single-index models with longitudinal data, based on the Cholesky decomposition and the local linear smoothing method. Asymptotic normality is established for the proposed estimators of both the parametric and nonparametric parts. Monte Carlo simulation studies show excellent finite-sample performance. Furthermore, we illustrate our methods with a real data example.

6.
This article develops a new and stable estimator of the information matrix when the EM algorithm is used in maximum likelihood estimation. This estimator is constructed from the smoothed individual complete-data scores that are readily available from running the EM algorithm. The method works for dependent data sets and when the expectation step is an irregular function of the conditioning parameters. In comparison to the approach of Louis (J. R. Stat. Soc., Ser. B 44:226–233, 1982), this new estimator is more stable and easier to implement. Both real and simulated data are used to demonstrate the use of this new estimator.
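As a point of reference, a common score-based construction estimates the information matrix by summing outer products of per-observation score vectors at the MLE. The sketch below implements that generic idea on a toy normal model; the article's smoothed complete-data-score estimator differs in detail.

```python
# Empirical information from individual score vectors (generic sketch).
import numpy as np

def empirical_information(scores):
    """scores: (n, p) array of individual score vectors at the MLE."""
    scores = scores - scores.mean(axis=0)   # scores sum to ~0 at the MLE
    return scores.T @ scores                # sum of outer products

rng = np.random.default_rng(2)
x = rng.normal(1.0, 2.0, size=500)
mu, sig2 = x.mean(), x.var()                # MLEs for a normal sample
scores = np.column_stack([(x - mu) / sig2,
                          (x - mu)**2 / (2 * sig2**2) - 1 / (2 * sig2)])
I_hat = empirical_information(scores)
se = np.sqrt(np.diag(np.linalg.inv(I_hat)))
print("standard errors (mu, sigma^2):", se.round(4))
```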

7.
In this paper, we consider the Marshall–Olkin extended exponential (MOEE) distribution, which is capable of modelling various shapes of failure rates and aging criteria. The purpose of this paper is threefold. First, we derive the maximum likelihood estimators of the unknown parameters and the observed Fisher information matrix from progressively type-II censored data. Second, the Bayes estimates are evaluated by applying Lindley's approximation method and a Markov chain Monte Carlo method under the squared error loss function; we perform a simulation study to compare the proposed Bayes estimators with the maximum likelihood estimators, and we also compute 95% asymptotic confidence intervals and symmetric credible intervals along with their coverage probabilities. Third, we consider one-sample and two-sample prediction problems based on the observed sample and provide appropriate predictive intervals under both the classical and the Bayesian framework. Finally, we analyse a real data set to illustrate the results derived.
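For orientation, the MOEE density is f(x) = αλe^(−λx) / {1 − (1−α)e^(−λx)}², x > 0. The sketch below fits it by maximum likelihood from a complete (uncensored) sample; the progressive type-II censoring of the paper is omitted, and the optimizer choice is an assumption.

```python
# Complete-data MLE for the MOEE distribution (illustrative only).
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, x):
    a, l = np.exp(theta)                    # log-parametrization for positivity
    e = np.exp(-l * x)
    return -np.sum(np.log(a * l * e) - 2 * np.log(1 - (1 - a) * e))

# Simulate via the inverse CDF: F(x) = (1 - e)/(1 - (1-a)e), e = exp(-l x).
rng = np.random.default_rng(3)
a_true, l_true = 2.0, 1.5
u = rng.uniform(size=300)
x = -np.log((1 - u) / (1 - (1 - a_true) * u)) / l_true

res = minimize(neg_loglik, x0=np.log([1.0, 1.0]), args=(x,),
               method="Nelder-Mead")
print("MLE (alpha, lambda):", np.exp(res.x).round(3))
```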

8.
Computational expressions for the exact CDF of Roy's test statistic in MANOVA and the largest eigenvalue of a Wishart matrix are derived based upon their Pfaffian representations given in Gupta and Richards (SIAM J. Math. Anal. 16:852–858, 1985). These expressions allow computations to proceed until a prespecified degree of accuracy is achieved. For both distributions, convergence acceleration methods are used to compute CDF values, achieving reasonably fast run times for dimensions up to 50 and error degrees of freedom as large as 100. Software that implements these computations is described and has been made available on the Web.
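The exact Pfaffian-based expressions are beyond a short example, but a Monte Carlo approximation of the largest-eigenvalue CDF of a standard Wishart matrix W_p(n, I) is easy to write and useful for sanity-checking exact code; the sketch below assumes the identity scale matrix.

```python
# Monte Carlo CDF of the largest eigenvalue of W_p(n, I).
import numpy as np

def largest_eig_cdf_mc(p, n, x, reps=10000, seed=4):
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(reps):
        Z = rng.normal(size=(n, p))
        lam_max = np.linalg.eigvalsh(Z.T @ Z)[-1]   # largest eigenvalue
        count += lam_max <= x
    return count / reps

print(largest_eig_cdf_mc(p=3, n=10, x=25.0))
```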

9.
Square contingency tables with the same row and column classification occur frequently in a wide range of statistical applications, e.g. whenever the members of a matched pair are classified on the same scale, which is usually ordinal. Such tables are analysed by choosing an appropriate loglinear model. We focus on the models of symmetry, triangular, diagonal and ordinal quasi-symmetry. The fit of a specific model is tested by the chi-squared test or the likelihood-ratio test, where p-values are calculated from the asymptotic chi-square distribution of the test statistic or, if this seems unjustified, from the exact conditional distribution. Since the calculation of exact p-values is often not feasible, we propose alternatives based on algebraic statistics combined with MCMC methods.

10.
Dynamic programming (DP) is a fast, elegant method for solving many one-dimensional optimisation problems but, unfortunately, most problems in image analysis, such as restoration and warping, are two-dimensional. We consider three generalisations of DP. The first is iterated dynamic programming (IDP), where DP is used to recursively solve each of a sequence of one-dimensional problems in turn, to find a local optimum. A second algorithm is an empirical, stochastic optimiser, which is implemented by adding progressively less noise to IDP. The final approach replaces DP by a more computationally intensive Forward-Backward Gibbs Sampler, and uses a simulated annealing cooling schedule. Results are compared with existing pixel-by-pixel methods of iterated conditional modes (ICM) and simulated annealing in two applications: to restore a synthetic aperture radar (SAR) image, and to warp a pulsed-field electrophoresis gel into alignment with a reference image. We find that IDP and its stochastic variant outperform the remaining algorithms.
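A toy version of IDP for denoising conveys the idea: an exact 1D Viterbi-style DP over quantized grey levels solves each row with a crude fixed vertical context, and row and column sweeps alternate. The quadratic potentials, the label grid, and the neighbour handling below are simplifying assumptions, not the paper's full algorithm.

```python
# Iterated 1D dynamic programming for image denoising (toy sketch).
import numpy as np

def dp_line(y, fixed, labels, lam):
    """Exact 1D minimizer of sum_i (y_i-f_i)^2 + lam*(f_i-f_{i-1})^2
    + lam*(f_i-fixed_i)^2 over f_i in the label set (Viterbi DP)."""
    n, L = len(y), len(labels)
    unary = ((y[:, None] - labels[None, :])**2
             + lam * (fixed[:, None] - labels[None, :])**2)
    pair = lam * (labels[:, None] - labels[None, :])**2
    cost, back = unary[0].copy(), np.zeros((n, L), dtype=int)
    for i in range(1, n):
        tot = cost[:, None] + pair              # cost[prev] + transition
        back[i] = np.argmin(tot, axis=0)
        cost = tot[back[i], np.arange(L)] + unary[i]
    f = np.zeros(n, dtype=int)
    f[-1] = int(np.argmin(cost))
    for i in range(n - 1, 0, -1):               # backtrack
        f[i - 1] = back[i, f[i]]
    return labels[f]

rng = np.random.default_rng(5)
img = np.zeros((30, 30)); img[10:20, 10:20] = 1.0
noisy = img + rng.normal(0, 0.4, img.shape)
labels = np.linspace(0, 1, 11)                  # quantized grey levels
est = noisy.copy()
for _ in range(3):                              # alternate row/column sweeps
    for r in range(est.shape[0]):               # rows, crude vertical context
        est[r] = dp_line(noisy[r], est[r - 1] if r else est[r], labels, 0.5)
    for c in range(est.shape[1]):               # columns, crude horizontal context
        est[:, c] = dp_line(noisy[:, c], est[:, c - 1] if c else est[:, c],
                            labels, 0.5)
print("mean abs error:", np.abs(est - img).mean().round(3))
```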

11.
When there are several replicates available at each level combination of two factors, nonadditivity can be tested by the usual two-way ANOVA method. However, the ANOVA method cannot be used when the experiment is unreplicated (one observation per cell of the two-way classification). Several tests have been developed to address nonadditivity in unreplicated experiments, starting with Tukey's (1949) one-degree-of-freedom test for nonadditivity. Most of them assume that the interaction term has a multiplicative form, but such tests have low power if the assumed functional form is inappropriate. This has motivated tests that do not assume a specific form for the interaction term. This paper proposes a new test for interaction of this kind, which has the advantage over earlier tests that it can also be used for incomplete two-way tables. A simulation study is performed to evaluate the power of the proposed test and compare it with other well-known tests.
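For reference, Tukey's one-degree-of-freedom test is short enough to state in code; the sketch below assumes a complete (balanced, unreplicated) two-way table, whereas the proposed test also covers incomplete tables.

```python
# Tukey's (1949) one-degree-of-freedom test for nonadditivity.
import numpy as np
from scipy import stats

def tukey_nonadditivity(y):
    r, c = y.shape
    grand = y.mean()
    a = y.mean(axis=1) - grand                  # row effects
    b = y.mean(axis=0) - grand                  # column effects
    ss_nonadd = (a @ y @ b)**2 / ((a**2).sum() * (b**2).sum())
    ss_res = ((y - grand - a[:, None] - b[None, :])**2).sum() - ss_nonadd
    df_res = (r - 1) * (c - 1) - 1
    F = ss_nonadd / (ss_res / df_res)
    return F, stats.f.sf(F, 1, df_res)          # statistic and p-value

rng = np.random.default_rng(6)
y = rng.normal(size=(5, 4)) + 0.5 * np.arange(5)[:, None] * np.arange(4)[None, :]
print(tukey_nonadditivity(y))
```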

12.
In this paper we compare the kernel density estimators proposed by Bhattacharyya et al. (1988) and Jones (1991) for length-biased data, establishing the asymptotic normality of the estimators. A method to construct a new estimator is proposed. Moreover, we extend these results to weighted data and study an estimator of the weight function.
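A sketch of the Jones (1991) estimator, which weights each kernel contribution by 1/x_i and normalizes by the harmonic-mean estimate of the underlying mean; the Gaussian kernel and fixed bandwidth are assumptions for illustration.

```python
# Jones-type kernel density estimate for length-biased data.
import numpy as np

def jones_kde(x, grid, h):
    x = np.asarray(x, float)
    mu_hat = len(x) / np.sum(1.0 / x)               # harmonic-mean estimate
    u = (grid[:, None] - x[None, :]) / h
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)    # Gaussian kernel
    return mu_hat * (k @ (1.0 / x)) / (len(x) * h)

# Length-biased sampling from Exp(1) yields density x*exp(-x), i.e. Gamma(2,1).
rng = np.random.default_rng(7)
x = rng.gamma(shape=2.0, scale=1.0, size=500)
grid = np.linspace(0.01, 6, 200)
f_hat = jones_kde(x, grid, h=0.3)                   # estimates exp(-x)
print(f_hat[:3].round(3))
```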

13.
Centroid-based partitioning cluster analysis is a popular method for segmenting data into more homogeneous subgroups. Visualization can help tremendously to understand the positions of these subgroups relative to each other in higher dimensional spaces and to assess the quality of partitions. In this paper we present several improvements on existing cluster displays using neighborhood graphs with edge weights based on cluster separation and convex hulls of inner and outer cluster regions. A new display called shadow-stars can be used to diagnose pairwise cluster separation with respect to the distribution of the original data. Artificial data and two case studies with real data are used to demonstrate the techniques.

14.
Let X_1, …, X_n be the lifetimes of n items put on test at the same time. The actual lifetimes cannot be observed; however, the items can be inspected at a finite number of time points, and at each inspection the number of failures can be recorded. Only these failure counts at the inspection times are available for making a decision about the distribution of the lifetimes, and a decision can be made at any inspection time. A "sequential" statistical test is developed for the mean lifetime when the probability distribution is assumed to be exponential. Some numerical results are presented. The power and the expected time to decision are compared with those for the idealized situation in which every actual lifetime is recorded, and with those for the case in which only a single inspection is allowed before the decision.
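One way to make such a grouped-data sequential test concrete is a sequential probability ratio test on the multinomial likelihood of the interval failure counts; the SPRT framing, the thresholds, and all names below are illustrative assumptions rather than the article's exact test.

```python
# SPRT on grouped exponential lifetimes observed only at inspection times.
import numpy as np

def interval_loglik(counts, survivors, times, theta):
    """Multinomial log-likelihood of interval failure counts up to now."""
    t = np.concatenate([[0.0], times])
    p = np.exp(-t[:-1] / theta) - np.exp(-t[1:] / theta)  # failure probabilities
    return counts @ np.log(p) - survivors * times[-1] / theta

def sprt(counts_by_time, n, times, theta0, theta1, A=2.94, B=-2.94):
    """Decide between mean theta0 and theta1 after each inspection."""
    for j in range(1, len(times) + 1):
        c, t = np.array(counts_by_time[:j], float), np.array(times[:j])
        surv = n - c.sum()
        llr = (interval_loglik(c, surv, t, theta1)
               - interval_loglik(c, surv, t, theta0))
        if llr >= A:
            return f"accept mean={theta1} at inspection {j}"
        if llr <= B:
            return f"accept mean={theta0} at inspection {j}"
    return "no decision by the final inspection"

rng = np.random.default_rng(8)
n, times = 100, [0.5, 1.0, 1.5, 2.0]
life = rng.exponential(scale=1.0, size=n)              # true mean 1.0
bins = np.concatenate([[0.0], times, [np.inf]])
counts = np.histogram(life, bins=bins)[0][:-1]         # failures per interval
print(sprt(counts, n, times, theta0=2.0, theta1=1.0))
```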

15.
It is customary to use two groups of indices to evaluate a diagnostic method with a binary outcome: validity indices relative to a standard rater (sensitivity, specificity, and positive or negative predictive values) and reliability indices (positive, negative and overall agreement) without a standard rater. However, none of these classic indices is chance-corrected, and this may distort the analysis of the problem (especially in comparative studies). One way of chance-correcting these indices is to use the Delta model (an alternative to the Kappa model), but this requires a computer program to carry out the calculations. This paper gives an asymptotic version of the Delta model, allowing simple expressions to be obtained for the estimator of each of the above-mentioned chance-corrected indices (as well as for its standard error).
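The Delta model's expressions are not given in this abstract; as the familiar chance-corrected comparator it is contrasted with, here is the kappa coefficient computed from a 2 × 2 agreement table.

```python
# Cohen's kappa: chance-corrected overall agreement (the Kappa-model comparator).
import numpy as np

def cohen_kappa(table):
    t = np.asarray(table, float)
    n = t.sum()
    po = np.trace(t) / n                        # observed agreement
    pe = (t.sum(1) @ t.sum(0)) / n**2           # agreement expected by chance
    return (po - pe) / (1 - pe)

print(cohen_kappa([[40, 5], [10, 45]]))         # two raters, binary outcome
```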

16.
This article considers Bayesian p-values for testing independence in 2 × 2 contingency tables with cell counts observed under either the independent binomial sampling scheme or the multinomial sampling scheme. From the frequentist perspective, Fisher's p-value (p_F) is the most commonly used, but it can be conservative for small to moderate sample sizes. From the Bayesian perspective, Bayarri and Berger (2000) proposed the partial posterior predictive p-value (p_PPOST), which avoids the double use of the data that occurs in the posterior predictive p-value (p_POST) proposed by Guttman (1967) and Rubin (1984). The subjective and objective Bayesian p-values in terms of p_POST and p_PPOST are derived under the beta prior and the (noninformative) Jeffreys prior, respectively. Numerical comparisons among p_F, p_POST, and p_PPOST reveal that p_PPOST performs much better than p_F and p_POST for small to moderate sample sizes from the frequentist perspective.
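A minimal sketch of p_POST for a 2 × 2 multinomial table: sample row and column probabilities from Dirichlet posteriors under the independence null, generate replicate tables, and compare a chi-square discrepancy. The Dirichlet(1/2)-type updating and the discrepancy choice are assumptions; p_PPOST requires the partial posterior and is not shown.

```python
# Posterior predictive p-value for independence in a 2x2 table (sketch).
import numpy as np

def chisq(t):
    e = np.outer(t.sum(1), t.sum(0)) / t.sum()
    e = np.where(e > 0, e, 1e-12)               # guard against empty margins
    return ((t - e)**2 / e).sum()

def p_post(table, reps=5000, seed=9):
    rng = np.random.default_rng(seed)
    table = np.asarray(table, float)
    n, d_obs, exceed = int(table.sum()), chisq(table), 0
    for _ in range(reps):
        r = rng.dirichlet(table.sum(1) + 0.5)   # posterior row probabilities
        c = rng.dirichlet(table.sum(0) + 0.5)   # posterior column probabilities
        rep = rng.multinomial(n, np.outer(r, c).ravel()).reshape(2, 2)
        exceed += chisq(rep.astype(float)) >= d_obs
    return exceed / reps

print(p_post([[12, 3], [4, 11]]))
```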

17.
The 2 × 2 crossover design is commonly used to establish average bioequivalence of two treatments. In practice, the sample size for this design is often calculated under the supposition that the true average bioavailabilities of the two treatments are almost identical. However, the average bioequivalence analysis that is subsequently carried out does not reflect this prior belief, which leads to a loss in efficiency. We propose an alternative average bioequivalence analysis that avoids this inefficiency. The validity and substantial power advantages of our proposed method are illustrated by simulations, and two numerical examples with real data are provided.
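For context, the baseline average-bioequivalence analysis is the two one-sided tests (TOST) procedure on the log scale; a sketch under the usual 2 × 2 crossover model (period differences within each sequence, equal-variance t statistics) follows. The 0.80–1.25 limits are the conventional ones, and the data below are simulated.

```python
# TOST for average bioequivalence in a 2x2 crossover (baseline analysis).
import numpy as np
from scipy import stats

def tost_crossover(d1, d2, lo=np.log(0.8), hi=np.log(1.25), alpha=0.05):
    """TOST from per-subject (period1 - period2) log differences,
    d1 for sequence TR and d2 for sequence RT."""
    n1, n2 = len(d1), len(d2)
    est = 0.5 * (d1.mean() - d2.mean())            # T - R effect on log scale
    s2 = ((n1 - 1) * d1.var(ddof=1) + (n2 - 1) * d2.var(ddof=1)) / (n1 + n2 - 2)
    se = 0.5 * np.sqrt(s2 * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    p = max(stats.t.sf((est - lo) / se, df),       # H0: effect <= log 0.8
            stats.t.cdf((est - hi) / se, df))      # H0: effect >= log 1.25
    return round(est, 4), bool(p < alpha)          # True => bioequivalent

rng = np.random.default_rng(10)
d1 = rng.normal(0.05, 0.2, 18)                     # simulated sequence TR
d2 = rng.normal(-0.05, 0.2, 18)                    # simulated sequence RT
print(tost_crossover(d1, d2))
```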

18.
This article introduces a nonparametric warping model for functional data. When the outcome of an experiment is a sample of curves, the data can be seen as realizations of a stochastic process, which takes into account the variation between the different observed curves. The aim of this work is to define a mean pattern that represents the main behaviour of the set of all realizations; to this end, we define the structural expectation of the underlying stochastic function. We then provide empirical estimators of this structural expectation and of each individual warping function, and prove consistency and asymptotic normality for these estimators.

19.
This article considers the problem of testing marginal homogeneity in a 2 × 2 contingency table. We first review some well-known conditional and unconditional p-values that have appeared in the statistical literature. We then treat the p-value as the test statistic and use the unconditional approach to obtain the modified p-value, which is shown to be valid. For a given nominal level, the rejection region of the modified p-value test contains that of the original p-value test. Some nice properties of the modified p-value are given. In particular, under mild conditions the rejection region of the modified p-value test is shown to be the Barnard convex set described by Barnard (1947). If the one-sided null hypothesis has two nuisance parameters, we show that this result can reduce the dimension of the nuisance parameter space from two to one for computing modified p-values and sizes of tests. Numerical studies, including an illustrative example, are given. Numerical comparisons show that the sizes of the modified p-value tests are closer to the nominal level than those of the original p-value tests in many cases, especially for small to moderate sample sizes.
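For a 2 × 2 table, marginal homogeneity depends only on the discordant off-diagonal counts, so the classical conditional benchmark is McNemar's exact binomial p-value, sketched below; the article's modified unconditional p-value is more involved.

```python
# Exact (conditional) McNemar p-value for marginal homogeneity in a 2x2 table.
from scipy import stats

def mcnemar_exact(b, c):
    """Two-sided exact p-value from the discordant counts b and c."""
    return min(1.0, 2 * stats.binom.cdf(min(b, c), b + c, 0.5))

print(mcnemar_exact(b=12, c=4))
```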

20.
We consider asymptotic and resampling-based interval estimation procedures for the stress-strength reliability P(X < Y). We develop and study several types of intervals, investigate their performance using simulation techniques, and compare them in terms of attainment of the nominal confidence level, symmetry of lower and upper error rates, and expected length. Recommendations concerning their use are given.
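One of the simplest resampling-based intervals in this family is the percentile bootstrap applied to the Mann–Whitney estimate of R = P(X < Y); the sketch below assumes independent samples and uses simulated normal data for illustration.

```python
# Percentile bootstrap interval for the stress-strength reliability P(X < Y).
import numpy as np

def boot_ci(x, y, B=4000, alpha=0.05, seed=11):
    rng = np.random.default_rng(seed)
    r_hat = lambda xs, ys: (xs[:, None] < ys[None, :]).mean()  # Mann-Whitney
    reps = [r_hat(rng.choice(x, len(x)), rng.choice(y, len(y)))
            for _ in range(B)]
    return r_hat(x, y), tuple(np.quantile(reps, [alpha / 2, 1 - alpha / 2]))

rng = np.random.default_rng(12)
x = rng.normal(0.0, 1.0, 40)     # stress sample
y = rng.normal(1.0, 1.0, 50)     # strength sample
print(boot_ci(x, y))
```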
